Anime Schoolgirl
Nov 28, 2002

the amd and intel threads....are merging!!!

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
I legitimately can't tell what thread I'm in sometimes, bouncing between them in my control panel as I do.

SwissArmyDruid
Feb 14, 2014

by sebmojo

eames posted:

Vendors are now reacting to the 18-month timebomb errata. None of them are allowed to mention the component or company, Synology even had to pull a statement because they mentioned Intel.

Pfsense/Netgate vowed to replace all affected units within 3 years of purchase which seems fair.

https://blog.pfsense.org/?p=2297

Still, having a ticking timebomb as a firewall which is often a single point of failure feels bad.

edit: better link:

https://www.servethehome.com/intel-atom-c2000-series-bug-quiet/

....christ, and I was just looking at getting a replacement NAS for the D525-based one I've got whirring along right behind me. I dodged a loving bullet, huh?

FuturePastNow
May 19, 2014


Around 2000-2004 I built a ton of cheap PCs for family members and those things all failed within a year or two. I don't know if I should blame bad capacitors in general or direct my hatred at Biostar in particular.

Anime Schoolgirl
Nov 28, 2002

FuturePastNow posted:

Around 2000-2004 I built a ton of cheap PCs for family members and those things all failed within a year or two. I don't know if I should blame bad capacitors in general or direct my hatred at Biostar in particular.

cap plague was a thing throughout 1999-2007

so yeah blame biostar

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Well, gently caress. The C2550/C2750D4i boards were the only mITX boards that supported ECC and had 8+ SATA ports. Where do I go from here on my fantasy ZFS server build?

Anime Schoolgirl
Nov 28, 2002

Paul MaudDib posted:

Well, gently caress. The C2550/C2750D4i boards were the only mITX boards that supported ECC and had 8+ SATA ports. Where do I go from here on my fantasy ZFS server build?
buy a c236 and dehumanize yourself and face to bloodshed

or wait for zen motherboards that have ecc traces

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

NihilismNow posted:

Apparently the fastest Via is the quad core 2.0 GHz E series. The 1.6 GHz version of that board is slightly faster than an AMD E350, so you're looking at least at 2012-2013 low end performance. Up to 16 GB DDR 1333, 4x PCIe slot. Price is not too great at $330 though.

Uh, to be super clear here my parents bought that processor in 2012 on an embedded board for their TV PC at their up-north cabin (relying on the GPU) and I still consider it barely adequate for that role even in 2012 (it sometimes struggles to push enough bits around given its clocks and its single core) let alone for H265 or other modern poo poo. Being real honest here we're talking 2008 low-end performance, it's like 2012 low-end if you cut the clocks in half.

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER

Anime Schoolgirl posted:

the amd and intel threads....are merging!!!

Considering the slow evolution in processors we may as well just have one thread for talking about chips.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Paul MaudDib posted:

Well, gently caress. The C2550/C2750D4i boards were the only mITX boards that supported ECC and had 8+ SATA ports. Where do I go from here on my fantasy ZFS server build?

It's mATX but this Xeon-D has 14 (6 SATA3 from chipset and 8 LSI SAS3) :getin:

http://www.asrockrack.com/general/productdetail.asp?Model=D1541D4U-2T8R

I'd go for one of the mITX 6 port Xeon-Ds for sure, even with "only" 6 SATA3 ports they have x16 gen3 PCIe slots for all your HBA needs..

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

priznat posted:

It's mATX but this Xeon-D has 14 (6 SATA3 from chipset and 8 LSI SAS3) :getin:

http://www.asrockrack.com/general/productdetail.asp?Model=D1541D4U-2T8R

I'd go for one of the mITX 6 port Xeon-Ds for sure, even with "only" 6 SATA3 ports they have x16 gen3 PCIe slots for all your HBA needs..

Too big, I want this to fit into a U-NAS 800. Also, ideally I'd like to hold the PCIe port open too.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
Yeah, understandable. It's too bad that they don't offer as many ports as the atoms, I'm curious to see what the skylake/kaby lake xeon-ds are like or if they are even making them. We have a few of the supermicro systems at work and they are great little servers. Very compact and powerful.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Actually I guess the U-NAS 800 also supports a SAS backplane. Is that workable with SATA drives at all? (I need JBOD for a ZFS array ideally)

Josh Lyman
May 24, 2009


I kinda want to build a 7600K system, and they're only $220 for the CPU at Microcenter.

But then I remember I basically just use my 3570K as an HTPC and occasional Diablo 3. :negative:

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Well, if you game at all, there's not a whole lot of reason to go for anything less than a x600K of some generation. That's been the "gold but I'm on a budget" standard for years now.

Also, Kaby Lake gets you 4K netflix, so it'll be easier to pass down the line as a dedicated media PC if you ever upgrade.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
I've never really looked into it as computer software standards and interfaces aren't my area of expertise, but where do 'chipset' pcie/sata/usb etc get their bandwidth from? I mean, the host chip itself provides the 'lanes' and other bandwidth at the requisite speeds, but how does that interface with the CPU so the devices on that off board host don't see latency relative to the devices talking directly with the cpu (hyperconnect?)? After all, that has only so many pads and besides the many hundred needed for the various vccs and grounds, I can only think of RAM traces, PCIe, gpio, jtag, spi, i2c serial interfaces and various other control/clock inputs and outputs that don't serve double duty as gpios. Basically, how do they multiply the pcie lanes or other communications bandwidth going to the chipset from the cpu? Is it really just a giant buffer? Basically, if the CPU has 20 lanes, 16 for graphics and 4 lanes for the chipset, do they really just buffer the data from allll the other like what, 12 lanes? to squeeze through that 4 lane interface to the cpu?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Watermelon Daiquiri posted:

I've never really looked into it as computer software standards and interfaces aren't my area of expertise, but where do 'chipset' pcie/sata/usb etc get their bandwidth from? I mean, the host chip itself provides the 'lanes' and other bandwidth at the requisite speeds, but how does that interface with the CPU so the devices on that off board host don't see latency relative to the devices talking directly with the cpu (hyperconnect?)? After all, that has only so many pads and besides the many hundred needed for the various vccs and grounds, I can only think of RAM traces, PCIe, gpio, jtag, spi, i2c serial interfaces and various other control/clock inputs and outputs that don't serve double duty as gpios. Basically, how do they multiply the pcie lanes or other communications bandwidth going to the chipset from the cpu? Is it really just a giant buffer? Basically, if the CPU has 20 lanes, 16 for graphics and 4 lanes for the chipset, do they really just buffer the data from allll the other like what, 12 lanes? to squeeze through that 4 lane interface to the cpu?

https://en.wikipedia.org/wiki/Multiplexing

A whole bunch of channels' worth of devices share a couple PCIe2.0 channels' worth of bandwidth. The PCH is in charge of making them take turns. It's not really a crisis unless you are really using all the things at once (in which case you connect them to the CPU anyway for latency reasons).

It's not really that big a problem once you understand the concept. There are 200 million phone lines on the west coast and 200 million phone lines on the east coast sharing 1 million lines between them! :derp: (but mostly everyone isn't calling everyone all at the same time)

Also, the PCH channels are typically dedicated and additional to the GPU channels. You get 3.0x40 through the GPU plus 2.0x4 through the PCH or something like that (Intel X99 architecture as an example). The PCH lanes go over separate traces.

Welcome to CompEng101. It's cool stuff. Maybe check this out: https://www.amazon.com/Advanced-Computer-Architectures-Sajjan-Shiva/dp/0849337585 ($10 shipped used, this was my textbook for my computer architectures class and I thought it was OK)
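The take-turns idea above can be sketched in a few lines of Python. This is a toy fair-share model, not how a real PCH actually schedules traffic, and the 2000 MB/s figure is just the nominal PCIe 2.0 x4 number used as an illustration:

```python
# Toy model of PCH-style multiplexing: many downstream devices take
# turns sharing one narrow uplink to the CPU (hypothetical numbers).
UPLINK_MBPS = 2000  # ~PCIe 2.0 x4: 4 lanes x 500 MB/s, DMI-class link

def drain(requests_mb, uplink_mbps=UPLINK_MBPS):
    """Fair-share service of per-device transfer queues.

    requests_mb: {device: MB still to move}. Returns seconds elapsed
    until every queue is empty, assuming the uplink is split evenly
    among the devices that still have data pending.
    """
    pending = dict(requests_mb)
    elapsed = 0.0
    step = 0.001  # 1 ms scheduling quantum
    while pending:
        share = uplink_mbps * step / len(pending)  # MB per device this quantum
        for dev in list(pending):
            pending[dev] -= share
            if pending[dev] <= 0:
                del pending[dev]
        elapsed += step
    return elapsed

# One device alone gets the full uplink: 2000 MB in ~1.0 s...
print(drain({"ssd": 2000}))
# ...and four devices moving the same 2000 MB total also finish in
# ~1.0 s overall, but each individual transfer runs at a quarter speed.
print(drain({"ssd": 500, "hdd1": 500, "hdd2": 500, "nic": 500}))
```

Same phone-line logic: total throughput is fine as long as not everyone talks at once; it's the concurrent case where each caller feels the squeeze.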

Paul MaudDib fucked around with this message at 10:23 on Feb 8, 2017

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Ok, then exactly as I thought. I was trying to move a poo poo ton of data between multiple hard drives both internal and external and other devices on the network and things were getting stupidly laggy. I figured it was due to something like this. Also, according to ark, the pcm handles the dimms? I thought those talked directly to the cpu?

E: the data sheet for the z97 chipset doesn't have ram listed, so the ark page must just be talking about how z97 has dimms available on it
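The lag described above pencils out with some quick arithmetic. These are nominal per-interface maxima (real sustained rates are lower), and the mix of devices is just a hypothetical workload:

```python
# Back-of-envelope oversubscription math for a Z97-era chipset uplink.
DMI2_X4_MBPS = 4 * 500   # DMI 2.0 ~= PCIe 2.0 x4 -> ~2000 MB/s nominal
SATA3_MBPS = 600         # 6 Gb/s line rate minus 8b/10b encoding
GBE_MBPS = 125           # gigabit NIC
USB3_MBPS = 500          # 5 Gb/s minus encoding overhead

# Hypothetical workload: 4 drives + network + a USB 3.0 external disk,
# all pushing data at once through the chipset.
demand = 4 * SATA3_MBPS + GBE_MBPS + USB3_MBPS
print(f"peak demand {demand} MB/s vs uplink {DMI2_X4_MBPS} MB/s")
print(f"oversubscription {demand / DMI2_X4_MBPS:.1f}x")
```

So a multi-drive copy plus network traffic can ask for roughly 1.5x what the uplink can deliver, which is exactly when everything behind the PCH starts feeling laggy.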

Watermelon Daiquiri fucked around with this message at 10:16 on Feb 8, 2017

Platystemon
Feb 13, 2012

BREADS
e: Nevermind.

Platystemon fucked around with this message at 10:19 on Feb 8, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Watermelon Daiquiri posted:

Ok, then exactly as I thought. I was trying to move a poo poo ton of data between multiple hard drives both internal and external and other devices on the network and things were getting stupidly laggy. I figured it was due to something like this. Also, according to ark, the pcm handles the dimms? I thought those talked directly to the cpu?

E: the data sheet for the z97 chipset doesn't have ram listed, so the ark page must just be talking about how z97 has dimms available on it

Here's what X99 looks like.

[image: Intel X99 platform block diagram]

"X99 Chipset" = PCH here.

So the memory is hanging off the CPU now. Although maybe not on older architectures.

Paul MaudDib fucked around with this message at 10:25 on Feb 8, 2017

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Paul MaudDib posted:

Well, gently caress. The C2550/C2750D4i boards were the only mITX boards that supported ECC and had 8+ SATA ports. Where do I go from here on my fantasy ZFS server build?

Just get one with B1 or higher stepping. While existing products are rightly hosed, it seems that Intel found a fairly trivial fix for it in production and newer versions should be unaffected.

eames
May 9, 2009

Most companies are rolling out board level fixes/workarounds which allow the buggy processors to be used without the issue occurring. I'm not even sure if there's a newer stepping available at the moment?

pfsense posted:

A board level workaround has been identified for the existing production stepping of the component which resolves the issue. This workaround is being cut into production as soon as possible after Chinese New Year. Additionally, some of our products are able to be reworked post-production to resolve the issue.

C2000 series based products should be safe to buy in a few weeks, assuming you don't end up with old stock.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

eames posted:

Most companies are rolling out board level fixes/workarounds which allow the buggy processors to be used without the issue occurring. I'm not even sure if there's a newer stepping available at the moment?


C2000 series based products should be safe to buy in a few weeks, assuming you don't end up with old stock.

Where the hell is Denverton anyway? Shouldn't all of this be less relevant as new generation stuff hits wide release?

Here's an article from 2015 saying "Hey, Denverton is almost here"

Twerk from Home fucked around with this message at 16:05 on Feb 8, 2017

EdEddnEddy
Apr 5, 2012



Paul MaudDib posted:

Too big, I want this to fit into a U-NAS 800. Also, ideally I'd like to hold the PCIe port open too.

Anything inherently wrong with this?

Outside of the limit of only 16G ECC ram, it looks like a well stocked board for use in a NAS. Hell even has 3 nic ports for some teaming action hah.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

EdEddnEddy posted:

Anything inherently wrong with this?

Outside of the limit of only 16G ECC ram, it looks like a well stocked board for use in a NAS. Hell even has 3 nic ports for some teaming action hah.

Only 6 SATA ports.

redeyes
Sep 14, 2002

by Fluffdaddy

Paul MaudDib posted:

Only 6 SATA ports.

Looks like you would use the PCIe 16x slot with a HBA adapter or similar and the PCIe riser.

ufarn
May 30, 2009
20C lower temps by delidding a 7700K. Pretty crazy, but I didn't know about delidding a week ago, so.

https://www.youtube.com/watch?v=HNLubjXKHLs

Sounds like Intel has some sloppy QC, or is it just me?

PC LOAD LETTER
May 23, 2005
WTF?!

ufarn posted:

Sounds like Intel has some sloppy QC, or is it just me?

Its not a QC issue its a design/cost issue. They save a few bucks per CPU with the current IHS/TIM set up vs using a soldered IHS. For stock clocks it works fine enough. Its when you want to OC that it becomes a problem.

Same issue happened with Haswell at release. That was why Intel released Devil's Canyon which did help but wasn't a whole lot better really.

I still think they should just scrap the IHS for OC'ers chips and just ship them with a shim and a bare die. A good shim makes it hard to mess up the die and it should be fairly cheap to do while making OC'ers happy.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
^ For what it's worth I recall reading that Intel said soldered heatspreaders work less well with smaller dies, which is at least part of why they switched to a different TIM in the Sandy->Ivy Bridge transition.

redeyes posted:

Looks like you would use the PCIe 16x slot with a HBA adapter or similar and the PCIe riser.

Yeah, he said already that he wants 8 ports without having to use the PCIe slot.

GRINDCORE MEGGIDO
Feb 28, 1985


PC LOAD LETTER posted:

I still think they should just scrap the IHS for OC'ers chips and just ship them with a shim and a bare die. A good shim makes it hard to mess up the die and it should be fairly cheap to do while making OC'ers happy.

I really think they should do this too. Mostly because you know heatsinks would fit it (with the IHS removed).

NihilismNow
Aug 31, 2003

Paul MaudDib posted:

Too big, I want this to fit into a U-NAS 800. Also, ideally I'd like to hold the PCIe port open too.

ASrock Rack C236 WSI is mitx with 8x Sata ports up to 32 GB ECC DDR4.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Anime Schoolgirl posted:

buy a c236 and dehumanize yourself and face to bloodshed
Back when I researched my NAS, an C226-based mainboard and an E3-1220V3 was just minimally more expensive than going with a C2750D4I. I valued the headroom in CPU power over the minor power savings of the Avoton.

Of course, I was so stupid and went mini-ITX with both mainboard and case, and that's biting me in the rear end soon. The only PCIe slot is now occupied with 10GbE, and there wasn't any space in the case anymore regardless, HBA or not. :(

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Combat Pretzel posted:

Back when I researched my NAS, an C226-based mainboard and an E3-1220V3 was just minimally more expensive than going with a C2750D4I. I valued the headroom in CPU power over the minor power savings of the Avoton.

Of course, I was so stupid and went mini-ITX with both mainboard and case, and that's biting me in the rear end soon. The only PCIe slot is now occupied with 10GbE, and there wasn't any space in the case anymore regardless, HBA or not. :(

This is why Xeon-D is the answer, full Broadwell cores, onboard 10GbE and low TDP all in one.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Was released a year after I built my NAS. Would have been nice to have.

--edit:
Seems to be 10GBase-T. If I want 10GbE with SFP+, I have to defer to a mainboard with some third-party chipset. Not cool.
--edit:
Nevermind, Supermicro would have me covered.
--edit:
Hmmm sweet. 4C/8T 2.2GHz with 2x 10GbE SFP+, with 4 SATA3 via SoC and another 16 ports via LSI controller. 600€, this is tempting for a summer project.

Now I just need to find a cheap used 19" server rack.

Combat Pretzel fucked around with this message at 20:50 on Feb 8, 2017

AVeryLargeRadish
Aug 19, 2011

I LITERALLY DON'T KNOW HOW TO NOT BE A WEIRD SEXUAL CREEP ABOUT PREPUBESCENT ANIME GIRLS, READ ALL ABOUT IT HERE!!!

PC LOAD LETTER posted:

Its not a QC issue its a design/cost issue. They save a few bucks per CPU with the current IHS/TIM set up vs using a soldered IHS. For stock clocks it works fine enough. Its when you want to OC that it becomes a problem.

Same issue happened with Haswell at release. That was why Intel released Devil's Canyon which did help but wasn't a whole lot better really.

I still think they should just scrap the IHS for OC'ers chips and just ship them with a shim and a bare die. A good shim makes it hard to mess up the die and it should be fairly cheap to do while making OC'ers happy.

It's not just a cost issue, on smaller dies Intel found that the solder tended to crack after extended heating and cooling cycles, frequently destroying the chip underneath, on larger die chips like the 6+ core i7 and Xeon chips the solder holds up better because of the larger area to distribute heat into.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Combat Pretzel posted:

Back when I researched my NAS, an C226-based mainboard and an E3-1220V3 was just minimally more expensive than going with a C2750D4I. I valued the headroom in CPU power over the minor power savings of the Avoton.

Of course, I was so stupid and went mini-ITX with both mainboard and case, and that's biting me in the rear end soon. The only PCIe slot is now occupied with 10GbE, and there wasn't any space in the case anymore regardless, HBA or not. :(

Look at you, the loving scrub who filled his only slot with an interconnect that didn't even guarantee single-microsecond latency, what are you even going to do with that terrible interconnect? Not join the cool kids at the HPC table, that's for sure

just kidding though I'm sure your wife appreciates the best-effort delivery even if it's not reliable

Paul MaudDib fucked around with this message at 21:04 on Feb 8, 2017

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
really just kidding but if you're springing for 10GbE could you really not afford infiniband?

Paul MaudDib fucked around with this message at 21:07 on Feb 8, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
FreeNAS and Infiniband are all but square with each other. Easy way out. --edit: Also, selection of used stuff on eBay here is a far cry compared to what can be had in the US. At least I got Intel X520 cards instead of Mellanox.

Combat Pretzel fucked around with this message at 22:49 on Feb 8, 2017

KKKLIP ART
Sep 3, 2004

I have one of those C2550 boards for my home NAS that I purchased in January of last year :smith:

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

PC LOAD LETTER posted:

Its not a QC issue its a design/cost issue. They save a few bucks per CPU with the current IHS/TIM set up vs using a soldered IHS. For stock clocks it works fine enough. Its when you want to OC that it becomes a problem.

I wish they would try to save a few cents more and use less glue in the lid. I mean, in the video he didn't actually remove the lid, he just removed the excess glue.
