|
the amd and intel threads....are merging!!!
|
# ? Feb 7, 2017 23:30 |
|
|
I legitimately can't tell what thread I'm in sometimes, bouncing between them in my control panel as I do.
|
# ? Feb 8, 2017 00:05 |
|
eames posted:Vendors are now reacting to the 18-month timebomb errata. None of them are allowed to mention the component or company; Synology even had to pull a statement because they mentioned Intel. ...christ, and I was just looking at getting a replacement NAS for the D525-based one I've got whirring along right behind me. I dodged a loving bullet, huh?
|
# ? Feb 8, 2017 00:15 |
|
Around 2000-2004 I built a ton of cheap PCs for family members and those things all failed within a year or two. I don't know if I should blame bad capacitors in general or direct my hatred at Biostar in particular.
|
# ? Feb 8, 2017 00:33 |
|
FuturePastNow posted:Around 2000-2004 I built a ton of cheap PCs for family members and those things all failed within a year or two. I don't know if I should blame bad capacitors in general or direct my hatred at Biostar in particular. cap plague was a thing throughout 1999-2007 so yeah blame biostar
|
# ? Feb 8, 2017 00:42 |
|
Well, gently caress. The C2550/C2750D4i boards were the only mITX boards that supported ECC and had 8+ SATA ports. Where do I go from here on my fantasy ZFS server build?
|
# ? Feb 8, 2017 07:38 |
|
Paul MaudDib posted:Well, gently caress. The C2550/C2750D4i boards were the only mITX boards that supported ECC and had 8+ SATA ports. Where do I go from here on my fantasy ZFS server build? or wait for zen motherboards that have ecc traces
|
# ? Feb 8, 2017 07:41 |
|
NihilismNow posted:Apparently the fastest Via is the quad core 2.0 GHz E series. The 1.6 GHz version of that board is slightly faster than an AMD E350, so you're looking at least at 2012-2013 low-end performance. Up to 16 GB DDR 1333, 4x PCIe slot. Price is not too great at $330 though. Uh, to be super clear here: my parents bought that processor in 2012 on an embedded board for their TV PC at their up-north cabin (relying on the GPU) and I considered it barely adequate for that role even in 2012 (it sometimes struggles to push enough bits around given its clocks and its single core), let alone for H265 or other modern poo poo. Being real honest here, we're talking 2008 low-end performance; it's like 2012 low-end if you cut the clocks in half.
|
# ? Feb 8, 2017 07:43 |
|
Anime Schoolgirl posted:the amd and intel threads....are merging!!! Considering the slow evolution in processors we may as well just have one thread for talking about chips.
|
# ? Feb 8, 2017 07:48 |
|
Paul MaudDib posted:Well, gently caress. The C2550/C2750D4i boards were the only mITX boards that supported ECC and had 8+ SATA ports. Where do I go from here on my fantasy ZFS server build? It's mATX, but this Xeon-D has 14 (6 SATA3 from the chipset and 8 from an LSI SAS3 controller): http://www.asrockrack.com/general/productdetail.asp?Model=D1541D4U-2T8R I'd go for one of the mITX 6-port Xeon-Ds for sure; even with "only" 6 SATA3 ports they have an x16 gen3 PCIe slot for all your HBA needs.
|
# ? Feb 8, 2017 07:47 |
|
priznat posted:It's mATX but this Xeon-D has 14 (6 SATA3 from chipset and 8 LSI SAS3) Too big, I want this to fit into a U-NAS 800. Also, ideally I'd like to hold the PCIe port open too.
|
# ? Feb 8, 2017 08:27 |
|
Yeah, understandable. It's too bad that they don't offer as many ports as the atoms, I'm curious to see what the skylake/kaby lake xeon-ds are like or if they are even making them. We have a few of the supermicro systems at work and they are great little servers. Very compact and powerful.
|
# ? Feb 8, 2017 08:34 |
|
Actually I guess the U-NAS 800 also supports a SAS backplane. Is that workable with SATA drives at all? (I need JBOD for a ZFS array ideally)
|
# ? Feb 8, 2017 09:11 |
|
I kinda want to build a 7600K system, and they're only $220 for the CPU at Microcenter. But then I remember I basically just use my 3570K as an HTPC and occasional Diablo 3.
|
# ? Feb 8, 2017 09:34 |
|
Well, if you game at all, there's not a whole lot of reason to go for anything less than a x600K of some generation. That's been the "gold but I'm on a budget" standard for years now. Also, Kaby Lake gets you 4K netflix, so it'll be easier to pass down the line as a dedicated media PC if you ever upgrade.
|
# ? Feb 8, 2017 09:44 |
I've never really looked into it as computer software standards and interfaces aren't my area of expertise, but where do 'chipset' pcie/sata/usb etc get their bandwidth from? I mean, the host chip itself provides the 'lanes' and other bandwidth at the requisite speeds, but how does that interface with the CPU so the devices on that off-board host don't see latency relative to the devices talking directly with the cpu (hyperconnect?)?

After all, that has only so many pads, and besides the many hundred needed for the various vccs and grounds, I can only think of RAM traces, PCIe, gpio, jtag, spi, i2c serial interfaces and various other control/clock inputs and outputs that don't serve double duty as gpios.

Basically, how do they multiply the pcie lanes or other communications bandwidth going to the chipset from the cpu? Is it really just a giant buffer? If the CPU has 20 lanes, 16 for graphics and 4 lanes for the chipset, do they really just buffer the data from all the other, what, 12 lanes? to squeeze through that 4-lane interface to the cpu?
|
|
# ? Feb 8, 2017 09:49 |
|
Watermelon Daiquiri posted:Basically, how do they multiply the pcie lanes or other communications bandwidth going to the chipset from the cpu? Is it really just a giant buffer? https://en.wikipedia.org/wiki/Multiplexing

A whole bunch of channels' worth of devices share a few PCIe 2.0 lanes' worth of bandwidth, and the PCH is in charge of making them take turns. It's not really a crisis unless you're genuinely using all the things at once (in which case you connect them to the CPU anyway, for latency reasons).

It's not that big a problem once you understand the concept: there are 200 million phone lines on the west coast and 200 million phone lines on the east coast sharing 1 million lines between them, but mostly everyone isn't calling everyone all at the same time.

Also, the PCH lanes are typically dedicated and additional to the CPU's lanes: on X99, for example, you get 40 gen3 lanes off the CPU for the GPU and other slots, plus a 2.0 x4 link to the PCH, over separate traces.

Welcome to CompEng 101. It's cool stuff. Maybe check this out: https://www.amazon.com/Advanced-Computer-Architectures-Sajjan-Shiva/dp/0849337585 ($10 shipped used, this was my textbook for my computer architectures class and I thought it was OK)

Paul MaudDib fucked around with this message at 10:23 on Feb 8, 2017 |
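The sharing described above can be put into rough numbers. Everything below is an illustrative assumption rather than any specific board's spec: the chipset uplink is modeled as a PCIe 2.0 x4 link (5 GT/s per lane, 8b/10b encoding, roughly what DMI 2.0 amounts to), and the downstream device mix is made up:

```python
# Back-of-the-envelope oversubscription on a chipset (PCH) uplink.
# Assumed figures, not any board's spec: uplink modeled as PCIe 2.0 x4
# (5 GT/s per lane, 8b/10b encoding), plus an illustrative device mix.

def pcie_lane_gbs(gt_per_s, encoding_efficiency):
    """Usable GB/s for one lane after line-encoding overhead."""
    return gt_per_s * encoding_efficiency / 8  # bits -> bytes

uplink_gbs = 4 * pcie_lane_gbs(5.0, 0.8)  # 4-lane uplink, 8b/10b = 80% efficient

# Peak demand if every downstream device bursts at once (GB/s each)
devices = {
    "6x SATA3 @ 0.6 GB/s": 6 * 0.6,
    "2x USB3 @ 0.5 GB/s":  2 * 0.5,
    "GbE NIC":             0.125,
}
demand_gbs = sum(devices.values())

print(f"uplink {uplink_gbs:.1f} GB/s, peak demand {demand_gbs:.1f} GB/s, "
      f"oversubscribed {demand_gbs / uplink_gbs:.1f}x")
```

Even with these rough numbers the downstream devices can ask for over twice what the uplink carries, which is why everything-at-once workloads crawl while any single transfer is fine.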
# ? Feb 8, 2017 09:58 |
Ok, then exactly as I thought. I was trying to move a poo poo ton of data between multiple hard drives, both internal and external, and other devices on the network, and things were getting stupidly laggy. I figured it was due to something like this. Also, according to ARK, the PCH handles the DIMMs? I thought those talked directly to the CPU? E: the data sheet for the Z97 chipset doesn't have RAM listed, so the ARK page must just be talking about what DIMM configurations Z97 boards support. Watermelon Daiquiri fucked around with this message at 10:16 on Feb 8, 2017 |
|
# ? Feb 8, 2017 10:13 |
|
e: Nevermind.
Platystemon fucked around with this message at 10:19 on Feb 8, 2017 |
# ? Feb 8, 2017 10:16 |
|
Watermelon Daiquiri posted:Ok, then exactly as I thought. I was trying to move a poo poo ton of data between multiple hard drives both internal and external and other devices on the network and things were getting stupidly laggy. I figured it was due to something like this. Also, according to ark, the pcm handles the dimms? I thought those talked directly to the cpu? Here's what X99 looks like. "X99 Chipset" = PCH here. So the memory is hanging off the CPU now. Although maybe not on older architectures. Paul MaudDib fucked around with this message at 10:25 on Feb 8, 2017 |
# ? Feb 8, 2017 10:18 |
|
Paul MaudDib posted:Well, gently caress. The C2550/C2750D4i boards were the only mITX boards that supported ECC and had 8+ SATA ports. Where do I go from here on my fantasy ZFS server build? Just get one with B1 or higher stepping. While existing products are rightly hosed, it seems that Intel found a fairly trivial fix for it in production and newer versions should be unaffected.
|
# ? Feb 8, 2017 14:39 |
|
Most companies are rolling out board level fixes/workarounds which allow the buggy processors to be used without the issue occurring. I'm not even sure if there's a newer stepping available at the moment? pfsense posted:A board level workaround has been identified for the existing production stepping of the component which resolves the issue. This workaround is being cut into production as soon as possible after Chinese New Year. Additionally, some of our products are able to be reworked post-production to resolve the issue. C2000 series based products should be safe to buy in a few weeks, assuming you don't end up with old stock.
|
# ? Feb 8, 2017 15:54 |
|
eames posted:Most companies are rolling out board level fixes/workarounds which allow the buggy processors to be used without the issue occurring. I'm not even sure if there's a newer stepping available at the moment? Where the hell is Denverton anyway? Shouldn't all of this be less relevant as new generation stuff hits wide release? Here's an article from 2015 saying "Hey, Denverton is almost here" Twerk from Home fucked around with this message at 16:05 on Feb 8, 2017 |
# ? Feb 8, 2017 16:03 |
|
Paul MaudDib posted:Too big, I want this to fit into a U-NAS 800. Also, ideally I'd like to hold the PCIe port open too. Anything inherently wrong with this? Outside of the limit of only 16 GB of ECC RAM, it looks like a well-stocked board for use in a NAS. Hell, it even has 3 NIC ports for some teaming action, hah.
|
# ? Feb 8, 2017 17:15 |
|
EdEddnEddy posted:Anything inherently wrong with this? Only 6 SATA ports.
|
# ? Feb 8, 2017 17:30 |
|
Paul MaudDib posted:Only 6 SATA ports. Looks like you would use the PCIe x16 slot with an HBA adapter or similar and the PCIe riser.
|
# ? Feb 8, 2017 17:35 |
|
20C lower temps by delidding a 7700K. Pretty crazy, but I didn't know about delidding a week ago, so. https://www.youtube.com/watch?v=HNLubjXKHLs Sounds like Intel has some sloppy QC, or is it just me?
|
# ? Feb 8, 2017 17:51 |
|
ufarn posted:Sounds like Intel has some sloppy QC, or is it just me? It's not a QC issue, it's a design/cost issue. They save a few bucks per CPU with the current IHS/TIM setup vs. using a soldered IHS. For stock clocks it works fine enough; it's when you want to OC that it becomes a problem. The same issue happened with Haswell at release. That was why Intel released Devil's Canyon, which did help but wasn't a whole lot better really. I still think they should just scrap the IHS for OC'ers' chips and ship them with a shim and a bare die. A good shim makes it hard to mess up the die, and it should be fairly cheap to do while making OC'ers happy.
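For a sense of scale, a one-dimensional conduction estimate shows why the interface material alone can plausibly account for delid-sized temperature differences. All numbers here are assumptions for illustration (ballpark polymer TIM vs. indium solder conductivities, a guessed bond-line thickness, die size, and load), not Intel's actual specs:

```python
# Rough 1-D conduction estimate of the die -> IHS interface temperature drop.
# All numbers are illustrative assumptions (polymer TIM ~5 W/mK, indium
# solder ~80 W/mK, ~100 um bond line, ~122 mm^2 die, 90 W load).

def interface_drop_c(power_w, thickness_m, conductivity_w_mk, area_m2):
    """Temperature drop across a flat interface layer: dT = P*t / (k*A)."""
    return power_w * thickness_m / (conductivity_w_mk * area_m2)

die_area_m2 = 1.22e-4   # ~122 mm^2, roughly a quad-core die
power_w = 90.0          # an overclocked quad-core-ish load

paste_drop  = interface_drop_c(power_w, 100e-6, 5.0,  die_area_m2)
solder_drop = interface_drop_c(power_w, 100e-6, 80.0, die_area_m2)
print(f"paste drop ~{paste_drop:.0f} C, solder drop ~{solder_drop:.0f} C")
```

With those assumed numbers the paste layer alone drops on the order of 15 C versus about 1 C for solder, which is in the same ballpark as the ~20 C delidding improvements people report.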
|
# ? Feb 8, 2017 18:24 |
|
^ For what it's worth I recall reading that Intel said soldered heatspreaders work less well with smaller dies, which is at least part of why they switched to a different TIM in the Sandy->Ivy Bridge transition. redeyes posted:Looks like you would use the PCIe 16x slot with a HBA adapter or similar and the PCIe riser. Yeah, he said already that he wants 8 ports without having to use the PCIe slot.
|
# ? Feb 8, 2017 18:35 |
|
PC LOAD LETTER posted:I still think they should just scrap the IHS for OC'ers chips and just ship them with a shim and a bare die. A good shim makes it hard to mess up the die and it should be fairly cheap to do while making OC'ers happy. I really think they should do this too. Mostly because you know heatsinks would fit it (with the IHS removed).
|
# ? Feb 8, 2017 18:40 |
|
Paul MaudDib posted:Too big, I want this to fit into a U-NAS 800. Also, ideally I'd like to hold the PCIe port open too. The ASRock Rack C236 WSI is mITX with 8x SATA ports and up to 32 GB of ECC DDR4.
|
# ? Feb 8, 2017 19:13 |
|
Anime Schoolgirl posted:buy a c236 and dehumanize yourself and face to bloodshed Of course, I was so stupid and went mini-ITX with both mainboard and case, and that's biting me in the rear end soon. The only PCIe slot is now occupied with 10GbE, and there wasn't any space in the case anymore regardless, HBA or not.
|
# ? Feb 8, 2017 19:17 |
|
Combat Pretzel posted:Back when I researched my NAS, an C226-based mainboard and an E3-1220V3 was just minimally more expensive than going with a C2750D4I. I valued the headroom in CPU power over the minor power savings of the Avoton. This is why Xeon-D is the answer, full Broadwell cores, onboard 10GbE and low TDP all in one.
|
# ? Feb 8, 2017 19:29 |
|
Was released a year after I built my NAS. Would have been nice to have. --edit: Seems to be 10GBase-T. If I want 10GbE with SFP+, I have to defer to a mainboard with some third-party chipset. Not cool. --edit: Nevermind, Supermicro would have me covered. --edit: Hmmm sweet. 4C/8T 2.2GHz with 2x 10GbE SFP+, with 4 SATA3 via SoC and another 16 ports via LSI controller. 600€, this is tempting for a summer project. Now I just need to find a cheap used 19" server rack. Combat Pretzel fucked around with this message at 20:50 on Feb 8, 2017 |
# ? Feb 8, 2017 20:29 |
PC LOAD LETTER posted:Its not a QC issue its a design/cost issue. They save a few bucks per CPU with the current IHS/TIM set up vs using a soldered IHS. For stock clocks it works fine enough. Its when you want to OC that it becomes a problem. It's not just a cost issue: on smaller dies Intel found that the solder tended to crack after extended heating and cooling cycles, frequently destroying the chip underneath. On larger-die chips like the 6+ core i7 and Xeon parts, the solder holds up better because of the larger area to distribute heat into.
|
|
# ? Feb 8, 2017 20:48 |
|
Combat Pretzel posted:Back when I researched my NAS, an C226-based mainboard and an E3-1220V3 was just minimally more expensive than going with a C2750D4I. I valued the headroom in CPU power over the minor power savings of the Avoton. Look at you, the loving scrub who filled his only slot with an interconnect that didn't even guarantee single-microsecond latency, what are you even going to do with that terrible interconnect? Not join the cool kids at the HPC table, that's for sure just kidding though I'm sure your wife appreciates the best-effort delivery even if it's not reliable Paul MaudDib fucked around with this message at 21:04 on Feb 8, 2017 |
# ? Feb 8, 2017 21:02 |
|
really just kidding but if you're springing for 10GbE could you really not afford infiniband?
Paul MaudDib fucked around with this message at 21:07 on Feb 8, 2017 |
# ? Feb 8, 2017 21:04 |
|
FreeNAS and Infiniband are all but square with each other. Easy way out. --edit: Also, selection of used stuff on eBay here is a far cry compared to what can be had in the US. At least I got Intel X520 cards instead of Mellanox.
Combat Pretzel fucked around with this message at 22:49 on Feb 8, 2017 |
# ? Feb 8, 2017 22:46 |
|
I have one of those C2550 boards for my home NAS that I purchased in January of last year
|
# ? Feb 8, 2017 22:51 |
|
|
PC LOAD LETTER posted:Its not a QC issue its a design/cost issue. They save a few bucks per CPU with the current IHS/TIM set up vs using a soldered IHS. For stock clocks it works fine enough. Its when you want to OC that it becomes a problem. I wish they would try to save a few cents more and use less glue in the lid. I mean, in the video he didn't actually remove the lid, he just removed the excess glue.
|
# ? Feb 9, 2017 01:02 |