Deathreaper
Mar 27, 2010
Got impatient for Zen and pulled the trigger on an i7-5960X for cheap on Craigslist for my gaming/workstation mini-ITX build - it's going to be interesting fitting this into an Ncase M1 with a full-size GPU. Doing some calculations, I just realized how scarily easy it is for this chip to hit 400 W of power consumption when overclocking. Can something like a Corsair H100i v2 (240mm) handle that much?


Kazinsal
Dec 13, 2011
400W? That sounds like full system load.

Or maybe like, unsafe voltages, barely stable clocks, and AVX blend test in Prime95.

You will not hit 400W consumption from that CPU.

Gwaihir
Dec 8, 2009
Hair Elf

Kazinsal posted:

400W? That sounds like full system load.

Or maybe like, unsafe voltages, barely stable clocks, and AVX blend test in Prime95.

You will not hit 400W consumption from that CPU.

Going by benchmark measurements, to hit 400W you'd have to get the CPU to clock to 4.7GHz and be using 1.4+ core volts, which is insanely high for a chip with a stock voltage of 1.05.

I can definitely pull just under double its stock 140W TDP with a typical overclock to 4.4-ish GHz, though, depending on how much voltage your particular CPU needs.

e: If you're planning to push clocks, you do absolutely want a 240mm rad minimum because it will crank out quite a lot of wattage.
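
As a very rough sanity check (not a measurement - just the usual rule of thumb that dynamic CPU power scales with frequency times voltage squared, assuming the 5960X's 3.0GHz base clock and ~1.05V stock voltage as the 140W baseline), the numbers above line up:

```python
# Back-of-the-envelope scaling: dynamic power ~ f * V^2.
# Assumed baseline: 140 W TDP at the 5960X's 3.0 GHz base clock and ~1.05 V stock.

def scaled_power(f_ghz, vcore, base_w=140.0, base_f=3.0, base_v=1.05):
    """Estimate package power from the f * V^2 rule of thumb."""
    return base_w * (f_ghz / base_f) * (vcore / base_v) ** 2

print(round(scaled_power(4.7, 1.40)))  # ~390 W -- the scary "400 W" case
print(round(scaled_power(4.4, 1.25)))  # ~290 W -- roughly double the 140 W TDP
```

Static/leakage power is ignored, so treat those as ballpark figures only.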

Gwaihir fucked around with this message at 18:19 on Jan 7, 2017

NihilismNow
Aug 31, 2003

Gwaihir posted:

Going by benchmark measurements, to hit 400W you'd have to get the CPU to clock to 4.7GHz and be using 1.4+ core volts, which is insanely high for a chip with a stock voltage of 1.05.

I can definitely pull just under double its stock 140W TDP with a typical overclock to 4.4-ish GHz, though, depending on how much voltage your particular CPU needs.

Add a high-end overclocked video card and your drives and whatnot and you're dissipating 500 watts in the volume of a large shoebox. Should be interesting.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

GRINDCORE MEGGIDO
Feb 28, 1985



I don't really see the point at all of passive watercooling now... there's always a pump noise, the overclocks aren't going to be "amazing", and it's never going to run fully passive like big air heatsinks can.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

GRINDCORE MEGGIDO posted:

I don't really see the point at all of passive watercooling now... there's always a pump noise, the overclocks aren't going to be "amazing", and it's never going to run fully passive like big air heatsinks can.

Pumps have gotten pretty damned good, though. Most of the ones I've seen in the recent AIOs are near silent, especially if you've got a case that has any sort of noise abatement padding. Hell, I run a system with an H100 and an H60 look-alike in it, and if I turn the fans to the low setting the entire system is damned near silent while still providing excellent cooling. And I can put that in a case that doesn't have to be like 2' wide to accommodate a 3lb giant gently caress-off heatsink tower.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
I have literally never heard a noise from my H50 AIO pump, or my DDC pump :shrug:

Prescription Combs
Apr 20, 2005
   6

GRINDCORE MEGGIDO posted:

I don't really see the point at all of passive watercooling now... there's always a pump noise, the overclocks aren't going to be "amazing", and it's never going to run fully passive like big air heatsinks can.

Pumps are quiet as heck. The fans are the loudest thing in the AIO I used to use along with my current custom loop.

GRINDCORE MEGGIDO
Feb 28, 1985


I used water cooling for years; it let me run a pretty powerful system in a very small mITX case, which was great. I always heard the H50 pump though, even after trying a few H50s. The PC was passive except for that and one 700rpm fan.

I'm all for water sometimes. But I don't see the point of fully passive water cooling, like that external rad.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Put it out a window in the winter. Get max temps of like 30C. Victory!

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

GRINDCORE MEGGIDO posted:

I used water cooling for years; it let me run a pretty powerful system in a very small mITX case, which was great. I always heard the H50 pump though, even after trying a few H50s. The PC was passive except for that and one 700rpm fan.

I'm all for water sometimes. But I don't see the point of fully passive water cooling, like that external rad.

I posted that image for the "external radiator" idea.

GRINDCORE MEGGIDO
Feb 28, 1985


PerrineClostermann posted:

I posted that image for the "external radiator" idea.

This guy built a giant heat exchanger underground. That's dedication:
http://www.overclock.net/t/671177/12-feet-under-1000-square-feet-of-geothermal-pc-cooling

Sashimi
Dec 26, 2008


College Slice
I was about to joke about not being hardcore enough if you didn't use a pump with an underground connection to the water table for cooling, then I scrolled down...

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



Sashimi posted:

I was about to joke about not being hardcore enough if you didn't use a pump with an underground connection to the water table for cooling, then I scrolled down...

A part of me wants to call him out on some of his hydraulics/soil comments but whatever, it's an interesting project.

big shtick energy
May 27, 2004


An interesting talk from Intel looking at Moore's law going into the future: https://player.vimeo.com/video/164169553

Setset
Apr 14, 2012
Grimey Drawer

Probably get the same results with a $100 AIO

Anime Schoolgirl
Nov 28, 2002

normally you'd use this on something like an HPC cluster but a single desktop computer lmfao

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
Yes. It's just a fun dumb hobby project

ItBurns
Jul 24, 2007
It's neat and he probably had fun and learned a lot.

SourKraut posted:

A part of me wants to call him out on some of his hydraulics/soil comments but whatever, it's an interesting project.

How threatened does the dirt-scientist community feel by this amateur overstepping the bounds of his meager knowledge?

silence_kit
Jul 14, 2011

by the sex ghost

DuckConference posted:

An interesting talk from Intel looking at Moore's law going into the future: https://player.vimeo.com/video/164169553

Huh, I had never thought about the tack that the speaker took. Well, it was not his own tack--he said that it was recognized at the very beginning of the integrated circuit industry by Gordon Moore. The R&D into smaller transistors, wires, etc. still makes sense purely on a cost-per-function basis, if for no other reason, and he claimed that Intel was still spending less on manufacturing R&D than the money it made back by reducing the chip footprint per function.

I suspect that his claim that Intel can include the same functionality in half the die area and thus they are able to halve the cost per function every process node is a little misleading, though. I am not a VLSI designer, so I may be all wet here, but still, I thought that there were a lot of functions on a computer chip which do not benefit much from scaling, like longer-distance communication across the chip & storage (SRAM). And there's the issue that because of the heat dissipation constraint, you are obligated to design your circuits with a lower activity factor if you want a higher density at the same speed. I suspect he may not be accounting for those factors when he presents the cost/function plot, and is presenting the cost/function plot as if the entire computer chip were a single adder circuit, without having many of the real constraints of a real computer chip.
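
To make the activity-factor point concrete, here's a toy calculation (all numbers invented purely for illustration, not real process data): switching power per unit area goes roughly as density x activity factor x C x V^2 x f, so doubling transistor density at a fixed voltage and frequency means the average activity factor has to halve to stay inside the same heat budget.

```python
# Toy illustration of the activity-factor constraint (all numbers made up).
# Switching power per unit area ~ density * alpha * C * V^2 * f.

def power_density(density, alpha, c_gate, vdd, f_hz):
    return density * alpha * c_gate * vdd**2 * f_hz

old = power_density(1.0, 0.20, 1e-15, 1.0, 3e9)
new = power_density(2.0, 0.10, 1e-15, 1.0, 3e9)  # 2x density, half the activity
print(old == new)  # identical power density: twice the gates, but half of them idle
```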

FormatAmerica
Jun 3, 2005
Grimey Drawer

silence_kit posted:

I suspect he may not be accounting for those factors when he presents the cost/function plot, and is presenting the cost/function plot as if the entire computer chip were a single adder circuit, without having many of the real constraints of a real computer chip.

That's what the R&D budget is for :haw:

big shtick energy
May 27, 2004


silence_kit posted:

And there's the issue that because of the heat dissipation constraint, you are obligated to design your circuits with a lower activity factor if you want a higher density at the same speed. I suspect he may not be accounting for those factors when he presents the cost/function plot, and is presenting the cost/function plot as if the entire computer chip were a single adder circuit, without having many of the real constraints of a real computer chip.

I think that's why he was so focused on the need to move to lower energy devices. It's not primarily about laptop battery life or anything, it's about keeping power density below surface-of-the-sun levels.

Your point about interfaces and interconnect scaling is a good one, though.

lDDQD
Apr 16, 2006

silence_kit posted:

I suspect that his claim that Intel can include the same functionality in half the die area and thus they are able to halve the cost per function every process node is a little misleading, though. I am not a VLSI designer, so I may be all wet here, but still, I thought that there were a lot of functions on a computer chip which do not benefit much from scaling, like longer-distance communication across the chip & storage (SRAM). And there's the issue that because of the heat dissipation constraint, you are obligated to design your circuits with a lower activity factor if you want a higher density at the same speed. I suspect he may not be accounting for those factors when he presents the cost/function plot, and is presenting the cost/function plot as if the entire computer chip were a single adder circuit, without having many of the real constraints of a real computer chip.

A smaller transistor is basically all-around better. It has lower parasitic capacitance, and thus can switch on and off faster. You get lower threshold voltage, which in turn, allows you to have smaller current flowing through the transistor that is in the 'on' state; consuming less power. You do run into some problems once these things get small enough. For example, nobody worried about sub-threshold leakage until we hit around 100-ish nanometers. Then it started becoming a problem - worse as the devices got smaller. The leakage current was really bad mostly due to the sub-optimal geometry of the classical planar MOSFET. It was easy to manufacture, but it's not a very good shape. For a while, though, it didn't matter. So they came up with non-planar designs: the finFET attempts to solve these geometry problems by having the channel stick out of the chip vertically, greatly reducing the area that is in contact with the substrate. More improvements are to come on this front, in the shape of the gate-all-around FET: a natural improvement over the finFET. The optimum shape for a MOSFET would probably be a sphere, by the way, but it would be a nightmare to manufacture.

There are all sorts of problems cropping up with wires on the chips now, too - so you're right about that. They're actually experimenting with using silver wires just to get higher conductivity, even though dealing with silver is a colossal pain in the rear end; you need diffusion barriers so it doesn't start migrating into the silicon. There are also problems with high density memory, but overall, you can definitely fit more SRAM into the same amount of die area, as the transistors become smaller. These problems weren't really a huge concern until very recently, though. DRAM has got it way worse, too.... at some point it might actually turn out to be the case that you can continue making smaller and faster SRAM, but DRAM hits a brick wall, where you can't feasibly make it any faster or smaller. Hopefully by then, we'll ditch it entirely, it was always a source of endless annoyance anyway. It's not even getting significantly quicker; the latency maybe got cut in half since DDR1.

silence_kit
Jul 14, 2011

by the sex ghost

lDDQD posted:

A smaller transistor is basically all-around better. It has lower parasitic capacitance, and thus can switch on and off faster. You get lower threshold voltage, which in turn, allows you to have smaller current flowing through the transistor that is in the 'on' state; consuming less power.

No, the great thing about lower threshold voltage is that you don't need to use as big of a voltage to switch the transistor on. Lowering the voltage needed to switch the transistor on is a great thing for power--power dissipation due to charging the wires and transistors in the digital switching circuits on computer chips is proportional to the voltage squared. As you hinted at later in your post, a problem with lower threshold voltages is that in the presence of transistor threshold voltage variation (this is a problem that the finFET helps address) and a fundamental limit on the voltage sensitivity of a transistor, a low threshold voltage device can also mean higher off-state leakage current.

High on-current density (in mA/um of gate width) is actually a good thing in a transistor. It is an important figure of merit for transistors in computer chips and is proportional to transistor switching speed. All other things being equal, a transistor in a complementary switching circuit which has a low on-current density dissipates the same amount of energy per switching operation as the transistor with the high on-current density, it is just that the time it takes for the transistor to switch for one cycle is less for the higher current transistor. And we all know in the hobbyist computer thread that it is better to be fast than slow.
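
A little sketch of that energy-vs-speed trade (illustrative numbers only, not real device data): energy per switching event is roughly C*V^2 regardless of drive current, while the time to charge the load is roughly C*V / I_on, so higher on-current buys speed rather than lower energy per operation.

```python
# Illustrative only: energy per switching event vs. switching delay for a
# complementary (CMOS-style) stage driving a capacitive load.

def switch_energy(c_load, vdd):
    return c_load * vdd ** 2          # joules per switching cycle (roughly)

def switch_delay(c_load, vdd, i_on):
    return c_load * vdd / i_on        # seconds to (dis)charge the load (roughly)

C_LOAD, VDD = 1e-15, 0.8              # 1 fF load, 0.8 V supply (made-up values)
for i_on in (0.5e-3, 1.0e-3):         # low vs. high on-current, in amps
    print(switch_energy(C_LOAD, VDD), switch_delay(C_LOAD, VDD, i_on))
# Same energy per operation in both cases; the higher-current device just
# finishes each switch in half the time.
```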

lDDQD posted:

There are all sorts of problems cropping up with wires on the chips now, too - so you're right about that. They're actually experimenting with using silver wires just to get higher conductivity, even though dealing with silver is a colossal pain in the rear end; you need diffusion barriers so it doesn't start migrating into the silicon.

I didn't know that they were seriously considering silver. I thought the issue with silver was the same one that everyone's grandmother who owns a fine dining set would know--silver isn't that stable in air, and tends to tarnish easily.

lDDQD posted:

There are also problems with high density memory, but overall, you can definitely fit more SRAM into the same amount of die area, as the transistors become smaller. These problems weren't really a huge concern until very recently, though. DRAM has got it way worse, too.... at some point it might actually turn out to be the case that you can continue making smaller and faster SRAM, but DRAM hits a brick wall, where you can't feasibly make it any faster or smaller. Hopefully by then, we'll ditch it entirely, it was always a source of endless annoyance anyway. It's not even getting significantly quicker; the latency maybe got cut in half since DDR1.

Again, I'm not in the industry, but I thought I read somewhere that the brand-new aggressively scaled transistors tend to not be used in SRAM due to problems with transistor variation. Maybe I am mixing up the transistors in the SRAM on computer chips with the transistors in flash memory chips.

silence_kit fucked around with this message at 02:19 on Jan 9, 2017

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Well, I am in the industry (for now) and we have many different SRAM types for different sizes, Vts, and on-resistances, though I've never actually gone through them to see what the exact parameters are. I'd say there are probably at least ten different SRAM versions used in different areas of the chip.

Silver is definitely interesting, and given that a lot of the metallization is done in a vacuum with plasmas and ion beams involved, oxidation from air isn't that much of an issue. While silver does react slowly with oxygen, it's actually hydrogen sulfide that's the main tarnisher for silver. While the layers separating the interconnects from the outside are thin, they should definitely be thick enough to stop most diffusion flux through (the Al layer alone is like 20-30um), unless I'm completely mixing things up.

Watermelon Daiquiri fucked around with this message at 00:00 on Jan 9, 2017

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

silence_kit posted:

Again, I'm not in the industry, but I thought I read somewhere that the brand-new aggressively scaled transistors tend to not be used in SRAM due to problems with transistor variation. Maybe I am mixing up the transistors in the SRAM on computer chips with the transistors in flash memory chips.

SRAM cells are built from relatively ordinary CMOS transistors, so if the transistors scale, so does the SRAM cell. As Watermelon Daiquiri says, there's lots of variants on the basic SRAM cell, optimized for different applications (in some cases even the basic circuit is different, e.g. Intel uses 8-transistor (8T) SRAM for some CPU caches and denser 6T SRAM for others), but fundamentally it still scales.

Flash memory cells are indeed a different story. Like DRAM, flash is based on storing charge, but unlike DRAM the charge storage element isn't a relatively conventional capacitor. Instead it's a special transistor gate which is "floating", i.e. buried completely in insulation (SiO2) with no direct circuit connection. Manipulating the charge stored in the floating gate requires application of enough voltage (using a second gate structure) to achieve hot charge carrier injection, i.e. to tunnel electrons across the oxide barrier. By altering the amount of stored charge on the gate, you change the behavior of the transistor's channel, which can be sensed to non-destructively read the memory cell's contents.

Flash has indeed hit an X-Y scaling limit, based (IIRC) on the minimum geometry the oxide walls need to keep the gate properly isolated from adjacent memory cells. However, the current focus for flash scaling is the Z axis. In the past couple years, first Samsung and now others have been rolling out 3D NAND, which stacks multiple planes of memory on a single die. IIRC Samsung's now shipping 48-layer 3D NAND parts.

e: After rereading I realized I should note that the things they're doing to build 3D NAND aren't easily applied to logic, and NAND only gets away with it because there is no active power use while a memory cell is idle. There's been exploration of building 3D logic before, but power density is a real killer.

BobHoward fucked around with this message at 01:44 on Jan 9, 2017

silence_kit
Jul 14, 2011

by the sex ghost

BobHoward posted:

e: After rereading I realized I should note that the things they're doing to build 3D NAND aren't easily applied to logic, and NAND only gets away with it because there is no active power use while a memory cell is idle. There's been exploration of building 3D logic before, but power density is a real killer.

I don't really get how 3D NAND flash works, or really how any flash memory works, but from what I can tell from Googling cartoons of various 3D NAND structures, the channel material is a vertical cylinder made of deposited polysilicon and not the normal mono-crystalline silicon that logic transistors are made of. If that is true, then yeah, that wouldn't port to logic--I'm sure that the switching speeds of the polysilicon transistors in the 3D flash memory are way too slow for logic. Apparently they are acceptable for non-volatile memory.

silence_kit fucked around with this message at 02:24 on Jan 9, 2017

AARP LARPer
Feb 19, 2005

THE DARK SIDE OF SCIENCE BREEDS A WEAPON OF WAR

Buglord

BobHoward posted:

SRAM cells are built from relatively ordinary CMOS transistors, so if the transistors scale, so does...

As someone who knows only enough to put together a home pc, this was a great read. Thank you for posting it.

VulgarandStupid
Aug 5, 2003
I AM, AND ALWAYS WILL BE, UNFUCKABLE AND A TOTAL DISAPPOINTMENT TO EVERYONE. DAE WANNA CUM PLAY WITH ME!?




Digital Foundry put up a good review of the 7600K. A few things are noted here: H270 boards should support 2400MHz RAM, as opposed to just 2133MHz on Z170. Clock for clock, the 7600K is about the same as a 6600K. However, 6600Ks only OC to about 4.5 or 4.6GHz while 7600Ks can do 4.8 no problem, maybe 5.0 or slightly more. And as always, these guys point out that faster RAM does make a difference in many newer games.

Edit: Forgot the link https://www.youtube.com/watch?v=gYb0y8LNAVI

VulgarandStupid fucked around with this message at 11:13 on Jan 9, 2017

Spermanent Record
Mar 28, 2007
I interviewed a NK escapee who came to my school and made a thread. Then life got in the way and the translation had to be postponed. I did finish it in the end, but nobody is going to pay 10 bux to update my.avatar
Do you think the 7600k is worth $45 more than the 6600k based solely on overclocking potential? It's annoying how expensive these are.

VulgarandStupid
Aug 5, 2003
I AM, AND ALWAYS WILL BE, UNFUCKABLE AND A TOTAL DISAPPOINTMENT TO EVERYONE. DAE WANNA CUM PLAY WITH ME!?




Spermanent Record posted:

Do you think the 7600k is worth $45 more than the 6600k based solely on overclocking potential? It's annoying how expensive these are.

Probably not, but as always, it's not just about the clocks. You're still getting things like more PCI-E lanes and faster RAM support on non-high-end motherboards. We still have no idea how good Optane will actually be. I mean, I'm not going to run out and buy any of this stuff, I just like to see what they're offering.

Also, I'm not seeing a $45 difference. It's $20 on Newegg and $30 at Microcenter. That's not too bad a pill to swallow for a ~0.5GHz difference. If anything, I'd say the motherboard price differential would be a bigger concern at the moment.

Spermanent Record
Mar 28, 2007
I interviewed a NK escapee who came to my school and made a thread. Then life got in the way and the translation had to be postponed. I did finish it in the end, but nobody is going to pay 10 bux to update my.avatar
Yeah this is in Korea. The prices are a bit stupid here.

Kaddish
Feb 7, 2002
So, just as an aside - I've been running my 7700K at 5GHz since yesterday. At first I set the voltage manually, but I've since switched to adaptive and it's working really well. It sometimes hits 1.37V but I haven't seen it above that. Average temps never go above 70C in a typical gaming session and usually sit around 60C. This is with a Corsair H100i v2.

champagne posting
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER

http://www.guru3d.com/news-story/kaby-lake-pentium-processors-get-hyper-threading.html

Has this been discussed yet? If not, I feel it deserves a view, since Pentiums with hyper-threading are a pretty big deal.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Boiled Water posted:

http://www.guru3d.com/news-story/kaby-lake-pentium-processors-get-hyper-threading.html

Has this been discussed yet? If not, I feel it deserves a view, since Pentiums with hyper-threading are a pretty big deal.

They also lost ECC compatibility though, so Pentium-based servers are out the window! Intel's trying to make these consumer parts and killing most of the interesting uses for them.

I do think that most commodity cheapo laptop & desktop manufacturers should be using these though, because a hyper-threaded 3.5GHz Skylake dual-core is enough CPU for any office use.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
So, about PCIe lanes on Haswell-E... My 5820K seems to have a fixed setup of x16, x8, and x4 for the three PCIe slots hooked up directly to the CPU. Can each slot run at its own speed, or does an older PCIe gen device in the mix drag everything down? Say, a PCIe 3.0 graphics card in the x16 slot getting hampered by a 10GbE network card, which is PCIe 2.0, in the x8 slot?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I'm trying to figure out what exactly you're saying here. The 5820K itself does not have a "fixed" allocation of PCIe lanes--it supports a maximum of 28, but the actual division thereof is left to the motherboard. If your motherboard happens to support 1x16, 1x8, and 1x4 slots, then presumably it would be able to run all three at full speed at the same time, though if there are other onboard items (like potentially a network card or an M.2 slot) that also use PCIe, you might see it cut bandwidth from some of those slots to power the other items. The motherboard's manual will detail how all that works out. Do note that, in the case of video cards, the difference between an x16 and an x8 slot is negligible (like 1%).

As far as generations go, everything works independently. The card in the x16 slot does not care what generation the card in the x4 slot is, and vice-versa.
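
If you want to confirm this on an actual Linux box, the negotiated speed and width of each PCIe link are exposed per device in sysfs (the same numbers lspci -vv reports under LnkSta). A minimal sketch, assuming a Linux system with sysfs mounted in the usual place:

```python
# Print the negotiated PCIe generation/width for every device (Linux sysfs).
# Each link trains independently, so a Gen2 NIC won't drag a Gen3 GPU down.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    speed = dev / "current_link_speed"
    width = dev / "current_link_width"
    if speed.exists() and width.exists():
        print(dev.name, speed.read_text().strip(), "x" + width.read_text().strip())
```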

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

DrDork posted:

As far as generations go, everything works independently. The card in the x16 slot does not care what generation the card in the x4 slot is, and vice-versa.
Yeah, that's what I wanted to know. I was unsure whether all lanes were forced to the lowest generation and speed or whether they're independent. It probably wouldn't make that much of a difference, but I'd like the graphics card to go full blast at PCIe 3.0, regardless of a network card that only does PCIe 2.0 (obviously I'd like a 10GbE card on the CPU bus, not the DMI). Good to know that they don't influence each other.


mewse
May 2, 2006

Twerk from Home posted:

They also lost ECC compatibility though, so Pentium-based servers are out the window!

This is bullshit.
