BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

New Zealand can eat me posted:

I have purchased 50 of those 493-3697-ND's from Digi-Key. They have a better ESR (5 mOhm vs 8 mOhm) and ripple rating (~7 vs ~5).

It'll very likely work fine with those caps, but I would've tried to match exact if possible. Someone presumably designed and verified that board based on the original caps, and it may not actually work better if you change them for "better" ones.

For instance, ESR is really a curve: X axis frequency, Y axis ESR. Reducing this curve to a single number to quote for part listings is usually done by picking the lowest point on that curve. If cap A has its 5mOhm ESR low point centered at 217 MHz, it might underperform cap B that's 8mOhm at 333 MHz, because noise near 333 MHz was more important to suppress on that board design.

It's not a big deal. It's likely that the ESR curves of the original caps and your replacements aren't different enough in shape or frequency response to matter. I just wanted to put it out there that, when replacing capacitors, don't assume that better specs will perform better.
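To make the frequency-dependence point concrete, here's a toy Python sketch. The curve shape and the 217/333 MHz numbers are purely illustrative (not from any datasheet); it just shows how a cap with the lower quoted ESR can still be the worse part at the frequency that actually matters:

```python
import math

def esr(f_hz, r_min, f_min_hz, k=0.1):
    """Toy ESR-vs-frequency curve: a parabola in log-frequency with its
    minimum r_min at f_min_hz. Real curve shapes come from dielectric and
    skin-effect losses; this one is made up purely for illustration."""
    decades = math.log10(f_hz / f_min_hz)
    return r_min + k * decades ** 2

# Hypothetical caps matching the numbers above:
# cap A: 5 mOhm minimum ESR at 217 MHz; cap B: 8 mOhm minimum at 333 MHz
esr_a = lambda f: esr(f, 5e-3, 217e6)
esr_b = lambda f: esr(f, 8e-3, 333e6)

for f in (217e6, 333e6):
    print(f"{f/1e6:.0f} MHz: cap A {esr_a(f)*1e3:.2f} mOhm, "
          f"cap B {esr_b(f)*1e3:.2f} mOhm")
# At 333 MHz cap A's ESR has climbed past cap B's, even though cap A has
# the lower single-number ESR on the datasheet.
```

With these made-up numbers, cap A evaluates to about 8.5 mOhm at 333 MHz versus cap B's 8 mOhm, i.e. the "better" part loses where it counts.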


Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

BobHoward posted:

It'll very likely work fine with those caps, but I would've tried to match exact if possible. Someone presumably designed and verified that board based on the original caps, and it may not actually work better if you change them for "better" ones.

For instance, ESR is really a curve: X axis frequency, Y axis ESR. Reducing this curve to a single number to quote for part listings is usually done by picking the lowest point on that curve. If cap A has its 5mOhm ESR low point centered at 217 MHz, it might underperform cap B that's 8mOhm at 333 MHz, because noise near 333 MHz was more important to suppress on that board design.

It's not a big deal. It's likely that the ESR curves of the original caps and your replacements aren't different enough in shape or frequency response to matter. I just wanted to put it out there that, when replacing capacitors, don't assume that better specs will perform better.

Pretty much, using caps to filter noise and prevent ringing in a circuit is half black magic and half careful placement. Changing things like ESR and ripple can do odd things if the design is borderline.

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

VulgarandStupid posted:

There's also not a lot of great desktop applications for USB-C compared to mobile solutions. For example you wouldn't power your desktop with USB-C. You don't need external GPU, and you don't need to reduce the number of ports just to make your desktop physically smaller.

But you'll need USB-C if you want to charge upcoming phones with your computer as phones start to restandardize to that form factor. Various other devices are going to be restandardizing to USB-C ports too, like external hard drives, and eventually even printers, flash drives, etc.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

fishmech posted:

But you'll need USB-C if you want to charge upcoming phones with your computer as phones start to restandardize to that form factor. Various other devices are going to be restandardizing to USB-C ports too, like external hard drives, and eventually even printers, flash drives, etc.

I think his point is that USB-C isn't necessarily a speed rating, and USB-A to C cables are readily available, making physical USB-C ports on the desktop not exactly essential. Just handy if all you have are C to C cables.

HalloKitty fucked around with this message at 15:44 on Feb 20, 2017

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

VulgarandStupid posted:

There's also not a lot of great desktop applications for USB-C compared to mobile solutions. For example you wouldn't power your desktop with USB-C. You don't need external GPU, and you don't need to reduce the number of ports just to make your desktop physically smaller.
Being able to unify different connectors like HDMI and DisplayPort onto a single USB cable is pretty cool, and being able to use the same cables with laptops and mobile devices is good from a usability perspective IMO. I think of it as the vision of Thunderbolt and USB unified together. I don't really long for the days of PS/2 mice and keyboards exactly but until now we've been forced to stop at the video outputs.

Platystemon
Feb 13, 2012

BREADS
The best part about USB-C is that it has rotational symmetry so there’s none of that “takes three attempts to get USB-A the right way round” nonsense.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Platystemon posted:

The best part about USB-C is that it has rotational symmetry so there’s none of that “takes three attempts to get USB-A the right way round” nonsense.

USB-A plugs can be designed in a reversible manner; they exist in the market. Not saying it's not a benefit of USB-C, just that it could have been done already.

Edit: there's even a Micro-B reversible cable available

HalloKitty fucked around with this message at 22:50 on Feb 20, 2017

VulgarandStupid
Aug 5, 2003
I AM, AND ALWAYS WILL BE, UNFUCKABLE AND A TOTAL DISAPPOINTMENT TO EVERYONE. DAE WANNA CUM PLAY WITH ME!?




necrobobsledder posted:

Being able to unify different connectors like HDMI and DisplayPort onto a single USB cable is pretty cool, and being able to use the same cables with laptops and mobile devices is good from a usability perspective IMO. I think of it as the vision of Thunderbolt and USB unified together. I don't really long for the days of PS/2 mice and keyboards exactly but until now we've been forced to stop at the video outputs.

So you want to plug dongles into your desktop? What's the point? You're just wasting money on dongles or expensive cables. Also, as of right now, there are very few Thunderbolt displays and they are more expensive as they're generally marketed towards Apple users.

I'm not saying USB-C has no merits, but I think they are a bit overstated and not that necessary in the desktop market. They make tons of sense for mobile devices.

Edit: As a side note, I think Thunderbolt could have some really cool applications. For example, you could run a computer like a home server. Just put your Thunderbolt monitor in your room/office and the computer in the basement, then run a Thunderbolt cable up from the basement to your monitor. You can plug all your peripherals into the monitor, but take the heat and noise generated by the computer out of the room, as well as keep your computer in a generally cooler environment to begin with. It's a much better idea than Linus' whole-room water cooling or similar ideas. I just don't know what Thunderbolt's constraints look like in terms of bandwidth and how long you can run your cables.

VulgarandStupid fucked around with this message at 23:25 on Feb 20, 2017

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Yeah, I highly doubt it could run longer than a few meters given the incredibly high bandwidth it has. This is an excellent application for optical cables, however. If they can deliver the same bandwidth (which I have no doubt they could), it would be a good way to send that data over a long distance, though it'd be more of a hassle than regular Cat cables if it breaks.

repiv
Aug 13, 2009

Yeah TB3-over-copper is limited to just 2 meters. AFAIK there's no optical TB3 cables yet but if it's anything like TB2 they won't be cheap.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Yeah, this would be something that'd have to rely on whatever that phrase is that means bigger demand leads to lower prices (like how there are so many TVs made that the crystal used for the color carrier is so cheap people design circuits around it (Nintendo, Intel, Apple, e.g.))


Also, I've been wondering if it could be feasible to use a topology similar to vNAND for logic (or better yet, SRAM and eDRAM caches). I imagine if it were feasible someone would already be looking into it, but in any case, if heat isn't too much of an issue, I was thinking it could be set up in Cadence where each logical unit (like a RAM cell or a NAND or shift register or whatever) would be pre-built in a relatively small area (maybe in a few different vertical configurations to cover as many in and out points as possible) and the auto-router would select the most compatible config and maybe optimize the location to sync things up as needed.

Of course, if heat is an issue, maybe dedicated ground and power layers that have no routes on them besides vias between layers would be feasible to help wick heat away (though I doubt the small volume would be enough for multiple watts). I'm also imagining that to get connections to where they need to go (as opposed to the simple lines in flash) you'd have to spread things out to allow room, compromising the areal savings, possibly to the point that the increased cost of ultra-anisotropic etching and-- critically-- extra masks and ArF/KrF photo layers makes it too expensive on its own. Still, maybe there is a compromise in there somewhere-- SRAM and eDRAM are relatively simple (but expansive) circuits (I recall that the Snapdragons are around 30-40% SRAM by area, and there is the eDRAM on the Xbone that compromised the graphics due to its size). There might be room for 3-5 layers in the 11-13 metal layers current 14-16nm chips have, which could reduce the area taken up by caches significantly. Actually, given that SRAM and flash are kinda similar, I wouldn't be surprised if it is done already in some products.

I'm talking out my rear end for most of this, and I only did a cursory amount of research. I'm still interested in the feasibility and difficulties involved, though.

KingEup
Nov 18, 2004
I am a REAL ADDICT
(to threadshitting)


Please ask me for my google inspired wisdom on shit I know nothing about. Actually, you don't even have to ask.

VulgarandStupid posted:

Edit: As a side note, I think Thunderbolt could have some really cool applications. For example, you could run a computer like a home server. Just put your Thunderbolt monitor in your room/office and the computer in the basement, then run a Thunderbolt cable up from the basement to your monitor. You can plug all your peripherals into the monitor, but take the heat and noise generated by the computer out of the room, as well as keep your computer in a generally cooler environment to begin with. It's a much better idea than Linus' whole-room water cooling or similar ideas. I just don't know what Thunderbolt's constraints look like in terms of bandwidth and how long you can run your cables.

Linus has already done this: https://m.youtube.com/watch?v=NshXgisNly4

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

VulgarandStupid posted:


I'm not saying USB-C has no merits, but I think they are a bit overstated and not that necessary in the desktop market. They make tons of sense for mobile devices.


I really don't get this attitude. Most normal computers these days are laptops; it's been that way for over 10 years. Why do you think it's not useful to have the same ports on your desktop as on your laptop? Especially when most computer accessories are necessarily aiming to be usable on laptops, and thus will use connectors suitable for them where possible.

When USB itself was first coming out, it was also much more useful for laptops and other mobile devices, because there was plenty of room for your parallel and serial ports on a desktop, but with every peripheral moving to USB...

silence_kit
Jul 14, 2011

by the sex ghost

Watermelon Daiquiri posted:

Also, I've been wondering if it could be feasible to use a topology similar to vnand for logic (or better yet sram and edram caches). I imagine if it were feasible someone would already be looking into it

I think the vertical cylindrical channel transistors used in vNAND flash are polysilicon channel transistors, which have lower electron and hole mobilities and higher interface trap densities than the normal silicon transistors used for logic. The lower mobilities and worse interface trap densities of polysilicon when compared to the normal silicon mean that it will be harder to run the polysilicon channel transistors at the high speeds and small supply voltages (low power) that people who are designing VLSI logic circuits in the latest technologies are accustomed to.

It is harder technically to do, but if you could make the little cylindrical pillars out of the more normal channel material, mono-crystalline silicon, then these new types of tubular transistors could be run at the same speeds & voltages as the normal transistors at increased density. Lots of people have and probably still are working on this idea.

Here's an article I quickly Googled reporting on a conference paper written by researchers at IMEC (a European VLSI technology research lab) who were basically working on the same thing you were talking about: http://www.analog-eetimes.com/news/imec-reports-nanowire-fet-vertical-sram I think this idea is pretty old though--I bet if you were to do enough Googling (key words: vertical nano-wire, gate-all-around), you could find more stuff on this.
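To put rough numbers on the mobility penalty, here's a back-of-envelope square-law MOSFET comparison. Every value here (mobilities, oxide capacitance, W/L, overdrive) is an illustrative assumption, not data for any real process:

```python
# Long-channel square-law MOSFET model, used only to show how channel
# mobility feeds into drive current and hence logic speed.
def sat_current(mobility_cm2, cox_f_per_cm2, w_over_l, vov):
    """Saturation current: Id = 0.5 * mu * Cox * (W/L) * Vov^2"""
    return 0.5 * mobility_cm2 * cox_f_per_cm2 * w_over_l * vov ** 2

COX = 2e-6   # F/cm^2, assumed gate capacitance per area
W_L = 10     # assumed width/length ratio
VOV = 0.3    # V, assumed overdrive (VGS - VT)

id_poly = sat_current(30.0, COX, W_L, VOV)    # ~30 cm^2/V*s: rough polysilicon channel
id_mono = sat_current(300.0, COX, W_L, VOV)   # ~300 cm^2/V*s: rough crystalline Si channel

# Gate delay scales like C*V/I, so at fixed load and voltage the delay
# ratio is just the inverse current ratio:
print(f"poly channel is roughly {id_mono / id_poly:.0f}x slower at the same voltage")
```

In this simple model the speed penalty is exactly the mobility ratio; real short-channel devices are messier, but the direction is the same.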

Watermelon Daiquiri posted:

Yeah, this would be something that'd have to rely on whatever that phrase is that means bigger demand leads to lower prices (like how there are so many tvs made that the crystal used for the color carrier is so cheap people design circuits around them (Nintendo, Intel, Apple, e.g))

What are you talking about here? I've never heard of this. A lot of people worried about the material cost of indium, in the indium tin oxide (ITO) transparent electrode used in display technology, but their worries weren't really well-founded and ended up being wrong, and we now enjoy really really cheap displays.

silence_kit fucked around with this message at 02:34 on Feb 21, 2017

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
Yeah, I know gate-all-around is 'The Next Big Thing', but I guess I had assumed it'd just be done in a similar manner to finFETs and constructed in a 2-D plane on the surface of the wafer. I can't tell (and I'm not really in the mood to research more at the moment) but is the bit line completely surrounded by the word and control gates, or is it only on one or two sides? I saw both varieties when I looked earlier, though a circular gate surrounding the channel would be the better option than having different segments to the gate. After all, sharp edges really love to gently caress with electricity lol. I completely forgot that vNAND uses poly, though, that does complicate things...



e: vvvv Thanks! I hate it when I can't think of the proper word...

Watermelon Daiquiri fucked around with this message at 04:38 on Feb 21, 2017

Col.Kiwi
Dec 28, 2004
And the grave digger puts on the forceps...

Watermelon Daiquiri posted:

whatever that phrase is that means bigger demand leads to lower prices
Economy of scale

VulgarandStupid
Aug 5, 2003
I AM, AND ALWAYS WILL BE, UNFUCKABLE AND A TOTAL DISAPPOINTMENT TO EVERYONE. DAE WANNA CUM PLAY WITH ME!?




fishmech posted:

I really don't get this attitude. Most normal computers these days are laptops, it's been that way for over 10 years. Why do you think it's not useful to have the same ports on your desktop as on your laptop? Especially when most computer accessories necessarily are aiming to be usable on laptops, and thus will use connectors suitable for them where possible.

When USB itself was first coming out, it was also much more uesful for laptops and other mobile devices because there was plenty of room for your parallel and serial ports on a desktop, but with every peripheral moving to USB...

Well, the attitude is echoed by the motherboard manufacturers. It's probably seen as an extra cost that consumers aren't really demanding. Also you have to remember that the PC industry is not quick to adopt anything. We've been using needlessly large ATX towers for 20 years, and only in recent years have we seen OEMs release mini-PCs that have decent use cases. I mean, the real reason we have USB-C is because Apple decided that the USB port is too big for their thin laptop, so now we are using it.

Anyway this whole conversation got started over someone looking for multiple USB-C ports on a motherboard. A bunch of us said it's not that important, and he probably shouldn't get hung up on his build. That point still stands. It could all change in a few years, but with licensing fees and other snafus, plus the PC industry dragging its feet, it will probably take longer than you think. Plus gaining USB-C will never mean losing USB3 compatibility.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

silence_kit posted:

I think the vertical cylindrical channel transistors used in vNAND flash are polysilicon channel transistors, which have lower electron and hole mobilities and higher interface trap densities than the normal silicon transistors used for logic. The lower mobilities and worse interface trap densities of polysilicon when compared to the normal silicon means that it will be harder to run the polysilicon channel transistors at the high speeds and small supply voltages (low power) that people who are designing VLSI logic circuits in the latest technologies are accustomed to.

It is harder technically to do, but if you could make the little cylindrical pillars out of the more normal channel material, mono-crystalline silicon, then these new types of tubular transistors could be run at the same speeds & voltages as the normal transistors at increased density. Lots of people have and probably still are working on this idea.

Here's an article I quickly Googled reporting on a conference paper written by researchers at IMEC (a European VLSI technology research lab) who were basically working on the same thing you were talking about : http://www.analog-eetimes.com/news/imec-reports-nanowire-fet-vertical-sram I think this idea is pretty old though--I bet if you were to do enough Googling (key words: vertical nano-wire, gate-all-around), you could find more stuff on this.

It is doubtful anyone is seriously working on this for logic. 3D SRAM may make sense.

The problem with 3D logic is that planar logic in high speed chips already generates more watts per mm^2 than the surface of the sun. (That is not hyperbole btw.) The only way to scale this figure up is to go to much more costly cooling systems. You see that today in the liquid cooling systems overclockers love, but that stuff doesn't sell outside of "enthusiasts". Chip makers are going to be designing mass market products around the limitations of air cooling for the foreseeable future.

Which basically implies that 3D logic is a no-go. You are immediately doubling power density, and the layer further away from the cooler gets extra hot. Add a third layer and things get even worse.

If you try to solve these problems by backing off on voltage and therefore frequency, well, the whole point of the exercise was to make a fast chip, right? Sacrificing a lot of performance to gain density isn't a great tradeoff, especially in light of the fact that this is likely to be quite expensive to build compared to planar logic.

quote:

What are you talking about here? I've never heard of this.

I think they were talking about 14.31818 MHz, which was a clock frequency needed by NTSC TV sets. Since quartz oscillators and crystals cut for that frequency were so common, they were very cheap. Lots of designs with no need to be NTSC compatible used that frequency (or ran it through a simple divider to generate a slower frequency) just because it was lots cheaper than picking anything else.

Platystemon
Feb 13, 2012

BREADS

BobHoward posted:

The problem with 3D logic is that planar logic in high speed chips already generates more watts per mm^2 than the surface of the sun. (That is not hyperbole btw.)

Doesn’t check out? :confused:

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

VulgarandStupid posted:

Anyway this whole conversation got started over someone looking for multiple USB-C ports on a motherboard. A bunch of us said it's not that important, and he probably shouldn't get hung up on his build. That point still stands. It could all change in a few years, but with licensing fees and other snafus, plus the PC industry dragging its feet, it will probably take longer than you think. Plus gaining USB-C will never mean losing USB3 compatibility.

Frankly, more than one USB-C on the motherboard is all but pointless right now: the vast majority of peripherals are still USB-A, and remember that motherboard USB ports are on the rear of the computer. Those aren't the ones you're going to be using frequently. Sure, toss one USB-C on there for possible use for monitors or whatever, but otherwise pretty much everything that people have plugged in back there are going to be all but permanent installations: keyboards, mice, etc.

What you really want is a motherboard with a USB3(.1) header on it so that you can connect up a USB front panel. Then you can get your USB-C or whatever else it is you want up front (or case-mounted) where it'll actually help when you're looking for somewhere to plug your cellphone or external HDD from 2019 into.

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

BobHoward posted:

It is doubtful anyone is seriously working on this for logic. 3D SRAM may make sense.

The problem with 3D logic is that planar logic in high speed chips already generates more watts per mm^2 than the surface of the sun. (That is not hyperbole btw.) The only way to scale this figure up is to go to much more costly cooling systems. You see that today in the liquid cooling systems overclockers love, but that stuff doesn't sell outside of "enthusiasts". Chip makers are going to be designing mass market products around the limitations of air cooling for the foreseeable future.

Which basically implies that 3D logic is a no-go. You are immediately doubling power density, and the layer further away from the cooler gets extra hot. Add a third layer and things get even worse.

If you try to solve these problems by backing off on voltage and therefore frequency, well, the whole point of the exercise was to make a fast chip, right? Sacrificing a lot of performance to gain density isn't a great tradeoff, especially in light of the fact that this is likely to be quite expensive to build compared to planar logic.


I think they were talking about 14.31818 MHz, which was a clock frequency needed by NTSC TV sets. Since quartz oscillators and crystals cut for that frequency were so common, they were very cheap. Lots of designs with no need to be NTSC compatible used that frequency (or ran it through a simple divider to generate a slower frequency) just because it was lots cheaper than picking anything else.

I figured that might be the case then, unless resistances would be appreciably lower. Maybe with new micro- (nano?) fluidics research something something could be feasible. And yeah, it is the NTSC color burst crystal (which is actually 14.31818 / 4 = 3.579545 MHz). A lot of things used clocks proportional to that frequency-- half of it is used in the NES for example, and the SNES used both it exactly and 3/4 of it.
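The clock relationships above can be checked exactly, since every NTSC-derived clock is a rational multiple of the 315/88 MHz color subcarrier (the NES/SNES divisor choices below are the commonly documented ones, included for illustration):

```python
from fractions import Fraction

# NTSC clocks are exact rational multiples of the color subcarrier,
# which is defined as 315/88 MHz (= 3.579545... MHz).
subcarrier = Fraction(315, 88)            # MHz
crystal    = 4 * subcarrier               # the ubiquitous 14.31818 MHz crystal

nes_cpu   = subcarrier / 2                # ~1.79 MHz NES CPU clock (half the subcarrier)
snes_fast = subcarrier                    # SNES fast memory cycle rate
snes_slow = subcarrier * Fraction(3, 4)   # ~2.68 MHz SNES slow memory cycle rate

print(f"crystal    = {float(crystal):.5f} MHz")
print(f"subcarrier = {float(subcarrier):.6f} MHz")
print(f"NES CPU    = {float(nes_cpu):.6f} MHz")
print(f"SNES slow  = {float(snes_slow):.6f} MHz")
```

Using `Fraction` keeps the ratios exact, which is the whole point: every one of these machines is just dividing the same cheap crystal.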

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

DrDork posted:

Frankly, more than one USB-C on the motherboard is all but pointless right now: the vast majority of peripherals are still USB-A, and remember that motherboard USB ports are on the rear of the computer. Those aren't the ones you're going to be using frequently. Sure, toss one USB-C on there for possible use for monitors or whatever, but otherwise pretty much everything that people have plugged in back there are going to be all but permanent installations: keyboards, mice, etc.

What you really want is a motherboard with a USB3(.1) header on it so that you can connect up a USB front panel. Then you can get your USB-C or whatever else it is you want up front (or case-mounted) where it'll actually help when you're looking for somewhere to plug your cellphone or external HDD from 2019 in to.

That's what I want, a standardized motherboard connector so case manufacturers can have a usb c or two on the front for plugging in stuff.

I suppose a hub or whatever would be fine too.

silence_kit
Jul 14, 2011

by the sex ghost

BobHoward posted:

It is doubtful anyone is seriously working on this for logic. 3D SRAM may make sense.

The problem with 3D logic is that planar logic in high speed chips already generates more watts per mm^2 than the surface of the sun. (That is not hyperbole btw.) The only way to scale this figure up is to go to much more costly cooling systems. You see that today in the liquid cooling systems overclockers love, but that stuff doesn't sell outside of "enthusiasts". Chip makers are going to be designing mass market products around the limitations of air cooling for the foreseeable future.

Which basically implies that 3D logic is a no-go. You are immediately doubling power density, and the layer further away from the cooler gets extra hot. Add a third layer and things get even worse.

If you try to solve these problems by backing off on voltage and therefore frequency, well, the whole point of the exercise was to make a fast chip, right? Sacrificing a lot of performance to gain density isn't a great tradeoff, especially in light of the fact that this is likely to be quite expensive to build compared to planar logic.

I get your argument here. I too seriously doubt that improved cooling will be economical. But let's go into a time warp back to 10-15 years ago. Couldn't you have made a similar argument against the development of more dense VLSI technology back then, the same as what you are doing now? What is the difference between now and then?

BobHoward posted:

I think they were talking about 14.31818 MHz, which was a clock frequency needed by NTSC TV sets. Since quartz oscillators and crystals cut for that frequency were so common, they were very cheap. Lots of designs with no need to be NTSC compatible used that frequency (or ran it through a simple divider to generate a slower frequency) just because it was lots cheaper than picking anything else.

Oh, I see.

silence_kit fucked around with this message at 18:09 on Feb 21, 2017

JnnyThndrs
May 29, 2001

HERE ARE THE FUCKING TOWELS

BobHoward posted:

I think they were talking about 14.31818 MHz, which was a clock frequency needed by NTSC TV sets. Since quartz oscillators and crystals cut for that frequency were so common, they were very cheap. Lots of designs with no need to be NTSC compatible used that frequency (or ran it through a simple divider to generate a slower frequency) just because it was lots cheaper than picking anything else.

Holy poo poo, I always wondered why those crystals were so common in all kinds of poo poo that had nothing to do with TV's. :ms:

fishmech
Jul 16, 2006

by VideoGames
Salad Prong

VulgarandStupid posted:

Well, the attitude is echoed by the motherboard manufacturers. It's probably seen as an extra cost that consumers aren't really demanding. Also you have to remember that the PC industry is not quick to adopt anything. We've been using needlessly large ATX towers for 20 years, and only in recent years have we seen OEMs release mini-PCs that have decent use cases. I mean, the real reason we have USB-C is because Apple decided that the USB port is too big for their thin laptop, so now we are using it.

Anyway this whole conversation got started over someone looking for multiple USB-C ports on a motherboard. A bunch of us said it's not that important, and he probably shouldn't get hung up on his build. That point still stands. It could all change in a few years, but with licensing fees and other snafus, plus the PC industry dragging its feet, it will probably take longer than you think. Plus gaining USB-C will never mean losing USB3 compatibility.

Well no, it's not echoed by them, really? USB-C is still very new, so it hasn't shown up everywhere quite yet, but it's also not particularly common on consumer laptops either. Sure, some MacBooks have them at the moment, but Macs are a tiny share of the laptop market just as they are a tiny share of desktops, and you don't even get USB-C on every current Mac model. But I don't see any motherboard manufacturers saying they're going to refuse to implement USB-C either. And I certainly don't see them saying they're going to refuse to implement it because it's "for mobile".

It's absolutely going to change in a few years once USB-C goes from something available on a small fraction of phones and laptops to being the standard for phones and available on nearly all laptops. And since a user can expect to normally keep the same desktop motherboard for 5 years or so these days, it absolutely makes sense to want to try to find one with multiple ports if you can get it, so you don't need to try to replace it or take up a PCI express slot with an addon USB-C card in a few years.

mewse
May 2, 2006


Those corning optical cables with integrated media transceivers look super cool

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

silence_kit posted:

I get your argument here. I too seriously doubt that improved cooling will be economical. But let's go into a time warp back to 10-15 years ago. Couldn't you have made a similar argument against the development of more dense VLSI technology back then, the same as what you are doing now? What is the difference between now and then?


Oh, I see.

The sun's surface is ~63 W/mm^2. A modern high-power chip like AMD's 135 W Bulldozer boondoggle has more heat per unit area than a turbine fan blade or a nuclear reactor core.
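For anyone who wants to check the sun comparison, here's the arithmetic: Stefan-Boltzmann for the solar surface flux, and a rough die-average for the chip (a ~315 mm^2 die is assumed as a ballpark Bulldozer-era figure):

```python
# Sanity check on the "surface of the sun" power-density comparison.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
T_SUN = 5772             # K, effective temperature of the photosphere

sun_flux = SIGMA * T_SUN ** 4            # W/m^2, radiated flux at the surface
print(f"sun surface: {sun_flux / 1e6:.1f} W/mm^2")   # ~63 W/mm^2, as quoted

# Die-average power density of a ~135 W chip on an assumed ~315 mm^2 die:
chip = 135 / 315
print(f"chip die average: {chip:.2f} W/mm^2")
# The die average is two orders of magnitude below the sun's surface, so
# the comparison only gets close for small local hotspots, not the whole die.
```

So the famous slide is about hotspot power density, not whole-die averages, which is probably why the claim reads as implausible at first glance.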

We're approaching the fundamental limit of the thermal conductivity and impedance of the bulk silicon substrate. Having the heatsink directly attached to the silicon die can't pull the heat out fast enough unless the deltaT is super high, which is why those phase-change chillers work so well on 5+ GHz overclocks. People who don't want what amounts to a miniature commercial freezer plant in their computer require either a chip that doesn't generate as much heat, or a non-traditional cooling method.

Laser-etching microchannels in the substrate itself and pumping water through them is stupidly efficient. A whitepaper back in the 80s managed to sink north of 700 W in a chip about the size of a 12-core Xeon, with a 60 C water temp rise. Something like that might end up being the next high-performance cooling solution, assuming they don't just eat the loss and make the chips twice as big to cut the density down enough.
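A quick sanity check on that 700 W / 60 C figure shows why microchannels are attractive: the required water flow is tiny. This is straight heat-capacity arithmetic; nothing here comes from the original paper beyond the two numbers quoted above:

```python
# How much water does it take to carry away 700 W with a 60 K temperature rise?
# Energy balance: P = mdot * c_p * dT, solved for the mass flow rate mdot.
P   = 700      # W, heat absorbed
C_P = 4186     # J/(kg*K), specific heat of liquid water
DT  = 60       # K, coolant temperature rise

mdot = P / (C_P * DT)   # kg/s
# Water is ~1 kg per liter, so kg/min is roughly L/min:
print(f"flow rate: {mdot * 1e3:.2f} g/s (~{mdot * 60:.2f} L/min)")
# ~2.8 g/s, i.e. about a sixth of a liter per minute -- a trickle.
```

The catch, as the post notes, is that all of that heat crosses into the water across a very small area at a large temperature rise, not that the plumbing is demanding.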

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

priznat posted:

That's what I want, a standardized motherboard connector so case manufacturers can have a usb c or two on the front for plugging in stuff.

This already exists: motherboards can have a USB 3.1 header on them that's just a set of exposed pins. Cases, break-out boxes, or whatever else can plug into that pin set and then provide whatever sort of physical USB interface you want (A, B, or C). That is, for example, how I added a USB-C port to the front of my computer without having to get a different motherboard.

I think the argument was more about replacing a bunch of the USB-A ports on the rear of a motherboard with C's, which at this juncture makes little sense.

mewse
May 2, 2006

Seems like the new Atom chips are going to be what people will want for microservers, since the Pentium line is losing ECC memory support

https://www.servethehome.com/intel-atom-c3000-denverton-first-benchmarks-can-expect-finally-launched/

silence_kit
Jul 14, 2011

by the sex ghost

Methylethylaldehyde posted:

The surface of the sun is ~63 W/mm^2. A modern high-power chip like AMD's 135 W Bulldozer boondoggle has a higher heat flux per unit area than a turbine fan blade or a nuclear reactor core.

The point I'm making is that the famous slide which compares computer chip power densities to the sun, rockets, nuclear bombs, etc. is like 15 years old now, and in the meantime there has been much investment into and improvement in the device/interconnect density in the state-of-the-art VLSI technologies. Circuit designers have been able to figure out how to take advantage of the increased device/interconnect density of computer chip technologies since then without requiring exotic and probably uneconomical/impractical cooling, so why would further improvements to density be any different? What am I missing here?

evilweasel
Aug 24, 2002

silence_kit posted:

The point I'm making is that the famous slide which compares computer chip power densities to the sun, rockets, nuclear bombs, etc. is like 15 years old now, and in the meantime there has been much investment into and improvement in the device/interconnect density in the state-of-the-art VLSI technologies. Circuit designers have been able to figure out how to take advantage of the increased device/interconnect density of computer chip technologies since then without creating molten computer chips, so why would further improvements to density be any different? What am I missing here?

If your chip is a flat plane, you're pulling heat out from every part of the chip with the heatsink. If you have multiple chip layers, you have (at best) chip layers where on one side the heatsink is replaced by something creating just as much heat as that layer is. If you have three layers, you have a layer where there is no heatsink at all.

crazypenguin
Mar 9, 2005
nothing witty here, move along

Oh neat. I didn't know this was possible, let alone at only a $500 premium.

silence_kit
Jul 14, 2011

by the sex ghost

evilweasel posted:

If your chip is a flat plane, you're pulling heat out from every part of the chip with the heatsink. If you have multiple chip layers, you have (at best) chip layers where on one side the heatsink is replaced by something creating just as much heat as that layer is. If you have three layers, you have a layer where there is no heatsink at all.

In the case of these hypothetical 'tubular transistors', the layers the transistors occupy are pretty thin, and I'd be shocked if the additional thermal resistance amounted to much. I don't see how putting two transistors on a tube is much different from just doubling the device density for normal transistors. Maybe it is a little worse.

silence_kit fucked around with this message at 23:59 on Feb 21, 2017

evilweasel
Aug 24, 2002

silence_kit posted:

In the case of these hypothetical 'tubular transistors', the layers the transistors occupy are pretty thin, and I'd be shocked if the additional thermal resistance amounted to much. I don't see how putting two transistors on a tube is much different from just doubling the device density for normal transistors. Maybe it is a little worse.

Smaller transistors use less power so when you double the density of transistors by shrinking them, you're reducing the heat generated per transistor. When you stack them on top of each other to double the density, you're not.
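A toy sketch of why the two ways of doubling density behave differently, using the usual dynamic-power model P = a * C * V^2 * f per transistor (all numbers are made up, and classic Dennard-style scaling is assumed for the shrink case):

```python
# Per-unit-area heat for two ways of doubling transistor density.
# Assumes dynamic power P = a * C * V**2 * f per transistor and ideal
# Dennard scaling (C and V shrink with feature size); numbers illustrative.
def dynamic_power(c_farads, v_volts, f_hz, activity=0.1):
    return activity * c_farads * v_volts ** 2 * f_hz

C, V, F = 1e-15, 1.0, 3e9            # baseline device: 1 fF, 1 V, 3 GHz
base = dynamic_power(C, V, F)

k = 2 ** -0.5                        # linear shrink factor for 2x density
shrink_per_area = 2 * dynamic_power(C * k, V * k, F)  # 2x devices, each smaller
stack_per_area = 2 * base            # two layers, devices unchanged

print(shrink_per_area / base)        # ~0.71: shrinking lowers heat per area
print(stack_per_area / base)         # 2.0: stacking doubles it
```

Under those (idealized) assumptions, doubling density by shrinking actually reduces the heat flux, while stacking doubles it, which is evilweasel's point in one calculation.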

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.
You are assuming that the resistances will be the same or similar. A good portion of the resistance seen in a device is at the contacts and in the layers joining the contacts to the source/drain, so if we reduce the number of contacts by having multiple gates affect the same channel, we reduce the amount of Joule heating from the contacts. Of course, I was never a design engineer, so I have no idea whether they're doing this already with finFETs (or maybe planar). Having 3 dimensions of freedom would be much more effective than 2-D, though.

All of this relies heavily on being able to use silicon as the channel material and having it doped heavily and consistently. Since it would likely only be feasible to have a few layers of 'vertical space' to start with, it might be possible to pre-dope everything needed and then just carve away everything but the silicon needed for the channels (only using a max of like 3 per channel to start with). From there it should be simple to build everything up in between.

On paper at least, it should reduce heat by anywhere from like a quarter (to pull a # out of rear end) to maybe even 2/3, depending on how many gates there are per channel, the density achieved, and how the contact resistance compares to the channel resistance. I'd love to explore this on a university's dime, even if it's just enough to see that it isn't ever going to be feasible.

silence_kit
Jul 14, 2011

by the sex ghost

evilweasel posted:

Smaller transistors use less power so when you double the density of transistors by shrinking them, you're reducing the heat generated per transistor. When you stack them on top of each other to double the density, you're not.

OK, this makes sense. Thanks.

Watermelon Daiquiri posted:

You are assuming that the resistances will be the same or similar. A good portion of the resistance seen in a device is at the contacts and in the layers joining the contacts to the source/drain, so if we reduce the number of contacts by having multiple gates affect the same channel, we reduce the amount of Joule heating from the contacts.

The extra series resistance in the ohmic contact, oddly enough, doesn't change the energy consumption per switching cycle, but it does slow down the switching speed. One quantity which is important to power consumption is the device capacitance, which tends to get smaller (and for thermal sinking reasons, it kind of must) as you move to smaller and smaller process nodes, as evilweasel pointed out.

The 'dynamic dissipation' section of the following Wikipedia page kind of explains this: https://en.m.wikipedia.org/wiki/CMOS
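To put rough numbers on the dynamic-dissipation formula from that page, P = a * C * V^2 * f (every value below is an assumed, merely plausible order of magnitude, not a measurement of any real chip):

```python
# CMOS dynamic power: P = activity * switched_capacitance * Vdd^2 * f.
# All inputs are assumed, plausible orders of magnitude for a big CPU.
alpha = 0.1          # fraction of the chip's capacitance switching per cycle
c_total = 300e-9     # total switchable capacitance, F (assumed)
v_dd = 1.0           # supply voltage, V
f_clk = 3e9          # clock frequency, Hz

p_dyn = alpha * c_total * v_dd ** 2 * f_clk
print(f"{p_dyn:.0f} W")                                        # 90 W here

# The quadratic V term is why dropping Vdd is so effective:
print(f"{alpha * c_total * 0.8 ** 2 * f_clk:.0f} W at 0.8 V")  # ~58 W
```

Note that contact resistance doesn't appear anywhere in the formula, which is the point being made: it changes how fast the capacitances charge, not how much energy each charge cycle costs.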

silence_kit fucked around with this message at 02:50 on Feb 22, 2017

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

silence_kit posted:

The point I'm making is that the famous slide which compares computer chip power densities to the sun, rockets, nuclear bombs, etc. is like 15 years old now, and in the meantime there has been much investment into and improvement in the device/interconnect density in the state-of-the-art VLSI technologies. Circuit designers have been able to figure out how to take advantage of the increased device/interconnect density of computer chip technologies since then without requiring exotic and probably uneconomical/impractical cooling, so why would further improvements to density be any different? What am I missing here?

They're still incredibly limited by the total thermal envelope of the part. The fact that transistors got smaller and more efficient just means you can cram more of them per unit area before running into the same issue. A simple example is why the stupid-expensive 24-core Xeons don't clock faster: they run right up against their imposed 135 W TDP limit. If the chip had a 200 W power budget thanks to better cooling technologies, you'd see it clock ~30% faster.

It becomes even more challenging once you have stacked layers of chips. You could fit a metric asston of HBM on a die by stacking it up super high, but you pretty quickly run into massive issues dissipating heat out of the center of the stack, which limits the total power, and thus the speed, of the HBM stack. Being able to wick heat out faster than the monocrystalline silicon can conduct it will be key over the next 10 years to improving package TDP and interconnect density.

Hell, look at what heat pipes did for the entire CPU cooling industry. Before, you'd have a hugeass copper heatsink with some Delta 140 CFM fan that sounded like a model jet taking off, and it would still cook the chip under a 100 W load. Now you have heatpipe tower coolers that can handle 135 W silently, just because of how much better they are at pulling heat away from the chip and out to the extremities of the fins.

I'm personally looking at the technology that runs a vapor phase-change system through paths etched into the die itself, using the same phase-change goo that heat pipes use, possibly with a pump to encourage the liquid to flow into the chip. Then you can stack dies literally as high as you want and can afford to cool, and the total thickness of silicon the heat has to conduct through goes down substantially.
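On the "200 W budget buys ~30% more clock" point, the answer depends on how supply voltage has to move with frequency; here's a hedged sketch bracketing it (the exponents are textbook approximations, not measurements of any specific Xeon):

```python
# Clock headroom from raising the power budget from 135 W to 200 W, under
# two standard approximations for how dynamic power scales with frequency.
budget_ratio = 200.0 / 135.0          # ~1.48x more power to spend

# If Vdd must rise roughly linearly with f, then P ~ f * V^2 ~ f^3:
gain_scaled_v = budget_ratio ** (1 / 3)   # ~1.14 -> ~14% faster

# If Vdd can stay fixed, dynamic power scales roughly linearly with f:
gain_fixed_v = budget_ratio               # ~1.48 -> ~48% faster

print(f"{(gain_scaled_v - 1) * 100:.0f}% to {(gain_fixed_v - 1) * 100:.0f}% faster")
```

The quoted ~30% lands between the two limits, which is roughly where a real part (partly voltage-limited, partly not) would fall.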

Watermelon Daiquiri
Jul 10, 2010
I TRIED TO BAIT THE TXPOL THREAD WITH THE WORLD'S WORST POSSIBLE TAKE AND ALL I GOT WAS THIS STUPID AVATAR.

silence_kit posted:

OK, this makes sense. Thanks.


The extra series resistance in the ohmic contact, oddly enough, doesn't change the energy consumption per switching cycle, but it does slow down the switching speed. One quantity which is important to power consumption is the device capacitance, which tends to get smaller (and for power dissipation reasons, it kind of must) as you move to smaller and smaller process nodes, as evilweasel pointed out.

The 'dynamic dissipation' section of the following Wikipedia page kind of explains this: https://en.m.wikipedia.org/wiki/CMOS

Huh, really? Surely it must play some part, like the resistance the current sees as it flows during charge and discharge? Incidentally, this pdf I found compares pre-'03 Intel CPU power densities to each other and to various heat sources, including the surface of the sun. Since 95 W consumer Core dies are not much larger than 1 cm^2, if we use a naive interpretation of dissipated power over die area, even an overclocked processor probably isn't that much hotter than a nuclear reactor at most.
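The naive power-over-area comparison is easy to sanity-check: the sun's surface flux follows from the Stefan-Boltzmann law, and the chip figure is the 95 W over ~1 cm^2 assumed in this post:

```python
# Power per unit area: surface of the sun vs. a desktop CPU die.
# Sun flux from the Stefan-Boltzmann law; chip figures as assumed above.
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/(m^2 K^4)
T_SUN = 5778.0           # effective surface temperature of the sun, K

sun_flux = SIGMA * T_SUN ** 4 / 1e4    # W/cm^2; ~6300 (i.e. ~63 W/mm^2)
chip_flux = 95.0 / 1.0                 # 95 W over ~1 cm^2 of die

print(f"sun: {sun_flux:.0f} W/cm^2, chip: {chip_flux:.0f} W/cm^2")
```

So the ~63 W/mm^2 figure quoted earlier in the thread checks out, and a 95 W die at ~95 W/cm^2 is well below the sun's surface flux but in the same league as the reactor-wall numbers people quote.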

KingEup
Nov 18, 2004
I am a REAL ADDICT
(to threadshitting)


Please ask me for my google inspired wisdom on shit I know nothing about. Actually, you don't even have to ask.

crazypenguin posted:

Oh neat. I didn't know this was possible, let alone at only a $500 premium.

Yeah, it's definitely the way to go. I'm so sick of trying to build a quiet PC that $500 is a bargain.


silence_kit
Jul 14, 2011

by the sex ghost

Watermelon Daiquiri posted:

Huh, really? Surely it must play some part, like the resistance current sees when it flows to and during discharge?

While it increases the resistance the current sees when it charges up the capacitances of the wires & transistors in the circuit, the increased resistance at the same time lowers the current level during the charging event. In the end, it doesn't really change the energy dissipated per switching cycle, although it makes the switching cycle take longer.

It is kind of odd and maybe counter-intuitive. Here is a physics webpage which goes over the problem of charging up a capacitor through a resistor, which actually is a pretty good model of a sub-block in the logical switching circuits in computer chips: http://hyperphysics.phy-astr.gsu.edu/hbase/electric/capeng2.html
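You can check the counter-intuitive claim numerically: integrate the i^2 * R loss over the RC charging transient and the answer comes out to 1/2 * C * V^2 no matter what R is (a quick sketch with made-up component values):

```python
import math

# Energy burned in the resistor while charging C to V through R.
# Analytically this is always C*V^2/2, independent of R; verify numerically.
def resistor_loss(r_ohms, c_farads, v_volts, steps=200_000):
    tau = r_ohms * c_farads
    dt = 12 * tau / steps          # ~12 time constants covers the transient
    energy = 0.0
    for k in range(steps):
        i = (v_volts / r_ohms) * math.exp(-k * dt / tau)  # charging current
        energy += i * i * r_ohms * dt                     # i^2 * R * dt
    return energy

c, v = 1e-9, 1.0                   # 1 nF charged to 1 V -> expect 5e-10 J
for r in (10.0, 1_000.0, 100_000.0):
    print(r, resistor_loss(r, c, v))   # all ~5e-10 J; only the time changes
```

Bigger R means less current for more time, smaller R means more current for less time, and the dissipated energy washes out to the same value either way.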

Methylethylaldehyde posted:

They're still incredibly limited by the total thermal envelope of the part. The fact that transistors got smaller and more efficient just means you can cram more of them per unit area before running into the same issue. A simple example is why the stupid-expensive 24-core Xeons don't clock faster: they run right up against their imposed 135 W TDP limit. If the chip had a 200 W power budget thanks to better cooling technologies, you'd see it clock ~30% faster.

It becomes even more challenging once you have stacked layers of chips. You could fit a metric asston of HBM on a die by stacking it up super high, but you pretty quickly run into massive issues dissipating heat out of the center of the stack, which limits the total power, and thus the speed, of the HBM stack. Being able to wick heat out faster than the monocrystalline silicon can conduct it will be key over the next 10 years to improving package TDP and interconnect density.

Hell, look at what heat pipes did for the entire CPU cooling industry. Before, you'd have a hugeass copper heatsink with some Delta 140 CFM fan that sounded like a model jet taking off, and it would still cook the chip under a 100 W load. Now you have heatpipe tower coolers that can handle 135 W silently, just because of how much better they are at pulling heat away from the chip and out to the extremities of the fins.

I'm personally looking at the technology that runs a vapor phase-change system through paths etched into the die itself, using the same phase-change goo that heat pipes use, possibly with a pump to encourage the liquid to flow into the chip. Then you can stack dies literally as high as you want and can afford to cool, and the total thickness of silicon the heat has to conduct through goes down substantially.

Maybe in some applications it will be acceptable, but people have long been accustomed to computer chips being everywhere and requiring little maintenance or upkeep. I don't know if, in a lot of applications, people are going to want to deal with the hassle of plumbing.

Edit: hey, wait a second, I talked about an old demonstration of that idea (in-chip water cooling) in the AMD thread, and you told me that it was impractical!

silence_kit posted:

I think this is an old idea. I found a paper from 1981 where, using the same/similar micro-fabrication technology that they use to make the transistors and wires on the chips, the authors of the paper etched numerous micro-fins on the back of a chip and ran cold water over them to cool the chip. They were able to achieve a 71 degrees C temperature rise at 781 W/cm^2 chip power density.

I'm not really a computer chip cooling enthusiast, so I'm not sure if that 71 C at 781 W/cm^2 number is impressive or not. It's obviously not that practical.

Methylethylaldehyde posted:

A current-gen Xeon is about 6.2 cm^2, so in theory that would be a single chunk of silicon with about 4.8 kW of power flowing through it. The interior wall of a nuclear reactor vessel is about 240-ish W/cm^2, and a rocket nozzle would be about 850-900 W/cm^2.

You'd need some really complex and novel form of thermal management to make sure the microfluidics are behaving properly, and that pump flow and head pressure are maintained. It would suck to have a small clog block off part of the chip and cook it to death before enough heat conducts sideways for a thermistor to notice the chip has caught fire.

silence_kit fucked around with this message at 04:14 on Feb 22, 2017

  • Reply