silence_kit
Jul 14, 2011

by the sex ghost

SuperDucky posted:

We'll still build Willamette P4 boards for someone that wants to order a min. quantity. Everything else we had to EOL because we just can't get some of the ICs necessary to build 'em.

I have a story which tops yours. I have been told that point-contact diodes (the oldest semiconductor devices, dating back to the 19th century and consisting of a wire pressed against a semiconductor crystal) are still produced in small quantities for use in ancient government electronic systems.


silence_kit
Jul 14, 2011

by the sex ghost

SwissCM posted:

It's more of a bummer that they're selling what are likely deliberately gimped cores for market segmentation purposes, though that's capitalism for you. Zen can't come soon enough.

That's what all computer chip companies do, though. It is way more expensive to design and manufacture separate chip designs than it is to design one chip, disable different amounts of its functionality after manufacture, and sell the disabled chips.

silence_kit
Jul 14, 2011

by the sex ghost
They might. It could make a lot of sense to relabel a bunch of potential i5s sitting in storage as Pentiums and sell those, versus starting another production run, and paying that cost.

In order to never do what I'm suggesting, Intel would have to perfectly predict demand for each of its product lines, have a perfect understanding of its manufacturing process, and tune its designs to match that demand. I don't think they have that understanding. It is way easier to disable i5s and turn them into Pentiums.

Intel isn't the only company doing this. Other computer chip companies probably more heavily rely on this, since they don't do the manufacturing themselves and probably aren't able to predict yields as well.

silence_kit fucked around with this message at 22:04 on Nov 14, 2016

silence_kit
Jul 14, 2011

by the sex ghost

DuckConference posted:

The 7nm process generation at TSMC, Samsung, and GlobalFoundries will all be based on standard FinFET silicon techniques, with the main differences being scheduling, the exact PPA (power, performance, area), and whether and how they plan to bring in EUV lithography to partially replace standard 193nm immersion lithography. TSMC isn't planning on any EUV for their 7nm node, while the other two are. However, TSMC's 7nm generation process is scheduled to hit production a lot sooner than the other two's, probably in time for a 2018 Apple product.

I don't know if Intel has given many details of their 7nm process (which probably won't come until 2020 or 2021 at their current pace) but I don't think anyone is expecting any exotic silicon replacements. Gate-all-around transistors, III-V transistors, quantum-well FETs, and some other things are vaguely on future roadmaps, but I haven't heard of any use of graphene happening on the next few nodes.

I honestly would be shocked if they were to ever change the channel material. However, if they were to do it, they'd probably use germanium for the p-type transistors and indium gallium arsenide for the n-type transistors. Graphene is really not a great transistor material for switching circuits in computer chips.

Watermelon Daiquiri posted:

Heh, I have to wonder if there is some industrial espionage going on that allows them to know what dimensions everyone else is shooting for. Hell, the only ways we have to know are claimed transistor density (which relies on them being truthful) or cutting the drat things open (which is really only done by Chipworks afaik, since that takes a lot of money).

Not true: the integrated circuit foundries do have the ability to do surgery and autopsies on chips. It's how they figure out how to improve their manufacturing processes. Since they have this capability, I'm sure they reverse engineer each other's products.

However, a lot of the art is not in what materials/dimensions they use, it is the exact details of the construction, which is harder to gather from this kind of analysis. It's like cutting open a pastry--you can kind of figure out the ingredients inside, but it's hard to reverse-engineer the exact recipe the chef used to make it.

silence_kit fucked around with this message at 02:52 on Nov 21, 2016

silence_kit
Jul 14, 2011

by the sex ghost

Eletriarnation posted:

Yes. If you desperately want ECC you should either buy a socket AM1 board (I think the ASUS ones support it at least), get a cheap prebuilt server, or get ready to pay through the nose if you really need a self-built Intel system. It's kind of nonsensical because from what I understand all of that is taking place in the memory controller and the board basically just needs an option to tell the processor to turn it on, but I guess it's a way they see to make more money and that's what matters.

LOL, everyone here is constantly bitching about how the new computer chips they desperately want to buy aren't as fast as they'd like, but then they simultaneously bitch about the price of high-end products not targeted towards consumers.

Computer chip companies would go bankrupt, and would not be able to afford the development of new and better products and technologies, if they only charged marginal production cost + 30% profit for their products the way posters in this thread demand. Computer chip development and production set-up costs are insanely expensive, and the high Xeon prices help bankroll that development.

silence_kit fucked around with this message at 18:52 on Dec 10, 2016

silence_kit
Jul 14, 2011

by the sex ghost

It's totally true, though. If Intel had set the price of all their computer chips at marginal production cost x 1.3 (maybe something like $1-$20), they wouldn't have been able to bankroll the development of new chips and new chip technology, and then the posters in this thread would be throwing even bigger tantrums than they already do about how computer chips should go faster than they currently do.

silence_kit fucked around with this message at 00:32 on Dec 11, 2016

silence_kit
Jul 14, 2011

by the sex ghost

Anime Schoolgirl posted:

ah yes, consumer desktop chips, known for their immense sales volume of $(rear end pennies), not like the multi-billion dollar datacenter xeon market that's completely and utterly insignificant, surely it is worth making the former pay 200 dollars for the privilege of not having ram errors

Their sales volume is still pretty high--I think most of Intel's revenue may still come from their non-server products. It's obviously not as profitable as server chips, and since PC sales are declining, that will probably change in the future.

Whining about how computer chip companies disable functionality in their chips is like complaining about how software companies charge money for their products when 'bits are free man'.

silence_kit
Jul 14, 2011

by the sex ghost

weak wrists big dick posted:

Didn't intel pull some poo poo a while back with a certain line of processor (Atom or Pentium, I think) where if you bought a scratch off card it would allow the processor to use another megabyte of cache and enable hyperthreading that was already there in the first place?

This is how all computer chips have worked for some time now. The set-up cost for production of computer chips is incredibly high, so to get around this, computer chip companies manufacture only one type of chip, and disable different amounts of its functionality to create the different product categories.

If this bothers you, I hope you are also railing against software companies for daring to charge money for their software when they also let you download the free, limited functionality version from their website. They could be giving you the full featured version at no additional cost.

silence_kit
Jul 14, 2011

by the sex ghost

Fame Douglas posted:

At least in theory, chips vary in production quality enough for binning to make sense. Selling scratch-off cards to unlock processor functionality just seems scammy, even if it isn't functionally different from regular processor branding.

I'm not in the business, so I can't say for sure, but I strongly suspect they disable what would be perfectly good i7s to create Pentiums.

The alternative would be to either 1) have perfect knowledge of your manufacturing process and perfect prediction of product sales, and tune the manufacturing process so that the spectrum of differently capable chips meshed with the sales breakdown across product categories, or 2) wildly overproduce chips and leave tons of inventory on the shelves. It makes more sense, to me at least, for Intel to convert i7s into Pentiums instead of 2), and I don't think they have the capability to fully do 1).
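
Not in the business either, but the bookkeeping can be sketched in a few lines. This toy model (all numbers invented) fills demand for each product tier from its own yield bin first, then fuses down surplus from better bins--exactly the slack you lose if you attempt alternative 1) and guess the yield spread wrong:

```python
# Toy model (numbers invented): a fab run yields a spread of grades,
# but demand doesn't match the yield spread. Fusing surplus high-grade
# dies down is cheaper than starting another production run.

def allocate(yielded, demand):
    """Fill demand per grade, fusing surplus higher grades down.
    Grades are ordered best-first; a die can be sold at its own
    grade or any grade below it."""
    grades = list(demand)               # e.g. ["i7", "i5", "i3", "Pentium"]
    stock = dict(yielded)
    sold = {g: 0 for g in grades}
    for i, g in enumerate(grades):
        # take from this grade first, then walk back up to better grades
        for src in grades[i::-1]:
            take = min(stock[src], demand[g] - sold[g])
            stock[src] -= take
            sold[g] += take
    return sold

yielded = {"i7": 500, "i5": 300, "i3": 150, "Pentium": 50}
demand  = {"i7": 300, "i5": 300, "i3": 200, "Pentium": 200}
print(allocate(yielded, demand))
```

With these made-up numbers, surplus i7s cover the i3 and Pentium shortfalls and every tier's demand is met without a second production run.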

silence_kit
Jul 14, 2011

by the sex ghost
I have only a rudimentary knowledge of computer architecture. What is the important distinction between a core and an execution unit? Also, what exactly is Intel's Hyper-Threading technology? I thought it was some kind of abstraction which provided a hardware notion of a thread and allowed for faster context switching, but I might be wrong there.


There's already an abbreviation for this: e.g.

silence_kit
Jul 14, 2011

by the sex ghost

ehnus posted:

An execution unit is a block of silicon that does the processing. Typically these include arithmetic units, floating point units, and load/store units. A "core" is a combination of some number of execution units, register files, and cache. Register files are where the intermediate computation forms are stored, sort of like memory but filled with the data used by the execution units.

Thanks, that helps a lot.

ehnus posted:

Hyperthreaded CPUs have separate register files per-thread but share execution units. Benefits to hyper threading come in situations where the execution units would normally lie in wait. For example, if the code is waiting on data to make it into the registers from the cache, or into the cache from other tiers of cache (or main memory), they just sit there twiddling their thumbs until they can work again. If you can have another computation queued up in another thread you can keep the execution unit packed.

OK, so my earlier conception of hyper-threading was very wrong. Hyper-threading presents multiple independent instruction streams (I might be using that term incorrectly) to the programmer, but under the hood these instruction streams share computation circuits, or 'execution units', and under certain workloads will have to wait their turn to use, for example, the integer addition circuit.
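
That picture can be sketched as a toy cycle-by-cycle simulation: one shared execution unit, thread 0 gets priority when both are ready, and a made-up stall penalty for 'mem' ops. None of this resembles a real pipeline; it only shows one stream's stalls being hidden by the other's work:

```python
# Toy model of why SMT helps: one shared execution unit, multiple
# instruction streams. A "mem" op stalls the issuing thread for a few
# cycles; with a second thread available, the unit stays busier.
# All latencies are invented for illustration.

MEM_STALL = 3  # extra cycles before a thread that issued "mem" can issue again

def run(streams):
    """Return the cycles needed to drain all streams through one shared unit."""
    pcs = [0] * len(streams)        # next instruction index per thread
    ready_at = [0] * len(streams)   # earliest cycle each thread may issue
    cycle = 0
    while any(pc < len(s) for pc, s in zip(pcs, streams)):
        for t, s in enumerate(streams):     # lowest-numbered ready thread wins
            if pcs[t] < len(s) and ready_at[t] <= cycle:
                op = s[pcs[t]]
                pcs[t] += 1
                if op == "mem":
                    ready_at[t] = cycle + 1 + MEM_STALL
                break                       # only one op issues per cycle
        cycle += 1
    return cycle

prog = ["alu", "mem", "alu", "mem", "alu"]
print(run([prog]))        # one thread: the unit idles during memory stalls
print(run([prog, prog]))  # two threads: each thread's stalls hide the other's work
```

With this toy workload, one thread takes 11 cycles for 5 ops (the unit idles during stalls), while two threads take only 13 cycles for 10 ops.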

silence_kit
Jul 14, 2011

by the sex ghost

HalloKitty posted:

Not surprised by it at all, but it doesn't stop it being a cynical marketing tactic. It's not like cutting off parts of a system behind a licence excites the engineers that design those machines.

Again, if you have a problem with how the computer chip industry creates product lines by manufacturing one product and disabling varying amounts of functionality, then you must have major issues with the software industry. 'Why is my free/lower cost software copy artificially devoid of features? The additional marginal production cost to let me download/authenticate the program with all the features is zero!'

silence_kit
Jul 14, 2011

by the sex ghost
What is the difference? I honestly don't see it.

silence_kit
Jul 14, 2011

by the sex ghost

PerrineClostermann posted:

Hardware is a physical object. Software is numbers and configuration.

Why is this distinction important?

Why is it okay in your mind for software companies to disable functionality which could be added for free to their lower trim levels when it is not ok for computer chip companies to do so?

silence_kit fucked around with this message at 18:38 on Dec 20, 2016

silence_kit
Jul 14, 2011

by the sex ghost
Designing a computer chip is mostly 'numbers and configuration'. The per-part raw material cost, and even the per-part production cost, is shockingly low once you amortize the production set-up cost over many units. I don't know the exact number, but Intel is charging you $200 for something that costs them maybe $1-10 to produce.
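
To make the amortization point concrete, here's the arithmetic with invented numbers (the real fixed and marginal costs aren't public):

```python
# Illustrative numbers only -- the point is the shape of the curve, not
# Intel's actual accounting. Per-unit cost is dominated by the fixed
# design + mask-set cost until volume gets very large.

def cost_per_chip(fixed_cost, marginal_cost, units):
    return fixed_cost / units + marginal_cost

FIXED = 500e6      # assumed design + production set-up cost, dollars
MARGINAL = 10.0    # assumed per-die production cost, dollars

for units in (1e5, 1e6, 1e7, 1e8):
    print(f"{int(units):>11,} units -> ${cost_per_chip(FIXED, MARGINAL, units):,.2f}/chip")
```

The per-chip cost only collapses toward the marginal cost at enormous volume, which is why one mask set serving many disabled-down product tiers is so attractive.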

silence_kit
Jul 14, 2011

by the sex ghost

Watermelon Daiquiri posted:

But seriously, I can guarantee that the raw materials going into each wafer are only on the order of like a grand, and with probably 350 good Kaby Lake dies per wafer you can see how cheap it is that way. However, given that the ArF and KrF litho tools are loving expensive (1-2B alone for a fab), and that's not even mentioning the rest, $3-400 for a good enough die on a piece of precision-made fiberglass is an OK deal. As the E/X stuff is 2.5 times larger, that increases the costs a fair amount, as it takes just as much time, money, and effort to move a lot of 140mm2 dies as it does 350mm2 ones, and that's not even taking into account the fact that an increased die size increases the odds of a random die being defective (like what, 6 times as probable? or am I messing up my math?).

No way--material cost per wafer has to be O($100). I happen to know that plain 100mm electronics-grade silicon wafers in small volumes are ~$20 each (which is actually incredible, by the way: electronics-grade silicon may be the purest material known to man, and yet it is that cheap). Intel, although they are buying much bigger wafers, can probably get a much better price per area than I could.

The other materials on the wafer, used to make the transistors and the wires, are deposited as very thin films in such small quantities that I'd be shocked if they greatly increased the cost over the wafer cost, even though they are now often oddball rare elements. The package material cost is probably greater than the die material cost.

You are right to point out that the cost of manufacturing the wires, transistors, etc. is probably way more than the material cost. This article estimates that the loaded production cost (I'm not sure exactly what that means; I assume it has the capital cost of the lithography and other manufacturing equipment baked in) is $5k per wafer for the 14nm node.
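
Taking the thread's rough numbers at face value (the article's ~$5k loaded wafer cost and the ~350 good dies per wafer mentioned upthread), the per-die arithmetic looks like this:

```python
import math

WAFER_COST = 5000.0   # loaded production cost per 14nm wafer (article's estimate)
GOOD_DIES = 350       # good ~140 mm^2 dies per wafer (rough figure from upthread)

cost_per_die = WAFER_COST / GOOD_DIES
print(f"~${cost_per_die:.2f} per good die")

# Sanity check on the die count: pure area bound for 140 mm^2 dies on a
# 300mm wafer, before edge loss and yield are subtracted.
wafer_area = math.pi * (300 / 2) ** 2   # ~70,686 mm^2
print(f"{wafer_area / 140:.0f} dies by area alone")
```

So even with the equipment cost loaded in, a good die costs an order of magnitude less than its selling price--that gap is what the development costs have to come out of.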

Watermelon Daiquiri posted:

Intel is kinda hurting too, so they are looking for ways of filling the coffers and funding the exponentially more expensive nodes. Honestly, given the brick wall they are desperately speeding towards (which, given their anemic foundry poo poo, means it'll hurt worse than TSMC or Samsung or whoever), can you really blame them for trying to milk things for as long as they can?

Let's not get too silly here: Intel is reaping the full benefits of having a monopoly on server and PC chips. PC sales are dropping and the development cost of new chip technology is greatly increasing, but that doesn't mean you have to feel bad for the company.

silence_kit
Jul 14, 2011

by the sex ghost

Watermelon Daiquiri posted:

If they truly are going to ditch silicon for <10nm stuff, hooo boy will xeons cost a leg. You think those cheap silicon wafers are expensive...

If they do it, they probably aren't going to switch away from the silicon wafer. They'll deposit/grow the new transistor channel materials on top of the silicon wafer.

silence_kit fucked around with this message at 05:43 on Dec 21, 2016

silence_kit
Jul 14, 2011

by the sex ghost

Watermelon Daiquiri posted:

Oh duh. Yeah, they already use sige in there. :downs:

SiGe is in the source and drain of at least the p-type transistors in the latest integrated circuit manufacturing processes. It isn't the channel material.

It is also in the base of bipolar transistors in certain, more specialty manufacturing processes, but that isn't really a technology for computer chips.

Even if you aren't using the silicon in the transistor, they already know how to make the silicon wafers very flat, very pure, and at low cost. Silicon also has a much better thermal conductivity than other low cost non-silicon substrates like sapphire or quartz.

I would be shocked if they actually changed the transistor channel material, but I'd be even more shocked if they changed the substrate material.

silence_kit
Jul 14, 2011

by the sex ghost

Harik posted:

100mm wafers might as well be free. The problem is you get a compounding cost increase with every step in size. As of 2014, 300mm were running $400 and 450s were looking at $6-800. It's hard to make a perfect 300 or 450mm ingot, and they're hard to cut perfectly, which means thicker slices and more post-processing to polish down perfectly. The cost per square inch jumps up about 50% per step, with a big spike when a size is new that settles down to the 1.5^generation curve.

That's just the silicon. I'd consider all the chemical processes involved in production to be material costs as well, since there's some amount of loss on each step.

I just priced 300mm plain Si wafers on a website which sells wafers for electronics to scientists and it is $80 per wafer for a box of 25. Where are you getting the $400 number?

Maybe your $400 number includes the epitaxy cost. That sounds high to me though. People have told me that epitaxy is expensive, and I understand why it is expensive if you order something custom as a one-off, but no one has ever explained to me why it has to be expensive in volume.

silence_kit fucked around with this message at 03:56 on Dec 23, 2016

silence_kit
Jul 14, 2011

by the sex ghost

Watermelon Daiquiri posted:

Material costs depend on the product node, and the bargaining skills of the company.

Yeah, if anything, I'd think that they'd be able to get a better price than me buying a box of wafers from some online store.

Watermelon Daiquiri posted:

However, of the raw materials the Si makes up over half.

I don't think the wafer is that expensive--but in that number, are you including the silane for the various deposition steps, most of which doesn't get incorporated into the wafer and goes straight into the exhaust of the CVD equipment?

Watermelon Daiquiri posted:

Also, those wafer prices have no bearing on industry costs. I've seen the wafers and work done in uni clean rooms and there's really no comparison. The reason those are so cheap is most likely because they aren't pure, relatively speaking. Remember, with sub-20nm real gate lengths and low metal layer pitches coming over the next few years, a single impurity atom can gently caress an entire die. The wafers places like Intel, TSMC, Samsung, etc. use by necessity need to be way, way upwards of just 99.9% pure silicon, unless they need something predoped. Even then, it would be way, way upwards of just 99.9% pure Si and As or B or P or whatever they use.

No, the wafers I priced were electronics grade, and were pretty high resistivity. University researchers obviously can't afford, and don't need, great manufacturing uniformity for the science experiments they do, but if they are making silicon transistor devices, they do need high-purity silicon.

New Zealand can eat me posted:

I would also like to know the answer to this!

My best guess is that it's a time issue, the equipment seems really expensive so having to do a large volume means a long queue and space being taken up while materials wait around?

Almost all mass-production manufacturing equipment is expensive to buy and expensive to run, but if the throughput is high enough, the purchase and operating costs per manufactured unit can be very low. This strategy only works if you can manufacture and sell many units. Computer chips are probably the most popular type of integrated circuit, and they are produced and sold in large quantities.

In the CVD epitaxy reactors, they are batch processing many wafers at a time. It's just a question of what the process time is and whether it actually is a problem.

silence_kit fucked around with this message at 17:43 on Dec 23, 2016

silence_kit
Jul 14, 2011

by the sex ghost

DuckConference posted:

An interesting talk from intel looking at moore's law going into the future: https://player.vimeo.com/video/164169553

Huh, I had never thought about the tack the speaker took. Well, it was not his own tack--he said it was recognized at the very beginning of the integrated circuit industry by Gordon Moore. The R&D into smaller transistors, wires, etc. still makes sense purely on a cost-per-function basis, if for no other reason, and he claimed that Intel still spends less on manufacturing R&D than it makes back by reducing the chip footprint per function.

I suspect his claim that Intel can fit the same functionality in half the die area, and thus halve the cost per function every process node, is a little misleading, though. I am not a VLSI designer, so I may be all wet here, but I thought there were a lot of functions on a computer chip which do not benefit much from scaling, like longer-distance communication across the chip and storage (SRAM). And there's the issue that, because of the heat dissipation constraint, you are obligated to design your circuits with a lower activity factor if you want higher density at the same speed. I suspect he may not be accounting for those factors in his cost/function plot, and is presenting it as if the entire computer chip were a single adder circuit, free of the real constraints of a real computer chip.

silence_kit
Jul 14, 2011

by the sex ghost

lDDQD posted:

A smaller transistor is basically all-around better. It has lower parasitic capacitance, and thus can switch on and off faster. You get lower threshold voltage, which in turn, allows you to have smaller current flowing through the transistor that is in the 'on' state; consuming less power.

No, the great thing about a lower threshold voltage is that you don't need as big a voltage to switch the transistor on. Lowering that switching voltage is great for power: power dissipation from charging the wires and transistors in the digital switching circuits on computer chips is proportional to the voltage squared. As you hinted later in your post, the problem with lower threshold voltages is that, in the presence of transistor threshold voltage variation (a problem the finFET helps address) and a fundamental limit on the voltage sensitivity of a transistor, a low threshold voltage device can also mean higher off-state leakage current.

High on-current density (in mA/um of gate width) is actually a good thing in a transistor. It is an important figure of merit for transistors in computer chips and is proportional to transistor switching speed. All other things being equal, a transistor in a complementary switching circuit with a low on-current density dissipates the same energy per switching operation as one with a high on-current density; the higher-current transistor just completes the switch in less time. And we all know in the hobbyist computer thread that it is better to be fast than slow.
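
To put the 'same energy, just slower' point in numbers, here's a crude constant-current charging model (component values invented; a real transistor isn't a constant-current source, but the scaling is the point):

```python
C = 1e-15   # load capacitance, farads (invented: 1 fF)
V = 0.8     # supply voltage, volts (invented)

def switch_stats(i_on):
    """Time to charge the load to V at on-current i_on, and the energy
    drawn from the supply -- which does not depend on i_on at all."""
    t_switch = C * V / i_on
    e_supply = C * V ** 2
    return t_switch, e_supply

for i_on in (0.1e-3, 1.0e-3):   # on-current, amps
    t, e = switch_stats(i_on)
    print(f"I_on = {i_on * 1e3:.1f} mA: t = {t * 1e12:.1f} ps, E = {e * 1e15:.2f} fJ")
```

Ten times the on-current buys a ten-times-faster switch at identical energy per operation, which is exactly why current density is the figure of merit.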

lDDQD posted:

There are all sorts of problems cropping up with wires on the chips now, too - so you're right about that. They're actually experimenting with using silver wires just to get higher conductivity, even though dealing with silver is a colossal pain in the rear end; you need diffusion barriers so it doesn't start migrating into the silicon.

I didn't know they were seriously considering silver. I thought the issue with silver was the same one everyone's grandmother with a fine dining set knows--silver isn't that stable in air and tends to tarnish easily.

lDDQD posted:

There are also problems with high density memory, but overall, you can definitely fit more SRAM into the same amount of die area, as the transistors become smaller. These problems weren't really a huge concern until very recently, though. DRAM has got it way worse, too.... at some point it might actually turn out to be the case that you can continue making smaller and faster SRAM, but DRAM hits a brick wall, where you can't feasibly make it any faster or smaller. Hopefully by then, we'll ditch it entirely, it was always a source of endless annoyance anyway. It's not even getting significantly quicker; the latency maybe got cut in half since DDR1.

Again, I'm not in the industry, but I thought I read somewhere that the brand-new, aggressively scaled transistors tend not to be used in SRAM due to problems with transistor variation. Maybe I am mixing up the transistors in the SRAM on computer chips with the transistors in flash memory chips.

silence_kit fucked around with this message at 02:19 on Jan 9, 2017

silence_kit
Jul 14, 2011

by the sex ghost

BobHoward posted:

e: After rereading I realized I should note that the things they're doing to build 3D NAND aren't easily applied to logic, and NAND only gets away with it because there is no active power use while a memory cell is idle. There's been exploration of building 3D logic before, but power density is a real killer.

I don't really get how 3D NAND flash works, or really how any flash memory works, but from what I can tell from Googling cartoons of various 3D NAND structures, the channel material is a vertical cylinder made of deposited polysilicon, not the normal mono-crystalline silicon that logic transistors are made of. If that is true, then yeah, that wouldn't port to logic--I'm sure the switching speeds of the polysilicon transistors in 3D flash memory are way too slow for logic. Apparently they are acceptable for non-volatile memory.

silence_kit fucked around with this message at 02:24 on Jan 9, 2017

silence_kit
Jul 14, 2011

by the sex ghost

Watermelon Daiquiri posted:

Also, I've been wondering if it could be feasible to use a topology similar to vnand for logic (or better yet sram and edram caches). I imagine if it were feasible someone would already be looking into it

I think the vertical cylindrical channel transistors used in vNAND flash are polysilicon-channel transistors, which have lower electron and hole mobilities and higher interface trap densities than the normal silicon transistors used for logic. Those worse material properties mean it will be harder to run the polysilicon channel transistors at the high speeds and small supply voltages (low power) that people designing VLSI logic circuits in the latest technologies are accustomed to.

It is technically harder to do, but if you could make the little cylindrical pillars out of the more normal channel material, mono-crystalline silicon, then these new tubular transistors could be run at the same speeds and voltages as normal transistors, at increased density. Lots of people have worked on this idea, and probably still are.

Here's an article I quickly Googled, reporting on a conference paper by researchers at IMEC (a European VLSI technology research lab) who were working on basically the same thing you're talking about: http://www.analog-eetimes.com/news/imec-reports-nanowire-fet-vertical-sram I think this idea is pretty old, though--I bet if you did enough Googling (key words: vertical nanowire, gate-all-around), you could find more on it.

Watermelon Daiquiri posted:

Yeah, this would be something that'd have to rely on whatever that phrase is that means bigger demand leads to lower prices (like how there are so many tvs made that the crystal used for the color carrier is so cheap people design circuits around them (Nintendo, Intel, Apple, e.g))

What are you talking about here? I've never heard of this. A lot of people worried about the material cost of the indium in the indium tin oxide (ITO) transparent electrodes used in display technology, but those worries ended up being unfounded, and we now enjoy really, really cheap displays.

silence_kit fucked around with this message at 02:34 on Feb 21, 2017

silence_kit
Jul 14, 2011

by the sex ghost

BobHoward posted:

It is doubtful anyone is seriously working on this for logic. 3D SRAM may make sense.

The problem with 3D logic is that planar logic in high speed chips already generates more watts per mm^2 than the surface of the sun. (That is not hyperbole btw.) The only way to scale this figure up is to go to much more costly cooling systems. You see that today in the liquid cooling systems overclockers love, but that stuff doesn't sell outside of "enthusiasts". Chip makers are going to be designing mass market products around the limitations of air cooling for the foreseeable future.

Which basically implies that 3D logic is a no-go. You are immediately doubling power density, and the layer further away from the cooler gets extra hot. Add a third layer and things get even worse.

If you try to solve these problems by backing off on voltage and therefore frequency, well, the whole point of the exercise was to make a fast chip, right? Sacrificing a lot of performance to gain density isn't a great tradeoff, especially in light of the fact that this is likely to be quite expensive to build compared to planar logic.

I get your argument here, and I too seriously doubt that improved cooling will become economical. But let's take a time warp back 10-15 years: couldn't you have made a similar argument then against the development of denser VLSI technology, the same one you are making now? What is the difference between now and then?

BobHoward posted:

I think they were talking about 14.31818 MHz, which was a clock frequency needed by NTSC TV sets. Since quartz oscillators and crystals cut for that frequency were so common, they were very cheap. Lots of designs with no need to be NTSC compatible used that frequency (or ran it through a simple divider to generate a slower frequency) just because it was lots cheaper than picking anything else.

Oh, I see.

silence_kit fucked around with this message at 18:09 on Feb 21, 2017

silence_kit
Jul 14, 2011

by the sex ghost

Methylethylaldehyde posted:

The sun is ~63W/mm^2. A modern high power chip like AMD's bulldozer 135W boondoggle has more heat per unit area than a turbine fan blade or a nuclear reactor core.

The point I'm making is that the famous slide comparing computer chip power densities to the sun, rockets, nuclear bombs, etc. is like 15 years old now, and in the meantime there has been much investment in, and improvement of, the device/interconnect density of state-of-the-art VLSI technologies. Circuit designers have figured out how to take advantage of that increased density without requiring exotic and probably uneconomical/impractical cooling, so why would further improvements to density be any different? What am I missing here?

silence_kit
Jul 14, 2011

by the sex ghost

evilweasel posted:

If your chip is a flat plane, you're pulling heat out from every part of the chip with the heatsink. If you have multiple chip layers, you have (at best) chip layers where on one side the heatsink is replaced by something creating just as much heat as that layer is. If you have three layers, you have a layer where there is no heatsink at all.

In the case of these hypothetical 'tubular transistors', the layers the transistors occupy are pretty thin, and I'd be shocked if the additional thermal resistance amounted to much. I don't see how putting two transistors on a tube is much different from just doubling the device density of normal transistors. Maybe it is a little worse.
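To put a rough number on it, here's a back-of-the-envelope 1D conduction estimate in Python. The 100 W chip, 1 cm^2 die, 1 micron layer, and silicon conductivity are illustrative assumptions on my part, not figures for any real process:

```python
# Crude 1D Fourier-law estimate of the temperature drop across one extra
# thin device layer. All values are illustrative assumptions.

K_SILICON = 150.0  # W/(m*K), approximate bulk silicon thermal conductivity

def temp_rise_across_layer(power_w, area_m2, thickness_m, k=K_SILICON):
    """Temperature drop across a uniform layer conducting power_w
    through an area area_m2 of material thickness_m thick."""
    thermal_resistance = thickness_m / (k * area_m2)  # K/W
    return power_w * thermal_resistance

# 100 W chip, 1 cm^2 die, extra ~1 micron device layer:
dT = temp_rise_across_layer(100.0, 1e-4, 1e-6)
print(f"extra temperature rise: {dT * 1000:.1f} mK")  # ~6.7 mK
```

Even if the real number were an order of magnitude worse because the layer isn't pure silicon, it's still milli-Kelvins, which is why I'd be surprised if the extra layer's thermal resistance mattered much by itself.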

silence_kit fucked around with this message at 23:59 on Feb 21, 2017

silence_kit
Jul 14, 2011

by the sex ghost

evilweasel posted:

Smaller transistors use less power so when you double the density of transistors by shrinking them, you're reducing the heat generated per transistor. When you stack them on top of each other to double the density, you're not.

OK, this makes sense. Thanks.

Watermelon Daiquiri posted:

You are assuming that the resistances will be the same or similar. A good portion of the resistance seen in a device is at the contacts and the layers joining the contacts to the S/D, so if we reduce the number of contacts by having multiple gates affecting the same channel, we reduce the amount of joule heating from the contacts.

The extra series resistance in the ohmic contact, oddly enough, doesn't change the energy consumption per switching cycle, but it does slow down the switching speed. One quantity which is important to power consumption is the device capacitance, which tends to get smaller (and for thermal sinking reasons, it kind of must) as you move to smaller and smaller process nodes, as evilweasel pointed out.

The 'dynamic dissipation' section of the following Wikipedia page kind of explains this: https://en.m.wikipedia.org/wiki/CMOS
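For anyone who doesn't want to click through: the dynamic power is roughly activity factor x switched capacitance x V^2 x frequency, with no R appearing anywhere in it. A quick sketch, with all numbers made up for illustration:

```python
# CMOS dynamic power, P = a * C * V^2 * f. Note that the contact/series
# resistance doesn't appear: R sets how fast the capacitance charges,
# not how much energy each charge/discharge cycle costs.

def dynamic_power(activity, switched_cap_f, vdd_v, freq_hz):
    """Average dynamic power for a circuit where a fraction `activity`
    of the total capacitance switches each clock cycle."""
    return activity * switched_cap_f * vdd_v ** 2 * freq_hz

# Illustrative values: 100 nF total switched capacitance, 1.0 V supply,
# 3 GHz clock, 10% activity factor:
print(f"{dynamic_power(0.1, 100e-9, 1.0, 3e9):.0f} W")  # ~30 W
```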

silence_kit fucked around with this message at 02:50 on Feb 22, 2017

silence_kit
Jul 14, 2011

by the sex ghost

Watermelon Daiquiri posted:

Huh, really? Surely it must play some part, like the resistance current sees when it flows to and during discharge?

While the extra contact resistance does increase the resistance the current sees when it charges up the capacitances in the wires & transistors in the circuit, that same resistance also lowers the current levels during the charging event. In the end, it doesn't really change the energy dissipated per switching cycle, although it makes the switching cycle take longer.

It is kind of odd and maybe counter-intuitive. Here is a physics webpage which goes over the problem of charging up a capacitor through a resistor, which actually is a pretty good model of a sub-block in the logical switching circuits in computer chips: http://hyperphysics.phy-astr.gsu.edu/hbase/electric/capeng2.html
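You can check this numerically. The sketch below charges a capacitor through three wildly different resistors and integrates the Joule heating in the resistor; in every case the dissipated energy comes out to about C*V^2/2 (the component values are arbitrary, purely for illustration):

```python
# Charge a capacitor to V through a resistor R and tally the energy
# burned in R. The answer is ~C*V^2/2 regardless of R; R only changes
# how long the charging takes.

def energy_dissipated(r_ohms, c_farads, v_supply, steps=100_000):
    """Forward-Euler integration of i(t)^2 * R over ~20 RC time constants."""
    dt = 20.0 * r_ohms * c_farads / steps
    q = 0.0       # charge on the capacitor
    energy = 0.0  # heat dissipated in the resistor
    for _ in range(steps):
        i = (v_supply - q / c_farads) / r_ohms  # charging current
        energy += i * i * r_ohms * dt           # Joule heating in R
        q += i * dt
    return energy

c, v = 1e-9, 1.0
for r in (100.0, 1e4, 1e6):
    print(f"R = {r:>9.0f} ohm: {energy_dissipated(r, c, v):.4e} J")
# every R prints ~5.000e-10 J, i.e. about 0.5 * C * V^2
```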

Methylethylaldehyde posted:

They're still incredibly limited by the total thermal envelope of the part. The fact that they got smaller and more efficient just means you can cram more of them per unit area before running into the same issue. A simple example would be why the 24 core stupid expensive Xeons don't clock faster, and it's because they run right up against the 135w TDP limit imposed. If the chip had a 200W power budget due to better cooling technologies, you'd see it clock ~30% faster.

It becomes even more challenging once you have stacked layers of chips. You could fit a metric asston of HBM memory on die by stacking it up super high, but you pretty quickly run into massive issues with dissipating the heat out of the center of the stack, which limits the total power and thus speed of the HBM stack. Being able to wick the heat out faster than the molocrystaline silicon can conduct it will be key in the next 10 years to improving package TDP and interconnect density.

Hell, look at what heat pipes did for the entire CPU cooling industry, before you'd a hugeass copper heatsink with some Delta 140 CFM fan that sounded like a model jet taking off and cook the chip with a 100W load on it. Now you have the heatpipe tower coolers that can handle 135W silently, just due to how much better they're able to pull the heat away from the chip and towards the extremities of the fins.

I personally am looking at the technology that uses a vapor phase change system through etched paths on die, using the same phase changing goo the heatpipes use, possibly with a pump to encourage the liquid to flow into the chip. Then you can stack them literally as high as you want and can afford to cool, and the total thickness of silicon needed to conduct the heat through goes down substantially.

Maybe in some applications it will be acceptable, but people have by now become accustomed to computer chips being everywhere and requiring little maintenance and upkeep. I don't know that, in a lot of applications, people are going to want to deal with the hassle of plumbing.

Edit: hey, wait a second, I talked about an old demonstration of that idea (in-chip water cooling) in the AMD thread, and you told me that it was impractical!

silence_kit posted:

I think this is an old idea. I found a paper from 1981 where, using the same/similar micro-fabrication technology that they use to make the transistors and wires on the chips, the authors of the paper etched numerous micro-fins on the back of a chip and ran cold water over them to cool the chip. They were able to achieve a 71 degrees C temperature rise at 781 W/cm^2 chip power density.

I'm not really a computer chip cooling enthusiast, so I'm not sure if that 71 C at 781 W/cm^2 number is impressive or not. It's obviously not that practical.

Methylethylaldehyde posted:

A current gen Xeon is about 6.2 cm^2, so in theory that would be a single chunk of silicon with about 4.8Kw of power flowing through it. The interior wall of a nuclear reactor vessel is about 240ish, and a rocket nozzle would be about 850-900 W/cm^2.

You'd need some really complex and novel form of thermal management to make sure that the microfluidics are behaving properly, and that pump flow and head pressure are maintained. It would suck to have a small clog block off part of the chip and have it cook to death before the thermal conductivity is able to alert the thermistor that the chip has caught fire.

silence_kit fucked around with this message at 04:14 on Feb 22, 2017

silence_kit
Jul 14, 2011

by the sex ghost

Tokamak posted:

The article pretty much explains why it isn't likely to be real in the first couple of paragraphs. Pretty much the only reason the writer thinks it could be real is that they used an acronym of an upcoming microarchitecture in one of the drawings. It isn't really feasible with how chips are currently produced. Anyone buying them would have to get a lot of them, and you wouldn't be designing them off a website.

Why isn't it feasible? I certainly don't think that some guy in this thread is going to be able to afford a custom multi-chip-module assembly for his gaming rig, but why would it be unreasonable for Intel to offer this kind of service for its larger enterprise customers?

silence_kit
Jul 14, 2011

by the sex ghost

Ak Gara posted:

I have a weird CPU question.

As you go smaller and smaller nanomater fabrications, you can either put more stuff on the same size die, or use the same amount of stuff on a smaller die, yes?

When you put the same amount of stuff on the smaller die, you use less electricity. The smaller transistors and wires use less electricity than the larger ones.

silence_kit
Jul 14, 2011

by the sex ghost

SinineSiil posted:

I thought of an interesting question: Why can't CPU's run at even lower frequencies to save more power? Mine runs at 800 MHz at lowest.

Computer chips use electricity even when their circuits are not switching at the clock frequency--there is a 'static' power dissipation overhead, meaning the chip would use electricity even if its clock frequency were set to 0 Hz. This is to be contrasted with the usual 'dynamic' power dissipation from the transistors in the circuits switching on and off, which increases with clock frequency. One source of the static power dissipation overhead is transistor off-state leakage.

Transistors on computer chips are electrically controlled switches, but they aren't ideal switches--when turned off, they still conduct a small amount of electricity. Per transistor, this isn't a lot, but multiply it by the number of transistors on a modern computer chip (~a billion) and it can add up to watts.
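As a toy model (all numbers invented for illustration), total power is a frequency-proportional dynamic term plus a static leakage floor, so cutting the clock doesn't cut the power proportionally:

```python
# Toy power model: dynamic power scales with clock frequency, but the
# static (leakage) term is still there even at 0 Hz. Numbers are
# invented for illustration, not any real chip's.

def total_power(freq_hz, dyn_w_per_ghz=10.0, static_w=5.0):
    """Dynamic power proportional to frequency, plus a fixed static term."""
    return dyn_w_per_ghz * freq_hz / 1e9 + static_w

for f_ghz in (3.0, 1.6, 0.8, 0.0):
    print(f"{f_ghz:.1f} GHz -> {total_power(f_ghz * 1e9):.1f} W")
# 3.0 GHz -> 35.0 W, 1.6 GHz -> 21.0 W, 0.8 GHz -> 13.0 W, 0.0 GHz -> 5.0 W
```

This is also part of why clocking down below some floor stops being worth it: for a fixed amount of work, runtime scales like 1/f, so the static watts get burned over a longer and longer time.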

silence_kit fucked around with this message at 15:16 on Mar 9, 2017

silence_kit
Jul 14, 2011

by the sex ghost

eames posted:


Tokamak posted:

The article pretty much explains why it isn't likely to be real in the first couple of paragraphs. Pretty much the only reason the writer thinks it could be real is that they used an acronym of an upcoming microarchitecture in one of the drawings. It isn't really feasible with how chips are currently produced. Anyone buying them would have to get a lot of them, and you wouldn't be designing them off a website.

One of the purposes of patent drawings is to express some of the patent's embodiments (implementations). So saying a customisable chip could have the latest intel cores, FPGA... is for illustrative purposes. They need to say that it can everything, be customised however, and procured whenever to cover their bases. So going with a codename (which we've known for a year prior) for a something that isn't even out makes the most sense in that context.

This is like seeing that PS4 patent where you say 'Mcdonalds' to skip an advert, and extrapolating that Mcdonalds is so specific that there must be some deal in place for it to happen. So therefore it will happen.

http://wccftech.com/intel-kaby-lake-g-hbm2-gpu-multi-die/ :toot:

To be fair, I think Tokamak in that quote was arguing against the idea of Intel offering a custom multi-chip-module design, manufacturing, & assembly service and not arguing against the idea of Intel selling multi-chip-module CPUs.

silence_kit fucked around with this message at 04:33 on Apr 5, 2017

silence_kit
Jul 14, 2011

by the sex ghost

KOTEX GOD OF BLOOD posted:

Isn't there some kind of quantum physics issue with that (disclaimer: I studied political science and remember reading about this on wikipedia)

Usually the theoretical limitation cited as preventing transistor gate lengths from becoming arbitrarily small is direct quantum mechanical tunneling from source to drain. This problem would mean that the transistors still conduct electricity when switched off and can't really be fully switched off.

Transistors in computer chips are expected to conduct very little electricity when switched off, and because of this expectation, circuits are designed so that most of the transistors on a computer chip spend most of their time sitting idle and switched off.
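To give a feel for how brutally this scales, here's a crude WKB-style estimate of the relative tunneling transmission through a rectangular barrier. The 0.5 eV barrier height and 0.2 m_e effective mass are round numbers I picked for illustration, not values for any real transistor:

```python
import math

# WKB transmission through a rectangular barrier: T ~ exp(-2*kappa*L),
# with kappa = sqrt(2*m*U)/hbar. Parameters below are illustrative.

HBAR = 1.0545718e-34  # J*s
M_E = 9.10938e-31     # kg, free electron mass
EV = 1.602e-19        # J per eV

def tunneling_factor(barrier_ev, length_m, m_eff=0.2 * M_E):
    """Relative WKB transmission through a rectangular barrier of
    height barrier_ev (eV) and width length_m (m)."""
    kappa = math.sqrt(2.0 * m_eff * barrier_ev * EV) / HBAR  # 1/m
    return math.exp(-2.0 * kappa * length_m)

for length_nm in (20, 10, 5, 2):
    print(f"{length_nm:>2} nm barrier: relative transmission ~ "
          f"{tunneling_factor(0.5, length_nm * 1e-9):.1e}")
```

In this toy model, shrinking the barrier from 20 nm to 2 nm raises the transmission by roughly 25 orders of magnitude, which is the flavor of why source-to-drain tunneling eventually puts a floor under off-state leakage.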

silence_kit fucked around with this message at 13:39 on Jun 18, 2018

silence_kit
Jul 14, 2011

by the sex ghost

Eletriarnation posted:

I think the original Moore's law was just about transistor density in a given area of integrated circuit, as the article was titled "Cramming more components onto integrated circuits". All the other conclusions followed from that given some assumptions which were valid at the time. Those assumptions, e.g. "the cost per wafer of size X for process Y is similar to the cost of a size X wafer from process (Y+1)" are not necessarily valid anymore.

Obviously, Moore's Law became all about transistor and wire shrinkage later, but I don't think the original article really emphasizes transistor & wire miniaturization.

When I read the original article, I got the impression that Moore's thinking at the time (the 60's) was that cost/function would go down, and (# of devices)/circuit would go up, through improved yields due to better manufacturing know-how, and through larger die sizes (which would not necessarily be low-yielding, thanks to proportionally greater improvements in manufacturing know-how).

See the below quote from the article, whose last sentence reveals this thinking, in my opinion:

quote:

The complexity for minimum component costs has increased at a rate of roughly a factor of two per year (see graph on next page). Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000.

I believe that such a large circuit can be built on a single wafer.

At one point in the original article he mentions transistor & wire miniaturization improving the power-delay figure of merit, but I think this is the only reference to miniaturization in the article:

quote:

In addition, power is needed primarily to drive the various lines and capacitances associated with the system. As long as a function is confined to a small area on a wafer, the amount of capacitance which must be driven is distinctly limited. In fact, shrinking dimensions on an integrated structure makes it possible to operate the structure at higher speed for the same power per unit area.

silence_kit fucked around with this message at 10:37 on Dec 27, 2019

silence_kit
Jul 14, 2011

by the sex ghost

Paul MaudDib posted:

What you're describing is a different process (eg some of the mobile vs HPC nodes, I forget from who) with different design rules.

Indiana_Krom is mainly describing how transistor sensitivity to input voltage is somewhat fundamental: you trade off transistor on-current density (and thus speed) against off-state leakage current density (and thus leakage power dissipation) by changing the transistor threshold voltage.

I don't work for Intel, nor do I actually work in the computer chip industry, so I can only speculate and go off what I have read, but I suspect that Intel has multiple threshold-voltage transistor options in their recent processes. So I believe that to perform this trade-off, Intel would not need to switch processes. They would need to switch designs, though; there is no way to dynamically change transistor threshold voltage after manufacture and testing.
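A sketch of that trade-off (the 80 mV/decade subthreshold swing and all the other numbers here are illustrative assumptions on my part, not Intel's):

```python
# Per-transistor off-state leakage falls roughly one decade per
# subthreshold swing (~70-100 mV) of extra threshold voltage, while
# the gate overdrive (Vdd - Vt) that sets the on-current shrinks.
# Illustrative values only.

def off_current(vt_v, i0_amps=1e-6, swing_v_per_decade=0.080):
    """Subthreshold leakage per device: I_off = I0 * 10^(-Vt/S)."""
    return i0_amps * 10.0 ** (-vt_v / swing_v_per_decade)

VDD = 1.0
for vt in (0.25, 0.35, 0.45):
    print(f"Vt = {vt:.2f} V: I_off ~ {off_current(vt):.1e} A/device, "
          f"overdrive = {VDD - vt:.2f} V")
```

So a low-Vt device is fast (big overdrive) but leaky, and a high-Vt device is slow but quiet, and that choice gets baked in at design time, which is the 'switch designs, not processes' part.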

silence_kit
Jul 14, 2011

by the sex ghost

Malcolm XML posted:

I thought it was common knowledge that the reason Intel limited mainstream desktop chips to 4 cores pre-Ryzen was that they just harvested leaky mobile processors. It's also why they always had an iGPU despite it being pointless on most desktops.

If you go to wikichip.org, you can find die images for the Haswell, Skylake, etc. designs. The site shows two different die images for Intel's 2 core (primarily laptop) & 4 core (primarily desktop) products for many of its generations. This suggests that Intel does not harvest leaky mobile processors to create its desktop products: if it did, there would be only one die image covering both the 2 core & 4 core products.

Paul MaudDib is right in the quote below: there is no evidence on wikichip.org that the circuit designs for each core differ between the 2 core & 4 core products. That doesn't preclude the possibility that they are different, though.

Paul MaudDib posted:

I don't know of any evidence that Intel has a different stepping of skylake cores with a different library for mobile transistors

I would believe such a library exists for things like Atom/Denverton cores though

silence_kit
Jul 14, 2011

by the sex ghost
Lol nowhere did I claim that they were authoritative, or that I am engaging in anything other than speculation

silence_kit
Jul 14, 2011

by the sex ghost
When people say 'leaky silicon' what does that really mean?

Is it that the transistors in the chip in the non-T version have a wider range of threshold voltages, so some low-threshold voltage transistors conduct more current in the off-state than expected (this is the leakage)? And the fact that there are some high-threshold voltage transistors in the distribution means that the chip will either need a little extra time or a little extra voltage for the switching circuits to be able to complete its computations within the clock period?

Does this explanation capture the physical differences between non-T & T versions of chips, or is it something else that creates that distinction?


silence_kit
Jul 14, 2011

by the sex ghost

Indiana_Krom posted:

Transistors are analog devices so "off" isn't zero volts, the whole operating range of transistors is a curve. Really fast transistors generally let more voltage through even when they are off

? Do you mean current here? I understand that there often is a trade-off between achieving high transistor on-current density (this is almost the same thing as transistor speed) and low transistor off-current density (this is proportional to static power dissipation in a digital circuit), and the trade-off can be made in digital circuit design when selecting sub-circuits containing transistors with different threshold voltages for the different functions in the circuit.

Indiana_Krom posted:

and the distance between off and on is smaller allowing them to switch really quickly at the expense of always "leaking" a lot of power.

? Is this just another way of saying that small (meaning short gate length) transistors are susceptible to having unexpectedly low threshold voltages? This effect alone doesn't explain why a T-series Intel chip would be faster at a lower supply voltage, though. Reasoning only from this effect, the T-series chip would have higher threshold voltages, meaning lower standby power dissipation, but would require more voltage to hit a particular clock speed, because the T-series transistors would need more voltage to reach the current density needed for speed.

Indiana_Krom posted:

Basically in order to work a high performance transistor needs to be able to quickly change its field strength across the threshold, but the only real way to accomplish that is to have a really weak field because of the capacitance involved in switching. Strong fields that block most or all of the current from passing through also have a lot of capacitance which requires a lot more time to charge or discharge.

Is this just another way of saying that the way to prevent small transistors from having unexpectedly low threshold voltages is to design the transistor to have a high gate capacitance density? And that in this regime, higher gate capacitance density, instead of increasing the current density like it used to, now leads to lower current density?

This effect still doesn't explain why a T-series Intel chip would be faster at a lower supply voltage. By itself, this effect would say that the T-series chip has higher total capacitance, so it would be slower than the non-T-series chip and would require more voltage to run at the same speeds as the non-T-series chip.
