silence_kit
Jul 14, 2011

by the sex ghost

Fantastic Foreskin posted:

As someone who only has man-on-the-street level knowledge of chip fab, can someone explain to me what exactly it means for a node/process to fail, and how one does it for 5 years straight?

It seems like, from looking at computer chip benchmark numbers, Intel's chip process technology has been stagnant when compared to its competitors, one notable competitor being the Taiwan Semiconductor Manufacturing Company (TSMC). It is hard to tell for sure, because computer chip manufacturing process information/capability is proprietary.

Is this a massive failure? I don't know. We might actually be reaching the stage in technology development where computer chip manufacturing technology is extremely mature, and it is challenging to make noticeable improvements. This has been predicted for many years for computer chip manufacturing technology, but it might actually be true now. So maybe it isn't really a massive failure on Intel's part, and TSMC is just doing incredible work.

Most people in this thread (including me) do not actually understand this subject and do not have any sort of insight beyond being a man-on-the-street. Most people in this thread are computer programmers or are PC hobbyists who parrot information from others and don't really understand this subject at all.

silence_kit fucked around with this message at 14:28 on Apr 2, 2021


Khorne
May 1, 2002

Fantastic Foreskin posted:

As someone who only has man-on-the-street level knowledge of chip fab, can someone explain to me what exactly it means for a node/process to fail, and how one does it for 5 years straight?
You fail 5 years straight by trying to get the process right and continually ending up with worse than expected results.

Fab tech is cutting edge. Intel tried to push to the theoretical limit of what was possible without EUV with their 10nm node. They opted to use cobalt in a way other fabs aren't, too, which meant they were pioneering a process with that material while trying to hit apparently out-of-reach targets. The end result is that it has bad yield, is expensive, and didn't meet the specifications they were aiming for. Yield in this context roughly means: of all the chips you print on a wafer, how many come out working. Larger chips are more vulnerable to high defect rates, which is another thing working against Intel.

All processes have a yield that's not 100%. Intel's defect rate on 10nm is (was?) unheard of for a production node. And not in a good way. Intel can likely recover over the next two node hops because of the transition to EUV.
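The die-size point can be put in rough numbers with the textbook Poisson yield model, Y = exp(-D·A). Quick sketch with made-up defect densities (not Intel's actual figures, which aren't public):

```python
import math

def poisson_yield(defect_density: float, die_area: float) -> float:
    """Poisson yield model: fraction of dies with zero defects.

    defect_density in defects/cm^2, die_area in cm^2.
    """
    return math.exp(-defect_density * die_area)

# Illustrative numbers only: a healthy node vs. a struggling one.
HEALTHY, STRUGGLING = 0.1, 1.0  # defects per cm^2

for area in (0.5, 1.0, 2.0):  # small laptop die vs. big desktop/server die
    print(f"{area:.1f} cm^2 die: healthy node {poisson_yield(HEALTHY, area):.0%}, "
          f"struggling node {poisson_yield(STRUGGLING, area):.0%}")
```

The exponent is why big dies get hit hardest: doubling the die area squares the survival fraction, so a defect rate that's merely mediocre for small mobile chips can be hopeless for large desktop or server dies.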

It's also worth pointing out that Intel's 10nm and TSMC's 7nm are roughly equivalent processes in terms of the final chips they can produce, despite the naming difference. Intel's 7nm should be competitive with TSMC's 5nm. As long as Intel has a reasonably competitive 7nm node they should be fine.

Intel was years ahead of other fabs when they hit 14nm. Now they are a year or two behind. More if you count 10nm, but realistically they should be a year or two behind with their 7/5 nodes.

Khorne fucked around with this message at 15:01 on Apr 2, 2021

Indiana_Krom
Jun 18, 2007
Net Slacker

Fantastic Foreskin posted:

As someone who only has man-on-the-street level knowledge of chip fab, can someone explain to me what exactly it means for a node/process to fail, and how one does it for 5 years straight?
We could get in to lots of technical reasons Intel 10nm has been a dumpster fire, but the rough of it is: They can't make chips as quickly as they would like, way too many of the ones they do make end up being defective, and even the good ones don't perform well.

Why it is taking 5 years is for multiple reasons from typical corporate mismanagement to the actual goals being too ambitious for the equipment they are trying to use. Also high volume manufacturing of stuff that small is just inherently incredibly difficult and risky.

What helped TSMC succeed with 7nm where Intel failed is TSMC is using EUV "extreme ultraviolet" lithography for critical layers where Intel is trying to do it only with DUV "deep ultraviolet" lithography. Basically if the ultraviolet you use is a marker, Intel is trying to use a 193nm wide marker tip to draw a 10nm wide line where TSMC is using a 13.5nm wide marker instead. Everyone was using 193nm down to about 12-14nm, there are a lot of tricks and workarounds to make it work down to those sizes, but the difficulty goes up exponentially as the size decreases and TSMC/samsung/etc simply waited till the machines that work with 13.5nm became available before they attempted anything smaller.
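The marker analogy maps onto the standard back-of-envelope resolution formula, the Rayleigh criterion CD = k1 * λ / NA. A sketch (the NA and k1 values below are typical textbook numbers, not any particular fab's):

```python
def rayleigh_cd(wavelength_nm: float, na: float, k1: float = 0.30) -> float:
    """Rayleigh criterion: smallest printable feature (critical dimension)
    in a single exposure, CD = k1 * wavelength / numerical aperture."""
    return k1 * wavelength_nm / na

duv = rayleigh_cd(193.0, na=1.35)  # ArF immersion DUV
euv = rayleigh_cd(13.5, na=0.33)   # first-generation EUV optics
print(f"DUV single exposure: ~{duv:.0f} nm")
print(f"EUV single exposure: ~{euv:.0f} nm")
```

So a single DUV exposure bottoms out around the low 40s of nanometers, which is why anything smaller needs multi-patterning tricks, while EUV gets down to roughly 12-13 nm in one shot despite its much lower NA.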

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

Indiana_Krom posted:

What helped TSMC succeed with 7nm where Intel failed is TSMC is using EUV "extreme ultraviolet" lithography for critical layers where Intel is trying to do it only with DUV "deep ultraviolet" lithography. Basically if the ultraviolet you use is a marker, Intel is trying to use a 193nm wide marker tip to draw a 10nm wide line where TSMC is using a 13.5nm wide marker instead. Everyone was using 193nm down to about 12-14nm, there are a lot of tricks and workarounds to make it work down to those sizes, but the difficulty goes up exponentially as the size decreases and TSMC/samsung/etc simply waited till the machines that work with 13.5nm became available before they attempted anything smaller.

This, I think, is what I was asking. The manufacturing failure is obviously a technical problem, I just don't know enough about chip fab / design to know what these problems could even be. The 'five years' part was secondary, but if you're trying to drive a nail with a screwdriver that'll get you to five years no problem.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

gradenko_2000 posted:

https://www.youtube.com/watch?v=LYdHTSQxdCM

Gamers Nexus has a review up of the i5-11400, the non-overclockable Rocket Lake six-core, and it comes really dang close to a 5600X despite being over a hundred bucks cheaper

or, put another way, is significantly faster than a Ryzen 5 3600 on top of being 20-40 bucks cheaper

Compared to a 3600 it's very good, but I hate the way Steve falls into the cost trap with budget parts. Yes, the 11600k costs 60% more than the 11400f. Yes, it's only ~10% faster. But comparing price:performance straight up is loving stupid. You have to take all the elements of the system into account; as a portion of total system cost, the difference between the two makes a lot more sense. That's not to say that the 11400f isn't a good buy - with the B560 memory speed changes, it's a fantastic buy on a strict budget, for anyone who doesn't anticipate any highly CPU-bound loads, for someone who wants a very energy-efficient/cool/quiet system (especially on a budget), or for someone who might be interested in building another new system in a shorter-than-usual window, etc etc - but I hate the way reviewers misrepresent the relative value of parts. There's also the even bigger factor that no one ever seems to take into account - what really matters in most cases is performance UPLIFT. You're paying for the increased performance over what you have now, the baseline is not zero.
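The whole-system framing is easy to sanity-check with made-up numbers (prices and the perf index below are purely hypothetical, just to show the effect):

```python
# Hypothetical figures for illustration only.
REST_OF_SYSTEM = 800  # GPU, board, RAM, PSU, case, etc.

cheap = {"name": "budget CPU", "price": 180, "perf": 100}
fast = {"name": "pricier CPU", "price": 290, "perf": 110}

for cpu in (cheap, fast):
    naive = cpu["perf"] / cpu["price"]                      # CPU-only price:perf
    system = cpu["perf"] / (cpu["price"] + REST_OF_SYSTEM)  # whole-system price:perf
    print(f'{cpu["name"]}: naive {naive:.3f} perf/$, whole-system {system:.4f} perf/$')
```

Looked at in isolation the cheap part wins by roughly 45% on perf/$; folded into the rest of the build, the gap shrinks to about 1%, which is exactly the cost-trap point.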

Still, on the whole I'm very excited that Intel is making low end parts that are actually GOOD. Outside of pro workloads where you really need a lot of cores, I'd be hard pressed to tell anyone who bought an 11400f that they made a poor decision. The impact Zen has had for consumers is amazing and this competition is fantastic.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

K8.0 posted:

but I hate the way reviewers misrepresent the relative value of parts. There's also the even bigger factor that no one ever seems to take into account - what really matters in most cases is performance UPLIFT. You're paying for the increased performance over what you have now, the baseline is not zero.

yeah I get this - the i3-10100 is missing from the benchmark charts and that's probably the weakest part of the review

K8.0 posted:

Still, on the whole I'm very excited that Intel is making low end parts that are actually GOOD.

I think it's more that AMD has abandoned that part of the market while the gettin' is good

like, they could choose to make a Ryzen 3 5100 or whatever with four Zen 3 cores that'd probably OC like a beast while running circles around the i3, but why make it when you only have so many wafers to work with and you already can't keep the 5600X and up from flying off the shelves

I mean, it's good that Intel is taking advantage of that opening, but even Steve identified that there's been less marketing about the 11400 than there has been about the i7 and the i9, fat lot of good it all did

Indiana_Krom
Jun 18, 2007
Net Slacker

Fantastic Foreskin posted:

This, I think, is what I was asking. The manufacturing failure is obviously a technical problem, I just don't know enough about chip fab / design to know what these problems could even be. The 'five years' part was secondary, but if you're trying to drive a nail with a screwdriver that'll get you to five years no problem.
Yeah, it was basically a nail-with-a-screwdriver or square-peg-in-a-round-hole kind of problem. It isn't like there aren't valid reasons not to want to use 13.5nm wavelength light, though. It took a while for the tools to become available because nobody had a light source that could reliably put out that wavelength with enough power to be useful for photolithography, and the environment it requires further complicates things.

DUV machines use "regular" lasers; EUV machines have to use a laser to vaporize a tiny droplet of liquid metal, which then produces a flash in the EUV wavelength (and said flash isn't as "clean" as a laser). DUV machines can use lenses and conventional optics, only need to be filled with nitrogen (no oxygen), and can immerse the wafer in a special thin layer of water to improve the optics. EUV machines have to hold a near vacuum with only trace hydrogen inside, and have to use mirrors because the wavelength won't go through lenses or just about anything else. Also, EUV is powerful enough to qualify as ionizing radiation and gradually decays/destroys anything it comes into contact with, including the mirrors inside the machine (it is right on the edge of the X-ray spectrum).

13.5nm is probably the end of the line: any larger wavelength and you are stuck with the same "too big" problem, any smaller wavelength and it starts passing through EVERYTHING because it's an X-ray or gamma ray, and the things it does hit get electrons violently dislodged from their atoms.

Anyway, it is just kind of an interesting/fascinating subject where microchip manufacturing meets nuclear physics. If you are bored on a weekend or in pandemic isolation some time, reading about photolithography and DUV/EUV on Wikipedia can pass a few hours, and you can see what you remember from your elementary science classes on the electromagnetic spectrum.
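The "edge of the X-ray spectrum" bit checks out with the photon-energy formula E = hc/λ. Quick sketch:

```python
H = 6.62607015e-34     # Planck constant, J*s
C = 2.99792458e8       # speed of light, m/s
J_PER_EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in electronvolts: E = h*c/lambda."""
    return H * C / (wavelength_nm * 1e-9) / J_PER_EV

print(f"DUV (193 nm): {photon_energy_ev(193.0):.1f} eV")  # visible light is ~2-3 eV
print(f"EUV (13.5 nm): {photon_energy_ev(13.5):.1f} eV")
```

A DUV photon carries about 6.4 eV; an EUV photon about 92 eV, far past the ~10 eV-ish threshold where radiation starts ionizing atoms, which is why EUV light chews up mirrors and everything else it touches.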

Indiana_Krom fucked around with this message at 17:21 on Apr 2, 2021

FuturePastNow
May 19, 2014


gradenko_2000 posted:

I think it's more that AMD has abandoned that part of the market while the gettin' is good

Yeah, AMD has processors that fit that part of the market- their 4 and 6 core APUs- they just can't make nearly enough of them to sell at retail since it's all competing for the same 7nm wafers. I think they'd sell boxed 4350/4650/4750s if they could. But the margins are higher on everything else.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!
Yeah basically any and every problem you can name about AMD stems from just not having enough wafers.

Bofast
Feb 21, 2011

Grimey Drawer

Fantastic Foreskin posted:

As someone who only has man-on-the-street level knowledge of chip fab, can someone explain to me what exactly it means for a node/process to fail, and how one does it for 5 years straight?

Much of Intel's issues seem to come from how they tried to approach smaller manufacturing nodes without waiting for EUV tools, likely in an attempt to maintain an edge in process nodes over other companies. They went for slightly different materials and methods than other foundries did to push how small they could make things, and those choices ended up not working out that well in practice. It should be mentioned that they previously had issues for a long time with getting yields (useful chips per wafer) on 14nm to levels similar to their 22nm process.

Their 10nm yields were so awful that for quite a while they did not manage to make commercially viable chips on it. If I recall correctly the first :airquote: commercially available :airquote: 10nm chip they had was some dual-core chip with the iGPU disabled that released in a China only Lenovo laptop SKU. Basically, it was sent out to die just so they could tell shareholders that they were shipping 10nm.

Even when things improved to the point where they could make some chips on 10nm, they did not seem able to clock the chips high enough to compete with their own 14nm process. They might be good enough for some laptop uses, for example, but not for desktop. Even their server CPUs that are scheduled for May this year seem like they will be pointless for many customers because of how the performance-per-watt figures will sometimes be worse than the existing ones, and that's not even counting the competition from Epyc.

It does sound like Intel have recently managed to simplify their manufacturing processes a bit due to how EUV tech is now so readily available compared to the more complex (and error prone) workarounds they were trying earlier, so they might finally get back into the swing of things with 7nm ready for production around mid/late 2023 or so.

Cygni
Nov 12, 2005

raring to post

I think i remember reading that the early 10nm node that failed had some obscene amount of multi patterning (as in like 7 passes) that basically ensured they were never going to get yields, with the idea being that eventually they would brute force it and make it work and yields would improve but they... never did.

We've talked about it before but it truly is remarkable how big a gently caress up that first 10nm failed node was. That they've been able to string 14nm out this long without getting obliterated is a testament to how far ahead they were to start with, but that failure is really starting to hit home. Steam hardware survey has Intel hemorrhaging marketshare in gaming (they've lost 4% marketshare in the last 4 months), and Milan seems to have a ton of momentum in server world.

10nm SF really has to work, and 7nm really REALLY has to work, or the only option would be to start turning to TSMC in earnest.

Gwaihir
Dec 8, 2009
Hair Elf
Even in TSMC land, there's a pretty hard limit on how fast they can expand- iirc there's only one company (ASML?) that makes the actual wafer fabrication machines, and they only produce/sell about 35 of them a year, at a cost of a cool 175 million a pop.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Gwaihir posted:

Even in TSMC land, there's a pretty hard limit on how fast they can expand- iirc there's only one company (ASML?) that makes the actual wafer fabrication machines, and they only produce/sell about 35 of them a year, at a cost of a cool 175 million a pop.

Minor differentiation, but I believe ASML makes the photolithography machines, not the wafer / ingot machines. I think there are a somewhat larger number of companies that make those.

Still, ASML is a pretty big bottleneck if you want to get into that business, since as you say, they don't produce many machines and they are fabulously expensive.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
there's also a lot more stages in fabrication than just lithography and you have to expand all of those too

silence_kit
Jul 14, 2011

by the sex ghost

DrDork posted:

Minor differentiation, but I believe ASML makes the photolithography machines, not the wafer / ingot machines. I think there are a somewhat larger number of companies that make those.

Electronics-grade silicon is almost a commodity, which is actually quite remarkable because it might be the purest material known to man.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

TheFluff posted:

I've seen AVX512 used in image/video processing (resizing, bitdepth conversion, gamma curves, tone mapping, all the usual matrix math stuff) but the performance wasn't that impressive, at least not on the Skylake-X system it was developed on. IIRC it was like 30% faster than AVX2? You got like twice as many pixels per clock cycle as with AVX/AVX2 on paper in many cases, but it definitely didn't get twice as fast in practice. I don't write this kind of stuff myself, I just hang out with people who do, so this is hearsay and take it as you will.

Of course this sort of stuff can be done on a GPU as well but these image operations are usually part of some bigger processing pipeline and it gets obnoxious to transfer the image back and forth between CPU and GPU for each pipeline step depending on how it's implemented, and a lot of filters aren't written for GPU processing, so there's a lot of value in doing these things on the CPU still.

e: here's a spooky mix of C++ templates, C preprocessor macros and avx512 intrinsics if anyone is curious

video encoding makes heavy use of AVX and so far hasn't been directly portable to SIMT GPU architectures (there is of course fixed-function hardware encoding, but it goes to show there are some workloads you can't port to GPUs).

I think some of blender's renderers use it as well?

inference is another. let's say you want a game with a unit AI whose goals are driven by inference: it could potentially be a lot of game state to shuttle over to a GPU for a relatively small computation. anywhere you have that pattern of "needs to be short latency, or transfer a lot of frequently-changing data to do a small computation".

or just stuff that flat-out doesn't fit into GPU memory at all.

emulation is another odd one, the RPCS3 people are already making use of it and it's potentially useful for ARM emulation and so on.

string processing is another unexpected one, some of the fast string-parsing libraries do use AVX for bit-searching type tasks and could potentially make use of both the vector width and the new feature sets. gets used in JSON parsing and so on. Really any sort of bit-twiddling code is potentially a use-case for it, that's why it helps emulation too.

djbsort for constant-time (in the cryptographic sense) integer sorting

it gets used in a lot of places where you wouldn't (or can't) just throw a GPU at the problem. of course the gains from going wider on vector width aren't linear in most cases, a 50% speedup is probably doing well for going from 256->512 bit vectors. In some cases you may get good speedups just from the newer instructions available, like the emulator guy who replaced five AVX2 instructions with one AVX-512 instruction when emulating an ARM SVE instruction.

now that there's no downclocking penalty, the applications get wider. that was obviously a troublesome aspect of Skylake-X/SP and even Haswell/Broadwell. But Zen2/Zen3 doesn't have downclocking and now Ice Lake consumer/Ice Lake-SP/Rocket Lake don't either.
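The "gains aren't linear" point is just Amdahl's law applied to vector width: doubling the vectors only doubles throughput on the part of the runtime that actually vectorizes (and isn't memory-bound). A sketch with made-up fractions:

```python
def amdahl_speedup(vector_fraction: float, width_gain: float) -> float:
    """Overall speedup when only `vector_fraction` of the runtime
    benefits from `width_gain`x wider vectors (Amdahl's law)."""
    return 1.0 / ((1.0 - vector_fraction) + vector_fraction / width_gain)

# Going AVX2 -> AVX-512 doubles peak vector throughput (width_gain = 2).
for frac in (0.5, 0.8, 0.95):
    print(f"{frac:.0%} of runtime vectorized: {amdahl_speedup(frac, 2.0):.2f}x overall")
```

Even with 80% of the runtime in vector code, 2x wider vectors only buy about 1.67x overall, which lines up with the ~30-50% real-world gains mentioned above.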

Paul MaudDib fucked around with this message at 22:52 on Apr 2, 2021

Fame Douglas
Nov 20, 2013

by Fluffdaddy
Does Paul actually know what they're talking about? All existing evidence points to "no", so I'm curious whether that post is accurate at all.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
He does.

Fame Douglas
Nov 20, 2013

by Fluffdaddy

Doesn't seem that way.

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



TSMC had also been planning a stateside fabrication plant for a few years prior to the formal announcement, so it wouldn't surprise me if they had pre-purchased certain key equipment required to expedite bringing the new Arizona fab online, especially based on some of the ongoing discussions in Arizona regarding infrastructure requirements and planning.

WhyteRyce
Dec 30, 2001

Has intel purchased the EUV machines they need? Given their rarity and cost and lead time and their reluctance to adopt I wonder if they’ll even get the units they need even after they pivot

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib

WhyteRyce posted:

Has intel purchased the EUV machines they need? Given their rarity and cost and lead time and their reluctance to adopt I wonder if they’ll even get the units they need even after they pivot
They've definitely bought a few. I doubt they'd announce that they're building two new fabs if they haven't even ordered the key equipment.

Fame Douglas posted:

Doesn't seem that way.

By your own admission, you can't tell if his post is accurate.

BurritoJustice
Oct 9, 2012

Fame Douglas posted:

Does Paul actually know what they're talking about? All existing evidence points to "no", so I'm curious whether that post is accurate at all.

He isn't always on point but his AVX-512 posts are pretty well informed.

Bofast
Feb 21, 2011

Grimey Drawer

Cygni posted:

I think i remember reading that the early 10nm node that failed had some obscene amount of multi patterning (as in like 7 passes) that basically ensured they were never going to get yields, with the idea being that eventually they would brute force it and make it work and yields would improve but they... never did.

We've talked about it before but it truly is remarkable how big a gently caress up that first 10nm failed node was. That they've been able to string 14nm out this long without getting obliterated is a testament to how far ahead they were to start with, but that failure is really starting to hit home. Steam hardware survey has Intel hemorrhaging marketshare in gaming (they've lost 4% marketshare in the last 4 months), and Milan seems to have a ton of momentum in server world.

10nm SF really has to work, and 7nm really REALLY has to work, or the only option would be to start turning to TSMC in earnest.

I know Semiaccurate claimed Intel was doing quad patterning which is tricky enough. If it was more than that :psyduck:

The problem with turning to TSMC would be that there's no capacity available anyway, so that's not going to be a quick fix.

wet_goods
Jun 21, 2004

I'M BAAD!

Indiana_Krom posted:

We could get in to lots of technical reasons Intel 10nm has been a dumpster fire, but the rough of it is: They can't make chips as quickly as they would like, way too many of the ones they do make end up being defective, and even the good ones don't perform well.

Why it is taking 5 years is for multiple reasons from typical corporate mismanagement to the actual goals being too ambitious for the equipment they are trying to use. Also high volume manufacturing of stuff that small is just inherently incredibly difficult and risky.

What helped TSMC succeed with 7nm where Intel failed is TSMC is using EUV "extreme ultraviolet" lithography for critical layers where Intel is trying to do it only with DUV "deep ultraviolet" lithography. Basically if the ultraviolet you use is a marker, Intel is trying to use a 193nm wide marker tip to draw a 10nm wide line where TSMC is using a 13.5nm wide marker instead. Everyone was using 193nm down to about 12-14nm, there are a lot of tricks and workarounds to make it work down to those sizes, but the difficulty goes up exponentially as the size decreases and TSMC/samsung/etc simply waited till the machines that work with 13.5nm became available before they attempted anything smaller.

It's not just drawing the lines; the way DUV gets around it is by using more layers to make the same features. If you need, let's say, three layers on DUV to make the equivalent layer on EUV, then you are going to have at minimum three times the defects, possibly exponentially more depending on the process and underlayers. Not to mention it makes things far more expensive, since you will need more tooling to print the same number of features.
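That compounding is easy to sketch: if each patterning pass on a critical layer survives with some probability, the passes multiply (the per-pass number below is made up for illustration):

```python
def multipass_yield(per_pass_yield: float, passes: int) -> float:
    """Chance a feature survives `passes` independent patterning steps."""
    return per_pass_yield ** passes

PER_PASS = 0.97  # hypothetical survival rate for one critical patterning pass

for n in (1, 3, 7):  # single exposure vs. triple vs. the rumored 7-ish passes
    print(f"{n} pass(es): {multipass_yield(PER_PASS, n):.1%} of features survive")
```

So even a 3% per-pass loss compounds to roughly a 20% loss at seven passes, before counting alignment errors between passes, which is why the defect scaling can end up worse than linear.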

wet_goods fucked around with this message at 03:27 on Apr 3, 2021

Beef
Jul 26, 2004
To be fair to Intel, it wasn't obvious that EUV would reach maturity in time for their 10nm. I was working at imec years back and EUV was a running joke, always a year or two in the future, for a decade or so. Intel was a large investor in ASML and EUV tech, but simply chose to do it later rather than sooner.
It was still a mistake and hubris to think that quad patterning and new materials would make that 10nm node viable, but that's in hindsight.

Arzachel
May 12, 2012

Indiana_Krom posted:

What helped TSMC succeed with 7nm where Intel failed is TSMC is using EUV "extreme ultraviolet" lithography for critical layers where Intel is trying to do it only with DUV "deep ultraviolet" lithography. Basically if the ultraviolet you use is a marker, Intel is trying to use a 193nm wide marker tip to draw a 10nm wide line where TSMC is using a 13.5nm wide marker instead. Everyone was using 193nm down to about 12-14nm, there are a lot of tricks and workarounds to make it work down to those sizes, but the difficulty goes up exponentially as the size decreases and TSMC/samsung/etc simply waited till the machines that work with 13.5nm became available before they attempted anything smaller.

As far as I know TSMC's 7nm (N7) and high performance (N7P) are DUV only.

Bofast
Feb 21, 2011

Grimey Drawer

Beef posted:

To be fair to Intel, it wasn't obvious that EUV would reach maturity in time for their 10nm. I was working at imec years back and EUV was a running joke, always a year or two in the future, for a decade or so. Intel was a large investor in ASML and EUV tech, but simply chose to do it later rather than sooner.
It was still a mistake and hubris to think that quad patterning and new materials would make that 10nm node viable, but that's in hindsight.

Well, yeah. It was just a gamble that didn't pay off. It happens.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Arzachel posted:

As far as I know TSMC's 7nm (N7) and high performance (N7P) are DUV only.

Yes; that matches what I’ve seen. EUV gets cited a lot, but there were many other design choices that increased risk. They were late and underdelivered on 14 and overpromised on 10, and then continually lied about the problems they had.

Bad executive culture.

Khorne
May 1, 2002

Arzachel posted:

As far as I know TSMC's 7nm (N7) and high performance (N7P) are DUV only.
N7+ is the 7nm variant that uses EUV, for up to 3 layers & designed to go up to 4; N7P itself stayed DUV-only. It's correct that the original TSMC 7nm node did not use EUV.

They rolled out the EUV variant when higher-throughput EUV tools became available.

Khorne fucked around with this message at 14:15 on Apr 3, 2021

wet_goods
Jun 21, 2004

I'M BAAD!

PCjr sidecar posted:

Yes; that matches what I’ve seen. EUV gets cited a lot, but there were many other design choices that increased risk. They were late and underdelivered on 14 and overpromised on 10, and then continually lied about the problems they had.

Bad executive culture.

100% on the executive culture, it took years to fire the guy in charge of the technology, then a few more to fire his lovely, overpaid boss

WhyteRyce
Dec 30, 2001

BK came and rose from that environment, the whole thing probably needs a deep enema. It’s probably rife with PPT engineers and political crap

Cygni
Nov 12, 2005

raring to post

Asrock must be stopped



The Taichi gear thing has always been weird! Please.

https://www.anandtech.com/show/16572/the-asrock-z590-taichi-motherboard-review

FuturePastNow
May 19, 2014


can't wait for that to fail a month after the board's warranty ends

AARP LARPer
Feb 19, 2005

THE DARK SIDE OF SCIENCE BREEDS A WEAPON OF WAR

Buglord
And it’s mechanical? What in the fuckity gently caress??

JawnV6
Jul 4, 2004

So hot ...

BobHoward posted:

I don't know a ton about Transmeta's architecture, other than it was a VLIW machine. They relied on their "Code Morphing System," a JIT, to translate x86 code to this proprietary VLIW ISA. The combo of CPU and low level firmware functioned like a real x86 - the native ISA wasn't documented, and iirc they took steps to prevent you from even trying to run native code yourself.

Despite the protection, I recall people had some success at reverse engineering the native ISA.

transmeta was genius, im sad that team got split up because there were some amazing ideas in there and I'm sorta confident we'll all end up back at that same spot. eventually. maybe a few lifetimes or so.

why you'd even want to access the 'native' transmeta ISA is beyond me, the CMS was doing the really heavy lifting

KYOON GRIFFEY JR
Apr 12, 2010



Runner-up, TRP Sack Race 2021/22

FuturePastNow posted:

can't wait for that to fail a month after the board's warranty ends

as far as I can tell it doesn't actually serve any functional purpose, it's just a complication like a wristwatch has. if it breaks you just don't get to see it spin around and stuff. probably not a big deal in the grand scheme of things.

i think it's kind of a hilarious and cool gimmick

movax
Aug 30, 2008

Cygni posted:

Asrock must be stopped



The Taichi gear thing has always been weird! Please.

https://www.anandtech.com/show/16572/the-asrock-z590-taichi-motherboard-review

Legit would consider a windowed case to show that off. It's dumb but I love it.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

JawnV6 posted:

transmeta was genius, im sad that team got split up because there were some amazing ideas in there and I'm sorta confident we'll all end up back at that same spot. eventually. maybe a few lifetimes or so.

why you'd even want to access the 'native' transmeta ISA is beyond me, the CMS was doing the really heavy lifting

I'm sure the answer was roughly "because it's there". Any time you erect a barrier like that, people will try to tear it down and see.

Don't know that I agree about the Transmeta approach being inevitable. Agree that they had some smart people, but I think they failed due to fundamental issues with the concept. Every time someone tries to veer away from brainiac core designs, the brainiac cores just keep on winning.


FuturePastNow
May 19, 2014


I'm just imagining the little stepper motor seizing up and catching on fire someday.
