FunOne
Aug 20, 2000
I am a slimey vat of concentrated stupidity

Fun Shoe
There's a reason even special purpose stream processors usually have a little general purpose processor attached for setting up, tearing down, and the 'other' things that have to be done.


in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

KKKLIP ART posted:

So for those that really didn’t ever follow it, what was the big hype from Itanium to begin with and how did it fall flat on its face?

also supported 64 bit addressing, so you can do more than 4 gb ram without pae

also had a shitload of cache but thats more of an attempted bandaid around man's inability to generate optimal vliw
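The addressing arithmetic behind the "more than 4 gb without pae" point, as a quick sketch (nothing Itanium-specific, just pointer-width math):

```python
# 32-bit pointers cap a flat address space at 4 GiB; PAE widens physical
# addresses to 36 bits but each process still sees a 32-bit space; a
# 64-bit ISA removes both ceilings.
GiB = 2**30

print(2**32 // GiB)   # 4           -- flat 32-bit limit
print(2**36 // GiB)   # 64          -- PAE physical ceiling, still 4 GiB/process
print(2**64 // GiB)   # 17179869184 -- 16 EiB, effectively unlimited
```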

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️
It also helps that the out-of-order x86 logic didn't really take much extra transistor budget in the Pentium Pro versus the P5, and that cost was reduced to just background noise by the end of the 90s.

If Skylake were backported to an ancient 90nm node by cutting out the IGP and leaving just 1MB of L3, we'd prolly still retain like 90% of the IPC

Palladium fucked around with this message at 02:36 on Feb 2, 2019

rage-saq
Mar 21, 2001

Thats so ninja...
The real value of Itanium was the scalable processor setup: it was designed for mainframes where you might have 8-128 processors and want to swap out a bay of 4 processors and 32gb of ram while the mainframe stays online, which in, say, 2002 was a huge deal for large enterprises.
The problem is, people just scaled out simpler hardware into data centers and load balanced with software, while AMD put pressure on Intel with x64 and Athlon, which eventually led Intel to reprioritize, go all out on x64, and cannibalize the low-end Itanium market.

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
So SSE and AVX are distinguished from VLIW by the former being a single vectorised instruction and the latter being (potentially) a mix of instructions? Was anything in Itanium's design salvageable for x86?

PC LOAD LETTER
May 23, 2005
WTF?!
They're probably far too fundamentally opposing approaches to designing an architecture for much to translate from one to the other.

EPIC was all about pushing all the work of extracting ILP onto the programmer and compiler (really the programmer), while x86 has evolved to have the hardware do more of that so the programmer and compiler don't have to. EPIC being VLIW while modern x86 is a mish-mash of CISC and RISC probably doesn't help a bit either.

I wouldn't be shocked if it turned out that Intel's and HP's attempts to improve the compilers for EPIC resulted in some compiler improvements that were also beneficial in general for x86 CPUs, though. But that would be more of an accidental side benefit.
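A toy sketch (invented mini-ISA, not any real instruction set) of the SIMD-versus-VLIW distinction ConanTheLibrarian asked about: a SIMD instruction is one opcode applied across lanes, while a VLIW bundle is a compiler-packed group of independent, possibly different ops issued together:

```python
def simd_add(a, b):
    """One SIMD instruction: a single opcode applied to every lane."""
    return [x + y for x, y in zip(a, b)]

def apply_op(op, regs):
    """Evaluate one scalar op against the current register file."""
    kind, dst, a, b = op
    if kind == "add":
        return regs[a] + regs[b]
    if kind == "mul":
        return regs[a] * regs[b]
    raise ValueError(kind)

def vliw_issue(bundle, regs):
    """One VLIW bundle: several *independent* ops the compiler packed
    together; hardware issues them all at once, so every op reads the
    old register values and all writes land afterwards."""
    results = {op[1]: apply_op(op, regs) for op in bundle}  # read phase
    regs.update(results)                                    # write phase
    return regs

# SIMD: one instruction, many data
print(simd_add([1, 2, 3, 4], [10, 20, 30, 40]))  # [11, 22, 33, 44]

# VLIW: one bundle, a mix of instructions, all independent
regs = {"r1": 2, "r2": 3, "r3": 5, "r4": 7}
bundle = [("add", "r5", "r1", "r2"), ("mul", "r6", "r3", "r4")]
vliw_issue(bundle, regs)
print(regs["r5"], regs["r6"])  # 5 35
```

The hardware-side difference falls out of this: the SIMD unit needs one decoder and replicated lanes, while the VLIW machine trusts the compiler's promise that the ops in a bundle don't depend on each other.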

Vanagoon
Jan 20, 2008


Best Dead Gay Forums
on the whole Internet!
Itanium and Netburst happened at around the same time, right?

Was someone purposefully trying to sink Intel from the inside? (ahah) It seems like they were loving up as much as possible on purpose during that time.

WhyteRyce
Dec 30, 2001

Extremely smart people are frequently able to stubbornly shove through bad ideas, no matter how short-sighted the ideas or how strong the objections, simply because it was their baby

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

WhyteRyce posted:

Extremely smart people are frequently able to stubbornly shove through bad ideas, no matter how short-sighted the ideas or how strong the objections, simply because it was their baby

Yeah. That, and there's often a serious issue with "this would be the best implementation if only everyone else did it the Right Way" where Right Way is either defined as some special snowflake method, or potentially a legitimately better plan that simply no one is going to bother with because it's harder/slower/more expensive. The amount of disconnect that some engineers have between what would be optimal vs what is realistic in actual real-world implementation gives rise to all sorts of interesting disasters.

craig588
Nov 19, 2005

by Nyc_Tattoo
I heard that the Banias and Willamette teams were pitted against each other, and when Willamette became the desktop CPU they weren't interested in input from the other, superior team.

feedmegin
Jul 30, 2008

Methylethylaldehyde posted:

Some big iron financial software system/database thing that's still too expensive to backport to x86 after the millions spent getting it working in the shiny new itanium environment.

Yeah, mostly the same things people were running on PA-RISC boxes or Alpha/VAX before that. Bear in mind HP has been a Unix vendor for a long time and also indirectly ended up buying DEC. VMS runs on Itanium, too.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Cygni posted:

It's less caring about Itanium itself or Intel or its team or whatever, and more caring about VLIW as a concept, to me. The concept seemed so promising, with EPIC and TerraScale and all that, and now it's all pretty much gone except in embedded coprocs and stuff. Sad.

VLIW has always looked great on paper, but also has always had problems translating that to reality outside of those niches you mention, and it's for fairly well understood reasons.

VLIW's core idea is "hey, you can get incredibly high ILP with very simple hardware if you push all the complexities of scheduling off to the compiler, and punt on cross-generation binary compatibility". Earlier VLIWs like Multiflow targeted markets which didn't care about binary compatibility (supercomputers in Multiflow's case). This is also why VLIW has been able to do well in deeply embedded DSP, eg Qualcomm's Hexagon.

(Speaking of Multiflow, Bob Colwell, the interviewee in that oral history I linked, came to Intel from Multiflow. Intel management really should have listened to him; Multiflow's technical work was very well regarded even though the company failed.)

So, despite being pushed as the future of general purpose computing, EPIC had some obvious and known weaknesses in that department. Although they did manage to provide cross-generation binary compatibility, IIRC they still had issues with cross-generation performance regressions (code compiled for one IA-64 core design wouldn't run well on another that was too different).

Another issue was that WhyteRyce's and DrDork's comments are very on point. The architects liked operating without adequate feedback from reality, so IA-64 ended up with a bunch of weird and quirky pet ideas in it. As a result, designing an IA-64 core turned out to be difficult and expensive rather than simple and cheap, and the compiler work was made that much harder. Losing both sides of the VLIW engineering effort tradeoff is not where you want to be.

And... You know that money quote about Itanium's architects simulating a glorious future without having a compiler yet? The compiler problem never got solved; there never was a compiler that was good at making arbitrary source code fly on Itanium. One of the reasons people thought highly of Multiflow's work is that they recognized the supreme importance of the compiler early on, put very smart software engineers on it, and had good coordination between the compiler and architecture teams. IA-64 had this weird disconnect, almost as if the senior ISA designers assumed that because there was an existence proof of a decent VLIW compiler, they could just go make handcrafted assembly language snippets look awesome and let somebody else worry about making compiler output look like that.

What Intel and HP management should have done was require a proof of concept implementation of the compiler, and performance studies based on its output, before committing to anything bigger. Of course, if they'd done that, the Itanium project either would've been killed before being marketed to the public, or would have major differences from the IA-64 we know and hate today.
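As a rough illustration of the compiler job BobHoward describes, here is a minimal greedy list scheduler (a hypothetical toy, far simpler than a real EPIC compiler) that packs ready, independent ops into fixed-width bundles:

```python
def schedule(ops, deps, width=3):
    """ops: op names in program order; deps: op -> set of ops it must follow.
    Returns a list of bundles (groups of ops the hardware issues together)."""
    done, bundles, remaining = set(), [], list(ops)
    while remaining:
        # An op is ready once everything it depends on has been issued.
        ready = [o for o in remaining if deps[o] <= done][:width]
        if not ready:
            raise ValueError("dependency cycle")
        bundles.append(ready)
        done |= set(ready)
        remaining = [o for o in remaining if o not in done]
    return bundles

# Toy kernel: t = a*b + c, then store t.
ops = ["ld_a", "ld_b", "mul", "ld_c", "add", "st"]
deps = {"ld_a": set(), "ld_b": set(), "ld_c": set(),
        "mul": {"ld_a", "ld_b"}, "add": {"mul", "ld_c"}, "st": {"add"}}

print(schedule(ops, deps))
# [['ld_a', 'ld_b', 'ld_c'], ['mul'], ['add'], ['st']]
```

Even in this tiny example the serial tail (mul, add, st) dominates; the compiler can only fill bundles where independent work exists, which is exactly where EPIC's bet on static scheduling got into trouble on branchy code.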

FunOne
Aug 20, 2000
I am a slimey vat of concentrated stupidity

Fun Shoe
I'm sure technical debt played a part in it. Everyone loves the idea of "toss the whole thing and start over, but smarter," but there are real life lessons baked into your current convoluted thing that you have to re-learn with the new one.

So by the time they had the whole new system up and running, the x86 guys had had time to iterate another 3x on their previous designs. Plus, if you have something working you can always bolt on the next new thing rather than building from the ground up.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.
Itanium was just the iAPX 432, a decade or two later. Same problems with expecting magical software.

GRINDCORE MEGGIDO
Feb 28, 1985


Is there anything it excels at?

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
iirc there were a few very specific workloads where Itanium 2 had a performance lead for a couple years, but nothing big enough to make it a true success.

Most Itanium sales volume was HP pushing captive PA-RISC customers onto replacement Itanium systems.

PC LOAD LETTER
May 23, 2005
WTF?!
I wouldn't say it actually excels at anything per se, but it is inherently good at floating point stuff and will beat any x86 chip's x87 or early 64-bit vector (so MMX or 3DNow) FPU performance.

But all VLIW processors do great at FP stuff, so that wasn't surprising. And with the introduction of SSE2 vector units in x86 CPUs around Itanium's "heyday" in the early 2000s, x86 was still able to be at least somewhat competitive on FP workloads, and sometimes beat it there too.

It was clear early on, by 2002 or 2003 when vendors like Dell began dropping it due to lack of sales, that it was going to be a failure, so I'm always baffled when I come across someone today who says it's such an impressive and interesting architecture. My WAG is they just stumbled across some old marketing material or early research on it from the late 90s, before it became obvious that it wasn't going to work as advertised.

BobHoward posted:

Most Itanium sales volume was HP pushing captive PA-RISC customers onto replacement Itanium systems.
That, and by killing the Alpha EV-8 they were trying to ensure that the market would have no alternatives.

PC LOAD LETTER fucked around with this message at 10:08 on Feb 3, 2019

Falcorum
Oct 21, 2010

GRINDCORE MEGGIDO posted:

Is there anything it excels at?

Burning money. :v:

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

GRINDCORE MEGGIDO posted:

Is there anything it excels at?

AFAIK, mainframe-class RAS performance. Stuff where ensuring that every single database query is handled successfully is a life-and-death scenario.

Which ironically has nothing to do with its ISA per se.

mmkay
Oct 21, 2010

PC LOAD LETTER posted:

I wouldn't say it actually excels at anything per se, but it is inherently good at floating point stuff and will beat any x86 chip's x87 or early 64-bit vector (so MMX or 3DNow) FPU performance.

Is there something specific that made it good at floating point math?

PC LOAD LETTER
May 23, 2005
WTF?!
EPIC is a form of VLIW, and pretty much all VLIW processors tend to do a good job with FP-heavy workloads: they're all about getting good performance by exploiting ILP, and since many FP workloads tend to be highly parallel in nature, there is plenty of ILP for them to make use of.
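A toy way to see why FP kernels hand a VLIW machine so much ILP (an invented example, not a real scheduler): measure the critical path of each loop's dependence graph; ops off the critical path can issue in parallel.

```python
def critical_path(deps):
    """deps maps op -> list of ops it depends on. Returns the longest
    dependency chain; total_ops / critical_path ~ available ILP."""
    memo = {}
    def depth(op):
        if op not in memo:
            memo[op] = 1 + max((depth(d) for d in deps[op]), default=0)
        return memo[op]
    return max(depth(op) for op in deps)

N = 8

# FP kernel: c[i] = a[i] * b[i]  -- every multiply is independent.
fp_deps = {f"mul{i}": [] for i in range(N)}

# Branchy/pointer-chasing kernel: each load needs the previous one.
int_deps = {f"load{i}": ([f"load{i-1}"] if i else []) for i in range(N)}

print(N / critical_path(fp_deps))   # 8.0 -- all 8 ops can be packed together
print(N / critical_path(int_deps))  # 1.0 -- a serial chain, nothing to pack
```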

One of the big things Intel was essentially promising with EPIC and the associated compiler work was that they'd finally found a way to make VLIW good at generic and branch-intensive integer workloads, which was why so many were so excited about it at first. People had been trying that for decades and failing.

They're still trying, actually. Google "The Mill" architecture to see the most recent attempt. It's interesting to read about, and may even be pretty good for some niche applications, but so far no one has had any luck making it work as advertised either.

edit: I don't have any links, just a vague recollection of some guys trying to do a Mill implementation with FPGAs a few years back, but as you note they never made their information public. Everything I've heard about it is rumors, and none of them were good; as in, performance was sucky. The whole VLIW approach for general purpose CPU use seems to be cursed. \/\/\/\/\/\/\/\/\/

PC LOAD LETTER fucked around with this message at 15:42 on Feb 4, 2019

feedmegin
Jul 30, 2008

PC LOAD LETTER posted:

They're still trying, actually. Google "The Mill" architecture to see the most recent attempt. It's interesting to read about, and may even be pretty good for some niche applications, but so far no one has had any luck making it work as advertised either.

No one has made it work at all, have they? I see people associated with it talking it up on comp.arch all the time but as far as I'm aware there's no publicly available implementation, even for an FPGA, and never has been. It's vapourware.

BangersInMyKnickers
Nov 3, 2004

I have a thing for courageous dongles

Vanagoon posted:

Itanium and Netburst happened at around the same time, right?

Was someone purposefully trying to sink Intel from the inside? (ahah) It seems like they were loving up as much as possible on purpose during that time.

Netburst was perfectly fine up until Prescott, when they kept slapping more and more steps into the pipeline to ramp up clock speed and hot-glued two cores into the same package. Northwood was Extremely Good for its day.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!

BangersInMyKnickers posted:

Netburst was perfectly fine up until Prescott, when they kept slapping more and more steps into the pipeline to ramp up clock speed and hot-glued two cores into the same package. Northwood was Extremely Good for its day.

Northwood was good, but Willamette was kind of mediocre; at least I remember it losing out to the Athlon T-Bird at the time.

MaxxBot fucked around with this message at 23:57 on Feb 4, 2019

Mr. Smile Face Hat
Sep 15, 2003

Praise be to China's Covid-Zero Policy
I'm just so bummed that they're not gonna have Itanium anymore. I was gonna build me an Itanium rig any day now. I think I'll need a few nanoseconds to get over it.

Lambert
Apr 15, 2018

by Fluffdaddy
Fallen Rib
Plan on building a RISC-V rig in the future, instead.

JnnyThndrs
May 29, 2001

HERE ARE THE FUCKING TOWELS

MaxxBot posted:

Northwood was good, but Willamette was kind of mediocre; at least I remember it losing out to the Athlon T-Bird at the time.

Yeah, Willamette was a wet fart. I had a 1.2 Tualatin P-III at the time and it was faster than a 1.5 Willamette with a zillion dollars worth of RDRAM.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

BangersInMyKnickers posted:

Netburst was perfectly fine up until Prescott, when they kept slapping more and more steps into the pipeline to ramp up clock speed and hot-glued two cores into the same package. Northwood was Extremely Good for its day.

In the decade before Netburst Intel managed to increase clock speed 10x, and Netburst was designed assuming they’d get similar scaling out to 10GHz (at <1V Vcore lol.)

A pipeline that looks good at 5GHz on sims is terrible at 2GHz.
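A back-of-envelope model of that point, with invented numbers: treat the core as 1 IPC except that every mispredicted branch flushes the whole pipeline, so the miss penalty scales with pipeline depth.

```python
def time_per_instr(depth, ghz, miss_rate=0.01):
    """Average ns per instruction for a 1-IPC core that flushes `depth`
    stages on each mispredicted branch (toy model, made-up numbers)."""
    cycle_ns = 1.0 / ghz
    return cycle_ns * (1 + miss_rate * depth)

short_pipe      = time_per_instr(depth=10, ghz=2.0)   # shallow pipe at 2 GHz
deep_pipe_hoped = time_per_instr(depth=30, ghz=10.0)  # the 10 GHz dream
deep_pipe_real  = time_per_instr(depth=30, ghz=2.0)   # the clocks they got

# The deep pipe wins only if the clock scaling materializes:
print(deep_pipe_hoped < short_pipe < deep_pipe_real)  # True
```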

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler

JnnyThndrs posted:

Yeah, Willamette was a wet fart. I had a 1.2 Tualatin P-III at the time and it was faster than a 1.5 Willamette with a zillion dollars worth of RDRAM.

Yeah, scaling that up ~2x I replaced a 3.0 Northwood with a Dothan at 2.56 and it was basically unbeatable for what I cared about at the time. That system carried me all the way through the Core 2 era until the end of 2008 when I bought an i7-920.

Eletriarnation fucked around with this message at 03:33 on Feb 5, 2019

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Eletriarnation posted:

Yeah, scaling that up ~2x I replaced a 3.0 Northwood with a Dothan at 2.56 and it was basically unbeatable for what I cared about at the time. That system carried me all the way through the Core 2 era until the end of 2008 when I bought an i7-920.

When everyone first saw the Pentium M, we were all like "this can't possibly be good"; then we saw the benchmarks and the opinion changed to "what's the point of the Pentium 4 again?"

BIG HEADLINE
Jun 13, 2006

"Stand back, Ottawan ruffian, or face my lumens!"
Yeah, Intel's Israeli factory really saved their asses with Banias, Dothan, and Yonah.

Navaash
Aug 15, 2001

FEED ME


BIG HEADLINE posted:

Yeah, Intel's Israeli factory really saved their asses with Banias, Dothan, and Yonah.

I can't wait for the Nier design

movax
Aug 30, 2008

Lambert posted:

Plan on building a RISC-V rig in the future, instead.

A Western Digital SwerV perhaps! Or something from SiFive...

Llyd
Oct 9, 2012

feedmegin posted:

Methylethylaldehyde posted:

Some big iron financial software system/database thing that's still too expensive to backport to x86 after the millions spent getting it working in the shiny new itanium environment.
Yeah, mostly the same things people were running on PA-RISC boxes or Alpha/VAX before that. Bear in mind HP has been a Unix vendor for a long time and also indirectly ended up buying DEC. VMS runs on Itanium, too.

HPE side, mainframe finance software and the like were/are also running on Itanium boxes under the HP NonStop (Tandem) OS.
Since they control the whole stack on this platform (and charge big money for it), they've been working on transition tools for some time.
These last few years, they've pushed everyone towards x86 and stopped providing upgrades for Itanium rigs.

Apart from some fuckery compiler-side it's relatively painless, and the performance boost with 2016+ x86 is quite the thing.
Doesn't alleviate the pain of having to work with Tandem, but that's another subject.

Bulgakov
Mar 8, 2009


рукописи не горят

i’m browsing internet2 on my itanium box rn

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map

Lambert posted:

Plan on building a RISC-V rig in the future, instead.

movax posted:

A Western Digital SwerV perhaps! Or something from SiFive...

What is the goonpinion on RISC-V anyways? I've seen a lot of marketing and open invitations to conferences about it...is it living up to promises?

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Sidesaddle Cavalry posted:

What is the goonpinion on RISC-V anyways? I've seen a lot of marketing and open invitations to conferences about it...is it living up to promises?

It’s an okay RISC ISA, but merely that. I don’t think it has anything which gives it any real technical advantage over ARMv8.

But with any ISA, a decent ISA design is not enough by itself. You need awesome implementations too. It’s here where you should be skeptical since there has yet to be any significant push to design cutting edge RISC-V cores and chips. There’s been interest in using it in deeply embedded systems where nobody has to care about either compatibility with a large base of binary software or best in class performance, but not more than that. (And the reason for the interest in those spaces is basically not wanting to pay licensing fees to ARM rather than technical advantage.)

I don’t think much of WD’s propaganda, which seems to be WD execs becoming irrationally exuberant about their engineering team telling them “hey we could design our own core and not pay ARM”. I did a brief read through of some of the source of their open sourced RISC-V core, and it did not take long to conclude that it will not compete directly with ARM in application processors (cell phones and up) or servers. As I expected, it looks like it’s a relatively simple core suitable for high end embedded control. Should be much faster than something like a Cortex-M0, but way short of ARM’s Cortex A series, which in turn can’t touch Apple’s wind-themed (Mistral/Typhoon/etc) ARM cores.

I have no idea wtf WD management is smoking to think that data centers are going to be at all interested in running application software on the lovely embedded controllers in their disk drives.

forbidden dialectics
Jul 26, 2005





Transmeta will make ISAs completely obsolete, I don't know what you guys are talking about!!

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

BobHoward posted:

I have no idea wtf WD management is smoking to think that data centers are going to be at all interested in running application software on the lovely embedded controllers in their disk drives.

There was a link a few pages ago to a special snowflake SSD controller that allowed for the offloading of H.264/H.265 video compression/decompression onto the SSD itself. It doesn't make any sense if you're a normal sane user with an Intel CPU and only one or two drives, but it makes a lot of sense if you're a business with a server farm and dozens or hundreds of drives, if it means you can move away from having to run multiple Xeon CPUs on expensive servers and instead just let every SSD take care of itself.

Kinda niche market and all, but there is an actual use-case for that sort of thing, especially if they can make a compelling argument based on price.


movax
Aug 30, 2008

Sidesaddle Cavalry posted:

What is the goonpinion on RISC-V anyways? I've seen a lot of marketing and open invitations to conferences about it...is it living up to promises?

I am bullish on the commercialization of it. To me, it's all about quantity: why put effort into your own ISA (silicon + software toolchain support) unless you'll see the economies of scale from shipping fuckloads of them? This also includes the calculation of paying someone else a licensing fee for their ISA/core in each of your parts. There are relatively unknown ISAs/processor types that have shipped billions of units; smartphones have delivered Hexagon DSPs for Qualcomm, and CEVA has put out a ton of DSPs as well. Cadence Tensilica cores show up, and the Japanese still love utilizing MIPS cores. Hell, consoles were responsible for the giant volume of MIPS R3000 and R5900 parts shipped.

With the license being open, everyone should now be able to make a trade on implementing their own silicon that's compliant with the variant(s) of the RISC-V ISA they choose to support, and then sponge off shared compiler development. For someone like WD, who ships a hard-drive controller (SMOOTH / Marvell / etc) in each disk they sell, removing the cost of that ARM license that gets passed up is appealing. If you're Allwinner, Rockchip, or someone else putting out tons of hardware in SE Asia, it's even more appealing to improve your margins.

NVIDIA moved to RISC-V for their embedded micros in their GPUs, but I don't remember what they've moved to for the Jetsons. The Jetson TX2 IIRC has a full-fledged Cortex-A9 instantiated in it just as the audio-subsystem processor.

It is cool, though, that you can instantiate and implement them in FPGA-based designs. This eats into the "market share" of PicoBlaze/MicroBlaze/Nios II/etc., but I feel like if I were Xilinx or Altera, saving the resources spent supporting my own synthesizable processor and instead hopping onto the RISC-V train would be the way to go. This comes close to my sandbox rant on open-source silicon / open FPGA / all that poo poo; I'll save that for later.

BobHoward posted:

It’s an okay RISC ISA, but merely that. I don’t think it has anything which gives it any real technical advantage over ARMv8.

But with any ISA, a decent ISA design is not enough by itself. You need awesome implementations too. It’s here where you should be skeptical since there has yet to be any significant push to design cutting edge RISC-V cores and chips. There’s been interest in using it in deeply embedded systems where nobody has to care about either compatibility with a large base of binary software or best in class performance, but not more than that. (And the reason for the interest in those spaces is basically not wanting to pay licensing fees to ARM rather than technical advantage.)

I don’t think much of WD’s propaganda, which seems to be WD execs becoming irrationally exuberant about their engineering team telling them “hey we could design our own core and not pay ARM”. I did a brief read through of some of the source of their open sourced RISC-V core, and it did not take long to conclude that it will not compete directly with ARM in application processors (cell phones and up) or servers. As I expected, it looks like it’s a relatively simple core suitable for high end embedded control. Should be much faster than something like a Cortex-M0, but way short of ARM’s Cortex A series, which in turn can’t touch Apple’s wind-themed (Mistral/Typhoon/etc) ARM cores.

I have no idea wtf WD management is smoking to think that data centers are going to be at all interested in running application software on the lovely embedded controllers in their disk drives.

I think the WD CTO office is correct in utilizing RISC-V as a way to insource their HDD/SSD controller efforts. I agree with you that introducing another competitor (along with SiFive) in the higher-end space (like the RV64/EV64 cores) to compete in the datacenter market seems like a crack-smoking idea. Qualcomm, Cavium, and others have toyed in that arena, and Apple bought PA Semi and stood up an entire team implementing custom SoCs with a full ARM architecture license, so I'm not sure who the target market here is.

Of course, all of this pales next to the hilarity of some moron at ARM putting up the anti-RISC-V website and Streisand'ing themselves.
