Kazinsal
Dec 13, 2011



A few people have said they wanted a place to talk about non-x86 architectures and platforms, both current and past, so here's a thread for them! I only know a couple well enough to write about, so everyone feel free to contribute writeups on platforms in your domain!

A brief rundown of some currently in production real-world architectures:
  • ARM, which originally stood for Acorn RISC Machine (later Advanced RISC Machines), is by far the most common non-x86 CPU architecture. It's a RISC ISA that dates back to the mid-1980s and is currently the king of embedded SoCs. It's in basically every smartphone and most tablets, and Apple is midway through transitioning their x86 Macs over to their own Apple Silicon ARM platform. ARM has most of the mobile space, and is emerging into the desktop space thanks to Apple and into the server space thanks to companies such as Marvell and Ampere Computing.
  • RISC-V is an open-source, constantly evolving ISA that anyone (with sufficient fabrication capabilities) can implement, manufacture, and market CPUs for. It's only been truly "stable" for a few years now so it's still a newcomer in all spaces, but offerings from companies like SiFive (who freely lets you design and test custom RISC-V cores based on their core IP in their online designer tool) are putting actual working RISC-V boards in the hands of developers and consumers through their own channels as well as through third parties such as the BeagleBoard.org project.
  • MIPS is another 80s RISC design that is still getting updated (the latest release of the MIPS64 ISA spec is from 2014). I don't know much about what MIPS is used for other than in a number of deeply embedded situations such as small networking appliances (Ubiquiti's routers are MIPS-based, for example), but it seems cool!
  • PowerPC is a RISC architecture from IBM that evolved from their original POWER RISC machines and eventually supplanted the original POWER ISA before confusingly being renamed "Power ISA". Everyone, including IBM, still calls it PowerPC though. Apple used PowerPC from the mid-90s through the mid-2000s, the seventh generation game consoles used custom PowerPC CPUs, and a bunch of modern satellites still use PowerPC because fairly speedy radiation-hardened PowerPC chips are a thing that works quite well, apparently.

There are also some historic architectures that are still of note:
  • SPARC was Sun's RISC that replaced the Motorola 68000 family in their line of Unix (SunOS) workstations and servers. It also ended up in a few non-Sun products, such as Tadpole's SPARCbook series of laptops in the mid-late 90s. It's pretty much dead now, but you can still find decade-old SPARC gear for not a lot of money on eBay. SPARC was originally a 32-bit architecture, but 64-bit SPARC (formally SPARC V9) was introduced in 1993, so pretty much any SPARC machine you get that isn't proper vintage kit is going to be a 64-bit RISC machine. Linux and the BSDs all maintain up-to-date 64-bit SPARC ports.
  • VAX was a CISC minicomputer architecture that replaced DEC's venerable PDP-11 series by taking the same general ISA theory and extending it to 32 bits with fully baked-in support for virtual memory management and privilege levels (thus the name, Virtual Address eXtension). If you find a working VAX, I will be quite envious of you, but not of your power bill.
  • Motorola's 68000 (m68k) family was a microcomputer-focused CISC family that is notable for being the CPU that powered the first decade and a half of Macintosh computers. 68k machines are pretty much a curiosity at this point.
  • I don't know nearly enough about PA-RISC, NS32k, Alpha, or Super-H to talk about them.

If anyone wants to do any writeups, feel free! I'd love to link them in the OP because I'm sure there are goons out there who have way more experience with developing for and using these platforms than I do. Also please let me know if I've massively screwed up anything in the OP or if there are things I should add. Now go forth and argue about what the best OS to run on an AlphaServer 800 is, and why it's Windows NT!


Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.
As a minor correction, POWER originally referred to IBM's in-house designs for large chips intended to power RS/6000 workstations (thanks, PCjr sidecar! That'll teach me to post before coffee on a Friday...). PowerPC was the name given to the cooperative efforts of IBM, Motorola, and Apple (called AIM) to create microprocessors built on the same fundamental architecture. Apple eventually switched to x86 because IBM and Motorola couldn't prioritize a low-power chip to replace the G4, and the G5 - itself a modified Power4 design with an AltiVec unit, some internal changes, and a hobbled amount of cache to keep power and thermals in check - was incapable of being adapted for a mobile form factor at acceptable performance. PowerPC is effectively dead, though Freescale was creating designs that were promising for some time and Apple wrestled with the decision internally for a while.

Power is built with performance as a primary concern rather than power optimization. Power9 chips feature support for SMT4 or SMT8 (so a quad core chip would expose itself as 16 or 32 execution threads), relatively shallow pipelines, quad-channel memory and 40+ PCIe lanes, and a massive number of registers - according to Wikipedia the breakdown is:

32× 64/32-bit general purpose registers
32× 64-bit floating point registers
64× 128-bit vector registers

They are built chiefly for server applications and are not SIMD powerhouses compared to modern designs from Intel and AMD, but I've wanted to play with one for several years. Raptor Computing manufactures several motherboards that are fully open source and use the smaller Sforza form factor of chip. It'd set me back as much as a solid Threadripper, but I'm still thinking about it...

Edit: MIPS was the CPU design SGI pushed for all its in-house systems, and it even made its way to the N64. It's slowly ebbed in general relevance since, though as you say it still has a presence in embedded and SFF computing. Come to think of it, I had a Blu-ray player driven by one.

Hasturtium fucked around with this message at 15:00 on Jun 18, 2021

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Hasturtium posted:

As a minor correction, POWER originally referred to IBM's in-house designs for large chips intended to run z/OS and the like. PowerPC was the name given to the cooperative efforts of IBM, Motorola, and Apple (called AIM) to create microprocessors built on the same fundamental architecture. Apple eventually switched to x86 because IBM and Motorola couldn't prioritize a low-power chip to replace the G4, and the G5 - itself a modified Power4 design with an AltiVec unit, some internal changes, and a hobbled amount of cache to keep power and thermals in check - was incapable of being adapted for a mobile form factor at acceptable performance. PowerPC is effectively dead, though Freescale was creating designs that were promising for some time and Apple wrestled with the decision internally for a while.

Power is built with performance as a primary concern rather than power optimization. Power9 chips feature support for SMT4 or SMT8 (so a quad core chip would expose itself as 16 or 32 execution threads), relatively shallow pipelines, quad-channel memory and 40+ PCIe lanes, and a massive number of registers - according to Wikipedia the breakdown is:

32× 64/32-bit general purpose registers
32× 64-bit floating point registers
64× 128-bit vector registers

They are built chiefly for server applications and are not SIMD powerhouses compared to modern designs from Intel and AMD, but I've wanted to play with one for several years. Raptor Computing manufactures several motherboards that are fully open source and use the smaller Sforza form factor of chip. It'd set me back as much as a solid Threadripper, but I'm still thinking about it...

Edit: MIPS was the CPU design SGI pushed for all its in-house systems, and it even made its way to the N64. It's slowly ebbed in general relevance since, though as you say it still has a presence in embedded and SFF computing. Come to think of it, I had a Blu-ray player driven by one.

Power was never used for any of the z/OS mainframes. It was IBM’s RISC system designed for the RS/6000 workstations originally. The mainframe chips are the exact opposite of RISC. They’re weird and kind of cool.

There were a lot of different Power designs for different applications with wildly different performance and price targets. The inability to produce an Apple laptop CPU was more about AIM corporate infighting than limits of the arch.

Power effectively stalled out after IBM sold its fabs to GlobalFoundries, who gave up on 10/7nm, which put POWER9 behind and took POWER10/11 out of the running.

MIPS is effectively dead. SGI bought the IP designer, then switched to Windows and Itanium and sold it off to a supercomputing startup, which went bankrupt and sold it to a Chinese company that tried to use it for one of China's indigenous supercomputer projects. I think they gave up on that architecture and spun it off, and the spinoff is now trying to pivot to RISC-V like everyone else.

Itanium is worth a mention as well. Intel corporate strategy wanted a clean-sheet 64-bit ISA to get away from x86 licenses; that, plus market consolidation of all the different legacy workstation CPUs like PA-RISC from HP, plus the end of Dennard scaling killing the expected frequency scaling, plus over-reliance on compilers to make VLIW/EPIC work, plus Intel corporate culture, was a perfect storm for a market disaster. It was a real pain in the rear end to work with.

feedmegin
Jul 30, 2008

Kazinsal posted:

  • VAX was a CISC minicomputer architecture that replaced DEC's venerable PDP-11 series by taking the same general ISA theory and extending it to 32 bits with fully baked-in support for virtual memory management and privilege levels (thus the name, Virtual Address eXtension). If you find a working VAX, I will be quite envious of you, but not of your power bill.

My old workplace had a bunch of MicroVAXes in storage up until about 2016? But then chucked them all out to be recycled. And wouldn't let me sneak out one myself :mad:

Those are no worse than any other 90s-ish workstation for power use, really; they looked like a RISC workstation from the era too.

feedmegin
Jul 30, 2008

PCjr sidecar posted:

Itanium is worth a mention as well. Intel corporate strategy wanted a clean-sheet 64-bit ISA to get away from x86 licenses; that, plus market consolidation of all the different legacy workstation CPUs like PA-RISC from HP, plus the end of Dennard scaling killing the expected frequency scaling, plus over-reliance on compilers to make VLIW/EPIC work, plus Intel corporate culture, was a perfect storm for a market disaster. It was a real pain in the rear end to work with.

It didn't help that they cooperated directly on it with HP, who wanted a replacement for PA-RISC (and were its main users in recent times), and both sides basically jammed as many features as they could into a CPU that was designed to be as simple in hardware terms as possible to attain a high clock rate.

karoshi
Nov 4, 2008

"Can somebody mspaint eyes on the steaming packages? TIA" yeah well fuck you too buddy, this is the best you're gonna get. Is this even "work-safe"? Let's find out!
Ground floor!
Worth noting RV comes from one of the original Berkeley RISC guys. He also wrote a book and invented RISC or sth. Total failure. MIPS was also used in the DECstation line of products: UNIX machines with the infamous puck mice. I played Netrek on them. 16 MB are enough for running UNIX, X11R4, twm and 4 xterms :colbert:
Then Alpha was released, and it was absurdly fast - going from a DECstation at like 25MHz to a 150MHz Alpha was like SSD-fast. Also worth noting Windows NT originally ran on MIPS, Alpha, and x86 - remember the ACE alliance? https://en.wikipedia.org/wiki/Advanced_Computing_Environment
MIPS is (was?) very popular in home networking routers. Broadcom and Realtek make SoCs you're likely to find in ADSL routers and $15 Wi-Fi APs. You're starting to see quad-core ARM chips in high-end APs now. The OpenWrt project has a list of supported hardware if you feel the need to janitor a MIPS CPU.

e: MIPS was in Cisco routers back then. The 7200/7500 series used them. (Juniper went x86 from the beginning, but they were doing the actual switching on ASICs.) Broadcom also had a lot of MIPS-based Ethernet switch SoCs. I assume they're going ARM now?

karoshi fucked around with this message at 15:16 on Jun 18, 2021

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.

PCjr sidecar posted:

Power was never used for any of the z/OS mainframes. It was IBM’s RISC system designed for the RS/6000 workstations originally. The mainframe chips are the exact opposite of RISC. They’re weird and kind of cool.

Agh, seen and noted. It's been a long time since I thought about it, and my brain misfired. I would love to read more about their mainframe chips; maybe I'll hunt something down over my lunch break.

Friends of mine rooted for Alpha and were crestfallen when it was mothballed, but outside of a burly FPU and general 90s workstation competence I never quite got the big deal. Can anybody fill me in?

repiv
Aug 13, 2009

The Mill is going to obsolete all of these legacy architectures

any day now

:corsair:

karoshi
Nov 4, 2008

"Can somebody mspaint eyes on the steaming packages? TIA" yeah well fuck you too buddy, this is the best you're gonna get. Is this even "work-safe"? Let's find out!

Hasturtium posted:

Friends of mine rooted for Alpha and were crestfallen when it was mothballed, but outside of a burly FPU and general 90s workstation competence I never quite got the big deal. Can anybody fill me in?

Old memories: It was very fast. It was ~the future~. At launch time you had poo poo-slow SPARCs, M680x0s and tiny MIPSen to choose from for your workstation/server. x86 was a joke (dual-issue pentium and OOO pentium-pro/pentium2 fixed it a bit). SPECint/SPECfp were the go-to benchmarks back then.

DEC promised a 1000x improvement in 10 years: 10x from clocks (possible, 150MHz -> 1.5GHz), 10x from arch (possible, in-order -> out-of-order), 10x from stolen socks or sth, I don't remember. There's an article by one of the original engineers, who then went on to do the StrongARM processors. He commented on how for Alpha they had to innovate in power delivery; CPUs weren't 300W monsters back then, and a DECstation with a 16MHz MIPS R3000 didn't need a 2-pound copper heatsink. Some of the OG Alphas had a weird heatsink connector made of 2 thick prongs coming off the CPU package, as seen on this page: https://www.cpu-world.com/CPUs/21064/index.html. So they were pushing X amps into the CPU, which was unheard of back then. (He then did the opposite for StrongARM.)

The ISA was new, designed for the "21st century": 64-bit only when everything else was 32-bit. No legacy. Also designed for high performance - memory accesses had to be aligned on the first CPUs and so on (this was corrected later on, IIRC; trapping on some poo poo-code is not good for performance). Memory coherency was also quite decoupled, oriented towards multi-core efficiencies. The kernel docs for memory barriers still say: "- And then there's the Alpha." https://www.kernel.org/doc/Documentation/memory-barriers.txt
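
Here's what that dependent-load weirdness looks like in practice - a minimal C11 sketch of the publish/consume pattern (my own toy example, not from the kernel docs; assumes a toolchain that ships the optional C11 <threads.h>):
code:
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

/* A toy message with a payload and an atomically published pointer. */
struct msg { int payload; };
static struct msg slot;
static _Atomic(struct msg *) published;

static int producer(void *arg)
{
    (void)arg;
    slot.payload = 42;
    /* release: the payload write is ordered before the pointer publish */
    atomic_store_explicit(&published, &slot, memory_order_release);
    return 0;
}

static int consumer(void *arg)
{
    (void)arg;
    struct msg *m;
    while (!(m = atomic_load_explicit(&published, memory_order_acquire)))
        ;
    /* On nearly every other CPU a plain relaxed load of the pointer would
       do, because the data dependency from m to m->payload orders the two
       reads for free. Alpha's partitioned cache banks can hand you a stale
       m->payload anyway, so the acquire (or a read barrier) is genuinely
       load-bearing there - it's why C11 grew memory_order_consume. */
    printf("%d\n", m->payload);
    return 0;
}

int main(void)
{
    thrd_t p, c;
    thrd_create(&p, producer, NULL);
    thrd_create(&c, consumer, NULL);
    thrd_join(p, NULL);
    thrd_join(c, NULL);
}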

Compaq buying DEC, and HP later burying what was left, was a downer at the time. A printer manufacturer ending up with a highly innovative company. Killing AXP to push Itanium, smdh. But consolidation was in full force. There wasn't much of an economical point in sustaining your own proprietary ISA.

e: also AMD 29k and m88k, hello NeXT, lol.

karoshi fucked around with this message at 15:42 on Jun 18, 2021

Kazinsal
Dec 13, 2011



karoshi posted:

e: MIPS was in Cisco routers back then. The 7200/7500 series used them. (Juniper went x86 from the beginning, but they were doing the actual switching on ASICs.) Broadcom also had a lot of MIPS-based Ethernet switch SoCs. I assume they're going ARM now?

So was NS32k! The management plane of some Catalyst switches used them, I believe. And really early IOS devices were m68k. I have this awesome super nerdy book from 1999 about the internals of Cisco routers and the then-current IOS 12.0 kernel, and it's got some really neat notes about the platform buses used in old Cisco routers, with a whole chapter dedicated to the crowning achievement of the Cisco 12000 Series Gigabit Switch Router! :hellyeah:

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

karoshi posted:

DEC promised a 1000x improvement in 10 years: 10x from clocks (possible, 150MHz -> 1.5GHz), 10x from arch (possible, in-order -> out-of-order), 10x from stolen socks or sth, I don't remember. There's an article by one of the original engineers, who then went on to do the StrongARM processors. He commented on how for Alpha they had to innovate in power delivery; CPUs weren't 300W monsters back then, and a DECstation with a 16MHz MIPS R3000 didn't need a 2-pound copper heatsink. Some of the OG Alphas had a weird heatsink connector made of 2 thick prongs coming off the CPU package, as seen on this page: https://www.cpu-world.com/CPUs/21064/index.html. So they were pushing X amps into the CPU, which was unheard of back then. (He then did the opposite for StrongARM.)

The ISA was new, designed for the "21st century": 64-bit only when everything else was 32-bit. No legacy. Also designed for high performance - memory accesses had to be aligned on the first CPUs and so on (this was corrected later on, IIRC; trapping on some poo poo-code is not good for performance). Memory coherency was also quite decoupled, oriented towards multi-core efficiencies. The kernel docs for memory barriers still say: "- And then there's the Alpha." https://www.kernel.org/doc/Documentation/memory-barriers.txt

It was much worse than just requiring aligned memory accesses. Plenty of 1980s and 1990s RISCs did that without fatal problems - you just write your C compilers to lay out structs with padding to maintain alignment for all members, and so on. (Padding for alignment is common even on x86, because even with HW support for misaligned accesses, they're still frequently slower than aligned accesses. This isn't something that can be worked around, really; you're always going to take a hit when a load spans two cache lines.)
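
To make the padding thing concrete, here's a toy struct (offsets assume a typical LP64 target, nothing Alpha-specific):
code:
#include <stdio.h>
#include <stddef.h>

/* The compiler pads so each member sits at a multiple of its own
   alignment, then pads the tail so arrays of the struct stay aligned. */
struct example {
    char   tag;    /* offset 0, then 7 bytes of padding */
    double value;  /* offset 8: needs 8-byte alignment */
    short  count;  /* offset 16, then 6 bytes of tail padding */
};

int main(void)
{
    printf("value at %zu, count at %zu, sizeof %zu\n",
           offsetof(struct example, value),
           offsetof(struct example, count),
           sizeof(struct example));
    /* typically prints: value at 8, count at 16, sizeof 24 */
}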

The real issue was that DEC, in its infinite wisdom, decided that 32- and 64-bit loads and stores were all you got. Alpha had no support for reading and writing 16-bit or 8-bit values. This made certain things very, very difficult to do, most notably C strings.

According to another-dave's answer to a question about this, this was probably a blind spot on DEC's part. Their own software stack, with VMS as the OS and most software not written in C, didn't really need native support for 16- or 8-bit loads and stores.

But DEC also wanted to sell the Alpha on the open market, not just use it in their own systems, and if they were going to do that, they were going to have to run C-based operating systems like UNIX and Windows NT, both of which have a very high dependence on C strings, which demand byte granularity loads and stores to implement with any kind of efficiency.

So, the first-gen Alpha processors were fairly useless for anything other than VMS. This hurt Alpha's initial appeal quite a bit.
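
For flavor, here's roughly what a single byte store had to turn into on those early chips, rendered as C. The real thing was a short LDQ_U/INSBL/MSKBL/STQ_U instruction sequence; this toy just shows the idea, and assumes little-endian byte numbering like Alpha's:
code:
#include <stdint.h>
#include <stdio.h>

/* Storing one byte when all you have is aligned 64-bit loads/stores:
   read the containing quadword, splice the byte in, write it back. */
static void store_byte(uint8_t *p, uint8_t v)
{
    uint64_t *q = (uint64_t *)((uintptr_t)p & ~(uintptr_t)7);
    unsigned shift = ((uintptr_t)p & 7) * 8;
    uint64_t word = *q;
    word &= ~((uint64_t)0xff << shift); /* mask out the old byte */
    word |= (uint64_t)v << shift;       /* insert the new one */
    *q = word; /* note: this read-modify-write also tramples concurrent
                  writers to neighbouring bytes, another headache for
                  early multithreaded Alpha code */
}

int main(void)
{
    uint64_t q = 0;
    store_byte((uint8_t *)&q + 3, 0x5a);
    printf("%016llx\n", (unsigned long long)q); /* 000000005a000000 */
}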

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Kazinsal posted:

  • RISC-V is an open-source, constantly evolving ISA that anyone (with sufficient fabrication capabilities) can implement, manufacture, and market CPUs for. It's only been truly "stable" for a few years now so it's still a newcomer in all spaces, but offerings from companies like SiFive (who freely lets you design and test custom RISC-V cores based on their core IP in their online designer tool) are putting actual working RISC-V boards in the hands of developers and consumers through their own channels as well as through third parties such as the BeagleBoard.org project.
  • MIPS is another 80s RISC design that is still getting updated (the latest release of the MIPS64 ISA spec is from 2014). I don't know much about what MIPS is used for other than in a number of deeply embedded situations such as small networking appliances (Ubiquiti's routers are MIPS-based, for example), but it seems cool!

The status of MIPS is complicated.

MIPS64 ISA ownership has bounced around a bunch of times, and is a little murky. If you want to trawl through news articles on sites like this:

https://www.eejournal.com/?s=mips

maybe you can make sense out of it. Or maybe you can't. I can't.

As you can see from another story there, MIPS Technologies recently emerged from bankruptcy and announced that its new designs would be RISC-V. And in a sense, RISC-V is MIPS TNG: it came out of Berkeley, from the other half of the original academic RISC crowd, and it takes a lot of inspiration from MIPS, even though it isn't compatible with MIPS.

There's also Loongson! Loongson is kinda an extension of the Chinese state, and has been on a long program to make China independent of Western CPU IP, since changing Western political whims can affect whether Chinese companies are allowed to use said IP. For a long time, Loongson built MIPS64 CPUs, and some of the MIPS ISA ownership murkiness arises from various deals which seemed aimed at transferring ownership of MIPS to Chinese companies. They seem to have given up on gaining control of the actual MIPS ISA and recently announced LoongArch, which looks a lot like MIPS64 but with a new encoding and a bunch of other changes.

MIPS as we knew it is probably dead. It's technically still out there, so someone could pick it up and push it again, but currently the parties likely to be interested in that are instead pursuing forks which diverge from direct compatibility with the original MIPS ISA.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Supercomputing is a great source of weird CPU architectures that got destroyed by Intel's process advantage; look at this thing: https://en.m.wikipedia.org/wiki/Cray_XMT

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
Also, on ARM:

It should be noted that 64-bit ARM, aka aarch64 or A64, isn't truly a descendant of the 1980s 32-bit ARM, which has been retroactively named aarch32 or A32. aarch64 is a mostly-clean-sheet redesign.

Why is this important to highlight? Because AMD chose the other way for x86-64. It used x86 prefix bytes to add the new 64-bit instructions, but even when the CPU is in 64-bit mode it's still legal to execute old 32-bit instructions. You can write shims to allow 32-bit code to call into 64-bit libraries, and vice versa.

With aarch64, the CPU's decoders are either in aarch64 mode where they recognize only the new 64-bit instruction encoding, or in aarch32 mode where they only understand the old 32-bit encoding. The encodings are too incompatible to support both at the same time. Decoder mode switches are only possible as a side effect of privilege level changes - hypervisor entry/exit or kernel entry/exit - so userspace 64-bit code can never call 32-bit libraries, or vice versa.

More importantly, mode support is optional. The ARMv8 spec is written to allow both aarch32-only and aarch64-only implementations. Apple went 64-bit only in their A-series iPhone/iPad chips a long time ago, and hasn't changed course now that they're transitioning the Mac to Arm. Arm itself has made some announcements about future cores transitioning to 64-bit only. So, the future of Arm is a relatively clean break from 1980s Arm.

Gwaihir
Dec 8, 2009
Hair Elf

Hasturtium posted:

As a minor correction, POWER originally referred to IBM's in-house designs for large chips intended to power RS/6000 workstations (thanks, PCjr sidecar! That'll teach me to post before coffee on a Friday...). PowerPC was the name given to the cooperative efforts of IBM, Motorola, and Apple (called AIM) to create microprocessors built on the same fundamental architecture. Apple eventually switched to x86 because IBM and Motorola couldn't prioritize a low-power chip to replace the G4, and the G5 - itself a modified Power4 design with an AltiVec unit, some internal changes, and a hobbled amount of cache to keep power and thermals in check - was incapable of being adapted for a mobile form factor at acceptable performance. PowerPC is effectively dead, though Freescale was creating designs that were promising for some time and Apple wrestled with the decision internally for a while.

Power is built with performance as a primary concern rather than power optimization. Power9 chips feature support for SMT4 or SMT8 (so a quad core chip would expose itself as 16 or 32 execution threads), relatively shallow pipelines, quad-channel memory and 40+ PCIe lanes, and a massive number of registers - according to Wikipedia the breakdown is:

32× 64/32-bit general purpose registers
32× 64-bit floating point registers
64× 128-bit vector registers

They are built chiefly for server applications and are not SIMD powerhouses compared to modern designs from Intel and AMD, but I've wanted to play with one for several years. Raptor Computing manufactures several motherboards that are fully open source and use the smaller Sforza form factor of chip. It'd set me back as much as a solid Threadripper, but I'm still thinking about it...

Edit: MIPS was the CPU design SGI pushed for all its in-house systems, and it even made its way to the N64. It's slowly ebbed in general relevance since, though as you say it still has a presence in embedded and SFF computing. Come to think of it, I had a Blu-ray player driven by one.

If you really wanted to make extremely poor decisions regarding Power chips, I have an IBM Power 770 that we just replaced with a Power 9 last year as surplus. It'd cost a loving fortune to ship, I'm sure, but it's 2x CECs with 256gb of RAM and 4 of the 8 core Power7+ chips in it.

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.

Gwaihir posted:

If you really wanted to make extremely poor decisions regarding Power chips, I have an IBM Power 770 that we just replaced with a Power 9 last year as surplus. It'd cost a loving fortune to ship, I'm sure, but it's 2x CECs with 256gb of RAM and 4 of the 8 core Power7+ chips in it.

What would you want for that behemoth?

Mr.Radar
Nov 5, 2005

You guys aren't going to believe this, but that guy is our games teacher.
Thanks for making this thread. I actually bought a Raptor Computing Systems Blackbird (with the 8-core CPU) back at the start of the pandemic when I was afraid they might go out of business if there was a recession. Fortunately they haven't, but I have also barely even booted that system since I put it together (partly because I didn't have room in my apartment for another proper desktop PC setup, so it's incredibly uncomfortable for me to actually use the system). You can AMA about it, but due to using it so little I probably won't be able to answer any questions about "daily" uses. One great resource I found was the Talospace blog, made by the (former) maintainer of Classilla and TenFourFox (for Classic MacOS and PPC OSX respectively). He has both the big boy Talos II workstation as his main daily-driver PC and a Blackbird as an HTPC (!) and he's also (slowly) working on porting the Firefox JS JIT to POWER.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

My favorite PPC was the PowerPC 615 (although ineligible for this thread), which had both an x86 core and a 32/64-bit PPC core and was pin-compatible with the Pentium. Killed because Microsoft wouldn't support it. Apparently, it could also do 68k?

A lot of future Transmeta people worked on it, apparently.

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.

Mr.Radar posted:

Thanks for making this thread. I actually bought a Raptor Computing Systems Blackbird (with the 8-core CPU) back at the start of the pandemic when I was afraid they might go out of business if there was a recession. Fortunately they haven't, but I have also barely even booted that system since I put it together (partly because I didn't have room in my apartment for another proper desktop PC setup, so it's incredibly uncomfortable for me to actually use the system). You can AMA about it, but due to using it so little I probably won't be able to answer any questions about "daily" uses. One great resource I found was the Talospace blog, made by the (former) maintainer of Classilla and TenFourFox (for Classic MacOS and PPC OSX respectively). He has both the big boy Talos II workstation as his main daily-driver PC and a Blackbird as an HTPC (!) and he's also (slowly) working on porting the Firefox JS JIT to POWER.

What are your general impressions? What were you hoping to use it for? Have you considered setting up a little USB switch to toggle between desktops, and feeding different inputs into the monitor to facilitate swapping between them?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
We have a few Raptor Talos II systems for testing at work and they're nice; it is great to have a POWER system in an ATX form factor with a relatively friendly BIOS boot (POWER8s were usually a huge hassle).

They are more fragile than most x86 systems, though; we have killed a couple just with our reboot/power-cycling tests. Not something most people will have to do, but it was interesting. I haven't looked at debugging them, but I suspect power regulator issues.

For a while, pre-Epyc Rome, they were the best performers for throughput from a PCIe slot, being Gen4 x16! They did seem to get bottlenecked when trying to really saturate it with fio to a bunch of NVRAM drives, possibly due to the amount of cache. Ours are the lesser 4-core Sforza CPUs, which have less L3 cache and 4 RAM channels, so we were hitting the limits there it seemed, from working with some IBM FAEs. On the Romes the 8-channel DDR4 and much higher L3 let us hit line rate on a PCIe slot.

Mr.Radar
Nov 5, 2005

You guys aren't going to believe this, but that guy is our games teacher.

Hasturtium posted:

What are your general impressions? What were you hoping to use it for? Have you considered setting up a little USB switch to toggle between desktops, and feeding different inputs into the monitor to facilitate swapping between them?

My general impression is that it makes for a decent, if quirky, little desktop Linux PC.

Since I just bought the mobo and CPU I had to supply the rest of the parts and put it together myself, which mostly went smoothly. The few issues I did run into were that the front panel connector seemed to be designed specifically for whatever case they use and wasn't totally ATX compatible (I think I ended up just leaving the power and disk LEDs disconnected because I couldn't figure out how to get them working) and the HD Audio front panel connector was right at the very back of the board so the cable that came with my case barely reached.

On the software side I installed Fedora since the general consensus was that RedHat-derived distros have the best POWER support (since IBM paid RedHat to maintain it, even before the acquisition), even though I usually go with Debian-based distros. I don't recall having any difficulties with the install, though afterwards I did have to manually tweak the kernel options to disable the built-in video output so OpenGL would work on the cheap Radeon R7-240 I threw in it (because a modern desktop without graphics acceleration is just painful) and every time the OS would update the kernel I'd have to redo that. I think there were a few other software tweaks I had to do but I can't recall what they were off the top of my head.

The main reason I got it was just to have a non-x86 system that was (theoretically) on par with desktop-class x86 systems (at least at the time it released). I grew up in the 90s reading about all the exotic architectures of the time, and then in the 00s reading about people getting to snap up the machines using them for next-to-nothing because they'd become obsolete. By the time I'd actually got enough money to get into that, the window of opportunity had already pretty much passed and the machines had become expensive collector's items. I'm actually eyeing getting one of those HiFive Unmatched boards from SiFive for the same reason, and I would have done it already if I wasn't moving soon.

As for a KVM, I considered it but my main desktop PC is on a custom-built cart with literally no room for another computer and my work-from-home setup uses a janky Thunderbolt dock that barely works even without a KVM in the mix so I don't want to mess with that. As I mentioned, I'm in the middle of moving to a bigger place so I will probably be able to get the system set up in a manner that I can actually use it without getting neck and back aches and hopefully find something useful to do with it.

feedmegin
Jul 30, 2008

BobHoward posted:

Also, on ARM:

It should be noted that 64-bit ARM, aka aarch64 or A64, isn't truly a descendant of the 1980s 32-bit ARM, which has been retroactively named aarch32 or A32. aarch64 is a mostly-clean-sheet redesign.

Why is this important to highlight? Because AMD chose the other way for x86-64. It used x86 prefix bytes to add the new 64-bit instructions, but even when the CPU is in 64-bit mode it's still legal to execute old 32-bit instructions. You can write shims to allow 32-bit code to call into 64-bit libraries, and vice versa.

With aarch64, the CPU's decoders are either in aarch64 mode where they recognize only the new 64-bit instruction encoding, or in aarch32 mode where they only understand the old 32-bit encoding. The encodings are too incompatible to support both at the same time. Decoder mode switches are only possible as a side effect of privilege level changes - hypervisor entry/exit or kernel entry/exit - so userspace 64-bit code can never call 32-bit libraries, or vice versa.

More importantly, mode support is optional. The ARMv8 spec is written to allow both aarch32-only and aarch64-only implementations. Apple went 64-bit only in their A-series iPhone/iPad chips a long time ago, and hasn't changed course now that they're transitioning the Mac to Arm. Arm itself has made some announcements about future cores transitioning to 64-bit only. So, the future of Arm is a relatively clean break from 1980s Arm.

You sort of forgot to mention Thumb. The world now is basically AArch64 (big boy processors) or Thumb (microcontrollers); classic ARM is legacy, but both of those are going forward. Cortex-M isn't going anywhere.

Saucer Crab
Apr 3, 2009




The main CPU of the PlayStation 2 was a slightly customized MIPS processor, of all things.

Gwaihir
Dec 8, 2009
Hair Elf

Hasturtium posted:

What would you want for that behemoth?

I wouldn't charge for it since it's surplus, other than just whatever it took to pack and ship. But, like, that would be a lot, lol



The old machine is on top. Each of those CECs are about 100 pounds, I think.

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.

Gwaihir posted:

I wouldn't charge for it since it's surplus, other than just whatever it took to pack and ship. But, like, that would be a lot, lol



The old machine is on top. Each of those CECs are about 100 pounds, I think.

Oof. It's a hell of a setup and way more than I could use. Maybe see if an open source project could avail themselves of it and get a tax write off in the process - I'm pretty sure NetBSD or OpenBSD could use something like that.

Gwaihir
Dec 8, 2009
Hair Elf
Yeah, it's, uh wildly impractical for any use case that isn't a server room, really. Fans are just far too over the top, even at idle.

I know it's not really super thread topical, but that machine really was incredibly nice to use. That paradigm of "we're running everything on one machine, but it's super reliable and really really serviceable" has long since been replaced, but I still appreciate it for what it is, and for the engineering that went in to the hardware to make it work. 8 years in service for that sucker and not a single instance of hardware induced downtime was a pretty good run.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

feedmegin posted:

You sort of forgot to mention Thumb. The world now is basically AArch64 (big boy processors) or Thumb (microcontrollers); classic ARM is legacy, but both of those are going forward. Cortex-M isn't going anywhere.

I did kinda skip it, yeah. It's part of aarch32 in the Arm v8 spec, so for the record, the full set of Arm v8 operating modes and instruction sets is:

aarch32 mode: T32 and A32 ISAs
aarch64 mode: A64 ISA

You're completely right that aarch32 is staying around for applications like microcontrollers.

(and who knows, maybe some of the platforms that use dual-mode CPUs today will find aarch32 too sticky to leave behind. Apple didn't, but they made that transition on iOS where they could just set a sunset date for allowing 32-bit code on the App Store.)

feedmegin
Jul 30, 2008

BobHoward posted:

I did kinda skip it, yeah. It's part of aarch32 in the Arm v8 spec, so for the record, the full set of Arm v8 operating modes and instruction sets is:

aarch32 mode: T32 and A32 ISAs
aarch64 mode: A64 ISA

You're completely right that aarch32 is staying around for applications like microcontrollers.

(and who knows, maybe some of the platforms that use dual-mode CPUs today will find aarch32 too sticky to leave behind. Apple didn't, but they made that transition on iOS where they could just set a sunset date for allowing 32-bit code on the App Store.)

Ah, but the whole shtick with the Cortex-M series is they ONLY do Thumb. No A32 support at all. They boot in Thumb and that's all you get. For microcontrollers classic ARM is already dead.

Edit: to expand on this, because most people here probably don't write ARM assembler -

RISC CPUs (i.e. non-x86 later than about 1983) usually use a fixed instruction size, usually 32 bits. This makes instruction fetch much easier (part of the philosophy of keeping things simple) because unlike x86, you don't have to look at the first byte, decide you need to fetch the second byte (or not), then decide how many more bytes you need to fetch to have the complete instruction. Important in the 80s, rather less so now with the massive silicon budgets we have these days. ARM does not deviate from this, a classic ARM instruction is 32 bits, into which it has to fit, depending on instruction, e.g. two source and one destination registers, or one source register, one immediate value and one destination register.
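
A toy way to see the difference (invented encodings, not any real ISA - the point is just when the instruction's length becomes known):
code:
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Fixed-width fetch: the length is known before you read anything. */
static uint32_t fetch_fixed(const uint32_t *text, size_t pc)
{
    return text[pc];
}

/* Variable-length fetch: pretend the low 3 bits of the first byte encode
   a total length of 1-8 bytes. You can't even find the next instruction
   without partially decoding this one. */
static size_t fetch_variable(const uint8_t *text, size_t pc, uint8_t out[8])
{
    size_t len = (size_t)(text[pc] & 7) + 1;
    for (size_t i = 0; i < len; i++)
        out[i] = text[pc + i];
    return pc + len; /* where the next instruction starts depends on
                        the bytes we just read */
}

int main(void)
{
    const uint32_t rom32[] = { 0xe3a00001, 0xe3a01002 };
    printf("%08x\n", (unsigned)fetch_fixed(rom32, 1));

    const uint8_t rom8[] = { 0x02, 0xaa, 0xbb, 0x01, 0xcc };
    uint8_t buf[8];
    size_t pc = fetch_variable(rom8, 0, buf); /* a 3-byte instruction */
    pc = fetch_variable(rom8, pc, buf);       /* then a 2-byte one */
    printf("pc=%zu\n", pc);
}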

Well that's nice, but now the 90s roll along and ARM is being used in an embedded context. 32 bits for every instruction uses up a lot of space (in 90s terms) - can we bring it down somehow? ARM nicks an idea from MIPS, at the time a competitor in the embedded space, which is to have a subset of instructions that can be coded in 16 bits (the MIPS version is called MIPS16). You can switch the processor from 32-bit instruction mode to 16-bit instruction mode and back with a special jump instruction (so as a practical matter, a given function tends to be in one or the other). It's effectively a form of compression. The instructions actually executed are the same in the core, btw; they're just encoded and decoded differently. This is Thumb (retroactively: Thumb 1).

Now someone comes along in the mid-2000s and decides they want to make the smallest, most space-efficient microprocessor possible. They decide the best way to do this is to ditch classic 32-bit ARM support altogether and just use Thumb. Recall Thumb 1 is a subset of the ARM ISA. So they add just enough new Thumb instructions to do the stuff Thumb 1 can't. This is Thumb 2, and different Cortex-Ms support different bits of it.

Having written a compiler as a hobby project that targets both classic ARM and a Cortex-M0, I can tell you Thumb is a bitch to compile for, btw (unsurprisingly, given its limited encoding space). All instructions tend to be x86-style dest = dest + one source register (remember earlier? this saves having to fit a third register into 16 bits). You usually only have direct access to 8 registers, not 16 (saves a bit per register). Immediates have a frigging tiny range. Branch displacements too. Constant pools all over the place. You lose that kind of cool predication-for-everything thing that classic ARM has, but then AArch64 ditches that too (it was a great idea in 1990 but it messes with modern OoO cores' performance). It would be horrible to write much assembler by hand for, but, well, it's 2021; even for microcontrollers we don't do that much any more. (One nice feature is that its interrupt handlers are specified to follow the C calling convention for the platform - no special handling needed other than in the linker. A Thumb interrupt handler is Just A Function, and you can write firmware for a Cortex-M without a line of assembler.)
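
That last parenthetical deserves a tiny example. A Cortex-M tick handler in plain C, no assembler and no special attributes - this sketch assumes a CMSIS-style startup file that has already put SysTick_Handler in the vector table and configured the timer:
code:
#include <stdint.h>

volatile uint32_t tick_count; /* shared with the main loop */

/* Just a function: M-profile exception entry hardware stacks r0-r3,
   r12, lr, pc and xPSR itself, so the ordinary C calling convention
   is all a handler needs. */
void SysTick_Handler(void)
{
    tick_count++;
}

int main(void)
{
    for (;;) {
        /* tick_count advances in the background */
    }
}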

Your Raspberry Pi or whatever supports both, btw, as does any vaguely modern desktop-y 32-bit-supporting CPU - but don't expect that to be true for too many years longer. ARM as an architecture is a lot less dependent on supporting ancient legacy code than x86 is and behaves accordingly.

feedmegin fucked around with this message at 13:47 on Jun 20, 2021

Mr Shiny Pants
Nov 12, 2012
Nice thread, I like reading about these other processors. Shame most of them went the way of the dodo.

I really liked the UltraSPARC workstations with the Creator 3D cards when they came out; too bad they were so expensive.

DeepBlue
Jul 7, 2004

SHMEH!!!
I like where this thread is going, and I figure I should say something about Power ISA and how compact it can get.

https://forums.raptorcs.com/index.php/topic,99.0.html

This is still being used as my primary server for home use. I have had very little issue getting native Linux applications to work, and I have been filing tickets in various GitHub repos for Power ISA support where larger packages/applications don't have it.

https://github.com/ONLYOFFICE/build_tools/pull/87

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Gwaihir posted:

I wouldn't charge for it since it's surplus, other than just whatever it took to pack and ship. But, like, that would be a lot, lol



The old machine is on top. Each of those CECs are about 100 pounds, I think.

Oh man, if you were in Atlanta I'd be there in a heartbeat.

I have a couple IBM PowerPC and Cell blades in my old IBM E series bladecenter.

Hasturtium
May 19, 2020

And that year, for his birthday, he got six pink ping pong balls in a little pink backpack.
It looks like the Blackbird is backordered - any idea on when it's likely I could get one? I'm about to start unloading various PC bits and other sundries on the SA Mart and elsewhere to that end...

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
Let's talk about a couple interesting Intel ISAs which are not x86!

====

The first is incredibly popular, yet also obscure if you aren't in certain segments of industry: MCS-51 aka 8051.

It was a family of 8-bit microcontrollers that Intel originally developed and sold in the 1980s. For its time, Intel did a lot of things right with the 8051, and it won big: 8051s got designed into everything. Intel even licensed it widely, which only increased its reach and popularity. I would guess that most 8051 cores in existence weren't made by Intel.

It is somehow still with us today. An example I happen to know about is that the chips in many USB3-to-SATA devices (drive docks, enclosures, etc) contain a simple USB-SATA data bridge controlled by an 8051, which both directs dataflow and translates USB Mass Storage Class commands to SATA commands.

However, it is slowly but surely fading away. 8051 is this weird old architecture from an era when µCs were almost exclusively programmed with assembly code. It's losing design wins to architectures like the embedded profiles of RISC-V and ARM, which have much bigger and livelier software dev ecosystems. (Those USB-SATA bridge chips I mentioned? The newest gen chips from the same company have gone over to Cortex-M0.)

====

And now, let's talk about one of Intel's greatest failures!

I bet some readers are thinking "aha, he's going to talk about Itanium". Nope! The sad story of the iAPX-432 played out a couple decades before Itanium, and it was arguably a worse flop. Itanium actually had a real software ecosystem and multiple generations, but 432 just kinda sank into the swamp. Nobody wanted it. It was hard to write software for, expensive, and performance was terrible.

The 432 was Intel's take on a popular idea in 1970s computer architecture: closing the "semantic gap". Then and now, successful CPUs run conceptually simple instructions. But high-level language (HLL) statements can encode much more complex ideas, even in a single statement. While it was possible to translate HLL statements into equivalent sequences of low-level machine instructions, what if instead you designed your ISA to have very complex machine instructions that map more directly onto HLL semantics? Wouldn't that be a good thing? (narrator voice: It is not.)

The 432 ISA was designed to support Object Orientation and Capabilities. OOP was the fresh new HLL paradigm taking the CS world by storm, and the 432 tried to be the best possible CPU for running a 100% OOP software stack with capability-based protection.

Ironically, this caused one of the central problems of the 432: the architecture is extremely opinionated in favor of the 432 architects' ideas about what OOP should look like at a low level, and that means not all OOP environments need apply. For example, software isn't allowed to directly manipulate pointers to objects; that's limited to hardware and microcode. So if your particular OOP language has some friction with the 432's ideas about how pointers work, you are in for trouble.

A full dissection of how iAPX-432 collapsed under its own weight is far more :words: than I'd like to research and write. I'm not even sure the paragraphs I wrote above are that great a description, tbh. Just take a look at this contemporary academic paper on it, and contemplate how difficult it would be to port preexisting software to that thing.

Cenodoxus
Mar 29, 2012

while [[ true ]] ; do
    pour()
done


Fantastic thread idea. I collected and played with a few different varieties of non-x86 hardware in the mid-to-late 2000s.

I had:
  • SGI O2s of the R5k and R10k variety, an SGI Octane, and an Origin 300
  • a Sun Ultra tower that I believe was either an UltraSPARC II or III, and
  • a PowerMac G5 I picked up after the Intel switch just for the novelty of PowerPC.

At the time the G5 was still widely supported so it was a great daily driver. I played a lot of WoW on it. I occasionally keep my eye out for G5s on Marketplace just in case I get that itch again.

All I have left is a single SGI O2 whose plastic shell is so brittle, I dare not try to move it. The SGI community has fractured a lot in recent years so I haven't done a lot with it since. :(

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
I found this article about Power10's absurd levels of I/O, memory bandwidth and addressable memory interesting.


Something that would be cool is a comparison of the different features some of these architectures had and what workloads those made them good/bad at. For instance, what made SPARC worth choosing over other contemporary options, that kind of thing.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
I still have an old DEC AlphaServer workstation, and I used to have an SGI Onyx2 - pretty neat systems and absurdly powerful for their day.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

I have an old Indigo whose EEPROM got wiped, plus the usual rot, so I don't know if I'll ever get it going again.

Mr Shiny Pants
Nov 12, 2012
I probably still have an old SPARCstation 5 (pizzabox) with a quad Ethernet card that ran OpenBSD as a firewall. Rock-solid machine at 70 or 110 MHz.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

ConanTheLibrarian posted:

Something that would be cool is a comparison of the different features some of these architectures had and what workloads those made them good/bad at. For instance, what made SPARC worth choosing over other contemporary options, that kind of thing.

SPARC exists because Sun Microsystems saw the Berkeley RISC project and decided "We need that." There wasn't really anything in the SPARC ISA which made it inherently better than the other RISCs. In fact, many would argue it was one of the worst of the early RISCs, because its register window ISA feature turned out to be a bad idea. But it was Sun's RISC, so that's what Sun used, warts and all.

More generally, in the 1980s a whole bunch of things came together. The RISC concept demonstrated that a small team could design a competitive (by 1980s standards) CPU on a low budget. The rise of merchant fab services meant you didn't have to own your own fab to build chips. There was lots of demand for small-ish UNIX workstations, and these RISC CPUs were a killer feature for that type of machine. So everyone in the RISC workstation biz suddenly decided they needed their own private RISC ISA.

But only a couple decades later, this all fell apart. It was no longer enough to put a simple pipelined in-order core on a single chip. There was demand for higher and higher clocks, out-of-order superscalar cores, and so on. A lot of the boutique RISCs became unsustainable as design budgets grew; they just didn't have the sales volume to stay in the game.

So we got exit strategies. To pick two prominent examples, IBM went into partnership with Apple and Motorola in hopes of gaining a much larger consumer-scale market for POWER, its RISC - the resulting "new" architecture was named PowerPC, but it was really just a slightly altered POWER. HP was already working on a post-RISC architecture to replace its PA-RISC, realized they couldn't go it alone, and offered a partnership to Intel - that's how we got Itanium.

In the end, everyone got steamrolled by the Intel fab tech and x86 engineering budget made possible by the mass market Wintel juggernaut. This included Intel's own Itanium division, even though it was the golden boy of senior executives.


feedmegin
Jul 30, 2008

BobHoward posted:

In the end, everyone got steamrolled by the Intel fab tech and x86 engineering budget made possible by the mass market Wintel juggernaut. This included Intel's own Itanium division, even though it was the golden boy of senior executives.

Not actually EVERYONE, as it happens. Your phone isn't running on Intel, and it's not like they haven't tried. These days, neither is your Mac.
