necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Complete conjecture, but if you switch jobs every 2-3 years, the amount of forms to fill out is boring and lame. I'm going to randomly guess burn-out.


Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

Intel management wanted the headline, wasn't willing to give him the freedom he expects/needs.

I'm still not sure what a CPU designer does on a daily basis, or what about a person makes them so much better at it than another.

JawnV6
Jul 4, 2004

So hot ...

Some Goon posted:

Intel management wanted the headline, wasn't willing to give him the freedom he expects/needs.
this is from 24 days ago lol https://fortune.com/longform/microchip-designer-jim-keller-intel-fortune-500-apple-tesla-amd/

Some Goon posted:

I'm still not sure what a CPU designer does on a daily basis, or what about a person makes them so much better at it than another.
he's not a designer, at least not in the ground level jargon. id call him an architect

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

JawnV6 posted:

this is from 24 days ago lol https://fortune.com/longform/microchip-designer-jim-keller-intel-fortune-500-apple-tesla-amd/

he's not a designer, at least not in the ground level jargon. id call him an architect

I’d love to read the rest of this but it’s paywalled.

silence_kit
Jul 14, 2011

by the sex ghost

JawnV6 posted:

he's not a designer, at least not in the ground level jargon. id call him an architect

I'd say he's more of an importer-exporter

silence_kit
Jul 14, 2011

by the sex ghost

Some Goon posted:

I'm still not sure what a CPU designer does on a daily basis, or what about a person makes them so much better at it than another.

He's the best at dreaming up new ways to artificially limit the speeds and functionalities of new computer chips, preventing gamers' rigs everywhere from achieving the benchmark numbers that they were meant to achieve.

BlankSystemDaemon
Mar 13, 2009



Ok Comboomer posted:

I’d love to read the rest of this but it’s paywalled.
:ssh:
(it's a link, add-on works in both Firefox and Chrome)

Beef
Jul 26, 2004

JawnV6 posted:

this is from 24 days ago lol https://fortune.com/longform/microchip-designer-jim-keller-intel-fortune-500-apple-tesla-amd/

he's not a designer, at least not in the ground level jargon. id call him an architect

And as to what someone like Jim Keller does all day: meetings, so many meetings. A PI position is already back-to-back meetings, and he has been doing media work as well.

movax
Aug 30, 2008

As a little cross pollination to the world of wsb I found this DD take on Intel amusing: https://www.reddit.com/r/wallstreetbets/comments/hbr1ol/get_the_fuck_out_of_intel_and_get_out_immediately/

I think Zen 3 will crush it this year but I now wonder if Intel can spend its way out of the problem this time by paying off OEMs and such.

Beef
Jul 26, 2004
Rumor has it he really has a family health issue and can't keep pulling what I assume is an insane workload inside Intel; he's still consulting for now. I kind of believe this one, as otherwise he would have jumped ship like he has in the past.

Shorting INTC based on that news and the fact that Zen3 is on schedule after all is one hell of a risky trade.

For gamers and PC users, yeah, Zen 3 can absolutely crush it if they manage to get IPC parity and bring down some of the latencies. But institutional shareholders don't really give a poo poo. Desktop is 1) relatively low-margin, 2) not a growth sector, 3) not a fab priority given the current problems meeting demand in other areas, and 4) not where Intel's boutique approach is very profitable.
Intel is still a boutique firm with a crapton of customized chips and systems for a wide variety of customers. AMD isn't making a Zen variant for high-speed trading or base stations, and even if they had the engineering bandwidth for it, I doubt they would get the fab bandwidth. Intel doesn't need to pay off OEMs with cash; they pay them off with custom designs.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
True, but it's not all sunshine and roses for Intel, either: the datacenter is a highly profitable growth area that Intel has utterly dominated up until now, but I've already seen some government buyers and similar either explore or purchase AMD systems for new builds. Intel is gonna have to figure out how to deal with Zen being able to provide more cores at reasonably good IPC at better prices than Xeons at some point here in the reasonably near future if they want to maintain the quarterly profits they've been used to.

That said, I'm still long Intel because betting against them has, historically, been a fool's move--they've got more cash and more engineering know-how than pretty much anyone else out there.

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.

DrDork posted:


That said, I'm still long Intel because betting against them has, historically, been a fool's move--they've got more cash and more engineering know-how than pretty much anyone else out there.

That's never been in question; it's how Intel is utilizing them.

very poorly, that's how

Cygni
Nov 12, 2005

raring to post

Rip to Intel Macs. Will be interesting to see how hard Intel fights back by giving its OEMs engineering support, like they did with the initial Ultrabook effort.

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

Cygni posted:

Rip to Intel Macs. Will be interesting to see how hard Intel fights back by giving its OEMs engineering support, like they did with the initial Ultrabook effort.

I’m more curious to see how MS and the OEMs respond. According to the keynote Apple’s “good friends at Microsoft” already have Office ready to go for ARM MacOS, and presumably have had devkits, etc. and eyeballs behind the curtain for a while.

Can’t imagine that they’re not looking at the Surface line or portables from Dell, et al. with similar ideas.

Fame Douglas
Nov 20, 2013

by Fluffdaddy

Ok Comboomer posted:

Can’t imagine that they’re not looking at the Surface line or portables from Dell, et al. with similar ideas.

The Surface Pro X already exists, it even has an x86 compatibility layer just like Rosetta.

repiv
Aug 13, 2009

Fame Douglas posted:

The Surface Pro X already exists, it even has an x86 compatibility layer just like Rosetta.

Emphasis on x86, Microsoft's emulator can't run x86_64 apps which kinda sucks. Tons of stuff is 64-bit only nowadays.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
I honestly will be more interested to see how the hell Apple plans to not hilariously fragment their userbase. If they've got it so that iPhone/iPad/iMacs are all built on the same architecture, but the MBP / Mac Pro are still built off Intel / x86, I would expect there to be some serious annoyance from users who buy a new laptop and suddenly find themselves restricted to the Apple mobile store.

Like, is Apple going to lean heavily on devs to release comparable versions for both architectures, or are they just going to accept that you can't get full versions of a lot of software on the low-power lineup and say that's just the tradeoff you make for better battery life? I think people are a lot more willing to accept that they get a gimped version of Photoshop on an iPad than they are on something that's sold as a laptop (even if it'll be basically a slightly larger iPad with permanently attached keyboard at that point).

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

repiv posted:

Emphasis on x86, Microsoft's emulator can't run x86_64 apps which kinda sucks. Tons of stuff is 64-bit only nowadays.

That's the opposite of how I expected it to be limited

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

DrDork posted:

I honestly will be more interested to see how the hell Apple plans to not hilariously fragment their userbase. If they've got it so that iPhone/iPad/iMacs are all built on the same architecture, but the MBP / Mac Pro are still built off Intel / x86, I would expect there to be some serious annoyance from users who buy a new laptop and suddenly find themselves restricted to the Apple mobile store.

Like, is Apple going to lean heavily on devs to release comparable versions for both architectures, or are they just going to accept that you can't get full versions of a lot of software on the low-power lineup and say that's just the tradeoff you make for better battery life? I think people are a lot more willing to accept that they get a gimped version of Photoshop on an iPad than they are on something that's sold as a laptop (even if it'll be basically a slightly larger iPad with permanently attached keyboard at that point).

Did you watch the keynote/read the quick takes from it? They’ve got a bunch of solutions, including universal binaries, and it sounds like Rosetta 2 is going to be good enough to do x86 emulation for a lot of apps (they showed it running the existing MacOS port of Shadow of the Tomb Raider on their A12Z-based devkit which looks to be the guts of a current iPad Pro in a Mac Mini box + 16gb of RAM and an SSD).

Also Apple made a big point to state numerous times that the whole stack was getting switched over in two years. I can’t imagine that they would be announcing that now if they didn’t have MacBook Pro/Mac Pro-capable chips already working in their skunkworks.
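For anyone curious what "universal binary" actually means mechanically: it's a Mach-O "fat" file whose big-endian header lists one slice per architecture, and the loader picks the slice matching the host CPU. A minimal parsing sketch over a synthetic header (constants are the standard values from Apple's <mach-o/fat.h> and <mach/machine.h>; the real loader does much more):

```python
import struct

# Mach-O universal ("fat") binary constants
FAT_MAGIC = 0xCAFEBABE          # big-endian magic for a fat header
CPU_TYPE_X86_64 = 0x01000007    # CPU_TYPE_X86 | CPU_ARCH_ABI64
CPU_TYPE_ARM64 = 0x0100000C     # CPU_TYPE_ARM | CPU_ARCH_ABI64

ARCH_NAMES = {CPU_TYPE_X86_64: "x86_64", CPU_TYPE_ARM64: "arm64"}

def fat_archs(header_bytes):
    """Return the architecture names listed in a fat binary's header."""
    magic, nfat = struct.unpack_from(">II", header_bytes, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat binary")
    archs = []
    for i in range(nfat):
        # struct fat_arch: cputype, cpusubtype, offset, size, align (all uint32, BE)
        cputype, _, _, _, _ = struct.unpack_from(">5I", header_bytes, 8 + 20 * i)
        archs.append(ARCH_NAMES.get(cputype, hex(cputype)))
    return archs

# Synthetic two-slice header, like a post-transition universal app would carry
# (the offset/size/align values here are made up for illustration):
hdr = struct.pack(">II", FAT_MAGIC, 2)
hdr += struct.pack(">5I", CPU_TYPE_X86_64, 3, 0x4000, 0x1000, 14)
hdr += struct.pack(">5I", CPU_TYPE_ARM64, 0, 0x8000, 0x1000, 14)
print(fat_archs(hdr))  # ['x86_64', 'arm64']
```

Same trick Apple used for 68k/PPC and PPC/Intel, which is why "fat binary" is such an old term in Mac land.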

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Yeah, but it doesn't answer all the questions. Universal binaries sound great, but as we've seen by everyone else who has tried them, they're not easy to get right when you're working with non-trivial programs. Rosetta 2 / emulation has promise, but again, I'll believe that as a perfect universal solution when I see it.

You're no doubt right that they plan on doing the full stack eventually, but that's still two years away, which is a long time in terms of listening to users complain that they can't get their program to run right. Especially for a company that bills itself as "it just works."

e; that said, if there's one tech company out there that can just go "look, bitches, this is the way it is now, you will deal with it and you will like it" and have it work, it's Apple.

DrDork fucked around with this message at 20:29 on Jun 22, 2020

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

DrDork posted:

Yeah, but it doesn't answer all the questions. Universal binaries sound great, but as we've seen by everyone else who has tried them, they're not easy to get right when you're working with non-trivial programs. Rosetta 2 / emulation has promise, but again, I'll believe that as a perfect universal solution when I see it.

You're no doubt right that they plan on doing the full stack eventually, but that's still two years away, which is a long time in terms of listening to users complain that they can't get their program to run right. Especially for a company that bills itself as "it just works."

Eh, this one honestly looks like it’s going to be substantially less painful than the Intel Transition and Rosetta 1. Idk that they would’ve shown off Maya running via Rosetta 2 if they didn’t expect people to try it for themselves and look for holes—but also Apple totally worked extensively with Autodesk behind the scenes to make that successful demo happen.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
It'll be interesting to see what effect the Apple ARM switch will have on wider attempts to produce a viable ARM desktop & server market. If Apple shows up with x86_64-beating desktop CPUs, that might trigger a bunch of investment. Or comedy option: Apple uses their massive piles of cash to enter the server market and then follows that up with their own cloud computing service.

Fame Douglas
Nov 20, 2013

by Fluffdaddy

repiv posted:

Emphasis on x86, Microsoft's emulator can't run x86_64 apps which kinda sucks. Tons of stuff is 64-bit only nowadays.

They're working on 64 bit compatibility. But emulation is always going to be comparatively slow, something I'm very dubious even Apple could really solve.

Ok Comboomer posted:

Eh, this one honestly looks like it’s going to be substantially less painful than the Intel Transition and Rosetta 1. Idk that they would’ve shown off Maya running via Rosetta 2 if they didn’t expect people to try it for themselves and look for holes—but also Apple totally worked extensively with Autodesk behind the scenes to make that successful demo happen.

I haven't seen the demonstration, but if it's rendering on the GPU, it would make sense that the emulation penalty wouldn't be as large. But that doesn't mean regular, more CPU-bound programs are going to run amazingly well.

Fame Douglas fucked around with this message at 21:59 on Jun 22, 2020

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

Pablo Bluth posted:

It'll be interesting to see what effect the Apple ARM switch will have on wider attempts to produce a viable ARM desktop & server market. If Apple shows up with x86_64-beating desktop CPUs, that might trigger a bunch of investment. Or comedy option: Apple uses their massive piles of cash to enter the server market and then follows that up with their own cloud computing service.

Those old Xserves certainly sound like real servers. I wonder if son-of-Xserve would carry on that trend or follow the Mac Pro in an attempt to be as quiet as possible

Fame Douglas
Nov 20, 2013

by Fluffdaddy
There have been attempts at producing ARM servers for years, Apple won't be an important factor in server production. At the moment, they're only interesting for some specialized applications.

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

Fame Douglas posted:

There have been attempts at producing ARM servers for years, Apple won't be an important factor in server production. At the moment, they're only interesting for some specialized applications.

Also, all of Apple's data center servers are like HP, I'm pretty sure. I think they run a few data centers on racked Mac Minis.

mobby_6kl
Aug 9, 2009

by Fluffdaddy

Fame Douglas posted:

There have been attempts at producing ARM servers for years, Apple won't be an important factor in server production. At the moment, they're only interesting for some specialized applications.
The year of Arm on Servers, any moment now.

Cygni posted:

Rip to Intel Macs. Will be interesting to see how hard Intel fights back by giving its OEMs engineering support, like they did with the initial Ultrabook effort.
Hopefully by unfucking their manufacturing and properly updating the uarch finally. Would be pretty funny if in two years they're crushing it again.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

ARM on servers is (mostly) what got the ex-Apple Nuvia guy sued by Apple.

Cygni
Nov 12, 2005

raring to post

Fame Douglas posted:

There have been attempts at producing ARM servers for years, Apple won't be an important factor in server production. At the moment, they're only interesting for some specialized applications.

#1 super computer in the world is Fujitsu’s new ARM cluster, was announced today. Ampere’s and Marvell’s new server parts look impressive too. Dunno if the X86 Armageddon is finally here, but there is noticeable momentum.

human garbage bag
Jan 8, 2020

by Fluffdaddy

Cygni posted:

#1 super computer in the world is Fujitsu’s new ARM cluster, was announced today. Ampere’s and Marvell’s new server parts look impressive too. Dunno if the X86 Armageddon is finally here, but there is noticeable momentum.

The raw resource cost per chip is not significantly different between an ARM and an Intel; it's roughly the same amount of silicon. So if a race was declared to produce the most TFLOPs per hour of CPUs, the Intel design would be the chosen CPU to produce.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

human garbage bag posted:

The raw resource cost per chip is not significantly different between an ARM and an Intel; it's roughly the same amount of silicon. So if a race was declared to produce the most TFLOPs per hour of CPUs, the Intel design would be the chosen CPU to produce.

In a perfect spherical cow vacuum world your first sentence is true. In reality the raw materials aren't what system builders care about, it's the total cost per tested and packaged chip. Raw materials are a tiny fraction of that cost.

Different foundries with different process tech can and do have different costs. A single foundry can even have reduced-cost process recipes on the same node. (For example, reducing the metal layer count for chips which don't need dense wire routing.)

But more importantly, marginal cost of production is meaningless when you're thinking about assembling a supercomputer. What matters is the price you pay to the supplier for each chip, which has variables completely independent of the cost of building it. When you buy an Intel chip, you get charged a hefty profit margin because (a) thanks to the long dominance of x86 Windows there's perpetually high demand for x86 CPUs and (b) thanks to exploitation of the patent system, Intel has a near-monopoly on the right to build x86 CPUs. (AMD got legally backdoored in back in the 386 days, and their patent position was eventually bolstered by authoring the AMD64 extensions to the ISA which Intel now has to use. Nevertheless, Intel has been pretty effective at keeping AMD down, and historically so has AMD itself - it hasn't always been the best-run company.)

In short, Intel is a monopolist, and they use that to make fat profit margins. Even now, amidst all the horrible setbacks on the 10nm node, they're raking in money.


I have no idea what you're going for with your second sentence. If your only goal was to produce lots of paper TFLOPS for some abstract competition, you wouldn't use conventional microprocessors with conventional ISAs like x86 or ARM, you'd go for extremely wide VLIW that was neither Intel nor ARM. Such machines tend to have an unfortunately large ratio between their peak and actual FLOPS, so unless you're only building your super for bragging rights they're not a great idea.

BlankSystemDaemon
Mar 13, 2009



HPC loads are also massively parallelizable, so that does tend to favour massive numbers of cores that're more energy efficient, which is basically ARM's modus operandi.
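For intuition, that tradeoff is just Amdahl's law: piles of slower, energy-efficient cores only pay off when the parallel fraction of the workload is very high, because the serial fraction caps the speedup no matter how many cores you throw at it. A quick back-of-the-envelope sketch:

```python
def amdahl_speedup(cores, parallel_fraction):
    """Amdahl's law: speedup = 1 / (serial + parallel/cores).

    The serial fraction bounds the speedup regardless of core count.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a 95%-parallel workload tops out at 20x, no matter the core count:
print(round(amdahl_speedup(64, 0.95), 1))    # 15.4
print(round(amdahl_speedup(1024, 0.95), 1))  # 19.6
```

Which is why HPC codes are tuned so hard to push that parallel fraction toward 1.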

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

D. Ebdrup posted:

HPC loads are also massively parallelizable, so that does tend to favour massive numbers of cores that're more energy efficient, which is basically ARM's modus operandi.

Less performance per core means you need to spend more power on interconnects, and interconnects (and moving data around in general) are becoming the most expensive part of computation now.

Interconnects are basically the entire thing that distinguishes HPC from a rack of commodity PCs. You are specifically not just looking for a bunch of massively parallel hardware, it's about moving data around efficiently. So that's a non-trivial downside.

Crunchy Black
Oct 24, 2017

by Athanatos
Yeah, at least when it comes to supercomputing, the biggest hurdle, generally, is parallelizing the load. Which uh, I don't think I'm going to shock anyone here by saying, AMD has quite the advantage at the moment in the PCIe space, which helps immensely.

human garbage bag
Jan 8, 2020

by Fluffdaddy

BobHoward posted:

In a perfect spherical cow vacuum world your first sentence is true. In reality the raw materials aren't what system builders care about, it's the total cost per tested and packaged chip. Raw materials are a tiny fraction of that cost.

Different foundries with different process tech can and do have different costs. A single foundry can even have reduced-cost process recipes on the same node. (For example, reducing the metal layer count for chips which don't need dense wire routing.)

But more importantly, marginal cost of production is meaningless when you're thinking about assembling a supercomputer. What matters is the price you pay to the supplier for each chip, which has variables completely independent of the cost of building it. When you buy an Intel chip, you get charged a hefty profit margin because (a) thanks to the long dominance of x86 Windows there's perpetually high demand for x86 CPUs and (b) thanks to exploitation of the patent system, Intel has a near-monopoly on the right to build x86 CPUs. (AMD got legally backdoored in back in the 386 days, and their patent position was eventually bolstered by authoring the AMD64 extensions to the ISA which Intel now has to use. Nevertheless, Intel has been pretty effective at keeping AMD down, and historically so has AMD itself - it hasn't always been the best-run company.)

In short, Intel is a monopolist, and they use that to make fat profit margins. Even now, amidst all the horrible setbacks on the 10nm node, they're raking in money.


I have no idea what you're going for with your second sentence. If your only goal was to produce lots of paper TFLOPS for some abstract competition, you wouldn't use conventional microprocessors with conventional ISAs like x86 or ARM, you'd go for extremely wide VLIW that was neither Intel nor ARM. Such machines tend to have an unfortunately large ratio between their peak and actual FLOPS, so unless you're only building your super for bragging rights they're not a great idea.

I was actually envisioning a total war scenario where a command economy throws out the profit motive. In this scenario I believe that having all foundries produce Intel chips would produce more total TFLOP computing power than having all foundries produce ARM chips.

JawnV6
Jul 4, 2004

So hot ...
hey does Alereon still post here just asking

human garbage bag posted:

The raw resource cost per chip is not significantly different between an ARM and an Intel; it's roughly the same amount of silicon. So if a race was declared to produce the most TFLOPs per hour of CPUs, the Intel design would be the chosen CPU to produce.
did you have specific mm2 vs. FLOPS for this or are you just guessin

looks like uninformed guessin

human garbage bag posted:

I was actually envisioning a total war scenario where a command economy throws out the profit motive. In this scenario I believe that having all foundries produce Intel chips would produce more total TFLOP computing power than having all foundries produce ARM chips.
oh wow yeah you're just wholesale grabbing numbers out of, generously, thin air

like how is FLOPS even marginally relevant, what's this problem that is purely compute bound and has no IO cost or sensitivity to memory bandwidth

ConanTheLibrarian
Aug 13, 2004


dis buch is late
Fallen Rib
Not to mention that GPUs and matrix processors would give you vastly more bang for your weird FLOPs/chip buck than any CPU.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

human garbage bag posted:

I was actually envisioning a total war scenario where a command economy throws out the profit motive. In this scenario I believe that having all foundries produce Intel chips would produce more total TFLOP computing power than having all foundries produce ARM chips.

But why do you believe that? You still haven't given any actual reason why you think Intel chips automatically equals more TFLOPS per wafer. Much less any indication that you understand why peak TFLOPS numbers are not actually what you should be optimizing for in a super. Or any indication that you understand that you can't just decide to make Intel chips on TSMC's process, you'd have to do a costly port.

Discussion Quorum
Dec 5, 2002
Armchair Philistine
If we ended up in some sort of WWIII scenario where we had to go all-out on chip production, I have to assume that would be directed at the lower-power/embedded stuff that ends up in weapons systems and vehicles, not exactly the stuff gamers drool over.

For scale, the upgraded CPU in the F-35 advertises a Dhrystone score of 2900, which puts it on par with a Raspberry Pi 3B+ (yeah, I know it likely kills the Pi in other capabilities, but we're talking about ~FLOPS~).

https://www.l3commercialaviation.com/avionics/products/high-performance-icp/


BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
Dhrystone MIPS is a pure integer test FYI, no correlation with FLOPS.

Also even as an integer test, it's a super bad benchmark that nobody should use for anything (like, seriously, it's REAL bad), yet still survives in the embedded computing realm because people are dummies or something.
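To make the "pure integer" point concrete, here's a toy Dhrystone-flavored loop (a sketch only, nothing like the real benchmark): every operation is integer arithmetic, a string copy, or a branch. There is no floating-point anywhere, so the score cannot correlate with FLOPS, and the always-taken branch shows why a clever compiler can fold much of the "work" away, which is a big part of why it's a bad benchmark.

```python
def dhry_like(iterations):
    """Toy Dhrystone-style kernel: integer math, string copies, branches.

    Not one floating-point operation is performed, which is why a
    Dhrystone score tells you nothing about FLOPS.
    """
    int_glob = 0
    s = "DHRYSTONE PROGRAM, SOME STRING"
    for i in range(iterations):
        copied = s[:]                   # string copy
        int_glob = (int_glob + i) % 7   # integer arithmetic
        if copied == s:                 # comparison + branch (always taken:
            int_glob += 1               # a compiler could fold this entirely)
    return int_glob

print(dhry_like(1000))
```

An optimizing compiler looking at the C equivalent can see the branch is always taken and the copy is dead, and delete most of the loop body, which is how vendors end up quoting hilarious DMIPS numbers.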
