shrike82
Jun 11, 2005

Laslow posted:

Yep. Cloud is a race to the bottom and they don’t play that game.

lol cloud is a major profit center for big tech companies extending to even hardware makers that cater to cloud providers

WhyteRyce
Dec 30, 2001

Apple is pushing more and more into the services game. They aren't a race-to-the-bottom company, but their services aren't exactly race to the bottom either, last I checked iCloud pricing.

WhyteRyce fucked around with this message at 06:25 on Nov 23, 2020

Laslow
Jul 18, 2007

shrike82 posted:

lol cloud is a major profit center for big tech companies extending to even hardware makers that cater to cloud providers
Sure, especially with the services. But on the hardware side, would you not expect it to be commoditized to the point where it’d be an unattractive proposition given all of the competition?

Laslow
Jul 18, 2007

WhyteRyce posted:

Apple is pushing more and more into the services game. They aren't a race-to-the-bottom company, but their services aren't exactly race to the bottom either, last I checked iCloud pricing.
And maybe that's their opening. They can charge people gently caress-awful prices for personal backups/sync because it's so seamless with iOS and Mac devices, which can shove the service signup page into millions of captive faces.

Then it might make economic sense for them to make a buttload of ARM Xserves (AServes?) for themselves to use.

Space Gopher
Jul 31, 2006

BLITHERING IDIOT AND HARDCORE DURIAN APOLOGIST. LET ME TELL YOU WHY THIS SHIT DON'T STINK EVEN THOUGH WE ALL KNOW IT DOES BECAUSE I'M SUPER CULTURED.

WhyteRyce posted:

Apple is pushing more and more into the services game. They aren't a race-to-the-bottom company, but their services aren't exactly race to the bottom either, last I checked iCloud pricing.

Sure, they want to sell services. Like every other "cloud" provider not named Amazon, Microsoft, or Google, they'll keep hosting those services on AWS and Azure, because that's cheaper, easier, and less risky than developing their own server hardware from the ground up and building a worldwide network of datacenters to run it all.

The really interesting thing here is going to be the server-side transition to ARM. Right now, one of the perceived issues in cloud ARM adoption is that developers would like to be able to run the same binaries and containers locally and server-side, whether that's a specific interpreter/runtime or their own compiled binaries, to make deployment and troubleshooting easier. When local dev is x86-64, then there's a barrier in going to ARM hosts. But, local dev on an ARM Macbook flips the script; now, it'll be easier to deploy to AWS Graviton or whatever equivalent Azure comes up with when they finally roll out ARM VMs.
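
For what it's worth, the workflow that makes this painless is multi-arch container images, so the image you test on an ARM laptop is the same one you deploy to Graviton. A rough sketch, assuming Docker with the buildx plugin and QEMU binfmt handlers installed; the registry and image name are just placeholders:
code:
# Build and push a single manifest that covers both x86-64 and ARM64 from
# the same Dockerfile, so local and server-side runs use the same image.
docker buildx create --use
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest \
  --push .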

Laslow posted:

And maybe that's their opening. They can charge people gently caress-awful prices for personal backups/sync because it's so seamless with iOS and Mac devices, which can shove the service signup page into millions of captive faces.

Then it might make economic sense for them to make a buttload of ARM Xserves (AServes?) for themselves to use.

Apple has hosted iCloud in Azure and AWS for a long time now. Why would they go back to on-prem hosting now? Having a good CPU is just one tiny part of a very complex equation, and they can pay other people to handle all the headaches for them with better economies of scale than they'll ever have.

WhyteRyce
Dec 30, 2001

A simple Google shows Apple indeed has their own data centers and is spending billions to expand

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Jenny Agutter posted:

i actually replaced the single fan with a noctua 120mm fan which led to me checking the temps and realizing the issue. if i get the D15 with a single 140mm fan would it make sense to put the 120mm i already have on it as well, or will they end up causing issues? or should I just trial and error it? this is all in a meshify c

The single-fan model is the one I'd always recommend, the D15S - the S is actually shaped differently, and doesn't block the top PCIe slot

HalloKitty fucked around with this message at 10:27 on Nov 23, 2020

SwissArmyDruid
Feb 14, 2014

by sebmojo

movax posted:

I'll go ask / check what the other modmins think — all it takes is one person to completely ruin it for everyone and I know similar things have been tried in the past, but I like the spirit!

I would like to express enthusiasm for this. I relish the opportunity to dig through the IT Drawer Full Of Parts We Keep On Hand For Troubleshooting Purposes Because They're Too Old To Be Used In The Current Machine But Too New To Throw Away And We're Too Lazy To Sell Them For Break-Even When You Factor In Shipping.

silence_kit
Jul 14, 2011

by the sex ghost

Mr.Radar posted:

Since some of the gains that the M1 is showing do seem to be related to the RISC-style uniformity of the instruction set (specifically the huge ROB and amazing instruction-level parallelism) in a way that would be hard to replicate just by throwing more transistors at the x86 decoder (like they were able to do in the past to compete with RISC architectures) I could see Intel (or AMD) releasing a new "Risc86" architecture that preserves (most of) the semantics of x86(-64) (i.e. something close enough you could automatically translate like 95%+ of x86 assembly to it) but with an instruction encoding that's more uniform and easier to decode enabling them to implement those same type of optimizations. Presumably the chips would continue to include an x86(-64) decoder to run legacy software as well, making the transition as smooth as possible.

The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified.

1) Independent of the chip design, Apple is using a better Instruction Set Architecture than Intel. Of course computer-oriented people would want to attribute the increase in performance to a computer-level idea.

2) The Apple chip is just better engineered than the Intel chips, independent of the different constraints put on the different products.

3) Apple, being a computer system company, and not a computer chip company, does not need to make a profit on the sale of their computer chips. Because of this, they are able to make their chips much bigger than Intel, and this gives them an advantage over Intel independent of how well-engineered are the two chip designs.

4) TSMC has a manufacturing process advantage over Intel.

I don't know which of the 4 reasons is most important. Maybe they are all important. Thoughts?

silence_kit fucked around with this message at 12:15 on Nov 23, 2020

BlankSystemDaemon
Mar 13, 2009



silence_kit posted:

The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified.

1) Independent of the chip design, Apple is using a better Instruction Set Architecture than Intel. Of course computer-oriented people would want to attribute the increase in performance to a computer-level idea.

2) The Apple chip is just better engineered than the Intel chips, independent of the different constraints put on the different products.

3) Apple, being a computer system company, and not a computer chip company, does not need to make a profit on the sale of their computer chips. Because of this, they are able to make their chips much bigger than Intel, and this gives them an advantage over Intel independent of how well-engineered are the two chip designs.

4) TSMC has a manufacturing process advantage over Intel.

I don't know which of the 4 reasons is most important. Maybe they are all important. Thoughts?
Reason one doesn't really make sense, because with x86(_64)/AMD64, underneath the microcode, it much more closely resembles RISC.
ARMv8-A also implements a JavaScript instruction, so it hardly classifies as RISC.

Reasons two and three could be part of an explanation, but even if combined, aren't likely to be the whole answer.

Reason four seems dubious at best, since that would affect chip yields/binning more, and that's not something we've seen borne out with AMD's chips, which are also manufactured at TSMC. Besides, Intel also gets stuff manufactured at TSMC.

EDIT: Another part of the answer is very likely to be that since Apple are funding LLVM development, and since that's a primary part of the toolchain used to build macOS, the people Apple are funding have access to the equivalent of all of the NDA'd stuff that Intel similarly uses for their HPC-targeted compilers for C++, Fortran, and such.
This naturally also benefits other projects that use LLVM for ARM, such as FreeBSD.

BlankSystemDaemon fucked around with this message at 13:47 on Nov 23, 2020

Gwaihir
Dec 8, 2009
Hair Elf

silence_kit posted:

The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified.

1) Independent of the chip design, Apple is using a better Instruction Set Architecture than Intel. Of course computer-oriented people would want to attribute the increase in performance to a computer-level idea.

2) The Apple chip is just better engineered than the Intel chips, independent of the different constraints put on the different products.

3) Apple, being a computer system company, and not a computer chip company, does not need to make a profit on the sale of their computer chips. Because of this, they are able to make their chips much bigger than Intel, and this gives them an advantage over Intel independent of how well-engineered are the two chip designs.

4) TSMC has a manufacturing process advantage over Intel.

I don't know which of the 4 reasons is most important. Maybe they are all important. Thoughts?

Number 4 is quite large. TSMC 5nm means they've got a 30% power advantage over the next closest designs on TSMC 7nm. Intel's 14nm isn't even in the same ballpark. Intel 10nm exists, I guess, but it also sucks. Combined with 3 (Apple shoved GIGANTIC amounts of L2 cache on the chip, which equals space), it's a very sizable difference vs another company's product. 1 is probably not a big one in the scheme of things.

mobby_6kl
Aug 9, 2009

by Fluffdaddy
I don't think Intel makes any CPUs at TSMC, and TSMC's process is clearly ahead of Intel's current 14nm+++++ trash. But this is probably what, a 20-30% advantage in power/performance tops.

I'm not a big enough nerd to tell if the instruction set makes a difference, though from what I remember reading, supposedly x86 decoding isn't a significant bottleneck. And Intel has more than enough money to fund development; it's not like you can throw more people and money at it and get linearly better processors.

My completely uninformed take is that Intel hosed up in execution/management of the process transition, and that derailed the architecture development too, since poo poo had to be backported to work on 14nm instead of developing something new.

IIRC their main cash cow is in servers, and that's not going anywhere just yet, but AMD is now a real competitor and Amazon is rolling out their Arm stuff. So they really need to get their poo poo together quickly.

BobHoward posted:

Intel hosed around with StrongARM for a while, renaming it to Xscale and going through a few generations. In 2006 they sold the entire business unit to Marvell, just in time to avoid being part of the smartphone gold rush. (NB: they already had Xscale products targeted at phones at the time of the sale!)

When they realized what a terrible mistake they'd made, Intel spent a ton of money trying to cram x86 IP, in the form of Atom derivatives, into phones. It did not work out. Performance wasn't great, especially because running ARM binaries through a translation layer (which was a thing Intel had to do) did not produce great results.

I'm also remembering something about it being difficult for Intel to get phone OEMs to deal with any of the pain of shipping a non-ARM Android image. Apple can manage to pull off these architecture switches because they own both the hardware and the OS, but in PC and Android world, it's a nightmare because there's so many different organizations with different capabilities and directions.

The products were so unappealing that Intel had to subsidize their way into a handful of phone models. Sales were predictably tepid, and eventually Intel gave up on shoveling money into a fiery inferno.
Hey, I have, on a shelf behind me, a PDA with an Xscale CPU in it. At the time it seemed insane to have 300 MHz in your hand, though obviously it wasn't as fast as a P2 at that frequency.

The Atom phones were pretty competitive at the lower-midrange, from what I remember: https://www.youtube.com/watch?v=v_vttBfgt04&t=213s

BlankSystemDaemon
Mar 13, 2009



mobby_6kl posted:

I don't think Intel makes any CPUs at TSMC, and TSMC's process is clearly ahead of Intel's current 14nm+++++ trash. But this is probably what, a 20-30% advantage in power/performance tops.

I'm not a big enough nerd to tell if the instruction set makes a difference, though from what I remember reading, supposedly x86 decoding isn't a significant bottleneck. And Intel has more than enough money to fund development; it's not like you can throw more people and money at it and get linearly better processors.

My completely uninformed take is that Intel hosed up in execution/management of the process transition, and that derailed the architecture development too, since poo poo had to be backported to work on 14nm instead of developing something new.

IIRC their main cash cow is in servers, and that's not going anywhere just yet, but AMD is now a real competitor and Amazon is rolling out their Arm stuff. So they really need to get their poo poo together quickly.
Yeah, I was misremembering something - there's been a pretty persistent rumour about Intel contracting with TSMC, but apparently it's only supposed to be a one-off thing.

Tangentially related, there are also some pretty interesting patents to be found.

Gwaihir
Dec 8, 2009
Hair Elf
Intel did post a 10% YoY revenue decline for their datacenter group I think, while AMD at the same time posted a 116% gain. The difference is that Intel has something like 18 billion in revenue per quarter while AMD has 3. So there's a large buffer there.

BlankSystemDaemon
Mar 13, 2009



Part of that 10% YoY decline is almost certainly from the hyperscalers :yaybutt:
Those exist purely because of the commodification of compute as a resource, with oversubscription being the price customers ultimately pay.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

mobby_6kl posted:

IIRC their main cash cow is in servers, and that's not going anywhere just yet, but AMD is now a real competitor and Amazon is rolling out their Arm stuff. So they really need to get their poo poo together quickly.

So, about Amazon's ARM offering and how good it is:

If your application runs with no issues on Amazon's ARM CPUs (Graviton 2), then you are probably able to realize an immediate ~30-50% cost savings by doing so. Groups spending a lot of money on AWS services that have Graviton instance types available should be cutting over ASAP. It's a huge win for them, and I'd expect the vast majority of AWS' own hosted services to be based on it shortly.

Amazon has been showing tons of performance numbers on applications that scale well with more cores showing how you can choose between keeping the same instance size and getting better performance, or going down an instance size and keeping similar performance. How on earth did they get access to a world-killing ARM CPU? Because for more than a decade now AWS has been selling "1vCPU = 1 hyper-thread, half of a physical core" and now for their own ARM chips, "1vCPU=1 physical core". So the actual headliner here is that "2 ARM cores are moderately faster at multi-threaded tasks than a single Intel or AMD x86 core", and Amazon is willing to rent you those 2 ARM cores for less money than a single x86 core. Amazon is still going to clean shop with this offering, and it should become the default choice for most AWS applications.
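
If anyone wants to check the vCPU accounting themselves, the EC2 API spells it out per instance type. A quick sketch assuming the AWS CLI is configured; the instance types here are just examples:
code:
# vCPUs vs. physical cores vs. threads per core for a same-size x86 (m5)
# and Graviton2 (m6g) instance: on m5 a vCPU is one hyper-thread, on m6g
# a vCPU is a whole physical core.
aws ec2 describe-instance-types --instance-types m5.xlarge m6g.xlarge \
  --query 'InstanceTypes[].[InstanceType, VCpuInfo.DefaultVCpus, VCpuInfo.DefaultCores, VCpuInfo.DefaultThreadsPerCore]' \
  --output table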

BlankSystemDaemon
Mar 13, 2009



Twerk from Home posted:

Amazon has been showing tons of performance numbers on applications that scale well with more cores showing how you can choose between keeping the same instance size and getting better performance, or going down an instance size and keeping similar performance. How on earth did they get access to a world-killing ARM CPU? Because for more than a decade now AWS has been selling "1vCPU = 1 hyper-thread, half of a physical core" and now for their own ARM chips, "1vCPU=1 physical core". So the actual headliner here is that "2 ARM cores are moderately faster at multi-threaded tasks than a single Intel or AMD x86 core", and Amazon is willing to rent you those 2 ARM cores for less money than a single x86 core. Amazon is still going to clean shop with this offering, and it should become the default choice for most AWS applications.
It's pretty obvious that one SMT queue that appears like a core doesn't behave like a physical core when it comes to cache contention, which is the single biggest issue with :yaybutt:

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Intel already sends a lot of chipset-scale trailing-process work to TSMC; the new part is sending the high-performance/high-margin HPC GPU out. That is probably going to TSMC, but rumors are they're looking at Samsung, which would be hilarious and on brand with the last half decade of Intel decision making.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Twerk from Home posted:

So, about Amazon's ARM offering and how good it is:

If your application runs with no issues on Amazon's ARM CPUs (Graviton 2), then you are probably able to realize an immediate ~30-50% cost savings by doing so. Groups spending a lot of money on AWS services that have Graviton instance types available should be cutting over ASAP. It's a huge win for them, and I'd expect the vast majority of AWS' own hosted services to be based on it shortly.

Amazon has been showing tons of performance numbers on applications that scale well with more cores showing how you can choose between keeping the same instance size and getting better performance, or going down an instance size and keeping similar performance. How on earth did they get access to a world-killing ARM CPU? Because for more than a decade now AWS has been selling "1vCPU = 1 hyper-thread, half of a physical core" and now for their own ARM chips, "1vCPU=1 physical core". So the actual headliner here is that "2 ARM cores are moderately faster at multi-threaded tasks than a single Intel or AMD x86 core", and Amazon is willing to rent you those 2 ARM cores for less money than a single x86 core. Amazon is still going to clean shop with this offering, and it should become the default choice for most AWS applications.

Also, there's a rumor that AWS isn't charging enough to recoup the full development costs for Graviton. Even at hyperscaler discount price points, CPUs are the biggest cost per node, and AWS would rather capture more of that. Gv2 isn't about any specific ARM architecture benefits but is priced as a lever to get customers to adopt multiarch.

Nuvia’s whole pitch is using the Apple chip design philosophy in the data center; whether there’s enough volume left over to be economically viable is the question.

BlankSystemDaemon
Mar 13, 2009



PCjr sidecar posted:

Also, there's a rumor that AWS isn't charging enough to recoup the full development costs for Graviton. Even at hyperscaler discount price points, CPUs are the biggest cost per node, and AWS would rather capture more of that. Gv2 isn't about any specific ARM architecture benefits but is priced as a lever to get customers to adopt multiarch.

Nuvia’s whole pitch is using the Apple chip design philosophy in the data center; whether there’s enough volume left over to be economically viable is the question.
I can't help but feel that everyone wins by AWS forcing multiarch support, so that's one of the few good things that Amazon has done - but I'm worried that people won't take it as a hint to adopt the BSD practice of keeping as much of the codebase as possible machine-independent.

I gotta say, I'm a bit confused by Nuvia, because they let Jon Masters go, and he was the VP of Software and really knows his poo poo.

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues
Question about Xeons and strictness of RAM compatibility:

I have an i3-6320 in my fileserver. It's been a rather nice value server CPU with a high clock and ECC support. I've been pondering swapping it for a Xeon E3-1270/80 v6 as a low-effort upgrade to give myself two more cores to better support more simultaneous users, if I can snag one on eBay for a good price.

My i3-6320 lists compatible RAM as:
DDR4-1866/2133, DDR3L-1333/1600

The Xeon, however, lists:
DDR4-2400, DDR3L-1866

The 64 GB of RAM in the machine is all DDR4-2133 ECC. Motherboard is a Supermicro MBD-X11SSM-F-O.

Will the Xeon be happy to run the RAM at 2133 or must it be exactly 'to spec'?

I know consumer SKUs are flexible with this, but the server stuff seems stricter so I've been trying to find out before going ahead and buying one, as the upgrade probably isn't worth it if I have to go through re-buying all the RAM. (At that point, maybe just wait a year or two and buy a whole new mobo+CPU+RAM).

BlankSystemDaemon
Mar 13, 2009



admiraldennis posted:

Question about Xeons and strictness of RAM compatibility:

I have an i3-6320 in my fileserver. It's been a rather nice value server CPU with a high clock and ECC support. I've been pondering swapping it for a Xeon E3-1270/80 v6 as a low-effort upgrade to give myself two more cores to better support more simultaneous users, if I can snag one on eBay for a good price.

My i3-6320 lists compatible RAM as:
DDR4-1866/2133, DDR3L-1333/1600

The Xeon, however, lists:
DDR4-2400, DDR3L-1866

The 64 GB of RAM in the machine is all DDR4-2133 ECC. Motherboard is a Supermicro MBD-X11SSM-F-O.

Will the Xeon be happy to run the RAM at 2133 or must it be exactly 'to spec'?

I know consumer SKUs are flexible with this, but the server stuff seems stricter so I've been trying to find out before going ahead and buying one, as the upgrade probably isn't worth it if I have to go through re-buying all the RAM. (At that point, maybe just wait a year or two and buy a whole new mobo+CPU+RAM).
Memory DIMMs, like most everything else including PCI buses, SATA/SAS buses, networking, and so on and so forth, contain firmware which, during POST, is used for training to establish the maximum speed that everything can agree to run at together.
The main thing to keep an eye on with respect to memory is whether or not it's registered/buffered, in addition to the usual things such as ECC and CAS latency.
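
If you want to see what the board actually trained your current DIMMs to before spending any money, this is enough on Linux (a sketch; assumes dmidecode is installed and run as root):
code:
# "Speed" is the DIMM's rated speed from SPD; "Configured Memory Speed"
# ("Configured Clock Speed" on older dmidecode) is what it trained to at POST.
sudo dmidecode --type memory | grep -E 'Locator|Type:|Speed'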

BlankSystemDaemon fucked around with this message at 03:20 on Nov 24, 2020

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.

silence_kit posted:

The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified.

1) Independent of the chip design, Apple is using a better Instruction Set Architecture than Intel. Of course computer-oriented people would want to attribute the increase in performance to a computer-level idea.

2) The Apple chip is just better engineered than the Intel chips, independent of the different constraints put on the different products.

3) Apple, being a computer system company, and not a computer chip company, does not need to make a profit on the sale of their computer chips. Because of this, they are able to make their chips much bigger than Intel, and this gives them an advantage over Intel independent of how well-engineered are the two chip designs.

4) TSMC has a manufacturing process advantage over Intel.

I don't know which of the 4 reasons is most important. Maybe they are all important. Thoughts?
I found this Anandtech deepdive really interesting. It certainly gives the impression of a lot of 1 and 2.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive

A good instruction set, starting from a clean-sheet, a good focus on modern requirements and full control of the software stack so they can include novel features and have them fully used from day 1.

Pablo Bluth fucked around with this message at 21:34 on Nov 25, 2020

BlankSystemDaemon
Mar 13, 2009



Pablo Bluth posted:

I found this Anandtech deepdive really interesting. It certainly gives the impression of a lot of 1 and 2.
https://www.anandtech.com/show/16226/apple-silicon-m1-a14-deep-dive

A good instruction set, starting from a clean-sheet, a good focus on modern requirements and full control of the software stack so they can include novel features and have them fully used from day 1.
Saying one ISA is better than the other is complete nonsense when the one that's supposed to be RISC implements a JavaScript instruction.

They're both effectively CISC-like ISAs running on out-of-order super-scalar RISC-like CPU cores with branch prediction, micro-op fusion, and caching, and there's a lot to be said for the kinds of optimization that can be done in the compiler when the same company is working on both the chip and the compiler.
Heck, just look at the ~250-page PDF on microarchitecture optimization for x86-likes, the ~150-page PDF on x86-like assembly language optimization, or the ~400-page PDF for keeping track of latency, throughput, and micro-operation breakdowns for individual instructions.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
I won't claim to know the finer details of what counts as ISA and what counts as something else, but the Anandtech article talks about how unusually wide the architecture is and the challenge in going the same route with x86.

x86 is now 40 years of making improvements fit with legacy decisions. Even if that just adds the odd percent penalty here and there, compound interest can quickly add up. Would anyone making a clean sheet design today do something even remotely close to what x86 has become?
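
To put a number on the compounding (illustrative arithmetic only, nothing measured): an odd 1% penalty carried across forty rounds of legacy accommodation works out to close to 50% overall.
code:
# 1% compounded over 40 rounds:
echo "1.01^40" | bc -l
# prints ~1.4888, i.e. roughly a 49% cumulative penalty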

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
BSD superfan, your posts make me facepalm so often

the "Javascript instruction" is just a slightly specialized version of FP-to-int. FP conversion is important when implementing JS because JS has no native integer data type, just IEEE double precision floating point.

Adding this variant of FP-to-int in ARM v8.3 wasn't much of a change from v8.2. There were already two versions (vector and scalar) of what FJCVTZS does, which is to convert floating point to signed fixed point with rounding towards zero. The "javascript" version just gives you the same result mod 2^32. For some reason that's important to JS interpreters and JITs.
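
For what it's worth, you can check whether the core you're on actually advertises the instruction; the feature bit is exposed in the usual places (assuming Linux on ARM64 or macOS on Apple Silicon):
code:
# Linux/arm64: "jscvt" shows up in the Features line if FJCVTZS is supported
grep -o -m1 jscvt /proc/cpuinfo
# macOS on Apple Silicon: prints 1 if the FEAT_JSCVT extension is present
sysctl hw.optional.arm.FEAT_JSCVT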

The ARM64 instruction set has a lot of opcodes, but most of them are like this: variations on a theme easily supported with the same hardware. Some of the 1980s RISC pioneers have backronymed RISC to "Reduced Instruction Set Complexity" rather than the original "Reduced Instruction Set Computer," because their primary concern was always ease of implementation, not counting up the number of instructions and deciding that if the result was too many it wasn't a RISC any more.

CFox
Nov 9, 2005
ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86.

FlapYoJacks
Feb 12, 2009

CFox posted:

ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86.

VIA. :v:

Gucci Loafers
May 20, 2006
Probation
Can't post for 2 hours!

Space Gopher posted:

The really interesting thing here is going to be the server-side transition to ARM. Right now, one of the perceived issues in cloud ARM adoption is that developers would like to be able to run the same binaries and containers locally and server-side, whether that's a specific interpreter/runtime or their own compiled binaries, to make deployment and troubleshooting easier. When local dev is x86-64, then there's a barrier in going to ARM hosts. But, local dev on an ARM Macbook flips the script; now, it'll be easier to deploy to AWS Graviton or whatever equivalent Azure comes up with when they finally roll out ARM VMs.

Are there any projections on how long this transition would take? I assume decades.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

CFox posted:

ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86.

As anyone who has worked with NVIDIA as a partner would tell you: lol.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Gabriel S. posted:

Are there any projections on how long this transition would take? I assume decades.

Depends what you mean by "transition." There are small and developing customers who would probably jump on a 30-50% cost AWS cost reduction as quickly as they could--a couple of years at most. But if you're talking "big iron" type customers who have entrenched products that would take massive engineering efforts to port over to a different arch, yeah, a decade+ if ever.

The real question will be if AWS can maintain that price delta over time. If not, then it saps a lot of the reason for going with ARM in the first place. But then, AWS gets to set the prices however they like, and presumably have a vested interest in driving ARM adoption, sooo...

CFox posted:

ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86.

Good luck waiting for anyone to whip up a commercially available ARM chip that can compete on the desktop (or laptop, for that matter). Apple ain't sharing, and no one else seems to give a rat's rear end about anything outside the datacenter or cell-phones. To the extent that x86 belongs to Intel and AMD, it's almost irrelevant because the development costs for any competitive chip are so astronomical that no one else is bothering except for custom solutions for their own internal products.

DrDork fucked around with this message at 04:22 on Nov 26, 2020

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Actually, the best shot for competitive high-performance ARM is NVIDIA. That's where they'll need to aim to satisfy their HPC ambitions. lovely phone efficiency cores with GPUs attached aren't going to be suitable for the HPC roles they're envisioning, so they will be pushing their design teams to follow Apple in high-perf cores.

If the merger gets turned down - yeah, nobody is doing high-perf ARM cores that will be available to the commercial market; that dream is over. In a lot of ways NVIDIA buying ARM is really necessary to break the x86 duopoly. Nobody else is going to do it (and offer their chips to the open market).

Paul MaudDib fucked around with this message at 04:42 on Nov 26, 2020

Shipon
Nov 7, 2005

CFox posted:

ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86.

isn't it locked to one company now, nvidia?

also gently caress ARM i ain't installing wack rear end emulators to run old games

BlankSystemDaemon
Mar 13, 2009



BobHoward posted:

The ARM64 instruction set has a lot of opcodes, but most of them are like this: variations on a theme easily supported with the same hardware. Some of the 1980s RISC pioneers have backronymed RISC to "Reduced Instruction Set Complexity" rather than the original "Reduced Instruction Set Computer," because their primary concern was always ease of implementation, not counting up the number of instructions and deciding that if the result was too many it wasn't a RISC any more.
The point was that it's still an out-of-order super-scalar CPU with something that has a very loose relationship with the original purpose behind RISC, which came down to cost-effectiveness.
Which should sound familiar, because that was the point of the paper that later gave birth to what's now known as Moore's Law, where one of the most-used words is 'cost'.
Also, leave my Britney^wmy gimmick alone!

CFox posted:

ARM is a better instruction set if only because it’s not locked behind 2 drat companies. Intel and AMD can always pivot to making ARM chips, you can’t say the same for making x86.
And ARM is locked behind half a dozen, maybe a dozen companies - for all intents and purposes, it doesn't really change anything whether it's 2 or 12 companies.
While they're not nearly as advanced as, for example, the offerings from Intel, AMD, ARM or any of the licensees, out-of-order super-scalar RISC-V CPUs do exist.


DrDork posted:

Good luck waiting for anyone to whip up a commercially available ARM chip that can compete on the desktop (or laptop, for that matter). Apple ain't sharing, and no one else seems to give a rat's rear end about anything outside the datacenter or cell-phones. To the extent that x86 belongs to Intel and AMD, it's almost irrelevant because the development costs for any competitive chip are so astronomical that no one else is bothering except for custom solutions for their own internal products.
The Graviton2 cores that have been doing pretty well for :yaybutt: are just a bunch of ARM Neoverse N1s, so I don't know if it's as far off as people seem to think - although you're right, it isn't available right now.

Coffee Jones
Jul 4, 2004

16 bit? Back when we was kids we only got a single bit on Christmas, as a treat
And we had to share it!

silence_kit posted:

The YOSPOS AMD or Apple thread had an interesting discussion about this subject. Why is the Apple laptop chip so good? Four possible reasons were identified.

1) Independent of the chip design, Apple is using a better Instruction Set Architecture than Intel. Of course computer-oriented people would want to attribute the increase in performance to a computer-level idea.

2) The Apple chip is just better engineered than the Intel chips, independent of the different constraints put on the different products.

3) Apple, being a computer system company, and not a computer chip company, does not need to make a profit on the sale of their computer chips. Because of this, they are able to make their chips much bigger than Intel, and this gives them an advantage over Intel independent of how well-engineered are the two chip designs.

4) TSMC has a manufacturing process advantage over Intel.

I don't know which of the 4 reasons is most important. Maybe they are all important. Thoughts?

The new memory controller allows a single core to saturate the available bandwidth to RAM, allowing for very fast single-core performance. DDR5 provides an additional 50% bandwidth, so 2021/2022 will see an even larger jump.

Posted from the Apple M1 thread - here are all the CPU bugs listed on my system that need workarounds:
code:
cat /proc/cpuinfo | grep bugs
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit srbds
Those definitely add up, and Apple doesn't have to mitigate these in software ... yet.
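
Slightly friendlier view of the same thing, if you also want to see which of those the kernel is actively mitigating (Linux only):
code:
# One line per known vulnerability: whether this CPU is affected and which
# mitigation, if any, is enabled.
grep . /sys/devices/system/cpu/vulnerabilities/*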

BlankSystemDaemon
Mar 13, 2009



Coffee Jones posted:

Posted from the Apple M1 thread - here are all the CPU bugs listed on my system that need workarounds:
code:
cat /proc/cpuinfo | grep bugs
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa itlb_multihit srbds
Those definitely add up, and Apple doesn't have to mitigate these in software ... yet.
:stonklol:

Zeta Acosta
Dec 16, 2019

#essereFerrari
how much mileage can i get from a Xeon E3-1246 v3? i had an i5-4430 for seven years and never had a problem with it

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy

Zeta Acosta posted:

how much mileage can i get from a Xeon E3-1246 v3? i had an i5-4430 for seven years and never had a problem with it

Depends on what you're trying to do with it. You could probably stretch it out another year if you're gaming and/or doing light productivity, and you have a decent GPU.

Zeta Acosta
Dec 16, 2019

#essereFerrari

gradenko_2000 posted:

Depends on what you're trying to do with it. You could probably stretch it out another year if you're gaming and/or doing light productivity, and you have a decent GPU.

1080p gaming with an rx 590. word, spss and atlas-ti, it's all i use

Prescription Combs
Apr 20, 2005
   6

Zeta Acosta posted:

how much mileage can i get from a Xeon E3-1246 v3? i had an i5-4430 for seven years and never had a problem with it

That Xeon is basically the Haswell i7 variant of your i5. You'll be getting Hyper-Threading and a good bit higher clock speeds. It will probably last quite a while.

i5 https://ark.intel.com/content/www/us/en/ark/products/75036/intel-core-i5-4430-processor-6m-cache-up-to-3-20-ghz.html
Xeon https://ark.intel.com/content/www/us/en/ark/products/80916/intel-xeon-processor-e3-1246-v3-8m-cache-3-50-ghz.html
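
If you want to sanity-check the topology before and after the swap, the usual quick look on Linux (just a sketch, same command either way):
code:
# CPU model plus core/thread layout - the i5-4430 should report 4 cores with
# 1 thread per core, the E3-1246 v3 4 cores with 2 threads per core.
lscpu | grep -E 'Model name|Socket|Core|Thread'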
