Cygni
Nov 12, 2005

raring to post

quote:

“The company’s 7nm-based CPU product timing is shifting approximately six months relative to prior expectations,” Intel said. “The primary driver is the yield of Intel’s 7nm process, which based on recent data, is now trending approximately twelve months behind the company’s internal target.”

Intel delaying a new process??? NO WAY!!!

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
12 months behind the internal target which is 24 months behind their projections which is 36 months behind their roadmap. Or something like that.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Is it? Really? Do I need to hunt for specific modules with fabled B-die or some poo poo like that? I'd rather not. The fastest ECC UDIMMs I see are rated 2666MHz. The 3200MHz ones are typically RDIMM. Guess what I can't use.
The kind of machines that use ECC DIMMs at speeds that fast are also the kind of machines that need oodles and oodles of memory, so of course it's RDIMM most of the time.
3200MHz UDIMMs do exist, though. There's even more than one SKU.
EDIT: Ignore the fact that they're listed as sampling - cloud customers have been buying most of the stock, but a friend of mine received some he ordered a few weeks ago.

BlankSystemDaemon fucked around with this message at 23:44 on Jul 23, 2020

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

D. Ebdrup posted:

The kind of machines that use ECC DIMMs at speeds that fast are also the kind of machines that need oodles and oodles of memory, so of course it's RDIMM most of the time.
3200MHz UDIMMs do exist, though. There's even more than one SKU.
EDIT: Ignore the fact that they're listed as sampling - cloud customers have been buying most of the stock, but a friend of mine received some he ordered a few weeks ago.

ECC requires an extra 12.5% raw DRAM, and 25% for DDR5

That's the main reason it isn't used in consumer chips. Nobody but nerds wants to pay for it
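
To sanity-check those figures against the standard DIMM layouts (a rough sketch; the 72-bit DDR4 and 2x40-bit DDR5 widths are the usual ECC DIMM geometry, not something from the post):

```python
# ECC overhead as extra raw DRAM per DIMM.
# DDR4: one 64-bit data channel gains 8 ECC bits -> 72-bit DIMM.
# DDR5: the DIMM splits into two 32-bit subchannels, and each gains
#       its own 8 ECC bits -> 2 x 40-bit.

def ecc_overhead(data_bits: int, ecc_bits: int) -> float:
    """Extra DRAM needed for ECC, relative to the data bits alone."""
    return ecc_bits / data_bits

print(f"DDR4: {ecc_overhead(64, 8):.1%}")  # 12.5%
print(f"DDR5: {ecc_overhead(32, 8):.1%}")  # 25.0% (per subchannel)
```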

shrike82
Jun 11, 2005

What's up with Intel? Is it a management problem or did they make a wrong bet on tech sometime back and it's taking time to turn the ship around?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

D. Ebdrup posted:

EDIT: Ignore the fact that they're listed as sampling - cloud customers have been buying most of the stock, but a friend of mine received some he ordered a few weeks ago.
I think I've seen these in the past. I have yet to find a place over here to procure those. I don't have business contacts to get these. Alas, we'll see how things are when I upgrade again.

movax
Aug 30, 2008

shrike82 posted:

What's up with Intel? Is it a management problem or did they make a wrong bet on tech sometime back and it's taking time to turn the ship around?

Most of column A, a lot of putting eggs in one basket, a little of column B. I guess thinking about it, this should not have been a surprise.

2000: process will scale to 5 GHz and beyond, gimme dat fat rear end pipeline.
2020: process will continue to shrink tick tock tick tock! (More like 2016, but whatever)

evilweasel
Aug 24, 2002

shrike82 posted:

What's up with Intel? Is it a management problem or did they make a wrong bet on tech sometime back and it's taking time to turn the ship around?

it's certainly both; you don't get to the multi-year chicken loving that was the 10nm node without bad management

so it is not shocking that even if the 7nm node suffers from lesser technical problems, the wildly incompetent management is still loving some chickens

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot

shrike82 posted:

What's up with Intel? Is it a management problem or did they make a wrong bet on tech sometime back and it's taking time to turn the ship around?

Intel was completely dominant with both their architecture and their process tech. They were confident their process lead would continue with 10nm and decided to stop building 14nm capacity since 10nm was just around the corner. When they had problems, they decided to have faith and not build any more 14nm capacity. Repeat for years and years and... now they're looking like absolute garbage and who knows if they'll ever recover in terms of foundry tech, and a ton of design effort has been wasted on products intended for processes that never came to fruition.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
It's a little funny because Ryzen 3100s and 3300x's are impossible to find here in the PH and even 3600s are hard to get, but Intel 10th gen is available

If I didn't already have a B350 I might have considered it

AARP LARPer
Feb 19, 2005

THE DARK SIDE OF SCIENCE BREEDS A WEAPON OF WAR

Buglord
So what are we looking at until Intel finds its footing again: 3 years? 5?

Anime Schoolgirl
Nov 28, 2002

Combat Pretzel posted:

Is it? Really? Do I need to hunt for specific modules with fabled B-die or some poo poo like that? I'd rather not. The fastest ECC UDIMMs I see are rated 2666MHz. The 3200MHz ones are typically RDIMM. Guess what I can't use.

they're selling second-hand EUDIMM 3200 modules

https://www.amazon.com/2x32GB-DDR4-3200-PC4-25600-Unbuffered-Memory/dp/B084D9ZHR5/

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

K8.0 posted:

Intel was completely dominant with both their architecture and their process tech. They were confident their process lead would continue with 10nm and decided to stop building 14nm capacity since 10nm was just around the corner. When they had problems, they decided to have faith and not build any more 14nm capacity. Repeat for years and years and... now they're looking like absolute garbage and who knows if they'll ever recover in terms of foundry tech, and a ton of design effort has been wasted on products intended for processes that never came to fruition.

meanwhile Geekbench 5 just got ported to natively run on ARM/Apple Silicon and the 2.5ghz A12Z that's in the new Mac devkits (also the iPad Pro but with 6gb RAM vs 16) is apparently hitting 1098 single core and 4555 multi-core (vs the non-native of 800 on single-core and 2600 on multi-core).

For comparison, the i7-1060NG7 (1.2 GHz) in the top of the line 2020 MacBook Air scores 1139 on single-core and 3034 on multi-core (quad). The TOTL i7-1068NG7 at 2.3ghz in the just-updated 13" MacBook Pro scores 1229 single-core and 4512 multi-core (quad).

For the Geekbench 5 Metal API test, the GPU in the devkit A12Z hits 12610, vs 9920 for the i7-1068NG7 in the MBP.

The Apple SoC has a 15 watt TDP, vs 45 for the i7 in the MBP. It's built using TSMC's 7nm node, and Apple just placed a massive order to use their 5nm process for 2021's chips.

This is a repurposed iPad chip, and nobody knows what this means for actual shipping desktops and notebooks. Will we see an Apple chip running at 45 watts with 3x the performance of competing X86 hardware? Will we see equally-performant chips with 30% as much power draw and heat output?

ARM is up for sale and the scuttlebutt is that Nvidia (and possibly Microsoft) are interested (Apple has apparently already passed. They probably don't need it at this point and I imagine that buying ARM would open them up to some serious antitrust litigation, seeing as how ARM chips power 99.999% of Android systems as well).

TLDR: The industry's going to collectively decide that x86's days are numbered within the next 2-3 years, and I don't think Intel has an out. AMD either, but they don't have all of their eggs in that particular basket.

SwissArmyDruid
Feb 14, 2014

by sebmojo

evilweasel posted:

it's certainly both; you don't get to the multi-year chicken loving that was the 10nm node without bad management

so it is not shocking that even if the 7nm node suffers from lesser technical problems, the wildly incompetent management is still loving some chickens

Whatever the case, I've always been of the opinion that C-level executives are not interchangeable. Finance stays in their lane, otherwise they just cut costs they don't understand. Marketing stays in their lane, or everything becomes spin. Unless the executive has spent any time actually making the thing that the company's known for, they're doomed to fail.

And as far as I know, Bob Swan has never NOT been a finance lizard.

Kerbtree
Sep 8, 2008

BAD FALCON!
LAZY!
Just remember, the thermals of MacBooks are notorious. Doubly so if you’re comparing to a box with proper fans.

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

Kerbtree posted:

Just remember, the thermals of MacBooks are notorious. Doubly so if you’re comparing to a box with proper fans.

Geekbench 5 scores produced by A12Z chips literally inside iPads aren’t that far off from the devkit’s scores. 5-10% difference, IIRC.

gradenko_2000
Oct 5, 2010

HELL SERPENT
Lipstick Apathy
What would it even look like for the desktop segment to move to ARM from x86? Am I gonna have to move to Linux?

Don't get me wrong, I kinda really like what Lubuntu could do with my old laptop with 4 gigs of RAM (before I upgraded it) but I'm a nerd and a lot of people aren't.

trilobite terror
Oct 20, 2007
BUT MY LIVELIHOOD DEPENDS ON THE FORUMS!

gradenko_2000 posted:

What would it even look like for the desktop segment to move to ARM from x86? Am I gonna have to move to Linux?

Don't get me wrong, I kinda really like what Lubuntu could do with my old laptop with 4 gigs of RAM (before I upgraded it) but I'm a nerd and a lot of people aren't.

I assume it starts with Apple making a big splash with some benchmarks and fawning press and then a whole bunch of shareholders across the computing and software industries start barking at their respective CEOs about how they don’t have their own version of Apple’s shiny new toys

also it’s not just the switch to ARM architecture with Apple silicon- the “neural engine” and on-chip machine learning cores are coming over, PowerVR-based GPU is coming over. Even if the PC space stays on x86, we’re going to see some big changes to how CPUs get made because there’s no way that Microsoft is going to let themselves be left behind on those spec sheet points

At minimum, expect to see Intel chips shipping with their own ML cores in.....like 2026 I guess (lol)?

trilobite terror fucked around with this message at 12:07 on Jul 25, 2020

shrike82
Jun 11, 2005

Err... Intel has pretty good support for ML inferencing (AVX2 and their OpenVINO platform).
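
For anyone curious what that looks like in practice, here's a minimal CPU-inference sketch against the 2020-era OpenVINO Inference Engine Python API (the model files, input name, and input shape are placeholders, and exact method names vary between OpenVINO releases):

```python
import numpy as np
from openvino.inference_engine import IECore  # 2020-era API; newer releases moved to openvino.runtime

ie = IECore()
# "model.xml"/"model.bin" are placeholder IR files exported via the model optimizer.
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")  # CPU plugin uses AVX2/AVX-512 where available

dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder NCHW input
result = exec_net.infer(inputs={"data": dummy})       # "data" stands in for the model's real input name
print({name: out.shape for name, out in result.items()})
```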

repiv
Aug 13, 2009

avx512 already has reduced precision ops meant for ML, and the next iteration is adding matrix FMA ops akin to Nvidia's tensor cores

problem is their consumer chips still don't have avx512 because it was tied up with 10nm and lol at how that turned out
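
To make "reduced precision ops" concrete: the VNNI-style idea is multiplying 8-bit values and accumulating into 32-bit integers so the sums don't overflow. A conceptual numpy sketch of the arithmetic (not the actual instruction):

```python
import numpy as np

# Model of an int8 dot-product-accumulate: widen to int32 before summing,
# which is what the hardware op does internally.
a = np.random.randint(0, 128, size=64).astype(np.int8)     # e.g. quantized activations
w = np.random.randint(-128, 128, size=64).astype(np.int8)  # e.g. quantized weights

acc = np.dot(a.astype(np.int32), w.astype(np.int32))       # multiply-accumulate in 32-bit
print(acc)
```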

Dr. Fishopolis
Aug 31, 2004

ROBOT

repiv posted:

avx512 already has reduced precision ops meant for ML, and the next iteration is adding matrix FMA ops akin to Nvidia's tensor cores

I actually don't get AVX512. Intel is never, ever going to catch up with GPUs for floating point math, why would you push your cpu to do a thing your GPU can do in half the time for half the power?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Ok Comboomer posted:

meanwhile Geekbench 5 just got ported to natively run on ARM/Apple Silicon and the 2.5ghz A12Z that's in the new Mac devkits (also the iPad Pro but with 6gb RAM vs 16) is apparently hitting 1098 single core and 4555 multi-core (vs the non-native of 800 on single-core and 2600 on multi-core).

For comparison, the i7-1060NG7 (1.2 GHz) in the top of the line 2020 MacBook Air scores 1139 on single-core and 3034 on multi-core (quad). The TOTL i7-1068NG7 at 2.3ghz in the just-updated 13" MacBook Pro scores 1229 single-core and 4512 multi-core (quad).

For the Geekbench 5 Metal API test, the GPU in the devkit A12Z hits 12610, vs 9920 for the i7-1068NG7 in the MBP.

The Apple SoC has a 15 watt TDP, vs 45 for the i7 in the MBP. It's built using TSMC's 7nm node, and Apple just placed a massive order to use their 5nm process for 2021's chips.

TLDR: The industry's going to collectively decide that x86's days are numbered within the next 2-3 years, and I don't think Intel has an out. AMD either, but they don't have all of their eggs in that particular basket.

You're missing a few bits here. The i7-1068NG7 is a 28W chip, not 45W, and is an Apple-specific part, so who knows what they decided to optimize for there. The generally released i7-1065g7 is a 15W-25W part that hits ~1300 single-core and 4000+ multi-core, and the 10710u can hit even higher. So while the A12z is certainly impressive, it's nowhere near as head-and-shoulders better as you're describing, especially considering the node advantage Apple has there. To that point, AMD also seems to have some ideas about mobile systems, and is crushing it on desktop and server loads in ways Apple isn't even considering trying to match right now. Really what you're saying here is "turns out Intel sitting with their thumb up their butts for 5+ years has given a lot of people an opportunity to catch up" more than "lol x86 sux." (though I do agree that for power-limited systems like laptops, ARM/RISC has some built-in advantages that are hard to match with CISC chips)

For ARM taking over in the next 2-3 years, I think you're being very optimistic here, and ignoring some pretty important points:

Apple has a special advantage where they can just say "gently caress all you users / developers / everyone, we're doing it this way now" and get away with it. Basically no one else can, for a lot of reasons, so even if ARM was straight up, no contest better in every way, it'd still take longer than that for the industry to bother getting onboard. Right now that's not clearly the case: Apple has some impressive performance for mobile and power-limited platforms, but how well can that scale up against an x86 chip that's plugged into the wall? I'm sure we'll see, but scaling up isn't always easy.

Apple is the only one with ARM SoCs or CPUs that are really in any way compelling against x86 for your home user. Yes, the HPC stuff from Fujitsu is interesting, but that's not at all aimed at consumers. No one else is really working on consumer-grade ARM chips, either, AFAIK. Intel/AMD would take years to get a RISC chip developed (if they would even consider it in the first place), Apple is sure as gently caress not going to license their tech to anyone or provide chips for competitors, so you're looking at a solution from....Qualcomm, I guess? Samsung? They've dabbled with low-end laptop-like SoCs, but they've got a long way to go before they're up to the level of the A12/13.

And then there's the compatibility issue. I would not bet any money on Microsoft being able to replicate Apple's claimed success in emulation layer software to allow you to run x86 on ARM. Not just because Microsoft moves slower, but because they have an enormously heavier lift there: Apple is basically designing the translation/emulation software to target a single, limited family of chips, which they control, on a specific, limited set of motherboards, which they also control. Microsoft controls none of that, and would have to take into account all the hardware compatibility issues that come with using a limitless combination of random poo poo. That ain't easy, so performance is likely going to suffer heavily. And in the meantime, while Apple can basically strong arm their entire ecosystem into jumping to ARM because they say so, everyone else is going to continue developing for the platforms their customers already have.

Which brings us to the real issue: backwards compatibility and software. It is enormously difficult to get businesses to transition to anything, even when it's a straight upgrade with minimal concerns. Consider how long it has taken to get businesses on board with Windows XP, Windows 7, Windows 10, x86_64, anything. And now you're trying to move them to an entirely new architecture that's absolutely going to break their special-sauce software that barely runs now and would cost $$$$ to redevelop for ARM? It could happen, but it would take decades.

DrDork fucked around with this message at 16:47 on Jul 25, 2020

CFox
Nov 9, 2005
There won’t be a switchover in 2-3 years but there will be a strong push on the windows side. Microsoft would absolutely love to ditch all the legacy cruft that’s slowing them down and OEMs would love to buy cheaper and simpler SOCs and lower their costs. I think apple is going to come out strong with good performance and excellent battery life and just make their offerings straight up better than things on the windows side, as long as you’re not trying to game on them or need some software that Mac doesn’t have.

repiv
Aug 13, 2009

Dr. Fishopolis posted:

I actually don't get AVX512. Intel is never, ever going to catch up with GPUs for floating point math, why would you push your cpu to do a thing your GPU can do in half the time for half the power?

CPU SIMD still has its place; offloading to the GPU is a high-latency operation that only pays off if you have a sufficiently large batch of work to do all at once. If you have a lot of small batches then the CPU can probably finish them before the GPU would even start executing.

My understanding is that ML is mostly about crunching through fat batches of data though so I dunno how useful the AVX512 ML extensions are :shrug:
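
The tradeoff repiv describes can be sketched with a toy cost model (all numbers invented purely for illustration):

```python
# Toy model: the GPU pays a fixed offload cost (kernel launch + transfer)
# but is faster per element; the CPU SIMD path starts immediately.
FIXED_GPU_OVERHEAD = 20_000.0   # arbitrary time units
PER_ELEM_CPU = 1.0
PER_ELEM_GPU = 0.05

def cpu_time(n): return n * PER_ELEM_CPU
def gpu_time(n): return FIXED_GPU_OVERHEAD + n * PER_ELEM_GPU

for n in (100, 1_000, 10_000, 100_000):
    winner = "CPU" if cpu_time(n) < gpu_time(n) else "GPU"
    print(f"batch {n:>7}: {winner} wins")
# Break-even sits around FIXED_GPU_OVERHEAD / (PER_ELEM_CPU - PER_ELEM_GPU) elements;
# below that, the CPU finishes first.
```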

FuturePastNow
May 19, 2014


Dr. Fishopolis posted:

I actually don't get AVX512. Intel is never, ever going to catch up with GPUs for floating point math, why would you push your cpu to do a thing your GPU can do in half the time for half the power?

it lets Intel put out a benchmark that makes their CPU look better than AMD at something

Fantastic Foreskin
Jan 6, 2013

A golden helix streaked skyward from the Helvault. A thunderous explosion shattered the silver monolith and Avacyn emerged, free from her prison at last.

The legacy cruft is Windows' main advantage over any potential competitors, at least in the highly lucrative business space.

Dr. Fishopolis
Aug 31, 2004

ROBOT

repiv posted:

CPU SIMD still has its place; offloading to the GPU is a high-latency operation that only pays off if you have a sufficiently large batch of work to do all at once. If you have a lot of small batches then the CPU can probably finish them before the GPU would even start executing.

That makes sense to a point, but then wouldn't the thermal and silicon budget be better spent on including actual tensor cores (or equivalent)? Doesn't even need to be a full-blown iGPU. It just feels like Intel is trying to halfass a problem that's already solved.

e: assuming they're doing this in good faith and it isn't just for benchmarks

Dr. Fishopolis fucked around with this message at 19:15 on Jul 25, 2020

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Some Goon posted:

The legacy cruft is Windows' main advantage over any potential competitors, at least in the highly lucrative business space.

Yes, absolutely

CFox
Nov 9, 2005

Some Goon posted:

The legacy cruft is Windows' main advantage over any potential competitors, at least in the highly lucrative business space.

It is but even if it were dropped tomorrow businesses aren’t going to switch to macs or chrome books or linux. Windows would still be the default if only for AD and everything around that. Microsoft wants you to ditch desktop apps and use azure for everything anyways.

wet_goods
Jun 21, 2004

I'M BAAD!

SwissArmyDruid posted:

Whatever the case, I've always been of the opinion that C-level executives are not interchangeable. Finance stays in their lane, otherwise they just cut costs they don't understand. Marketing stays in their lane, or everything becomes spin. Unless the executive has spent any time actually making the thing that the company's known for, they're doomed to fail.

And as far as I know, Bob Swan has never NOT been a finance lizard.

The bad MGMT is at the top of the manufacturing org and a few tiers of awful VPs there, not the C-level. The biggest sin out of the C-level people is that they didn't fire about three tiers of managers in the manufacturing org when 10nm failed, long before Bob was CEO. Bob should 100% axe people at the top of manufacturing now that 7nm is going to be a zoo, because that's on him.

One more thing, it's the C-level and sales people that have actually kept things growing at a really good rate for the past few years to cover for manufacturing gently caress ups.

wet_goods fucked around with this message at 20:35 on Jul 25, 2020

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:

An architecture change would just mean JIT translation or a VM these days; all that tech is mature now.

WhyteRyce
Dec 30, 2001

BS has probably been better than BK, who supposedly had engineering chops

Kerbtree
Sep 8, 2008

BAD FALCON!
LAZY!

CFox posted:

There won’t be a switchover in 2-3 years but there will be a strong push on the windows side. Microsoft would absolutely love to ditch all the legacy cruft that’s slowing them down and OEMs would love to buy cheaper and simpler SOCs and lower their costs. I think apple is going to come out strong with good performance and excellent battery life and just make their offerings straight up better than things on the windows side, as long as you’re not trying to game on them or need some software that Mac doesn’t have.

People are already just about making it work on the Surface Pro X, which is ARM with a compat layer. Windows HAL's a thing.

https://youtu.be/BceSt_Mx8Hk

WhyteRyce
Dec 30, 2001

Also I thought there was a bunch of turnover at the top levels of manufacturing, but a lot of that turnover was driven or initiated by Murthy since he's overseeing all of that side

Fame Douglas
Nov 20, 2013

by Fluffdaddy

DrDork posted:

And then there's the compatibility issue. I would not bet any money on Microsoft being able to replicate Apple's claimed success in emulation layer software to allow you to run x86 on ARM. Not just because Microsoft moves slower, but because they have an enormously heavier lift there: Apple is basically designing the translation/emulation software to target a single, limited family of chips, which they control, on a specific, limited set of motherboards, which they also control. Microsoft controls none of that, and would have to take into account all the hardware compatibility issues that come with using a limitless combination of random poo poo. That ain't easy, so performance is likely going to suffer heavily. And in the meantime, while Apple can basically strong arm their entire ecosystem into jumping to ARM because they say so, everyone else is going to continue developing for the platforms their customers already have.

Windows for ARM already has an x86 emulation layer, and 64-bit support is being developed right now. It does seem to be slow, though who knows how it actually stacks up to Apple's solution.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Fame Douglas posted:

Windows for ARM already has an x86 emulation layer, and 64-bit support is being developed right now. It does seem to be slow, though who knows how it actually stacks up to Apple's solution.

Yeah, and that's kinda my point: it can be done, sorta, but the sheer amount of space that thing has to cover means it'll either be slow, only support a fraction of hardware, or require the sort of development skill and dedication that you're not finding outside of FANG. And while that's the case, it's gonna really crush the draw of ARM: it's nice to have hardware that's 30% more power efficient native, but if most of your software has to run through a kludgy emulation that sucks up that 30% and maybe more, that's not an obvious win.

And that's again assuming ARM chips with performance that don't exist (except Apple).

shrike82
Jun 11, 2005

RE ML and CPUs, inferencing is moving to INT8 quantized operations.

GPUs will always be better for training but in production when you're serving inferencing queries to end-users, people shift to CPU since they're cheaper and scale up better.
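
For anyone unfamiliar, INT8 quantization in its simplest form (symmetric, per-tensor) just maps floats onto 8-bit integers via a scale factor; a rough numpy sketch:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric per-tensor quantization: float32 -> (int8 values, scale)."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

x = np.random.randn(8).astype(np.float32)
q, s = quantize_int8(x)
print(x)
print(dequantize(q, s))  # close to x, within quantization error
```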

SwissArmyDruid
Feb 14, 2014

by sebmojo
So we *won't* be headed towards a runaway machine learning algorithm initiating a self-improving feedback loop through continuous end-user queries, achieving sentience, and obliterating all organic life, because we're cheap motherfuckers, is what you're saying. =(

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

also worth mentioning that AVX-512's direct antecedent dates to 2006 with Larrabee, and the first pivot was trying to solve scientific computing problems (make big Linpack number) rather than ML. they've added several sets of avx512 instructions for ML as GPGPU's advantages in ML became clear

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

CFox posted:

It is but even if it were dropped tomorrow businesses aren’t going to switch to macs or chrome books or linux. Windows would still be the default if only for AD and everything around that. Microsoft wants you to ditch desktop apps and use azure for everything anyways.

AD isn't all that special, in my eyes, compared to things that have come before. Microsoft won out because of inertia. They won out because the server was graphical and easy to configure, just like the client. They won out because they kept supporting legacy software.

Azure is meaningless in the context of Windows on the desktop. If everything is going to be web applications, who cares what system you're running? If you can use any old web browser to authenticate and do your work, you're going to see a lot more fragmentation in the workplace, with people wanting Macs, for example.
