Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
SemiAccurate claims to have good info about Rome/Zen2 from a recent leak. Claims it's pretty exceptional. Anyone know what it is exactly? Paywall and such.

Azuren
Jul 15, 2001

I've been looking to jump ship and build all AMD for my next rig and I'm wondering what the sweet spot of waiting is. I'd originally planned for say late 2019/sometime in 2020, around whenever Cyberpunk 2077 comes out. Should I be looking at whenever Zen 2 and Navi come out? My ideal goal is to run max settings at 1440p and ~100hz (I'll be looking at a 144hz freesync monitor but in practice 100 fps is plenty imo). I know we're operating on a lot of rumors and speculation about stuff that hasn't been released yet, just trying to come up with a plan.

ZobarStyl
Oct 24, 2005

This isn't a war, it's a moider.

Azuren posted:

I've been looking to jump ship and build all AMD for my next rig and I'm wondering what the sweet spot of waiting is. I'd originally planned for say late 2019/sometime in 2020, around whenever Cyberpunk 2077 comes out. Should I be looking at whenever Zen 2 and Navi come out? My ideal goal is to run max settings at 1440p and ~100hz (I'll be looking at a 144hz freesync monitor but in practice 100 fps is plenty imo). I know we're operating on a lot of rumors and speculation about stuff that hasn't been released yet, just trying to come up with a plan.
At worst, Zen 2 is merely 'meh' in terms of power savings and raw IPC, so I'd say you're easily on track for a solid CPU option in 2019/2020, but betting on GPU release schedules is a losing man's game. Having said that, a 2700X and a Vega64 would run nearly to the specs you require today. The good news is that waiting today is a wonderful, special time of post-buttcoin oversupply, so delay as long as you are able.

K8.0
Feb 26, 2004

Her Majesty's 56th Regiment of Foot
Hey guys I'm here today with a quick tutorial on how to build a PC for a game.

Step 1 : Forget about buying a computer until a month or two before the game comes out.

Seriously, the only reason to keep track of hardware you aren't going to use right away is that it's fun. Never ever buy poo poo in advance. Yeah, there are plenty of times when prices have gone up, but the odds are that buying hardware you aren't going to use right away is just throwing money away.

ufarn
May 30, 2009
I wanted to upgrade for like three or four years, but there just weren't any games I genuinely wanted to play it on, so I just waited until my components basically looked like they were about to eat it.

People thought Black Ops 4 was going to be the next game, and look what happened. Star Citizen, too. Same thing might be the case for Cyberpunk 2077, we still don't know whether the game will actually be good.

Think of it as buying clothes: don't buy based on what you might wear some other time or once you lose weight; make up your mind about what the situation is right now.

Also, you can always buy components piecemeal (PSUs, cases, fans, and cooling first), so you still get something to play with. And something's always going to go south, which is when you realize you have to spend some more money on cables, thermal paste, and all sorts of expenses you didn't account for at first.

I spent time waiting for things like Intel iGPU support for Variable Refresh Rate which they promised like four years ago, along with other features, and look how much that amounted to.

Better to decide based on the available facts, not what might be.

E: And as someone who waited that long to buy, there was never really a great time to buy. I got an eight-core processor with sixteen threads to show for it this year, but the RAM, SSD, and GPU prices sure weren't fun.

ufarn fucked around with this message at 16:31 on Oct 29, 2018

Azuren
Jul 15, 2001

Oh yeah, I'm definitely not buying anything ahead of time. I won't buy anything until I'm ready to build, was mostly just trying to get an idea for what will be coming down the pipe around that timeframe. :)

Zedsdeadbaby
Jun 14, 2008

You have been called out, in the ways of old.
The best time to buy new hardware is tomorrow

This continues to apply tomorrow

Such is life

Icept
Jul 11, 2001
Anyone think AMD is going to follow Intels footsteps and do a Ryzen 9 this next generation?

SwissArmyDruid
Feb 14, 2014

by sebmojo
I mean, they narrowed the product stack with Zen+, opting not to float a 2800/X product. I don't see what artificially inserting a new product tier in between Threadripper and R7 does for AMD, aside from wasting money keeping up with the Intelses. Intel's already in a bad enough spot as it is, with everything below i9 now being non-hyperthreaded parts.

There's a saying in basketball: Don't bail out a struggling shooter on the opposite team. (Meaning: don't give them fouls and let them step to the free throw line.)

SwissArmyDruid fucked around with this message at 17:50 on Oct 29, 2018

NewFatMike
Jun 11, 2015

Strictly speaking, that's essentially what Threadripper is. The i9 nomenclature is their HEDT range these days. Their last set was Skylake X which had an i7 "entry" level part for that platform, much in the same way that there's an 8C/16T Threadripper part.

It's all pretty much workstation parts without the enterprise price tag (and concomitant service guarantees), which works out nicely because enthusiasts are more likely to build and service their own machines.

SwissArmyDruid
Feb 14, 2014

by sebmojo
Hmm.... you know, you're right. i9s are about as thermally constrained as all get-out, what with tech sites pretty much coming to the consensus that solder was done out of necessity rather than as a concession to popular demand. I don't know why I even assumed there would be any more thermal headroom for an -X tier part on top of what's already there. Just the idea of a 200W Intel processor in this day and age is :stonklol: kinds of hilarious. The FX-9590 or whatever was bonkers enough as it was, with its 220W TDP.

I dunno, marketing is a stupid thing.

SwissArmyDruid fucked around with this message at 19:14 on Oct 29, 2018

ZobarStyl
Oct 24, 2005

This isn't a war, it's a moider.

NewFatMike posted:

Strictly speaking, that's essentially what Threadripper is. The i9 nomenclature is their HEDT range these days. Their last set was Skylake X which had an i7 "entry" level part for that platform, much in the same way that there's an 8C/16T Threadripper part.

It's all pretty much workstation parts without the enterprise price tag (and concomitant service guarantees), which works out nicely because enthusiasts are more likely to build and service their own machines.
Precisely right, that's the beauty of TR as a pseudo-server product. It has all the oomph without the less necessary 'true server' differentiators. It'll scale better than the i9s do because Intel has to keep making new custom chips for each core config, whereas all AMD has to do is die-shrink a few rounds of Ryzens for the next 5 years. I'd happily buy a 32-core TR5 @ 7nm with ~150W draw, should physics allow for such a thing.

Arzachel
May 12, 2012

Combat Pretzel posted:

SemiAccurate claims to have good info about Rome/Zen2 from a recent leak. Claims it's pretty exceptional. Anyone know what it is exactly? Paywall and such.

From what I gather, it's just a rehash of the 8+1 core ccx rumors that were floating a while ago.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
It's more than that, they're also claiming a server socket change-up as well; my guess would be something like SP3+ enabling hexadecachannel memory, while still retaining backwards/forwards compatibility. It would indicate tiny rear end dies though, as it'd be a full node shrink, so a theoretical 16C/32T part isn't impossible on say AM4 or AM4+ (AM4+ enabling quad channel) through MCM. X499/599 enables octochannel for TR. It'd be one hell of a way to sell new motherboards and have the A520/B550/X570 be more than a BIOS update.

Lol, could AMD cheat bandwidth out for a native dual channel APU this way? Like, just MCM on another die which behaves as a 128-bit memory controller to help utilize the traces and feed the iGPU? More or less expensive/practical than just eDRAM, HBM or GDDR6?

EmpyreanFlux fucked around with this message at 00:34 on Oct 30, 2018

Khorne
May 1, 2002

Arzachel posted:

From what I gather, it's just a rehash of the 8+1 core ccx rumors that were floating a while ago.
It's still a 4-core CCX; they just botched the terminology.

NewFatMike
Jun 11, 2015

EmpyreanFlux posted:

It's more than that, they're also claiming server socket change up as well, my guess would be for something like SP3+ enabling Hexdecachannel memory, while still retaining backwards/forward compatibility. It would indicate tiny rear end dies though as it'd be a full node shrink, so a theoretical 16C/32T part isn't impossible on say AM4 or AM4+ (AM4+ enabling quad channel) through MCM. X499/599 enables octochannel for TR. It'd be one hell of a way to sell new motherboards and have the A520/B550/X570 be more than a BIOS update.

Lol, could AMD cheat bandwidth out for a native dual core APU this way? Like, just MCM on another die which behaves as a 128-bit memory controller to help utilize the traces and feed the iGPU? More or less expensive/practical than just eDRAM, HBM or GDDR6?

If the Kaby Lake G parts are anything to go by, HBM would probably be the way to go.

I'd want to see something stupid with that TR4/SP3 package size, like a socketed GPU on a dual socket board. What would be the use case? No clue. Lower profile for server blades, maybe?


PC LOAD LETTER
May 23, 2005
WTF?!

EmpyreanFlux posted:

More or less expensive/practical than just eDRAM, HBM or GDDR6?
I think they'd have to go with HBM of some sort for it to make sense to feed an iGPU and give something closer to mid-range-ish dGPU performance. Or at least give enough performance over current APUs and Intel iGPUs to stand out anyway.

GDDR6 on package or on mobo probably won't work out for the same reasons GDDR5 didn't on AM3. eDRAM has the bandwidth and reasonable power usage but costs too much to be practical.

Arzachel
May 12, 2012

EmpyreanFlux posted:

It's more than that, they're also claiming server socket change up as well, my guess would be for something like SP3+ enabling Hexdecachannel memory, while still retaining backwards/forward compatibility. It would indicate tiny rear end dies though as it'd be a full node shrink, so a theoretical 16C/32T part isn't impossible on say AM4 or AM4+ (AM4+ enabling quad channel) through MCM. X499/599 enables octochannel for TR. It'd be one hell of a way to sell new motherboards and have the A520/B550/X570 be more than a BIOS update.

Lol, could AMD cheat bandwidth out for a native dual core APU this way? Like, just MCM on another die which behaves as a 128-bit memory controller to help utilize the traces and feed the iGPU? More or less expensive/practical than just eDRAM, HBM or GDDR6?

Less practical/more expensive. Extra memory channels pull significantly more power and require much more complex motherboard designs.

LRADIKAL
Jun 10, 2001

Fun Shoe
Didn't we figure out that DDR4 has as much bandwidth as the old edram has? Is it latency that's the issue?
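For back-of-envelope purposes, peak DDR bandwidth is just transfer rate × bus width × channels, which does put dual-channel DDR4 in Crystal Well territory. A minimal sketch (the ~50 GB/s per direction eDRAM figure is my assumption from memory, not from the thread):

```python
# Rough peak-bandwidth math (assumed figures, not benchmarks):
# dual-channel DDR4-3200 vs the ~50 GB/s per direction commonly
# quoted for Intel's Crystal Well eDRAM.
def ddr_peak_gbs(mt_per_s, bus_bits, channels):
    """Theoretical peak: transfers/s * bytes per transfer * channels."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

ddr4_dual = ddr_peak_gbs(3200, 64, 2)
print(f"DDR4-3200 dual channel: {ddr4_dual:.1f} GB/s")  # 51.2 GB/s
```

So on raw bandwidth it's roughly a wash; whether eDRAM still wins comes down to latency, which the peak number doesn't capture at all.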

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!
Just a quick note: I hosed up and said "dual core", not "dual channel" as I intended, but I think everyone got that anyway.

PC LOAD LETTER posted:

I think they'd have to go with HBM of some sort for it to make sense to feed a iGPU and give something closer to mid range-ish dGPU performance. Or at least give enough performance over current APU's and Intel iGPU's to stand out anyways.

GDDR6 on package or on mobo probably won't work out for the same reasons GDDR6 didn't on AM3. eDRAM has the bandwidth and reasonable power usage but costs too much to be practical.

HBM means either EMIB (which they may not have patents to) or TSVs (which induce a much higher failure rate for finished products), whereas a dedicated companion memory controller to feed the iGPU needs only an MCM, is my understanding. Is MCM vs TSV relatively similar risk in production?

Arzachel posted:

Less practical/more expensive. Extra memory channels pull significantly more power and require much more complex motherboard designs.

The point I raised, though, was that in theory AM4+ X570 boards could come with quad channel as a feature for dual-die R7s (again, just dumb speculation based on the 8+1 rumor). That would hopefully take care of the inherent board complexity issue, but how much more power would a dedicated 128-bit IMC pull compared to, say, HBM? And what's the relative complexity of implementing each, along with failure rates in manufacturing and overall cost? Is it even feasible for a companion memory controller to exist in this fashion? My understanding is a tentative yes, as it wouldn't be that different from chipsets pre-Core 2/Athlon 64.

Sorry, it's a kind of hopelessly large theoretical question and I don't mean to dump it at anyone's feet.

Drakhoran
Oct 21, 2012

Adored has apparently heard some of the same rumors as Charlie Demerjian:

https://www.youtube.com/watch?v=K4xctJOa6bQ

He now sounds confident that Rome will have eight 8-core CCXs around a central controller chip, for 64 cores per CPU. He also said something about Rome servers having a single NUMA domain per socket, which I guess would mean the memory controller is moved from the CCX to the central chip. Is that feasible?
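As a sanity check on the rumored layout (my arithmetic, assuming SMT stays 2-way): eight 8-core chiplets gives 64C/128T, and with the memory controller on the central die every core would see memory at roughly the same distance, which is what "one NUMA domain per socket" implies.

```python
# Rumored Rome layout: 8 chiplets x 8 cores around a central I/O die.
chiplets, cores_per_chiplet, smt = 8, 8, 2
cores = chiplets * cores_per_chiplet
threads = cores * smt
print(f"{cores}C/{threads}T per socket")  # 64C/128T per socket
```

On a Linux box you could check the NUMA claim with `numactl --hardware`, which would report a single node per socket instead of the four that first-gen Epyc shows.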

OhFunny
Jun 26, 2013

EXTREMELY PISSED AT THE DNC
Has Adored actually gotten scoops on anything?

karoshi
Nov 4, 2008

"Can somebody mspaint eyes on the steaming packages? TIA" yeah well fuck you too buddy, this is the best you're gonna get. Is this even "work-safe"? Let's find out!

Drakhoran posted:

Adored has apparently heard some of the same rumors as Charlie Demerjian:

https://www.youtube.com/watch?v=K4xctJOa6bQ

He now sounds confident that Rome will have eight 8 core CCXs around a central controller chip for 64 cores per CPU. He also said something about Rome servers having a single NUMA domain per socket which I guess would mean the memory controller is moved from the CCX to the central chip. Is that feasible?

I've got vague memories of ye olde HyperTransport needing a "directory" service for the memory hierarchy to scale beyond a few chips. The central die could hold the directory service, four InfinityFabric 2 (now silkier!) links, and a handful or two of memory channels. It shouldn't be outrageously big, right? There have been old GPUs with 512-bit RAM interfaces, starting with the ATI 2900XT in some prehistoric process node. Although I believe physical interfaces don't scale down that much, or maybe it was the analog parts of them.

Broose
Oct 28, 2007
I am eager for zen 2 news. Or just CPU news in general I guess? All this technical stuff is super interesting to me.

Yudo
May 15, 2003

OhFunny posted:

Has Adored actually gotten scoops on anything?

Recently he got some leaks regarding Turing. I don't remember it being that big of a deal, but a scoop it was.

repiv
Aug 13, 2009

Yudo posted:

Recently he got some leaks regarding Turing. I don't remember it being that big of a deal, but a scoop it was.

Who can forget when Adored leaked the very real $3000 Titan RTX, and RTX 2070 with 7GB of VRAM.

Yudo
May 15, 2003

repiv posted:

Who can forget when Adored leaked the very real $3000 Titan RTX, and RTX 2070 with 7GB of VRAM.

I thought it was something more substantial and accurate, but I guess not.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

repiv posted:

Who can forget when Adored leaked the very real $3000 Titan RTX, and RTX 2070 with 7GB of VRAM.

totally nailed the 13-18% performance gain gen-on-gen

"maybe as much as 20% though!"

Paul MaudDib fucked around with this message at 19:26 on Oct 30, 2018

NewFatMike
Jun 11, 2015

AMD apparently have new Vega 16 and 20 GPUs for mobile with HBM 2:

https://www.amd.com/en/graphics/radeon-pro-vega-20-pro-vega-16

They're already in the new MacBooks. Still 14nm, but I'll be interested in seeing how it does. As a more compute focused architecture, I'm kinda surprised it's not in any mobile workstations.

A Bad King
Jul 17, 2009


Suppose the oil man,
He comes to town.
And you don't lay money down.

Yet Mr. King,
He killed the thread
The other day.
Well I wonder.
Who's gonna go to Hell?
Hi thread :wave:

What's the general consensus on those laptop Raven Ridge APUs, and why are there so few (read as "none") high-end SKUs and only one mid-tier?

I don't need to buy, but I want to buy. Is there a basic timeline outside of crystal ball stuff for when another series of APUs will launch (12nm?), or is AMD happy with its position?


Klyith
Aug 3, 2007

GBS Pledge Week

A Bad King posted:

What's the general consensus on those laptop Raven Ridge APUs, and why are there so few (read as "none") high-end SKUs and only one mid-tier?

They're not power efficient enough for road-warrior battery life, which keeps them out of the high end ultraportables. And the GPU, while better than intel stuff, isn't good enough for high end gaming laptops. I think they're great as a platform for companion laptops, ie for people with a primary desktop. But most people for whom the laptop is a secondary machine are sticking to an under $1k purchase.


(Also laptops were always the place intel was most pushy about locking out AMD :tinfoil: :tinfoil: :tinfoil:)

A Bad King
Jul 17, 2009



sincx posted:

They're about twice as fast as Intel's iGPU but around half the speed of a MX130. I'm guessing that's not fast enough for a high-end SKU.

I can't seem to find the MX150 or 130 in anything ~13" with a not-trash IPS screen with greater than 80% sRGB... and Intel never puts their GT3e/Iris tech in anything under ~25 watts this cycle. I was hoping for a unicorn...

Just want a low watt, <$1.2k, <3lb Lilliputian machine that does graphics gooder than a UHD 620, with decent build, >5hrs of web surfing and USB-C.... I have a monster desktop at home, but why can't I do work on the airplane tray table?

Edit: Guess I'll wait for the next revisions, refurb my Yoga with its rotted-out battery, and hope. Thanks.

A Bad King fucked around with this message at 23:44 on Oct 30, 2018

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!

sincx posted:

They're about twice as fast as Intel's iGPU but around half the speed of a MX150. I'm guessing that's not fast enough for a high-end SKU.

Careful with this: there are configurations of the MX150 that are clocked ~500MHz slower, with the reduction in performance that brings (like ~30% IIRC).

PC LOAD LETTER
May 23, 2005
WTF?!

EmpyreanFlux posted:

HBM means either EMIB (which they may not have patents too) or TSV (and that induces much higher failure rate for finished products), where as a dedicated companion memory controller to feed the iGPI needs only an MCM, is my understanding.
I think you're correct here.

It's just that the cost and power aren't worthwhile to add a dedicated companion memory controller to the package just to sit between the APU die and the on-package/mobo memory, even if total package power weren't an issue (it's a big one, unfortunately) and package pins weren't an issue (I don't think there are enough pins available to add on-mobo DRAM of any sort and maintain a similar level of I/O to what AM4 currently has; maybe if you scrapped nearly all the PCIe lanes it could work? I don't think they'd do that though).

I think if AMD wanted to put some sort of higher-than-system-DRAM bandwidth on package to boost iGPU performance for their APUs, they'd just engineer another memory controller into the APU die itself. They did do an on-die GDDR5 controller for some of the next-to-last or last gen AM3 APUs, I believe, but they never actually added the GDDR5 to the package. It's unknown why.

EmpyreanFlux posted:

Is MCM vs TSV relatively similar risk in production?
I've no clue, unfortunately. Just vague rumors that it still costs more than anyone would like to do products that utilize TSVs. MCMs still seem to be the default approach for doing multi-die in cost-effective mass production (i.e. APU-like pricing) and probably will remain so for at least a few more years.

EmpyreanFlux posted:

in theory AM4+ X570 boards come with quad channel as a feature for dual die R7s
The pins for AM4 aren't there for quad channel memory; they'd really need a whole new socket for that, and AMD has said they're sticking with AM4 until DDR5 comes out, so a 2020+ time frame.

Drakhoran posted:

He also said something about Rome servers having a single NUMA domain per socket which I guess would mean the memory controller is moved from the CCX to the central chip. Is that feasible?
Losing the on-die memory controller would add latency to each core's memory accesses, but inter-die latency is already pretty high and an issue for Epyc/TR MCM CPUs, and going to one big memory controller could offer enough of a benefit to inter-die latency/bandwidth to offset that negative effect and be advantageous overall for most workloads.

But that is on paper, so to speak.

It's all about trade-offs, and without benches it could really go either way in the real world. Good implementation will be critical for that approach to work well.

PC LOAD LETTER fucked around with this message at 00:26 on Oct 31, 2018
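To make that trade-off concrete, here's a toy model with made-up latencies (none of these numbers come from the thread or from AMD): on a four-die MCM with per-die controllers, a random access is local only a quarter of the time, while a central controller makes every access uniform.

```python
# Toy latency model (all numbers assumed, for illustration only).
local_ns, remote_ns = 70, 130   # per-die controllers: fast local, slow remote
uniform_ns = 95                 # central I/O-die controller: uniform distance
p_local = 0.25                  # on a 4-die package, 1 in 4 accesses is local

per_die_avg = p_local * local_ns + (1 - p_local) * remote_ns
print(per_die_avg, uniform_ns)  # 115.0 95 -> central controller wins here
```

Real behavior depends on access patterns and how NUMA-aware the scheduler is, which is exactly why it could go either way without benches.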


Klyith
Aug 3, 2007

GBS Pledge Week

sincx posted:

If you are willing to format and do a clean install of windows:

https://hothardware.com/reviews/huawei-matebook-d-review

That one doesn't meet his qualifications; the screen is far short of >80% sRGB.


(99% of laptops are not gonna meet that list of qualifications regardless of CPU. A pro-quality screen on a sub-3lb machine for under $1200? Good luck.)

A Bad King
Jul 17, 2009



Klyith posted:


(99% of laptops are not gonna meet that list of qualifications regardless of CPU. A pro-quality screen on a sub-3lb machine for under $1200? Good luck.)

I'm not asking for Adobe. Just sRGB, which is more common than you think.

Apparently the D has 73% sRGB and poor color gamut. But for ~$550.....dang.

SwissArmyDruid
Feb 14, 2014

by sebmojo

A Bad King posted:

I can't seem to find the MX150 or 130 in anything ~13" with a not-trash IPS screen with greater than 80% sRGB... and Intel never places their GT3e/Iris tech in anything under ~25watts this cycle. I was hoping for a unicorn...

Just want a low watt, <$1.2k, <3lb Lilliputian machine that does graphics gooder than a UHD 620, with decent build, >5hrs of web surfing and USB-C.... I have a monster desktop at home, but why can't I do work on the airplane tray table?

Edit: Guess I'll wait for the next revisions, refurb my Yoga with it's rotted out battery and hope. Thanks.

I think this is another case for the HP Envy x360 13z, but I am growing more and more leery of recommending that by the day. My brother texted me for help the other day, and not only are there STILL no drivers or BIOS updates or anything of any sort on HP's support pages for his 2-in-1, but HP's own self-updating tool tried to download and install an Intel BIOS this past weekend, stopped only by the updater utility not being able to parse the update.

I'm starting to think there are corporate drama and shenanigans surrounding this product going on at HP.
