CAPS LOCK BROKEN
Feb 1, 2006


RME posted:

Has AMD indicated that Zen2 will have their TSX equivalent?
It's a really marginal use case, but the PS3 emulator can actually leverage it for notable performance boosts. Then again, instruction sets don't exactly build hype when you're trying to sell your new stack.

From what I've seen online, Zen2 is not going to have TSX


RME
Feb 20, 2012

Oh well
Doing some more digging, it seems the TSX-specific improvements have diminished significantly in the last few months anyway, but there are still issues with Zen/Zen+ that hold Ryzen chips back, and hopefully Zen2 will address those.

MaxxBot
Oct 6, 2003

Charlie is saying Threadripper before October

https://twitter.com/CDemerjian/status/1139994286630588417?s=19

Seamonster
Apr 30, 2007

Jesus, AMD really is going to dominate all the news cycles until winter.

Malcolm XML
Aug 8, 2009

Intel Gen 11 graphics is apparently better than Ryzen APUs, is what I'm hearing. So that's two generations of silicon (Intel 10nm = foundry 7nm) to get a single-generation boost, since Intel 10nm clocks are really bad.

Doesn't bode well for Xe offering real competition, and Navi is a damp squib due to high prices and low perf.

Nvidia super gonna clean house.

Lambert
Apr 15, 2018

I'll believe it when I see it; so far, Intel iGPUs have been terribad.

Khorne
May 1, 2002

Malcolm XML posted:

Nvidia super gonna clean house.
RTX is mediocre enough that AMD knocking on the door of early 2016 products is enough to give Nvidia competition. I'm surprised there aren't more news stories about the crazy failure rates for 2080 Tis, too.

I hope nvidia releases an actual new gen and doesn't try to price gouge so I can upgrade from my 1070. Right now the best upgrade seems to be trying to nab a used 1080 Ti when navi drops or when the super series drops provided it isn't actually good.

I get that I am a weird minority. I want a mid-to-high-end graphics card, but I turn graphics settings down and am not aiming for 4k. Their strategy with the RTX series seems to be to release a minor upgrade so they can keep selling the old 2016 cards they overproduced for crypto, or take home a high premium for a fairly minor upgrade.

The 1660 Ti is cool and good, but I have a 1070, which is roughly equal. Outside of the 1660 Ti, we are still at Q2 2016 price:perf levels in Q3 2019. This is the most stagnant the GPU market has ever been.

Khorne fucked around with this message at 02:49 on Jun 16, 2019

GRINDCORE MEGGIDO
Feb 28, 1985


A shame DLSS is utterly lame. RTX really underdelivered.

Farmer Crack-Ass
Jan 2, 2001


Khorne posted:

I hope nvidia releases an actual new gen and doesn't try to price gouge

pfffffthahahahahahahaha

Malcolm XML
Aug 8, 2009


GRINDCORE MEGGIDO posted:

A shame DLSS is utterly lame. RTX really underdelivered.

For gamers, yes. Ray tracing will be good as a production tool, or for cool effects once PS5/Scarlett get it.

FWIW it looks like Nvidia needed to use the tensor cores for something game-related, and/or the same structures speed up ray tracing, saving die area.

Malcolm XML
Aug 8, 2009


Khorne posted:

RTX is mediocre enough that AMD knocking on the door of early 2016 products is enough to give Nvidia competition. I'm surprised there aren't more news stories about the crazy failure rates for 2080 Tis, too.

I hope nvidia releases an actual new gen and doesn't try to price gouge so I can upgrade from my 1070. Right now the best upgrade seems to be trying to nab a used 1080 Ti when navi drops or when the super series drops provided it isn't actually good.

I get that I am a weird minority. I want a mid-to-high-end graphics card, but I turn graphics settings down and am not aiming for 4k. Their strategy with the RTX series seems to be to release a minor upgrade so they can keep selling the old 2016 cards they overproduced for crypto, or take home a high premium for a fairly minor upgrade.

The 1660 Ti is cool and good, but I have a 1070, which is roughly equal. Outside of the 1660 Ti, we are still at Q2 2016 price:perf levels in Q3 2019. This is the most stagnant the GPU market has ever been.

AMD is for some reason not trying to regain market share in GPUs. This will let Nvidia continue to set pricing; now that 12nm yields must be higher they can be a tad more aggressive to maintain market share.


I suspect AMD wanted to show off Navi 20 but couldn't

shrike82
Jun 11, 2005
There's a limit to how much Nvidia can price gouge - we saw that with the dismal sales of the 2000 series.

K8.0
Feb 26, 2004

Turing isn't price gouging. They're super expensive because they're absolutely enormous parts. The RTX 2060/2070 die is nearly as big as the 1080 Ti die, and the 2080 Ti is 60% bigger than the 1080 Ti, despite being on a slightly smaller node.
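For what it's worth, the arithmetic behind those claims checks out against the commonly cited die sizes. A quick sketch (areas are approximate figures from public spec listings):

```python
# Approximate die areas from public spec listings (mm^2)
dies = {
    "GP102 (1080 Ti)":       471,
    "TU106 (RTX 2060/2070)": 445,
    "TU102 (RTX 2080 Ti)":   754,
}

gp102 = dies["GP102 (1080 Ti)"]
for name, area in dies.items():
    print(f"{name}: {area} mm^2 ({area / gp102:.0%} of GP102)")
# TU106 lands at ~94% of GP102 ("nearly as big"),
# TU102 at ~160% ("60% bigger").
```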

Been saying it since Turing came out and I'll keep saying it: the GPU buying situation will not improve until Ampere on 7nm. The Supers will be a better deal because, as production matures, Nvidia is able to get slightly more out of their wafers, but it's not going to be a huge leap. Just hope Samsung's 7nm process is good for GPUs and that Ampere comes sooner rather than later.
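On the wafer point: the standard back-of-the-envelope Poisson yield model shows why a maturing process helps a 754mm^2 die far more than a small one. The defect densities below are made-up illustrative numbers (foundries don't publish them), so treat this as a sketch of the shape of the curve, not real yields:

```python
import math

def die_yield(area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: expected fraction of defect-free dies."""
    return math.exp(-(area_mm2 / 100.0) * defects_per_cm2)

# Hypothetical defect densities for an immature vs. a mature process
for d0 in (0.2, 0.1):
    print(f"D0={d0}/cm^2: TU102 (754mm^2) ~{die_yield(754, d0):.0%} good, "
          f"TU106 (445mm^2) ~{die_yield(445, d0):.0%} good")
# Halving the defect density roughly doubles the big die's yield
# (22% -> 47%) while the smaller die only goes 41% -> 64%.
```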

CAPS LOCK BROKEN
Feb 1, 2006


Lambert posted:

I'll believe it when I see it; so far, Intel iGPUs have been terribad.

Intel Iris Pro was probably their first iGPU that didn't have the performance of wet cardboard.

Spiderdrake
May 12, 2001



K8.0 posted:

Turing isn't price gouging. They're super expensive because they're absolutely enormous parts. The RTX 2060/2070 die is nearly as big as the 1080 Ti die, and the 2080 Ti is 60% bigger than the 1080 Ti, despite being on a slightly smaller node.
Yeah, I was under the impression the design was backported after they weren't able to get the process they wanted ready in time, or something of the sort?

On the other hand, I am sure there is a little room in the price, but people imagining Super is going to be the massive saving grace are, as you said, not going to be pleased.

eames
May 9, 2009

Toms Hardware posted:

Chiang also says that AMD is changing its marketing strategy to focus on being a premium brand, as opposed to being the value alternative.
"Lots of people ask me, what do you think about today's AMD? I say today's AMD is completely different company compared to two, three, five years ago," Chiang said. "They have nice technology and they are there to put the higher spec with the reasonable pricing. But right now they say, "Hey Charles, lets push to marketing to the higher [end]. So let's sell higher-pricing motherboards, higher-spec motherboards, and let's see what will happen in the market. So I don't think that AMD is the company that wants to sell low cost here, low cost there."
https://www.tomshardware.com/news/msi-amd-x570-motherboard-pricing,39593.html

Chiang is the CEO of MSI.

Times are changing! None of this is surprising but it is happening much faster than I expected.

Also interesting that he mentions X570 chipset costing twice as much as X470. I keep reading X590 rumors and wonder what that could possibly bring to the table at a presumably even higher price point.

iospace
Jan 19, 2038


Khorne posted:

RTX is mediocre enough that AMD knocking on the door of early 2016 products is enough to give Nvidia competition. I'm surprised there aren't more news stories about the crazy failure rates for 2080 Tis, too.

I hope nvidia releases an actual new gen and doesn't try to price gouge so I can upgrade from my 1070. Right now the best upgrade seems to be trying to nab a used 1080 Ti when navi drops or when the super series drops provided it isn't actually good.

I get that I am a weird minority. I want a mid-to-high-end graphics card, but I turn graphics settings down and am not aiming for 4k. Their strategy with the RTX series seems to be to release a minor upgrade so they can keep selling the old 2016 cards they overproduced for crypto, or take home a high premium for a fairly minor upgrade.

The 1660 Ti is cool and good, but I have a 1070, which is roughly equal. Outside of the 1660 Ti, we are still at Q2 2016 price:perf levels in Q3 2019. This is the most stagnant the GPU market has ever been.

I think a lot of it has to do with "well, right now we have two options:

1. We focus on getting our cards to work well with 4k.
2. We focus on new tech, like raytracing."

The RTX series tries to do both and is eh at it, being only slightly better than the 1080s. But I don't regret mine, given a 1080 Ti is still going to be pricey and used ones are a gamble and a half.

4k60 and higher resolutions (though I don't know why you'd need anything higher given the limitations of the human eye, but I digress) will come naturally as GPU power grows. Raytracing is probably going to be the harder thing to accomplish, but even if it amounts to a "public prototype", we know it's at least possible on a GPU and can be done in real time.

Anarchist Mae
Nov 5, 2009


K8.0 posted:

Turing isn't price gouging. They're super expensive because they're absolutely enormous parts. The RTX 2060/2070 die is nearly as big as the 1080 Ti die, and the 2080 Ti is 60% bigger than the 1080 Ti, despite being on a slightly smaller node.

I don't buy it. You're paying 40% more for essentially the same node, which has been around for ages and likely yields well enough for even that massive die to be cheap. You'd be better off arguing design costs rather than manufacturing costs, as those are hard to quantify.

I think that since there's little to no competition coming from AMD, they're going to charge as much as they think the market can bear, because that's how capitalism works.

I doubt Super is going to change the status quo much; if anything, they'll cost considerably more for just a little more performance, and the original 2000 series will drop a little.

As for Navi, well, it's hard to know what to think without third-party benchmarks and verification of the TDP. At least AMD is holding pace at the low to mid range, and creating a separate implementation of GCN for gaming is a step in the right direction if they're ever going to compete at the high end.

I hope they're able to find some more ways to optimise their implementation though...

Zedsdeadbaby
Jun 14, 2008


iospace posted:


(though I don't know why you'd need anything higher given the limitations of the human eye, but I digress)

I disagree; there is still obvious aliasing on fine details such as foliage, chain-link fences, overhead power lines, etc. TAA helps, but it aggressively softens the image. You underestimate the human eye, and how much higher resolution needs to go to reach anywhere near the detail we see in real life.

There is still room for further resolution increases, especially as displays get bigger and bigger. By the end of this decade people will be playing games on 65-85" televisions, where the resolution limits become more obvious.
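To put rough numbers on the eye-resolution debate, here's a quick pixels-per-degree sketch. The screen sizes, viewing distances, and the oft-quoted ~60 ppd "retina" threshold are all assumptions I picked, and that threshold famously ignores aliasing on sub-pixel detail like wires and fences:

```python
import math

def pixels_per_degree(h_res, diagonal_in, distance_m, aspect=16 / 9):
    """Horizontal pixels per degree of visual angle for a flat display."""
    width_m = diagonal_in * 0.0254 * aspect / math.hypot(aspect, 1)
    fov_deg = 2 * math.degrees(math.atan(width_m / 2 / distance_m))
    return h_res / fov_deg

print(pixels_per_degree(3840, 65, 2.5))  # 65" 4K TV at 2.5 m: ~120 ppd
print(pixels_per_degree(2560, 27, 0.6))  # 27" 1440p monitor at 0.6 m: ~48 ppd
```

Sit closer to a bigger set and the ppd drops fast, which is the point about big televisions above.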

Crunchy Black
Oct 24, 2017

god damnit I just want to build one of these guys and beat the poo poo out of it on the bench while my skylake/1080ti machine continues to be viable. Once AMD/Nvidia release a new gen of GPU that's worth it I'll put the Ryzen into production.

Arzachel
May 12, 2012

Malcolm XML posted:

Intel Gen 11 graphics is apparently better than Ryzen APUs, is what I'm hearing. So that's two generations of silicon (Intel 10nm = foundry 7nm) to get a single-generation boost, since Intel 10nm clocks are really bad.

Doesn't bode well for Xe offering real competition, and Navi is a damp squib due to high prices and low perf.

Nvidia super gonna clean house.

<10% faster with 50% more memory bandwidth

TheCoach
Mar 11, 2014
People keep going on about how Navi/RDNA is useless etc., but the gains over Vega are significant, and if they carry over to the next APUs they will utterly demolish Intel.

Sub Rosa
Jun 9, 2010




Sub Rosa posted:

I learned I don't need to spend $500 for a CPU that will last 8 years
I know I said this, but I think I'm going to start saving now for the 64-core Threadripper. One workload I run is infinite analysis of chess positions, and I currently rent a Google Cloud server for it.

Combat Pretzel
Jun 23, 2004


GRINDCORE MEGGIDO posted:

A shame DLSS is utterly lame. RTX really underdelivered.
They should ditch the tensor cores and use the die space for more raytracing cores.

PC LOAD LETTER
May 23, 2005

Malcolm XML posted:

Intel Gen 11 graphics is apparently better than Ryzen APUs, is what I'm hearing.
If you give it much faster RAM and leave the Ryzen APU with DDR4-2666 or something, sure.

AMD will be supporting much faster RAM by default with Zen2, though, and if you bother to put some DDR4-3200 in a current Zen/Zen+ APU, it still performs fairly well and can easily compete with the coming Intel iGPUs too.

Malcolm XML posted:

AMD is for some reason not trying to regain market share in GPUs.
They know that to do that they need something truly impressive vs Nvidia's best, and a revamped GCN derivative can't do that.

They know they need a truly ground-up new GPU to do it, and they're working on one, but it takes a long time to get done. Maybe late 2020 or early 2021 they'll have something, if we're lucky. I think they started work in early 2018, so that time frame is a possibility.

Anime Schoolgirl
Nov 28, 2002

Combat Pretzel posted:

They should ditch the tensor cores and use the die space for more raytracing cores.
iirc the tensor cores are the raytracing cores

repiv
Aug 13, 2009

Anime Schoolgirl posted:

iirc the tensor cores are the raytracing cores

They're separate hardware, that's why Volta lacked RT acceleration despite having tensor cores. They're tangentially connected in that Nvidia's offline AI denoisers run on the tensor cores but AFAIK their realtime denoisers are plain old non-tensor shaders, so right now DLSS is literally the only gaming use-case for the tensor cores (and it sucks).

iospace
Jan 19, 2038


Anime Schoolgirl posted:

iirc the tensor cores are the raytracing cores

At first I thought no, but you're right. "A Tensor core is a mixed-precision FPU specifically designed for matrix arithmetic." They have a small subset of them dedicated to raytracing.
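To make "matrix arithmetic" concrete, here's a minimal numpy sketch of the mixed-precision multiply-accumulate a tensor core performs on small tiles (FP16 inputs, FP32 accumulator, with the 4x4 tile size per Nvidia's Volta whitepaper). This models only the math, not the hardware:

```python
import numpy as np

# The canonical tensor-core op: D = A @ B + C on a small tile,
# with A and B in FP16 and C/D accumulated in FP32.
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype)  # float32 -- the accumulate happens at full precision
```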

repiv posted:

They're separate hardware, that's why Volta lacked RT acceleration despite having tensor cores. They're tangentially connected in that Nvidia's offline AI denoisers run on the tensor cores but AFAIK their realtime denoisers don't, they're plain old shaders.

I think it's more that they're using the tensor core as the basis of the raytracing ones.

repiv
Aug 13, 2009

Where did you hear that? It sounds unlikely given the tensor cores only support FP16, which I'm pretty sure isn't precise enough for the world-space coordinates you'd be dealing with when raytracing.

The only references I can find to the tensor cores being part of the raytracing pipe are in regard to denoising, which actually comes after the raytracing in a completely separate pass (assuming the denoiser even uses the tensors, most don't).
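The precision problem is easy to demonstrate: FP16 has an 11-bit significand, so adjacent representable values around 2048 are already 2.0 apart, and small world-space offsets simply vanish. A quick numpy check:

```python
import numpy as np

# Around magnitude 2048, FP16 values are spaced 2.0 apart:
print(np.float16(2048.0) + np.float16(0.9))  # 2048.0 -- the offset rounds away
print(np.float16(2048.0) + np.float16(1.5))  # 2050.0 -- snaps to the next value
# Centimetre offsets in a scene thousands of units across are unrepresentable.
```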

iospace
Jan 19, 2038


repiv posted:

Where did you hear that? It sounds unlikely given the tensor cores only support FP16, which I'm pretty sure isn't precise enough for the world-space coordinates you'd be dealing with when raytracing.

The only references I can find to the tensor cores being part of the raytracing pipe are in regard to denoising, which actually comes after the raytracing in a completely separate pass (assuming the denoiser even uses the tensors, most don't).

Speculation, hence the "I think". Open to corrections.

Combat Pretzel
Jun 23, 2004

The BVH/triangle intersectors are dedicated hardware. And they sure don't work with FP16, because that's too low of a precision to deal with in 3D space.

NVIDIA was going on about it all day long during the release conference.
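For the curious, this is the kind of operation those units bake into fixed function: the ray/AABB "slab test" evaluated millions of times per frame during BVH traversal. A toy sketch of the textbook algorithm, nothing Nvidia-specific:

```python
def ray_hits_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: does a ray hit an axis-aligned bounding box?"""
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        tmin = max(tmin, min(t1, t2))  # latest entry across the three slabs
        tmax = min(tmax, max(t1, t2))  # earliest exit
    return tmin <= tmax

# Ray from the origin along +x vs. a unit box centred at (2.5, 0, 0);
# huge inverse-direction components stand in for 1/0 on the y and z axes.
print(ray_hits_aabb((0, 0, 0), (1.0, 1e30, 1e30), (2, -1, -1), (3, 1, 1)))  # True
```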

Combat Pretzel fucked around with this message at 16:03 on Jun 16, 2019

iospace
Jan 19, 2038


Alright, it turns out I misinterpreted what I was reading. They're using the Tensor cores on Volta* to do the raytracing, with the caveat that Volta-based boards are not necessarily meant for gaming.

My bad.

*source: https://www.forbes.com/sites/daveal...g/#4fbcc5816e31

Paul MaudDib
May 3, 2006


iospace posted:

Alright, it turns out I misinterpreted what I was reading. They're using the Tensor cores on Volta* to do the raytracing, with the caveat that Volta-based boards are not necessarily meant for gaming.

My bad.

*source: https://www.forbes.com/sites/daveal...g/#4fbcc5816e31

That's a Forbes Contributor site, i.e. some blogger using Forbes' platform.

Tensor cores don't do raytracing, period. They do matrix math. You can do raytracing without raytracing cores (as everyone did between the time raytracing was invented 40 years ago and when Turing came out last year), it's just slower. Volta uses a software implementation of raytracing; it's a huge, superfast compute card, and it's fast enough that that's viable.

GPUs are generally the preferred platform for doing software-based raytracing anyway. Places like Pixar will have big server farms full of tons of GPUs. This is also nothing NVIDIA-specific; Crytek demoed Vega running software-based raytracing at 1080p on their new engine. Raytracing is a highly parallel task, each ray is basically a separate work-item, so it fits the GPU "thousands of threads" model very nicely in general.
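A minimal sketch of that "one ray per work-item" structure, using a toy orthographic ray-vs-sphere test (all names made up; on a GPU, each (px, py) below would be its own thread):

```python
def trace(px, py, width=64, height=64):
    """Shade one pixel. Depends only on (px, py) -- no shared state."""
    # Map the pixel to [-1, 1]^2 and fire an orthographic ray along +z
    ox = px / width * 2 - 1
    oy = py / height * 2 - 1
    # Hit test against a sphere of radius 0.5 at the origin
    return 1.0 if ox * ox + oy * oy <= 0.25 else 0.0

# Embarrassingly parallel: this serial loop nest is exactly what a GPU
# would launch as width x height independent threads.
image = [[trace(px, py) for px in range(64)] for py in range(64)]
```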

Paul MaudDib fucked around with this message at 17:09 on Jun 16, 2019

Seamonster
Apr 30, 2007


CAPS LOCK BROKEN posted:

Intel Iris Pro was probably their first iGPU that didn't have the performance of wet cardboard.

It's almost as though anybody can make a decent iGPU as long as there's enough memory bandwidth...
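The back-of-the-envelope bandwidth math makes the point (peak theoretical numbers; dual-channel DDR4 with 64-bit channels assumed):

```python
def ddr_bandwidth_gbs(mt_per_s, channels=2, bus_bits=64):
    """Peak bandwidth = transfers/s x bytes per transfer x channels."""
    return mt_per_s * 1e6 * (bus_bits / 8) * channels / 1e9

print(ddr_bandwidth_gbs(2666))  # ~42.7 GB/s -- what many APUs get stuck with
print(ddr_bandwidth_gbs(3200))  # ~51.2 GB/s -- a free ~20% for the iGPU
```

Iris Pro dodged the problem the other way, with a big eDRAM cache on the package instead of faster system RAM.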

Klyith
Aug 3, 2007


PC LOAD LETTER posted:

and a revamped GCN derivative can't do that.

https://www.youtube.com/watch?v=V7wwDnp8p6Y

David Kanter is satisfied to call it a new architecture, I think we can finally drop the "GCN rehash" burns.

Malcolm XML
Aug 8, 2009


TheCoach posted:

People keep going on about how Navi/RDNA is useless etc., but the gains over Vega are significant, and if they carry over to the next APUs they will utterly demolish Intel.

Zen 1's gains over Bulldozer were insane, but it was the clear price difference that drove people to it. Navi is expensive for what it offers.

iospace
Jan 19, 2038


Paul MaudDib posted:

That's a Forbes Contributor site, i.e. some blogger using Forbes' platform.

Tensor cores don't do raytracing, period. They do matrix math. You can do raytracing without raytracing cores (as everyone did between the time raytracing was invented 40 years ago and when Turing came out last year), it's just slower. Volta uses a software implementation of raytracing; it's a huge, superfast compute card, and it's fast enough that that's viable.

GPUs are generally the preferred platform for doing software-based raytracing anyway. Places like Pixar will have big server farms full of tons of GPUs. This is also nothing NVIDIA-specific; Crytek demoed Vega running software-based raytracing at 1080p on their new engine. Raytracing is a highly parallel task, each ray is basically a separate work-item, so it fits the GPU "thousands of threads" model very nicely in general.

Ah, ok! Thanks for the info!

repiv
Aug 13, 2009

Paul MaudDib posted:

GPUs are generally the preferred platform for doing software-based raytracing anyway. Places like Pixar will have big server farms full of tons of GPUs. This is also nothing NVIDIA-specific; Crytek demoed Vega running software-based raytracing at 1080p on their new engine. Raytracing is a highly parallel task, each ray is basically a separate work-item, so it fits the GPU "thousands of threads" model very nicely in general.

The really high-end studios like Pixar still usually render on CPUs, if only because their scenes are so massive that literally no GPU in existence has enough VRAM, although that gap is starting to close with 48GB GPUs and NVLink being a thing now.

e: yeah pixar is working towards using gpus but their current production renderer is still cpu-based: https://renderman.pixar.com/news/renderman-xpu-development-update

quote:

For example, on Coco the average shot took 27GB of memory with outliers requiring 70GB or more.

:monocle:

repiv fucked around with this message at 18:02 on Jun 16, 2019

Khorne
May 1, 2002
Shopping for a motherboard is so tedious. I've been looking at the X570 lineup because I have an actual use for the spare lanes. Lots of motherboards just seem not to take advantage of the chipset. For example, MSI's X570 Gaming Plus has 6 SATA ports despite the X570 configurations being 4, 8, or 12. So they consumed a bunch of lanes to get 8 SATA ports and then decided "nah, we're just putting 6". Some of the other boards only have 4 and a similar layout to an X470. Who is going to shell out for X570 for 2 M.2 ports and 4 SATA ports, and nowhere near the number of USB ports the chipset supports? It makes no sense. The entire appeal of X570 is more usable lanes. Otherwise, you should just get an X470 for nearly half the price.

Also calling out MSI here: the specifications tab on their site doesn't show specifications. The other manufacturers have specifications up, so clearly there's no embargo. I have to go look at pictures of the PCB to figure out what they're trying to sell me. I actually like MSI, although you have to be really careful about ports with their motherboards. The missing specs also don't say whether the LAN is Realtek or Intel, and if I'm paying over $120 for a motherboard, it had better have Intel LAN.

Gigabyte also seems to think 6 SATA ports is great even on a $300+ board. At least that MSI board I pointed out should land in the $140-$185 range.

It also seems like ASRock prices haven't leaked, which makes real planning hard. Not a huge fan of wifi being included on a lot of boards either. I'm looking at lower-end boards because the VRMs are super overkill on most X570 boards, and it seems like all the extra features of the expensive boards are mostly flash unless you do LN2 overclocking or something. I may ultimately end up on X470, because I could build almost an entire second computer with the price difference on some of these boards.

Khorne fucked around with this message at 18:19 on Jun 16, 2019


iospace
Jan 19, 2038


Core dump of thoughts here:

1. The problem with the GPU market is that there are no 1440p60* or higher cards available at 400 USD. When I bought my 670, it was pretty much a 1080p60* card, and it was 400 MSRP. If either AMD or Nvidia releases a card that fits that bill, it'll sell like mad.
2. AMD seems content to seize the CPU market at the moment. It gets them more press in media that most consumers actually consume, as opposed to the GPU market, which really only matters to gamers and companies for the most part.
3. Nvidia feels like it was in a damned-if-they-do, damned-if-they-don't situation with RTX. Part of why I feel it hasn't sold well is the lack of ray tracing support in games at the moment, though the demos of it are pretty drat sweet. So it's "do we go for it, or do we let AMD beat us to the punch on hardware-based ray tracing?" The other question is which would lead to adoption quicker: video cards supporting it, or games supporting it?

Answer: a home console supporting it.

*averaging around 60 fps at great, but not necessarily maxed, settings. Though probably it's 1440p120+ or whatever at this point.
