EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!


Faster RAM does meaningfully impact Ryzen though, on the order of 10%.

eames posted:

I think you just found a legitimate use for Vega cards!

Ergo, what the Radeon Pro SSG was already being used for, just more powerful.

Paul MaudDib
May 2, 2006

"Tell me of your home world, Usul"


FaustianQ posted:

There is no way Threadripper and Epyc don't get picked up for enterprise and workstation use at these prices; it's literally AMD CPU+MB ≤ Intel CPU. Knowing AMD, they'd throw in discounts for buying completely from them, so that means more GPU sales and more sales of their rebranded SSDs and memory. Has AMD made any moves to either source from Qualcomm or acquire their own Ethernet controllers?

The HBCC memory controller AMD is talking about has me thinking: could AMD turn HBM2 into a nonvolatile storage medium, or even a PCIe RAM disk?

CUDA can support RDMA via peer-to-peer transfers or NVLink, and I would be unsurprised if AMD had or were developing equivalent hardware. The problem is that the bandwidth of those channels is still quite low. EDR InfiniBand with a 12x connection (3 ganged cables) is still only 300 Gbit/s, and with a standard QDR x4 (1 cable) you're only at about 32 Gbit/s. That's barely DDR4 speed. At PCIe 3.0 x16 you're at 128 Gbit/s; slice that down to x4 and you're naturally at 32 Gbit/s.
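
For anyone who wants to sanity-check those figures, the arithmetic is just lanes × effective per-lane rate. The per-lane numbers below are the commonly quoted post-encoding rates (assumptions, and protocol overhead is ignored):

```python
# Back-of-envelope interconnect bandwidth: lanes x effective per-lane rate.
# Per-lane rates are approximate post-encoding figures in Gbit/s.

def link_bandwidth_gbit(per_lane_gbit, lanes):
    """Aggregate raw bandwidth of a multi-lane link in Gbit/s."""
    return per_lane_gbit * lanes

edr_ib_x12 = link_bandwidth_gbit(25, 12)  # EDR InfiniBand, 3 ganged x4 cables
qdr_ib_x4 = link_bandwidth_gbit(8, 4)     # QDR InfiniBand, single x4 cable
pcie3_x16 = link_bandwidth_gbit(8, 16)    # PCIe 3.0, ~8 Gbit/s/lane effective
pcie3_x4 = link_bandwidth_gbit(8, 4)      # same link cut down to x4

print(edr_ib_x12, qdr_ib_x4, pcie3_x16, pcie3_x4)  # 300 32 128 32
```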

Paul MaudDib fucked around with this message at Jun 3, 2017 around 00:30

SwissArmyDruid
Feb 14, 2014



Thanks, goons, I knew I was freaking forgetting something. I completely forgot to check that memory kit against QVL.

Palladium
May 8, 2012
Probation
Can't post for 4 days!


eames posted:

The most important enterprise wins will be companies like Google and Facebook, I don't think either of them worries about licensing because they run their own stacks.

On the other hand, these software corps aren't exactly the thrifty sort either.

suck my woke dick
Oct 10, 2012

I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!

Put this cum-loving slave on ignore immediately!


SwissArmyDruid posted:

Can I just say how happy I am that DVI connectors on motherboards finally seem to be dead? The X370 Gaming ITX/ac that ASRock are showing off at Computex, btw.



Can I just say how inferior this motherboard is for having no DP connector? HDMI is piss garbage for idiots who think connectors should look like a hi-poly 3d model from 2009.

GRINDCORE MEGGIDO
Aug 22, 2004



There's no AMD CPU that would drive the onboard graphics output yet, anyway.

Boiled Water
Apr 5, 2006

YOU ARE A BRAIN
IN A BUNKER


blowfish posted:

Can I just say how inferior this motherboard is for having no DP connector? HDMI is piss garbage for idiots who think connectors should look like a hi-poly 3d model from 2009.

It's doubly weird when you consider that the licensing cost on DP is lower than HDMI's.

sincx
Jul 13, 2012

What actually transpires beneath the veil of an event horizon? Decent people shouldn't think too much about that.

Will a 1700X beat out a 2600k overclocked to 4.3 GHz in both single and multithreaded tasks? Or just multi?

eames
May 9, 2009



sincx posted:

Will a 1700X beat out a 2600k overclocked to 4.3 GHz in both single and multithreaded tasks? Or just multi?

Userbenchmark gives you a rough idea. The difference between average overclocks is -2% single-core, +120% multi-core.
http://cpu.userbenchmark.com/Compar...2600K/3915vs621

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!


Is Userbenchmark reliable? There's no way to tell what the actual OC settings are for the Peak figures, but if we assume the fastest listed speeds, then the 1700X @ 4.1 GHz absolutely trounces the 2600K @ 4.3 GHz.

The difference between average clocks is actually 7% in favor of the 1700X.

AVeryLargeRadish
Aug 19, 2011

WolfDad is Best Dad.


FaustianQ posted:

Is Userbenchmark reliable? There's no way to tell what the actual OC settings are for the Peak figures, but if we assume the fastest listed speeds, then the 1700X @ 4.1 GHz absolutely trounces the 2600K @ 4.3 GHz.

The difference between average clocks is actually 7% in favor of the 1700X.

That isn't really a good comparison; 4.1 GHz is really high for a 1700X and 4.3 GHz is really low for a 2600K. 3.9 vs 4.3 or 4.1 vs 4.8 would be fairer comparisons.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!


AVeryLargeRadish posted:

That isn't really a good comparison; 4.1 GHz is really high for a 1700X and 4.3 GHz is really low for a 2600K. 3.9 vs 4.3 or 4.1 vs 4.8 would be fairer comparisons.

My point was that the original question asked how a 1700X would compare to a 4.3 GHz 2600K, and Userbenchmark seems to have the 2600K in the lead by only 2% at 5.05 GHz when the 1700X is at 4.1 GHz. So the answer to the poster's question is yes, the 1700X should trounce the 2600K at that speed. They'd be pretty even single-threaded at 3.9 GHz vs 4.8 GHz as well.
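
The clock-for-clock estimate being argued here can be sketched as a naive linear scaling. The scores below are made-up placeholders, not actual Userbenchmark numbers, and real chips won't scale perfectly linearly with clock:

```python
def scale_score(score, from_ghz, to_ghz):
    """Naively rescale a single-thread benchmark score linearly with clock."""
    return score * (to_ghz / from_ghz)

# Hypothetical single-thread scores at the listed peak clocks.
score_2600k_at_5_05 = 100   # placeholder baseline, not a real benchmark number
score_1700x_at_4_1 = 98     # "2600K leads by 2%" per the post above

# Rescale the 2600K down to the 4.3 GHz overclock the question asked about.
score_2600k_at_4_3 = scale_score(score_2600k_at_5_05, 5.05, 4.3)
print(round(score_2600k_at_4_3, 1))  # 85.1 -> well behind the 1700X's 98
```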

repiv
Aug 13, 2009



College Slice

FaustianQ posted:

Faster RAM does meaningfully impact Ryzen though, on the order of 10%.

Speaking of which, how much have the BIOS/AGESA updates actually improved RAM compatibility? Is it easy to get 2x16GB running at 3200 MHz yet?

eames
May 9, 2009



Anandtech's Twitter feed posted a link to an article about an ASRock X399 board, but it redirects to the homepage (pulled?). Google cache caught it.


http://webcache.googleusercontent.c...s+&cd=1&ct=clnk

eames fucked around with this message at Jun 3, 2017 around 13:27

Drakhoran
Oct 21, 2012



GRINDCORE MEGGIDO posted:

There's no AMD CPU that would drive the onboard graphics output yet, anyway.

Technically the Bristol Ridge APUs released last year can. However, I don't think those ever got a retail release.

Aesculus
Mar 22, 2013


NewFatMike
Jun 11, 2015



Saved

LoopyJuice
Jul 5, 2007


sincx posted:

Will a 1700X beat out a 2600k overclocked to 4.3 GHz in both single and multithreaded tasks? Or just multi?

If it's any help, I've just upgraded from an i5 2500K at 4.6 GHz to a 1700 at 3.95 GHz, and the 1700 made a world of difference for my use case (144 Hz gaming with a GTX 970, watching Twitch streams/Plex/browsing/Discord across 3x1080p screens). I've gone from 60-70 fps in CS:GO to over 200 because the i5 was choking hard; it's the only game I've really noticed such a massive FPS increase in, since the others I play are generally GPU-limited by the 970, but they all run perfectly fine and none slower than before. I couldn't watch Plex/Twitch streams while playing most games without a massive FPS hit, and all cores were pegged at 100% in a lot of games. Windowed/maximized mode in games also runs like a dream now, where I would take a pretty big FPS hit before. I'd recommend it every day of the week over a Sandy/Ivy Bridge era chip if you do anything more than loading up one game on one monitor with nothing in the background ever.

It also never goes over 60C under full GPU/CPU load on a custom water loop whereas my old i5 used to hit almost 80C for reasons unknown (were 2500k chips soldered or TIM?) whilst the GTX 970 was hovering at about 45-50 under load in the same loop.

Granted, some of the nice performance increase is due to going from a Corsair SATA SSD to the Samsung 960 M.2 NVMe drive, which is ridiculously quick. Either way, I'd probably sooner take a hammer to the ballsack than go back to my 2500K at 4.6 GHz.

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.



Oven Wrangler

LoopyJuice posted:

It also never goes over 60C under full GPU/CPU load on a custom water loop whereas my old i5 used to hit almost 80C for reasons unknown (were 2500k chips soldered or TIM?) whilst the GTX 970 was hovering at about 45-50 under load in the same loop.

They're the last generation of Intel quad-core to be soldered. What cooler did you have on the 2500K? I never see 70C with mine at 4.4GHz/1.38V under a Hyper 212+, but if I were to put enough juice through it to get 4.6 stable I might be in the same place you were.

Kazinsal
Dec 13, 2011



I see 80-82 C with all cores maxed at 4.2 GHz/1.35V on my i7-3820.

I really need a new CPU.

LoopyJuice
Jul 5, 2007


Eletriarnation posted:

They're the last generation of Intel quad-core to be soldered. What cooler did you have on the 2500K? I never see 70C with mine at 4.4GHz/1.38V under a Hyper 212+, but if I were to put enough juice through it to get 4.6 stable I might be in the same place you were.

It was using the same custom water loop, an EKWB Supremacy EVO; I even re-pasted it a while ago with no change at all. Same loop as the GTX 970, which was sitting at 45C load / 30C idle. I can't remember what voltage was being pushed, however; it was using the 4.6 GHz preset on an Asus Maximus Gene-Z, so likely a lot.

SwissArmyDruid
Feb 14, 2014



Linus video: https://www.youtube.com/watch?v=TWFzWRoVNnE

I hadn't really looked into what's going on in Chipzilland, but that sounds... awful bleak.

wargames
Mar 16, 2008

official yospos cat censor


AMD needed to make a great CPU and they did. They also needed Intel to gently caress up hard, and that is happening?

ISeeCuckedPeople
Feb 7, 2017

by Smythe


wargames posted:

AMD needed to make a great CPU and they did. They also needed Intel to gently caress up hard, and that is happening?

They came kind of late to the game.

Who is really upgrading their PCs in this day and age, outside of the commercial market and the hardest of the hardcore gamers?

PC part sales are way down compared to 5-6 years ago, and we're reaching the limit on how much faster CPUs are actually advantageous to most end users.

An i5 from 3-4 years ago is still pretty loving fast.

EmpyreanFlux
Mar 1, 2013

The AUDACITY! The IMPUDENCE! The unabated NERVE!


wargames posted:

AMD needed to make a great CPU and they did. They also needed Intel to gently caress up hard, and that is happening?

Intel isn't releasing 14 core and up SKUs until much later next year. That's a really wide window for TR4.

SwissArmyDruid
Feb 14, 2014



ISeeCuckedPeople posted:

They came kind of late to the game.

Who is really upgrading their PCs in this day and age, outside of the commercial market and the hardest of the hardcore gamers?

PC part sales are way down compared to 5-6 years ago, and we're reaching the limit on how much faster CPUs are actually advantageous to most end users.

An i5 from 3-4 years ago is still pretty loving fast.

There are two segments that AMD *needs* to make inroads into: server and mobile. That's where all the money is, and where turnover is most likely to happen. The former, because of all the usual buzzwords of "density", "power-efficiency", and "compute"; the latter, because notebooks are fundamentally obsolete from the moment they hit the market: performance is strictly downhill from there, barring a miracle with MXM GPUs or adding in extra RAM. Maybe swapping spinning rust for flash, but even that is progressively less likely these days.

Everything else can be broadly classified as mindshare, from the crappiest $300 Dell box to the HEDT. Yes, we're excited about Threadripper, because that's _us_, but the really important parts here are EPYC and Ryzen Mobile.

SwissArmyDruid fucked around with this message at Jun 4, 2017 around 06:59

eames
May 9, 2009



FaustianQ posted:

Intel isn't releasing 14 core and up SKUs until much later next year. That's a really wide window for TR4.

The official Asus ROG account even posted "we won't see them until next year" and then stealth-edited it to "we won't see them until later this year".

Kazinsal
Dec 13, 2011



SwissArmyDruid posted:

There are two segments that AMD *needs* to make inroads into: server and mobile. That's where all the money is, and where turnover is most likely to happen. The former, because of all the usual buzzwords of "density", "power-efficiency", and "compute"; the latter, because notebooks are fundamentally obsolete from the moment they hit the market: performance is strictly downhill from there, barring a miracle with MXM GPUs or adding in extra RAM. Maybe swapping spinning rust for flash, but even that is progressively less likely these days.

Everything else can be broadly classified as mindshare, from the crappiest $300 Dell box to the HEDT. Yes, we're excited about Threadripper, because that's _us_, but the really important parts here are EPYC and Ryzen Mobile.

They also need to get deals with vendors for the server kit. Having super inexpensive 32C/64T chips that clock just as high as their Intel counterparts is worth next to nothing if you don't have HP/Dell/Cisco/IBM/etc. prepared to sell systems with them.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.


Kazinsal posted:

They also need to get deals with vendors for the server kit. Having super inexpensive 32C/64T chips that clock just as high as their Intel counterparts is worth next to nothing if you don't have HP/Dell/Cisco/IBM/etc. prepared to sell systems with them.

Nah just create an ocp box and then the clouds will eat that poo poo up

cloud providers are the new drivers here

ufarn
May 30, 2009


Any news on the supposed issue AMD CPUs have with Nvidia GPUs compared to Intel CPUs, as AdoredTV pointed out?

I'm still considering an AMD CPU, but with poor game performance on a future Nvidia card, I feel like I still have to go with Intel.

Rastor
Jun 2, 2001



Malcolm XML posted:

Nah just create an ocp box and then the clouds will eat that poo poo up

cloud providers are the new drivers here

Cloud providers buy in the biggest quantities, but for that same reason they want to be sure of the performance and stability before signing the purchase order. Some design wins with HP/Dell/Cisco/IBM/etc. will no doubt be a high priority for AMD right now.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


Malcolm XML posted:

Nah just create an ocp box and then the clouds will eat that poo poo up

cloud providers are the new drivers here

If they want to sell trays of chips to a single consumer, yeah the cloud providers are the way to go.

MaxxBot
Oct 6, 2003

you could have clapped

you should have clapped!!


Some of the cloud providers really hate having only a single CPU supplier, which could be an advantage for AMD; I know that has driven Google to invest in IBM POWER, for example.

MagusDraco
Nov 11, 2011

even speedwagon was trolled


ufarn posted:

Any news on the supposed issue AMD CPUs have with Nvidia GPUs compared to Intel CPUs, as AdoredTV pointed out?

I'm still considering an AMD CPU, but with poor game performance on a future Nvidia card, I feel like I still have to go with Intel.

If you mean that thing where Rise of the Tomb Raider was running worse on AMD CPUs compared to Intel CPUs, Crystal Dynamics just patched their game to fix that late last month.

https://www.pcper.com/reviews/Graph...formance-Update

It probably means developers are going to need to keep Ryzen in mind when making PC games in the future, but it was totally fixable.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.


Rastor posted:

Cloud providers buy in the biggest quantities, but for that same reason they want to be sure of the performance and stability before signing the purchase order. Some design wins with HP/Dell/Cisco/IBM/etc. will no doubt be a high priority for AMD right now.

Lol cloud providers are like 95% driven by cost

Failure is built into the design dude. They just migrate loads around.

Azure used to use poo poo Opterons since they were cheaper than Xeons.
Low-margin, ultra-high volume > ultra-high-margin, ultra-low volume for AMD here. Sure, the hospital datacenter will buy an E7, but AWS will buy 100x the EPYC 32-core procs.

Malcolm XML fucked around with this message at Jun 4, 2017 around 19:42

Malcolm XML
Aug 8, 2009

I always knew it would end like this.


Palladium posted:

On the other hand, these software corps aren't exactly the thrifty sort either.

Uhh, if you can save 1% of power in a million-core DC, then that saves millions.

The perks are peanuts next to operating costs.
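
A sketch of the arithmetic behind that claim. Every input here is an illustrative assumption (core count, per-core wall power, electricity rate), not a real fleet figure, so treat the output as order-of-magnitude at best:

```python
# Rough annual electricity cost of a hypothetical million-core fleet,
# and what a 1% efficiency gain is worth. All inputs are made up.
cores = 1_000_000
watts_per_core = 15          # assumed wall power per core incl. cooling overhead
usd_per_kwh = 0.07           # assumed industrial electricity rate
hours_per_year = 24 * 365

annual_kwh = cores * watts_per_core / 1000 * hours_per_year
annual_cost = annual_kwh * usd_per_kwh
savings_1pct = 0.01 * annual_cost

print(f"annual power bill ~${annual_cost:,.0f}, 1% saving ~${savings_1pct:,.0f}")
```

How big the saving gets depends heavily on the assumed per-core wattage and how many datacenters you multiply it across.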

eames
May 9, 2009



Malcolm XML posted:

Lol cloud providers are like 95% driven by cost

Failure is built into the design dude. They just migrate loads around.

Azure used to use poo poo Opterons since they were cheaper than Xeons

^ this
Most new Amazon/Google/Facebook servers are whiteboxes from Chinese ODMs. As soon as they see that AMD delivers an attractive package of cores/power consumption/price without catastrophic errata, they'll tick the corresponding box on the order sheet and that's that.

Rastor
Jun 2, 2001



Malcolm XML posted:

Lol cloud providers are like 95% driven by cost

Failure is built into the design dude. They just migrate loads around.

Azure used to use poo poo Opterons since they were cheaper than Xeons.
Low-margin, ultra-high volume > ultra-high-margin, ultra-low volume for AMD here. Sure, the hospital datacenter will buy an E7, but AWS will buy 100x the EPYC 32-core procs.

Yeah, they're driven by cost, and if you've got nodes failing left and right your costs shoot up.

The big cloud providers are big movers, I'm just saying don't expect them to necessarily be first movers. Not on a brand new platform, anyway. Proven reliable Opterons you can get for cheap are a different beast.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.


Rastor posted:

Yeah, they're driven by cost, and if you've got nodes failing left and right your costs shoot up.

The big cloud providers are big movers, I'm just saying don't expect them to necessarily be first movers. Not on a brand new platform, anyway. Proven reliable Opterons you can get for cheap are a different beast.

If the nodes are cheap enough then it doesn't matter.

It's a complex ROI calc.

Long story short, I guarantee AMD spoke with all of the major clouds when designing EPYC and Threadripper and built them to slot easily into their requirements, and that there are test machines running OCP EPYC platforms.

Rastor
Jun 2, 2001



Oh no doubt there are test machines in labs, I agree there. But stuff like being unstable under heavy compiling loads is gonna have to be ironed out before the production order is placed.

The cloud providers don't care about five-nines reliability on any given node, but they do need the poo poo to actually work, in production conditions (where they don't baby systems at all), under heavy loads.
