NewFatMike
Jun 11, 2015



Skylake X refresh is weird:

https://youtu.be/DQWp7Ppz0_o

I'll wait for benchmarks, but I really can't see any reason to go with these over Threadripper based on specs alone.

Cygni
Nov 12, 2005

raring to post



I'm surprised that this is all Skylake X Refresh and not Cascade Lake. I wonder if that was a name change or if they got delayed or what.

lllllllllllllllllll
Feb 28, 2010

Now the scene's lighting is perfect!


EmpyreanFlux posted:

Interesting test done, not sure about the rigorousness of it though.

https://www.youtube.com/watch?v=S3YKMf0BDno

Around 35W cTDP, a 2700X will clock @ 2.8-3.2GHz, but loses its poo poo at 15W (probably encountering an issue with the uncore). I wonder if 25W would be 2.4-2.8GHz? Or 2.2-2.6GHz? That's still hella good, I could see that with an MX150/RX M550 in a 13.3"-14" solution or a 1050/Ti/M560 in a 15" one.
This is really cool. Sadly only ASRock (and maybe MSI?) boards offer cTDP as far as I know. I guess telling Windows to cap the CPU at 60% or so in its power plan will get roughly the same result.
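
For anyone who wants to try that Windows power-plan trick, here's a rough sketch using the stock powercfg tool (Python only as a wrapper; the 60% figure is arbitrary, and note that capping the maximum processor state limits clocks rather than package power, so it only approximates cTDP):

```python
# Sketch: cap the maximum processor state of the active Windows power plan.
# Uses the built-in powercfg tool; 60 is just an example percentage.
import subprocess

def cap_cpu_percent(percent: int = 60) -> None:
    """Limit max processor state for both AC and DC on the active scheme."""
    for flag in ("/setacvalueindex", "/setdcvalueindex"):
        subprocess.run(
            ["powercfg", flag, "SCHEME_CURRENT",
             "SUB_PROCESSOR", "PROCTHROTTLEMAX", str(percent)],
            check=True,
        )
    # Re-apply the scheme so the new value takes effect immediately.
    subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)

if __name__ == "__main__":
    cap_cpu_percent(60)
```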

SamDabbers
May 26, 2003

QUITE.


Fallen Rib

That opens up some interesting options for the desktop replacement/sff workstation niche. Most people will probably keep running them in desktops at max TDP though.

Broose
Oct 28, 2007



I know it is a bit early to ask, but have there been any whispers of new motherboard features exclusive to Zen 2, like the XFR2 stuff is for Zen+?

Harik
Sep 9, 2001


SwissArmyDruid posted:

Someone out there appears to have access to Rome:



source: https://www.notebookcheck.net/Chiph...f.337356.0.html

Allegedly, whatever AMD chip(s) is/are running Cinebench there clocks in with a score of 12587.

For reference, the current world-record Cinebench run is a 10038 score, running on 4x Xeon Platinum 8160s: http://hwbot.org/submission/3630844...m_8160_10038_cb

The usual Chiphell disclaimers apply.

That's the same one I posted a month ago when WCCF got wind of it, I think. Numbers and core counts look identical. I pointed out the world record cinebench then, too.

E: It is the exact same leak, they just reposted a new article with another screenshot of the same setup.

E2: That post was from the 8th, not yesterday. *sigh* I'm bad at forums.

Harik fucked around with this message at Oct 14, 2018 around 01:55

Cygni
Nov 12, 2005

raring to post



Broose posted:

I know it is a bit early to ask, but have there been any whispers of new motherboard features exclusive to Zen 2, like the XFR2 stuff is for Zen+?

Not at the moment. Being socket compatible with AM4, and assuming they will wanna release APU-style SoCs for that socket, I wouldn't expect anything wild.

PCIe 4 and DDR5 will need a new socket, and what I've read suggests 2020 for that stuff. If I were gonna guess, that's where we would get 10GbE too.

This is me just guessing out my rear end, but I don't think there is going to be much (if anything) radically architecturally different with Zen 2 on the CPU or chipset side. 2020 seems to be aligning as the time to ball out.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

I think Zen 2 will do PCIe 4 regardless; I'm not sure why it'd require a new socket. Since IF is based on PCIe, IF would apparently get a speed boost, which is a nice thing for Zen.

Anime Schoolgirl
Nov 28, 2002

~perfect~
battlebrother





DDR5 will require a new socket, PCIe4 can get away with just another series of traces and backwards compatibility.

Cygni
Nov 12, 2005

raring to post



PCIe4 needs more traces and more pins from what I've read. I guess it's possible there are enough spare pins in AM4 to do it, and maybe you get an AM4+ situation, but I sorta doubt AMD will go that route.

PC LOAD LETTER
May 23, 2005
WTF?!

Slippery Tilde

It's possible they go with PCIe 4.0 for TR/Epyc (which is where the real benefit and need would be) and leave AM4 with PCIe 3.0, which is still pretty good for what it needs to do.

lostleaf
Jul 12, 2009


Does AMD have good linux support specifically for hevc encoding and decoding acceleration? As in not having to compile and load kernel modules.

Stanley Pain
Jun 16, 2001

Bit. Trip. RIP.


lostleaf posted:

Does AMD have good linux support specifically for hevc encoding and decoding acceleration? As in not having to compile and load kernel modules.

It's Linux. So it'll range anywhere from just working, to needing a daily sacrifice.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Cygni posted:

PCIe4 needs more traces and more pins from what I've read.
Again, why? The physical PCIe connectors are still the same.

The only loose reference I could find about more traces was speculation of the spec requiring 300-500W of power delivery on the slot, which seems conjecture since there's no new slot.

--edit: Maximum trace length is 10-12" in PCIe 4.0. Anything beyond that requires components called retimers. It's going to be interesting to see how this affects cost on EATX mainboards.

Combat Pretzel fucked around with this message at Oct 14, 2018 around 23:55

Cygni
Nov 12, 2005

raring to post



I went back and reread it, and it wasn't more traces, but a different (probably more expensive) design. I guess PCIe 4 signal degradation is crazy high at the new speeds, to the point that runs of more than a few inches cause some people issues. The max trace length is like 10in without retimers, and 3.0 designs aren't usable for 4.0 implementations.

https://community.keysight.com/comm...sion-of-pcie-30

Anyway I'm just googling around and probably wrong anyway so yeah!

lostleaf
Jul 12, 2009


Stanley Pain posted:

It's Linux. So it'll range anywhere from just working, to needing a daily sacrifice.

I was wondering if anyone had any experience, because on the Intel side QuickSync works just by installing the VA-API packages through apt. I'm hoping it's the same on AMD, but Google didn't give me a consistent answer.

Mr Shiny Pants
Nov 12, 2012


Stanley Pain posted:

It's Linux. So it'll range anywhere from just working, to needing a daily sacrifice.

At least when it does work, it will probably keep working.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA


Combat Pretzel posted:

Again, why? The physical PCIe connectors are still the same.

The only loose reference I could find about more traces was speculation of the spec requiring 300-500W of power delivery on the slot, which seems conjecture since there's no new slot.

--edit: Maximum trace length is 10-12" in PCIe 4.0. Anything beyond that requires components called retimers. It's going to be interesting to see how this affects cost on EATX mainboards.

The extra traces might be induction compensating ground planes and other fun bits you use in high speed signaling to keep EMI to a minimum. And yeah, whoever makes the retimers and PLX chips is gonna make a killing on PCIe 4 boards.

SwissArmyDruid
Feb 14, 2014



lostleaf posted:

Does AMD have good linux support specifically for hevc encoding and decoding acceleration? As in not having to compile and load kernel modules.

You know, this seems like the kind of thing that Phoronix would have.

https://www.phoronix.com/scan.php?p...-linux418&num=3

Craptacular!
Jul 9, 2001

Fuck the DH


lostleaf posted:

Does AMD have good linux support specifically for hevc encoding and decoding acceleration? As in not having to compile and load kernel modules.

It sort of depends on what your hardware is. Raven Ridge requires newer kernels and versions of Mesa than most distros will ship until next year, and as of this summer when I last looked, only the 2400G is likely to work; the 2200G just doesn't accelerate and sometimes needs multiple bootups to reach the desktop. Your average Ubuntu user can figure out how to go to a pre-release kernel as soon as they learn ukuu exists, but moving from mainline Mesa to somebody's compiled beta without loving up your installation takes real knowledge.

If you're on dGPU+CPU, you're probably fine.
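
If you want to see whether your particular setup actually exposes HEVC through VA-API before fighting with Mesa versions, something like this is a quick sanity check (Python just wrapping the stock vainfo and ffmpeg tools; the render-node path and file names are assumptions, and hevc_vaapi is ffmpeg's VA-API HEVC encoder):

```python
# Sketch: check for VA-API HEVC support, then do a test hardware transcode.
# Assumes a Mesa VA-API driver is installed (e.g. mesa-va-drivers on Ubuntu)
# and that the render node is /dev/dri/renderD128 -- adjust for your box.
import subprocess

RENDER_NODE = "/dev/dri/renderD128"

def has_hevc_decode() -> bool:
    """Look for an HEVC decode entrypoint in vainfo's profile listing."""
    out = subprocess.run(["vainfo"], capture_output=True, text=True).stdout
    return "VAProfileHEVCMain" in out and "VAEntrypointVLD" in out

def hw_transcode(src: str, dst: str) -> None:
    """Decode and re-encode HEVC on the GPU via VA-API, no kernel fiddling."""
    subprocess.run(
        ["ffmpeg", "-hwaccel", "vaapi", "-hwaccel_device", RENDER_NODE,
         "-hwaccel_output_format", "vaapi", "-i", src,
         "-c:v", "hevc_vaapi", "-qp", "24", dst],
        check=True,
    )

if __name__ == "__main__":
    print("HEVC decode via VA-API:", has_hevc_decode())
    # hw_transcode("input.mkv", "output.mkv")  # hypothetical file names
```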

Craptacular! fucked around with this message at Oct 15, 2018 around 09:05

Harik
Sep 9, 2001


SwissArmyDruid posted:

You know, this seems like the kind of thing that Phoronix would have.

https://www.phoronix.com/scan.php?p...-linux418&num=3
That's pure CPU performance on the encoding tests, not APU acceleration.

E: More thread-related: is x265 really tied to single-thread performance? Since the 32-core is beaten by the 16-core and both are trounced by Intel, I would have to guess so. It seems really shortsighted to design a modern codec that's tied to a single thread as core counts explode.

Harik fucked around with this message at Oct 15, 2018 around 09:53

SwissArmyDruid
Feb 14, 2014



Harik posted:

That's pure CPU performance on the encoding tests, not APU acceleration.

E: More thread-related: is x265 really tied to single-thread performance? Since the 32-core is beaten by the 16-core and both are trounced by Intel, I would have to guess so. It seems really shortsighted to design a modern codec that's tied to a single thread as core counts explode.

They didn't specify. I figured if they wanted APU perf, they'd have specified that, and if they wanted GPU perf, they'd have asked in the video card thread. So, strictly CPU.

But on the off chance that they wanted APU numbers: here, scroll all the way to the bottom (Windows-based, though). https://www.anandtech.com/show/1242...2400g-review/10

sauer kraut
Oct 2, 2004


Harik posted:

E: More thread-related: is x265 really tied to single-thread performance? Since the 32-core is beaten by the 16-core and both are trounced by Intel, I would have to guess so. It seems really shortsighted to design a modern codec that's tied to a single thread as core counts explode.

No that's Zen being very mediocre at workloads that make heavy use of AVX.

Harik
Sep 9, 2001


sauer kraut posted:

No that's Zen being very mediocre at workloads that make heavy use of AVX.
A 32-core going slower than a 16-core of the same architecture still doesn't make any sense. That's saying that using AVX causes it to downclock more than 50%. x264 uses AVX as well, but there AMD clearly has the lead due to more cores.

Something's not right with the x265 encoder on threadripper.
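
To put rough numbers on that intuition (the clock figures below are made up purely to illustrate the break-even point, not measurements):

```python
# Back-of-the-envelope: aggregate encode throughput ~ cores * sustained clock.
# The GHz values are hypothetical, just to show where the break-even sits.
def relative_throughput(cores: int, clock_ghz: float) -> float:
    return cores * clock_ghz

tr_16c = relative_throughput(16, 3.8)   # assumed sustained all-core AVX clock
tr_32c = relative_throughput(32, 3.0)   # assumed sustained all-core AVX clock

# For the 16-core to win on raw math alone, the 32-core would have to sustain
# under 3.8 / 2 = 1.9 GHz, which it doesn't -- so the x265 result points at
# scheduling/memory behavior rather than AVX downclocking by itself.
print(tr_16c, tr_32c)
```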

SwissArmyDruid posted:

They didn't specify. I figured if they wanted APU perf, they'd have specified that, and if they wanted GPU perf, they'd have asked in the video card thread. So, strictly CPU.
They asked about drivers, which doesn't make sense in a CPU context.

Anime Schoolgirl
Nov 28, 2002

~perfect~
battlebrother





Harik posted:

A 32-core going slower than a 16-core of the same architecture still doesn't make any sense. That's saying that using AVX causes it to downclock more than 50%. x264 uses AVX as well, but there AMD clearly has the lead due to more cores.

Something's not right with the x265 encoder on threadripper.

They asked about drivers, which doesn't make sense in a CPU context.
The scheduling may just be hosed on x265, similarly to GPU performance being cut in half in some APIs when using the 2990WX.

Harik
Sep 9, 2001


Anime Schoolgirl posted:

The scheduling may just be hosed on x265, similarly to GPU performance being cut in half in some APIs when using the 2990WX.

That's entirely possible; it would be interesting to see if there's an "Aha!" moment in the video encoding space where they figure out why it's so low.

You're right about x265 being AVX-heavy. I guess because it was a greenfield project when AVX was more widely available, it was designed around it in a way that x264 wasn't. That surprised me and was not at all what I expected. 2x 256-bit AVX per core for Intel means it really plays to their strengths.

Tough to tell what the 2990WX hit is, since apparently nobody has done H.265 benchmarks on the Epyc 7501. That would give an idea of what performance would be with proper memory bandwidth to all cores.

Klyith
Aug 3, 2007

GBS Pledge Week


A cursory glance at the x265 website says two things:
- its threading accounts for NUMA nodes, likely because H.265 is quite memory intensive
- it does not multithread as cleanly as older codecs, because apparently macroblocks have dependencies on previous ones within a single frame

Both of those together I could see giving Intel the edge. Possibly x265 doesn't have the NUMA stuff for the new big Threadrippers, in which case it could improve a fair bit later. But maybe between AVX, Intel's memory controller having a bandwidth advantage over Ryzen, and HEVC not scaling to 32 cores as well, the Threadripper just won't be the best CPU for ripping x265 videos.
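
If someone with a 2990WX wanted to poke at the NUMA angle directly, x265 exposes its thread-pool layout, so you can pin an encode to a single node and compare per-node throughput. A rough sketch via ffmpeg's -x265-params (the pool strings assume a 4-node layout, and the file names are placeholders):

```python
# Sketch: run the same x265 encode restricted to one NUMA node vs. all nodes,
# to see whether the 2990WX falloff is scheduling/NUMA rather than AVX clocks.
# Assumes ffmpeg built with libx265; pool strings assume 4 NUMA nodes.
import subprocess

def x265_encode(src: str, dst: str, pools: str, frame_threads: int = 4) -> None:
    """Encode with libx265, passing the NUMA pool layout through -x265-params."""
    params = f"pools={pools}:frame-threads={frame_threads}"
    subprocess.run(
        ["ffmpeg", "-i", src, "-c:v", "libx265",
         "-preset", "medium", "-x265-params", params, dst],
        check=True,
    )

if __name__ == "__main__":
    # '+' = use every core on that node, '-' = put no threads on that node.
    x265_encode("input.mkv", "node0_only.mkv", pools="+,-,-,-")  # one die
    x265_encode("input.mkv", "all_nodes.mkv", pools="+,+,+,+")   # whole chip
```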
