|
Potato Salad posted:Motion to just use Zen, Zen Refresh, and Zen2? No. Zen, Zen.5, and ZenTwo. edit: Zentoo. Just like my favorite distro of Linux. VVVVVV I mean, If I were in charge of naming, I would entertain the idea, but it's hard to strike the balance of "taking the piss out of Chipzilla" and "just copying them relentlessly". SwissArmyDruid fucked around with this message at 19:54 on Sep 5, 2017 |
# ? Sep 5, 2017 19:32 |
|
Zen, Zen Lake, then Ryzen Lake
|
# ? Sep 5, 2017 19:34 |
|
Ryzen XP
|
# ? Sep 5, 2017 20:07 |
|
Ryzen All In Wonder.
|
# ? Sep 5, 2017 20:08 |
|
Wait, instead of ticktock AMD has up-rebrand cycle? :V
|
# ? Sep 5, 2017 20:10 |
|
Stanley Pain posted:Ryzen All In Wonder. An onboard tv tuner? An on board HDMI input would be neat/useful for a few things
|
# ? Sep 5, 2017 20:11 |
|
Paul MaudDib posted:Zen2 has always had the potential for greatness. Zen1 is already not half bad, if they can bump the IPC and clocks up a bit, fix the memory controller's touchiness, and get some of the errata fixed then Zen2 could easily be going toe-to-toe with Coffee Lake.
|
# ? Sep 5, 2017 20:13 |
|
Onboard analog tv-tuner
|
# ? Sep 5, 2017 20:13 |
|
repiv posted:new leak from wccftech The compression blur just makes it
|
# ? Sep 5, 2017 20:28 |
|
Sormus posted:Wait, instead of ticktock AMD has up-rebrand cycle? :V Ryzen refresh is supposed to be a straight respin with no architectural changes. Think Devil's Canyon, except with actually significant clockspeed increases, since we're already seeing 200-300MHz from minor steppings between Ryzen and Threadripper/Epyc. Arzachel fucked around with this message at 20:43 on Sep 5, 2017 |
# ? Sep 5, 2017 20:40 |
|
Pablo Bluth posted:They should probably introduce AVX512 too. Even if it's not that relevant to most people, it gives a poor impression to lose badly in a subset of benchmarks. Munkeymon posted:The compression blur just makes it
|
# ? Sep 5, 2017 20:43 |
|
repiv posted:new leak from wccftech yesssss btw in case anyone was wondering where I was pulling that from, it was these articles on AT, mixed with some of the slides from AMD's Ryzen presentations. GlobalFoundries Details 7 nm Plans: Three Generations, 700 mm², HVM in 2018 Samsung and TSMC Roadmaps: 8 and 6 nm Added, Looking at 22ULP and 12FFC
|
# ? Sep 5, 2017 20:53 |
|
Pablo Bluth posted:They should probably introduce AVX512 too. Even if it's not that relevant to most people, it gives a poor impression to lose badly in a subset of benchmarks. They're not going to do that. It would take too much die space, and it would ruin them in power-consumption charts. Hell, they don't even do native AVX2 instructions (it's executed as a pair of AVX1 ops, effectively halving the throughput). This has actually been a boon for them, because those charts measure power consumption, not power efficiency. Yeah, when running Prime95 a Skylake-X pulls a lot more power, but it's actually churning a hugely greater number of primes, so efficiency is actually much higher. But that's not what the charts are measuring. This also happens to be the only way you get those hilarious 300W figures that melt VRMs and poo poo; Prime95 is a ridiculously unrealistic load, and virtually any other task drops power consumption to a fraction of that. AMD would be exposing themselves to similar problems/discourse about their processors. So yeah, slow AVX performance that saves die space and leaves you looking good in power-consumption charts is probably an overall boon for AMD, vs a pyrrhic victory chasing Intel's performance. The place where rubber meets the road is media creation though (encoding and rendering). x265 is very good at using AVX512, and this is one of the few benchmarks where the 7900X blows out the 1950X, despite the latter's 60% advantage in core count. It also substantially closes the gap in x264 and Blender rendering, since these are AVX512-aware as well. The 1950X still wins in these, but the 7900X closes the gap down to 5-20% despite its much lower core count, thanks to AVX512. AMD's marketing has revolved around the idea that everyone is suddenly streaming or doing 3D rendering, so it's pretty funny to see them underperforming so drastically in those tasks.
AVX512 is actually pretty good for workstation-type users, and is important in various kinds of HPC workloads as well. Not everyone is a workstation user, of course, but AMD's marketing has really been pushing the idea that you need a 16-core NUMA setup to leave a couple tabs open in Chrome while you game. (It's even more hilarious that when GN did testing, they determined that, while they could not eliminate the null hypothesis, Intel's G4560 actually outperformed the 4-core Ryzen parts when they added background load.) Paul MaudDib fucked around with this message at 21:28 on Sep 5, 2017 |
# ? Sep 5, 2017 21:12 |
|
They will be moving to AVX2 though, and just pair AVX2 ops for AVX-512; they've explicitly stated they want to do this for Zen2. I don't think AMD will ever go native AVX-512 until they're confident in their market position. EDIT: To be clear, I think this is where the APUs and CPUs will diverge, so 7nm Zen2 APUs will not have AVX-512 capability, and will probably have fewer PCIe lanes and less I/O as well. EmpyreanFlux fucked around with this message at 21:39 on Sep 5, 2017 |
# ? Sep 5, 2017 21:36 |
|
Paul MaudDib posted:Hell, they don't even do native AVX2 instructions (it's executed as a pair of AVX1 ops, effectively halving the throughput). Zen doesn't do full-rate AVX1 either; it breaks all 256-bit SIMD ops into a pair of 128-bit ops (effectively decomposing AVX into SSE).
|
# ? Sep 5, 2017 21:39 |
|
If you're doing that kind of high-throughput encoding, wouldn't it be insanely faster and more efficient to offload that work to whatever FirePro card, assuming you're inside the AMD ecosystem anyhow?
|
# ? Sep 5, 2017 21:48 |
|
Is AVX-512 even going to become a big deal at some point? I thought it's generally been viewed as going the way of AltiVec and currently it's just a way for Intel to put big numbers in certain benchmarks for the "MAH FRAMEZ" crowd.
|
# ? Sep 5, 2017 21:49 |
|
BangersInMyKnickers posted:If you're doing that kind of high throughput encoding, wouldn't it be insanely faster and more efficient to offload that work to whatever Firepro card assuming you're inside the AMD ecosystem anyhow? Yes, streamers are typically better off using GPU encoding. CPU encoding has better quality but it's difficult to achieve this in real-time, and if you are going to attempt to do so then the gold standard is to do the encoding on a dedicated machine since anything that exceeds the quality of GPU encoding is also going to poo poo up your framerate something fierce, even on an 8-core processor. Which, again, is why AMD's marketing is deceptive here. Regular users are better off using GPU encoding due to substantially lower impacts on framerate, ~*professional streamers*~ are better off getting a dedicated machine to handle encoding.
|
# ? Sep 5, 2017 22:04 |
|
Paul MaudDib posted:Yes, streamers are typically better off using GPU encoding. CPU encoding has better quality but it's difficult to achieve this in real-time, and if you are going to attempt to do so then the gold standard is to do the encoding on a dedicated machine since anything that exceeds the quality of GPU encoding is also going to poo poo up your framerate something fierce, even on an 8-core processor. Seems hard, you'd need a dedicated machine that can really rip through several threads at once
|
# ? Sep 5, 2017 22:22 |
|
Paul MaudDib posted:Yes, streamers are typically better off using GPU encoding. CPU encoding has better quality but it's difficult to achieve this in real-time, and if you are going to attempt to do so then the gold standard is to do the encoding on a dedicated machine since anything that exceeds the quality of GPU encoding is also going to poo poo up your framerate something fierce, even on an 8-core processor. To be fair to AMD, GPU-encoded Twitch streams looked pretty bad around the time of the Ryzen launch. Unfortunately for them Twitch almost doubled the bitrate limit a month later so it's far more forgiving of encoder quality now.
|
# ? Sep 5, 2017 22:30 |
|
SourKraut posted:Is AVX-512 even going to become a big deal at some point? I thought it's generally been viewed as going the way of AltiVec and currently it's just a way for Intel to put big numbers in certain benchmarks for the "MAH FRAMEZ" crowd. 50+% per-core performance improvement is ridiculously good
|
# ? Sep 5, 2017 22:51 |
|
Paul MaudDib posted:Yes, streamers are typically better off using GPU encoding. CPU encoding has better quality but it's difficult to achieve this in real-time, and if you are going to attempt to do so then the gold standard is to do the encoding on a dedicated machine since anything that exceeds the quality of GPU encoding is also going to poo poo up your framerate something fierce, even on an 8-core processor. I was talking about doing it in software via OpenCL vs on the CPU, not hardware GPU encode.
|
# ? Sep 5, 2017 22:59 |
|
GPUs are good at embarrassingly parallel tasks that can run on shitloads of threads that don't need information from other threads. This is fundamentally opposed to how most of video encoding works; x264 begins to lose quality above a certain number of threads (16-22 at 1080p?) because the video is being split up into pieces that are too small. A high-end GPU, on the other hand, will want to be using tens of thousands of threads. You can accelerate parts of the encoding process using OpenCL, but every pure-GPU video encoder I've seen (that wasn't a dedicated chip) has looked like poo poo. And I'm pretty sure the CPU-only parts still end up being the bottleneck.
|
# ? Sep 5, 2017 23:11 |
|
SwissArmyDruid posted:No. Zen, Zen.5, and ZenTwo. Zen, Zen 360, Zen One? Or no, wait, I've got it. Zen, Zen 2.0 Full Speed, Zen 2.0 High Speed.
|
# ? Sep 6, 2017 04:31 |
|
Zen 3.0, Zen 3.1, Zen 3.1 Gen 1, Zen 3.1 Gen 2 Zen-C
|
# ? Sep 6, 2017 05:00 |
|
Zen, Zen+, Zen++, Zen#, #Zen, @Zen I kinda want to see the last two now just to see how it fucks with social media.
|
# ? Sep 6, 2017 05:07 |
|
Malloc Voidstar posted:You can accelerate parts of the encoding process using OpenCL but every pure-GPU video encoder I've seen (that wasn't a dedicated chip) has looked like poo poo. And I'm pretty sure the CPU-only parts still end up being the bottleneck. https://www.youtube.com/watch?v=_6XYaFqq2mg In case you don't want to watch, the takeaway is that a GTX 660 gives Premiere's encoder exactly as much of a boost as a Vega 64. Unless x264's OpenCL branch is offloading significantly more to the GPU (and I very much doubt it does), you're super correct. I'm curious how weak/old a GPU would have to be to give worse results than the current generation, if a four-year-old card is still indistinguishable, but that's sort of a moot point.
|
# ? Sep 6, 2017 06:48 |
|
You guys are going about this all wrong. AMD knows exactly how to push Intel's buttons now that Zen is actually competitive. Ladies and gentlemen... Zen Lake.
|
# ? Sep 6, 2017 06:56 |
AzraelNewtype posted:https://www.youtube.com/watch?v=_6XYaFqq2mg This is one of those rare videos where you should actually read the comments: the bottleneck they saw had nothing to do with the hardware and everything to do with the software.
|
|
# ? Sep 6, 2017 07:59 |
|
The devs who do the performance work on Photoshop/Lightroom/Premiere are the ones who failed to cut it in the Flash security team...
|
# ? Sep 6, 2017 08:02 |
|
Pablo Bluth posted:The devs who do the performance work on Photoshop/Lightroom/Premiere are the ones who failed to cut it in the Flash security team... When my friend was still doing video work in Premiere for a living, Adobe became a curse word.
|
# ? Sep 6, 2017 08:26 |
|
Ryzen update: I indeed got dud RAM sticks; the replacement memory runs like a charm at 3200MHz. On the other hand, the 1700X isn't really overclockable: you can bump it to 3.7GHz, for example, but the system will crap out in a matter of minutes. I can only run it at stock 3.4GHz, which is a minor disappointment, but oh well.
|
# ? Sep 6, 2017 10:27 |
|
A SWEATY FATBEARD posted:Ryzen update: I indeed got dud RAM sticks, the replacement memory runs like charm at 3200MHz. On the other end, 1700X isn't really overclockable, you can bump it to 3.7GHz for example but the system will crap out in a matter of minutes. I can only run it at stock 3.4GHz, which is a minor disappointment but oh well. what are you using to cool it?
|
# ? Sep 6, 2017 10:43 |
|
Scarecow posted:what are you using to cool it? Arctic cooling Freezer 33.
|
# ? Sep 6, 2017 12:00 |
|
yeah that could be your problem, what temps are you hitting with it?
|
# ? Sep 6, 2017 12:20 |
|
Scarecow posted:yeah that could be your problem, what temps are you hitting with it? At stock speed, under full load, 60C.
|
# ? Sep 6, 2017 12:35 |
|
A SWEATY FATBEARD posted:At stock speed, under full load, 60C. Lol did you give the chip some more voltage?
|
# ? Sep 6, 2017 12:57 |
|
Bristol Ridge is available at retail if you really really need an APU and can't wait until Raven Ridge http://www.anandtech.com/show/11819/amd-bristol-ridge-apu-retail-available
|
# ? Sep 6, 2017 16:57 |
|
and as usual the (7, 9)600 is the only one worth getting as it's the only product in the stack that makes sense in ultra-cheapo-but-not-terribly-poo poo builds
|
# ? Sep 6, 2017 19:16 |
|
I would argue not even, since these are only 35W parts. I mean, what ultra-cheapo-but-not-terribly-poo poo build do you have to have that you can't do with 65W and the commensurate perf bump?
|
# ? Sep 6, 2017 21:59 |