|
Cygni posted:One of the leaked Raven Ridge APUs looked to be roughly an Xbox One graphics config (704 shaders at 800mhz for Raven Ridge vs 768 at 853mhz in Xbone, but an older shader design), but I think that was a desktop part. Still great for SFF stuff, hopefully the ****HQ or ****HK equivalent models for mobile will be 80-90% of the desktop models combined with Freesync 2 panels.
|
# ? Aug 14, 2017 20:04 |
|
|
Measly Twerp posted:He has this way of making interesting things unlistenable. No, haircut is right. There's plenty of people in the tech review space with no charisma. Burke just looks slimy, like I can see the no showers in a month and so on.
|
# ? Aug 14, 2017 21:48 |
|
Watermelon Daiquiri posted:....considering room temp is 22C, 20C is impossible unless it's out in the Arctic or something.
|
# ? Aug 14, 2017 23:42 |
|
I love Gamer's Nexus and think the delivery is great, better than most tech shows. Especially Linus'
|
# ? Aug 15, 2017 00:12 |
|
Paul MaudDib posted:They get more poo poo about being anti-AMD than I do. Not sure that's possible!
|
# ? Aug 15, 2017 01:14 |
|
Arivia posted:No, haircut is right. There's plenty of people in the tech review space with no charisma. Burke just looks slimy, like I can see the no showers in a month and so on. Steve Burke looks like he could hold his own in Quake, a nerd's nerd.
|
# ? Aug 15, 2017 01:33 |
|
I always read techpowerup first and it's almost entirely because of their graphs. Having nice neat charts of performance, performance/W, and performance/$ is something everyone should do.
|
# ? Aug 15, 2017 01:40 |
|
PerrineClostermann posted:I love Gamer's Nexus and think the delivery is great, better than most tech shows. GN gave us 20 minutes of details; Linus seemed to have made the video as short as possible so as not to piss off AMD with bad press. Really, out of a 7 minute video, 2 minutes were intro and exit ads.
|
# ? Aug 15, 2017 01:47 |
|
Look at the viewership numbers for Linus versus everyone else though.
|
# ? Aug 15, 2017 02:04 |
|
wargames posted:GN gave us 20 minutes of details; Linus seemed to have made the video as short as possible so as not to piss off AMD with bad press. Really, out of a 7 minute video, 2 minutes were intro and exit ads. I guess this is the kind of softball review a couple exclusive Vega previews buys you. (it's infinitely hilarious that Steve Burke sperg-bombed this deal for AMD) They're dividing the cost of the Vega 64 upgrade and the bundle cost into the whole system so it looks better ("look it's only another 10% on your total system cost!"). I wonder what this chart looks like drawn with the GSync tax instead. repiv posted:Speaking of Linus, remember when he called Vega FE a piece of garbage and promised to tear it a new one in his review. Paul MaudDib fucked around with this message at 02:10 on Aug 15, 2017 |
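The framing trick is easy to see with made-up numbers (none of these prices are from the review, they're just for illustration):

```python
premium = 100        # hypothetical extra cost of the Vega 64 option
card_price = 500     # hypothetical price of the card it displaces
system_price = 1500  # hypothetical total build cost
print(f"{premium / card_price:.0%} more, card vs card")      # 20%
print(f"{premium / system_price:.0%} more on the whole rig") # 7%
```

Same dollar premium, much smaller-looking percentage once it's diluted across the whole system.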
# ? Aug 15, 2017 02:08 |
|
Why does that graph show values for prices less than the MSRP of the cards?
|
# ? Aug 15, 2017 02:25 |
|
wargames posted:GN gave us 20 minutes of details; Linus seemed to have made the video as short as possible so as not to piss off AMD with bad press. Really, out of a 7 minute video, 2 minutes were intro and exit ads. I mean in general.
|
# ? Aug 15, 2017 02:28 |
|
So, what's the possibility of like ASRock or someone else making a mATX or ITX Mobo with all the M.2 slots on the back? I know you probably couldn't do 8 DIMM slots, but you could probably take full advantage of all 64 PCI-E lanes on mATX.
|
# ? Aug 15, 2017 03:18 |
|
wargames posted:GN gave us 20 minutes of details; Linus seemed to have made the video as short as possible so as not to piss off AMD with bad press. Really, out of a 7 minute video, 2 minutes were intro and exit ads. All of LTT's recent videos have spent exactly 2s of screen time for each graph. They've basically become worthless as reviewers.
|
# ? Aug 15, 2017 04:02 |
|
I'm pretty impressed with the Ryzen performance for business purposes. I think it's the first time in years I've been enthusiastic about a cpu. Seems like an affordable way to get a lot of cpus with better floating point performance than Intel.
|
# ? Aug 15, 2017 04:27 |
Devian666 posted:I'm pretty impressed with the Ryzen performance for business purposes. I think it's the first time in years I've been enthusiastic about a cpu. Seems like an affordable way to get a lot of cpus with better floating point performance than Intel. That makes a lot of sense, Ryzen is the first compelling new thing we have seen in desktop CPUs since Sandy Bridge or so. Hopefully this lights a fire under Intel and we see some real innovation from both sides over the next few years.
|
|
# ? Aug 15, 2017 04:39 |
|
AVeryLargeRadish posted:That makes a lot of sense, Ryzen is the first compelling new thing we have seen in desktop CPUs since Sandy Bridge or so. Hopefully this lights a fire under Intel and we see some real innovation from both sides over the next few years. My workload is CFD so it's pretty easy to create a mesh for each virtual cpu and of course what I run is so archaic that it uses Fortran so every node has at least 10 floating point variables. A 1950X could theoretically reduce the run times by 75% compared with my quad core xeon. There's a big cost jump between an 1800X and a 1950X (including the cost of the motherboard) so I'll have to think about this for a while.
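Back-of-envelope, the 75% figure is just ideal core scaling (real CFD runs will fall somewhat short of this):

```python
# 4-core Xeon -> 16-core 1950X, assuming the solver scales perfectly
xeon_cores = 4
threadripper_cores = 16
speedup = threadripper_cores / xeon_cores  # 4.0x under ideal scaling
reduction = 1 - 1 / speedup                # fraction of runtime saved
print(f"{reduction:.0%} shorter run times")  # 75%
```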
|
# ? Aug 15, 2017 04:48 |
|
AVeryLargeRadish posted:That makes a lot of sense, Ryzen is the first compelling new thing we have seen in desktop CPUs since Sandy Bridge or so. Hopefully this lights a fire under Intel and we see some real innovation from both sides over the next few years. Perhaps we'll see a new, inventive form of rebate for manufacturers?
|
# ? Aug 15, 2017 05:35 |
|
PerrineClostermann posted:Perhaps we'll see a new, inventive form of rebate for manufacturers? Yeah, but gone are the days of bribing Dell, HP, and E-Machines to lock in 90% of the market. CPU sales are gonna be on the Cloud providers and backend, and because of them much more likely to be subject to increased scrutiny by people willing and able to call them on their bullshit. Also after the EU Intel anti-trust ruling, they're going to be at least a little bit more circumspect than 'never use our competitor ever and get 20% off our chips'.
|
# ? Aug 15, 2017 06:34 |
|
I dunno, Intel seems to be able to appeal that ruling indefinitely.
|
# ? Aug 15, 2017 06:36 |
|
PerrineClostermann posted:I dunno, Intel seems to be able to appeal that ruling indefinitely. Yeah, but 'we literally did the exact same thing you're fining us for again' can curtail the appeal some. Not sure how the EU appeals system works, but that would be the kind of flagrant disregard for the law that would get a judge to issue penalties.
|
# ? Aug 15, 2017 06:48 |
|
NewFatMike posted:Still great for SFF stuff, hopefully the ****HQ or ****HK equivalent models for mobile will be 80-90% of the desktop models combined with Freesync 2 panels. GF's process nodes seem to be way more efficient at low voltage and clocks and I don't think Vega will change that one bit, but raw shader power has never been the issue. DDR4 gets us half way to not choking the iGPU to death, but the hype around Vega for me was the TBR/DSBR implementation, which seems pretty underwhelming right now. It might be down to the HBM2 memory controller severely underperforming and I'd like to see some testing done on this, but I don't think AMD can properly feed 11 CUs at 800 MHz. Edit: now that I've looked at it more, Vega has less effective memory bandwidth than Fiji while still outperforming it by a decent margin. That bodes well for Raven Ridge even if Vega is pretty poo poo. Arzachel fucked around with this message at 07:58 on Aug 15, 2017 |
# ? Aug 15, 2017 07:47 |
|
Yea, Raven Ridge is going to be sitting at like 900mV to 1.05V, and based on the GN review Vega is performing way outside its voltage spec, which is causing the absurd power draw; Vega is able to boost past 1600MHz @ 1.09V and is able to hit 1520MHz @ 1.025V, and power draw drops 50-60W while clocking effectively higher than stock. It's also yet again a very bottlenecked design, with the differences between the XL and XT versions being clock differences. I don't think we can draw too many comparisons between Vega 10 and Raven Ridge yet; if AMD releases Vega 11/12 before Raven Ridge we will have a very good idea. In turn Zen is really good at lower clocks and voltages, so while I expect some 45W solutions I'm also expecting AMD to get some nice 25W, 15W and 5W solutions that still perform really well.
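Those numbers roughly fit the usual dynamic-power rule of thumb (power scales with V²·f); the stock board power below is an assumed figure for illustration, not from the GN review:

```python
v_stock, f_stock = 1.09, 1600  # stock boost point (V, MHz)
v_uv, f_uv = 1.025, 1520       # undervolted point (V, MHz)
ratio = (v_uv / v_stock) ** 2 * (f_uv / f_stock)
board_power = 295              # W, assumed stock draw
saved = board_power * (1 - ratio)
print(round(saved))            # ~47 W, in the ballpark of the 50-60W drop
```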
|
# ? Aug 15, 2017 09:55 |
|
From Anandtech's Vega review: quote:On a related note, the Infinity Fabric on Vega 10 runs on its own clock domain. It’s tied to neither the GPU clock domain nor the memory clock domain.
|
# ? Aug 15, 2017 10:20 |
|
Tentatively: Maybe that's what's causing the problems; we will have to see, but at least we know that Zen+ or Zen2 should be less dependent on memory.
|
# ? Aug 15, 2017 10:22 |
|
Devian666 posted:My workload is CFD so it's pretty easy to create a mesh for each virtual cpu and of course what I run is so archaic that it uses Fortran so every node has at least 10 floating point variables. A 1950X could theoretically reduce the run times by 75% compared with my quad core xeon. There's a big cost jump between an 1800X and a 1950X (including the cost of the motherboard) so I'll have to think about this for a while. If you're spending money, make it worthwhile. You could save money now, or save time later. Is it worth upgrading from what you have to an 1800X? Why not a 1700? When I upgraded my graphics card from my R9 290, I asked myself if it was worth spending $300-400 for a card that barely edges out my old card, or $700 for a card that beats it by a fair margin. I spent a grand and got a 1080 Ti instead, because the performance per dollar with the initial cost ($300 to break even) made it the best choice.
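The same perf-per-dollar logic, sketched with made-up performance numbers (only the rough prices come from the post above):

```python
# Cost per unit of performance gained over the old card; all perf
# figures are hypothetical, normalized to the old card = 100
baseline = 100
options = {
    "barely-faster card ($350)": (115, 350),
    "fair-margin card ($700)":   (150, 700),
    "1080 Ti ($1000)":           (210, 1000),
}
for name, (perf, price) in options.items():
    cost_per_point = price / (perf - baseline)
    print(f"{name}: ${cost_per_point:.0f} per extra perf point")
```

With these made-up numbers the most expensive card is the cheapest per point of performance actually gained, which is the shape of the argument.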
|
# ? Aug 15, 2017 11:52 |
|
SwissArmyDruid posted:
That would be the shiftiest 10% performance uplift ever. "Hey guys looks like our VBIOS has been running the Infinity Fabric at half speed this whole time!"
|
# ? Aug 15, 2017 12:33 |
|
All the stuff for my 1700 build arrived. Apart from the memory, which Ebuyer haven't even shipped yet...
|
# ? Aug 15, 2017 17:20 |
|
Eyochigan posted:If you're spending money, make it worthwhile. You could save money now, or save time later. Is it worth upgrading from what you have to an 1800X? Why not a 1700? A good question. All of the CPUs I'm using for work and home (excluding my laptop) are getting rather old (the memory in my workstation is DDR3 - I'm assuming this is rubbish by today's standards). An 1800X would be a major upgrade and with the fast floating point could be a leap forward that's faster than what I'm expecting. Why not a 1700? I am prepared to pay for higher clockspeed and prefer not to overclock. Overclocking on a multithreaded job that could take up to a week to complete is a commercial risk if it's unstable.
|
# ? Aug 16, 2017 05:28 |
|
Devian666 posted:A good question. All of the CPUs I'm using for work and home (excluding my laptop) are getting rather old (the memory in my workstation is DDR3 - I'm assuming this is rubbish by today's standards). An 1800X would be a major upgrade and with the fast floating point could be a leap forward that's faster than what I'm expecting. Could I make an alternative suggestion for your CFD workload? http://natex.us/S2600CP2J-Custom/ That, plus 2x E5-2670s, and 128GB of ECC DDR3 on 8 memory channels = 16 cores and 80-120 GB/sec (edit: typo+realism) of memory bandwidth for ~$750. You can compare it to Threadripper, but I don't think it can match the old dual Sandy Xeons in memory bandwidth per dollar. Assuming your current workstation memory is ECC you have spare parts there, and could even pick up a backup motherboard, and still stay under $1k. Mofabio fucked around with this message at 06:03 on Aug 16, 2017 |
# ? Aug 16, 2017 05:42 |
|
Not to nitpick, but I'm pretty sure that's roughly 80 GB/s of peak DRAM bandwidth assuming you're stuffing it with PC3-10600.
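For what it's worth, the arithmetic (PC3-10600 moves ~10.67 GB/s per channel, and dual E5-2670s give 8 channels total):

```python
channels = 8                 # 4 per socket, two sockets
transfers_per_sec = 1333e6   # DDR3-1333
bytes_per_transfer = 8       # 64-bit channel
per_channel = transfers_per_sec * bytes_per_transfer / 1e9  # ~10.66 GB/s
peak = channels * per_channel
print(round(peak, 1))        # 85.3 GB/s theoretical peak
```

So "roughly 80 GB/s" is about right for the upper bound; sustained bandwidth will be lower.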
|
# ? Aug 16, 2017 05:55 |
|
Could be. Was going off https://ark.intel.com/products/66133/Intel-Server-Board-S2600CP2J
|
# ? Aug 16, 2017 06:01 |
|
dubba post
|
# ? Aug 16, 2017 06:02 |
|
Devian666 posted:Why not a 1700? I am prepared to pay for higher clockspeed and prefer not to overclock. Overclocking on a multithreaded job that could take up to a week to complete is a commercial risk if it's unstable.
|
# ? Aug 16, 2017 07:58 |
|
How to use a torque driver correctly: https://clips.twitch.tv/HyperInventiveTubersSaltBae
|
# ? Aug 16, 2017 16:56 |
|
repiv posted:How to use a torque driver correctly: https://clips.twitch.tv/HyperInventiveTubersSaltBae Uhh...
|
# ? Aug 16, 2017 17:02 |
repiv posted:How to use a torque driver correctly: https://clips.twitch.tv/HyperInventiveTubersSaltBae loving morons.
|
|
# ? Aug 16, 2017 17:08 |
|
Pretty sure he put at least 3x the effort into tightening that thing than I did on my loving giant Noctua cooler. How did he not realize he was using too much force?
|
# ? Aug 16, 2017 17:17 |
|
Don't torque-shame.
|
# ? Aug 16, 2017 17:18 |
|
|
Malloc Voidstar posted:Pretty sure he put at least 3x the effort into tightening that thing than I did on my loving giant Noctua cooler. I guess if you're a Twitch streamer it means you're worthless at literally everything else in life... including basic tool usage like a god drat screwdriver.
|
# ? Aug 16, 2017 17:19 |