Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Drakhoran posted:

Depends on how far you want to overclock it. Kyle at Bitwit got his 1700 to 4 GHz with a Noctua cooler, but only to 3.9 GHz with the Spire.

To be fair, there is an enormous difference between 3.9 and 4 GHz on Ryzen. 3.9 GHz is pretty much the max clock speed the process can handle with any sort of efficiency, and anything past that starts seeing the chips suck down volts like they're Polaris GPUs.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

crazypenguin posted:

I really want to know what they did to accomplish that.

A significant portion of that is bug fixing. Ryzen + Nvidia combos apparently performed extremely poorly in the game in DX12 pre-patch, far below what you'd expect compared to other games. There is some speculation that Nvidia's drivers don't handle Ryzen properly, so I wonder if Nixxes worked around it or if it was a Rise of the Tomb Raider engine bug this whole time. The patch also included some generic CPU-related fixes that should have improved both Intel and AMD performance in this game.

http://www.tomshardware.com/reviews/amd-ryzen-7-1700x-review,4987-6.html - You can see the Rise of the Tomb Raider DX12 results here; Ryzen was performing really out of whack, with the 1700X barely faster than an FX-8350.

Beautiful Ninja fucked around with this message at 19:04 on May 30, 2017

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Of note, that chart only represents Passmark users; it's not an actual market share analysis from any sort of reputable organization.

The main skepticism about the chart will be 'of course all the people who bought shiny new CPUs are benchmarking them.'

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Cygni posted:

lmao gettin worked up about a tiny rear end joke in marketing slides

Poor Volta.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

incoherent posted:

What the hell are game developers doing if their go-to platform are consoles with 8 core??????

My understanding is games are perfectly capable of using a single thread for each individual process, like sound, networking, physics, and AI. But each process doesn't need the same amount of CPU power, so you end up with certain processes that use way more CPU time than the others, and you can only go as fast as the slowest process.

What games can't do is take the AI thread and make it run on 8 threads at once, since you can't just break your average game process into parallel chunks and have them complete ASAP; everything needs to be done in a certain order. It seems to be a fundamental problem with sequential programs that no one's really figured out how to get around yet.
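
To put a number on that "only as fast as the slowest process" bit, here's a toy Python sketch. The subsystem names and per-frame costs are completely made up and no real engine is structured like this; it just shows that the frame can't finish faster than its slowest thread, no matter how many cores you have.

code:
# Toy illustration of the "only as fast as the slowest process" point.
# Subsystem names and per-frame costs are invented for the example.
import time
from concurrent.futures import ThreadPoolExecutor

SUBSYSTEMS = {"sound": 1.0, "networking": 2.0, "ai": 4.0, "physics": 9.0}  # ms per frame

def run_subsystem(name, cost_ms):
    time.sleep(cost_ms / 1000.0)  # stand-in for real work
    return name, cost_ms

def run_frame():
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=len(SUBSYSTEMS)) as pool:
        results = list(pool.map(lambda kv: run_subsystem(*kv), SUBSYSTEMS.items()))
    frame_ms = (time.perf_counter() - start) * 1000
    slowest = max(results, key=lambda r: r[1])
    print(f"frame took ~{frame_ms:.1f} ms, gated by {slowest[0]} ({slowest[1]} ms)")

run_frame()  # ~9 ms no matter how many cores you throw at it; physics is the wall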

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Combat Pretzel posted:

Actually AI is a thing that can be easily parallelized. Each agent can be simulated on its own thread.

Ah. I picked a poor example then. But the main problem for games seems to be that you can't easily parallelize all the functions of a game, so you end up with some sort of single threaded bottleneck someplace that slows everything else down.

Consoles work around this by liberal use of 30 FPS games.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.
Well, I took the plunge on Ryzen today. Saw the 129 dollar 1700X deal and decided now was as good a time as any to move away from Intel. I had a 6700K, so in terms of ST performance it's a marginal downgrade, but I game at 1440p and 4K so the performance loss should be marginal to none; I'll just let G-Sync handle the few missing frames at 1440p on the 2080 Ti. I decided to move since upgrading on the Intel path was going to cost way too much and is a dead end anyway: Intel isn't going to release another generation on Z3XX, and I doubt we're seeing any other CPU on that platform either. With AM4 I can look forward to support up through Ryzen 3, and I'm already planning on upgrading to Ryzen 2 as soon as it comes out next year; between the IPC and clock upgrades it should be a legit 9900K competitor. The comedy of it all is that I can buy a 1700X, an Asus Crosshair VII Hero, and the upcoming 3700X for about the same cost as the 9900K alone.

Once I get a proper OS reinstall going, it'll be time to tweak this thing; it looks like I should be able to get to a 3.9 GHz all-core overclock pretty easily. I do wonder if my RAM will run at its rated DDR4-3000; it's apparently some Hynix RAM based on its serial information, not the glorious Samsung B-Die that's the most compatible with Ryzen. Hoping the newer X470 mobos and BIOSes are better at that than the old X370 boards were.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Risky Bisquick posted:

If all this comes true, Intel is getting dunked on super hard in a couple months. I got cash, sell me the 5ghz 12c part stat

Yep, I'm ready for that as well. I got a 1700X cheap at Microcenter as a bridge CPU while waiting for the Ryzen 3000 series parts to come out. That CPU would finish off the PC build I've already spent far too much money on, so it can sit there while I play Wii U games.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

sauer kraut posted:

So the rumored 3600X would be a single die with 8 cores, no more inter ccx delay and direct access to both Dram channels?
That could be quite the challenger for a stock 9900K :v:

I would suspect it would be 2 dies with 4 cores apiece. Perfectly working 8-core dies are going to be put in the higher end SKUs.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.
So I just picked up a 2700X to replace my current 1700X. I'm planning to build another PC, so I figured I'd use this chance to do some marginal upgrades to mine.

So far I'm really liking the XFR/PBO auto overclocking; it pretty much makes normal overclocking unnecessary unless you're demanding very high locked all-core clock speeds. For gaming purposes, it's much more useful for me to have a couple of threads boost up while the others stick at slightly lower clocks.

But I do have a question about the Performance Enhancer stuff on my Asus Crosshair VII Wi-Fi. My understanding is that levels 1-2 are AMD-approved settings and shouldn't be an issue, while levels 3 and 4 are Asus' own stuff. I do have a 280mm AIO cooler, and in my testing it looks like it can handle Level 3 just fine temps-wise. I was just wondering if I'm being paranoid about it potentially being dangerous to the CPU to let the mobo do what it wants with the voltages to keep things clocked as high as possible. Level 4 seems to be reserved for people doing exotic cooling, and I think I'll steer clear of that for now.

Other than that, I'm actually pretty happy with the upgrade so far. My 1700X wasn't a great overclocker; it only did 3.8 GHz before I was pushing past the recommended voltage limits for overclocking Ryzen. The increase in performance going from that to a 2700X boosting to 4.1-4.2 GHz is actually noticeable for me, and it's cleared up a lot of CPU-related stuttering I was having before. I moved to an ultrawide monitor recently and didn't take into account that my CPU now needs to drive roughly 25% more objects on screen; the 1700X just wasn't keeping things as smooth as I liked once I made the switch. Really looking forward to the rumored 3700X if that ends up being a thing; something that can boost to 5 GHz on a couple of cores would be super sweet for gaming.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

fknlo posted:

You don't need anything exotic for PE4. I'm running a 360mm AIO and it handles PE4 fine. I run a -0.09375 voltage offset with mine and it keeps the voltage around 1.4 or lower under load. Some people can pull off a -0.1 offset but mine won't. One thing I've seen somewhat commonly is PE3 with BCLK adjustments and a positive offset. I haven't played around with that yet but will at some point in the future.

Thanks for that, I'll start working on finding a good offset for my CPU. While looking at voltages, I noticed that with PE level 3 on Auto, during heavy CPU stuff like Cinebench my voltages were only around 1.32v or so while the CPU stuck at 4.1 GHz. But under more lightly threaded work, like some games, my voltages went up to 1.5v when 1-2 cores would sit at 4.3 GHz while the others ran at lower clocks. My suspicion is that this is actually intended behavior when PBO is clocking a couple of cores high, since Ryzen will need those volts to get the clocks that high, but I just wanted to double check.

Next on the agenda is getting my RAM working at a good speed. I ended up buying some DDR4-4000 because it was relatively cheap for Samsung B-Die. Obviously it failed miserably at loading its XMP profiles at both 3600 and 4000; right now I'm running it at 3200 CL14 because that doesn't require me to put more than the XMP-standard 1.35v into the memory. My understanding is that up to 1.5v is fine here and is probably needed to push into the high DRAM frequency range, so I'm looking to see if I can push it up to 3600, since Ryzen DRAM Calculator says I should be able to do that at relatively low timings.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.
My adventures in overclocking on Ryzen continue. I finally nutted up and applied the 1.46v that my memory apparently needs to run at DDR4-3600 speeds according to Ryzen DRAM Calculator, and sure enough it's doing that. From what AMD engineers have said and what I've read, up to 1.5v is apparently fine for daily-driver memory OCs, and it doesn't seem like you can get much past DDR4-3200 on Ryzen without going past the normal 1.35v that XMP profiles call for. The increase in overall performance on this 2700X + DDR4-3600 14-15-15-30 compared to my old 1700X + DDR4-3000 CL16 is pretty dramatic, over a 20% gain in previously CPU-bottlenecked situations. I haven't done a proper full stability test yet, but it did survive Cinebench R15/R20 as well as various game benchmarks. Anything past DDR4-3600 also doesn't seem worth doing; you really need to start loosening the timings and going past the 1.5v AMD recommends as the ceiling.
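
If anyone wants the back-of-the-envelope math on why the RAM swap matters, here's the rough first-word CAS latency calculation. It ignores all the subtimings, which also matter a lot on Ryzen, so treat it as a ballpark, not a full explanation of the gains.

code:
# Rough first-word CAS latency math for the kits mentioned above.
# Ignores subtimings entirely, so it's a ballpark, not the whole story.
def cas_ns(data_rate, cl):
    # memory clock (MHz) is half the DDR data rate; latency = CL cycles / clock
    return cl / (data_rate / 2) * 1000

for rate, cl in [(3000, 16), (3200, 14), (3600, 16), (3600, 14)]:
    print(f"DDR4-{rate} CL{cl}: {cas_ns(rate, cl):.2f} ns")

# DDR4-3000 CL16: 10.67 ns  (the old 1700X kit)
# DDR4-3600 CL14:  7.78 ns  (the new setup, roughly 27% lower)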

In WoW, there is a little ledge I can stand on, looking out to the horizon, that consistently gave me 40 FPS on my old setup; it now does 52 FPS in the same spot. That's exactly the type of performance increase I was looking for to shore up those bad lows.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Heliosicle posted:

I just got a 2600X and noticed a weird thing going on with CoreTemp/the cpu. When idling the temp goes from 45 down to 35, then spikes up to 45 and back down, and just loops. I thought the same thing was happening with the load/clocks in core temp but they don't seem to be related.

This is from afterburner:


I think that's normal with how XFR works on the X-series chips. You're going to have a core that gets boosted, requests voltage for said boost, and increases temps accordingly. XFR can be kinda bouncy on the desktop; my old 1700X and my current 2700X look the same in regards to temps.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

SSJ_naruto_2003 posted:

Hmm, some strange stuff happening with my 3700x. If I put PBO on, and do a stress test, it settles in around 4112 all core. If I leave default boosting mode on, the same test settles in at 4150 all core. It's not thermal related, so I'm not sure what's up.

There are currently issues with BIOSes where boosting isn't working correctly. I'm running into the same issue on my CH7; turning on PBO lowers my performance compared to stock boost. It's supposed to be fixed in the 1.0.0.3ab BIOS updates.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

GRINDCORE MEGGIDO posted:

MSI x570 bios experiences: wish I'd bought asrock.

I've bought Asus the last couple of times I bought motherboards, but they seem to be much slower at releasing BIOS updates compared to the likes of MSI or ASRock. I might go back to ASRock next time around; their Z77 Extreme 4 was the most stable experience I've had with a motherboard, back in the Ivy Bridge days, while my Asus Z170 Maximus VIII Hero and X470 Crosshair VII Hero have both been much more finicky.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

repiv posted:

AMD gave two sweet spots - 3733 for absolute performance (because the IF switches to half-rate beyond that) and 3600 for price/performance tradeoff.



Note that the 3733 RAM speed will require manually overclocking the Infinity Fabric clock (FCLK in the BIOS); by default it will only go up to 1800 MHz, which matches DDR4-3600. It should be an easy overclock to achieve since AMD mentioned it in their promotional material... but at the same time, nobody's Ryzen 3000s are hitting their advertised boost clocks either.
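
For reference, the FCLK you need for a 1:1 ratio is just half the DDR4 transfer rate. Quick sketch of the arithmetic (pure math, it doesn't read anything from the actual BIOS):

code:
# FCLK needed for a 1:1 (coupled) ratio is half the DDR4 transfer rate.
def fclk_for(ddr4_rate):
    return ddr4_rate / 2  # MHz, assuming FCLK:MCLK stays 1:1

for rate in (3600, 3733, 3800):
    print(f"DDR4-{rate} -> FCLK {fclk_for(rate):.0f} MHz")

# DDR4-3600 -> 1800 MHz (the default ceiling mentioned above)
# DDR4-3733 -> ~1866 MHz (needs the manual FCLK bump)
# DDR4-3800 -> 1900 MHz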

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

mediaphage posted:

I like furmark for stressing gpus, does that still get used?

I think both AMD and Nvidia detect FurMark running and intentionally throttle it on newer cards, due to it frying older GPUs back in the day.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Craptacular! posted:

While the 3600 beats the 2700X in FPS in many but not all games, it's usually seeing very high utilization in order to do it. Like sometimes as wild as 80% vs 50%.

I always wonder if there's going to be some sort of "Finewine" moment in the future where it turns out that game developers coded around Intel's eight years of sameness and the 2700X becomes a good "old timer CPU" the way my 3770K aged gracefully.

I'm not a programmer myself, but I think I understand enough to know why we're still waiting for the multi-core revolution to happen in games. Game logic, to my understanding, is not a good candidate for parallel operation. Game logic depends on doing tasks sequentially, based on previous information. Keeping all of this on a single thread is significantly more performant than spreading it out across multiple cores, where you start getting killed by latency penalties for moving work between threads. This is why every game, even stuff in DX12/Vulkan that is generally better threaded, is still usually bottlenecked by a single "master" thread handling game logic, and you can only go as fast as the slowest thread. Amdahl's law is the more technical explanation of this.

So what we probably need for ultimate gaming performance is not just more cores, but a big reduction (or outright elimination) of core-to-core latency, so that you can spread latency-sensitive logic across multiple threads while minimizing the penalties. This is why Intel stays good in gaming performance despite getting killed in everything else now: Intel's CPUs have notably lower latency than Ryzen. It's also why the key to extremely good Ryzen 3000 gaming performance is lowering latency through tight timings on your RAM.
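
Since Amdahl's law got name-dropped, here's the actual formula with some made-up parallel fractions plugged in, just to show why "more cores" hits a wall so fast when part of the frame is serial:

code:
# Amdahl's law: if a fraction p of the work per frame can run in parallel,
# the best-case speedup on n cores is 1 / ((1 - p) + p / n).
# The p values below are invented purely to show the shape of the curve.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.8, 0.95):
    line = ", ".join(f"{n} cores -> {amdahl_speedup(p, n):.2f}x" for n in (4, 8, 16))
    print(f"p={p:.2f}: {line}")

# p=0.50: 4 cores -> 1.60x, 8 cores -> 1.78x, 16 cores -> 1.88x
# p=0.80: 4 cores -> 2.50x, 8 cores -> 3.33x, 16 cores -> 4.00x
# p=0.95: 4 cores -> 3.48x, 8 cores -> 5.93x, 16 cores -> 9.14x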

What more cores actually buy you in day-to-day use is the ability to do a lot more with your PC at the same time. Gaming plus high-quality CPU streaming on a single PC is easy now with CPUs like the 3900X or 3950X. Pro streamers would often run two separate PCs before AMD brought the cores to the desktop and made it trivial to do both at once.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

64bit_Dophins posted:

Honestly I was considering getting a 3900x just so I can open task manager to see all of the logical cores and go "yup - that's a good CPU alright"

I still don't see why this isn't an objectively solid reason for making a buying decision.

This is pretty much all I do with my 3900X now. Totally worth the money, and now I feel inferior having fewer cores idling than people with 3950Xs.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Howard Phillips posted:

Seems like everyone is saying that the 2600 is the real value option. What if the 2600X is only ten dollars more?

I'd pay 10 bucks for PBO; it's really nice on Ryzen 2000 and pretty much totally eliminates the need to tinker with anything OC-wise. You can probably get a 2600 running like a 2600X with some effort, but it's only :10bux:

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

taqueso posted:

Is that really true about crysis? I thought I was just being silly.

The last time I tried the original Crysis, the bottleneck was the CPU. It's an old rear end DX10 game that slams one core to the moon, so you really need the fastest single core you can possibly get. Your GPU will not be working hard in Crysis 1.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

LRADIKAL posted:

You don't think that's more Linus' thing?

Linus would try to watercool the NUC with a chiller 100x the volume of the NUC.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

gradenko_2000 posted:

do any of these reviews go into the performance of the iGPU? The ones I keep seeing all seem to be for laptops that already have a dedicated GPU like a 1650 anyway

Techspot has iGPU benchmarks - https://www.techspot.com/review/2003-amd-ryzen-4000/

It's now better than the cheapest NV discrete chips, the MX GPUs. It's still only about half as fast as a GTX 1650 Max-Q in GPU-bottlenecked situations, but if you play at low settings, which are more CPU-bound, it competes a lot better.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.
I think I finally reached the promised land on Ryzen 3000, hitting DDR4-3800 to max out the speed of the Infinity Fabric before it drops into 2:1 mode. I was stuck on DDR4-3600 CL14 for a long time; none of the timings and voltages from the various Ryzen DRAM Calculator versions would work for me past DDR4-3600 CL14 using their more generic Samsung B-die presets. I thought it was more likely that I just had a CPU that didn't want to do a 1900 FCLK, since plenty of chips can't. Then I saw posts about using Thaiphoon Burner to import your RAM's actual data and decided to try that, which led to Ryzen DRAM Calculator spitting out totally different sets of timings and voltages that, in testing so far, look stable at DDR4-3800 CL16. That lowered my latency a bit more over my old timings, which looks to be the secret to top Ryzen 3000 performance; it responds super well to tight timings that push latency down, for single-core tasks in particular. The AIDA64 latency test got down to 63.4ns, from about 65ns or so previously. I've got more tests to run to ensure full stability, but I'm excited to be pushing basically peak performance out of the CPU.

I also started trying out undervolting, despite AMD saying it should not work at all with how they've tuned things. I did a -0.05v undervolt and the CPU has responded well so far; in Cinebench R20 I'm getting scores in the 7350 range now, since I'm staying below the soft 75C thermal cap where Ryzen 3000 starts its internal downclocking measures. Stock performance is usually in the 6900 range. At this point I'm only losing out to people doing straight all-core overclocks, but that comes at the expense of single-thread performance, which I'm not willing to sacrifice for gaming purposes.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Indiana_Krom posted:

Just gonna chime in here a bit about Far Cry 5; I did some pretty extensive benchmarking at different resolutions and settings to find the performance bottleneck and even with just a GTX 1080 at 1080p/ultra that particular game still hits the CPU limit on a 9900k. The Ubershit DRM is probably responsible for some of that, but basically you have to go past 1440p before you consistently hit the GPU bottleneck there. Literally turning down the details or using an aggressive resolution scale makes barely any difference in that game (the averages max out at about 10 FPS higher, but that is mostly in the peaks because the minimums basically don't budge).

Far Cry 5 is just super single-threaded, which is in particular why it runs better on Intel despite being an AMD-sponsored game. Even at 3440x1440 it's CPU-bottlenecked for me on a 2080 Ti. It's one of the games I use to test memory overclocks on my 3900X, to see how well I'm cutting into that ST lead Intel has. I got my tuned 3900X to beat at least a stock 9900K, but it still loses out to tweaked 9900Ks.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.
So with an impulse decision to upgrade to 32GB of RAM, I got a chance to mess around with Micron Rev. E/E-Die RAM. I'm extremely impressed with it and have actually found it much easier to overclock than my old Samsung B-Die, though I believe that's in part because my old B-Die wasn't the best bin while this Micron looks to be an excellent one. I'm pairing this RAM with a 3900X.

I've got 2x16GB sticks to replace my old 2x8GB sticks, since going to 4 DIMMs on a daisy-chain board was likely to force me to drop RAM speed significantly. I ended up getting the Crucial Ballistix RGB DDR4-3600 2x16GB set for a little under 200 dollars, a few bucks cheaper than what I paid for my old 16GB of B-Die. The RAM booted at XMP settings perfectly, which is not a guarantee on Zen, even on Zen 2 CPUs. The kit is DDR4-3600 16-18-18-36 XMP, but it handled DDR4-3600 14-17-17-17-34 perfectly, and it also looks to be handling DDR4-3800 16-18-18-18-36 well in my testing so far. I'm getting a latency of 66ns in AIDA64, only slightly worse than the B-Die, with my memory speeds coming out even slightly ahead of the old B-Die at 3800 with better timings, likely due to the benefits of dual rank vs single rank. In some game testing the E-Die actually looks to be slightly outperforming the B-Die, with the dual rank pushing it over the edge.

So yeah, gently caress B-Die if you aren't literally aiming for the last percent of performance. I got the B-Die in the first place because I don't think E-Die was out yet, and on the Zen/Zen+ CPUs I was using before getting a 3900X it was pretty much B-Die or bust if you wanted to run anything past 3200 or so without hating yourself. E-Die is half the price and 95% as good. I'm extremely pleased with how easy Micron E-Die was to work with.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.
It doesn't help that AMD's retention bracket is pretty weak compared to Intel's. I had it happen to me once too, I think on my 2700X. CPUs getting stuck to the cooler doesn't seem to be nearly as much of an issue on the Intel side.

Gotta make sure to run some Prime95 on the CPU before you plan to remove the cooler, to loosen the paste up.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

E2M2 posted:

So I just built my wife a super budget machine.

Athlon 3000g
Asus B450M-A/CSM
16gb DDR4 Corsair Vengeance 3000mhz
430W Corsair

Anyway, the question is why can't I set the RAM to DOCP speeds without crashing? It's giving it 1.35v on the DRAM. Straight up, YouTube crashed it. And sometimes it wouldn't even get past POST.

Make sure the RAM is in the "good" slots on your board; usually these are A2 and B2. Most AMD motherboards use a daisy-chain topology, with 2 RAM slots that have good traces to the memory controller and 2 with poorer traces. Old Zen/Zen+ CPUs like yours also have garbage memory controllers, so they really need the RAM in the good slots to POST past JEDEC 2400 speeds. It's theoretically possible you ended up with an APU bad enough that it can't do DDR4-3000, but with modern BIOS updates it should work.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.
Samsung B-Die is not out of production. Samsung apparently just stopped selling B-Die themselves.

B-Die can be had cheaper than ever at this point; companies like Patriot are selling some pretty good bins at relative discount prices. But it still gets soundly beaten on performance per dollar by E-Die, unless you LOVE tweaking subtimings for hours on end.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

CaptainSarcastic posted:

Somebody else may correct me, but I kind of thought they had X570 and B550 planned as the last AM4 chipsets. It's possible they will release more AM4 chipsets (they apparently haven't ruled it out) at some point, but I am not aware of any actual plans to do so.

There's been a rumored X590 chipset. I don't know what it would actually do though.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.
Microcenter having 5900Xs readily available at MSRP got me to upgrade from my 3900X. I was looking for more consistent VR performance, and the 5900X is helping me do stuff like hold a consistent 120 FPS in games like Project Cars 2 and No Man's Sky, which wasn't happening before on the 3900X paired with a 3090. I'm glad to see that PBO actually works on these CPUs, unlike my old 3900X. The 3900X basically never ran at the 4.6 GHz it was supposed to hit as a max boost, but with some quick PBO twiddling I see the 5900X clock as high as 5.025 GHz and do around 4.5 GHz all-core in benchmarks. It also looks like there's less of a gap between the 'good' cores and 'bad' cores, at least with my CPU.

What I'm a bit disappointed with so far is memory; this 5900X doesn't seem to want to take memory clocks the 3900X could handle. I was running tightly tuned CL14 timings on the 3900X with my Crucial Ballistix 2x16GB CL16 kit, but those cause errors right away when testing memory now. It also doesn't want to boot at all at 3800 CL16 with timings I could boot with on the 3900X, and it gave me errors quickly at 3733 CL16 as well. I was using Ryzen DRAM Calculator, since that's helped me get usable clocks in the past, but nothing it's giving me now is working, so I'm just rocking XMP for the moment. I'm still using an X470 Crosshair VII Wi-Fi motherboard, as I didn't see a compelling reason to jump to X570.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

PageMaster posted:

That was or wasn't happening on your old 3900x (the consistent frames in VR)? In the same situation so I'm curious what gains you actually saw; I have a 3900xt and RTX 3080. The 5900x was released before I could build so I'm trying to decide if I should sell the unopened 3900xt and buy bigger.

It's hard to quantify the gains, as I wasn't doing raw benchmarks with reprojection off for the most part. I just noticed that in more demanding VR games like PC2 and NMS, it didn't matter what my GPU settings were; I could not hit a consistent 120 FPS. I checked things out in fpsVR, saw the problem was the CPU, as one would suspect in that case, and decided to make the jump. I feel like the 3900X was probably pretty close to being able to do 120 FPS, probably closer to 120 than 90, but close isn't good enough for VR.

If you have a headset that only does 90Hz, it may not be worth the hassle of selling that CPU and getting a new one; the 3900XT should be able to handle that fine.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Seamonster posted:

Memory OC and tuning on the other hand...

It seems like this is less important on Zen 3 compared to older Zen CPUs, at least. My suspicion would be that the large unified cache means you don't need super fast RAM to cover Zen's latency penalties for cross-CCX communication. But oh boy, did my 2700X ever love having DDR4-3600 CL14 manual timings paired with it.

But man, RAM overclocking is the most tedious poo poo you can do as an enthusiast PC tweaker. gently caress RAM overclocking.
