Shrimp or Shrimps
Feb 14, 2012


Smol posted:

So I have an i5 3750k from a few years back, overclocked to ~3.8GHz (4.2GHz in turbo mode). I recently upgraded my GPU to a GTX 1080 and was wondering if I should do the same to my CPU. But all of the benchmarks tell me that there has been only marginal performance improvements on the CPU front. Having to upgrade my motherboard and RAM as well for a 10-15% performance boost doesn't seem like it would be worth it, no?

(Primary use cases are software development and 1440p gaming)

I mean, I think the difference between Ivy Bridge and Skylake is not the difference between unplayability and playability on any game out right now, and maybe not for 3 more years or so.

I've got a 3570k @ 4.4 and I can't really see myself ditching it at least until what comes after Kaby Lake and min-maxing my FPS counter doesn't bother me much.

There was a graph in the GPU thread about Fallout 4 @ 1440p that showed some fairly significant jumps in average FPS going from 4c/4t to 4c/8t, but it was like 70-->80 fps, which to me is not the difference between playability and unplayability.


Shrimp or Shrimps
Feb 14, 2012


Hey if you're willing to forgo a warranty, silicon lottery dot com is where you can buy delidded, liquid metaled, binned CPUs for a small price bump.

Delidding is pretty super easy now anyway, they have these kits you can buy that basically crack your CPU open like some skull-screw torture device from one of the SAW movies.

You don't even have to LM it, but if you stuck some Kryonaut TIM on there you'd probably get a decent temp drop over the lovely paste stamp thing they use.

Shrimp or Shrimps
Feb 14, 2012


These days delidding is a pretty simple and foolproof procedure anyway, as long as you get a proper kit made for your particular CPU and don't try to eyeball it with a razor blade. Of course, your warranty is toast, but if you know what you're doing with the repaste (like if you want to use LM you need to tape off the PCB etc.) then it's not really a massive deal.

Honestly, the biggest problem of it for me is :effort:

Shrimp or Shrimps
Feb 14, 2012


Rabid Snake posted:

Yup, I can definitely wait one more month for Coffee Lake especially if I can reuse my Z170 mobo.

That will probably depend on a BIOS update. I've got a Gigabyte Z170 and am hoping they update the BIOS to support Coffee Lake.

Hey, here's a question: if I change CPUs, do I need to reinstall Windows? What if I change motherboards (say, to a Z270)?

Question pertains both to weird driver issues, and also Windows activation.

Shrimp or Shrimps
Feb 14, 2012


I used to run my 6700k at 4.6 but after a crash had to reset my bios. Set up the XMP profile, and never bothered getting all my voltages right again or overclocking again.

Honestly haven't even noticed it one bit. It's just a gaming machine, though.

Shrimp or Shrimps
Feb 14, 2012


Keeping CPUs for 5 years can't be that uncommon, right?

I've still got a 3570k I plan on putting back into commission once I can grab a cheap z77 motherboard, and I'm sure it'll be a just-fine gaming machine once I pair it with a 1080 and overclock it a bit.

And I've got another Skylake PC which I built over a year ago, and OC'ed to 4.5, I can't really imagine having to replace it within the next 5 years.

These are strictly gaming machines, though, so I guess that matters since everything these days is a console port.

Shrimp or Shrimps
Feb 14, 2012


Sormus posted:

Posting to say that I'm rocking above 100fps with a 6700k @ 4.5ghz in the latest reskin of DICE shootmans game.

I think DICE might have done something in their last update because while I was averaging like 50fps with a 7700HQ (3.4 all core) / GTX 1060 / 1080p on all low settings, I am now hitting refresh rate (60fps/hz) almost constantly on the same settings + mesh at ultra.

Supposedly mesh at ultra helps with spotting players, but damned if I've noticed.

Shrimp or Shrimps
Feb 14, 2012


jisforjosh posted:

Mesh on low is actually the "try hard" help you see people thing apparently. Mesh affects rendering distance of certain objects but they supposedly changed the way the setting works from BF1->BFV. I think on Low, certain buildable coverage is not rendered but players and vehicle models are.

Thanks! Not that I noticed one way or the other, lol. Combat Pretzel: Is this what old age feels like?

One thing I've done is that I've set up a custom TS profile to throttle my CPU from 3.4 to 2.8 every time I touch 90c. So right now my processor is bouncing between 3.4 and 2.8 all core.

Is this a bad practice? Should I just set it to say run at a constant 3.1ghz (that way, I don't load at 90c but 89 which is kinda my 'limit')?

I'm on a laptop: Gigabyte Aero 15 with a 1060.
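E: the bounce-between-two-clocks behaviour I set up is basically just a threshold check; a rough python sketch of the logic (clock and temp numbers are from my setup, but the hysteresis band is an idea I'm tossing in, not something TS actually does):

```python
# Sketch of throttle-on-threshold clock control, as described above.
# 90c limit and 3.4/2.8 GHz clocks mirror my setup; RESTORE_AT is a
# hypothetical hysteresis band to reduce the bouncing.

HIGH_CLOCK = 3.4   # GHz, all-core
LOW_CLOCK = 2.8
THROTTLE_AT = 90   # degrees C
RESTORE_AT = 85    # don't restore the high clock until we've cooled a bit

def next_clock(temp_c: float, current_clock: float) -> float:
    """Return the clock to apply given the current temperature."""
    if temp_c >= THROTTLE_AT:
        return LOW_CLOCK
    if temp_c <= RESTORE_AT:
        return HIGH_CLOCK
    return current_clock  # inside the band: hold whatever we're at

# Walk a few sample temperatures through the controller:
clock = HIGH_CLOCK
trace = []
for t in (88, 91, 87, 84):
    clock = next_clock(t, clock)
    trace.append(clock)
# trace == [3.4, 2.8, 2.8, 3.4]
```

With the band, the clock only flips back up after the temp has dropped a few degrees, instead of toggling every time it grazes the limit.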

Shrimp or Shrimps
Feb 14, 2012


D. Ebdrup posted:

I was one of the people who bought the Intel X25-M that didn't have the SandForce controller which plagued basically every other ODM on the market?

Me too, and I'm still using it as a games drive. It only holds 1 game (BF4) lol

Shrimp or Shrimps
Feb 14, 2012


The GPU in the A51M is proprietary and not MXM, too, so a lot of the price or possibility of upgrading depends on whether Alienware ever releases an upgrade kit, and if they do it will definitely carry that Alienware tax, likely in excess of the MXM tax.

But for the CPU, OP, unless your workload substantially shifts to multicore workloads, going from an overclocked 9600k to a stock 9900k probably won't ever be worth it for gaming.

Shrimp or Shrimps
Feb 14, 2012


Paul MaudDib posted:

if it is really that easy to access the CPU and GPU, it will at least be easy to change the thermal paste if it starts overheating in a year or two

On most laptops these days it's pretty easy. On the A51M it is, uh, not that easy because of the 'ribcage' or whatever they call it, and the cabling being wired over the top of it. It's actually a little weird to me how difficult they made it to get the whole heatpipe and sink assembly off for a repaste, considering it's absolutely a product aimed at enthusiasts.

The A51M with newer BIOS revisions, however, is going to run cool on the GPU, because the GPU is set to throttle hard at 75c (instead of 82 or whatever) to avoid the VRM issues they were having at launch killing 2080 GPUs, the VRMs being passively cooled.

@kaworu the A51M is definitely a beast like most "DTR" laptops. It's going to reliably max games out at 1080p60fps for a long while. I also definitely understand your particular use case for a DTR, and am looking to possibly get one myself once we hit the 3xxx generation of nVidia GPU or AMD equivalent (lol).

But to be honest, 5 years I don't think is something to bank on. I guess what I'm saying is, 5 years tends to be quite optimistic for a gaming laptop, even one with desktop components. Like, a 5 year old desktop is running, what, a GTX980 and 6700K? For 1080p60 that's mostly fine, but if you're going for high refresh rates or 4K, it's game-dependent.

Even assuming the hardware is relatively performant then, other things can go wrong with laptops (like a hinge breaking or the charger port breaking or whatever) that can meaningfully shorten the life of a laptop if you can't A) fix it yourself once it's out of warranty, or B) if you can, but replacement parts aren't available.

With the new generation of consoles coming out with some very powerful hardware, it's going to raise the bar in PC gaming, too.

Definitely be happy with your purchase and enjoy gaming on it! You are absolutely right that that 9700K is giving more gaming performance than a 9880H mobile 8c/16t part simply due to being allowed to clock up higher and maintain a higher power draw, and of course it completely outclasses a 9750H 6c/12t part.

But to answer your original query, the only reason you would have to upgrade to a 9900K is if your workload shifts primarily to a multithread-benefitting one, like if you started rendering videos every day, doing it quicker was imperative, and it benefited from hyperthreading.

For gaming it's unlikely to ever be a worthwhile upgrade.

Shrimp or Shrimps fucked around with this message at 06:55 on Mar 28, 2020

Shrimp or Shrimps
Feb 14, 2012


What are the specific reasons that some motherboards allow for a higher overclock on the same chip over other motherboards?

I just had to swap out my 6700K from a Gigabyte Z170 Gaming to an ASRock Z270 Fatality and have managed to eke out another 100MHz on my CPU at a lower voltage than before (4.5 @ 1.350 versus 4.6 @ 1.325). For both I'm keeping cache ratio at 42x with a -100mv offset, and RAM is XMP 3000, to which I've had to give a little juice on VCCIO and SA to get stability.

Is it better power? A better bios?

Edit: Come to think of it I also moved from a gold psu to a platinum one (Corsair RM750 to Silverstone SX700-PT). Could that have made a difference?

Shrimp or Shrimps fucked around with this message at 01:45 on Dec 22, 2020

Shrimp or Shrimps
Feb 14, 2012


DrDork posted:

There are a bunch of things that all can work in conjunction together to provide a better overclocking platform:

-Better PCBs (more layers, more copper) allowing for traces to incur less resistance and/or less EMI from other bits of the board.
-More/better VRMs providing stronger, "cleaner" power lines, which can directly impact stability at iffy overclocks.
-BIOS options exposed to let you fine tune things more, or enable overclocking-friendly options/modes. E.g., allow for fine-tune voltage adjustments.

All sorts of other smaller things, too, but I think those are some of the bigger ones.

PSUs can also have an impact, as if the power it's supplying isn't stable (google for "power ripple" if you want more info), that can also play havoc with a chip's ability to hold a high overclock.

Thanks for the explanation!

Shrimp or Shrimps
Feb 14, 2012


I mean, the 10700K already matches or slightly beats the 5800x in averaged gaming performance on most review sites/tubes? It's hardly outlandish to think Intel might retake The Gaming Crown with the 11th gen. What exactly is the argument here, that you need 12 cores for gaming? Your per-thread performance is going to limit you far before your core count does, anyway.

Shrimp or Shrimps
Feb 14, 2012


TPU has it a little faster: https://www.techpowerup.com/review/amd-ryzen-7-5800x/16.html Off the top of my head HBU had it only a little slower than the 5800x, like 5% or something. So I should have been more clear in my previous post: either slightly beating or slightly losing to.

Either way, the original point that wolverine guy made was that the 11700k would be a poo poo purchase for gaming, and he implied that a 5900x would be better because "lol only 8c/16t can only run 3 year old games", when by the time an 11700k can't run a game, the 5900x won't be doing much better.

Shrimp or Shrimps fucked around with this message at 13:02 on Dec 29, 2020

Shrimp or Shrimps
Feb 14, 2012


Not Wolverine posted:

Hold on, what did I say? Lets take a closer look:

Yeah but still what exactly is your point here. That games will become increasingly parallelized because consoles are now 8c/16t parts? Sure.

But then you said that you can't think of a use for the 11700k except for running a 5950x bot. Does that also mean a 5800x has no use for gamers?

Do you think that by the time the per-thread performance of the 11700k cannot run a game at xyz level, that a 5950x will be doing much, if any better?

The performance per-thread is going to limit you in gaming long before core count does. There is going to be no game where a 5800x can't run it, but a 5950x can. Even by the time games are that efficiently parallelized, the 5950x will be an outdated cpu in per-thread performance. That will be your limiting factor.

In some not-perfectly-parallelized apps (but still better than gaming is currently and possibly for a long time), the IPC and clock improvements allowed an 8c/16t part (5800x) to surpass a 12c/24t (3900xt) part from last generation.

Why do you think a 16c/32t part from this generation will somehow outlast an 8c16t part in gaming, of all things? If for gaming strictly the 5800x and 11700k are "turds", then the 5950x is just a more expensive turd.

Like if you wanted to make this about what is surely to be comical power consumption of the 11700k, fine.
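To put rough numbers on why per-thread performance bites before core count does, here's Amdahl's law in a few lines of python (the 60% parallel fraction is a figure I made up purely for illustration; real games vary wildly):

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup over a single core when only `parallel_fraction`
    of the per-frame work can actually be spread across `cores`."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assume (made-up number) a game parallelizes 60% of its frame work:
s8 = amdahl_speedup(0.6, 8)    # ~2.11x over one core
s16 = amdahl_speedup(0.6, 16)  # ~2.29x -- doubling cores buys under 9%
```

So unless games get dramatically more parallel, the serial portion (i.e. per-thread performance) dominates, and 16 cores over 8 moves the needle barely at all.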

Shrimp or Shrimps
Feb 14, 2012


I liked it better when boards didn't shroud the heatsinks so much.

Shrimp or Shrimps
Feb 14, 2012


So, weird question, but does anybody know if the CPU cooler mounting holes on a Z490 board are the same as on a Z270 board? Wondering if you'd be able to swap 2 systems around, with the older one (7700k) going into an MSI Trident pre-built and the newer one (10700k) going into an NR200 case.

Shrimp or Shrimps
Feb 14, 2012


Thanks for the advice, everybody. So the Trident X uses its own cooler design, but the motherboard seems to be an MSI UNIFY ITX board, which you can buy separately, according to Tom's review: https://www.tomshardware.com/reviews/msi-meg-trident-x but I'm having trouble verifying that from a second source past a couple random comments on reddit. E: See edit.

In that case, if the mounting holes haven't changed, I presume the proprietary CPU cooler will probably fit on a Z270 board in terms of actual mounting, but that the clearance from mb heatsinks / ram positioning and such will be the main issue.

Edit: Hmmm, maybe not. Looking at the picture of the ports here from the Tom's review: https://cdn.mos.cms.futurecdn.net/9dKE3PEZVYrJPKgmeBWd2Q-970-80.jpg.webp it does not look like it lines up with the actual board itself here: https://www.amazon.com/MSI-MEG-Z490I-Motherboard-Thunderbolt/dp/B0876H2R85

Looks like it is proprietary in which case it might not be possible to swap.

E2: Yeah, the io is different in this pcmag review, too: https://i.pcmag.com/imagery/reviews/04bOQwneshnbkN7YcktT5mL-4.fit_lim.size_960x.jpg

Looking like a strong no-go. Ah well.

Shrimp or Shrimps fucked around with this message at 05:14 on Feb 10, 2021

Shrimp or Shrimps
Feb 14, 2012


Canna Happy posted:

Ok, so, I was looking at an image of a Trident 3, sorry. It looks like its a z490i, but the rear io is different because the trident uses some sort of internal antenna for wifi/bluetooth. So, you could swap it into an nr200. The only really odd thing about the board is the lack of vrm heatsinks.
https://imgur.com/a/OC68fIo

Thanks, I appreciate the help! So if it's a standard ITX board, then it should fit in the NR200 with the proper mounting screw holes. Not having VRM heatsinks is weird; the cooler design in the Trident X blows air downward onto the motherboard, so I guess they just rely on that. However, swapping it out and using a tower cooler would most likely mean needing to stick heatsinks onto the VRMs. In that case, swapping another ITX board into the Trident case would probably mean having to remove its VRM heatsinks to get it to fit beneath the cooler.

How essential are VRM heatsinks? I'm guessing once you start OCing and unlock the PL limits for like a 10700k, it's guzzling so much power that the sinks become pretty essential. I wonder why MSI didn't sink them with something at least low profile.

Shrimp or Shrimps fucked around with this message at 22:54 on Feb 10, 2021

Shrimp or Shrimps
Feb 14, 2012


SuperTeeJay posted:

The conventional wisdom was not to bother with anything faster than 3200MHz but some more recent benchmarking shows FPS gains at 3600MHz and (to a lesser extent) 4000MHz. I'd go for 3600/C16 or C14 in a new build.

Do RAM speeds tend to matter less the higher resolution you go? 1080p vs 4K, for example. I'm assuming yes, because you move to being GPU bound rather than CPU bound?

Also, what's the general strategy for undervolting / overclocking / increasing efficiency for intel these days? Is cache undervolting recommended? What about downclocking cache to get a better core clock? Or what about overclocking cache? Is undervolting iGPU safe when not using it? Does it even do anything? What about igpu unslice?

VCCIO and system agent, as I understand it, might need voltage bumps when overclocking memory and / or enabling XMP profile on memory. I definitely need to push both a touch for my 6700k/z270 asrock to get my 3200 ram xmp profile stable.
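E: to sketch the GPU-bound intuition from my first question, here's a toy frame-time model in python (every millisecond figure here is made up for illustration; real pipelines overlap CPU and GPU work, so this is only the back-of-envelope version):

```python
def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Toy model: the slower stage sets the frame time. Real renderers
    overlap the two, but max() captures why faster RAM (which mostly
    trims CPU-side time) stops mattering once the GPU is the bottleneck."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# Hypothetical numbers: faster RAM trims CPU frame time from 8 to 7 ms.
# At 1080p the GPU is quick, so the RAM gain shows up in the FPS:
fps_1080_slow = fps(8.0, 6.0)   # 125 fps
fps_1080_fast = fps(7.0, 6.0)   # ~143 fps
# At 4K the GPU takes ~16 ms and completely hides the same improvement:
fps_4k_slow = fps(8.0, 16.0)    # 62.5 fps
fps_4k_fast = fps(7.0, 16.0)    # 62.5 fps -- no change
```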

Shrimp or Shrimps fucked around with this message at 06:51 on Feb 18, 2021

Shrimp or Shrimps
Feb 14, 2012


So, sort of following up to gradenko in a tangential way, and as someone who has absolutely no knowledge about how any of this stuff gets designed or made, how is it that the Intel team can iterate on Comet Lake but achieve somehow worse performance (in gaming)? Like, what's the process for that to happen? How do you try to improve something but end up making it worse (or at the least, not better) despite having worked with similar designs on a similar manufacturing process for however many years it's been?

E: Edited to add "in gaming" as that's what I'm interested in.

Shrimp or Shrimps fucked around with this message at 05:38 on Mar 7, 2021

Shrimp or Shrimps
Feb 14, 2012


^^ Thanks for the write up, much appreciated! I didn't realize it was entirely new, what with the backporting stuff. That makes more sense now.

Shrimp or Shrimps
Feb 14, 2012


Well, the multi changing has been a feature for a long time, and my 6700k does that. Pretty sure Speed Shift in particular was introduced with Skylake? (Before that it was something else that dealt with P-states, I think.) Here's screenshots of a 10700k at normal 'idle' (so web browsing etc.) and at load:




If the frequency stays pegged to max, then a setting has been enabled that prevents downclocking. Perhaps certain motherboards just do that by default if you OC?
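If you want to check whether it's actually downclocking without staring at screenshots, on Linux you can just read the live per-core clocks out of /proc/cpuinfo (Windows folks: Task Manager's speed readout or HWiNFO shows the same thing); quick sketch:

```python
# Read the current per-core frequency from /proc/cpuinfo (Linux only;
# some non-x86 kernels omit the "cpu MHz" field entirely).
def core_mhz() -> list:
    """Return a list of current MHz readings, one per logical CPU."""
    freqs = []
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("cpu MHz"):
                freqs.append(float(line.split(":")[1]))
    return freqs
```

Run it at idle and then under load; if the idle numbers are pegged at your OC frequency, downclocking is disabled.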

Shrimp or Shrimps fucked around with this message at 05:27 on Mar 25, 2021

Shrimp or Shrimps
Feb 14, 2012


Thanks for sharing those. I might give them a try.

Currently, I use Throttlestop profiles to limit my max core boost to base clock when I'm just doing day to day browsing or office stuff in case an app gets greedy. So base clock for the 10700k is 3.8ghz for 1-core through all-core load, and my "gaming" profile is 5.1 1-core through 4.9 all-core.

Whenever the creator of TS finishes adding profiles for the TPL tab as well, you'll be able to control short-long tdp via profiles which is super handy, but more so for mobile devices.

Shrimp or Shrimps
Feb 14, 2012


MSI Dragon Center definitely fucks with my OC settings and Argus Monitor fan profiles, but if I uninstall it, the LEDs turn back on. So now every reboot I have to kill the process after it loads and turns the LEDs off, and then reset my OC profiles in Afterburner and TS.

Dragon center is a buggy piece of poo poo.

Shrimp or Shrimps
Feb 14, 2012


You aren't excited about Rocket Lake?

Shrimp or Shrimps
Feb 14, 2012


Nam Taf posted:

This is a murder. I'm watching a murder on YouTube.

Gamers Nexus Steve gets really energetic and animated when he's about to rip into something. It's very fun to watch.

Shrimp or Shrimps fucked around with this message at 04:23 on Mar 31, 2021

Shrimp or Shrimps
Feb 14, 2012


LRADIKAL posted:

Buy DDR4 now while it's still cheap! They're gonna stop making it soon and the price will go up forever!

Is this genuine advice? Because I want to upgrade my ram but perhaps incorrectly assumed prices would drop once DDR5 was released but yeah if they stop making it entirely then it obviously won't.

E: and would the same apply to PCIe 3 NVMe drives, as PCIe 4 ones are on the market now?

Shrimp or Shrimps
Feb 14, 2012


Thanks for the replies. Actually I just want to move from some crappy 2933 CL21 OEM Samsung D-die stuff, which can't OC to even 3200 without corrupting my OS, to a nice fast 32GB kit, as I'll be keeping this rig (10th gen Intel) for like 5 years at least, assuming nothing breaks. drat, yeah, the prices last fall were much better, but I didn't have this rig back then.

Good point on NVME compatibility, hadn't considered that.

Shrimp or Shrimps fucked around with this message at 02:50 on May 9, 2021

Shrimp or Shrimps
Feb 14, 2012


BIG HEADLINE posted:

Alienware used *this* monstrosity once:



Thankfully the rest of the system was standard ATX. That connector linked up with the PSU.

What was the point of this?

Shrimp or Shrimps
Feb 14, 2012


PSU chat again, but is it a strict rule that the more your PSU spends its lifetime at high load, the quicker it'll degrade? Or conversely, does it extend the lifetime of your PSU if it operates way below its rating? Or does it have a sweet spot for component longevity the way it does for efficiency, such that running your PSU at 65% of its rating (or whatever) is actually beneficial for lifespan?

Shrimp or Shrimps
Feb 14, 2012


On a 6700K myself but it hasn't mattered very much to me as my main use-case is gaming at 4K on the living room television and I'm always GPU limited anyway. Didn't win the silicon lottery though and it only gets up to 4.4 game-stable but would never survive a torture test at that freq. From what I gather, subpar IMC too as I need to increase voltage to SA and VCCIO to get 3200 cl16 XMP stuff stable, and for sure the motherboard is already generous with the juice. But looking at TPU's charts, their gaming summary of the 12700k vs the 10400f (closest CPU on there to the 6700k lol) is just 5% at 4k.

But I am on a Z270 board so wondering if it's worth it to try and score an 8700k on the cheap as an upgrade and then wait another couple years before a full platform update. But I'm not sure if I can even sell the 6700k in my region, and it's doing everything I need it to anyway...

lmao just looked it up and that's no bueno without hardmods, forgot coffee lake went to a new chipset.

Shrimp or Shrimps fucked around with this message at 04:35 on Nov 8, 2021

Shrimp or Shrimps
Feb 14, 2012


Why do the new 12th-gen P+E CPUs downclock the cache/ring so aggressively when E cores are utilized? From what I've been reading, most people seem to be able to get their rings to 4.1~4.3ghz pretty easily in all-core loads.

Steve from HUB suspects in their latest video that this is why some games benefit from e-core disabling, because the cache clocking down is increasing L3 latency. <-- I don't know what that means, but why is it not better to get a 4.1~4.3ghz ring clock in every circumstance, versus 4.5ghz p-core only (when is that going to happen, with the scheduler putting background apps onto the e-cores?) and then 3.6ghz when e-cores are used?

Shrimp or Shrimps
Feb 14, 2012


SSJ_naruto_2003 posted:

I'm still at 150watts

What settings did you land on for your 12600k?

Shrimp or Shrimps
Feb 14, 2012


Think I'm going to upgrade my 6700k to that 12400f. That's a sick deal since afaict it is competitive with the 5600x in gaming.

Shrimp or Shrimps
Feb 14, 2012


From what I read online it seems most people have no problems getting their ring to 4.3 with e cores enabled at 1.35v atoml2 voltage, which also overvolts the cores. But it seems like more than that, let alone 4.7, is definite lottery stuff?

Helped a buddy to set up his 12600k a couple weeks ago, and we got 5.0/4.0 P/E all core and 4.3 on the ring at 1.35v, but it used over 200 watts under cinebench. Dropping back to 4.8/3.8 P/E and 3.8 ring at 1.23v sustained resulted in 130w power draw.

I suggested making that his 24/7 setup but he wanted the ~*sick gainz*~from the first overclock lol.

Shrimp or Shrimps
Feb 14, 2012


BurritoJustice posted:

Someone on the Intel subreddit got an early 12900KS. Good news, it is running at 5.3 all core @ 1.1v. Bad news, it looks like AVX-512 is fully hardware disabled and can't be toggled on even with E-cores disabled

Is that just binning or has something changed? 1.1v for 5.3 all-P-core seems crazy low, and I don't think I've come across posts where people have managed to overclock their 12900Ks to that level at that voltage. If it was just binning, you'd suppose there must be quite a lot of silicon lottery winners out there and it'd come up more frequently.

Shrimp or Shrimps fucked around with this message at 00:32 on Mar 22, 2022

Shrimp or Shrimps
Feb 14, 2012


movax posted:


I thought about going Alder Lake for my recent USFF purchase but 1) I don’t want Windows 11 and 2) still not convinced SW maturity is there to intelligently use the E cores…

You can use Win 10 with Alder, but you may have to manually janitor some apps with Process Lasso. For example, I needed to set 7-Zip to use only the P cores on my 12600k because it was exclusively using the E cores. But I haven't had to do it often and it's kind of a set-it-once thing.
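If you'd rather script it than click around in Process Lasso, the same affinity pinning is a couple of lines (Linux stdlib API shown for the sketch; on Windows psutil's Process.cpu_affinity does the same job, and the "logical CPUs 0-11 are the P-core threads" layout is an assumption for a 12600k, so check yours first):

```python
# Restrict a process to a set of logical CPUs (what Process Lasso's
# affinity rules do). The P-core layout below is an assumption: on a
# hypothetical 12600k, 6 P-cores x 2 threads = logical CPUs 0-11,
# with the 4 E-cores as 12-15. Verify on your own machine.
import os

ASSUMED_P_CORE_CPUS = set(range(12))  # hypothetical 12600k layout

def pin_to_cpus(pid: int, cpus: set) -> list:
    """Restrict `pid` (0 = current process) to `cpus`; return new mask."""
    os.sched_setaffinity(pid, cpus)
    return sorted(os.sched_getaffinity(pid))
```

Usage would be something like `pin_to_cpus(some_pid, ASSUMED_P_CORE_CPUS)` after the app launches.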


Shrimp or Shrimps
Feb 14, 2012


Do we know yet if the raptor lake refresh will be compatible with z690? I'm on a 12600k and it would be rather nice a couple of years from now to just drop a 14700k or whatever into this mobo.
