forbidden dialectics
Jul 26, 2005





My first overclock was when I built my first computer, a Tualatin Celeron 1100A on an 815 motherboard (Soyo SY-TISU), both of which I pulled out of the garbage. Got that sucker running at 1400 MHz. This was around when the 2.53 Northwood P4 was a mid-range part, so I was pretty behind the times, but I was proud since most of the parts were scavenged and the only thing I actually bought was a GeForce4 4200. How did I cool this monstrosity? Well, the case didn't have any of the side panels, so I just stuck a small window fan next to it.


forbidden dialectics
Jul 26, 2005





ATI's number scheme has already come full circle. poo poo, in two more product cycles we'll all be able to buy a Radeon 9800...AGAIN!

forbidden dialectics
Jul 26, 2005





Agreed posted:

Just trying to pin down what the hell any of nVidia's or AMD/ATI's low-end SKUs even means is an exercise in frustration. There's one that has, like, one Shader Module/pack of CUDA cores and the smallest possible number of ROP units which would be worse than HD4000 graphics by a stretch. I think that might be it, but it's hard to tell, because there are several of them with the same name, some of which are rebadged (or re-rebadged, no poo poo) Fermi products at that.

Would not be surprised at all if it was a rebadged 520M which is a rebadged 420M which is a rebadged 320M which is really a 250M which is basically a glorified 8800GT with half the shader units disabled. Not saying this is a fact but that it's even plausible goes to show how loving stupid nVidia's numbering schemes are.

forbidden dialectics
Jul 26, 2005





I'm still rocking my Nehalem i7 860, with Crossfire 5850s. I am super pumped for Haswell. I've been using this current machine, completely unchanged, since August 2009. I don't think technology has ever lasted that long for me. So I've had great luck with Intel's "tocks" and can only hope to continue this pattern. Ticks are for suckers!!

(Or is it Tocks/Ticks? I always think it's the opposite until I look it up.)

forbidden dialectics
Jul 26, 2005





COCKMOUTH.GIF posted:

Wouldn't Skylake fall under Intel's "tick" timeframe? Isn't it better to choose Haswell as that would fall under "tock"? Is Intel just talking out of its giant rear end when it uses this tick/tock poo poo?

I have always done very well buying "tocks". Haswell will be no exception. The mature process on tocks makes a huge difference when overclocking. Not to mention the new microarchitecture.

forbidden dialectics
Jul 26, 2005





Well this seems pretty good:

http://www.maximumpc.com/article/news/intel_says_company_committed_sockets2012

I guess start the FUD on how long "the foreseeable future" is and exactly which "socketed parts in the LGA package" they intend to keep selling.

forbidden dialectics
Jul 26, 2005





Seems like Intel has resolved the IHS gap issue which was leading people to de-lid their Ivy Bridge CPUs, but Haswell still runs hot as gently caress when you push it to the 4.4-4.5 GHz range (seeing high 60s to high 80s depending on chip/cooler). Seems like the days of "cheap, decent air cooler + change 1 number in bios = 4.5 GHz" are over :(.

forbidden dialectics
Jul 26, 2005






Haswell runs about 15C cooler than Ivy at equivalent clock speeds, which is about the margin of improvement people were getting by de-lidding Ivy:

http://www.techpowerup.com/reviews/Intel/Core_i7-4770K_Haswell/9.html

forbidden dialectics
Jul 26, 2005





Shaocaholica posted:

From the depths of the internet and not very scientific.



Supposedly that's on an Antec Kuhler H2O 620 closed-loop cooler (120mm x 1). No idea if it's direct die or what TIM is being used, but you can bet it's top shelf and more likely direct die.

Edit: quick takeaways

1) Haswell is still hotter than delidded IVB
2) Possibly Haswell needs slightly more volts for the same freq
3) Delidding Haswell still nets big thermal gains a la IVB

Now I just need the balls to take a hammer and a block of wood to my new CPU :(.

forbidden dialectics
Jul 26, 2005





Got my 4770K running at 4.3 GHz on 1.15V, stable running Aida64 right now, which is as far as my tired brain is going tonight. Maximum temps are right around 69 C. Looks like delidding is in my future.

forbidden dialectics
Jul 26, 2005





A terrible path led me here...



Although it did reduce my load temps at 4.5 GHz @ 1.3v by almost 20 C.

forbidden dialectics
Jul 26, 2005





Shaocaholica posted:

Nice. So 4.5@1.3v is your max?

Also, the single edged kind of razor might have been a bit easier on your fingers.

Pretty much. It will boot at 4.6 and seems stable in Windows, but Aida64 crashes it after about 5 minutes.

While yes, it would have been, it also gave me extra incentive to be really, really careful.

forbidden dialectics
Jul 26, 2005





Shaocaholica posted:

Dang. I was hoping on having something more than 4.5 but I haven't bought poo poo yet. What are your load temps like?

Maxes out around 68-70 C, with very short peaks to around 80 C (I think Aida64 turns different parts of the core on and off). Average temps were 55 C.

forbidden dialectics
Jul 26, 2005





Shaocaholica posted:

Apparently Prime95 has supported AVX for a while. Not sure what the fuss is about Aida supporting AVX which was brought up a few days ago. Or maybe it was some other new instruction.

edit: Oh, maybe it was AVX2

Aida64 does use AVX2, so that's probably what was spiking my temperatures.

forbidden dialectics
Jul 26, 2005





This is a much easier way to delid and works exactly the same with Haswell:

http://www.overclock.net/t/1376206/how-to-delid-your-ivy-bridge-cpu-with-out-a-razor-blade

The only difference is that, in the picture I posted, you can see the line of small surface mount parts next to the die. Just mount the CPU in the vice in such a way that you're hitting the side of the die without the components on it. That way, if you smack it too hard, the IHS doesn't slam into the tiny little components and knock them off. Also, you know. Just be careful. You are swinging a hammer at a block of wood that's touching a $300 processor. Don't miss!

forbidden dialectics
Jul 26, 2005





Shaocaholica posted:

The problem with the vise method, IMO, is:

1)already mentioned but the new surface mount components on Haswell pose an additional risk
2)not everyone has access to a -good- vise or a block of wood or a hammer. Yeah you can buy all those tools but now you're already way over-cost compared to a razor which IMO is safer even if more tedious. Its not like you're doing lots of these.

1) If you clamp the CPU in the vise correctly (aligned such that the IHS slides toward the BARE side of the PCB when you hit it, not the side with the surface-mount parts), there's no chance of hitting them. That is, you smack the side of the CPU with the surface-mount parts on it, so the IHS slides off over the other side of the die.

2) You don't need a good vise or hammer (I used the cheapest possible poo poo I could find at Harbor Freight), and the risk of cutting a trace or cutting too deep and nicking the die seems greater than the risk of screwing something up with the vise/hammer method. There are YouTube videos where people smack it way too hard and the CPU goes flying across the room, and they end up fine. There are also YouTube videos where people slip with the razor and take a corner off of their dies. Then again, there are also videos of people taking blowtorches to their Sandy Bridges to reheat the solder to remove the IHS, so I dunno. I think I draw the line at fire, personally.

Your jiggahertz is NOT worth this:

https://www.youtube.com/watch?v=P4Hp0xQhJwg

forbidden dialectics fucked around with this message at 06:29 on Jun 9, 2013

forbidden dialectics
Jul 26, 2005





Well considering how butthurt people are over Haswell's disappointing performance increase over Ivy Bridge, I think desktop users can wait for the next "tock" in 2 years.

forbidden dialectics
Jul 26, 2005





After lots of fiddling and reading about the various BIOS settings in broken Chinese/English, I've finally hit the limit of my chip:



1.32V core, 1.3V uncore, with adaptive voltage on. It isn't close to stable at 4.8 GHz even with 1.42V. I'm pretty satisfied, overall.

A couple of tips:

1.) Your uncore will probably max out before the main core. No idea why, but try to find your max stable uncore first, then just leave it while you tweak the core speeds.

2.) Make sure your load-line calibration is set to "level 1" or whatever provides the lowest amount of droop under load.

3.) Test your settings in override mode first, then switch over to adaptive voltage. Testing in adaptive voltage mode will skew your results because it adds .1V when AVX is being used, which will overheat the gently caress out of your processor. Even programs that use AVX normally won't do this; for some reason it only manifests with a stress test like Aida64.

4.) CPU input voltage at 1.9v vs 1.8 gave me an additional 200 MHz. Can't explain it.

5.) At 1.42V the chip was stable but it was throttling due to thermals big time. This is delidded, with a Corsair H100i. Try finding your thermal limit first, and then work around that. Voltage seems to have the greatest effect on thermal output, rather than clock speed.

Also, the screenshot was taken with the bugged version of CPU-Z; on the correct version, the idle voltage is 0.708V :stare:. That is just shockingly low to someone who started engineering school when 3.3V CMOS stuff was new and dangerous.

forbidden dialectics fucked around with this message at 06:30 on Jun 23, 2013

forbidden dialectics
Jul 26, 2005





Agreed posted:

So is that direct-to-die, or just fixing the IHS gap, or what? How are you cooling Haswell at 4.7GHz?

I am sad now, my Sandy Bridge is at 4.7GHz with just a big drat cooler (D14) and I thought "well at least Haswell won't be hitting these clocks apart from the suicide run crazies :saddowns:" but here you've gone and ruined it, just ruined it.

I'd love to know how. :D

Edit: Quick aside, are you recommending high levels of LLC? I've always been a fan of high LLC over high constant voltage since it reduces vdroop under duress, but doesn't bake the chip quite as badly full time. I'd love to see that anecdotally vindicated apart from my own overclocking experience. If you're suggesting the opposite (lowest possible LLC, allow vdroop as intended) then phooey. I feel like vdroop is excellent for standard, non-overclocking folks because it's just part of how the chips work, but overclocking is by definition taking hardware outside of intended parameters, rules go out the window in favor of what gets you to the higher clock within a reasonable voltage and thermal envelope. And having a lot of immediate pulldown when trying to clock up seems like a terrible idea - in a distortion pedal circuit for guitar, doing that emulates tube sag and causes quite a bit of distortion. We don't want that at the "thousands of millions and hundreds of millions of nano-scale transistors" level. So, LLC max, or LLC min?

It's just fixing the IHS gap (bare dies are for seriously crazy people), with perfectly applied NT-H1. I had originally hosed up the application thinking that I knew better than the official instructions on how to apply it, and my temps were almost 10C higher than they are now that I've fixed it. FOLLOW THE INSTRUCTIONS!

It's with a Corsair H100i, so a closed loop water cooler. I took a drill, hammer, and tin snips to my Antec 300 to squeeze the radiator into the front such that the fans are sucking cool air from outside of the case through the radiator.

On ASRock motherboards, for whatever reason, LLC "Level 1" is the highest level, as in, it allows virtually zero vdroop. So, I recommend the most aggressive LLC the motherboard is capable of. With Haswell, even if the processor is completely stable in Windows, when a stress test's 100% utilization kicks in, the system will crash unless you have very aggressive LLC. When you first start the test, the processor will actually instantaneously peak at temperatures over 90C (in my setup), then stabilize in the 70s.

forbidden dialectics
Jul 26, 2005





If you already have a vise, you don't need the delidder. Just jam the heat spreader in the vise, aim a block of wood at the PCB, and whack it with a hammer. The processor will go shooting across the room and PRESTO, you're all done.

forbidden dialectics
Jul 26, 2005





Palladium posted:

man have some sympathy for the small 10M+ sub channel who can't afford to get a real 9700K

No idea how, he's got to be one of the most obnoxious people on youtube. The PewDiePie of hardware reviews.

Insha'allah, but I'm contemplating delidding and running the 9900K bare die. I've been doing this with my 4770K for like 4 years and it's been great. You're already cracking a $500 CPU in a fancy vise, so what's another hour of Dremeling some plastic on the motherboard socket?? Get your calipers and start stacking tiny washers, my dudes.

forbidden dialectics fucked around with this message at 07:45 on Oct 22, 2018

forbidden dialectics
Jul 26, 2005





mewse posted:

Removing the solder looks like a loving nightmare. Steve from gamersnexus was doing it to the bare die with a blade snapped off a box cutter, I kept expecting him to ruin the chip.

Yeah I'm gonna whip out the sandpaper like der8auer, looks like the smallest chance of slicing my fingers off. what is wrong with me

forbidden dialectics
Jul 26, 2005





BIG HEADLINE posted:

And perilously scrape off the sTIM.

I'm just gonna pay the $60 Silicon Lottery charges. On the bright side with direct shipping you don't pay sales tax to send it to them (they're in TX iirc) so it ends up being a wash.

forbidden dialectics
Jul 26, 2005





Kestral posted:

My order on Amazon just went to December 7 :wtc: This really is getting ridiculous.

This is quickly turning into one of those exceptions in tech purchasing where waiting for Zen 2's supposed 25% IPC increase might not just be a good idea, but how it works out anyway.

forbidden dialectics
Jul 26, 2005





Cygni posted:

AT has a good/long article on TDP on the intel side

https://www.anandtech.com/show/13544/why-intel-processors-draw-more-power-than-expected-tdp-turbo

reminder that you should absolutely go in the bios and crank the PL1 and PL2, especially on non-K Kaby Lake or later

I also recommend kicking up the 4d3d3d3.
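
For what it's worth, on Linux the PL1/PL2 limits the quoted post is talking about are visible through the powercap sysfs interface, so you can at least see what your board shipped with before touching the BIOS. A minimal sketch, assuming the intel_rapl driver is loaded and the standard sysfs layout (on Windows you'd check this in the BIOS or Intel XTU instead):

```shell
# Sketch: read PL1/PL2 via the Linux powercap interface.
# Assumes the intel_rapl driver; paths are the standard sysfs layout,
# but verify them on your distro. Prints a message if it's not there.
rapl=/sys/class/powercap/intel-rapl:0
if [ -d "$rapl" ]; then
  for c in 0 1; do
    name=$(cat "$rapl/constraint_${c}_name")           # long_term = PL1, short_term = PL2
    uw=$(cat "$rapl/constraint_${c}_power_limit_uw")   # limit in microwatts
    echo "$name limit: $((uw / 1000000)) W"
  done
else
  echo "intel-rapl powercap interface not available here"
fi
```

Raising the limits through this interface needs root and gets reset by the firmware on some boards, so the BIOS is still the right place to actually crank them.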

forbidden dialectics
Jul 26, 2005





craig588 posted:

AVX Prime will be hotter than any real app, it depends on what type of stability matters to you if it matters or not. If you reboot every day, probably not, if you leave your computer on for weeks at a time, probably.

A decent compromise is disabling AVX in the Prime95 config files (CpuSupportsAVX=0 in local.txt) and running various size FFT tests. Here's a good reference (ignore the version recommendations, use the most recent version and disable AVX as mentioned): https://overclocking.guide/stability-testing-with-prime-95/
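
For reference, that config tweak is just one line in local.txt. A sketch, assuming Prime95 reads local.txt from its install directory (PRIME95_DIR below is a placeholder you'd point at your actual install):

```shell
# Sketch: append the AVX opt-out to Prime95's local.txt.
# PRIME95_DIR is a placeholder; point it at your real install directory.
prime_dir="${PRIME95_DIR:-./prime95}"
mkdir -p "$prime_dir"
printf 'CpuSupportsAVX=0\n' >> "$prime_dir/local.txt"
grep '^CpuSupportsAVX' "$prime_dir/local.txt"
```

Prime95 only reads the file at startup, so restart it after the edit.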

Also, Asus' RealBench uses an AVX load with a much more realistic mixture of AVX instructions vs. the pure AVX torture test that Prime95 runs. This is probably the best one-stop test for stability, as several newer games use some AVX instructions. Nothing uses AVX like Prime95, so it's probably not a great test with AVX enabled, unless your workload is literally just finding Mersenne primes.

AIDA64 and Intel Burn Test are also good for testing your cooling solution. IBT in particular will light your CPU up.

forbidden dialectics
Jul 26, 2005





Mr.PayDay posted:

The 9900K will throttle at 100 degrees anyway and throttle or does IBT override this?

IBT definitely overrides voltages, but unless you're already at chip-killing levels, it won't damage anything in the short term. Most consumer motherboards won't even let you select the kind of voltage that kills chips in the short term (except for a few with physical "LN2" switches that you have to deliberately engage). It doesn't override thermal protection.

Also just want to stress that IBT is a *cooling* test, not a stability test. Overclocks that are actually 24/7 stable in 99.999999999% of workloads can/will fail in IBT.

forbidden dialectics fucked around with this message at 20:56 on Dec 2, 2018

forbidden dialectics
Jul 26, 2005





Well there's no loving way I'm waiting for the EVGA Z390 Dark to come out at this point - it's now "maybe CES" when it was originally "Early November". I just see it becoming "Q1 2019" to "Q2 2019" and then July rolls around and there are 100 available and then it's sold out until September.

Looking at the Gigabyte Aorus Master. Is the BIOS really as bad as people say? If it is, I'll probably just spring for the Maximus XI Extreme. Delidded 9900K with a fully custom loop.

forbidden dialectics
Jul 26, 2005





BIG HEADLINE posted:

All I know at this point is that it's pretty clear EVGA's more interested in giving out DARK boards to select pro overclockers under NDA in the hopes that it'll make the wanna-be hobbyist overclockers pay $5/6/700 for it so they can be the new King poo poo of Who-Gives-A-gently caress Mountain. If Newegg has another one of those Masterpass/Visa Checkout deals and the Maximus XI Extreme isn't selling for the gently caress-you price of $800 from some scalper marketplace seller, I'm going with it and it's "pretty good" VRM. Who knows, maybe ASUS will get the APEX out in the next week or two, but I somehow doubt it.

Seriously. That Luumi guy has like 400 subs on YouTube. None of the big names have one, but this guy no one's heard of does? Weird.

Also Newegg has the XI Extreme in stock right now for $599 with the COD bundle. I'm still deciding (the Aorus Xtreme also looks great) but I'm tempted. Been sitting on everything but the board now for like 2 months....

forbidden dialectics
Jul 26, 2005





Ordered the Gigabyte Aorus Xtreme (just came back in stock at Newegg)...I figured, I've probably spent $500 just on fittings for this build, might as well go balls out for once. Now I need to re-do my network that I just re-did 2 months ago for 10GbE :smith:

forbidden dialectics
Jul 26, 2005





Volguus posted:

What does a motherboard have to do with the network?

It has integrated 10GbE \/:shobon:\/

forbidden dialectics fucked around with this message at 02:32 on Dec 8, 2018

forbidden dialectics
Jul 26, 2005





BIG HEADLINE posted:

If the order hasn't been tendered yet, Newegg's eBay store also has it in stock, and there's a 10% off promo active at the moment (PHLDAYTEN). *Also*, when you look at Warranty/Returns for motherboards on Newegg's regular site, it says motherboards are non-refundable, whereas on their eBay store, you have a 30 day money-back return policy. Final price comes out to $499.98.

It's not the motherboard *I* wanted, either - but given that going the eBay route gives me options to return it if it's a POS, I think I'll take the shot. It *does* worry me that there hasn't been a new BIOS update for it since the "first release" on 9/11/18.

Direct Link: https://www.ebay.com/itm/GIGABYTE-Z390-AORUS-XTREME-LGA-1151-300-Series-Intel-Z390-HDMI-THUNDERBOLT-3-U-/292799889958?hash=item442c3bc226

Dang, it's already in packaging. But, I had a $50 "giftcard" from Newegg's last sale that was only good until the end of December, so it more or less works out the same. I used my Amex card so if it's a turd and they won't refund it, I can always just use purchase protection to get a refund.

From what I've read though, it's pretty much in a different league even compared to the rest of Gigabyte's strong Z390 lineup. BIOSes get updates; poo poo VRMs and trace layouts don't. Not to mention other than the Dark, it's the only Z390 board with an angled 24-pin power connector :swoon:.

forbidden dialectics
Jul 26, 2005





god help me, i have lost my way



Delidded 9900k running bare die/liquid metal under a waterblock

Doing a 24 hr leak test/Mayhems Part 2, but a quick test I did with just regular thermal paste and low mounting pressure showed just under 70C load at 5 GHz in Prime (non-AVX).

forbidden dialectics
Jul 26, 2005





K8.0 posted:

The only problem I see is that nothing there looks nearly janky enough to be properly risky. Please show us a zoomed out shot with the rubber bands that are the only thing you use to hold the water block down.

Would it help to know the waterblock is held down by a 5 year old EK "Naked Ivy" kit with nylon spacers I just sort of eyeballed to make up the difference in z-height?

forbidden dialectics
Jul 26, 2005





rage-saq posted:

Didn’t der8auer make some shims to go on top of the die and even out the contact area? Seems like that would be easier than guessing with washers

Yes - very recently. If this works out I'll probably order one.

All said, I'm not really guessing; I measured with calipers. They were from Harbor Freight, though.

forbidden dialectics
Jul 26, 2005





Well fortunately it looks like I have a golden chip:



This is running Prime95 8K small FFT, just messing with Gigabyte's Windows-based OC tool (EasyTune). All I changed was the core multiplier and LLC. I did manage to run it at 5.3 with LLC set to "Turbo", but this was pushing Vcore to 1.4V, which is a bit high for my tastes (this is where the temps in the 90s in the maximum column come from). I have a lot more fine tuning to do, but I'm pretty thrilled so far.

forbidden dialectics
Jul 26, 2005





Alright, a lot of testing my 9900k over the holiday. Came to two stable scenarios. Which would you choose:

5.2 GHz all cores, 1.33V with droop under load to around 1.29V, maxes out around 75C under the most intense workloads.

or

5.3 GHz all cores, 1.425V with droop to around 1.38, maxes out around 92C under the most intense workloads?

That's a lot of extra voltage for... 100 MHz, which seems pretty silly, and I'm probably whining about a .1-percentile chip anyways. But then again, this whole business is intensely silly.

forbidden dialectics
Jul 26, 2005





Khorne posted:

Set the lower power in your motherboard, use it regularly, and then use OC software for the higher power when you need the extra boost. If you ever do.

That's how I've managed my 3770k and I can still throw >1.4v at it 7 years later. Most of the time I don't need to, but there is an fps game or two I play where I do.

Weirdly enough, after all the testing to find the right Vcore setting (using the motherboard's IR35201 VR_OUT sensor, which according to Buildzoid is the only accurate software voltage reading), I set my motherboard back to "Normal" to try and get adaptive voltage back. After loving around with DVID with zero success, I finally just said gently caress it and adjusted the LLC levels.

Turns out, the motherboard handles the voltage exactly like I want by setting voltage to "Normal" (basically Auto) and LLC to "Standard", with everything else on Auto. I basically could have just done that right out of the box and been done with it. I'm impressed that actually works; it seems like the days of having to manually mess with voltages are over, as are the days of "auto OC" settings sending 1.7V through your chip.

forbidden dialectics
Jul 26, 2005





movax posted:

Was that in iGPU or accompanied CPU die as part of the special-function logic? Losing hardware decode sucks for normal users, but I suppose for a workstation / server, the hardware encoder could be useful for batched headless workloads in a farm.

Quick Sync unfortunately sacrifices file size and quality for encoding speed. It's fantastic for transcoding, say, on a Plex server, where you need real-time performance with minimal power overhead, but if you're doing serious encoding for quality you're going to want to do it on the CPU regardless. NVENC is better than Quick Sync in basically all cases, but still doesn't match the quality/file sizes of using, say, x264. So for most people interested in a 9900K, it's probably not a big loss.

That said, it's still stupid and seems pretty desperate coming from Intel.
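
If you want to compare the three encode paths for yourself, the usual tool is ffmpeg. This sketch just prints one candidate command per encoder; the encoder names (h264_qsv, h264_nvenc, libx264) and quality flags are assumptions that depend on how your ffmpeg was built and what hardware you have, and input.mp4 is a placeholder:

```shell
# Sketch: print an ffmpeg command per encoder discussed above, so you
# can pick and run whichever ones your build/GPU actually supports.
# Flags and file names are placeholders/assumptions, not a recipe.
for enc in "h264_qsv -global_quality 23" \
           "h264_nvenc -cq 23" \
           "libx264 -crf 23 -preset slow"; do
  echo "ffmpeg -i input.mp4 -c:v $enc out_${enc%% *}.mp4"
done
```

Encoding the same clip all three ways and comparing file sizes at matched quality settings is the quickest way to see the Quick Sync/NVENC/x264 gap firsthand.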


forbidden dialectics
Jul 26, 2005





BIG HEADLINE posted:

Z390 DARK is finally available: https://www.evga.com/products/product.aspx?pn=131-CS-E399-KR

As much as I wanted one, though - I'm not a buyer at $500. A board needs to have nice quality-of-life add-ons to be worth that. I *would* have been a buyer at $399-449, though.

Yeah, I don't regret my Aorus Xtreme at ALL after reading through that. Glad I didn't wait.
