Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

What! Well, okay, between 300000 and 500000 times per second, then. Which is still either 300000 or 500000 times faster than software polling. :mad:

Edit: Sccrrrreeeeew yooooooouuuuu

Agreed fucked around with this message at 22:22 on May 3, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I was about to edit it you fucker, I was in the middle of editing my previous post to account for the new info and thought, oh, dear, my math's off, well, I'm editing this post now, I'll do this right after. Surely in that time period Factory Factory won't decide to make me look foolish.

SHOWS WHAT I KNOW gosh

Edit: Instead of software polling, just have Factory Factory keep an eye on your load line calibration setting, he'll spot any voltage discrepancies from nominal thanks to his high polling frequency.

Agreed fucked around with this message at 22:25 on May 3, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory is the worst best OP :qq:

movax posted:

A switching regulator at 350MHz would be :supaburn:

Honest to god, man, it seemed more than just a bit much - I work with discrete parts that are also used in that context, and that's a few orders of magnitude beyond what makes sense, but I just went with what I had always read as the number :shobon:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I call it "thank goodness, we only have to tell people to turn the BCLK back to 100.0 and it pretty much does the rest," unless the people in question are after an edge-case overclock.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Endymion FRS MK1 posted:

For the turbo, it was an OC option called "Load Optimized Turbo", and it auto-set the multiplier at 42. I have no idea what it changes aside from setting all cores' multipliers to 42

Probably just applying a "safe bet" overclock by adjusting the multiplier to the substantial but almost certainly doable 42x value, turning on turbo-by-all-cores, and taking care of the related back-end stuff, like giving it an effectively unrestricted turbo power budget so it isn't throttled at the TDP limit, giving it an effectively unlimited maximum time at turbo, and other stuff that on some boards (ASRock, looking at you) you have to set manually and which can be a hassle.

Turbo is just how these chips are overclocked, so you can think of it as saying "load optimized overclock." I'm guessing that's their "this shouldn't crash" safe-bet settings. The voltage is probably well in excess of what the CPU needs at 4.2GHz, but that's most likely for stability's sake, so you don't have to mess with anything. The only thing you should definitely do is make sure BCLK is set to 100.00; the automatic OC utility probably adjusted it upward by a few increments, and that can introduce system instability, since many other clocks rely on a solid BCLK with Sandy and Ivy Bridge. It's no longer the "sorta FSB" that it was with previous i7 architectures, but you still want it steady at 100.00 for stability.
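If the multiplier/BCLK relationship is fuzzy, here's a rough sketch of the arithmetic - the DRAM ratio and the 103 "drifted" value are just illustrative numbers, not anything read off a specific board:

```python
# Rough sketch of why BCLK matters on Sandy/Ivy Bridge: the core clock (and
# basically every other derived clock) is just a multiple of it.
# The DRAM multiplier of 16 (-> DDR3-1600 at 100 MHz BCLK) is an illustrative value.

def derived_clocks(bclk_mhz, cpu_mult, dram_mult=16):
    core_mhz = bclk_mhz * cpu_mult      # e.g. 100.0 x 42 = 4200 MHz
    dram_mts = bclk_mhz * dram_mult     # DDR3 data rate, e.g. 100.0 x 16 = 1600 MT/s
    return core_mhz, dram_mts

for bclk in (100.0, 103.0):             # 103.0 = "the auto-OC utility nudged it up"
    core, dram = derived_clocks(bclk, 42)
    print(f"BCLK {bclk:5.1f} MHz -> core {core:.0f} MHz, memory DDR3-{dram:.0f}")

# BCLK 100.0 MHz -> core 4200 MHz, memory DDR3-1600
# BCLK 103.0 MHz -> core 4326 MHz, memory DDR3-1648
```

A 3% nudge doesn't sound like much, but because everything hangs off BCLK, you're overclocking the memory and the rest of the platform right along with the cores, and that's where the instability tends to come from.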

The upside of auto utilities is that you click 'em and it's done. The downside is you don't get settings fine-tuned to the particular clock speed, and they generally go higher on voltage because that's almost certainly safe at a relatively low overclock and ensures stability, even if it's more than necessary.

Avoid the extreme performance automatic overclock; it has been known to do ridiculous stuff in the past and could cook a chip if it gets over-aggressive. When you want to get to 4.5, or shoot for an even higher clock, you really need to start doing a lot of that stuff manually. You can still let it handle overriding the Turbo power budget and time limitations, but apart from that, you need to dial it in as precisely as possible to stay on the safe side of the temperature and voltage limits (the latter of which has not yet been precisely established for the chip, though it seems likely that 1.3V should be perfectly safe so long as thermal parameters are within nominal safe values).

Agreed fucked around with this message at 05:16 on May 6, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

dunkman posted:

That was my first dial-in. I just let it run and it was hitting around 76ish at some peaks, but mostly around 71/2/3/4.

So I should dial back on the voltage a bit? Any recommended value? I really am just stabbing in the dark on this one.

72.5ºC for full-time loads is what's published as safe for the chip. We're not really sure about voltage yet; we're just waiting for some people to push them hard enough to cook them, or start experiencing early failure, to establish those parameters. Two new lithographic technologies at once - all we know for sure at this point is the temperature limit.

That said, for 42x, you're most likely going way overkill on the voltage. Two ways to go: either lower the voltage until it gets unstable and go one step back up, or raise the multiplier until it gets unstable and go one step back down from there.

Thing is, as clock increases, the voltage will cause more heat as well, and you're already pushing safe temperatures. Part of it is your cooler - it's just not a very efficiently designed unit compared to modern single- or dual-tower coolers with 4+ heat pipes, which can wick heat away much more quickly, with less noise and better performance per unit of radiator fin area.
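If it helps to see why "a bit more voltage for a bit more clock" snowballs on temperature: dynamic power goes roughly with frequency times voltage squared. A back-of-the-envelope sketch with made-up baseline numbers (the 95W at 3.4GHz / 1.10V reference is an assumption for illustration, and leakage - which Ivy has plenty of - only makes it worse):

```python
# Back-of-the-envelope CPU power scaling: P ~ C * V^2 * f (dynamic power only).
# The 95 W / 1.10 V / 3.4 GHz baseline is an assumed reference point, not a spec.

def scaled_power(base_w, base_v, base_ghz, v, ghz):
    return base_w * (v / base_v) ** 2 * (ghz / base_ghz)

BASE = (95.0, 1.10, 3.4)
for v, ghz in [(1.10, 3.4), (1.20, 4.2), (1.30, 4.5)]:
    print(f"{ghz:.1f} GHz @ {v:.2f} V -> ~{scaled_power(*BASE, v, ghz):.0f} W")

# 3.4 GHz @ 1.10 V -> ~95 W
# 4.2 GHz @ 1.20 V -> ~140 W   (clock and voltage increases compound)
# 4.5 GHz @ 1.30 V -> ~176 W
```

Which is also why dropping voltage at a fixed multiplier buys back so much thermal headroom.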

Either find a way to lower temps so you can push up the multiplier and get a really profound overclock out of it, or stay where the multiplier is now and lower voltage 'til you're at the least necessary for ironclad stability but with significantly lower temperatures.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

dunkman posted:

I dropped it to 1.15v and now it runs OCCT Linpack at ~67 degrees.

Yeah, Ivy Bridge is pretty leaky, and temperature seems to climb rapidly as clock and voltage go up, even at comparatively low clocks.

I predict more people going for "golden chips" this time around; with Sandy Bridge, even a bog standard one could be reasonably expected to hit 44x-45x with a Hyper 212+ or Evo at a reasonable voltage. Decent chips started hitting that heat and voltage wall at around 46x. My 2600K (pre-2700K) will do 47x at 1.38V, the amended maximum safe voltage for Sandy Bridge, but it runs hot doing so - I've taken extreme measures, it has a Noctua NH-D14 with an optional third fan bolted onto it. As a result I could probably feed it 1.4V and shoot for 48x, or go for broke and open it up to 1.425V-1.45V and shoot for 50x, but the truth is that performance gains are so minimal past 45x anyway that it's basically e-peen to go for those really high clocks.

With Ivy Bridge, the voltage you were running at is, as FF noted, closer to what people running 44x-45x seem to be using (and may end up being a de facto "safe voltage" if Intel doesn't offer clarification, since the VID value is surprisingly high - like, Sandy Bridge high - and likely can't be trusted). Heat is going to be the primary limiting factor for the majority of Ivy Bridge overclocks, I'd guess, and an H60 would have to be using REALLY loud fans to move enough air over the radiator to cut the temperatures down enough for a higher overclock.

The current top-end closed loop pre-packaged liquid coolers do offer good performance; it's just poor performance if you take radiator space, CFM required, and noise into consideration. Hell, the pump on most closed loops is at least as loud as, and usually louder than, my three-fan NH-D14. But then that thing weighs like three pounds and looks like it was yanked out of a space shuttle cooling system - it virtually dwarfs the motherboard.

Custom liquid cooling has the advantage that the radiator can be moved to an arbitrary location and made an arbitrary size, so you don't necessarily have to use really powerful Delta fans or whatever just to get competitive cooling. Give it enough surface area and it will passively radiate more heat than any high-end air cooler made. But in terms of efficiency per unit of surface area, airflow, and noise, liquid loops are poor compared to modern heat pipe coolers.

I'd like to see vapor chamber cooling applications soon, it seems like a natural progression, especially given that it's already shown really impressive results on graphics cards.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen posted:

The best GPU cooling setups are still heat pipe, though. Vapor chamber just gets along real well with the default "blower exhaust all heat out of the computer because we don't trust the end user to have any airflow" design.

To the best of my knowledge that's not really true - what is true is that the best aftermarket and non-reference cooling still uses 8mm heat pipes, not because they're superior to vapor chamber cooling but because they are still the most convenient for targeted wicking of heat from location A to location B for removal.

Vapor chambers still allow more efficient cooling relative to surface area and height, but adapting them to a non-blower cooler would require some creative thinking. Hence pretty much every aftermarket/non-reference cooling design ranges from "juuuuust barely 2-slot," where it's questionable whether you could actually run them adjacently, to straight up tossing the idea of 2-slot out the window and taking a full 3.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Glen Goobersmooches posted:

I've been running my new 2550K @ 4.8GHz (it was the same price as the 2500K from my favorite vendor and I don't encode a drat thing ever okay!) nicely stable at 1.40 vcore for a while now on an Asus P8Z68-V Pro/Gen3 (never seen it exceed 68c in any core during BF3 in full swing). After reading the 1.38 vcore safety limit, I'm going to try to drop this to 1.30-1.35 and work up. Really, what are the hazards of keeping the 1.40v? Are we talking degrading performance or stability within months? I didn't think it was an issue but I suppose it's a designated upper limit for a reason, huh.

What would be a good offset config if I want to max at around 1.35 vcore? This option is nice but it's not really intuitive to mess around with.

I was going to say "if it's stable and doesn't get hot, don't worry about it" but then I saw that your temperature reference was Battlefield 3. Which, while a demanding game, is not really representative of a torture test load. What's it look like under IBT in admin mode with 4 threads, Very High stress, 5 iterations? (cancel the test if it starts topping 80ºC)

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

No offense intended, but 100% of the info you need is in the OP series. It kinda sounds like you just "overclocked" it (scare-quotes intended). The purpose isn't really to test for PERFECT STABILITY(ilityilityility), since as someone smart mentioned, without ECC RAM, eventually even a perfectly operationally stable system will kick an error on tests which leave no margin for error because some stray radiation flips a bit and the calculation comes back wrong.

It's more for testing for operational stability. A few IBT runs will tell you if your CPU and memory are doing what they're supposed to. If they're not, bad things happen. That may sound nebulous, but consider that every single damned thing that happens in your computer relies on your processor, all its integrated poo poo, and your RAM working as perfectly as can be expected. If your computation stuff decides to wig out while doing I/O operations, it can ruin files currently being written to, for example.

Prime95 is the golden test because it's the most comprehensive measure of how your system will behave under heavy, but normal, computational loads involving both processor and RAM. It has your computer do a bunch of fancy poo poo to calculate a bunch of fancy versions of pi. That's seriously all it does. But, it does so in enough fancy ways that it gives your system a really solid workout. After each calculation, it compares the result to its table of known accurate results. If they all match, rad! Stable! If they all do not match, booooo, it'll stop that worker thread right there (or lock up your computer, or bluescreen, depending on the severity of the instability). Run Prime95 with Admin permissions for accuracy.
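If the "calculate and compare against known results" idea sounds abstract, here's a toy sketch of the same concept - nothing like Prime95's actual FFT-based workloads, just the verify-every-iteration loop, using a deliberately dumb pi series:

```python
# Toy version of the verify-every-iteration loop that stability testers rely on.
# Conceptual sketch only; Prime95 uses large FFT multiplications, not a pi series.
from math import pi

def leibniz_pi(terms):
    # Simple, repeatable workload: partial sum of the Leibniz series for pi.
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

def stability_check(iterations=50, terms=200_000):
    bound = 4 / (2 * terms + 1)  # known error bound for this partial sum
    for i in range(iterations):
        result = leibniz_pi(terms)
        # On stable hardware this comparison never fails; a miscalculation under
        # load shows up as a result outside the known bound, and the worker stops.
        if abs(result - pi) > bound:
            return f"Worker stopped: bad result on iteration {i} ({result!r})"
    return "All iterations matched the expected value."

if __name__ == "__main__":
    print(stability_check())
```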

As stated earlier, if you just let it stress test forever, even a stable system will eventually get an anomalous failure that's unrelated to operational stability, because the parts involved in calculations and storing data temporarily are small enough that they can be affected by little sunspot emissions and stuff. 12 hours is a pretty safe test, 24 if you're anal. In my experience, an unstable system - one which will demonstrate issues of some kind at some point - will usually fail in some way within the three-to-five-hour mark. That was the case for a few Wolfdale/Yorkfield chips (the last Core 2 processors), and it's the case for my Sandy Bridge 2600K system too. Getting to 12+ hour stability is my "aaaaand done" point in an overclock. Leave it on overnight or while you're away, once you've ascertained safe temperatures (no greater than 72.5ºC steady temps in Prime95).

Speaking of temps and your high voltage and clock, watch for heat during IBT. Linpack, the stress testing utility that IBT acts as a handy front-end for, is, well, stressful. Really stressful. If you're too overclocked, it will push your processor well past the safe point for extended thermal operation. Nobody recommends prolonged IBT testing. 10 Standard runs, 5 Very High runs, 2 Maximum runs are my benchmarks there. Maximum runs are great for a quick assessment as to whether you might have memory-related instability. If you can pass 10 standards but your system hard locks on a maximum stress test, something is going wrong in the general communication between memory and processor. Solving that may require upping RAM voltage slightly, especially if you've got all of your board's DIMM sockets populated. If you are still unstable, bump VCCIO (the integrated memory controller voltage) up by one or two of your motherboard's increments, max. As memory control is integrated into the chip, you can expect these actions to raise your CPU temperature.
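To boil that troubleshooting order down to something skimmable (this is purely a restatement of the steps above - the right increment sizes depend entirely on your board):

```python
# Condensed decision order for IBT results, as described above. Summary only;
# actual voltage increments are board-specific.

def next_step(passes_standard: bool, passes_maximum: bool) -> str:
    if passes_standard and passes_maximum:
        return "Looks solid - move on to a long Prime95 run."
    if passes_standard and not passes_maximum:
        return ("Likely memory/IMC trouble: raise DRAM voltage slightly first "
                "(more likely needed with all DIMM slots populated); if still "
                "unstable, bump VCCIO by one or two board increments, max. "
                "Expect CPU temps to rise a bit, since the memory controller is on-die.")
    return "Failing Standard runs too: back the CPU overclock itself down first."

print(next_step(passes_standard=True, passes_maximum=False))
```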

If you suspect memory is causing you issues, run Memtest86+ for at least one full pass, preferably two or more; it's at least as good as Windows Memory Diagnostic, and in my experience better at detecting unusual memory failures.

BUT WHY DOES ALL THIS MATTER? I WAS PLAYING BF3 JUST FINE!

Because you might be doing cumulative damage to your computer's hardware, which will result in early component degradation, and you risk quietly degrading data integrity as well.

All that said, despite being a bit outside Intel's 24/7 recommendation, 1.4V is not an abnormally high voltage to reach 48x with a chip which can run at that multiplier to begin with. That's a pretty select group by that point, too. Whether it's something you'll be able to safely do remains to be seen; it usually requires exceptional cooling and a good setup. That board features Asus' 12-phase Sandy Bridge VRM design, so it won't get in your way if you've ensured that all phases are enabled and thermally controlled (t-probe, rather than maximum current at all times regardless of temperature).

Edit: I should note that sometimes games and specialized processing can poke holes in stability tests, too - a system can be arbitrarily Prime95 stable and pass IBT flawlessly but still have instability if it's extremely marginal and a game or other application calls on it to do something that neither stability stress test would normally do. The third and final phase of "testing" is referred to as "just using your computer as normal, and noting if anything seems off or unstable." :)

Agreed fucked around with this message at 03:37 on May 14, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Anyone shooting for a 47x+ overclock at higher voltage really ought to be ensuring the load is balanced across the VRM phases - for "normal" overclocks, Optimized is probably fine, but if your OC is legitimately rather extreme, use Extreme. Do always use t-probe, though, unless you want to cook parts.

AI Suite is the god-damned devil, no one should use it. Bloatware, gets in the way of overclocking. Keep the temperature monitoring part, if you want to, but they're pretty bad about keeping the BIOS and the software able to communicate accurately, and absurd temperature readings are pretty commonplace in AIsuite.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I'm going to have to remain skeptical of its functionality on the basis that they don't keep the software and BIOS updates in sync. If they support it, great, but as you note their temperature monitoring is all over the place (I had one BIOS where the extra special sensors on my Sabertooth P67 would show temps correctly in the then-current AIsuite; ever since, some read negative and others read as melting). Perhaps it's a difference of focus, perhaps the BIOS updates don't affect the TPU/EPU software integration. But even you had issues with it previously - permissions problems - and given that the Ivy Bridge chipset and motherboards are still pretty new and will surely see plenty of BIOS updates, I just don't think it should be relied on to accomplish what is easily and safely done from within the UEFI.

I'd love for them to prove me wrong in the end, it's not like I wouldn't appreciate a utility which allows robust overclocking and configuration within the operating environment.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

KillHour posted:

Noctua NH-D14

make sure your PSU has long enough cables.

CPU 8 pin wasn't long enough to route behind the motherboard tray.

Man, an extender for that ought to be packaged with PSUs. I always have to do some goofy poo poo like run it alongside the rear fan and tighten it to the fan with a zip-tie or something like that. And in this case, as a fellow NH-D14 (:love:) owner who did not plug in the 8-pin 'til the mobo and PSU were installed, let me tell you, plugging that barely-long-enough fucker in... was an unpleasant experience.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen posted:

See with the Archon your hand has plenty of room to grab the plug, and with the 650D and a seasonic based PSU there is plenty of room for the 8-pin to be pulled behind the motherboard tray :smugdog:

Man, all I have to say to you is you can go... ... :qq:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I will literally beat you up, each of you, who wants to fight

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

movax posted:

The guy with the largest hunk of metal cooler is probably going to win.

1.8 kilograms or almost four loving pounds.

I get such an irrational boner for that thing, even though its performance is extremely disappointing compared to boring old regular aluminum fins on the same model, and you probably ought to build some scaffolding to support the installation. It's just... so... beautiful :allears:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I go to Noctua for my 120mm/140mm fans because they apparently last the computer equivalent of forever, are incredibly quiet, and have great airflow for their noise level. Plus they're readily available.

Still using Corsair 200mm fans, and with three of them in the case, replacing them would represent a non-trivial pain in the rear end (and expense), so they're probably going to continue being used. Good enough airflow for the noise - they're really, really quiet, and they move air well. My components stay cool.

I'll probably cut down on noise and give up on CUDA by picking up a nice, non-blower model of the GTX 680 in a month or two. Yeah, it's a big price premium for a ~10% performance increase, but 1. I won't be suggesting anyone else do it, and 2. I have to wonder about power delivery and overclocking capability given the remarkably shorter board on the 670.

LorneReams posted:

Does heat-sink design lead to larger cases, or do case changes lead to larger heat-sinks?

Not sure if this is a serious question, but I believe the interior dimensions would be part of the ATX specifications, and heat sink manufacturers take those into consideration (and so do case manufacturers). Cases with special features, motherboards with certain features, stupid tall peacock-feather RAM heatspreaders, and some heat sinks are incompatible (any mixture of those can present an incompatibility). Specs are revised, and so are designs. But the chicken probably comes first. We're lucky Intel didn't get their way when revising ATX, we'd have some screwed up layouts to work with internally...

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

gently caress it, why not? Warranty isn't transferable anyway, and you go back far enough that applying thermal paste to a CPU directly isn't scary, right? :D Go for it, make sure you use something that isn't going to desiccate in two years, and report your results.

I'm going to guess there's three really important margins for error here, though...

1. Application of the good TIM to the CPU itself. Here's one where I'd say classic style thin layer, get the plastic out and get ready to spread!

2. Reattachment of the heatspreader to the silicon. You figure that one out, man, 'cause I dunno. Screw it up and you could end up with a worse situation than now, or bolting a 70+lb/inch^2 cooler directly to a processor. Seems dangerous, get the heatspreader back on right so it doesn't screw up step 1...

3. Installing the cooler so that it doesn't screw up number 2. 'Cause then you're back to step 1 again.

Sounds fun! But you could grab a bunch of overclocking headroom if you do it right, so good luck and godspeed.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

coffeetable posted:

So is it conclusive that the thermal paste is the issue? I've seen articles claiming a 20% reduction by switching up for aftermarket paste, but I've also seen articles claiming no change at all.

Of those two tests, I think the one using Intel's engineered thermal solution but replacing their marginal application of questionable TIM is a much more effective test, since coolers are designed with the heatspreader in mind, and there's a huge margin for error in applying a modern heat sink - designed to seat firmly against the integrated heat spreader - directly to the processor itself.

The more interesting thing is the reports of pretty widely varying temperatures at a given voltage, which correlates more strongly with the hypothesis that iffy internal TIM application in the manufacturing process could be to blame for some chips performing better and others worse (and in my opinion, some screenshots I've seen definitely show that, with the badly applied TIM acting as an insulator rather than a proper filling-in-microscopic-gaps conductor of heat). While the process and transistor changes are both big deals and can't be discounted, it is very odd to go from chips that are more often voltage-walled (Sandy Bridge) to chips that are more often temperature-walled, especially given that the power savings and superior thermal characteristics were supposed to be some of the more profound improvements in the design, not dramatic clock-for-clock improvements. I'd speculate further that Intel wouldn't have abandoned the blatantly superior thermal conductivity of solder for a gooped TIM application if the chips themselves weren't capable of running cooler than Sandy Bridge in the first place.

Perhaps a cost savings due to a lack of competition, and one which affects primarily the smaller enthusiast market, since the thermal performance of their chips is still dandy in stock configurations for normal usage.

Getting back to my problem with the article which tests the naked CPU and cooler and concludes it's the chip, not the TIM, I'd especially question their choice of cooler for this test. The mounting pressure required to secure a Noctua NH-D14 is immense. I have one, it's over three pounds. It features a full contact block engineered to make flush contact with the heatspreader, which also serves as strain relief for the profound mounting pressure exerted by the forceful mounting system. It's kind of neat that taking the IHS element out of the equation at least doesn't hurt the NH-D14's performance as measured there, but it is also seriously questionable what performance gains could be expected. That's not the engineering scenario decided upon by either Intel or Noctua. It is outside of design parameters, so what is actually being tested is pretty much up in the air.

More and better data might come along, but for the time being I am very skeptical of tests which go so far outside the design parameters of both CPU and heat sink. Replacing one variable while staying within the design parameters of both devices, doing it well, and getting better results speaks more to me than an effectively entirely different test.

Agreed fucked around with this message at 23:33 on May 20, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I'm not entirely sure that Asus' motherboards are accurately communicating with the thermal sensors beyond the package level at the moment for Ivy Bridge. I know it took a few BIOS revisions for the main line, the Sabertooth, and the ROG boards alike to get accurate temperature readings in most software (including generally trustworthy ones like RealTemp or HWiNFO64). The first few BIOSes had about a 20ºC error that sometimes led people into thinking they were running really, really hot.

That doesn't mean the Ivy Bridge heat issues aren't real, or that it's just a mis-read of the temps rather than something in the manufacturing process (in my opinion, which isn't set in stone but which does seem pretty factual at this point - still watching for more evidence one way or another, leaning "iffy IHS TIM applications" over "Ivy Bridge just runs hot inherently!").

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

craig588 posted:

I have a Noctua D14 in the mail that accidentally got sent to the wrong sorting facility and my old computer finally died so I have the option of using the stock 2500K heatsink with my new computer just to get it running for a week, or not having a desktop computer. I still have some Arctic Silver 5 leftover from the last time I built PCs, should I clean off the stuff Intel ships with and use that, or has their stock solution improved to where it'll be a marginal difference? I'm not planning on overclocking much with the stock heatsink, but if I can get the 3.7ghz turbo frequency to be the baseline for all cores and not just single threaded applications I'd be happy.

Are you doing anything in the meantime that actually requires greater-than-stock performance and is going to be just awful without it? I mean, not to discourage you, this is the overclocking thread and we generally don't think like that - take it to the moon when the NH-D14 arrives!... But in the meantime, you know, even non-K 2500s are really fast processors, and unless there's a good reason ("tweaking is fun" is or is not a good reason mainly depending on your outlook, I suppose) I'd personally just stick the Intel heatsink on, since its TIM is already applied fine for the engineered application, and then not dick with it too much; it's a lot easier to clean off the Intel TIM than AS5 in my experience.

Edit: Also, you can probably set it to turbo-by-all-cores safely with the Intel stock cooler, since they're actually really conservative with their temperature... everything, recommendations. Remember, these processors and heat sinks are intended to keep working under extraordinarily laborious conditions, e.g. some dude who knows not very much about tech but was upsold by an enthusiastic customer service guy to a 2500K at Microcenter, and never bothered to overclock it, and sticks it in a mostly unventilated area that would kill an Xbox 360 dead...

Agreed fucked around with this message at 23:21 on May 24, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

It's actually not a very good idea to keep using IBT once you've got past the initial stability testing. "Burning in" components isn't a concept that makes sense. They're transistors. Tiny little ones. They burn out, not burn in. CPUs become less stable over time when over-volted significantly (you're plenty in the clear for that, by the way, so don't fret, you will most likely have the same life expectancy for your 4.2GHz overclocked 2500K as someone who bought a 2500 non-K).

Switch to Prime95 for stability testing after you've passed the initial stress tests of IBT (for me, I consider that 10 runs of Very High, 2 runs of Maximum - that works your processor, the integrated memory controller, your RAM, and all the power delivery involved and is a pretty good quick stability check). Prime95 will run at safer temperatures and do complex workloads, but safe ones, that aren't intended to maximize the stress on your CPU but rather to sort of relentlessly check it over and over and over again until it comes back with a flawed calculation. It does so in a manner that is much more like ordinary calculations, and if you're at a safe voltage and have good heat removal, it should under no circumstances damage your CPU.

IBT can damage your CPU. Those tiny parts rely on other tiny parts (called 'bumps') to supply power or current, or to transfer information... and everything has a point at which you've overworked it and it's in danger of failing. IBT is for initial, quick evaluation; prolonged usage is a poor idea. I think this is something the OCCT guys really miss the mark on with their long-term Linpack-based testing. You don't get anything more useful from trying to cook your components; stability can be assessed much more safely, without long-term degradation symptoms showing up early because you over-stressed the hundreds of millions of 32nm transistors (in Sandy Bridge, anyway). People running extraordinary cooling and voltages past 1.4V who use excessive hardcore stress testing to "prove" their system are probably going to get to "enjoy" electromigration a hell of a lot earlier than the rest of us.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Lowclock posted:

Can modern motherboards deal with clock and voltage throttling properly when overclocked, or are we still stuck with space heaters at idle?

They downclock extremely gracefully. At idle, Sandy Bridge drops to 1600MHz at or under 1.0V and gates power to unused cores. Ivy Bridge improves on that.


Edit: FF, I thought Piledriver was supposed to improve power gating? Though I guess that really doesn't matter at this point...

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

HT carries an almost universal 20W cost. It just does. And in my experience a weirdly out-of-proportion temperature cost as well. Great for anything that eats threads like candy, though it's maybe questionable whether it's "200MHz/core" better or not. Might kinda even out, and that'd be better for not-so-multithreaded tasks, obviously.

If you're not going to use it, but you are going to use the extra 200MHz, then go for it, hoss. One of those quirks of overclocking, I guess, that whatever it is that does the hyperthreading thing in the physical processor is unstable at voltages the cores themselves can handle.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

20W cost in the ideal case when your application multithreads well and can fill unused execution resources at 100% load. Essentially, it lets certain real-world workloads light up a processor almost like IntelBurnTest/Linpack, i.e. everything that can be active is.

And in such ideal cases, a 20% HT boost outweighs a 4.7% clock rate boost.

Ah, no, I think that's all the time. Whether it's being used efficiently is iffy and relies on good multithreading, for sure, but check this out:

http://www.kitguru.net/components/cpu/zardon/power-consumption-fx-8150-v-i5-2500k-v-i7-2600k/

Edit: Although, maybe I'm misinterpreting that - I'll always allow for the possibility of error on my part :v: - but I'd think the power consumption delta would be much more dramatic in higher clock scenarios, and it isn't. A 2500K and a 2600K/2700K stay a roughly constant distance apart in wattage (check other benches, I did a while back). So I'm not sure if (a potential) 20% HT boost and (pretty much always) 20W HT cost are the same thing. I'd think performance would scale with power costs.
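For what it's worth, here's the rough perf-per-watt arithmetic if the cost really is a flat ~20W on a ~95W part with FF's 20% best-case gain - all of these numbers are illustrative assumptions, not measurements:

```python
# Rough perf-per-watt arithmetic for the HT discussion. The 95 W baseline,
# flat 20 W HT cost, and 20% best-case speedup are illustrative assumptions.

base_power_w = 95.0
ht_power_w = base_power_w + 20.0   # flat cost, whether or not the threads get used

for name, speedup in [("ideal multithreaded load", 1.20), ("thread-starved load", 1.00)]:
    perf_per_watt_ratio = (speedup / ht_power_w) / (1.0 / base_power_w)
    print(f"{name}: {speedup:.2f}x perf at {ht_power_w / base_power_w:.0%} power "
          f"-> perf/W x{perf_per_watt_ratio:.2f}")

# ideal multithreaded load: 1.20x perf at 121% power -> perf/W x0.99  (roughly a wash)
# thread-starved load:      1.00x perf at 121% power -> perf/W x0.83  (a flat tax)
```

So a genuinely flat cost would make HT at best roughly perf-per-watt neutral and a straight loss when the extra threads go unused, which would fit that roughly constant wattage gap.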

Agreed fucked around with this message at 13:34 on Jun 4, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I did a lot more poking around last year when I was first putting it all together and saw similar results - it surprised me, because the whole point of Hyperthreading is that, like you say, it should let multithreaded apps take full advantage of the chip, so you'd expect power requirements to scale with performance, but that doesn't seem to be the case (hell, I can confirm that, at least roughly, I flirted with turning HT off for a similar reason and decided it was a bad idea since I actually do need the multithreading efficacy more than I need an extra multiplier tick or two).

Wish I'd have bookmarked other sources - that was one of the first that came up on Google and it's pretty crap - but if you poke around for power requirements for high end processors from that time period you'll find a poo poo-ton of Bulldozer vs. 2700K/2600K/2500K power draw comparisons, and it's nearly always within a 20W-30W total power draw delta, which is not commensurate with 20% greater performance utilization in a best-case scenario.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Alright, I give. About a 12W penalty.

A shreklekheh zakh, 8W, what is this, an overclocking thread?!

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Not trying to claim overclocking OG status or anything (I was there when you could overclock so long as you had a pencil :smuggo:)... And I understand that it's basically the huge push for much more affordable, non-confusing overclocking that's leading everyone to hop on board (which really makes the clock rate a peculiar commodity item and makes those who buy anything that could have a K on the end, but doesn't, seem really off in my eyes). But thinking back to when overclocking was WAY more of a pain in the rear end, it's a little bit adorable to see people talking about how the UEFI we get today is too confusing. I mean it even has a mouse, and you change one thing for the simple overclocks, and only a few, comparatively speaking, for the complex ones... It's beautiful.

Hell of a long way from all the pain in the rear end involved in screwing with the front side bus and making sure your processor and RAM frequency lined up appropriately for the type of setup you were running. Everything's so easy these days. I love it. I don't fault anyone for using software, apparently it's got better at its job so go hog wild - but the UEFI is just so elegant, and there's so much you just do not have to think about or worry about at all (regardless of whether you do BIOS or operating-environment overclocking) that it's pretty mind boggling.

Like, the decision to get stupid high clocks pretty literally comes down to "how much do I want to spend on my motherboard and my heat sink?" and that's cool.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

movax posted:

Memory voltage is fixed; you only need that higher voltage because Kingston says you do for stability at that speed. I don't think memory overclocking is worth it, to be honest. If you have 4 sticks, I'd just bump up to 1.52V or so to improve stability and leave memory at stock speeds.

Thousand times this. I do stuff that involves memory and noticed no qualitative performance increase for the pain in the rear end and higher voltages (both to the DIMMs and VCCIO) involved in getting it to 1T from the stock 2T. Might be worth a 2% difference on a gigantic decompression? Maybe?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Word of warning, Amazon sometimes doesn't know what the gently caress when it comes to RAM. They've sent me the wrong kind of RAM twice within a week of placing the orders. Granted, the only difference is printed on a rather small sticker, not in big typeface 9-9-9-24 1.5V as opposed to 9-9-9-24 1.65V... But that's a mistake that could, maybe, possibly kill an Ivy Bridge processor? 1.65V is beyond the excursion range of the system agent and memory controller alike innit?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Nope, buy that, cool them temps down and raise them clocks up. Them.


Them.







themm

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Man, I remember being nervous as heck when I installed a big copper heatsink on my 2800+ because of all the horror stories of crunching a corner due to improper installation. One false move and crunch, you've got a newly angular chip that won't function unless there's something basically miraculous going on.

And 92mm fans, haha, anyone else remember when the AC Freezer Pro 7 was HOT poo poo with its "quiet" 92mm fan? Thing would crank up but, hey, still not nearly as noisy as stock heatsinks of the era when it was relevant.

Looks like they still have some currently-made coolers aimed at small cases; weird to see sub-120mm fans being sold on aftermarket heat sinks these days! And while poking around, it looks like they're trying to compete in the 120mm big-boy heatsink arena too, albeit with a peculiar angle - still compact (130mm-ish instead of the ubiquitous 160-162mm designs others use) and using push-pin installation... Weird. AC Freezer XTREME Rev. 2. Pre-applied MX-2 - looks like rather a bit much of it at that, but it's a good compound and should perform well without too much installation hassle. I wonder if that 30mm of additional clearance makes a substantial difference in what cases can and can't accept it? Interesting choice for lower profile cooling without sacrificing the 120mm fan.


Edit: Oh, man, this is awesome. The earliest heat-pipe CPU coolers! Referred to in that prior article as "the geekiest" heat sinks around, haha. Well, I guess it didn't take much to cool an AMD XP+/64.

Agreed fucked around with this message at 03:43 on Jun 18, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

KillHour posted:

Actually, since the chip isn't soldered to the spreader, what's stopping someone from soldering the spreader directly to the heatsink? Obviously, this would make the heatsink useless for anything else, but it might shave off a few degrees if you really want to go [H] with it.

Thinking it through, how would you even do this in a way that would improve rather than damage cooling ability (let alone the processor)? You'd have to solder it only to the part of the heatspreader that contacts the core, or else I'd bet there would be no benefit at all, and then you've added several uneven millimeters to the heat sink and IHS setup, which is going to bite hard when you go to tighten it down. If you apply paste as normal and just solder the outside, I'm not sure you'd have any reason to expect benefits, but I am sure it'd be a good way to ruin a perfectly good processor and heat sink all at the same time. You'd have to do it with the processor already installed in the motherboard, too, unless part of the plan is to remove the soldered-together IHS-heatsink assembly to install the naked processor (possibly replacing the potentially mediocre application of goopy TIM with a more carefully applied layer of quality TIM), but then you run into the problem of crushing the processor when you go to tighten the heatsink...

That's gotta be too [H] for [H], even.

Josh Lyman posted:

Oh man, I remember that heatsink. That was before Zalman became the "giant loving copper heatsink" brand. I think the heatsink in my first build in 2000 was a standard square aluminum Coolermaster with a 60mm fan. I don't think Athlons came with heatsinks, possibly because they were all OEM pieces and retail boxes didn't exist like they do now with Intel CPUs.

I definitely remember my Barton-core AMD Athlon XP 2800+ arriving with a heat sink. That was in early 2003. Yeah, I really missed the boat - bought a 32-bit, generation-old processor on the cusp of the awesome Athlon 64 launch and installed it into an AGP 8X motherboard just in time for that standard to come to an end... Cut me off from upgrade options in a hilariously deserved way; the last good video card I could put in it was a 6800 GT, which was replaced by a 7600 GT when it failed years later due to an early fan failure, gently caress Leadtek Winfast :mad: By that time that was the fastest AGP card and pretty much my only option, hah.

Agreed fucked around with this message at 03:53 on Jun 18, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

KillHour posted:

You'd have to remove the heat spreader from the processor (replacing the paste in the process) and then sand a little off the bottom of the heatsink to make up for the difference in solder depth. You'd then probably want to reflow solder the heatsink and heat spreader together to make sure it's level. I didn't say it would be easy, but it's got to be easier than actually soldering the core to the heat spreader.

Hm, but if you reflow solder the heatsink, there go the pretty (and pretty important!) solder joints between the heat pipes and the fins on any modern high-end cooler that could take advantage of this - unless you knew for damned sure that you were using solder with a lower melting point than the solder used elsewhere, and you could very carefully control the heating environment. And it wouldn't ruin the heatpipes. And...

This is a pretty hilarious thought experiment, I wonder what kind of temps you'd end up with at the end of the Rube Goldberg-esque process :laugh:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

The IHS part sounds simple enough. The lapping sounds like a total pain in the rear end/chore/unnecessary in 2012; I haven't had issues with contact surfaces on even inexpensive heat sinks, personally, in a long time. That was a problem with older Thermalright units, iirc, early in the 2000s. The XP-120 was released in 2004 and is a very early high-performance model, but it's horizontal in mounting shape and often (despite great workmanship otherwise) had an iffy contact surface.

You sure you're thinking of the right heat sink? The XP-120 was famous for giving me the biggest bone back in 2004 - gently caress copper block arrangements, I wanted one of those so bad, it being the first commercially available 120mm cooler ever. And it "only" has 5 heat pipes, too. Is that definitely the one you meant?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Animal posted:

That is really awesome

It's been really awesome for a while now, though; really curious when it'll be brought to market. It's the thorium reactor of CPU cooling. :v:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Shaocaholica posted:

Do any of the SNB-E cpus overclock well? I'm a bit torn over going with SNB-E or IVB for my next build. It will definitely be a high end build and overclocked of course. I'm also 100% inclined to pop off the IVB IHS after about a week of heavy testing.

Yeah, and they overclock without having to get the eXtreme SKUs as well, if I recall correctly. FF will know more. I remember that it came out and was kinda neat, and that you could actually put together a fairly price-competitive system with a salty overclock, since it isn't stuck with the multiplier-only overclocking we get with the 4-core chips. Specifics, not sure.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down


Crushing a CPU to death is one of life's rare pleasures. RMA? In your dreams :buddy:

I'm all for people voiding their warranties in a real permanent kinda way for the sake of reapplying TIM properly, but it is pretty :downs: to bolt today's 2-3 pound heatsinks directly onto a CPU composed of a billion or more 22nm transistors (edit: not 28nm, those are GPU transistors, durr). Feel the crunch.

I remember a 2003-2004 era overclocking topic that showed someone who had made a very lovely, flat indentation on the corner of their at-that-point kinda long-in-the-tooth Thunderbird CPU (probably misrecalling the exact details, but the image was priceless and has stuck with me - that was when AMD's 32-bit Barton CPUs were getting EOL'd in favor of the first Athlon 64 CPUs, but I seem to remember it being a Thunderbird-era processor nonetheless).

It was the best of times, it was the worst of times. We don't want to go back to that time, though. The one reasonable test I've seen showed pretty much no difference with the IHS off or on, probably because the IHS is somewhat integral to how modern processors disperse heat and how heat sinks wick it up. The thing's not just there to look pretty, and protecting the processor from incidental damage by a misapplied heat sink is bonus points.

I remember there being a bunch of silly internet chest-thumping back when they were first introduced, too. Oh, the dumb poo poo we'll argue over. And don't get me started on lapping... :corsair:

Agreed fucked around with this message at 01:48 on Jul 7, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Shaocaholica posted:

H100 water block is not a 2-3lb heatsink. And no, I'm not the type of rear end in a top hat that tries to RMA something that's deliberately voided. I guess the default prejudice is to think that of anyone who asks?

e: found this thread claiming 9C drop going to direct die on water

http://www.overclock.net/t/1269943/flat-waterblock-for-direct-to-die-cooling-on-ivy-bridge#post_17499510

Don't think either of us were directing any negative commentary at you specifically, though if you're actually thinking about doing it, weeeell, all yours to try, amigo. I'd guess that would be the safest way to do something fundamentally unsafe. Nice clock he's got on the chip, certainly, though I'd be concerned about running it that hot and at that voltage regardless of temperature. Seems like a good candidate for a cooked chip sooner or later. But that's overclock.net for you, I guess. If you go for it, let us know how it goes!

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Agro ver Haus doom posted:

You have to realize that Prime is a stress test that blasts your CPU to 100% load and RAM takes a beating too. That said, you will never actually see this sort of stress and load in real world everyday use... not during a game or anything really.

Yeah... that's not true. The farther beyond spec you go, the less you can count on a poorly tested overclock. Please don't say stuff like this; it walks potential newcomers to overclocking in the wrong direction vis-a-vis system stability. Prime95 is a representative high load, but remember that it exists as such because the Prime95 team needed a way for people to ensure that their systems weren't sending in bunk data for the calculations done in the search for higher (Mersenne, I believe?) primes. An unstable system eventually kicks an error and sends in an exciting new finding, which then wastes other systems' time because it turns out it was just a miscalculation.

Anything Linpack-based is waaaay off the grid when it comes to real workloads - that's more what you're talking about (and, maybe ironically, it's less likely to find low-level instabilities because of the speed with which it exposes medium- to high-level ones). It shouldn't be used much with these increasingly heat-sensitive components; get a 22nm transistor up to 75ºC+ and the resistance increase becomes dangerous to the nano-scale components themselves and can cause precipitous failure.
