|
spasticColon posted:Well at 4.2GHz it's stable at 1.2v so I don't know what the deal is. All I know is that at 4.3GHz it bluescreens until I pump the voltage up to 1.28v, and at 4.4GHz I have to pump it up to 1.3v, so I may just have a chip of lesser quality. Unless there is something I'm not doing right. Would messing with the RAM voltages and/or timings make any difference? I have them all set to auto right now. Turning off XMP doesn't make a difference either. Someone with more knowledge might have a better idea, but 4.2 or 4.3 might just be as far as one of your cores wants to go. What's your bclk at? If it's over 100 you could try turning it down to 100.
|
# ? Jul 27, 2011 04:04 |
|
spasticColon posted:Well at 4.2GHz it's stable at 1.2v so I don't know what the deal is. All I know is that at 4.3GHz it bluescreens until I pump the voltage up to 1.28v, and at 4.4GHz I have to pump it up to 1.3v, so I may just have a chip of lesser quality. Unless there is something I'm not doing right. Would messing with the RAM voltages and/or timings make any difference? I have them all set to auto right now. Turning off XMP doesn't make a difference either. Mine has a pretty similar voltage curve. I've got it at +.05V offset and it hovers around 1.28V load at 4.3GHz. I think chips might just vary in their stock voltage. I can get it to 4.5GHz stable, but I have to set the voltage around 1.34V if I remember correctly, which isn't worth the effort on a 24/7 overclock of an already fast chip in my opinion. I think I noticed that bumping my RAM from 1.5v to 1.53V helped stability a little bit, but that could just be my imagination too. I haven't messed with overclocking this PC since I built it months ago.
|
# ? Jul 27, 2011 04:31 |
|
brainwrinkle posted:I think I noticed that bumping my RAM from 1.5v to 1.53V helped stability a little bit, but that could just be my imagination too. I haven't messed with overclocking this PC since I built it months ago. Ever since my 650i board (DDR2) I've always bumped up RAM voltages by .1 or .2V and I think it does help stability, especially when you have multiple sticks / all the slots populated. Don't expect any ill effects from this either, as I think JEDEC requires them to function up to 1.575V and resist damage up to 1.9V or so (don't have to function, they just need to survive getting that voltage). Also, 4.2 stable at 1.2V is great. I'm at 1.325V to get stable at 4.7GHz now, started getting some BSODs...maybe because it's gotten warmer out, or because OCZ has hosed me yet again with the Vertex 3.
|
# ? Jul 27, 2011 05:48 |
|
movax posted:Ever since my 650i board (DDR2) I've always bumped up RAM voltages by .1 or .2V and I think it does help stability, especially when you have multiple sticks / all the slots populated. Don't expect any ill effects from this either, as I think JEDEC requires them to function up to 1.575V and resist damage up to 1.9V or so (don't have to function, they just need to survive getting that voltage). I'm at 1.36-1.38v to get my 2600K stable with HyperThreading enabled. I can kick that off and achieve lower temps and stability at lower voltages, but it's damned useful for what I do and I've got a high-airflow case and an efficient cooler, so I'll let it do its thing, no problems. I wonder if I should up the RAM voltages. I've got 16GB, populated all four slots of my Asus Sabertooth Rev3 with 4GB DDR3-1600 9-9-9-24 2T modules. At 1.5V I'm able to lower that to 9-9-9-24 1T and pass memtest86+ no problem; I just slightly bumped (like, one incremental step as permitted by the UEFI) the VCCSA and VCCIO voltages while pegging the actual memory voltage at 1.5V. Wonder if I could lower my vcore if I raised my RAM voltage a bit. They can survive, boot and run at 1.65V, which I know because updating the BIOS resets everything and the newer BIOS doesn't read them as 1.5V modules even though the March-era BIOS did. Noticed it pretty quick but not before running them into Windows and a successful stability test at 1.65V, d'oh. If this is representative of G.Skill quality, I'll pick 'em in the future; I just went with what would fit under the damned heat sink and could be had quickly from Amazon. Anyone have thoughts on PLL overvolting? Safe/unsafe, apart from the known bug with sleep that I don't particularly care about? Is there a nice, fat white paper I can read that goes in depth on the P67 platform so I can stop asking dummy-level questions in a smart thread?
|
# ? Jul 27, 2011 06:00 |
|
Dogen posted:Someone with more knowledge might have a better idea, but 4.2 or 4.3 might just be as far as one of your cores wants to go. What's your bclk at? If it's over 100 you could try turning it down to 100. It's at 100 already. I set it to that before I even began overclocking it back in April when I upgraded my system to a 2500K. And I can push it to 4.6GHz but then I have to run the vcore at 1.38v and I don't think it's worth it at that point. I even turned off all the power saving options and turned off the overspeed protection in the overclocking options. My RAM is running at auto settings (DDR3-1600 9-9-9-24 2T @ 1.48v) and the RAM itself is Corsair Vengeance DDR3-1600 CL9 with XMP support, so it's designed to run at those speeds. I do have the RAM in DIMM slots 2 and 4 rather than slots 1 and 3 so they clear my Xigmatek Gaia SD1283 cooler. I'm not worried about it though, as the vcore is still well within safe levels and the CPU is plenty fast enough at 4.3GHz. I would like to turn power saving on for when my system is idle, though; what should I turn on so my current overclock doesn't destabilize? Edit: I got the clockspeed to throttle back down to 1.6GHz but the vcore is still 1.28v, so is that because I have the vcore manually set for my overclock? spasticColon fucked around with this message at 08:12 on Jul 27, 2011 |
# ? Jul 27, 2011 07:11 |
|
spasticColon posted:Edit: I got the clockspeed to throttle back down to 1.6GHz but the vcore is still 1.28v, so is that because I have the vcore manually set for my overclock? On my Asus board you have to manually enable vcore power saving; I haven't bothered.
|
# ? Jul 27, 2011 12:55 |
|
Not exactly... On an Asus Sandy Bridge motherboard, you have to use offset overclocking if you want SpeedStep and C1E to function. Which they will, just fine. You ALSO have the option of trying your luck with their EPU power saving, which disables Intel's power saving (and, hell, might actually work with a manually set voltage, who knows), but that has always been a poo poo sandwich for me at least going back to the P45Q-E, so I leave it off in favor of Intel's fully compatible power saving.
|
# ? Jul 27, 2011 17:27 |
|
My MSI board has "Overclocking Profiles" but I don't feel like loving with it. It only seems to jump between 1.6GHz and 4.3GHz with nothing in between even with SpeedStep turned on, but it only jumps to 4.3GHz when it needs to, like when I'm running a game or encoding a video. A single threaded app that needs a lot of CPU would only cause one of the cores to hit full speed, right? Isn't that how Sandy Bridge chips work?
|
# ? Jul 28, 2011 05:20 |
|
spasticColon posted:A single threaded app that needs a lot of CPU would only cause one of the cores to hit full speed right? Isn't that how Sandy Bridge chips work? Cores can't be clocked independently, but unused cores can be power-gated (shut off) and the remaining cores can Turbo up using the TDP headroom.
|
# ? Jul 28, 2011 05:25 |
|
Alereon posted:Cores can't be clocked independently, but unused cores can be power-gated (shut off) and the remaining cores can Turbo up using the TDP headroom. Oh I see. But does that feature still work on an overclocked chip?
|
# ? Jul 28, 2011 05:49 |
|
spasticColon posted:Oh I see. But does that feature still work on an overclocked chip? Yes; you can overclock on the -Ks by increasing the max turbo ratio that applies when the CPU determines it can begin to turbo. So if I set my ratio to 47, at MAX TURBO it will hit 100.00MHz BCLK * 47 = 4.7GHz, thermals permitting (which my huge rear end air cooler permits). I always leave all the power-saving options on in BIOS, though now that I think about it, I only ever noticed CPU-Z bouncing between 1.6GHz and 4.7GHz...so I guess some form of saving is still in effect. Also, what is the "max" safe VCore for 24/7, non-electromigration-causing operation? I know Asus's EFI BIOS turns the voltage red after you exceed 1.330V, but I've seen some yahoos running at 1.4V 24/7 as well.
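The ratio math in that post is just base clock times multiplier; a quick sketch using only the numbers movax mentions (100 MHz BCLK, 47x max turbo, 16x idle state), nothing else assumed:

```python
# Sandy Bridge K-series overclocking sets the max turbo multiplier;
# effective core clock = BCLK * multiplier. Numbers are from the post above.

def core_clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    """Effective core clock in MHz for a given base clock and ratio."""
    return bclk_mhz * multiplier

print(core_clock_mhz(100.0, 47))  # 4700.0 MHz = 4.7 GHz at max turbo
print(core_clock_mhz(100.0, 16))  # 1600.0 MHz = the 1.6 GHz idle state
```

This is also why keeping BCLK pinned at 100 matters on this platform: the clock generator feeds other buses too, so the multiplier is the only knob with real headroom.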
|
# ? Jul 28, 2011 15:23 |
|
I'm probably the only one that disabled turbo and doesn't overclock, I figure? I kinda value data integrity. With this upgrade I lost the ability to use ECC RAM, so I'm not taking any chances, seeing as this box runs 24/7.
|
# ? Jul 28, 2011 17:17 |
|
movax posted:I always leave all the power-saving options on in BIOS, though now that I think about it, I only ever noticed CPU-Z bouncing between 1.6GHz and 4.7GHz...so I guess some form of saving is still in effect. Yeah, that's just how Asus rolls when overclocking, basically. It used to allow base clock overclocking but at some point in updating the BIOS that stopped and now it's all turbo, so in order to ensure performance as per your requests, if you've got turbo-by-all-cores selected it automatically tells turbo to ramp that poo poo up as soon as your processor gets remotely involved. I haven't experimented with turbo-per-core.
|
# ? Jul 28, 2011 17:17 |
|
What's the disadvantage of running at high core voltage if you can keep the CPU cool? My 2500K runs 4.8GHz at 1.36V at around 40C (55C under load). But looking around, it seems like people aren't too happy taking it over 1.3V.
|
# ? Jul 29, 2011 07:06 |
|
MeruFM posted:What's the disadvantage of running at high core voltage if you can keep the CPU cool? Nothing at that voltage, just when you start getting higher it can get a bit dangerous and lead to early failure.
|
# ? Jul 29, 2011 07:17 |
|
MeruFM posted:What's the disadvantage of running at high core voltage if you can keep the CPU cool? As the process shrinks, the transistors that make up your CPU are literally getting smaller and more delicate. The first P4s had VIDs ranging from 1.6 to 1.75V, but were on a much larger process as well. Higher voltages are capable of degrading and wrecking transistors in the long term on these new processes. Electromigration is even more of a concern as our transistors get smaller and smaller. The 6-series chipset bug, in fact, was caused by a change that resulted in a transistor that made up the PLL (IIRC) receiving too much voltage and potentially failing early (mine did). The trend in everything is dropping it like it's hot; successive generations of DDR are dropping voltage (remember running some of your DDR at 2.1V?), lots of mobile stuff already operates at <1V, etc. Dropping voltage helps mitigate heat. A few months from now, or maybe a few years from now, there could be people with 2600Ks that they ran "24/7 @ 1.5V with no problems under water" who suddenly start suffering mysterious system errors because transistors in their CPU are literally falling apart.
|
# ? Aug 1, 2011 21:26 |
|
Some schedule slides about Ivy Bridge workstation chips, released by Sweclockers on the 28th.
|
# ? Aug 2, 2011 22:23 |
|
Is anyone else seeing graphical corruption with old Java apps when you use the embedded graphics? I'm sure it is an Intel driver issue, but apparently I am the first one to report it to Dell and subsequently Intel and it is an absolute pain in the rear end to be the first to get them to acknowledge this kind of thing.
|
# ? Aug 2, 2011 22:34 |
|
BangersInMyKnickers posted:Is anyone else seeing graphical corruption with old Java apps when you use the embedded graphics? I'm sure it is an Intel driver issue, but apparently I am the first one to report it to Dell and subsequently Intel and it is an absolute pain in the rear end to be the first to get them to acknowledge this kind of thing. Got an example app we can try?
|
# ? Aug 2, 2011 22:41 |
|
Factory Factory posted:Got an example app we can try? In my case it is the Oracle Jinitiator 1.3.1.30, so unless you have an Oracle middleware server running like I do it won't do you much good. But if anyone has some crappy software that runs on a 1.3 JRE they could test, I would appreciate it. The behavior is odd. Basically the windows won't refresh unless you move them. But if you open up the Intel graphics properties dialog and leave it in the background, then everything starts refreshing properly.
|
# ? Aug 2, 2011 22:44 |
|
Agreed posted:Yeah, that's just how Asus rolls when overclocking, basically. It used to allow base clock overclocking but at some point in updating the BIOS that stopped and now it's all turbo, so in order to ensure performance as per your requests, if you've got turbo-by-all-cores selected it automatically tells turbo to ramp that poo poo up as soon as your processor gets remotely involved. I haven't experimented with turbo-per-core. I might play with turbo-per-core now; I bet it gets power wasteful running at 4.7GHz while I'm using my PC. Though right now it just keeps flipping back between 1.6 and 4.7 anyway. Will report if stability goes down/BSODs start happening.
|
# ? Aug 3, 2011 03:54 |
|
I've thought about that, too. My system is letting it hit 120-130W under stress test loads because my cooler can handle it and it's got no reason not to, but most applications will only benefit from one or two cores turboing up. However, I have done measurements and when running Starcraft 2 for example, which is the lovely combination of CPU limited and poorly multi-threaded, it doesn't get anywhere near the stock TDP even at 4.7GHz, so it's not like the Asus board is forcing all those cores to be active just because their multiplier jumps up at the same time. My concern is similar, potential BSODs and stability issues instead of my powerful and stable overclock in exchange for ___________ where I can't really fill in the blank.
|
# ? Aug 3, 2011 04:07 |
|
Rumors from SemiAccurate are that Sandy Bridge-E (Nehalem replacement) has been pushed back until Q1 of next year due to bugs with the on-die PCI-E 3.0 controller.
|
# ? Aug 4, 2011 04:16 |
|
Alereon posted:Rumors from SemiAccurate are that Sandy Bridge-E (Nehalem replacement) has been pushed back until Q1 of next year due to bugs with the on-die PCI-E 3.0 controller. Not too surprised, Xilinx pushed back some of their 7-series FPGAs for the same reason (bugs in the IP cores) e: you know, I wonder how much of it is running into issues with early PCIe 3.0 silicon from ATI/nvidia, who in turn might be cooperating on transceiver/etc development with the FPGA guys. The protocol layer hasn't changed much from the last time I read the spec, just a bump in speeds and tightening of some tolerances when it comes to reference clocks. movax fucked around with this message at 06:04 on Aug 4, 2011 |
# ? Aug 4, 2011 04:41 |
|
movax posted:Not too surprised, Xilinx pushed back some of their 7-series FPGAs for the same reason (bugs in the IP cores) What exactly is your job, and how do I get that kind of thing?
|
# ? Aug 4, 2011 06:57 |
|
Sinestro posted:What exactly is your job, and how do I get that kind of thing? I'm a hardware engineer in an R&D group; we do high-speed data acquisition/control systems, and as part of that we roll our own motherboards / FPGAs that connect via PCIe/HyperTransport to minimize latency and increase speed. I got lucky and got recommended by a prof before I graduated; it's probably one of the very few non-automotive-related engineering places in Michigan. So looking at the PCIe 3.0 spec, I guess I lied, there are some protocol layer changes in addition to physical layer changes. They went to 128b/130b for 8.0GT/s operation + a ton of new specifications PHY-wise for the 8.0 rate. I don't know how much GPUs will practically benefit, as I know HardOCP did a test last year where they forced a CrossFire/SLI setup into x4/x4 and didn't notice much of a drop (i.e. the GPUs themselves were the bottleneck), but more speed is always good!
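For anyone curious how much the 128b/130b change actually buys, here's a rough per-lane throughput comparison. This is only a line-coding calculation at the transfer rates named above (5.0 GT/s gen2 with 8b/10b, 8.0 GT/s gen3 with 128b/130b); it ignores TLP/DLLP protocol overhead, so treat the numbers as upper bounds:

```python
# Back-of-envelope usable PCIe throughput per lane per direction,
# accounting only for line-coding overhead (not packet framing).

def lane_gbytes_per_s(gt_per_s: float, payload_bits: int, line_bits: int) -> float:
    """GB/s per lane after subtracting the encoding overhead."""
    gbits = gt_per_s * payload_bits / line_bits
    return gbits / 8  # bits -> bytes

gen2 = lane_gbytes_per_s(5.0, 8, 10)     # ~0.5 GB/s per lane
gen3 = lane_gbytes_per_s(8.0, 128, 130)  # ~0.985 GB/s per lane
print(f"gen2: {gen2:.3f} GB/s, gen3: {gen3:.3f} GB/s, ratio {gen3 / gen2:.2f}x")
```

The point being that gen3 nearly doubles per-lane bandwidth with only a 60% bump in signaling rate: 8b/10b burns 20% of the wire on encoding, 128b/130b only about 1.5%.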
|
# ? Aug 4, 2011 15:07 |
|
movax posted:Also, what is the "max" safe VCore for 24/7, non-electromigration causing operation? I know Asus's EFI BIOS turns the voltage red after you exceed 1.330V, but I've seen some yahoos running at 1.4V 24/7 as well. Supposedly 1.35v is the max for 24/7 use. A little higher is certainly fine for a while, but you usually need water cooling at that point unless you like the sound of jet engines in your case. I don't think anyone outside of Intel knows exactly how long it'll take to actually kill the chip at a given voltage, though. Doing quick n' dirty googles shows people who've gone up to 1.5v and have had their chips suddenly die already. e: "Turbine/jet engine" is pretty subjective admittedly. Personally I don't mind the WHOOOOOOSH of the several moderate rpm 120mm fans in my case but most people I know flip out at that sort of thing. YMMV \/\/\/\/\/\/\/ PC LOAD LETTER fucked around with this message at 17:29 on Aug 4, 2011 |
# ? Aug 4, 2011 16:57 |
|
Intel said 1.5V, then chips died. Oops. Now Intel has said 1.38V, but who knows; its TDP is 95W, and if you're overclocking and giving it voltage that high it'll hit 120-130W under the right conditions. I haven't had any issues with mine running 1.36V-1.38V, offset method, and it almost never reaches 1.38V even when running stupid long iterations of stress tests (and has never exceeded it); I'll let you know if my system suffers an early death related to my processor deciding enough's enough or something. I do feel obligated to note that while a dedicated liquid cooling setup will remove heat faster for extreme (dangerous) overclocking, the "turbine" thing is a relic. Today's most powerful and effective air coolers are generally still very quiet, using 120mm and 140mm fans and impressive heat pipe arrays. My case is a Corsair 650D; its fans are 200mm front intake, 200mm top exhaust, 120mm rear exhaust. I've installed an optional third 120mm fan on my Noctua NH-D14, which is one of the current contenders for the top spot. I can't hear my processor cooling over my case fans, which I can only hear if my AC is off and the house is silent. High airflow and great cooling is not necessarily coupled with loud noise as used to be the case, thanks to efficient heat wicking and push-pull fan setups. Agreed fucked around with this message at 17:24 on Aug 4, 2011 |
# ? Aug 4, 2011 17:14 |
|
movax posted:So looking at the PCIe 3.0 spec, I guess I lied, there are some protocol layer changes in addition to physical layer changes. They went to 128b/130b for 8.0GT/s operation + a ton of new specifications PHY-wise for the 8.0 rate. Yeah it's nothing like gen2 where the diff of the entire spec against gen1 is like 5 lines, gen3 had to do some extreme changes to hit double the bandwidth.
|
# ? Aug 4, 2011 19:12 |
|
movax posted:I'm a hardware engineer in a R&D group; we do high-speed data acquisition/control systems and as part of that we roll our own motherboards / FPGAs that connect via PCIe/HyperTransport to minimize latency and increase speed So you're basically telling me that Creative's bickering about PCIe increasing latencies for sound cards is bullshit?
|
# ? Aug 4, 2011 21:37 |
|
PC LOAD LETTER posted:Supposedly 1.35v is the max for 24/7 use. A little higher is certainly fine for a while but you usually need water cooling at that point unless you like the sound of jet engines in your case. I don't think anyone knows exactly how long it'll take to actually kill the chip at a given voltage outside of Intel though. Doing quick n' dirty googles shows people who've gone up to 1.5v and have had their chips suddenly die already. Just a reminder that water cooling is always louder than air cooling. The pumps alone in one of those commercial Antec/Corsair kits are about as loud as a noisy case fan, and the fans have to spin much faster to get equivalent cooling because water cooling is so much less efficient (remember, the water is just moving heat from the CPU to a radiator, and heat pipes move more heat faster without a pump). It's possible to build your own custom water cooling system that will outperform air (by using a massive car radiator for example, or putting the radiator underground), but that's not what most people mean when they talk about water cooling.
|
# ? Aug 4, 2011 23:12 |
|
movax posted:I might play with turbo-per-core now, I bet it gets power wasteful running at 4.7GHz while I'm using my PC. Though right now it just keeps flipping back between 1.6 and 4.7 anyway. Will report if stability goes down/BSODs start happening. My system does the same thing. It runs at 1.6GHz at idle, but when I am playing games or doing video work it ramps up to 4.7GHz.
|
# ? Aug 5, 2011 01:51 |
|
JawnV6 posted:Yeah it's nothing like gen2 where the diff of the entire spec against gen1 is like 5 lines, gen3 had to do some extreme changes to hit double the bandwidth. Oh god yeah, opening up the _CB copy of the spec w/ all the changes, literally every other page has updates of some type or the other. All the validation has to be done at some point I guess; I just don't see anything current for us consumers benefiting from the intro of PCIe 3.0 other than the PCB guys getting a break and getting to lay down fewer traces (that of course would have to wait until PCIe 3.0 devices are really prevalent, but then you could get away with running x4 or x8 somewhere instead of x8 and x16). Maybe the blade/backplane/telecom guys will appreciate it though. Bet PLX and Pericom are hard at work with some PCIe 3.0 goodies. Combat Pretzel posted:So you're basically telling me that Creative's bickering about PCIe increasing latencies for sound cards is bullshit? I think it's a cop out. Obviously I have no idea behind their development process, but perhaps their PCIe IP core + the rest of their logic wasn't up to snuff. Hell, by the 7-series chipset, PCI will only be provided via PCIe-to-PCI bridge chips like the PEX 8112 at motherboard manufacturer discretion, so people with PCI cards will be going through PCIe whether they like it or not. You Am I posted:My system does the same thing. It runs at 1.6GHz at idle, but when I am playing games or doing video work it ramps up to 4.7GHz Ahkay...I think I'll tolerate it; I don't know how large the power savings from enabling per-core turboing would be, but I think it would hurt stability of my machine.
|
# ? Aug 5, 2011 02:30 |
|
Alereon posted:Just a reminder that water cooling is always louder than air cooling. The pumps alone in one of those commercial Antec/Corsair kits are about as loud as a noisy case fan, and the fans have to spin much faster to get equivalent cooling because water cooling is so much less efficient (remember, the water is just moving heat from the CPU to a radiator, and heat pipes move more heat faster without a pump). It's possible to build your own custom water cooling system that will out perform air (by using a massive car radiator for example, or putting the radiator underground), but that's not what most people mean when they talk about water cooling. Well that depends on how you do it. The cheap pre-built kits can indeed suck; once you get around or over $100 with dual or triple 120mm fan radiators they seem to get good to decent with low noise.
|
# ? Aug 5, 2011 02:53 |
|
PC LOAD LETTER posted:Well that depends on how you do it. The cheap pre built kits can indeed suck, once you get around or over $100 with dual or triple 120mm fan radiators they seem to get good to decent with low noise.
|
# ? Aug 5, 2011 04:04 |
|
I used to run a pretty beefy water cooling setup, but I've been more than happy with the noise and performance of air coolers since heat pipes came around. Granted, I think it'd be easier to do a low-noise build with water, assuming you're OK with higher temps: it's easier to increase radiator surface area, I think, than air cooler area.
|
# ? Aug 5, 2011 05:09 |
|
movax posted:I used to run a pretty beefy water cooling setup, but I've been more than happy with the noise and performance of air coolers since heat pipes came around. Granted, I think it'd be easier to do a low-noise build with water, assuming you're OK with higher temps: it's easier to increase radiator surface area, I think, than air cooler area. A couple of years ago I'd agree with you. But now that larger heatsinks, with heatpipes, are common, I'm gonna go with air cooling. I had the CM Hyper 212 (92mm) on my AMD until I upgraded to the i5 this week; snagged its big brother, the Hyper 212+ (120mm). My case also has 1 intake, 1 exhaust, and 1 power supply fan, all 120mm as well. Intake/exhaust run at a paltry 500 RPM, CPU fan ranges from 800-1300 depending on load. If it's sitting on the desk I can barely hear a slight hum coming from it. Put it on the floor or under the desk and I can't hear a thing from it. Even when the CPU fan ramps up under load, it's not an annoying sound and not loud at all. Granted, the CPU will get up to about 60C if I abuse it with Prime95 for a while, but it's also overclocked; it idles around 28-30C. Then again, ages ago I had a screaming Delta 7000 RPM fan on my CPU; my ears are probably still hosed up from that. Plus this heatsink/fan was a whopping $25. The downside is the side panel barely fits over the heatsink, and this is a pretty standard midtower case.
|
# ? Aug 5, 2011 06:55 |
|
some texas redneck posted:A couple of years ago I'd agree with you. But now that larger heatsinks, with heatpipes, are common, I'm gonna go with air cooling. I had the CM Hyper 212 (92mm) on my AMD until I upgraded to the i5 this week; snagged its big brother, the Hyper 212+ (120mm). My case also has 1 intake, 1 exhaust, and 1 power supply fan, all 120mm as well. Intake/exhaust run at a paltry 500 RPM, CPU fan ranges from 800-1300 depending on load. Oh yeah, I've been air for awhile now; I have an Ultra 120 Extreme that I just keep buying socket adapters for, does pretty great. Right now I have some nasty resonance coming from my case though, so I'm probably going to need to take it apart and reassemble the drat thing at some point. I have 4 120mm fans in there, 2 in the front as intake, one exhaust and one on the Ultra 120 itself.
|
# ? Aug 5, 2011 16:23 |
|
So since we were talking about how RAM frequency doesn't make much difference beyond the base 1333, I have a question that internetting has not provided a satisfactory answer to because most of the google results I have come up with contain too much idiocy to be reliable. Is it preferable to run at 1333 and 1T command rate or 1600 and 2T? I have one of those Corsair vengeance kits and it can do one or the other, but not both. edit: Well, I'll chalk my previous results up to some other random setting that was wrong, because I let it cook on Prime95 while I played several games of Zombie Gunship and there was nary a hint of instability, so who the hell knows. I'm still interested as to what the answer to my question would be, though. some texas redneck posted:Then again, ages ago I had a screaming Delta 7000 RPM fan on my CPU, my ears are probably still hosed up from that. God, that loving jet engine. Oh to be young and foolish again. Dogen fucked around with this message at 17:08 on Aug 5, 2011 |
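Not a direct answer to the 1T-vs-2T question, but the raw latency/bandwidth tradeoff between the two configs can be sketched. This assumes CL9 at both speeds and a standard 64-bit channel, and it deliberately leaves out command rate (1T vs 2T costs roughly one extra memory clock on command issue, which simple arithmetic like this doesn't capture):

```python
# DDR3 wall-clock CAS latency and peak per-channel bandwidth.
# The memory clock is half the DDR transfer rate; a channel is 64 bits wide.

def cas_ns(transfer_mt_s: float, cas_cycles: int) -> float:
    """CAS latency in nanoseconds for a given transfer rate (MT/s) and CL."""
    clock_mhz = transfer_mt_s / 2
    return cas_cycles / clock_mhz * 1000  # cycles / MHz -> microseconds -> ns

def peak_gb_s(transfer_mt_s: float) -> float:
    """Peak bandwidth of one 64-bit (8-byte) channel in GB/s."""
    return transfer_mt_s * 8 / 1000

print(cas_ns(1333, 9), peak_gb_s(1333))  # ~13.5 ns, ~10.7 GB/s
print(cas_ns(1600, 9), peak_gb_s(1600))  # 11.25 ns, 12.8 GB/s
```

So at the same CL, 1600 is strictly better on both axes; the question only gets interesting because dropping to 1333 is what buys the 1T command rate on that kit.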
# ? Aug 5, 2011 16:48 |
|
Alereon posted:Even if you buy one of those expensive water cooling kits you're still not going to get performance rivaling air cooling. Silent PC Review just did a review of the Antec Kuhler water coolers, Well poo poo, didn't know the heat pipe HSFs had gotten that good. Last one I got was at least 3 years ago now. Guess it's the car radiator route or nothing for water cooling now.
|
# ? Aug 5, 2011 17:15 |