Animal
Apr 8, 2003

Agreed, what resolution do you play at? I can see TXAA being a big deal at 1080p, but at 1440p anything above FXAA does not seem to be worth going below 45fps.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Animal posted:

Agreed, what resolution do you play at? I can see TXAA being a big deal at 1080p, but at 1440p anything above FXAA does not seem to be worth going below 45fps.

1080p, and I wouldn't trade Very High for the uppermost setting of TXAA on a 1440p or greater monitor. The cinematic look is very very neat, but not worth putting your performance in the shitter. The thing about TXAA is that it gives a very smooth quality to motion, which isn't at all the case with other AA methods or with just upping the resolution. The trade-off is a loss of sharpness, which could be bad in an FPS but isn't in Crysis 3 at least.

I intend to stick with 1080p until Maxwell, by the way - I know this goes against conventional wisdom, but I still consider 1080p high resolution, at least with games that actually push the limits of hardware. I prefer to be able to crank everything and get smooth framerates without having to pay for an SLI setup; further, I prefer a single GPU setup for compatibility and low-hassle reasons, and I use a headless PhysX coprocessor (which may become totally deprecated as a thing as engines move to different GPGPU simultaneous compute methodologies thanks to impetus from console development), which makes SLI a no-go.

nVidia knows their current GPGPU performance isn't awesome outside of CUDA, where it is awesome thanks to highly specialized software packages; as engines adapt to the changing console landscape, nVidia won't be left behind, if this is any indication:

quote:

What we do know about the Maxwell family of chips so far from the official sources is that they will integrate general-purpose Denver ARMv8-compatible cores in addition to graphics stream processors and that they will be able to support unified virtual memory technology with microprocessors from Intel or AMD, a rather big deal for many applications. It is also logical to expect higher horsepower in general, which will boost video games, the main driver for Nvidia’s GeForce business.

From this article from March.

I believe that when Maxwell hits, team green will be in a good place to take advantage of higher resolutions and compute simultaneously without necessarily relying on anything proprietary to do so.

Jan
Feb 27, 2008

The disruptive powers of excessive national fecundity may have played a greater part in bursting the bonds of convention than either the power of ideas or the errors of autocracy.

Agreed posted:

I do really appreciate your being here to offer a practical perspective, though - I don't work in the industry at all, you can probably guess my industry by my avatar :) Did you check out the Practical Clustered Shading white paper? It looks really promising as an alternative to totally deferred or totally forward rendering, though it's not the first approach to that goal, conceptually. Just nicely open about it, at least in the abstract. I appreciated the visual reminder that a "rote" mental picture of a frustum isn't really correct, hadn't thought about it like that before - but that's just one thing, the paper as a whole was very interesting and it seems like their particular approach, while not production proven yet, could yield interesting results.

Yeah, I did, which is actually one good example of how my fundamental knowledge is riddled with holes. I understand most of what they're describing, but I haven't actually encountered the problems they're trying to solve.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Jan posted:

Yeah, I did, which is actually one good example of how my fundamental knowledge is riddled with holes. I understand most of what they're describing, but I haven't actually encountered the problems they're trying to solve.

I'm sure you'll have plenty of knowledge under your belt as you keep on, though. And really, thank you for being around to offer an insider view on this stuff, it's interesting to me but it is more my hobby than my livelihood obviously.

I'm about to work on the nVidia Boost 2.0 overclocking guide, but I figured I'd link another really fun white paper talking about interesting ways to do voxelized shadow volumes for crazy good scene lighting without wrecking performance due to Pixar-like rendering requirements :v:

Jan
Feb 27, 2008

The disruptive powers of excessive national fecundity may have played a greater part in bursting the bonds of convention than either the power of ideas or the errors of autocracy.

Agreed posted:

I'm sure you'll have plenty of knowledge under your belt as you keep on, though. And really, thank you for being around to offer an insider view on this stuff, it's interesting to me but it is more my hobby than my livelihood obviously.

Yeah, not to mention I just changed studios and moved onto a project that is still in preconception. This is going to be a hell of a learning experience.

And, incidentally, this studio also doesn't block multipart/form-data encoding, so I can actually post on SA! The taste of freedom!! :shepface:

Literal Hamster
Mar 11, 2012

YOSPOS
I think my EVGA GTX 680 is having heat issues.

I have it overclocked to a 132% Power Target with a +30MHz GPU and +100MHz memory offset, for a 1202MHz GPU clock and a 3200MHz memory clock. When playing Planetside 2 tonight, MSI Afterburner reports the TDP at 75%, GPU usage at 65%, and temperatures between 65C and 70C at full fan speed. At no load, the GPU will idle at around 50C. When I use Furmark to stress the GPU at 100% load, the temperatures spike to 80C - 85C before the GPU self-throttles.

Without an overclock (the GPU is factory-overclocked), the GPU will still spike past 70C and self-throttle even at full fan speed when tested with Furmark. There's no way I can keep the card from throttling without actually downclocking the card. Are these temperatures typical of 680s or do I have a problem?

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.
That is nowhere near the heat those modest clocks should be putting out, and there's nothing whatsoever typical about a 50C idle unless your computer is in a desert. I'm guessing you have one of the reference blowers that aren't very effective. Have you checked if it's positively clogged with dust and poo poo (or if your case has airflow problems)? Have you checked that the fan is actually doing the work it claims it is?

Literal Hamster
Mar 11, 2012

YOSPOS

TheRationalRedditor posted:

That is nowhere near the heat those modest clocks should be putting out, and there's nothing whatsoever typical about a 50C idle unless your computer is in a desert. I'm guessing you have one of the reference blowers that aren't very effective. Have you checked if it's positively clogged with dust and poo poo (or if your case has airflow problems)? Have you checked that the fan is actually doing the work it claims it is?

GPU-Z seems to corroborate what MSI Afterburner is saying about the fan speed, and it sounds like a leaf blower is running in the room so I guess it's probably running the fan at what it claims the speed is.

Yeah I have one of those reference blowers that exhaust out the back. My case seems to have sufficient airflow, but I can't check to see if the blower is clogged because one of the screws that secures the blower to the card was improperly manufactured and I can't get a screwdriver head to catch.

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.
You need to get around that somehow sooner than later. Another theory is that maybe the heatsink contact to the GPU is lovely due to deformation or lousy factory thermal grease application.

Klyith
Aug 3, 2007

GBS Pledge Week

Daysvala posted:

Yeah I have one of those reference blowers that exhaust out the back. My case seems to have sufficient airflow, but I can't check to see if the blower is clogged because one of the screws that secures the blower to the card was improperly manufactured and I can't get a screwdriver head to catch.
You should be able to take off the shroud, the plastic/metal thing that goes over the heatsink and directs airflow from the fan, without removing the entire heatsink from the card. The screws that hold on the shroud are small ones flush on the sides of the heatsink. Once you take it off it will look something like this.

Blower style heatsinks are very prone to getting clogged by dust if you don't have a well-filtered case (or a very clean house).

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

TheRationalRedditor posted:

That is nowhere near the heat those modest clocks should be putting out, and there's nothing whatsoever typical about a 50C idle unless your computer is in a desert. I'm guessing you have one of the reference blowers that aren't very effective. Have you checked if it's positively clogged with dust and poo poo (or if your case has airflow problems)? Have you checked that the fan is actually doing the work it claims it is?

Just chiming in to say that this post made me take a look to see what my own GPU is doing temperature-wise, and I've discovered my gtx680 (reference design, with blower, stock clocks) is idling at 60 degrees C, with a fan speed of 36% (I have the power management mode set to prefer maximum performance, however).

Should I be concerned?

BOOTY-ADE
Aug 30, 2006

BIG KOOL TELLIN' Y'ALL TO KEEP IT TIGHT

Daysvala posted:

GPU-Z seems to corroborate what MSI Afterburner is saying about the fan speed, and it sounds like a leaf blower is running in the room so I guess it's probably running the fan at what it claims the speed is.

Yeah I have one of those reference blowers that exhaust out the back. My case seems to have sufficient airflow, but I can't check to see if the blower is clogged because one of the screws that secures the blower to the card was improperly manufactured and I can't get a screwdriver head to catch.

Are you able to pop off the fan shroud independent of the heatsink itself? I know some older GTX cards let you do that so you could clean out the heatsink and fan without removing the whole heatsink, so it's worth a shot. Usually you just have to squeeze the card at the edges to pop away some tabs so the shroud comes loose. Other than that, are you using a regular screwdriver to try to get the heatsink screws loose? It might be worth getting a mini set of screwdrivers if you haven't already; I have a set and it's just a cheap little Stanley 6 piece, but they work perfectly for video card screws. If you can get the heatsink loose, it's definitely worth cleaning off any old paste and reapplying some new stuff to see if that helps out.

BOOTY-ADE
Aug 30, 2006

BIG KOOL TELLIN' Y'ALL TO KEEP IT TIGHT
quote /= edit

craig588
Nov 19, 2005

by Nyc_Tattoo
Post your Afterburner log. There's a VERY outside chance those temperatures are possible with an incredibly high power limit in the bios. The numbers are not comparable between cards. If it says it's only at 75% power level during furmark, you probably have a card with an incredibly high power target set in the bios. On one hand that's better for performance in high load situations; on the other, you end up with incredibly high temperatures and a lower overall speed because of the increased average amount of heat. My card, modded to fairly high levels, has a 265 watt power limit and can throw out above-70C temps at the stock speed with a big open air cooler, and I'm pretty sure there have been bioses dumped from EVGA cards with a 300 watt limit.

It's probably more likely you just have a clogged up heatsink, especially if it used to run cool. It might be getting hot enough to prevent the card from even daring to reach the maximum boost available even when there's still power overhead.


It's not your fault, but sometimes I get annoyed with how vague the overclocking settings can be, especially when something goes wrong.


The Lord Bude posted:

Just chiming in to say that this post made me take a look to see what my own GPU is doing temperature-wise, and I've discovered my gtx680 (reference design, with blower, stock clocks) is idling at 60 degrees C, with a fan speed of 36% (I have the power management mode set to prefer maximum performance, however).

Should I be concerned?
That's pretty hot since at idle the card should be dropping down to 324MHz and 0.987V (the prefer maximum performance setting doesn't change that; I'm not sure exactly what it does on Keplers other than sometimes give people problems in adaptive mode), which should be easy enough for even the stock cooler to push into the 40s. That's probably a clogged up heatsink.

craig588 fucked around with this message at 05:35 on Jul 5, 2013

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

craig588 posted:

That's pretty hot since at idle the card should be dropping down to 324MHz and 0.987V (the prefer maximum performance setting doesn't change that; I'm not sure exactly what it does on Keplers other than sometimes give people problems in adaptive mode), which should be easy enough for even the stock cooler to push into the 40s. That's probably a clogged up heatsink.

According to GPU-Z it idles at 1006MHz. What could be preventing it from dropping further?

For what it's worth, I ran Unigine Valley, and it hit a maximum load temp of 82 degrees, with the fan reaching 70%.

craig588
Nov 19, 2005

by Nyc_Tattoo
I don't really trust GPU-Z for clock speed detection; it says my card is idling at 1300MHz too (using version 0.7.2, I don't know if there's a newer one). Precision or Afterburner's logs will give you a more accurate idea of what your card is doing.

craig588 fucked around with this message at 05:50 on Jul 5, 2013

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

craig588 posted:

I don't really trust GPU-Z for clock speed detection; it says my card is idling at 1300MHz too. Precision or Afterburner's logs will give you a more accurate idea of what your card is doing.

I solved the problem. Turns out Google Chrome counts as a 3d application, preventing the GPU from idling, since I keep chrome open pretty much permanently, even overnight when I'm asleep.

When I closed chrome the gpu clocks dropped right down to where they should be at idle, and the temperature dropped down to 45 degrees C, which appears to be my true idle temperature.

FetalDave
Jun 25, 2011

Moumantai!
About 1 out of 4 times, when my computer comes back from turning the monitors off after 10 minutes of idle, it locks up for about 2-3 minutes and the display won't turn on; then, after it finally does turn on, it shows me that the display driver crashed. I've tried 4 different driver versions and the problem still happens.

I posted a thread on nVidia's forums, but no one responds. https://forums.geforce.com/default/topic/547642/geforce-drivers/monitors-won-t-turn-back-on-after-screensaver-turns-them-off/

Did I seriously get another bad card? I RMA'd my 9790 that I got (that Newegg refused to take back because they say I didn't return the installation disc in the box which is loving bullshit, I know it's in there). I could RMA this 770 as well but now I'm ultra paranoid that they won't take this back for some bullshit reason as well.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

nVidia Overclocking with the 700 series (and later?): Boost 2.0 and Knowing Your Graphics Card

Part One: Key Concepts To Understand
  • Preface - A cool card is a happy card. Strive for lower temperatures. Resistance increases as temperature rises, which means that you can have an overclock that is 100% stable below a certain temperature, but above that temperature it becomes unstable. This, in spite of the fact that graphics cards can run at temperatures of up to 90ºC safely. We're talking about taking performance out of the range of normal behavior, so what works for stock settings won't be ideal for an overclock. Stay frosty. I like to have a nice side fan for any kind of non-reference cooler design, personally, though basic good airflow will solve most temperature issues. Non-reference coolers require some special attention. I'll address that briefly.

  • nVidia Greenlight - A newish program intended to introduce a standardized level of quality for all nVidia partners. Intel has something similar going on with their laptop specifications. The idea is that a card from a partner, even if it is a non-reference design, has to meet or exceed a reference card in several key parameters, including temperature handling, power delivery, and other factors that are a little more arcane to the end user but which matter nonetheless. If this program works as intended, then we can trust all manufacturers to produce a quality product, and the question as to what to buy comes down to specifics, like cooling designs, customer service, and of course price.

  • Reference vs. Non-Reference - A reference design is a design which is built to nVidia's specifications. A non-reference design (also sometimes referred to as after-market, not to be confused with after-market cooling you install yourself) is built with customized features, usually things like custom cooling solutions, or more power phases; non-reference designs tend to cost more, and they also tend to perform better. An exception at the time of writing is the EVGA GTX 780 SC ACX, which doesn't cost any more than the standard EVGA GTX 780 SC, but as time goes on and AMD eventually gets their refresh out the door I imagine pricing will shake up a bit. It always does. It used to be that manufacturers of ill repute earned their bad reputations by providing cards with, say, reduced effectiveness of power delivery to save a few pennies per card manufactured. The aforementioned nVidia Greenlight program should cut down on that crap!

    An important practical difference is that non-reference coolers typically use multiple fans instead of the standard blower & vapor chamber design, are often not 2-slot compliant (EVGA's ACX is an exception, there, but it is very common to see 2.5 slots, which might as well be 3 for the purposes of determining how much you can fit in your PCI-e slots), and exhaust their heat inside your case! Do not invest in a non-reference cooling card if you know that your case will have issues with the hot air it dumps inside the case itself. Reference design blowers have a great place in cramped cases, because they can operate on less airflow and exhaust entirely outside the case.

  • TDP - After all is said and done, the real, BIOS-level TDP determines how much power the card is allowed to use, and as such, it is the true limit of how much you can overclock even assuming you have a best-of-the-best card in every respect. You still won't be able to exceed the intended power draw without modifying the BIOS (generally not a good idea, very much an enthusiast thing), and once you hit it, the card will not go any further. It's the brick wall of overclocking graphics cards. Software offers some adjustment, but it is minor at best and not especially transparent compared to what's going on behind the scenes. You can find the listed TDP specification of your card on nVidia's page if it's a reference design, or your manufacturer's page.

  • Boost 2.0 has notable differences from Boost 1.0, the method of overclocking that was introduced when Kepler launched in 2012. In particular, we now have the capability to adjust both Power Target and Thermal Target, and to prioritize which of those two the card's overclocking pays attention to most. As will be explained later, this is fantastic.

  • Power Target - In Boost 1.0, this was the only adjustment for trying to get the most from the card. There was a bit of a balancing act that still applies to some cards in overclocking both the VRAM and the GPU/SMX Cores, since depending on available memory bandwidth you might not get the full benefits of the Core clock... Yet power given to running the GDDR5 VRAM hotter is power that is not going to the core clock. The 700-series is mostly a refresh (plus the GTX 780), which means both TDP and clock rates have been raised, including nVidia shipping 7GHz effective clock VRAM (1750MHz base clock GDDR5 modules) on the GTX 770 to meet the bandwidth needs of the fast core. Power Target is still there in Boost 2.0, and as before you'll want to raise it as high as possible because you don't want it holding you back - but there's a new setting in town that is pretty fantastic. And it is...

  • Thermal Target - This is the killer app of Boost 2.0, the reason it is better than Boost 1.0, and a fantastic tool for keeping clocks high so long as temperature is low. Adjusting - and prioritizing - the Thermal Target to, say, 94ºC will encourage the graphics card to ramp up everything it has in terms of power delivery to try to reach the set temperature. Since any respectable cooling setup won't ever let it get there, what it really means is that you're giving the card carte blanche to run full bore. Of course, you might need to overvolt the card somewhat manually to give it more operating headroom, and raise your GPU clock and memory offsets as well since you do have an important manual role in how much your card will overclock.

  • GPU Clock Offset - While your card, when given a high Power and Thermal target, will overclock itself according to some settings in the BIOS, you have a manual control to bump up the core clock. On Kepler cards, the control operates in increments of 13MHz, despite appearing to offer finer control than that; you can tell it to add +9MHz, but it'll really just add +13MHz, since it operates off of turbo bins and those come in 13MHz and 26MHz increments, the 13MHz ones being all we have direct control over (there's a short sketch of this rounding right after this list).

  • Turbo Bin - The increments of the overclock on the core. The card's BIOS, when operating on, say, just the power and thermal targets maxed with no manual clock offset, will turbo up to a given point. We only have granular control over the actual clockrate of the GPU/SMXes, specifically in 13MHz increments. Just how it is with Kepler.

  • Overvoltage/Overvolting - The ability to adjust the voltage manually (I say manually, because the card will automatically raise the voltage to keep up with the overclock - it's a good idea to go ahead and raise the voltage manually, though, because it helps to prevent instability as the card dynamically clocks up and down in demanding scenarios). You want to maximize the amount of juice that the GPU has. The stock voltage adjustment range across all nVidia cards is pretty much 100% safe to crank to the maximum allowed, provided you can keep the card cool, and won't even meaningfully impact the life of the card. One interpretation of this is that nVidia is draconian in their willingness to allow overvoltage. Another is that they're encouraging the protection of their hardware. Either way, you can't feed it enough voltage to harm it unless you're using a modified BIOS. I mentioned earlier that doing so is a bad idea, I'll reiterate that now. Don't use a modified BIOS. Don't do that. Please.
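
To make the turbo bin behavior concrete, here's a tiny sketch in Python - my own illustration, not anything built into Precision X or Afterburner. Round-to-nearest is an assumption on my part, but it matches the +9MHz-becomes-+13MHz example above.

code:

BIN_MHZ = 13  # Kepler's turbo bin size - the only granularity we control directly

def effective_offset(requested_mhz: int) -> int:
    """Snap a requested GPU clock offset to a whole number of 13MHz turbo bins.

    Round-to-nearest is an assumption, but it matches the +9MHz -> +13MHz
    example in the list above.
    """
    return round(requested_mhz / BIN_MHZ) * BIN_MHZ

for req in (9, 13, 20, 39, 45):
    print(f"+{req}MHz requested -> +{effective_offset(req)}MHz actually applied")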


Part Two: Overclocking Your Card with Boost 2.0

If you read the above, then you're already mostly prepped to do the actual overclocking. If you own an EVGA card or can find a download somewhere, I prefer EVGA Precision X. It is in version 4.2.0 at the time of this writing and needs an update to fix a teensy instability that manifests after long periods of use, but it still works really well and it is tuned specifically for nVidia cards and thus the user interface is straightforward as can be. The software is free, it's just restricted to EVGA card owners... in the sense that you can't download it from EVGA unless you have a card of theirs. Third parties have opted to host it, though, and EVGA isn't exactly cracking down, here.

MSI offers an alternative developed by literally the same coder, basically the same product. It covers both vendors since MSI makes both nVidia and AMD cards, with a key distinguishing feature being that it's freely available in addition to just being free, and MSI allows the developer (who previously worked on Rivatuner, if you remember the good old days) to push beta software for public release. This is good and bad. It isn't really any harder to use, the concepts are identical, so it's strictly a matter of preference. I prefer Precision X, others might prefer MSI Afterburner; at the end of the day they do the same thing in basically the same way, allowing you to overclock and set custom fan profiles without having to use other, less purpose-built software like nVidiaInspector, which can be a little clunkier to overclock with.

The big deal with Boost 2.0 is Thermal Target. Seriously, it's all about switching over to that. In Precision, you move the sliders for Power and Thermal target to the farthest right, and then you make sure Thermal Target is prioritized by clicking the little arrow that by default points up at Power Target. What this will do is tell the card "crank everything until this temperature is met." Since you can move that temperature well beyond what any card in a case with decent airflow is going to ever actually hit, you can effectively tell the card to do its background automatic baseline overclocking balls-to-the-wall, 100% of the time. It doesn't interfere with power saving in the least, and when playing older games on powerful hardware like a GTX 760 or above, it will still clock down automatically because it just doesn't need to use more power than that. But in modern games, it will keep clocks as high as possible. Thermal Target is waaay better than Power Target for encouraging cards to run quickly.

Manual Adjustments, Overvolting, and Testing
Once you have your card set up with the baseline maximum Power and Thermal targets, make sure to increase the voltage by as much as possible. On the GTX 780, this is a measly +38mV, which is nothing at all, and if the card doesn't come from the factory with a pretty decent voltage it could impact overclocking ability. EVGA's SC ACX cards ship with the ability to reach 1.2V, which is a respectable target - if your GPU and Cores will hit a clock rate, they will probably do it on 1.2V, and that is completely safe to run all day long for years provided the card is sufficiently cooled. Other cards in the 700 family have different voltage ranges, but, again, crank 'em up for manual overclocking, you're gonna need the juice.

The easiest way to start an overclock on the core itself is to read a ton of reviews, see what overclocking results reviewers end up with, factor in that they're running open-air test boards and see what happens if you shoot for their overclock levels by using the same offset +MHz on the GPU clock adjustment. Leave memory alone for now, we want one variable in play. Copycat overclocking works to show you what will happen right away when you start testing with software like Unigine Heaven, Unigine Valley, 3Dmark11, 3DMark (2013)... It'll probably crash out unless you got a really nice chip. Not that I'm saying nVidia sends the best ones to reviewers, but... Well, use your judgment. Crashing out is okay. You'll need to re-set everything, including your voltage adjustment, and reduce the overclock (paying attention to the actual clock rate of the card as you do so, knowing that it will be reduced in increments of 13MHz). Eventually, through a process of trial and error, you'll find a clock rate that passes the tests well. It probably won't be your final clock rate, since games have a way of showing testing software who's boss, but it'll work for now.

Some advocate a "start low, build up" method, but I feel like that's kind of a waste of time given how gracefully the drivers and the OS handle a videocard GPU crash - what's the point of crawling up 13MHz at a time if you can just start high and back off as necessary, saving time and getting to the meat of your OC faster? At the worst, you'll need to Ctrl+Alt+Del to kill the 3D application that crashed your card, and then get back to your overclocking software to readjust.
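
If it helps to picture the loop, here's the "start high, back off" process as a toy sketch. It's purely illustrative: passes_stress_test is a hypothetical stand-in for your manual benchmark runs, not a real API.

code:

BIN_MHZ = 13  # back off one turbo bin per failed run

def find_stable_offset(start_offset_mhz, passes_stress_test):
    """Walk a core clock offset down one 13MHz bin at a time until it stops crashing.

    passes_stress_test is a hypothetical stand-in for your manual loop of
    Heaven/Valley/3DMark runs (and, eventually, real games): it should return
    False if the run crashed at the given offset.
    """
    offset = start_offset_mhz
    while offset > 0 and not passes_stress_test(offset):
        offset -= BIN_MHZ  # reapply your voltage and targets in the tool, then retest
    return offset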

Overclocking memory is its own thing. It's also often a bad idea, as GDDR5 is some of the very least stable stuff there is in terms of running it out of its stock designed performance, so keep that in mind, especially if you're using an already fast GDDR5 setup like the GTX 770 or a GTX 780 Ti - improvements past the stock configurations there are often pretty much academic, and memory instability is one of the most pernicious and difficult things to ferret out in terms of system stability and seeing how stable your overclock really is. You need to know the basics of how GDDR5 works to do it, so I'll explain that briefly. The clockrate you see is halfway in between the effective clockrate and the base clockrate. If your card ships with 6000MHz GDDR5, that means the base clockrate of the actual GDDR5 memory modules is 1500MHz. Multiply that by two and you see a clockrate of 3000MHz in your overclocking software; multiply what you see in the software by two and you have the effective real clockrate of the memory in terms of operations per cycle. So, say you want to push your GDDR5 from 6000MHz to 7000MHz (good luck with that), you'd do some quick math and go with +500MHz on the memory offset. That's a 250MHz overclock of the base clock of the VRAM, up from 1500MHz to 1750MHz, and if you think it's guaranteed, you're out of your head. A smaller value will likely be possible, though, and depending on the card you're using, it may be either very helpful or kind of a risky waste of TDP.
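
Here's that clock arithmetic written out as a quick sketch (just the math described above, not any tool's API): the offset is applied to the doubled clock your overclocking software displays, and the effective rate is double that again.

code:

def gddr5_clocks(base_mhz, offset_mhz=0):
    """Return (base, reported, effective) GDDR5 clocks in MHz.

    'reported' is what Precision X / Afterburner display (double the module
    clock, plus your offset); 'effective' is the marketing number, double that again.
    """
    reported = base_mhz * 2 + offset_mhz
    return reported / 2, reported, reported * 2

# Stock 1500MHz modules pushed with a +500MHz offset: base 1750MHz,
# reported 3500MHz, effective 7000MHz - the 6000MHz -> 7000MHz example above.
print(gddr5_clocks(1500, 500))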

Take the GTX 780, a card which ships with 1500MHz GDDR5 modules, effective clockrate 6GHz, with a wide 384-bit memory bus for a solid 288.4GB/sec of memory bandwidth. That's pretty damned high and doesn't really bottleneck anything the card is doing. A "modest" +200 offset overclock bumps it up to a 1600MHz base clock, 6400MHz effective clock, and is likely within reach of nearly every card. It also increases the memory bandwidth to 307.6GB/sec. While many cards will be able to get higher overclocks from the base 1500MHz VRAM, at that point you start asking why, exactly, it needs more memory bandwidth. 307.6GB/sec is a ton. Now, compare that to the GTX 680, which was rather starved for memory bandwidth at 192.2GB/sec stock, and did need a solid memory overclock to realize the benefits of a solid core overclock. Thank the 256-bit bus for that issue.

The GTX 770 and GTX 780Ti ship with actual 1750MHz GDDR5 modules, for an effective clockrate of 7GHz from the factory. When this was revealed, it was a real achievement in terms of memory controller design, and it means that the GTX 770, which is in every regard a 680 but with higher clock rates and the newer Boost 2.0 technology, has a default memory bandwidth of 224GB/sec. That allows the core and shaders to realize their improved stock clock rate and also to overclock without much difficulty. The 780Ti is about as far from bandwidth starved as it gets, as well.

You might wonder why, at 7GHz GDDR5, the GTX 770 has ~60GB/sec less memory bandwidth than the stock GTX 780 with its 1500MHz GDDR5 (6GHz effective clock) - that's all in the memory bus. A 256-bit bus is much narrower than a 384-bit bus, and so even very fast GDDR5 doesn't outperform the real-world memory bandwidth of the higher end card. Narrower buses mean you get less bandwidth per overclock of your memory, too; with just +100MHz base clock/+200 memory clock offset, the GTX 780 grabs 20GB/sec of additional bandwidth. The GTX 770 would have to do considerably more to get the same degree of memory bandwidth improvement.
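
For reference, the bandwidth figures above come from a simple formula - effective memory rate times bus width, divided by eight - so you can sanity-check them yourself. A quick sketch, using the stock clocks as listed (the 780's memory actually runs a hair over 6GHz, which is where the 288.4GB/sec figure comes from):

code:

def mem_bandwidth_gbs(effective_mhz, bus_width_bits):
    """Peak memory bandwidth in GB/s: effective rate (MT/s) x bus width (bits) / 8 / 1000."""
    return effective_mhz * bus_width_bits / 8 / 1000

print(mem_bandwidth_gbs(6008, 384))  # GTX 780 stock (just over 6GHz, 384-bit): ~288.4 GB/s
print(mem_bandwidth_gbs(6408, 384))  # with the +200 offset above: ~307.6 GB/s
print(mem_bandwidth_gbs(7000, 256))  # GTX 770 stock (7GHz, 256-bit): ~224 GB/s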

I want to emphasize that memory overclocking is not something that you should do just casually or just because. If your memory bandwidth bottlenecks your card's performance, bring it up as required, but any faster than necessary is just wasting power and inviting instability into your system as well.


The Nitty Gritty of Testing Your Overclock
We don't actually have any solid means to TRULY validate overclocks. That's just the lovely truth of video card overclocking. Our best tools are imperfect, and you will probably run into some games that exercise the logic of the chip in unexpected ways at unexpected times and crash what was previously thought to be a stable overclock. This is just part of running parts outside of their engineering specifications. Hell, for truly high-precision applications, even using factory overclocked cards can result in instabilities. Accept that the more intensive your overclock is, the less likely it is to be truly stable, and accept that you may find that what you previously thought was a stable clock turns out to be unstable in "some new game X." It's just part of it, I'm afraid.

Benchmarks and the like are useful up to a point to establish initially whether your overclock is crash-prone. Run several loops of the latest version of Unigine's Heaven and Valley benchmarks (handy because they do just keep on looping, so you have a prolonged stability test); run 3DMark11 and 3DMark (2013) over and over. They will give you a good idea of the basic stability of your overclock. But eventually, shock and awe, you'll actually have to play games to see if your overclock holds up. DX9, DX10, and DX11 games all run on their own paths, as it were, in the GPU/shaders, and beyond that every engine has its own quirks that will exercise different parts more or less. The nice thing is that you're not really testing, at this point, you're just playing your games and if you notice it isn't stable, adjust downward... With an important caveat.

Drivers can also affect the stability of games, and can give you a false sense of insecurity about your overclock - at the time of this writing, we've just had the 320 series of drivers introduced, and they have some issues. nVidia has even recommended that if you encounter said issues, uninstall them and go back to using the 314 drivers. This is only an option for people using older cards or a modified driver, the latter of which is outside the scope of this guide. They are aware of bugs and are squashing them as quickly as possible, so it shouldn't be too long before we hit a stable point again. I have had very few issues at all with the 320.49 beta drivers, which implement a number of bug fixes. New drivers, especially release drivers for products, tend to suck, regardless of which company programs them. nVidia's usual driver advantage does not apply here at all.

So play your games, and if you get crashing, back off of your GPU. If you see weird texture behavior, back off your VRAM. If none of it helps and the issue persists even at stock clocks, it's either a faulty card or just the drivers going through teething, as they do.

One Last Thing - Fan Profiles!
Since a cool card is a happy card, and we're using the Thermal Target for maximum overclocks, let's go ahead and cover the concept of setting up custom fan profiles. Both Precision X and Afterburner give you pretty much an identical means to adjust the fan control. The option may be located in a different place, but it should be easy enough to see. On Precision, it's right by the fan percentage usage indicator. You have to leave the fan control on "Auto" to reap the benefits of setting a manual fan adjustment, but you can change the fan curve by setting specific temperature points at which it switches to a higher setting. Double-click the fan curve quickly to turn it from a blocky mess into a smoothed line. The default nVidia fan control on hardware and in the drivers, with a reference model, will tend to do its best to stay as quiet as possible for as long as possible.

You have to decide for yourself where you want to make the trade-off in terms of acceptable noise levels and cooling power, but both reference and aftermarket fans can pull some serious heat off of the card's vitals and exhaust however they do if you choose to set a more aggressive cooling curve. I set mine up to more or less go with fan% exactly at temperature in degrees celsius, starting at 40/40, then 50/50, then 60/60; I ramp it up starting at 67ºC, and it's at max by the time the GPU is at 70ºC. I have a quiet after-market cooler that doesn't become obnoxiously loud even at this high setting, and thankfully my airflow is nice and my system's bits are kept well-cooled anyway with special emphasis on the GPU and CPU getting lots of incoming cool air. Only the most graphically intensive modern games can get my card into the 55ºC+ range, and it usually idles between 22ºC and 27ºC.
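
If it helps to see the curve written out, here's a sketch of my profile expressed as code rather than as points in the Precision X GUI. The behavior below 40ºC and the flat hold between 60ºC and 67ºC are my own assumptions; the prose above only pins down the 40/40, 50/50, 60/60 points and the ramp to 100% between 67ºC and 70ºC.

code:

def fan_percent(gpu_temp_c):
    """Fan speed (%) for a given GPU temperature, per the curve described above.

    The 30% floor at low temperatures and the flat 60% hold from 60C to 67C
    are assumptions; the prose only pins down 40/40, 50/50, 60/60, the ramp
    starting at 67C, and 100% by 70C.
    """
    if gpu_temp_c >= 70:
        return 100.0
    if gpu_temp_c >= 67:
        # steep ramp from 60% at 67C to 100% at 70C
        return 60.0 + (gpu_temp_c - 67) * (100.0 - 60.0) / 3.0
    if gpu_temp_c >= 60:
        return 60.0
    return max(30.0, float(gpu_temp_c))  # fan% tracks temperature 1:1 (40/40, 50/50)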

That will not be the case for reference blower designs unless you run the fan full bore all the time unnecessarily. The GTX 680 and other 600-series Kepler cards using Boost 1.0 would usually throttle by 13MHz (one turbo bin) starting at 70ºC and every 10ºC thereafter. As stated earlier, GPUs can safely run in the high 80ºC range, so even if you lose a little clockrate here, if you've got cramped conditions and are running a reference design blower fan that exhausts out of the case or whatever, don't freak out about temperatures that would be :stonk: on a CPU! GPUs are made of sterner stuff, honest. But do consider that the stability of your overclock may be affected by higher temperatures. And if you're getting 80ºC temperatures DESPITE an after-market cooling solution (examples: MSI Frozr series, Gigabyte Windforce, Asus DirectCUII, or the EVGA ACX), then you either have a faulty card or serious case airflow issues to address.

Agreed fucked around with this message at 17:49 on Jan 29, 2014

Literal Hamster
Mar 11, 2012

YOSPOS

Klyith posted:

You should be able to take off the shroud, the plastic/metal thing that goes over the heatsink and directs airflow from the fan, without removing the entire heatsink from the card. The screws that hold on the shroud are small ones flush on the sides of the heatsink. Once you take it off it will look something like this.

Blower style heatsinks are very prone to getting clogged by dust if you don't have a well-filtered case (or a very clean house).

Yeah, one of the seven screws that secure the shroud to the top of the card was improperly manufactured and the interior surface is too smooth to catch a head from any of my tools. I'm not sure how I'm going to get past that problem yet.

craig588 posted:

Post your Afterburner log. There's a VERY outside chance those temperatures are possible with an incredibly high power limit in the bios. The numbers are not comparable between cards. If it says it's only at 75% power level during furmark, you probably have a card with an incredibly high power target set in the bios. On one hand that's better for performance in high load situations; on the other, you end up with incredibly high temperatures and a lower overall speed because of the increased average amount of heat. My card, modded to fairly high levels, has a 265 watt power limit and can throw out above-70C temps at the stock speed with a big open air cooler, and I'm pretty sure there have been bioses dumped from EVGA cards with a 300 watt limit.

It's probably more likely you just have a clogged up heatsink, especially if it used to run cool. It might be getting hot enough to prevent the card from even daring to reach the maximum boost available even when there's still power overhead.


It's not your fault, but sometimes I get annoyed with how vague the overclocking settings can be, especially when something goes wrong.

It used to run much cooler than this, and the problem seems to be getting worse as time goes on, so I guess it's probably a clogged heatsink. I just wanted to make sure that the temps were too high and it wasn't my imagination.

MSI Afterburner paste: http://pastebin.com/crh6XZqq

uhhhhahhhhohahhh
Oct 9, 2012
All nvidia GPUs throttle in FurMark now, no matter your temperature. They changed something to do with the profile to stop it running at max clocks. You need to use something else to stress test or Google for a modified FurMark profile.

Magic Underwear
May 14, 2003


Young Orc

uhhhhahhhhohahhh posted:

All nvidia GPUs throttle in FurMark now, no matter your temperature. They changed something to do with the profile to stop it running at max clocks. You need to use something else to stress test or Google for a modified FurMark profile.

Furmark is considered (correctly, imo) a power virus. If you want a benchmark for how a GPU plays games, there are synthetic benchmarks that will give you a good idea. Furmark won't.

craig588
Nov 19, 2005

by Nyc_Tattoo

Daysvala posted:

Yeah, one of the seven screws that secure the shroud to the top of the card was improperly manufactured and the interior surface is too smooth to catch a head from any of my tools. I'm not sure how I'm going to get past that problem yet.


It used to run much cooler than this, and the problem seems to be getting worse as time goes on, so I guess it's probably a clogged heatsink. I just wanted to make sure that the temps were too high and it wasn't my imagination.

MSI Afterburner paste: http://pastebin.com/crh6XZqq

Nothing looks crazy; the temperature drops very quickly when the fan spins up (only like 15 seconds to go from fully loaded to nearly idle temperatures), and the furmark power usage and clocks look right. It might be dusty, but you can probably blow it out with a can of compressed air. It's probably still clean enough you can ignore it if you REALLY want to, and at this point I'd only blow it out, not fully disassemble it.

You didn't play anything long enough to get the card throttling, unless you're talking about the small difference that furmark showed. That's normal because of how boost works; if the card is drawing too much power it will automatically slow down and lower voltages to stay within the power target. There are also 13MHz steps at certain temperature intervals (I don't remember the exact temperatures or if they can be defined on a per card level in the bios). There is a much more extreme version of temperature throttling that goes to somewhere in the 700MHz range (again, another thing I don't remember the specifics on) that I thought you might have been experiencing.

It's best to just ignore all of the minor clock speed changes because the card is just working magic to make sure nothing gets too hot.

Or say gently caress all that, I want my card max boosting all the time. Then mod your bios for crazy high power levels and install some 120mm fans with a bigger cooler and still barely maintain 70C temperatures.

Oh yeah, also your idle temperatures are only in the high 30s to low 40s. I don't know if you changed something since you first checked, but they're perfectly normal now.


Magic Underwear posted:

Furmark is considered (correctly, imo) a power virus. If you want a benchmark for how a GPU plays games, there are synthetic benchmarks that will give you a good idea. Furmark won't.
With the way Nvidia (and AMD?) cards are going, it's much less a GPU stability test and more a power supply test. If any problems show up they're more likely to be power supply issues than from the card itself. It's still functional in the sense of drawing the most power possible; it's just that Nvidia hardware has gotten smart enough to automatically speed up for the 99% of cases when games aren't drawing the maximum amount of power.

craig588 fucked around with this message at 08:00 on Jul 5, 2013

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.

FetalDave posted:

About 1 out of 4 times, when my computer comes back from turning the monitors off after 10 minutes of idle, it locks up for about 2-3 minutes and the display won't turn on; then, after it finally does turn on, it shows me that the display driver crashed. I've tried 4 different driver versions and the problem still happens.

I posted a thread on nVidia's forums, but no one responds. https://forums.geforce.com/default/topic/547642/geforce-drivers/monitors-won-t-turn-back-on-after-screensaver-turns-them-off/

Did I seriously get another bad card? I RMA'd my 9790 that I got (that Newegg refused to take back because they say I didn't return the installation disc in the box which is loving bullshit, I know it's in there). I could RMA this 770 as well but now I'm ultra paranoid that they won't take this back for some bullshit reason as well.
Try disabling the display power/sleep modes and just turn your monitor off manually. It's a tradition that certain hardware combinations (on certain Asus motherboard revisions for example) have serious problems with rest states and suspend. See if that stops the problem first

FetalDave
Jun 25, 2011

Moumantai!

TheRationalRedditor posted:

Try disabling the display power/sleep modes and just turn your monitor off manually. It's a tradition that certain hardware combinations (on certain Asus motherboard revisions for example) have serious problems with rest states and suspend. See if that stops the problem first

I do have an ASUS motherboard as well. I didn't know there was an issue with some of their boards. I'll play with that and see what happens.

Thanks!

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I've used Asus motherboards going back as far as I can remember. Ever since the P45Q-e, sleep has never worked for me, right on up til today with the Sabertooth P67. I guess we'll see if it'll work with the Sabertooth Z87 I'll be swapping around with before too long, but I'm not holding my breath.

Sleep is a neat feature, it works great on my laptop and I've witnessed it working fine on many a computer! But never on my own Asus builds. If I used it I'd care, I guess, but I don't so... I don't. SSDs are so fast that shutting down and powering up on my computer is many times faster than waking from sleep on my laptop.

Edit: Any comments on the Kepler second generation/Boost 2.0 overclocking guide I wrote? The idea is that it combines the abstract knowledge about videocards you need to have a good foundation to work with anything, with concrete instructions as to how to work with Boost 2.0 specifically. I spent rather a lot of time on it, I'm hoping the lack of feedback is just a "looks good to me" unspoken rather than a "what is the point of that?" unspoken.

I am leaning towards trimming the section on memory bandwidth, as it's more technical than necessary for either goal, really, and bogs the guide down in some unnecessary specifics. Could replace most of it with a much shorter explanation of bus bandwidth and memory speed that doesn't get quite so specific but still makes the point that a narrow bus needs faster VRAM to prevent performance loss.

Apart from that I don't know what else to do with it to improve it on my own (maybe pictures?) so others' eyes are very welcome.

Agreed fucked around with this message at 09:12 on Jul 5, 2013

craig588
Nov 19, 2005

by Nyc_Tattoo
Yeah, it looks good to me. I think the problem is that right now like 5 people on the forums have 7xx cards, so there aren't a lot of people to check it. I think the voltage can only/always be boosted to 1.2v on 780s and 1.212v on 770s regardless of who made the card, or no? I may have read reviews wrong.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

craig588 posted:

Yeah, it looks good to me. I think the problem is that right now like 5 people on the forums have 7xx cards, so there aren't a lot of people to check it. I think the voltage can only/always be boosted to 1.2v on 780s and 1.212v on 770s regardless of who made the card, or no? I may have read reviews wrong.

Depends strictly on the manufacturer and the product itself. Right now the product stacks aren't super diverse, and there are a lot of 780s from the reference batch (from various brands) that ship from the factory topping out underneath 1.2V. Same deal with 770s. It's led to there being more than one vBIOS out there just for flashing so it'll run up to 1.21V.

I imagine as the really high end products start coming down there will be some more variance in the power situation (Lightning edition from MSI, Classified models from EVGA... Gigabyte already has a pretty fancy unit with like 10+2 power phases that's not just on paper iirc).

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride

The Lord Bude posted:

I solved the problem. Turns out Google Chrome counts as a 3d application, preventing the GPU from idling, since I keep chrome open pretty much permanently, even overnight when I'm asleep.

When I closed chrome the gpu clocks dropped right down to where they should be at idle, and the temperature dropped down to 45 degrees C, which appears to be my true idle temperature.

Despite not having multiple monitors, I use multiple monitor power saver from nvinspector to set thresholds for this. The negative is that occasionally it takes a second to switch into P0 mode when starting up a game, but the plus side is you can fine tune the GPU and VPU usage %s that will kick up the clock speeds based on your regular usage patterns, so you can, you know... run whatever in chrome and it won't kick into full gaming mode.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

Dogen posted:

Despite not having multiple monitors, I use multiple monitor power saver from nvinspector to set thresholds for this. The negative is that occasionally it takes a second to switch into P0 mode when starting up a game, but the plus side is you can fine tune the GPU and VPU usage %s that will kick up the clock speeds based on your regular usage patterns, so you can, you know... run whatever in chrome and it won't kick into full gaming mode.

The power consumption in and of itself is irrelevant to me, since I don't pay for electricity in my household, I was mainly concerned if the card maintaining a higher temperature/clockspeed more of the time would be detrimental to the card itself over a reasonable lifespan (3 years tops).

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride

The Lord Bude posted:

The power consumption in and of itself is irrelevant to me, since I don't pay for electricity in my household, I was mainly concerned if the card maintaining a higher temperature/clockspeed more of the time would be detrimental to the card itself over a reasonable lifespan (3 years tops).

Well right, I mean that's just the name of the utility. I would think the card would probably be alright, but really having that extra heat in your case and fan noise seems like it would be annoying.

Drythe
Aug 26, 2012


 
I just bought the Asus 770 and can only tune the voltage up to 1.212, however the card is still being a beast for whatever I use it for. I'm using my old 560 as a dedicated Physx card, I dunno if I should keep it that way or not.

FetalDave
Jun 25, 2011

Moumantai!

Agreed posted:

I've used Asus motherboards going back as far as I can remember. Ever since the P45Q-e, sleep has never worked for me, right on up til today with the Sabertooth P67. I guess we'll see if it'll work with the Sabertooth Z87 I'll be swapping around with before too long, but I'm not holding my breath.

I've never actually used the "sleep" function on any computers I use. I just set the monitors to turn off after 10 minutes instead of using a screensaver. The card doesn't even have to return the computer from suspend, just literally switch the monitors on and it can't accomplish that half the time.

Cavauro
Jan 9, 2008

Drythe posted:

I just bought the Asus 770 and can only tune the voltage up to 1.212, however the card is still being a beast for whatever I use it for. I'm using my old 560 as a dedicated Physx card, I dunno if I should keep it that way or not.


When the 600 series was still crescent-fresh, someone here mentioned the vanilla 560 being the minimum for a worthwhile dedicated PhysX card. I'm not sure if that has changed by now but if anything seems fishy you can always do some testing.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

Dogen posted:

Well right, I mean that's just the name of the utility. I would think the card would probably be alright, but really having that extra heat in your case and fan noise seems like it would be annoying.

Oh I understood that it was the name of the utility. It maintains the current temperature using a fan speed of around 35%, and it only drops by about 5% when I close chrome so it doesn't make that much noise - I've spent the past 3 years sleeping with this PC in my bedroom, and I can tell you right now the gtx680 is a hell of a lot quieter, and probably cooler than my old 5970 ever was. I was only worried about the wellbeing of the card.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Drythe posted:

I just bought the Asus 770 and can only tune the voltage up to 1.212, however the card is still being a beast for whatever I use it for. I'm using my old 560 as a dedicated Physx card, I dunno if I should keep it that way or not.

If you play GPU PhysX enhanced games, by all means. If not, it's just sitting there sipping power. I swapped from a GTX 580 that was dramatically underutilized to a GTX 650 Ti because it saves a shitload of power when it's not in use compared to Fermi's power saving mode, and it provides ample PhysX calculation power without being too slow to keep up with the GTX 780 when the rendering is going on. In fact the 650 Ti benches very close to the 560 Ti, and it overclocks easily as well - I've got mine running at 1088MHz on the GPU/SMXes (which are the CUDA cores used for PhysX when using a dedicated card), with a memory overclock of +400MHz in Precision (or up from a base clock of 1350MHz to 1550MHz), which helps ensure that there are no bandwidth issues. That puts it just a hair under 100GB/sec, which is plenty to keep up for PhysX.

I'm playing the Batman games and Borderlands 2 with all expansions (finally, I never had time before but now I do) right now, and of course I intend to download and play the crap out of Metro: Last Light as soon as I can get back on my home connection with its unlimited bandwidth and real internet instead of this egregious phone internet pricing. I don't even want to think about how much it's going to cost me in overages today. It does way more with "Advanced PhysX" (GPU accelerated) than its predecessor.

I also have Mafia II, Alice: Madness Returns, and pretty much every other PhysX game because, while I had the GTX 580 (that Flute is now rocking as a badass rendering card) underutilized as my PhysX card, the tail wagged the dog for a while...

Drythe
Aug 26, 2012


 
All I was concerned about was it not performing as well as just having the 770 do it along with graphics processing. I bought a high end PSU when I first bought the computer in case I ever wanted a second card, and I don't pay utilities so power consumption isn't a concern to me.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Drythe posted:

All I was concerned about was it not performing as well as just having the 770 do it along with graphics processing. I bought a high end PSU when I first bought the computer in case I ever wanted a second card, and I don't pay utilities so power consumption isn't a concern to me.

There are some scenarios in some games where, with heavy PhysX going on, a 680 for rendering along with a dedicated PhysX processor of suitable make and model is faster than two 680s (which are, you know, the root design of the 770) in SLI.

Unreal 4 will support PhysX, but whether developers will embrace a proprietary physics engine that pretty much needs an additional card remains to be seen - especially when the consoles are going to clearly encourage simultaneous compute as a big step forward for all kinds of fast calculations (wave tracing for sound, robust ray-tracing for light, quick/imperfect volumetric shadow volumes, realistic deformation, and general physics).

Since you already have it on hand, hang onto it for now if you play PhysX games, but the future of PhysX looks kind of mediocre to me. Post-Kepler, Maxwell will be doing some seriously neat poo poo, including slapping an ARM coprocessor on the card along with the GPU, shaders, and VRAM. Graphics cards are going to be doing so much cool stuff in the future.

Animal
Apr 8, 2003

Great job on the guide, Agreed.

As of now I have my Geforce 780 overclocked to ~1202/6500 boost and it's stable in Crysis 3, with occasional dips to ~1175 on the core. The reference cooler actually does a great job, although I can only imagine how sweet the lower noise levels on the ACX cards are. This seems to be a really good chip, as it's clocking faster than most EVGA ACX overclocks, which seem to average about ~1150 on the core. I am sure the chip can clock even higher, but I don't wanna push the reference cooler to its limits, and I'm sure I'll hit a voltage wall anyways. Maybe if I had a beefier cooler I would do one of those [H] BIOS flashes in order to increase voltage :getin:

I did have some hangs playing Spec Ops: The Line at this speed after I modified the game for 2x2 supersampling, but I am unsure if this is due to memory speed, core speed, or lovely drivers. I eventually finished the game at much reduced speeds. Crysis 3, Metro Last Light, and Battlefield 3 seem to be perfectly stable, though, so for now I'm sticking with the overclock until more mature drivers come out and I can do more conclusive testing.

I absolutely love temperature based overclocking. It's the right way to go; I don't understand why it hadn't been attempted before. As for the Geforce 780, it is the first high end card that can handle any top of the line title I throw at it, at the high resolutions I enjoy. This is like a new golden age of PC gaming.

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map

Animal posted:

This is like a new golden age of PC gaming.

Sure is. Nevermind all the Oculus Rifts enhancing immersion or the microtransaction-based economy motivating indie developers to step up and deliver original ideas. Yup, the new golden age of PC gaming is based on the silicon of a workstation design held back from us for too long and our ability to warm it up with electricity. :c00lbert:
