craig588
Nov 19, 2005

by Nyc_Tattoo


Power target percentage numbers are arbitrary and not comparable between cards. One card's 200% could mean the same as your card's 112%. That's why the power target setting is always safe to max out (as long as you have enough case cooling): the manufacturer sets both the limits and what the values actually mean.
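To make that concrete, here's a toy illustration (all numbers invented): the percentage only means something relative to whatever baseline wattage the vendor decided is 100%.

```python
# Toy illustration (made-up numbers): the same percentage maps to very
# different wattages because each vendor picks its own 100% baseline.
def power_limit_watts(base_tdp_watts, target_percent):
    """Convert a power-target percentage into an absolute wattage."""
    return base_tdp_watts * target_percent / 100

# Card A ships with a 170 W baseline, card B with a 300 W baseline:
card_a = power_limit_watts(170, 200)  # 340.0 W at "200%"
card_b = power_limit_watts(300, 112)  # 336.0 W at "112%"
# Nearly identical real limits despite wildly different percentages.
```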


Nondescript Van
May 2, 2007

Gats N Party Hats


Does any other game recording software use the H.264 encoder on the newer Nvidia cards, or do I need to wait for that ShadowPlay thing to be released? I've tried a few, but they kill my frame rate, and MSI Afterburner's video is about 10 FPS even though I told it 30 and then 60.

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.


craig588 posted:

Power target percentage numbers are arbitrary and not comparable between cards. One card's 200% could mean the same as your card's 112%. That's why the power target setting is always safe to max out (as long as you have enough case cooling): the manufacturer sets both the limits and what the values actually mean.
I figured as much, but I remember you talking about finding some way on your Gigabyte card that let you jack up its limits to no ill effect, so I don't know what the hell it truly purports to represent!

TheRationalRedditor fucked around with this message at 18:15 on Jun 21, 2013

craig588
Nov 19, 2005

by Nyc_Tattoo


A long time ago I edited the BIOS with a hex editor and manually corrected the checksum, but now there's a tool you can use. It's better to edit the BIOS with really high limits than to use one of the overclocking software settings that ignores all of the limits: editing the BIOS retains all of the power management functionality, while the software mode just locks the card at max voltage and clock speed even when it's idling.
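For the curious, the checksum part follows the legacy PCI option-ROM rule: all bytes of the image have to sum to zero mod 256, so after a hex edit you rebalance one byte. A minimal sketch of the idea (my own helper, not the tool mentioned above; real VBIOS images have several checksummed regions, so don't flash anything based on this):

```python
def fix_checksum(rom):
    """Adjust the final byte so all bytes sum to 0 mod 256,
    per the legacy PCI option-ROM checksum rule."""
    rom = bytearray(rom)
    partial = sum(rom[:-1]) % 256
    rom[-1] = (256 - partial) % 256
    return rom

# Tiny fake image: 0x55AA signature plus two payload bytes.
patched = fix_checksum(bytearray(b"\x55\xaa\x02\x00"))
assert sum(patched) % 256 == 0
```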

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.


craig588 posted:

A long time ago I edited the BIOS with a hex editor and manually corrected the checksum, but now there's a tool you can use. It's better to edit the BIOS with really high limits than to use one of the overclocking software settings that ignores all of the limits: editing the BIOS retains all of the power management functionality, while the software mode just locks the card at max voltage and clock speed even when it's idling.
That's interesting. What hallmark values should I be looking at altering, and which are fatal mistakes to stay away from? lol

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


I know Agreed was interested in this, and others might be, too:

PCIe scaling on 2xTitan SLI, comparing x16 Gen3 to x16 Gen2 (x8 Gen3 equivalent):



Not terribly interesting for a single monitor. Surround setups might be more interesting; click through for that. Unfortunately, x16 Gen2 was the slowest configuration tested. They didn't bother trying Tri-SLI at x16/x16/x8.

terre packet
Nov 24, 2007


I used these instructions to add anti-aliasing to EVE Online with Nvidia Inspector, but it only works in full screen. Is it even possible to force anti-aliasing if a game is in borderless window mode?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



Factory Factory posted:

I know Agreed was interested in this, and others might be, too:

PCIe scaling on 2xTitan SLI, comparing x16 Gen3 to x16 Gen2 (x8 Gen3 equivalent):



Not terribly interesting for a single monitor. Surround setups might be more interesting; click through for that. Unfortunately, x16 Gen2 was the slowest configuration tested. They didn't bother trying Tri-SLI at x16/x16/x8.

For single monitor, it doesn't seem especially arsed at PCI-e 2.0 8x either. Maybe slightly, but in testing I am scoring just like everyone else in the two 3DMarks I use, and better than several open-air non-overclocked benches by a lot (for example, I've got mine running faster in 3DMark11 Performance than dual 670s in some testbed benches; that's not a card hitting a wall). And it eats Metro for breakfast, handles Bioshock Infinite no problem, and I can finally turn on fuckin' Ubersampling in The Witcher 2. It looks good.

None of that would be the case if it were struggling for bandwidth, and the gentleman who took his 570 out to test his 780 on its own confirms that too. Looks like this is yet another (but likely the last) generation where PCI-e 2.0 8x is still enough bandwidth.

Edit: Potentially misleading bit axed. Multi-card setups are peculiar in their bandwidth requirements to work at all (regardless of the fact that they HAVE enough bandwidth, apparently SLI just will not work at lower than 8x/8x of some variety - a forced thing), and I don't want to contribute to someone making a motherboard purchase that screws them and doesn't allow whatever SLI/Crossfire plan they had going on to actually work. Correction of my initial assumption regarding SLI bandwidth requirements courtesy of Factory Factory.

terre packet posted:

I used these instructions to add anti-aliasing to EVE Online with Nvidia Inspector, but it only works in full screen. Is it even possible to force anti-aliasing if a game is in borderless window mode?

Best you can do is globally force FXAA, which will apply it to some stuff in Windows but not much. They've got it pretty well dialed back to 3D apps only at this point.

Agreed fucked around with this message at 22:48 on Jun 21, 2013

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Actually, in triple mode the trifurcation is x8/x4/x4. Your point stands, but since SLI absolutely requires a full eight lanes, you can't tri-SLI on standard Haswell without a PLX-equipped board. If you add a third card, even a PhysX GPU, it disables dual-SLI, too.
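A toy way to state the rule (my own helper, not anything nVidia ships): every GPU participating in SLI needs at least eight lanes, so the Haswell x8/x4/x4 split fails even though the cards would have enough bandwidth to render fine.

```python
# Toy check of the driver-enforced rule: SLI requires every
# participating GPU to sit on at least 8 PCIe lanes.
MIN_SLI_LANES = 8

def sli_allowed(lane_split):
    """lane_split: lanes given to each GPU, e.g. [8, 8] or [8, 4, 4]."""
    return len(lane_split) >= 2 and all(l >= MIN_SLI_LANES for l in lane_split)

assert sli_allowed([8, 8])          # standard Haswell dual-SLI: fine
assert not sli_allowed([8, 4, 4])   # trifurcated x8/x4/x4: SLI disabled
```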

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



Factory Factory posted:

Actually, in triple mode the trifurcation is x8/x4/x4. Your point stands, but since SLI absolutely requires a full eight lanes, you can't tri-SLI on standard Haswell without a PLX-equipped board. If you add a third card, even a PhysX GPU, it disables dual-SLI, too.

Huh, so even though the cards don't care, nVidia does. Weird.

Mayne
Mar 22, 2008

To crooked eyes truth may wear a wry face.

Nondescript Van posted:

Does any other game recording software use the H.264 encoder on the newer Nvidia cards, or do I need to wait for that ShadowPlay thing to be released? I've tried a few, but they kill my frame rate, and MSI Afterburner's video is about 10 FPS even though I told it 30 and then 60.

Open Broadcaster Software uses x264.

Nondescript Van
May 2, 2007

Gats N Party Hats



Thanks for the suggestion, but this apparently doesn't use the encoder on the GPU; it's more for the encoder found on Intel CPUs (which I do not have).

David Mountford
Feb 16, 2012


Agreed posted:

How do you feel, as a Titan owner, now that they've released the 780 for considerably less (though it's still expensive, obviously) and yet thanks to the very things that make it slower than Titan at stock settings (lasering off parts of the chip after binning, basically), it has more headroom for overclocking at the same TDP and so tends to keep up with or beat the Titan in games? If you're using it for dual purposes, I reckon you don't rightly give a poo poo since the 780 has nothing going on with high precision computing, but I don't want to assume that you bought it as an entry level GPGPU development card without hearing it from you first.

I'm still entirely contented with my purchase. It was designed to serve the dual purpose of having a proper GPGPU dev card for playing around with and powering my absurd 7680x1440 screen setup for some gaming. While I haven't done as much as I'd like on the compute end yet, it does a drat good job at making a good chunk of my games playable and enjoyable at that absurd edge case resolution. It's a great card, there's no denying that.

Guni
Mar 11, 2010


So, when overclocking my Sapphire 7870 GHz Edition, there's a power control setting of -50% to +50%. Should I slide it all the way to +50%?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Yep. That's the TDP/thermals throttling adjustment, and you want as much headroom as possible for as little throttling as possible.

craig588
Nov 19, 2005

by Nyc_Tattoo


TheRationalRedditor posted:

That's interesting, what hallmark values should I be looking at altering and which are fatal mistakes to stay away from? lol

I can't remember many specifics, but you can enable up to 1.21V in the BIOS, and I think the highest power target I've seen is 300 watts. EVGA says up to 1.3V is safe on their Classified EVBot cards, the 770 is based on a version of the GK104, and it offers 1.2V from the factory. The voltage has much less of an effect on the temperature than the power target, though. Be ready for a whole lot of additional heat; the power target system does a fantastic job of keeping that all in check. The coolest thing about the boost system is that when the load is low enough, it's smart enough not to ramp up all the way. When I was playing Half-Life 2 the fans remained silent to the point where I thought something was wrong, and I had to run Precision in the background to see what was happening: it was just staying at a low speed and voltage because the GPU was only something like 40% loaded. Then, without changing anything, it automatically ramps up to space heater mode to play Bioshock Infinite.

I didn't mess with the boost table, but for benchmarking there might be some merit to creating a hosed boost table with a low power target and a really high offset clock, in order to speed up the card in areas where it doesn't crash and slow it down when the load is too high.

ijyt
Apr 10, 2012



Has anyone had issues with the GTX 780 (320.11) and Far Cry 3? It seems that every other game I've tried runs fine, but Far Cry 3 results in shadow artifacting and the screen getting tinted green.

Boten Anna
Feb 22, 2010



Out of curiosity, what's the tl;dr on the GTX 700 series of graphics cards versus the 600? I assume it's generally not worth full sticker price to upgrade from the previous generation, but is anything interesting happening?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



Boten Anna posted:

Out of curiosity, what's the tl;dr on the GTX 700 series of graphics cards versus the 600? I assume it's generally not worth full sticker price to upgrade from the previous generation, but is anything interesting happening?

New 700-series features: Boost 2.0, letting the card overclock based on power target (boooo) OR thermal target (yaaaay). New fan algorithm to stop it from ramping up and down quickly in loading screens.

Specifics:

GTX 780 = Mini-Titan for gamers. Fewer CUDA cores and half the VRAM of Titan, but equal or better performance in games thanks to more gaming oriented allocation of resources. Titan has a bunch of power-hungry fully DP capable parts; removing them gives the 780 TDP headroom to meet or exceed Titan's gaming performance. It overclocks better than Titan too, but these are BIG GOD DAMNED CHIPS. 7.1 billion transistors is a lot of transistors. As far as we know AMD just straight up does not have an answer to this, and will be fighting below this performance level for market share. Current price: $659, expect markups from many vendors.

GTX 770 = 680 Plus! Higher clocks out the gate, 7GHz VRAM instead of 6GHz VRAM, and a 230W TDP instead of the sub-200W TDP of the 680. Otherwise, spec identical in terms of CUDA cores, ROPs, whatever else to 680. Current price: $399, expect markups from some dickhead vendors.

GTX 760 = GK104-225 based part, high powered. Doesn't really leave room between itself and the 770 for a Ti part, which has all of us GPU carers wondering what the hell is going on there since nVidia loves its Ti branding. Current price: $249.99-$299.99??? (all specs on this one are confirmed from multiple leaks rather than public, so it's a bit of an unknown commodity)

Mobile is somebody else's purview, Factory Factory?

Agreed fucked around with this message at 15:35 on Jun 22, 2013

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



ijyt posted:

Has anyone had issues with the GTX 780 (320.11) and Far Cry 3? It seems that every other game I've tried runs fine, but Far Cry 3 results in shadow artifacting and the screen getting tinted green.

I am playing this now to test it out for you; I had planned to finish Crysis 3 before starting a new game, but what the heck. I'm running the latest WHQL drivers, and my GTX 780 maxes out at 1175MHz core and somewhere in the ballpark of 6600MHz VRAM. No issues at all like you're describing (also, this game is so much better than Far Cry 2, holy poo poo).

Edit: Getting further in, I'm starting to see some shadow glitches; nothing going on with a full-screen tint, although black loading screens flash different luminosity, which is a little weird. I've read all over that FC3 has issues with the launch drivers and the 780, so I don't expect this to be a permanent thing. Guess what plays FC3 fine? The 7970GHz.

(Gotta throw 'em a bone after my pre-surgery suggestion that we just full-stop quit recommending them, honestly.)

Agreed fucked around with this message at 17:01 on Jun 22, 2013

ijyt
Apr 10, 2012



Agreed posted:

I am playing this now to test it out for you; I had planned to finish Crysis 3 before starting a new game, but what the heck. I'm running the latest WHQL drivers, and my GTX 780 maxes out at 1175MHz core and somewhere in the ballpark of 6600MHz VRAM. No issues at all like you're describing (also, this game is so much better than Far Cry 2, holy poo poo).

Edit: Getting further in, I'm starting to see some shadow glitches; nothing going on with a full-screen tint, although black loading screens flash different luminosity, which is a little weird. I've read all over that FC3 has issues with the launch drivers and the 780, so I don't expect this to be a permanent thing. Guess what plays FC3 fine? The 7970GHz.

(Gotta throw 'em a bone after my pre-surgery suggestion that we just full-stop quit recommending them, honestly.)

Yeah, looking into it, I've probably read some of the same sites as you regarding launch drivers. Guess I'll just wait a while then; it's a shame, because other than that the 780 absolutely destroys FC3.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



ijyt posted:

Yeah, looking into it, I've probably read some of the same sites as you regarding launch drivers. Guess I'll just wait a while then; it's a shame, because other than that the 780 absolutely destroys FC3.

No kidding. It seems silly on the face of it, but we all have our hobbies, and man is it nice to just keep turning settings up, up, up and not experience any performance issues by doing so. And these are release drivers; being buggy with only one game is a miracle for either company's driver team. Glad they finally seem to have a real return to form here. FC2 felt very forced to me in a lot of ways (especially the stupid checkpoints that magically repopulated as if to just pad out the gameplay in an obnoxious manner). The original Far Cry was a hard act to follow; it's still fun today if you can manage to install it (my discs - and most discs - shipped with a fun bug: they totally lack a 64-bit compatible installer, period, and getting around that is challenging). And it still looks good. Having that studio move on to make Crysis and having Far Cry 2 end up in new hands, I was not super pleased with the outcome, or with some of the gameplay decisions they made. It ended up being just okay, when I was hoping for "downright loving awesome."

Far Cry 3 seems to fall in the latter category, and when I can finally get Blood Dragon installed (have I mentioned gently caress PHONE INTERNET lately?) I reckon it's going to reach out of the monitor and punch me in the pleasure areas of my brain directly.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.


Agreed posted:

Mobile is somebody else's purview, Factory Factory?

The 680M was a lower-clocked GeForce 670 (1344 cores @ 720 MHz). The 780M is practically a full-blown 680, just underclocked a smidge (1536 cores @ 830 MHz with adjustable boost targets, or a ~110 MHz increase over the 680MX).

In theory, the 780M is the bestest ever mobile gaming GPU. In practice, it's currently only available in a really lovely MSI notebook with terrible cooling, and a 7970M (mobile 7870) can keep up with it, even coming neck and neck in some titles.

The 7970M (and 8970M, the same thing with a 50 MHz turbo clock increase) have their own issues, though, almost 100% the fault of AMD's drivers. No matter how many problems you have or haven't had with AMD drivers, if you want to win the argument, you say "Enduro sucks" and you are right. Here's AnandTech's analysis. If you disable Enduro (and therefore the Optimus-like switchable graphics feature), you can gain significant performance but it kinda sucks to do that on a laptop.

SocketSeven
Dec 5, 2012


So, I'm trying to eke out what little I can from my 2 EVGA 660s in SLI (yes, I now know I should have gotten the Ti).

Is my understanding correct that there's not much I can do with this card but to use Precision X to set the power target to 110% (its max), raise the voltages as far as I dare (975mV at the moment), then start bringing up the offsets on the GPU clock and memory clock until things begin glitching?

Could I get some advice on what I should be focusing on, and safe max limits for things like voltage? It's kind of a balancing act, right? Giving the card more volts doesn't always mean higher performance.

I'm gonna go back to the start of the thread and begin reading it over again. Any card-specific info or advice would be appreciated, though.

craig588
Nov 19, 2005

by Nyc_Tattoo


The voltage slider only sets the minimum voltage; it's still going to ramp up to 1.175V (or whatever the limit is set at in the BIOS if it's a non-reference board) as long as it's operating within the power envelope. There's something to be said for maxing it out to prevent issues arising from voltage changes, but I prefer allowing the boost system to regulate it, because that can sometimes allow for higher clock speeds and greater power efficiency in low-load situations. It's more a personal preference thing than anything, though.

It's always safe to max out your power target as long as you have decent case airflow because that's a predetermined maximum set by the manufacturer.

If you don't care about your warranty, download the BIOS editor I linked earlier on this page and enable 1.21V and dramatically higher power limits in your BIOS, but that's where you can start getting into card-destroying levels of heat if your cooling can't keep up.
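Conceptually, the boost logic just climbs clock bins while the estimated draw stays inside the power target, which is why raising the target (in software or in the BIOS) raises sustained clocks. A toy sketch with invented numbers, not NVIDIA's actual algorithm:

```python
# Toy model of a boost-style regulator: step the clock up in bins while
# the estimated power draw stays inside the power target.
BIN_MHZ = 13  # Kepler boost moves in roughly 13 MHz bins

def boost_clock(base_mhz, max_mhz, power_at, power_target_w):
    """Return the highest clock whose estimated draw fits the target."""
    clock = base_mhz
    while (clock + BIN_MHZ <= max_mhz
           and power_at(clock + BIN_MHZ) <= power_target_w):
        clock += BIN_MHZ
    return clock

# Pretend draw rises linearly with clock; a higher target sustains a
# higher clock, exactly the effect of raising the power limit.
draw = lambda mhz: 0.15 * mhz
assert boost_clock(1006, 1306, draw, 170) < boost_clock(1006, 1306, draw, 195)
```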

craig588 fucked around with this message at 23:44 on Jun 22, 2013

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.


craig588 posted:

If you don't care about your warranty, download the BIOS editor I linked earlier on this page and enable 1.21V and dramatically higher power limits in your BIOS, but that's where you can start getting into card-destroying levels of heat if your cooling can't keep up.
Haha, since my boiling point incident this Windforce fan curve is playing for keeps. I've got that Gigabyte 670 OC model; is it kosher to bump to 1.21V max safely? Where did the benefit from this most directly express itself in your experience?

craig588
Nov 19, 2005

by Nyc_Tattoo


Yes, but the voltage increase didn't get me nearly the clock speed gains (or heat output) of the increased power target. The most dramatic change I saw was that instead of dropping down to around 900MHz during FurMark, the card was able to stay at 1200MHz. In normal games it stays within a bin, or occasionally two bins, of 1300MHz. I think I only gained about 20MHz from the higher voltage, but the far steadier boost speed was a massive benefit; I'd bounce around between 1100 and 1200MHz while playing games with the factory power limit.

If you want you could try just increasing the voltage limit and checking out how much it gets you, you'll probably end up with very similar temperatures because GPU boost is pretty much a cleverly marketed power/temperature regulator. (It works great too, I'm not trying to take anything away from it, it's just a shame people with way more computer enthusiasm than me end up stuck within soft limits because the full extent of its functionality isn't well exposed)

Edit: Just to give you some idea of how much hotter it can get: with my modified power target I needed to keep my stock fans at around 70% in order to keep the temperatures under 70C without any overclocking at all! With the factory settings and stock fan curve it was rare to see the temperatures rise much above 60C. I've since switched to some giant 120mm fans so they can run slower/quieter. Edit 2: This is on the same massive Gigabyte cooler your card has.

craig588 fucked around with this message at 02:44 on Jun 23, 2013

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



Call me when they support the 780, see if I can't wrangle another bin or three outta this thing. It's got the cooling, just needs the juice.

On an unrelated note, I replaced my GTX 580 with a GTX 650 Ti today (headless PhysX coprocessor). Their behavior in games is near-identical, except the 650 Ti doesn't get nearly as hot. GPU utilization is almost the same number for number, except in Borderlands 2, where extremely high PhysX settings can get the 650 Ti up to about 21%, versus the 580 hanging out around 17% under the most demanding PhysX conditions.

It also did not affect my FluidMark score (topped out at 76% GPU utilization), nor did it affect Metro's benchmark score (exact same FPS as before - that one's clearly 780-limited). I could go on, but you get the point: a GTX 650 Ti is sufficient for any current and likely future PhysX game, and it's plenty fast to sync with the 780 so as not to slow it down.

Power savings outside games are great, too, and I'm not exactly put off by the TDP difference between the 650 Ti and the GTX 580. For a PhysX card, the 580 defines overkill. The 650 Ti is a good option that will likely go down in price as the 700 generation's entry-level performance cards roll in. With twice the CUDA cores of the 650 non-Ti and superior memory bandwidth, I think it's worth it. This is an area where you can future-proof, if you wish to do so.

Also, the last guy who bought it didn't register it (he must have decided to just get something else), because I got a screaming deal on it from Amazon and immediately registered it myself. The 3-year warranty is transferable anyway; this just makes it quicker in case something goes wrong.

craig588 posted:

If you want you could try just increasing the voltage limit and checking out how much it gets you, you'll probably end up with very similar temperatures because GPU boost is pretty much a cleverly marketed power/temperature regulator. (It works great too, I'm not trying to take anything away from it, it's just a shame people with way more computer enthusiasm than me end up stuck within soft limits because the full extent of its functionality isn't well exposed)

I think "a cleverly marketed power/temperature regulator" is almost the right way to describe it; just change "cleverly" to "overtly." There's a reason they want to keep it within a given power and temperature range: as you've alluded to, get much higher than that without really, really exceptional cooling and your chip gets glassed, or you blow VRMs, or god knows what. They're protecting a lot of people from themselves while still allowing very good overclocking potential from the silicon lottery. Just because a few middling enthusiasts like me would be willing to risk it for a few turbo bins more does not make it a good idea. I would argue quite to the contrary. Most people have no business dicking around in the BIOS, straight up. It's an intermediate step between "regular graphics card buyer" and "get the dry ice, I've got it dampened, let's loving DO THIS."

Agreed fucked around with this message at 03:10 on Jun 23, 2013

craig588
Nov 19, 2005

by Nyc_Tattoo


I was actually explicitly thinking of you. It felt wrong that you were "only" able to get 1241MHz from your card when I got 1300MHz, and all I did was discover some hidden settings in the BIOS. You wrote it better, but that's what I meant by it working great.

There's GK110 support in the 1.25 version and I've already seen reports of success on 780 ACXs. I'm probably skipping this generation because I'm way too busy this year so I haven't looked into many of the 7xx series details, but it looks like it doesn't expose everything adjustable yet, or maybe just more has been moved to software controls. I don't feel comfortable saying much or offering advice on a card I haven't used.

craig588 fucked around with this message at 04:17 on Jun 23, 2013

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



craig588 posted:

I was actually explicitly thinking of you. It felt wrong that you were "only" able to get 1241MHz from your card when I got 1300MHz, and all I did was discover some hidden settings in the BIOS. You wrote it better, but that's what I meant by it working great.

There's GK110 support in the 1.25 version and I've already seen reports of success on 780 ACXs. I'm probably skipping this generation because I'm way too busy this year so I haven't looked into many of the 7xx series details, but it looks like it doesn't expose everything adjustable yet, or maybe just more has been moved to software controls. I don't feel comfortable saying much or offering advice on a card I haven't used.

Aw, shucks. Well, think of Factory Factory in the future, he's about to get my soon-to-be-former 680. Sigh. Goodnight, sweet prince, etc.; within the limitations of a standard, non-modded product, it overclocked really really well. He might be up for going in and poking around, though.

I only found Titan support in the current tool on the page linked; maybe they updated it since I checked a few days ago? That said, I've got a card that goes to 1.2V from the factory, but its ASIC score is looking pretty middlin'. Well, it'd be outright bad for a prior-generation card, but a hell of a lot of 780s of all stripes are coming in at or below 67% ASIC score. The really high ones are definitely the exception, not the rule, and "really high" in this case is more like 75-80% than what you'd expect. Titan cards are pretty much the same, incidentally, though you'd think they would be higher.

Point being, getting 7.1 billion transistors into consumers' hands seems to require a willingness on nVidia's part to go with lower clocks and accept more leakage in the validation process. It seems a lot of tech sites got somewhat hand-picked samples when it comes to overclocking, because I've read extensively now and there are WAY more people stuck at 1149MHz core and under 6400MHz VRAM than you'd think, given that a lot of reviewers from high-profile sites hit 1200MHz-1241MHz and regularly got within spitting distance of 7GHz VRAM or cleared it. Less high-profile sites that had to roll their own got the same luck of the draw as everyone else, and I saw some reviews where the OC maxed out lower than I can push my card. The ASIC scores, and the generally profound requirements for everything in the GTX 780 to work together well at the per-card level, seem quite demanding indeed. I seem to recall the AnandTech reviewer saying something like "most GTX 780s should hit 1200MHz" - my rear end!

There are modded vBIOSes out there specifically for various 780 models from different manufacturers, and if I wanted I could install one that lets me hit 1.21V, but the truth is that I don't think it's a good idea. I just really, really don't want to chance that on a card that cost me $660 plus shipping. The fact remains that the in-game performance difference between my card at 1175MHz core and 6600MHz VRAM versus somebody who voided the sheeeeit out of his or her warranty for the modded BIOS to push it to 1254MHz (usually losing memory clocks in the process, too) might be like 4-7FPS in really demanding games (and bigger but totally trivial differences in non-demanding ones, or synthetic benchmarks).

I don't see that as being worth losing all support for my card in the instance I need help. I'm early-adopting this time around, there might be something not so cool about the ACX that I'll want EVGA's help fixing - would hate to lose support over a few FPS.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



Haha, for shits and giggles I decided I'd overclock my 650 Ti. Why not, free extra PhysX performance for, uh, the year 2015 or something.

It's old-school overclocking, none of this boost malarkey: you raise the core, you raise the memory, and you raise the voltage. So that's what I did, but in reverse order. I poked around online and got a general feel for how they overclock, and it turns out the answer is "like a bat outta hell," so I settled in, got the voltage to 1.15V, ramped the clocks up to 1150MHz (from a stock 925MHz) and upped the memory to 6400MHz effective (from a stock 5.5GHz effective).

I could probably get the core to 1175MHz based on all I've read (as it turns out these chips are actually pretty damned salty), but the only real stability testing I have (or need, honestly) is FluidMark set to use just that GPU for PhysX, plus configuring it to take on the whole workload in the control panel. I had FluidMark running the whole time, so I got to see in real time as the GPU utilization went down, and I raised the number of emitters to bring it back up, etc. Kinda cool!

These clocks would almost certainly hold in gaming as well; the memory is probably maxed out to where it will comfortably clock. nVidia has a good memory controller, but a 900MHz overclock is probably as good as it gets on a 128-bit bus card aimed at the entry-level price:performance segment of the market. There is that additional 25MHz that AnandTech pulled out of each brand of 650 Ti it had on hand without issue. The non-factory-OC'd EVGA one (the one I bought) has a lower voltage, both default and maximum, but that doesn't hinder anything.

So now my PhysX card is heavily overclocked, and it was like a trip back in time to the GTX 580, before all this boost stuff changed it all to weird mode. I guess if my 780 ever needs some EVGA lovin', I'll have a card that will run games well at medium settings while I'm waiting on the biggun' to get back. Neat.

To top it off, I still get all the benefits of Kepler power savings when it isn't running. This is a major win-win: another forums user is getting a badass performer that overclocks well and ought to be making games pretty, not doing PhysX, and I get an excellent PhysX card that can pull main-card duty in a pinch without leaving me high and dry on the graphics front. Really glad I went with a "Like New" (and it was, original packaging and everything and not a scratch on it) 650 Ti instead of the much more anemic 650. No way the 650 would cut the mustard as a main card; it's barely competitive with middle-of-the-road laptop GPUs, and if I recall, isn't it the example being used to show how nice the latest iteration of Intel's Iris Pro 5200 graphics is?

If my calculation isn't totally off, then based on memory bandwidth and texel and pixel fill rates, this is closer to my old GTX 280. Hard to compare when there's no real apples-to-apples benchmark that far back and the overclock is so solid. Better than a 560, though, and from what I can tell at 1080p it should hold a minimum of around 30FPS with excursions higher, so if I must rely on it for something, I'll be set.
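The back-of-the-envelope math behind that comparison is simple enough to sketch in a few lines of Python. The spec numbers below are from memory (reference clocks, TMU/ROP counts), so treat them as illustrative rather than authoritative:

```python
# Rough theoretical throughput comparison: GTX 280 vs. GTX 650 Ti (stock).
# Spec numbers are reference-card values from memory and may be slightly off.

def gpu_throughput(bus_width_bits, mem_effective_mhz, core_mhz, tmus, rops):
    """Return (memory bandwidth GB/s, texel fill GT/s, pixel fill GP/s)."""
    bandwidth = bus_width_bits / 8 * mem_effective_mhz * 1e6 / 1e9
    texel = tmus * core_mhz * 1e6 / 1e9
    pixel = rops * core_mhz * 1e6 / 1e9
    return bandwidth, texel, pixel

# GTX 280: 512-bit GDDR3 @ ~2214 MHz effective, 602 MHz core, 80 TMUs, 32 ROPs
gtx280 = gpu_throughput(512, 2214, 602, 80, 32)

# GTX 650 Ti: 128-bit GDDR5 @ 5400 MHz effective, 928 MHz core, 64 TMUs, 16 ROPs
# (overclocking the 650 Ti narrows the bandwidth and pixel gaps further)
gtx650ti = gpu_throughput(128, 5400, 928, 64, 16)

for name, (bw, tex, pix) in (("GTX 280", gtx280), ("GTX 650 Ti", gtx650ti)):
    print(f"{name}: {bw:.1f} GB/s, {tex:.1f} GT/s texel, {pix:.1f} GP/s pixel")
```

The old card wins big on raw bandwidth and pixel fill thanks to its 512-bit bus and 32 ROPs, while the 650 Ti's much higher core clock pulls ahead on texturing - which is roughly why the two land so close in real games.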

It was fun OCing the old way, I miss it.

Agreed fucked around with this message at 08:59 on Jun 23, 2013

Sober
Nov 19, 2011

First touch: Life.
Second touch: Dead again. Forever.

Kinda tempted to get a 4GB GTX 770 but people are pretty consistent in saying that Gigabyte cards keep getting artifacts. I'm guessing my next choice is EVGA, but the premium is pretty big considering a 4GB GTX 770 (even at 1080p) is almost the same price as an EVGA SC ACX 2GB. Haven't seen anyone with a non-Gigabyte card complain just yet, but $450 for a 4GB card where everyone else is sitting at $440 for 2GB is prettttty tempting.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



Sober posted:

Kinda tempted to get a 4GB GTX 770 but people are pretty consistent in saying that Gigabyte cards keep getting artifacts. I'm guessing my next choice is EVGA, but the premium is pretty big considering a 4GB GTX 770 (even at 1080p) is almost the same price as an EVGA SC ACX 2GB. Haven't seen anyone with a non-Gigabyte card complain just yet, but $450 for a 4GB card where everyone else is sitting at $440 for 2GB is prettttty tempting.

nVidia's Greenlight program should ensure that any card coming off the line meets their reference specifications (or beats them). I think the issue might be drivers rather than the cards at this point. My 780 has had some very mild artifacts in certain games - Far Cry 3 being far and away the most noticeable - and I believe there have been reports of it artifacting pretty badly in BF3 as well. The drivers are very immature; these are the release drivers, after all. I expect the next set to be much better and to address the issues people have been having. The 770 and 780 are amazing performers but they do not yet have the benefit of a driver that's been tested broadly in the wild; nVidia can only do so much in-house during development.

Launch drivers are lovely for nVidia and AMD alike. So it goes.

Edit: That said, 2GB is still probably plenty of VRAM for the next year and a half or so, even if it seems small next to AMD's 7900 cards and the 780 (ignore Titan, it has a shitload of VRAM for a totally different reason). And I will vouch that EVGA's ACX cooler is incredible. It might be the best aftermarket cooler currently manufactured: better noise characteristics, superior cooling capability, and exceptional construction quality that bolsters the card's own structural integrity while also effectively solving the problem of how to get a bunch of nickel-plated copper heat pipes to cool the whole card, VRM, memory, and all.

Your call, though. If you plan to SLI down the road, I'd hold out for some variety of 4GB because that's going to be your ticket to a more solid price:performance average and with two GPUs you can probably actually use the 4GB. As far as I know, the only games currently topping 2GB at 1080p/1200p are RAGE (which will use all of the VRAM on any card as part of its texture streaming) and maybe, maybe Bioshock: Infinite during some scenes. Most "holy poo poo, these graphics!!" modern show-off games at those resolutions with max settings and MSAA are closer to 1600MB, which is plenty of wiggle room.
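To put those VRAM figures in perspective, here's a rough sketch of how resolution and MSAA multiply the cost of just the render targets. Real games allocate far more on top of this (textures, shadow maps, streaming pools), and the 4-byte color/depth formats are an assumption, but it shows why the multipliers bite:

```python
# Rough framebuffer VRAM cost at a given resolution and MSAA level.
# Assumes 32-bit color and 32-bit depth/stencil, each stored per MSAA sample.
# Actual engine usage is much higher; this is only the baseline targets.

def framebuffer_mb(width, height, msaa=1, bytes_per_pixel=4):
    color = width * height * bytes_per_pixel * msaa
    depth = width * height * 4 * msaa
    return (color + depth) / (1024 ** 2)

print(framebuffer_mb(1920, 1080, msaa=4))   # ~63 MB for one 4x MSAA target set
print(framebuffer_mb(2560, 1600, msaa=4))   # 1600p costs nearly double that
```

Each 4x MSAA step quadruples the target cost, and deferred renderers keep several such buffers around at once - which is how "holy poo poo" games climb toward that ~1600MB figure even at 1080p.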

If you're gaming at a higher resolution, you might be in for SLI sooner than you'd like, as even at 1080p the very powerful GTX 780 won't lock frames in at 60FPS minimum in all modern games. 30, sure, 45 usually, but 60 is a tough nut to crack with the kind of graphics we have today. The option is always there to turn down settings, but who wants to buy the best card made just to turn down stuff?

Agreed fucked around with this message at 08:24 on Jun 23, 2013

Animal
Apr 8, 2003


You guys are making me want to sell my 670 and get a 780

ONE OF US, ONE OF US

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



Animal posted:

You guys are making me want to sell my 670 and get a 780

ONE OF US, ONE OF US

It's a terrible idea and the only possible use you could get out of it would be to have about 35-40% better performance and image quality like nothing I personally have ever seen before. Who would even WANT that?


Seriously though, early adopters can almost always expect a smack in the wallet when the price drop comes after the other company brings their line out and - surprise, surprise - it does not suck, ending up competitive with something the first company to launch can't afford to have beaten outright. Then the $600+ price point cards hit $500 real, real fast. If AMD can compete very strongly at the 770's $400 price point, nVidia will almost certainly cut the 780's price as a first response to maintain market dominance. Only babbies who can't wait to turn up the sparklies buy the first batch

dont be mean to me
May 2, 2007

I'm interplanetary, bitch
Let's go to Mars




I have a 560 Ti (not even 448) right now and this thread is making me feel shameful.

On the other hand, until the PC industry responds to the PS4's shakedown cruise, upgrading is likely to leave me paying two big outlays in less than two years.

I actually forgot that console launches do this to me.

Josh Lyman
May 24, 2009





Sir Unimaginative posted:

I have a 560 Ti (not even 448) right now and this thread is making me feel shameful.

On the other hand, until the PC industry responds to the PS4's shakedown cruise, upgrading is likely to leave me paying two big outlays in less than two years.

I actually forgot that console launches do this to me.
Same, 1GB 560 Ti + 3570K. Luckily I mainly play Starcraft.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



I dunno, the 560Ti is getting long in the tooth at exactly the right time. It's less about the consoles that are about to launch and more that you got great price:performance for the useful life of the card; now it's starting to show its age, but that's because it's time to upgrade. This is why in the upgrade thread we tell people to get what they need, not over-spend on some giant boondoggle. You saved a lot of money and got great gaming out of your 560Ti, but it's time to upgrade - just in time for the same price category to come into its own. The GTX 760 is looking like it's going to be a hell of a card and it should launch at about the same price. Or wait for AMD to put up their 8000-generation and put price pressure on nVidia, and hope that the $249 rumors are validated instead of the $299 rumors.

You're in a good spot. Look at you vs. me - I'm selling a GTX 580 I bought for like $550, probably around the same time you bought yours, for $200. More than half of that is funding a 650Ti for PhysX to act as its replacement, and the 650Ti performs almost identically to the 560Ti except better at PhysX (worse at CUDA, but I don't have to care about that anymore). So I'm netting $80, but throw that out the window and I'm still taking a $300+ loss on the card - more than you probably paid for yours - and for those of us who bought a Fermi card, it's just plain time to upgrade now.

Not such a bad thing. Hang in there 'til the 760 gets off paper and into stores and put yourself in a good place price:performance for 1.5-2 years, as intended.

Or just say gently caress it and get the highest end card every generation because you're willing to allocate funding for that particular extravagance. Just don't expect to get your money back in the end, or anywhere near it.

Agreed fucked around with this message at 00:04 on Jun 24, 2013

Incredulous Dylan
Oct 22, 2004



Fun Shoe

Ever since the 8800 GTX I've had a rule for myself where I only upgrade every two generations. I got burned on my GTX 280 (almost literally) for 3D. Two generations is enough time to see which of the never-ending stream of exciting developments are really marketing garbage. I might wait even longer this time since the 680 has had some real staying power at 1080p. The Rift will be using just a 1080p screen, and I plan to jump ship to that from nVidia 3D Vision. Waiting a generation past that will also give me a great idea of how the new console graphics bottleneck is going to affect PC gaming.


Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down



Incredulous Dylan posted:

Ever since the 8800 GTX I've had a rule for myself where I only upgrade every two generations. I got burned on my GTX 280 (almost literally) for 3D. Two generations is enough time to see which of the never-ending stream of exciting developments are really marketing garbage. I might wait even longer this time since the 680 has had some real staying power at 1080p. The Rift will be using just a 1080p screen, and I plan to jump ship to that from nVidia 3D Vision. Waiting a generation past that will also give me a great idea of how the new console graphics bottleneck is going to affect PC gaming.

What do you mean when you say that you got burned on your GTX 280? I loved that card while I had it - maxed everything that year, and it held its own very well up 'til 2011 when I built a new system. Of all my cards, it had the most lasting power, for sure. Not even a 285, just a 280 overclocked to the max that any of them would go. I believe its performance is very similar to a 560 or 560Ti.

I just really enjoy gaming and like turning stuff up, and I keep myself well informed as to what technologies are at work in given hardware, so I'm not especially concerned about falling victim to "marketing garbage." You can keep yourself well informed enough to know what is and isn't a genuine upgrade, and why, without imposing a hard generational limit. Still, given the relatively long development cycles of GPUs, your rule probably ends up being alright on that time scale.

It's important to keep in mind the transistor count of the products we're talking about, and how that alone affects development and yields - Haswell with GT2 integrated graphics comes in at 1.4 billion transistors; the "sleek" GTX 680/770 GK104 part by comparison packs over 3 billion, and the GTX 780 and Titan are built on a design with 7.1 billion transistors. AMD's current top of the line Tahiti XT2 has 4.3 billion. I guess I'm begging a bit of understanding on the part of GPU companies, here - even as workloads converge in remarkable ways and the distinction between CPU and GPU sort of degrades, GPU companies have much bigger projects (in the simple sense of the word) on their hands, and while the 28nm node is working out great for TSMC, the previous process was a nightmare and yields were crap for at least a year into the production of their products.

Where's the marketing BS at that you're trying to avoid?
