Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen, I highly advise you pick up an Asus GTX 670 DirectCU, overclock the crap out of it, and supply me with a very affordable GTX 580 to end my torment in Metro 2033. We can haggle on power supply needs. This is professional advice, I am a professional.

If, as your avatar might suggest, you like guitar stuff, I have some amazing gear we could work out as a trade... Think on it :D


Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

spasticColon posted:

I ordered a HD7850 Tuesday and Amazon still hasn't shipped it. I guess that's what I get for selecting the free shipping.:f5:

The main reason I got one is that I saw in benchmarks the HD7850 runs Skyrim and BF3 a lot better than the 560Ti. Is there a specific reason for this?

Yes and no - nVidia had issues with Skyrim from launch. There was a driver update that netted me a solid 30-40% performance increase in Skyrim on my GTX 580. So it could be driver related. Or, it could be that AMD/ATI's architecture is better for what Skyrim does than nVidia's. Or, it could be that AMD/ATI's drivers knock it outta the park on that engine, while nVidia's still struggle some by comparison.

For what it's worth, this is probably the answer to nearly every possible iteration of the question: "Why does ________ perform better than the competition in game __________?"

Barring known incompatibilities - like the 560Ti's weird issue with black textures in Battlefield 3, despite attempts from the devs and nVidia's driver team to fix them - it usually just comes down to peculiar, idiosyncratic things that are difficult to pin down as being specifically because of this or that. The exception might be very high resolution gaming, or 3x/4x GPU gaming, where AMD/ATI's greater VRAM and multi-GPU scaling give them a distinct edge over nVidia for known reasons.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

That's been a trick for some time, since they introduced power saving states and some games don't play nice with 'em. Glad it works with the modern cards too. In the 500 series and before, you don't need nVidiaInspector, it's just in the regular boring control panel as a power management setting.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

AnandTech will be doing a Q&A with a GPGPU sexpert, so submit some questions.

And by "sexpert," I mean he was a founder of Ageia (of PhysX fame), became one of Nvidia's top CUDA guys, and is now with AMD spearheading their heterogeneous systems architecture, i.e. the seamless integration of highly-parallel cores (i.e. GPUs) with complex serial cores (i.e. CPUs).

That guy has the job that, in a parallel, cooler universe, my dad has, and that DadP,CU rules, got me into computers for real at a younger age, and now I'm on my way to being cool as heck and knowing lots about microarchitectures and eventually being technology coordinator for a major computing firm.

But here in the real world I just look at that dude, who isn't my DadP,CU, and his CV makes me go :allears: because seriously, what cool tech he's been a part of. I use work he was integral to all the time, and probably will continue to do so until a major computing paradigm shift.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

KillHour posted:

I'm not really seeing any recent games that use PhysX. Have there been any high profile ones that use it for more than incidental effects in the last year or 2?

Hrm. Off the top of my head - Mass Effect games, Mirror's Edge, Metro 2033, Batman:AA, Batman:AC, Battlefield 3. Some upcoming games - Metro 2034: Last Light, Borderlands 2.

Best example for wow-factor is Metro 2033, where it's used for real-time cloth simulation along with additional volumetric light processing - bullet holes in cloth with real-time volumetric shafts of light beaming out, it looks pre-rendered but nope.

RAGE had fantastic hardware-accelerated megatexture streaming on nVidia cards as well, though that's not really PhysX, just compute that was optimized heavily for Fermi.

The idea of compute tasks on the card is a great one. Everyone, including me, thought it was stupid back when "dedicated physics cards" were getting tossed around. I mean, hey, we had DUAL CORE processors, the second core could handle physics just fine. Turns out GPU architecture is way better at physics stuff, who knew :v:
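To put a picture on the "way better at physics" bit: physics workloads are mostly one small update applied independently to tons of objects, which is exactly what a sea of simple shader cores is built for. A toy sketch of that access pattern (plain numpy on the CPU, nothing to do with PhysX itself; all the numbers are made up):

code:

import numpy as np

# Toy illustration of why physics maps so well onto GPU hardware: the same tiny,
# independent update gets applied to a huge pile of particles. This runs on the CPU
# with numpy - the data-parallel shape of the work is the point, not the library.
n = 100_000
pos = np.random.rand(n, 3).astype(np.float32)          # particle positions
vel = np.zeros((n, 3), dtype=np.float32)                # particle velocities
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
dt = 1.0 / 60.0                                         # one 60 fps frame

def step(pos, vel):
    vel = vel + gravity * dt    # every particle gets the identical, independent update
    pos = pos + vel * dt
    return pos, vel

pos, vel = step(pos, vel)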

Doesn't need to be or stay proprietary, though. A lesson I feel nVidia is learning with time, even if they do have barrels of money: better to be generalist and have their cards offer perks than to be so restrictive that the features don't end up getting used broadly. See: original proprietary MLAA vs. "oh yeah? how 'bout now? :smuggo:" FXAA.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Star War Sex Parrot posted:

Mass Effect and Battlefield 3 did not have hardware PhysX, at least according to NVIDIA.

edit: Also the Batman games have used it to the best effect, in my experience. The Scarecrow sequences in AA were night and day.

https://www.youtube.com/watch?v=6GyKCM-Bpuw&hd=1

I think everyone who does use PhysX has their favorite, but it's just another physics engine at the end of the day - prettier than most, if you're using an expensive nVidia card and can turn it on without eating framerate poo poo. Havok, other licensed engines, and custom physics engines do fine, but GPU acceleration allows for some remarkable stuff. Bringing us back to the original topic, which was "that guy had a cool impact on technology, bringing first PhysX, then CUDA, and now heading AMD's push for GPGPU+CPU integration" - that's a bit more significant than "pffft, who uses PhysX anyway?"

Edit: SWSP mentioned that some titles were not GPU accelerated and I guess that's a good point while we're on the PhysX tangent. Software PhysX looks good too, it's just more in a "this could probably be accomplished with other premade physics engines" kind of good, not exclusive and really knock-out impressive good. Some very neat stuff, but nothing with the WOW!-factor of Batman:AA/Batman:AC and their massive integration of GPU accelerated PhysX to do stuff that would slow the CPU to a crawl.

Agreed fucked around with this message at 15:37 on May 15, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen posted:

The cape effects actually aren't PhysX, as I recall, because they wanted everyone (consoles included) to see them.

PhysX is just another physics API. The cape is PhysX - it's just not GPU-accelerated PhysX. Lots of games run PhysX on the processor without any GPU acceleration; in that mode it's like Havok or other pre-made physics engines, just with some really cool effects possible via GPU acceleration if the devs get serious with it (with nVidia's tech team helping, usually - I recall they were deeply involved in the Batman games' development tech-wise).

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

EVGA is -KR'ing the crap out of the 600-series, too. Where's my lifetime warranty, EVGA? Come on! :mad:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

New consoles and new graphics cards built on process shrinks/new tech: things I don't even try to buy for several months after launch.

If I just luck into one, that's a different story; I'm not going to look a gift horse in the mouth if I'm browsing Amazon and looking at 680s and there's one in stock that's actually new and not a scalper making a few hundred bucks on the damned thing. But I don't go searching for something that's virtually guaranteed to have absurd scarcity that soon. You've gotta be either very lucky, supremely dedicated (shopping bots for video cards? seriously?), or moneybags willing to pay a scalper 33%+ extra for the privilege of getting it while they're not so easy to come by.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Animal posted:

This guy at hardforums.com compared an overclocked 670 vs an overclocked 680. Both cards are the same brand and cooler. It pretty much cements the conclusion that the 680 is a waste of $100.

http://hardforum.com/showthread.php?t=1694021

Taking a lot of willpower not to just buy one right now but gently caress scalpers.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

That particular example is because of the way they do shadows in TW2 iirc. Happens on my GTX 580, too.

No idea what's up with the drivers not overriding correctly; that's such an off-and-on thing with both companies' control panels that I've abandoned the control panel entirely and just use nVidiaInspector to force things at a deeper level.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Alereon posted:

In terms of managing settings, especially per-game settings, nVidia's is far better. I have noticed more UI bugs, things like the control panel not knowing which setting I'm hovered over, linked settings not updating properly, and other weirdness in nVidia's control panel, but I am using a beta driver (301.34). AMD's panel scaling options seem better and more effective, even though they do require you to be in a non-native resolution to expose them. I haven't done much with the other settings, though the Digital Vibrance option under Desktop Color Settings was an easy and effective way to make my older secondary display look less washed out.

You have to get nVidiaInspector. It allows for much more robust management of game profiles, including the ability to override certain flags and thus enable different types of AA, or allow SSAO, etc., in games that wouldn't support them with stock settings. It also gives you access to every nVidia AA mode, including supersampling AA as well as sparse-grid supersampling options that you can adjust to taste for the perfect balance of performance and incredible appearance.

On modern games on my GTX 580 I usually run, at 1080p, 8xCSAA + 2x or 4x sparse grid supersampling, and it is image quality heaven. But you can do a lot of things there, and while it does assume some knowledge on the part of the user, it gives you a great deal of power to tweak your games' settings and adjust profiles... Very powerful tool. I get so much out of my card thanks to it, and I can only imagine with a 670/680 you'd get so much more.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

For one, set the basic setting to High Quality on the Performance <-> Quality slider to disable image-compromising optimizations. Also, consider manually using nVidiaInspector to force at least 2xSGSSAA (it's below the regular AA options, and not the same thing as transparency multisampling, which sits below it) - it really helps with shimmer, especially in deferred rendering engines, in my experience.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

I also discovered that my graphics overclock is stable for Furmark but not for Metro. Good lord, that game. :gonk:

Have you not done the fun stuff of going through and seeing whether it's stable for DX9, DX10, and DX11 at the same clocks? That's where it gets good! And by good of course I mean :suicide:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

That's half the fun of overclocking GPUs, since every game engine is basically a huge quagmire of hacks that skirt or just outright ignore some pretty significant standards, and drivers get released all the time specifically to sort of address those hacked-in elements and can grab ludicrous performance gains out of games. E.g., something was off between Skyrim's "Gamebryo PLUS" shadows and nVidia's Fermi architecture (especially higher end cards, where you'd have the horsepower to turn 'em up in the first place), and especially indoors for some reason - a new driver gave between 30% and 40% performance improvement. That's hilarious, isn't it? Pre-driver, I might as well have been using a GTX 280; post-driver, oh, that's where my performance ought to be.

And since they're all special flowers, the way they ask your card to behave can be pretty dramatically different. For example, I'm stable at 920mhz in DX10 in Metro 2033, or at 925mhz in DX11 in Crysis 2 (930mhz in DX11 Crysis 2!), or 920mhz in DX10 or DX11 in S.T.A.L.K.E.R. CoP with Atmosfear 3 and Absolute Nature plus third-party shaders... I can crank it up to 960mhz for some hot texture transcoding action in RAGE without any crashing or artifacts (edit: so we're clear, I'm pretty sure memory bandwidth has a lot to do with texture transcoding speed as well, so don't ignore that if you like RAGE and are still playing it despite the sad lack of continued support from id), and ~950mhz or so for S.T.A.L.K.E.R. SHoC in DX9.

Lovely stuff, engines. Unreal 4 looks really cool, but I'm up idtech's rear end because I feel like they have a ton of creativity and they were (at least pre-layoffs) working really closely with FXAA development to try to bring crazy good image quality potentially even to current-gen consoles. Identifying the softness and figuring out how to do a "free" MSAA pass to resolve leftover aliasing... Cool stuff. But I'm sure development will continue regardless. MLAA's been coming along too; AMD/ATI may have been one-upped, but at least they didn't just sit around moping about it.

Agreed fucked around with this message at 11:57 on May 21, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

3Dmark11 and UNIGINE Heaven are the real stability testing tools for overclocking cards because between the two of them, they actually manage to pretty much do everything in DX10/DX11 in ways that games MIGHT ACTUALLY USE (holy poo poo, a relevant 3Dmark?? Where did the good old days of pointless e-peen go? Don't worry, it's not perfect, it's just a hell of a lot more useful than it used to be).

I don't even bother with Furmark or OCCT or EVGA OC Scanner X - bunch of total bollocks that will, as Alereon rightly points out, simply show you the power throttling safety features of your card unless you're using modified firmware (and hopefully after-market cooling, because otherwise you're going to cook your card or cards).

Fire up 3Dmark11 for a few runs on Performance mode for quickest results there - it's a bit of a chore since you have to do it manually, but if you can make it through the whole run ten times, you're *probably* stable.

Or, let UNIGINE Heaven 3.0 go on its merry loop and you'll find out real quick what kind of stability your overclock has. Heaven is my favorite, partly because they've consistently sorta one-upped Futuremark for relevance despite it being a freeware product; partly because there's no manual dicking around required - it auto-loops through a scene where every hard camera change is testing/showing off something new and DX11, so feel free to just let it run for a while, and if you come back to a driver crash you know to reduce clocks; and because you can monitor it to see if you've got shader artifacts, geometry issues, etc. and narrow down what isn't working right.

Of course then you go play fancy games and it turns out that you need to dial clocks back some more somewhere because stress test utilities are only useful to a point... It's just a further point than Furmark, I guess.
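If you wanted to script the boring part of that grind, the logic is just a step-down search. Rough sketch below - stress_wrapper.exe and the clock numbers are placeholders I made up, not a real Heaven/3DMark command line, and it can't replace eyeballing the run for artifacts:

code:

import subprocess

# Hypothetical wrapper: point this at whatever you actually use (a Heaven loop, a few
# 3DMark11 Performance runs, a game benchmark). "stress_wrapper.exe" and its argument
# are placeholders - you'd still set the clock offset with your vendor tool and watch
# the run for shader/geometry artifacts yourself.
def run_stress_pass(core_mhz: int) -> bool:
    result = subprocess.run(["stress_wrapper.exe", str(core_mhz)])
    return result.returncode == 0

# Step-down search: start optimistic, back off until a run survives.
clock = 980        # MHz, hopeful starting point for a GTX 580
step = 10
while clock >= 772:   # no point going below the reference clock
    if run_stress_pass(clock):
        print(f"{clock} MHz survived the loop - now go confirm it in actual games")
        break
    clock -= step
else:
    print("Something other than core clock is the problem")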

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down


Speak of the devil... This is for Fermi, not Kepler, an architecture that's getting mildly long in the tooth at this point and yet:

"GTX 570/580:
...
Up to 21% in The Elder Scrolls V: Skyrim
..."


So adding that to the previous WHQL drivers' improvement, that's, what, 60% cumulative performance improvement compared to the drivers that were out when Skyrim launched? Something was going on between the engine and the architecture of the card that was costing somewhere around half of its possible performance, and they've now got that lined out, what, half a year later? Neither company can say a word about driver superiority; both are just doing the best they can to keep up with the engine antics that accompany new AAA game launches. Drivers are one part features, hundreds of parts individual per-game hacks to improve compatibility.

Edit: I checked, and yeah, the 295.73 WHQL drivers boasted a "Game-changing performance boost of up to 45% in The Elder Scrolls V: Skyrim, 'the fastest selling title in Steam's history.'" Welp!
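For anyone wondering how I'm eyeballing that cumulative number: the gains multiply rather than add, and both figures are "up to" claims that rarely line up in the same scene, so treat the result as a ceiling. Napkin math:

code:

# Stacking the two "up to" driver gains - multiplicative, not additive.
launch_perf = 1.00
after_295 = launch_perf * 1.45   # "up to 45%" from 295.73 WHQL
after_new = after_295 * 1.21     # "up to 21%" on top of that
print(after_new)                 # ~1.75, i.e. up to ~75% best case, 60%-ish more typically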

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen posted:

I've been using beta/dev drivers for a while, but if this is the first WHQL version with adaptive vsync I highly recommend it. I get more tearing than I did with regular ol' vsync, but the performance overall is much better, and in games where my GPU is vastly overpowered for the job it doesn't waste as much juice.

I can't be arsed to read the loving manual because I am almost literally a babby - could you share some experience-based insight as to which adaptive vsync I ought to go with, since we're using the same usually-drastically-overpowered card? I'm using a 60hz Samsung 1080p monitor that appears to be roughly, ah... 24" or so. Regular adaptive, or Half?

Framerate limits have been available for some time through nVidiaInspector, but I am interested in the Framerate Target feature... Not quite sure what that's going to do, though.

I should probably rtfm after all. But any insight you can offer as someone who has been using beta drivers with the same card would be very much appreciated.

Edit: Also, if I get a 670/680 ($100 for a 10% improvement? ... we'll see) I'll sell you my GTX 580 at around half price, promise. Put that Cadillac of power supplies to use. :v: But I'm a little concerned at Kepler's apparently rather mediocre DX9 performance thus far... Still enough games using DX9 (e.g. The Witcher 2) that I really don't want to spend a bunch of money to downgrade my performance there. Anyone know if more has come of that since initial reports of the peculiar behavior?

Agreed fucked around with this message at 11:02 on May 23, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen posted:

Regular; half limits it to 30fps, and I don't really know why that option is in there.

You might not like it - I still get tearing occasionally (I guess when frames drastically shoot up and it has trouble compensating? not sure).

I can put up with some tearing if it means I get more consistent FPS. High end cards are in a weird spot when it comes to that "well it'd do 50+fps but it won't lock in at 60" with high graphics. Especially with Triple Buffering.

I am very interested in seeing FXAA just plain enable-able across the board, as FXinjector (the most configurable one, anyway) is limited to DX9 games.

I'll be testing how FXAA works on games I've already enabled FXAA on via the injector :v:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

FXAA on top of FXAA looks... pretty good. A little soft on some things, but a quick CSAA pass takes care of that.

Niiiiiice.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Socialism posted:

Has anyone tried overclocking the 670? With a good custom cooler it seems the only limitation is voltage and stability rather than noise/heat. It seems insane to me that in some reviews a factory OC'ed 670 is within single-digit percentage performance of the 590. Also, with the option of custom cooled 670 SLI for ~$850, I can't imagine the 690 having much of a place anymore.

Info gathered 'round the net suggests that its overclocking capability is pretty much the same as the GTX 680's, and since they perform extraordinarily close to one another in most tasks even with the stock clock discrepancy, the fact that some people get bummed out if they can't get their GTX 670 to at least 1200mhz core is pretty funny.

Here's a wrinkle - from what I can tell, the 7970 has better clock-for-clock performance at most tasks than the GTX 680, around 5% to 7% depending. And you can overclock the poo poo out of it. But I do feel nVidia is a clear features leader right now, regardless of the potential performance discrepancy there, and driver support for nVidia's lineup has resulted in what seems to me to be a more thorough performance improvement over previous generations in most titles. Not all, but most. And there's the basically-free FXAA and the capability to do PhysX acceleration as a bonus. AMD/ATI is working on MLAA, I know, and the latest I saw there suggested that their image quality was getting more and more impressive (an area where their tech probably has a slight edge over FXAA as development continues, though its quirky, proprietary nature compared to nVidia's usage of FXAA as a general post-processor for any game still hinders it a bit).

What's really vexing me is that the GTX 580 is still holding on. It's virtually clock-for-clock even with the 7970, which means a heavily overclocked GTX 580 performs kinda similarly to a (stupidly) not-overclocked 680. While it is fun to be able to say "wow, an overclocked GTX 670/680 sometimes outperforms a GTX 590 and has smoother gameplay!!" you can very nearly say the same thing about a heavily overclocked GTX 580 - mostly because GF110 was never the right chip to slap together into a two-GPU card in the first place, so the 590 performs like poo poo :v:

Basically, if you've got a GTX 580 right now, your best bet is to just accept it for what it is, get the highest clock you can, and wait for the next cards to land and push the performance envelope farther. Assuming rough overclocking percentage parity - which is a pretty safe bet since, as mentioned, you get at least a 5% clock-for-clock head start on a 580 - you're stuck at around 25-35% better performance in most games, up to 40% in some, with a 680... And it's pretty much the same story with a 7970, just with virtually identical clock-for-clock per-frame rendering performance.
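Napkin math on what "percentage parity" means in practice - every figure below is made up for illustration, the point is just that the gap doesn't move:

code:

# If both cards gain roughly the same percentage from overclocking, the relative gap
# between them stays put - overclocking a 580 doesn't close a generational gap.
# All figures are illustrative, not benchmarks.
stock_580_fps = 35        # some hypothetical demanding game, maxed out
stock_gap = 1.30          # pretend a stock 680 is ~30% faster in that game
oc_gain = 1.15            # pretend both cards overclock ~15%

oc_580_fps = stock_580_fps * oc_gain               # ~40 fps
oc_680_fps = stock_580_fps * stock_gap * oc_gain   # ~52 fps
print(oc_580_fps, oc_680_fps, oc_680_fps / oc_580_fps)   # gap is still 1.30x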

Even overclocking the poo poo out of them, that's not going to make the difference between "can't turn everything up" and "CAN turn everything up!" in the most demanding titles that nutjobs like me enjoy punishing our graphics cards with, trying to get them lookin' good.

Edit: The reason that bums me out instead of making me go :unsmith: is that I really want to turn stuff up all the way, and I've got cash in hand to get a card, but the performance just isn't there yet except for the 690, and there's no way in hell I'm paying a grand for a god damned graphics card :suicide:

Agreed fucked around with this message at 05:47 on May 26, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Arzachel posted:

It sucks that there are only two GPUs this gen that are worth a drat, HD7850 and GTX670 :( While both of them are pretty great, I expected such deals in every price range. I guess Canary Islands and GK110 aren't that far off now.

Do you mean the HD 7950? The 7850 is a strong performer if you want "last gen performance++," but it's not a top SKU; I wouldn't call it the great card from ATI this gen...

It's pretty normal for there to be the "top ender" and then the "closer to the price:performance curve" option; we just haven't had a real update on nVidia's side for years. They Fermi'd the poo poo out of the graphics card market. And ATI did basically the same thing - process tweaks rather than huge architectural shifts.

This is a new generation of cards. Look, averaging over a 30% performance increase is impressive as hell; these are great cards. The value units from ATI are coming in strong, too, and they'll end up filling the same role that, say, the 6850s did in the last generation.

I don't know WHAT to expect from GK110. My crystal ball is currently out for repair, but nVidia is strong on branding and if the performance improvement is commensurate with transistor count then that will be, like, the 700-generation card... Or we'll have another "GTX 285 and GTX 260 Core 216" situation where they bring more powerful hardware to market under the same generational name in order to compete with (what will hopefully be) an even stronger contender performance-wise from ATI.

What happened here that usually doesn't is that nVidia looked at pricing and said gently caress IT, WE WIN. ATI had a few months (unfortunately plagued by supply issues) of total market dominance and did a pretty good job, in my opinion, of balancing pricing against supply to maximize revenue from what they could put out, since demand was rabid... But once the 680 dropped, so did the sky, and nobody really talks about the 7970 in reverent tones anymore even though it overclocks like a motherfucker too and it's got some really fancy stuff of its own going on. The 680 just kinda marginalizes it, clock-for-clock discrepancy in most games be damned.

Plus, if I recall correctly, the 7970 still works better in Metro 2033! ... :smith:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

In terms of price:performance, I guess that's true. But the 7850 really is "last gen++," with the exception of superior DX11 performance and a more reliable Civ V bench :v:.

Bonus points for the 7850 for overclocking headroom, since you could usually get a 6950 to at least 900mhz, and reasonably lucky people into the 960mhz region, while the 7850 seems like it can push to 1000mhz pretty reliably and usually go higher with good cooling, if you're willing to risk the voltage :)

But I don't see it as a performance contender - the 670 might as well friggin' be a 680, the performance discrepancy is so small.

Edit: Oh, hold on, the 7850 was artificially choked for overclocking and it turns out you can get more than that out of it? Reliably, or "the good cards?"

And I do think it's possible nVidia took a lesson from Fermi not to cross the streams and may indeed relegate big Kepler to compute. It was a strange choice to build a card with far more compute than a gamer will ever require - potentially never even use, unless they're folding@home or something - and just accept the problems of making that high end compute chip also work for high end graphics. There's room for a separation there, and with the way big Kepler is designed, they could probably run it for some time as Quadro/Tesla workstation units, ignoring the gaming consumer market entirely.

Agreed fucked around with this message at 19:03 on May 26, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Arzachel posted:

Up to 50% overclocks limited by voltage throttling isn't anything I've seen before.

Could you point out some of these? 'Cause I'm looking but not having an easy time finding them. It seems more like "yeah, they overclock really well, you can pretty safely expect a good OC out of them and performance that deprecates the GTX 570 completely and pushes up against nVidia's top end last gen card for a great price" - which is an extremely laudable thing at the price point, of course, and I hope it's clear I'm not saying "pfffft, that's nothin'" - but (multiple) 50% overclocks?

Reasonable expectations from what I'm seeing are more like a ceiling at 1100mhz-ish, with some able and some not able to go much further. But if I'm looking in the wrong place, help me out; I keep up with nVidia's technology more than AMD/ATI's (example: I didn't know the previous 1050mhz ceiling was artificial :v:)
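Quick sanity check on what a 50% overclock would actually mean, going from the 7850's 860mhz stock clock (if the cards in question ship higher, adjust accordingly):

code:

# "50% overclock" from a stock HD 7850 vs. the ~1100 MHz ceiling people actually report.
stock_mhz = 860
print(stock_mhz * 1.5)                        # 1290.0 MHz - what +50% would require
print((1100 - stock_mhz) / stock_mhz * 100)   # ~27.9% - still great, just not 50%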

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

So it's a bit of a retread of the 6950 flash to 6970 thing, then, but even more risky - sure, knock yourself out, but you're violating the warranty hardcore with a hacked BIOS and if it turns out that card had a non-artificial-segmentation reason for being a 7850, you might end up with a dead card.

If it's anything like previous AMD/ATI flash gambles, early bird gets the worm as they want to hit that market hard and grab up price:performance seekers as soon as possible before nVidia makes their next move. Hopefully TSMC isn't still lagging on production and they can nab the price:performance bracket while the nabbing's good.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

spasticColon posted:

According to GPU-Z, the voltage on my 7850 is 1.075V so how far could I push it on that voltage?

There really isn't a blanket statement that applies there. Overclock it, test it, come back and report. Help us build information to give people, experiences to share :)

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Also worth noting that the answer to "how much memory bandwidth does it have?" is already "plenty," so raising clocks - and as a result using power for the memory controller and GDDR5 instead of for, you know, the GPU and shaders - is pretty moot. I'd almost advocate leaving memory bone stock 'til you find your GPU clockrate wall and then only raise memory from there, as power permits.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Meta Ridley posted:

The GTX 670 comes in 2GB and 4GB models; if I would like to game at 2560x1440, would the 2GB version suffice? And since I'd like to be able to use things like the high-res texture pack for Skyrim, would it make any difference to go for the 4GB instead?

Rule of thumb is don't bother with higher-VRAM models unless you're planning on using them in SLI, since one GPU will be bottlenecking before the memory is. However, at very high resolutions, I mean, I guess, maybe it's possible there could be a reason you would want 4GB of VRAM... I dunno. Probably not. We were all fairly astounded that AMD/ATI launched with 3GB of GDDR5 on their card; they seem to have been betting on a lot of people doing multi-monitor. nVidia is appropriately saving costs by keeping it to a normal amount of VRAM for modern games. A bit of an AMD/ATI blunder there.

1GB isn't enough at those resolutions, certainly, but I remember a [H]ard-style test gaming across three 1440p monitors where tri-Crossfire 6970s punished tri-SLI GTX 580s. That's partly because, at least through Fermi, nVidia has a MASSIVE loss of GPU power per card past two - really bad scaling - but significantly, the 580s' 1.5GB of VRAM was getting choked pretty badly at 3x1440p while the 6970s' 2GB wasn't.

Not enough VRAM is a bad thing, but more than enough is really pointless. I would imagine that 2GB is plenty, even with one 2560x1440 monitor.
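Rough sketch of why resolution alone doesn't chew through VRAM - the render targets themselves are tiny next to textures and geometry. The buffer count below is simplified; deferred renderers and MSAA multiply it several times over, but it stays a small slice of 2GB:

code:

# Back-of-envelope framebuffer math for a single 2560x1440 monitor.
width, height = 2560, 1440
bytes_per_pixel = 4                 # 32-bit color
buffers = 3                         # front + back + depth/stencil, roughly
framebuffer_mb = width * height * bytes_per_pixel * buffers / 2**20
print(f"{framebuffer_mb:.0f} MB")   # ~42 MB against 2048 MB - textures are the real cost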

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

2GB of RAM doesn't seem to limit the 680 in SLI when working with 3x1920x1080, so I don't think a single 2560x1440 monitor will provide any VRAM issues whatsoever.

There you go, no problems expected and no need to spend the extra for needless VRAM :)

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

nVidia's expanded GPGPU compatibility with Kepler, though GK104 (that's the GTX 680/670) is kneecapped pretty badly for compute performance compared to the upcoming GK110.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Can't really go into a lot of details here, but there are plenty of cases in the software world where the hardware guys and the software engineers are so far from the left hand talking to the right hand that they might as well be on different bodies. It seems like kind of a neat idea for the tech, since they're up against good competition in that arena from AMD, yeah? Who expects to pay less for their computer than for a cheap home theater receiver and get gaming performance? Now, expecting a desktop, that's reasonable. :laugh:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

WanderingKid posted:

"Display driver has stopped responding and has recovered"

Is usually the result of not enough voltage or too much heat. It's a full driver reset and usually means a protection measure kicked in.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

WanderingKid posted:

Yeah, but it's a common problem that has been reported for a long time (years), and several people who report the problem on the nVidia forums don't have power supply or heat issues. I don't think mine is a heat issue (temps are high 20s/low 30s idle and mid 60s at full load). I don't overclock. I haven't had it for a while now, and definitely not since 296.10, but it has always come and gone and my hardware has stayed more or less the same for at least 2 years now. Shrugs. It doesn't bother me that much since I don't do anything really important with my graphics card. My soundcard drivers are much harder to live with (i.e. since getting the TC card, I have not been able to play live at all).

If it's faulting and reporting a reset, the framing ought to be less "the issue keeps getting reported but nobody's fixing it" and more like

a different way of putting it posted:

several people who report the problem on the nVidia forums don't think they have power supply or heat issues but also don't know what precisely is causing the repeatable hardware fault and should probably treat it as the warranty issue that it is rather than de facto asking nVidia to remove fault-recovery mechanisms that are apparently allowing their unstable card to keep working.

If it's doing that and you're not overclocking and you're not using a Deer power supply or whatever, it shouldn't be doing that, end of. If the graphics company is unwilling to address the issue through warranty, spread the word about that company being super lovely in the hopes that you can get a replacement the hard way. Call someone on the phone. But it's symptomatic of a specific issue, not a general one. (And as Srebrenica Surprise notes, it's not limited to nVidia.)

I have experienced it - only when overclocking and pushing things too far, which is a normal circumstance for that particular error, and it's a rather more elegant failure-and-recovery than an unstable CPU overclock, where even with SSD-fast boot speeds it still sucks to restart, dick with the BIOS some more, get back into Windows, and see if it's still unstable. GPUs are hardy bastards even when they're somewhat faulty or dying.

Edit: Which, by the way...

quote:

I now get the "pink screen of death" where my screen suddenly turns into a bunch of messed up pink and purple pixels and my computer hard locks, requiring a restart.

... that would seem to suggest what the culprit might be here. Nothing about that's normal. Over the course of two years your card has gone from intermittent faults to much more overt lockups accompanied by tons of artifacts. If it's still in warranty, any time would be great to start an RMA, man.

Agreed fucked around with this message at 17:38 on May 29, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Yeah, it's sort of technically actually clocked at 1/4 the stated and effective transfer rate. The modules themselves are running (in SDRAM terms) at 1500mhz from the factory. You'll see the displayed rate in overclocking software, I imagine - I've never seen numbers in the 4000s in EVGA Precision, though I have seen the 2000s. And GPU-Z will tell you the god's honest truth about the SDRAM base clock, because it doesn't care about your fancy numbers; it just wants you to know the real clock rate. It does offer grief counseling once you see that it's "really" just 1/4th the displayed-on-the-box speed, for what it's worth. Okay, that's bullshit. :v:

All about "effective" transfer rate, not, you know, actual transfer rate. But most programs will display the double-data-rate (DDR) clock, which is twice the fundamental clock of the stuff.

That said, overclocking memory is a good way to waste TDP in modern cards, honestly. 200GB/s+ of memory bandwidth is already more than plenty, and if you're running SLI or Crossfire, double/triple/insanity-mode-quadruple that because of the nature of parallel access. Basically, high end cards' memory bandwidth has been "plenty loving fast" for a few generations now, and overclocking memory by displayed rate usually means upping it 1mhz at a time, so you can set it to, say, 6007mhz if you want to, but the fundamental clock rate is probably not going to land at exactly 1501.75mhz with the way it's all strapped.

The best thing you can do is set it to factory stock settings to free up power and heat for the GPU and shaders, where it counts.
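To put numbers on that clock relationship, using GTX 580 figures as the example (the ratios are the same for any GDDR5 card):

code:

# The three "memory clock" numbers you'll see for the same GDDR5, GTX 580 as the example.
fundamental_mhz = 1002                  # what GPU-Z reports - the actual SDRAM clock
ddr_display_mhz = 2 * fundamental_mhz   # what most overclocking tools show (~2004)
effective_mts = 4 * fundamental_mhz     # the quad-pumped box number (~4008 MT/s)
print(fundamental_mhz, ddr_display_mhz, effective_mts)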

Agreed fucked around with this message at 20:16 on May 30, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Torn between "Why would anyone turn off hardware acceleration, it's important" and "how the hell did that become a videocard issue?"

"Drivers" is a fancy word for "big pack o' hacks, enjoy."

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I was surprised to see the GTX 680 launch with a 256-bit memory bus - the same effective bandwidth as a GTX 580, just with faster GDDR5. The HD 7970 certainly isn't hurting for memory bandwidth, with its 384-bit bus and fast GDDR5: 264GB/sec stock. Yet even in tasks where you'd expect the GTX 680 to lose out due to its lower memory bandwidth - synthetic benchmarks, especially - it seems to keep up remarkably well, and the specialization toward DX10/DX11 functionality means the 7970's tessellation performance is about on par with a GTX 580 while the GTX 680's is about three times as high, iirc?
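The arithmetic behind that, for anyone curious: bus width in bytes times effective transfer rate, which is how the narrower-but-faster 680 lands right on top of the 580 while the 7970 gets its 264GB/sec:

code:

# Peak memory bandwidth = (bus width / 8 bytes) x effective transfer rate.
def bandwidth_gbs(bus_bits, effective_mts):
    return bus_bits / 8 * effective_mts * 1e6 / 1e9

print(bandwidth_gbs(384, 4008))   # GTX 580: ~192 GB/s
print(bandwidth_gbs(256, 6008))   # GTX 680: ~192 GB/s on a narrower bus
print(bandwidth_gbs(384, 5500))   # HD 7970: ~264 GB/s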

Edit: Also I broke down and ordered a fuckin' GTX 680. I just want to turn up the graphics... :negative:/:rock:

Looking forward to installing this bad motherfucker. I said I wasn't going to pay a scalper's price, and I didn't - I found one for about $500 new from EVGA (the SC+, which has some kinda back-plate; not totally sure what the deal is there, but it normally costs more than the standard EVGA SC). When I bought it, the standard EVGA SC was running about $640 with Amazon's nanosecond price matching and the SC+ was actually slightly discounted from MSRP, which means somewhere there's gotta be a sale going on. I know for sure their algorithm locks dead in to NewEgg, so maybe somewhere to look if you're in the market. But I could swing it and it's the normal damned price, so gently caress yeah.

Heeeeeeey, Dogen, want to buy a GTX 580 for an attractive price? Overclocks well: within stock EVGA SC voltage (which is, ah, 1.080V-ish, whatever the tick-mark is there) it'll do 860mhz/2100mhz, and if you up it to a perfectly safe 1.138V you can get a rock-solid 920mhz/2080mhz (though you'd probably want to match clocks with your MSI, obviously, and whether you need to overclock at all is another question). Still have all the stuff it came with, and I take good care of my stuff. Put that power supply to good use! You and Factory Factory have first shot - I'm just looking to recoup some of the 680 cost, and of the two of you I figured you'd be more interested, since you'd be able to run them in SLI, while Factory Factory's Crossfire setup already gives him single-GTX 580 performance, minus PhysX and more powerful DX11 - not a great reason to upgrade, in my opinion. Cross-brand SLI works fine as long as the cards are the same architecture, hardware specifications, and clocks, and in this case you might actually be better off since this one's a blower exhaust and your other unit is case exhaust. Shoot me a PM if you're interested.

Agreed fucked around with this message at 16:44 on May 31, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

unpronounceable posted:

:sigh:

How's this going to affect your audio production? I know CUDA wasn't as strong as you would have liked for the price on the 580. With the 680's overall weaker performance, how will you compensate?

I suppose what I'm saying is keep both. The 580 for compute, and the 680 for gaming. :getin:

I've moved away from CUDA-accelerated audio lately. It didn't really go where I was hoping. Now I'm back toward native plugins to take full advantage of the very low latency and the considerable power a 2600K at 4.7GHz can handle.

The main thing this affects is that it's not a tool, just a toy.

That said, I guess my power supply could probably run both at once. It'd be topping it out - two 580s in an i7 system run total system draw at about 740W, and my supply is a 750W Seasonic-based unit. The 680's power draw is lower, and with only PhysX or CUDA duties, presumably the 580 would mostly hang out at idle and its lower p-states. I'd return it to stock clocks, of course... It's a thought, I don't know, we'll see. Talk about a ridiculous setup if I were to do that. Like, worthy of ridicule. "Oh, that? Yeah, that's a 580 hanging out to do PhysX in the games that support PhysX. Uh... Arkham Asylum, for one, er..."
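For the curious, rough napkin math on that power budget - TDP figures from memory, treat as ballpark, and actual draw with the 580 loafing in low p-states would be well under this:

code:

# Worst-case-ish power budget for a 680 + 580-as-PhysX-card setup on a 750W unit.
gtx_680_tdp_w = 195
gtx_580_tdp_w = 244
cpu_board_drives_w = 200   # generous allowance for an overclocked 2600K, board, drives, fans
total_w = gtx_680_tdp_w + gtx_580_tdp_w + cpu_board_drives_w
print(total_w)             # ~640W against 750W - doable, but snug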

Probably not going there.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Yeah, for me it was strictly the price - immediately after I bought it, it went back up to the gouge-tastic $640. I slipped in right under that crap. Hell, the USED ones are all $30 more than I paid for mine. I said I wasn't going to pay a scalper and I didn't. That's as far as my GPU integrity goes, god drat it, shiny stuff.

Anyone in the market for a GTX 580? :shepface:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

WanderingKid posted:

Is there a list of plugins that can use CUDA? I don't think I'm using the correct google search terms.

I also might try taking my graphics card (and wireless internet) out of my music production box entirely to see if TC Near x64 still shits the bed.

You happen to own Nebula? :smith:

I'm under NDA on the specifics of some other stuff, but my getting away from it and back to native is for a reason. GPGPU performance isn't what it could be, and latency is too much of a factor unless you want to buy a legitimate mid-to-high end Quadro/Tesla, which means you have great GPGPU but you should have spent that money on some UAD-2 hardware :mad:

Agreed fucked around with this message at 14:41 on Jun 1, 2012


Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

KKKLIP ART posted:

So from reading this thread, people seem to be having problems with the Asus TOP GTX670 cards - is there anything I should know to look out for or not do? It should be here Tuesday or Wednesday.

It's a binned chip that didn't make the 680 cut, but it's within approximately 10% of a 680's performance at stock speeds. The new fine-grained turboing thing is a little tough for manufacturers to get a handle on, and if it doesn't perform at stated specs, RMA it. Otherwise, hopefully you won't have any issues and can enjoy top tier performance for a nice discount. Overclocking is not guaranteed, ever, but sometimes you can eat a fully OC'd 7970's lunch with these - it just depends on what's wrong with the chip that made it a 670 part. It seems like a lot of perfectly reliable companies are having issues with the 670s and overclocking, so don't freak out any more than is sensible if it doesn't give you [H] levels of OC.

  • Reply