Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

KillHour posted:

So, does that mean it would be worth it to use one of my GTX260's as a PhysX card when I upgrade? (Doubt it)

Probably not. The bandwidth difference would be very substantial, and GT200 was really nVidia's first crack at fully integrating PhysX from Ageia's PPU model into their proprietary CUDA technology. GT200 CUDA was better and more integrated than anything that came before it, but it's a poor substitute for either the Fermi cards' extraordinary CUDA performance or the "we aren't good at general CUDA stuff since we only have 1/24th DP, but we can do PhysX great since it's comparatively simple maths!" approach you get from the high CUDA core count of consumer Kepler's SMX units.

I want to be really clear that in most situations, the benefits to having a separate renderer and a dedicated PhysX coprocessor GPU aren't realized. Borderlands 2 looks like it's going to have some very cool, fancy stuff going on, and as addictive as the gameplay will probably be, it may be worth it for enthusiasts in particular who really, really want the best of the best when it comes to graphics to go down this usually dumb road. I wouldn't recommend it generally, but nVidia picked a really good game to sell this generation's PhysX with, and went all out making it really flashy and interactive.

Conventionally, the biggest drawback to PhysX is that while it improves the environment, there's just not a whole lot you can actually do with it - since the game has to work for people without GPU-accelerated PhysX too. So there hasn't ever, to my knowledge, been a game which required nVidia GPU PhysX, since that would very explicitly lock out a good portion of gamers completely, and disappoint a lot of others who were falsely expecting PhysX to run really well with just one card. So the question becomes: how much development time and money do you spend making improvements that only a small portion of the customer base is going to be able to enjoy? nVidia seems to approach it as a chicken-and-egg problem, and is willing to interrupt that cycle by sending in a team to work closely with the developers to make really whiz-bang-WOW factor improvements which will hopefully draw in more people willing to make the jump into GPU-accelerated PhysX.

I know that it's sort of paradoxical to say that 1. I would hate for people to waste money on such a low return on investment, but that 2. for the games that feature well-done PhysX, I am really glad I have that damned 580 to handle it. Because it means I can have my cake and eat it too, I guess. It doesn't make good financial sense to invest in a PhysX card "just because," but if you happen to have a good one (which, again, estimating for a 670 or similar card, you wouldn't want to go lower than the GTX 460 with a 256-bit memory bus), plugging it in lets you get more life out of the card and enjoy that low-margin but really visually impressive feature, in games that have it.

Really am sorry to say that the 260 will probably just slow things down, even if you're going with a higher-end card from last gen as your rendering card to save money. It'd be nice if some of the game suggestions, like the ones in Batman: AA and AC, were accurate and you could use an 8800-era high-end card for PhysX, but that's just not the case. The PhysX card has to be able to keep up with the rendering card, or you might as well just save the power draw and let the rendering card pull double duty, complete with massive framerate hit :ohdear:


spasticColon
Sep 22, 2004

In loving memory of Donald Pleasance
I'm having this weird issue with my Sapphire HD7850 in my main rig. After installing the newest driver, the card makes a weird creaking noise for about a second after the monitor goes into sleep/standby mode, and it keeps doing it at random intervals until I move the mouse and wake up the monitor; then the noise goes away. The card is working just fine otherwise, so what's the deal? :iiam:

Verizian
Dec 18, 2004
The spiky one.
I've seen a few reviews of the 3GB 660 Tis compared with an unlabelled 670 that still outperforms them. Would it be reasonable to assume this was the more expensive 3GB 670 or a baseline 2GB model?

Someone mentioned that the limited memory bandwidth means that even the 2GB 660 Tis are "wasting" memory, so how would that affect the 3GB model?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Got a link? Anything we say would be based on speculation without seeing stuff like how many physical DRAM chips are on the 3GB 660 Ti, the clock rates of the cards involved, and the reliability of the review setups.

Kramjacks
Jul 5, 2007

Right now I have an EVGA GTX 570 SC and I'm thinking of upgrading to a GTX 670. I use a 1920x1080 120Hz/3D monitor. If I were to get the new card and leave the 570 in for PhysX, would my Corsair 650W power supply be enough? My CPU is an i5-3570K overclocked to 4.1GHz.

My motherboard doesn't support SLI, does that matter for running an extra card for PhysX?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Verizian posted:

I've seen a few reviews of the 3GB 660 Tis compared with an unlabelled 670 that still outperforms them. Would it be reasonable to assume this was the more expensive 3GB 670 or a baseline 2GB model?

Someone mentioned that the limited memory bandwidth means that even the 2GB 660 Tis are "wasting" memory, so how would that affect the 3GB model?

It's not quite as simple as "limited bandwidth means they're wasting memory" - it's more that with a 192-bit bus, 2GB necessarily runs asynchronously. Usually, if your memory bus width isn't a power of two, neither is your RAM amount. On modern cards, that has ordinarily meant that a non-power-of-two bus gets a non-power-of-two amount of RAM so it can run synchronously: either 768MB or 1.5GB for a 192-bit bus, and 1.5GB or 3GB for a 384-bit bus.

It means that, as Factory Factory put it in a discussion, nVidia's got a sort of black-box uncertainty going on with how they're handling it beyond simply allocating it in an unusual way w/r/t how much RAM is on each memory controller. It's not the first time they've done it, but it's one of the very few, and it's weird.

That's a different issue entirely though from the 3GB 660Ti. There, while that is a more computationally normal configuration, the fact that it's a 192-bit bus rather than a 384-bit bus means that there's not really enough throughput to take advantage of the higher RAM. If the price premium isn't sky-high it might be the one to get for SLI, especially if you're going for higher resolutions on a budget, since it does allow for synchronous operation of the RAM as well as parallel operation combining 192 and 192 to get an effective 384-bit bus in SLI.
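
To make the bus-width arithmetic above concrete, here's a quick sketch (my own illustration, not from any review - it assumes 32 bits per memory controller and treats "synchronous" as every controller carrying the same amount of DRAM, which glosses over actual chip densities):

code:

  # Why some VRAM sizes sit evenly on a given bus and others don't.
  # Assumes 32-bit memory controllers; "even" here means each controller
  # carries the same amount of DRAM (the synchronous case discussed above).

  def vram_layout(bus_width_bits, vram_mb):
      controllers = bus_width_bits // 32      # e.g. 192-bit bus -> 6 controllers
      per_controller, remainder = divmod(vram_mb, controllers)
      return controllers, per_controller, remainder == 0

  for bus, vram in [(192, 768), (192, 1536), (192, 2048), (192, 3072), (384, 3072)]:
      n, per, even = vram_layout(bus, vram)
      layout = f"{per} MB per controller" if even else "uneven split (asymmetric)"
      print(f"{bus:>3}-bit bus, {vram:>4} MB over {n} controllers: {layout}")

Run it and the 2GB-on-192-bit case is the only one that doesn't divide evenly, which is exactly the odd-one-out configuration being described.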

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Agreed posted:

as well as parallel operation combining 192 and 192 to get an effective 384-bit bus in SLI.

Ahhhhh that's not how that works. VRAM contents are mirrored between the cards, because each GPU can only work on the contents of its own RAM yet they're rendering the same scene. You can't even load different CUDA programs on cards in SLI; you have to break the SLI pairing.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

Ahhhhh that's not how that works. VRAM contents are mirrored between the cards, because each GPU can only work on the contents of its own RAM yet they're rendering the same scene. You can't even load different CUDA programs on cards in SLI; you have to break the SLI pairing.

You can't use CUDA with SLI at all, unless they've changed something (though you can use PhysX - it causes your performance to tank, but you can do it). But that's not really what this post is for - I'm not sure you're interpreting what I wrote correctly. You do get "effectively parallel" memory access in SLI, even though each card is still running the same bus speed on its own. Faster access to the same amount of memory, since while it is mirrored, what's being operated on is more limited. I'm not saying, and this is where I am concerned you're misinterpreting me, that you get more addressable space than if not running in SLI. Two 1GB cards in SLI don't get you 2GB of VRAM, obviously, and two of the 2GB 660 Tis in SLI are still going to have SOME performance penalty because of asynchronous operation, though exactly what or how we don't know.

But in upping the VRAM to 3GB, it's now running synchronously on a 192-bit bus; I doubt a single card has the throughput to address that much RAM before the GPU chokes. By accessing it effectively in parallel, it might actually be able to make use of it.

This is the same reason we recommend that the price:performance guys willing to dual-card for higher-resolution screens check into the 2GB 6950s over the 1GB models. Except in this case it also maybe kinda solves a problem for the 660 Ti's high-resolution performance, but only when in SLI.

Agreed fucked around with this message at 18:51 on Aug 18, 2012

Longinus00
Dec 29, 2005
Ur-Quan

Whale Cancer posted:

Wow, that's well under what I was expecting. I thought 550W bare minimum.

In case you find my claim hard to believe, I'll back it up with some cold hard numbers. A good first-order approximation of how much power your system needs is to add up the maximum power requirements of all your system components. In your case it's 77W (CPU) + 150W (660 Ti) + 50W (everything else), which gives you 277W. In the real world, components don't actually hit their max quoted TDP figures, so this approximation can probably be reduced to something like 225W. This is right around the 50% sweet spot for a 400-450W power supply (power supplies tend to be most efficient at 50% usage).

For real-world confirmation of these numbers, check out the AnandTech review of the 660 Ti. At the wall they measured a power draw of ~310W in an overclocked Sandy Bridge-E + 660 Ti system. Taking into account power supply efficiency, that means the computer was pulling less than 300W (probably in the 280s or so) from the power supply. The processor you have uses way, way less power than an SB-E, so my 225W estimate is about right.

Another thing that's important is the idle power draw of your system, because that's probably where you'll be spending the majority of your time. This is a bit harder to calculate, but we can use the AnandTech review as a guide. They clock their beast of a system at around 110W with a non-overclocked, non-SB-E processor and a 1200W beast of a power supply. Taking into account power supply efficiency as well as the power usage difference between an OC'd SB-E and your CPU, I'd guess that you're going to idle at around 80W or less. With a 650W PSU like you were considering, you're going to be below 20% usage (closer to 10%), which means your efficiency drops like a rock. With a 400-450W PSU you're still below 20%, but not nearly by so much.
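
If anyone wants to rerun that estimate with their own parts, the arithmetic is simple enough to script (a sketch of my own; the TDP numbers and the ~80% derating are the same ballpark figures used above, not measurements):

code:

  # Rough PSU-sizing estimate following the reasoning above: sum worst-case
  # TDPs, derate because parts rarely peak simultaneously, then compare the
  # result against the ~50% load sweet spot of a few PSU sizes.

  def estimate_load(tdp_watts, derate=0.8):
      worst_case = sum(tdp_watts.values())
      return worst_case, worst_case * derate

  parts = {"CPU (i5)": 77, "GTX 660 Ti": 150, "everything else": 50}
  worst, typical = estimate_load(parts)
  print(f"Worst case: {worst:.0f} W, derated estimate: {typical:.0f} W")

  for psu_watts in (400, 450, 650):
      print(f"{psu_watts} W PSU -> {typical / psu_watts:.0%} load at the derated estimate")

With those inputs you get 277W worst case, roughly 220-225W derated, and the 400-450W units land near the 50% mark while a 650W unit sits around a third of capacity under load.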

Chuu
Sep 11, 2004

Grimey Drawer
I am on the verge of buying a GTX 680 for an upgrade from a HD4850 because I want to experiment with some CUDA apps, want to future-proof my system for a 27" monitor upgrade, and have money to burn. Some questions:

  1. Is a 500W power supply enough for a system with a GTX 680, i5-2500, and one hard drive? I've seen posts recommending at minimum a 550W power supply but looking at these numbers it looks like total system power consumption never goes over 400W without overvolting.

  2. What exactly triggers idle and long idle states?

  3. Do superclocked cards affect idle states or idle power consumption meaningfully, assuming they are not overvolted? What if they are overvolted?

  4. I've heard people say that these high-powered cards dramatically affect their power bills. If I'm doing my math right, assuming 112W idle power consumption @ $0.142/kWh, wouldn't running a computer 24/7 be only $11.45 a month at idle?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Chuu posted:

I am on the verge of buying a GTX 680 for an upgrade from a HD4850 because I want to experiment with some CUDA apps, want to future-proof my system for a 27" monitor upgrade, and have money to burn. Some questions:

First, don't buy a 680. It isn't meaningfully higher in performance than a 670, including with high resolutions. Just leaving money on the table, really (or, to use your term, burning it just to see it go to ash).

Second, consumer Kepler GK104 cards are poo poo for CUDA... with some caveats. The biggest issue is that the chip is heavily optimized for rendering and rather handicapped for compute performance, to the tune of 1/24 DP performance (compared to 1/8 DP performance on Fermi GF110), which means you sometimes get rather lower performance even with the higher number of CUDA cores.

All CUDA cores are not created equal, and nVidia is carefully segmenting their gaming and workstation/compute markets this go-around. That said, you can still do CUDA stuff, but it's racing the GTX 580 and does not always win, despite the fact that it has many more CUDA cores and arguably better cache management. (ATI still totally licks 'em for GPGPU, but application support is not yet well established for ATI's GPGPU.)

The 580's hot-clocked, two-per-execution-cycle CUDA cores in the Fermi SM architecture, plus less intentional kneecapping on the part of nVidia, mean that if you really intend to do compute-heavy things - if CUDA is a serious investment - you might want to hold off until a non-handicapped Kepler comes along (and spend a lot more than the difference between a 670 and a 680, too...).
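
To put rough numbers on that 1/24 vs. 1/8 point, here's a back-of-the-envelope peak-throughput comparison using the reference core counts and clocks (peak figures only - real CUDA workloads won't hit these, it just shows why the 680 can lose to the 580 in double-precision work):

code:

  # Peak GFLOPS estimate: cores x 2 ops/clock (FMA) x shader clock, then the
  # architecture's DP ratio. Reference-spec numbers, not measured throughput.

  def peak_gflops(cores, shader_clock_ghz, dp_ratio):
      sp = cores * 2 * shader_clock_ghz
      return sp, sp * dp_ratio

  cards = [("GTX 580 (GF110)", 512, 1.544, 1 / 8),
           ("GTX 680 (GK104)", 1536, 1.006, 1 / 24)]
  for name, cores, clk, ratio in cards:
      sp, dp = peak_gflops(cores, clk, ratio)
      print(f"{name}: ~{sp:.0f} GFLOPS SP, ~{dp:.0f} GFLOPS DP")

That works out to roughly 1580/200 GFLOPS (SP/DP) for the 580 versus roughly 3090/130 for the 680 - three times the cores, but noticeably less double-precision headroom.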

Third, there's really not a big difference in power consumption between idle and long-idle in terms of your total system power draw over a meaningful period of time.

Fourth, at least from EVGA, but I would imagine from anyone: Superclocked cards have their full adaptive power and clock tech going just like any other cards, and when you don't need their high performance, they automatically downclock and drop to minimum voltage - regardless of how you have them set for higher clock speeds. This kind of rules :)

Note, in overclocking I've found that even though the Power Target slider and the hardware and driver-level controls are good, you can make an unstable overclock stable by manually upping the voltage. Even though it would normally raise the voltage as needed, because of what can be rather large swings in clock and voltage in certain circumstances, my hypothesis is that the voltage doesn't get where it needs to be quite in time and causes, basically, a brown-out for the GPU and memory. By manually raising the voltage, you are setting it to run full voltage all the time (instead of likely hanging out one or two steps from top voltage most of the time, then cranking up when called for), preventing those large voltage swings from occurring in the first place. That way it already has the juice required when the clockrate jumps to a level that needs that much power to run stably.

Fifth, aggressive fan profiles and keeping the card cool are paramount. More so than even on Fermi, where it was pretty damned important already; in this case you actually get automatic clockrate reduction at temperatures which would be considered safe - a 13MHz step at 70°C and 80°C, and I want to say some sharper reduction at 90°C, although something's wrong if the card gets that hot, I think. It runs very efficiently even at the highest voltage and with an extraordinary overclock.

Agreed fucked around with this message at 20:19 on Aug 18, 2012

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Chuu posted:

I am on the verge of buying a GTX 680 for an upgrade from a HD4850 because I want to experiment with some CUDA apps, want to future-proof my system for a 27" monitor upgrade, and have money to burn. Some questions:

Don't. Get a 670. It's almost all of the performance for $100 less.

quote:

  1. Is a 500W power supply enough for a system with a GTX 680, i5-2500, and one hard drive? I've seen posts recommending at minimum a 550W power supply but looking at these numbers it looks like total system power consumption never goes over 400W without overvolting.

500W is just fine. If you check the testbed, that's an overclocked hexacore system using at least 40W more power than an i5-2500K would with a similar overclock.

quote:

  2. What exactly triggers idle and long idle states?

Idle is "just a desktop, not hard, doop de doo, maybe decode some video, tum te tum" state. Most 2D applications are run in this power state. A notable exception is when you hook up multiple monitors with different display resolutions; because of some hocus pocus with VRAM clock and refresh rate synchronization, this causes cards to increase their clock rates even on a desktop display. This is much less of a problem with a modern card than it used to be, though, because there are many more available power states to switch into.

Long Idle is the "turn off monitor" state where literally nothing is being displayed, so the GPU has to do no work. It's usually set for a computer to trigger that state a few minutes before putting the entire system into Sleep mode. It's also the power state for the second (and third etc.) GPUs in a CrossFire or SLI setup when game rendering is not occurring.

quote:

  3. Do superclocked cards affect idle states or idle power consumption meaningfully, assuming they are not overvolted? What if they are overvolted?

Because of BIOS fuckery and licensing issues (see the third page of the review you linked), the standard GeForce 680 BIOS gives up some control of voltages. On the bright side, this means they change as per the card's regular logic even when you increase their "full on" target value, so idle power is not affected. On the downside, dynamic voltage is harder to test for stability and can interfere with extreme overclocking.

A "Superclocked" card is just one of EVGA's factory-overclocked SKUs. It's binned a bit better, has the clock frequencies tuned up a bit, maybe has a minor stock overvolt (dynamic, not static). Actually, sometimes they lower the top boost clock voltage, since these cards tend to have quite a bit of headroom. The EVGA 670 SC actually drops .0125V vs. the reference 670.

Only top-end, ridiculously expensive tweaker cards have any way to do static overvolting and so affect idle power consumption, and that method requires a hardware dongle, a custom liquid cooling system, and also probably a brain disease.

quote:

  4. I've heard people say that these high-powered cards dramatically affect their power bills. If I'm doing my math right, assuming 112W idle power consumption @ $0.142/kWh, wouldn't running a computer 24/7 be only $11.45 a month at idle?

Note that this generation of high-powered card takes far less power than last generation's. If you go back to Nehalem and the GeForce 400 series, then those 1kW power supplies weren't just for show. You could draw that much power on an overclocked i7-920 and a pair of GeForce 480s in SLI, easily. Actually, that probably wouldn't be enough. A fully-pushed i7-920 can pull almost 400W including motherboard and cooling, and the 480 had a TDP of 300W per card.

And yeah, your math checks out. That's still a sizable percentage of a typical power bill, per-person, though. Sleep your system when you're not using it.
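
For anyone who wants to redo that bill estimate with their own wattage and electricity rate, it's just this (the figures below are the 112W and $0.142/kWh from the question):

code:

  # Idle-cost arithmetic from the question above: watts -> kWh/month -> dollars.

  def monthly_cost(idle_watts, usd_per_kwh, hours_per_day=24, days=30):
      kwh = idle_watts / 1000 * hours_per_day * days
      return kwh, kwh * usd_per_kwh

  kwh, dollars = monthly_cost(112, 0.142)
  print(f"24/7 idle: {kwh:.1f} kWh/month -> ${dollars:.2f}/month")

  # Sleeping the machine 16 hours a day cuts the idle portion to a third:
  kwh_s, dollars_s = monthly_cost(112, 0.142, hours_per_day=8)
  print(f"8 h/day idle: {kwh_s:.1f} kWh/month -> ${dollars_s:.2f}/month")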

    e:f;b

    Longinus00
    Dec 29, 2005
    Ur-Quan

    Chuu posted:

    I am on the verge of buying a GTX 680 for an upgrade from a HD4850 because I want to experiment with some CUDA apps, want to future-proof my system for a 27" monitor upgrade, and have money to burn. Some questions:

    1. Is a 500W power supply enough for a system with a GTX 680, i5-2500, and one hard drive? I've seen posts recommending at minimum a 550W power supply but looking at these numbers it looks like total system power consumption never goes over 400W without overvolting.

    2. What exactly triggers idle and long idle states?

    3. Do superclocked cards affect idle states or idle power consumption meaningfully, assuming they are not overvolted? What if they are overvolted?

    4. I've heard people say that these high-powered cards dramatically affect their power bills. If I'm doing my math right, assuming 112W idle power consumption @ $0.142/kWh, wouldn't running a computer 24/7 be only $11.45 a month at idle?

    4. Currently, cards don't typically run at idle speeds if you have two or more monitors. In the worst-case scenario, your card could be running at full speed at all times.

    Chuu
    Sep 11, 2004

    Grimey Drawer
    Thanks for the replies, I'll be looking at the 670s now as well. As for CUDA, my gaming system doubles as a development platform, so I'm not too concerned about performance at this stage. If I do come up with anything interesting, I could convince my company to foot the bill for a full Tesla setup.

    Longinus00 posted:

    4. Currently, cards don't typically run at idle speeds if you have two or more monitors. In the worst-case scenario, your card could be running at full speed at all times.

    Will def. be looking more into this; I plan on using it with a dual-monitor setup. Do you have a link to any articles that explain more about this and could quantify it?

    Chuu fucked around with this message at 20:42 on Aug 18, 2012

    Factory Factory
    Mar 19, 2010

    This is what
    Arcane Velocity was like.
    http://www.legitreviews.com/article/1461/19/

    Chuu
    Sep 11, 2004

    Grimey Drawer

    Thanks for the link. I was planning on buying a 27" as a 2nd monitor and keep using a 24" as a secondary display. Will buying a cheap quadro nvs for the 2nd "2D only" monitor fix this?

    Agreed
    Dec 30, 2003

    The price of meat has just gone up, and your old lady has just gone down

    Chuu posted:

    If I do come up with anything interesting, I could convince my company to foot the bill for a full Tesla setup.

    :stare:

    Where can I send an invoice for the consultation fee? It's pretty cheap, just one middle of the road Tesla, no big deal.

    :qq:

    Factory Factory
    Mar 19, 2010

    This is what
    Arcane Velocity was like.

    Chuu posted:

    Thanks for the link. I was planning on buying a 27" as a 2nd monitor and keep using a 24" as a secondary display. Will buying a cheap quadro nvs for the 2nd "2D only" monitor fix this?

    1) As I said, it's much less of a problem with current cards because there are more power states available. Plus they use less power in general.

    2) i5-2500, right? Use the IGP.

    MixMasterMalaria
    Jul 26, 2007

    That's really crazy, I assume that hasn't changed with modern cards? I usually keep my second (lower resolution) monitor off when I'm not using it, but I'm not keen on a tangible power bill increase, so would I have to disable it via Windows to get the card to run at idle?

    Edit:

    Factory Factory posted:

    1) As I said, it's much less of a problem with current cards because there are more power states available. Plus they use less power in general.

    2) i5-2500, right? Use the IGP.

    Some of us are on P67 motherboards so that wouldn't be an option, right?

    Chuu
    Sep 11, 2004

    Grimey Drawer

    Agreed posted:

    :stare:

    Where can I send an invoice for the consultation fee? It's pretty cheap, just one middle of the road Tesla, no big deal.

    :qq:

    There is a pretty big qualifying "if" in my statement, but aren't tesla modules only $2-$5K or so, or am I way out of the ballpark?

    (edit: guess I was way out of the ballpark, but really, $10-$15K for a 2x Tesla K10 system is still cheap compared to developer time. I'm still at the stage where I am not even sure what the difference between the high-end Quadro and Tesla-powered workstations is, though, so continuing on this topic would probably be a derail)

    Chuu fucked around with this message at 21:21 on Aug 18, 2012

    Chuu
    Sep 11, 2004

    Grimey Drawer

    Factory Factory posted:

    2) i5-2500, right? Use the IGP.

    Didn't even think about that, thanks.

    Agreed
    Dec 30, 2003

    The price of meat has just gone up, and your old lady has just gone down

    Chuu posted:

    There is a pretty big qualifying "if" in my statement, but aren't tesla modules only $2-$5K or so, or am I way out of the ballpark?

    (That being said, I'm obviously going to be doing a ton more research before even thinking about that step)

    No, you're right about the price, though I believe the top-enders get into the $7K range - it's just, holy moly, that's rather more than I could justify slinging in my setup, with my money, given that its CUDA workload is fairly light these days. But for a company willing to set a developer up, it's just another line item on the budget, I guess. :shobon:

    Factory Factory
    Mar 19, 2010

    This is what
    Arcane Velocity was like.

    MixMasterMalaria posted:

    That's really crazy, I assume that hasn't changed with modern cards? I usually keep my second (lower resolution) monitor off when I'm not using it, but I'm not keen on a tangible power bill increase, so would I have to disable it via Windows to get the card to run at idle?

    I have said multiple times ITT and in the past few pages that current-gen cards (and last-gen Radeons, for that matter) are much less affected because 1) they use less power than the GeForce 480 and 580 in that link, and 2) they have more discrete power states to switch to, vs. that 580 that just goes straight to Full On.

    Since the limitation is also related to TMDS and RAMDAC generators, you can skip the problem entirely by using only DisplayPort outputs on the card, no DVI, HDMI, or VGA. Stating the obvious: this only works if your card has enough DP outputs to cover every monitor you want to plug in. And you can't use any passive adapter dongles, since that just passes signals from the TMDS or RAMDAC generators.

    quote:

    Some of us are on P67 motherboards so that wouldn't be an option, right?

    Yes.

    MixMasterMalaria
    Jul 26, 2007

    Factory Factory posted:

    I have said multiple times ITT and in the past few pages that current-gen cards (and last-gen Radeons, for that matter) are much less affected because 1) they use less power than the GeForce 480 and 580 in that link, and 2) they have more discrete power states to switch to, vs. that 580 that just goes straight to Full On.

    Since the limitation is also related to TMDS and RAMDAC generators, you can skip the problem entirely by using only DisplayPort outputs on the card, no DVI, HDMI, or VGA. Stating the obvious: this only works if your card has enough DP outputs to cover every monitor you want to plug in. And you can't use any passive adapter dongles, since that just passes signals from the TMDS or RAMDAC generators.


    Yes.

    Sorry to make you reiterate your exposition on this topic, but that's what you get for being both knowledgeable and helpful! Thanks for dispelling my ignorance and confusion.

    ~Coxy
    Dec 9, 2003

    R.I.P. Inter-OS Sass - b.2000AD d.2003AD
    On the PSU wattage thing, I didn't see anyone mention it, so I thought I'd point out that the total wattage of your PSU is completely meaningless without knowing the amperage on the 12V rail(s).

    Some companies make great PSUs with almost all the total wattage available on the useful 12V rail.
    Some companies make lovely PSUs with significant amounts of the total wattage on the practically useless 5V rail. I assume it's a cheaper way to get an inflated figure for the front of the box.
    Some companies make both!

    34+A on the 12V seems to be a good amount for a decent CPU with some overclock, reasonably good GPU, and a drive or two.
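
    Putting numbers on it: the watts available on the 12V rail are just amps times volts, so a quick comparison (my own arithmetic, using the 34A figure above plus a couple of hypothetical rails for contrast) looks like this:

    code:

      # +12V rail capacity: amps x 12 volts = watts available for the CPU and GPU,
      # which is the figure that actually matters for the builds discussed above.

      def rail_watts(amps, volts=12.0):
          return amps * volts

      for label, amps in [("34 A rail", 34), ("40 A rail", 40), ("25 A budget rail", 25)]:
          print(f"{label}: {rail_watts(amps):.0f} W on +12V")

    A 34A rail gives you about 408W on +12V, which comfortably covers the ~280W worst-case draw worked out earlier in the thread; a 25A rail only gives you 300W, no matter what the box claims for total wattage.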

    Factory Factory
    Mar 19, 2010

    This is what
    Arcane Velocity was like.

    ~Coxy posted:

    Some companies make lovely PSUs with significant amounts of the total wattage on the practically useless 5V rail. I assume it's a cheaper way to get an inflated figure for the front of the box.

    It's one part legacy. Before GPUs and power-managing CPUs came into fashion, much of a system's power was drawn on the 3.3V and 5V rails. PSUs that still have the older-style proportions are just repackaged older PSUs with different connectors tacked on. Cheap to make, little R&D expense.

    And also, are you saying that charging USB gizmos is useless?! Dat's pure +5V right dere.

    Captain Libido
    Jun 17, 2003

    Factory Factory posted:

    For Premiere's Mercury Engine, the one that does rendering, it's still mostly Nvidia-only with only a few official Radeons in Macbook Pros. Hacking in support for Nvidia cards is easy, but hacking in support for Radeons is not.

    For Photoshop's Mercury Engine (a different one), that's OpenCL based and works on pretty much everything, with a larger list of officially supported GPUs which includes even Intel Ivy Bridge IGPs.

    Not sure if this belongs in the parts-picking thread, but I just caught this reading the last page, and after reading Agreed's reply about Kepler vs. Fermi a few posts before, I thought I would ask here. A co-worker recently asked about a video card for his new build for Adobe CS6, going with an i7-3770K. He'll be mainly using Premiere and After Effects and in keeping to his budget, ~$300 is his max to spend on a GPU. Two weeks ago, he was leaning toward a GTX 570 but with the GTX 660 Ti at that price point, it seems like a better card on paper. Any input would be appreciated as he is ready to pull the trigger, but unsure of what card to get now.

    zer0spunk
    Nov 6, 2000

    devil never even lived

    Captain Libido posted:

    A co-worker recently asked about a video card for his new build for Adobe CS6, going with an i7-3770K. He'll be mainly using Premiere and After Effects and in keeping to his budget, ~$300 is his max to spend on a GPU. Two weeks ago, he was leaning toward a GTX 570 but with the GTX 660 Ti at that price point, it seems like a better card on paper. Any input would be appreciated as he is ready to pull the trigger, but unsure of what card to get now.

    Premiere has no support for the 600 series yet. I'm on an overclocked 3770K as well, and it really hasn't been a problem being software-only for MPE.

    You can add the card manually so it's recognized by the program, but according to devs on the Adobe forums it's not as simple as that with this generation. They say it takes more than that to officially support Kepler, and that they are working on it. Once MPE can take advantage of the GPU, you'd gain the ability to have certain things run in real time that normally need a render, like select transitions.

    On the plus side, After Effects was the same deal, and they pushed out a .0.1 release about a month after the 680 came out, so it's just a matter of time before Premiere gets the same patch.
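
    For reference, the manual-add trick mentioned above just means putting your card's name into Premiere's GPU whitelist file - that's the community-reported approach for CS6, not an official Adobe mechanism, and as noted it apparently isn't sufficient on its own for Kepler yet. A sketch (the install path and card string are assumptions; match whatever name your driver reports):

    code:

      # Hedged sketch: append a card name to Premiere Pro CS6's Mercury Playback
      # Engine whitelist (cuda_supported_cards.txt). Run with admin rights; the
      # path and card string below assume a default Windows install.
      from pathlib import Path

      whitelist = Path(r"C:\Program Files\Adobe\Adobe Premiere Pro CS6") / "cuda_supported_cards.txt"
      card_name = "GeForce GTX 660 Ti"

      lines = whitelist.read_text().splitlines()
      if card_name not in lines:
          whitelist.write_text("\n".join(lines + [card_name]) + "\n")
          print(f"Added {card_name} to {whitelist}")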

    Gunjin
    Apr 27, 2004

    Om nom nom
    If it were my money, and I was using the system for work, I'd get the 570.

    Kramjacks
    Jul 5, 2007

    AMD has released an updated firmware for reference HD7950s. It changes the core clock from 800 MHz to 850 MHz and adds something called PowerTune with Boost which can increase the core clock to 925 MHz when an application asks for it.

    New cards manufactured from mid-August on will ship with this firmware.

    http://www.tomshardware.com/news/HD7950-GTX660_Ti-Radeon-BIOS_Update-amd,16897.html

    TheRationalRedditor
    Jul 17, 2000

    WHO ABUSED HIM. WHO ABUSED THE BOY.
    I'm getting a brand spankin' new 670 tomorrow, and I've been reading some scary things about Nvidia's last few rounds of driver releases. Is there a general consensus on the least wretched beta version at the moment?

    Berk Berkly
    Apr 9, 2009

    by zen death robot

    Glen Goobersmooches posted:

    I'm getting a brand spankin' new 670 tomorrow, and I've been reading some scary things about Nvidia's last few rounds of driver releases. Is there a general consensus on the least wretched beta version at the moment?

    It's okay as far as beta releases go:

    http://www.anandtech.com/show/6069/nvidia-posts-geforce-30479-beta-drivers
    http://techreport.com/discussions.x/23210

    madsushi
    Apr 19, 2009

    Baller.
    #essereFerrari
    My EVGA 660 Ti just arrived with two loose screws floating around in the box. Opened it up, and all of the screws securing the PCB to the sheath are stripped, and the two floaters had obviously fallen out completely. Just sent it back to Amazon; here's hoping the next one arrives in better shape.

    Agreed
    Dec 30, 2003

    The price of meat has just gone up, and your old lady has just gone down

    Glen Goobersmooches posted:

    I'm getting a brand spankin' new 670 tomorrow, and I've been reading some scary things about Nvidia's last few rounds of driver releases. Is there a general consensus on the least wretched beta version at the moment?

    Last page - in a rush or I'd expand further, if necessary I can come back and do so:

    Factory Factory posted:

    Nvidia has released new WHQL drivers, chock-full of new game profiles and suchlike: 305.68 WHQL.

    E: Oops. If you don't own a 660 Ti, you actually want 305.67. Or just wait a bit for them to get posted officially.

    Agreed posted:

    This driver (new post note: 305.67 for anyone except 660 Ti owners, or 305.68 ONLY FOR 660 Ti owners) is A Very Good Driver, and I recommend it. The current beta driver from the nVidia drivers page is rather out of date, having been released July 3, and doesn't include profiles for many current games.

    nVidia has stated intent to push a new WHQL driver which will presumably wrap both of these functionalities (660Ti features + everything else) soon, and further enhance compatibility with major releases that are ongoing, including a Sleeping Dogs SLI profile for anyone doing that.

    These are developer drivers, not beta drivers, and they're several steps ahead of the July 3 beta drivers linked in that they provide proper profile support for a number of games either recently released or coming up (though notably NOT Sleeping Dogs; manual SLI compatibility bits have been released and a profile will be part of the WHQL release for sure - nVidia has more or less said to expect that release any day now).

    Make sure to get the Desktop version, and to match your operating system carefully. The wrong drivers will straight up not work and possibly cause a total pain in the rear end.

    movax
    Aug 30, 2008

    Glen Goobersmooches posted:

    I'm getting a brand spankin' new 670 tomorrow, and I've been reading some scary things about Nvidia's last few rounds of driver releases. Is there a general consensus on the least wretched beta version at the moment?

    I powered on mine with 305.67, worked great over the weekend.

    Which reminds me, Agreed, would you mind posting a screenshot of your fan profile for your 680?

    TheRationalRedditor
    Jul 17, 2000

    WHO ABUSED HIM. WHO ABUSED THE BOY.
    Thanks a lot, you two. I would've had no idea how to find 305.67 otherwise, as I've been running AMD a while and Catalyst at least makes up for its questionable driver behaviors with ease of version identification!

    Endymion FRS MK1
    Oct 29, 2011

    I don't know what this thing is, and I don't care. I'm just tired of seeing your stupid newbie av from 2011.
    New AMD pricing strategy/deals:

    http://www.anandtech.com/show/6175/amd-radeon-hd-7800-series-price-cuts-new-game-bundle-inbound

    And despite the title, it applies to the 7950 too. Drops on 7950, 7870, and 7850, plus bundled with Sleeping Dogs. I'm perfectly fine with my 6950, but I keep itching for a 7950. I'm forcing myself to wait for 8xxx because I'll feel stupid upgrading every generation.

    Agreed
    Dec 30, 2003

    The price of meat has just gone up, and your old lady has just gone down

    movax posted:

    I powered on mine with 305.67, worked great over the weekend.

    Which reminds me, Agreed, would you mind posting a screenshot of your fan profile for your 680?

    No problem, here's my whole overclocking setup. EVGA GTX 680 SC+, so it's got a base clockrate boost already on both the GPU/cores and the VRAM. If the offsets were at +/- zero, it'd be at SC+ clockrates. So the +81 and +300 are on top of the factory overclock. Feel fairly lucky to have managed a good sample twice in a row, my GTX 580 SC was a salty overclocker on air, too, though it ran hotter and louder than this card when really involved.

    End result of those settings is 100% stable, DX9/10/11, max dynamic clockrate (where it will jump to and hang out at in the most demanding applications) of 1270MHz GPU/cores and 6800MHz VRAM. The voltage being raised seems necessary for stability; without it, when the clock jumps in the rather extreme way that it can (e.g. going from 900MHz-ish to 1270MHz in a really demanding section), it seems like the voltage isn't there quiiiiite quick enough for true stability. Since it would just be raising to that voltage automatically anyway, setting it there manually and having an ample power well for the card to draw on at any time seems to be the ticket.



    I've only ever seen it get past 60ºC-65ºC in torture tests, where it will hang out around 70ºC or so.

    Agreed fucked around with this message at 05:02 on Aug 21, 2012

    Alereon
    Feb 6, 2004

    Dehumanize yourself and face to Trumpshed
    College Slice

    Endymion FRS MK1 posted:

    New AMD pricing strategy/deals:

    I keep hearing that Sleeping Dogs is amazing, and the Radeon HD 7870 looks like a compelling deal at $249, especially if you were planning on buying the game anyway at $50.


    TheRationalRedditor
    Jul 17, 2000

    WHO ABUSED HIM. WHO ABUSED THE BOY.

    Endymion FRS MK1 posted:

    New AMD pricing strategy/deals:

    http://www.anandtech.com/show/6175/amd-radeon-hd-7800-series-price-cuts-new-game-bundle-inbound

    And despite the title, it applies to the 7950 too. Drops on 7950, 7870, and 7850, plus bundled with Sleeping Dogs. I'm perfectly fine with my 6950, but I keep itching for a 7950. I'm forcing myself to wait for 8xxx because I'll feel stupid upgrading every generation.

    Wow, if you're into this stuff, this is literally like watching a heated horse race between two champion steeds. I won't lie, the Borderlands 2 package deal was a critical component in my decision to buy a new 670 two days ago. These promotional tactics are as potent as they are insidious!

    Sleeping Dogs isn't a bad counter on AMD's behalf, though; it's got a lot of good buzz from what I've seen. However, it's almost a certainty that Borderlands 2 will be a somewhat more popular game.
