Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Did you guys run multiple tests and average the results, or what? Is everyone definitely using the same PCIe bandwidth? How about quality settings - is everyone on the same page as far as the broad-strokes performance-to-high-quality slider goes, and the global profile? Same drivers? (The dev drivers posted earlier probably aren't nearly as common as the current beta drivers, but, you know - methodological question.)

Just curious how you came by the results.

Agreed fucked around with this message at 14:54 on Sep 12, 2012

Whale Cancer
Jun 25, 2004

We both just ran the test once, and we used the exact same settings. As for PCIe bandwidth, I haven't changed anything - it's whatever it runs at stock. I know we're both running the same Nvidia driver as well.

Rigged Death Trap
Feb 13, 2012

BEEP BEEP BEEP BEEP

Berk Berkly posted:

Avatar/Post combo win.

Haswell is due out around June/July next year? That almost feels too good to be true. At that point I'm curious whether we'll even have cards like the 7750 or 7770, when you can get very decent-quality graphics at native 1080p without a discrete card.



I have no doubt that they'll stick around - if only so you can run multiple screens off them, get cheap compute power, or outfit AMD-based computers.

Still, this means mini-ITX just got a shitload more valuable: great gaming at around $300 for an entire system.

Wozbo
Jul 5, 2010

Whale Cancer posted:

We both just ran the test once, and we used the exact same settings. As for PCIe bandwidth, I haven't changed anything - it's whatever it runs at stock. I know we're both running the same Nvidia driver as well.

Some thoughts:

Antivirus being a dick (check it with AV off), or any other random program asking for processor time. I'd average like 3 or 4 runs.
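
If you want to automate the averaging, something like this C sketch does it - shell out to the benchmark N times and average the wall-clock time. The benchmark path is a placeholder; substitute your test's actual command line.

code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define RUNS 4

int main(void) {
    const char *cmd = "./run_benchmark";   /* placeholder - your test's CLI here */
    double total = 0.0;

    for (int i = 0; i < RUNS; i++) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (system(cmd) != 0) {
            fprintf(stderr, "run %d failed\n", i + 1);
            return 1;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("run %d: %.2f s\n", i + 1, secs);
        total += secs;
    }
    printf("average over %d runs: %.2f s\n", RUNS, total / RUNS);
    return 0;
}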

Edward IV
Jan 15, 2006

Factory Factory posted:

Here's Skyrim running on the Intel Haswell GT3 GPU.

https://www.youtube.com/watch?v=uohmFVIAASU

Left: HD 4000 (IVB GT2), 1366x768 Medium detail preset
Right: Haswell GT3, 1920x1080 High detail preset

Those chips are running at the same TDP. Haswell takes the IVB execution unit uarch and doubles the performance per watt.

Next year, "good enough" computing will mean Skyrim at 1080p/high detail :circlefap:

Wow. Though I wonder how Intel is dealing with the memory bandwidth issue that makes AMD's APUs need high speed memory. Speaking of which, it looks like AMD's graphics advantage may have just hit a wall. Now the only advantage their current APUs have is price. Things aren't looking so great for AMD. :smith:

Grim Up North
Dec 12, 2011

Edward IV posted:

Wow. Though I wonder how Intel is dealing with the memory bandwidth issue that makes AMD's APUs need high speed memory.

Intel apparently intends to die-stack some low-power DDR3 directly onto the CPU die, which allows for a very wide bus and would considerably reduce the bandwidth requirements for external RAM.

E: Here's a cool image showing Sony doing it in the PS Vita:

Grim Up North fucked around with this message at 18:06 on Sep 12, 2012

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Edward IV posted:

Wow. Though I wonder how Intel is dealing with the memory bandwidth issue that makes AMD's APUs need high speed memory. Speaking of which, it looks like AMD's graphics advantage may have just hit a wall. Now the only advantage their current APUs have is price. Things aren't looking so great for AMD. :smith:

I'm not sure of the engineering details, but another IDF presentation said that the Haswell GPU basically has a dedicated cache and memory access bus (no longer shared with the CPU), and that the cache bandwidth is 512 GB/s. Coupled with this is a piece of hardware they call a resource streamer, which the CPU can dump data into all at once and then move on to other things; the streamer distributes it to the GPU as it comes ready. That results in fewer RAM hits to push data.

Boten Anna
Feb 22, 2010


someone on that link's comments posted:

Never say never. The reality is that if the rumors of little real progress on the home console front are true, then PCs may already far outstrip the consoles, as of this year or even last year.

If that is the case (and traditionally consoles enjoy a six-month-to-a-year advantage over PCs at launch), then how long do you think it will take Intel, with its constant GPU updates, to catch up to "acceptable" levels of performance? And how long before there are advantages to using that integrated GPU over a discrete one? Directly shared memory space, CPU-to-GPU direct access, etc.

Is this person right or kind of full of poo poo?

With stuff like Lucid starting to exist, might hybrid iGPU/dedicated GPU become something of the norm as a way to take advantage of the aforementioned direct access and shared memory while still having something to throw 200W at to do a bunch of number crunching? Or does this idea have fundamental flaws?

jonathan
Jul 3, 2005

by LITERALLY AN ADMIN
Are dual 670s enough to do 3-monitor 1080p gaming in 3D? This is with an i7 and 8GB RAM.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
With lowered detail settings and AA in some titles, sure. Triple screen and S3D is a ridiculous workload, though. If you want the "full details, AA, solid 60" experience, then you have to talk quad SLI.
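
To put rough numbers on how ridiculous that workload is, a quick back-of-envelope in C (this ignores per-frame costs that don't scale with pixel count, so it's a floor, not an exact multiplier):

code:
#include <stdio.h>

int main(void) {
    long single_1080p = 1920L * 1080;          /* pixels per frame, one screen */
    long triple_s3d   = 3 * single_1080p * 2;  /* three screens, two eyes (S3D) */

    printf("single 1080p: %ld pixels/frame\n", single_1080p);
    printf("triple + S3D: %ld pixels/frame\n", triple_s3d);
    printf("multiplier:   %ldx\n", triple_s3d / single_1080p);  /* 6x */
    return 0;
}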

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Boten Anna posted:

Is this person right or kind of full of poo poo?

With stuff like Lucid starting to exist, might hybrid iGPU/dedicated GPU become something of the norm as a way to take advantage of the aforementioned direct access and shared memory while still having something to throw 200W at to do a bunch of number crunching? Or does this idea have fundamental flaws?

gently caress, I just lost a big effort post. Here's Cliff's notes:

Consoles rarely if ever have actual hardware advantages over same-age computer hardware. What they have are software advantages. A console can count on a specific hardware configuration, a specific feature set of that hardware (e.g. DirectX version), and, especially, on having near-monopoly command over those resources. When you port to a PC, the big problem is that not everybody has balls-to-walls hardware (which limits your market if you want to require a certain level of features or performance), and your program will have to share resources with background tasks and OS overhead.

Heterogeneous compute is this big thing because there's a fundamental trade-off in processor design: a jack of all trades is a master of none. The general-purpose CPUs we use today are very good at handling almost anything that comes their way, but they're not particularly good at any of it compared to a more specialized chip. For example, compare encryption benchmarks on the older Core CPUs without AES-NI vs. newer CPUs with that instruction set: the difference in speed is roughly an order of magnitude. But that came at the cost of additional transistors on the chip, which aren't free or unlimited. GPUs offer the same kind of advantage - enormous speed increases over CPUs on certain tasks - because the silicon is arranged in a specialized manner.
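
If you want to see that AES-NI gap for yourself, here's a minimal sketch in C against OpenSSL's EVP interface (assumes OpenSSL 1.1+ for EVP_CIPHER_CTX_new; modern builds route AES through AES-NI automatically when the CPU supports it). Run it on a pre-AES-NI Core and on a newer chip and compare the throughput.

code:
#include <openssl/evp.h>
#include <stdio.h>
#include <time.h>

int main(void) {
    unsigned char key[16] = {0};  /* toy all-zero key/IV, illustration only */
    unsigned char iv[16]  = {0};
    static unsigned char in[1 << 20];          /* 1 MiB of zeroes */
    static unsigned char out[(1 << 20) + 16];  /* room for block padding */
    int outlen, iters = 512;                   /* ~512 MiB total */

    /* Reportedly you can mask off AES-NI on the same box for comparison
       via the OPENSSL_ia32cap environment variable. */
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);

    clock_t t0 = clock();
    for (int i = 0; i < iters; i++)
        EVP_EncryptUpdate(ctx, out, &outlen, in, sizeof in);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("AES-128-CBC: ~%.0f MiB/s\n", iters / secs);
    EVP_CIPHER_CTX_free(ctx);
    return 0;
}

Build with something like cc aes_bench.c -lcrypto.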

Some consoles already enjoy heterogeneous compute. The PS3, for example: the Cell processor is a big execution core (CPU-like) with a number of coprocessors (GPU-like) on a single die. And the PS3 is a real powerhouse for it, what with the US military building compute clusters out of them and all.

But heterogeneous compute per se isn't the technical hurdle. It exists. It's a thing. It's good. The hurdle is getting heterogeneous compute on the x86 PC platform. The PS3 does heterogeneous compute because the platform was designed for it from the ground up. The PC, on the other hand, is this big evolving amalgam of standards for off-the-shelf components dating back to when drawing overlapping squares was a major real-time computing hurdle.

Lucid is entirely the wrong way to look at it. It's a cool tech, and it will be very useful on heterogeneous compute platforms sorta the way VirtualBox, ESXi or Hyper-V is useful on multi-core systems. But it's not directly related to heterogeneous compute. Lucid virtualizes graphics adapters, so that render commands can be sent to different physical GPUs from a single logical GPU interface. The problem with heterogeneous compute is the hoops that systems have to jump through to exchange data and commands between CPU and GPU in the first place.

Right now, having the CPU send data and commands to a GPU requires a context switch. That is, a thread must be halted and stored, the GPU thread must be read from cache (or worse, RAM) and resumed, and then the data must be copied from CPU-addressed RAM (i.e. main system RAM) to the GPU's video RAM.

Here's the crazy thing: even if you are using an IGP and the VRAM is just a segment of system RAM, that full process still has to happen. The CPU and the GPU maintain their memory addresses entirely separately, they cannot tread on each other's turf, and the GPU is entirely dependent on the CPU to tell it what work to do. That's just how a modern PC works. To change that would mean changing the entire system architecture.
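
You can see that handoff baked right into the GPGPU APIs. Here's a minimal OpenCL host-side sketch in C (error handling stripped, and it assumes the first GPU device found): even on an IGP whose "VRAM" is a carved-out chunk of system RAM, the data still goes through an explicit buffer the CPU can't address directly.

code:
#include <CL/cl.h>
#include <stdio.h>

int main(void) {
    float host_data[1024];                 /* lives in CPU-addressed RAM */
    for (int i = 0; i < 1024; i++) host_data[i] = (float)i;

    cl_platform_id plat;  clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;     clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_int err;
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);

    /* The explicit hop: allocate GPU-side memory, then copy into it.
       This happens whether the GPU is discrete or integrated. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, sizeof host_data, NULL, &err);
    clEnqueueWriteBuffer(q, buf, CL_TRUE, 0, sizeof host_data, host_data, 0, NULL, NULL);

    /* ... enqueue kernels against buf here ... */

    /* And the copy back before the CPU can read the results. */
    clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof host_data, host_data, 0, NULL, NULL);

    clReleaseMemObject(buf);
    clReleaseCommandQueue(q);
    clReleaseContext(ctx);
    puts("round trip complete");
    return 0;
}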

The solution, then, will in fact be a system architecture change. The key is to make such a change available without breaking decades of backward compatibility and legacy support (or heck, even the stuff that you're using right now). But this change won't be limited to IGPs just because they happen to be next to the processor. The GPU in an x86 system acts the same way regardless of whether it's connected within the CPU's bus, adjacent to it, or connected via PCIe as an external card. So when AMD or Intel finally push out a heterogeneous compute solution (and boy is AMD trying hard on it), it will work just as well with a dedicated GPU as it will with an IGP.

Yes, that's the short version.

The Illusive Man
Mar 27, 2008

~savior of yoomanity~
To distract from the Haswell chat, here's a bit of a dumb question:

I have a Radeon card with Catalyst 12.8 installed and am upgrading to a newer Radeon this weekend. Will I need to do anything other than just swap the cards in and out of the case?

Tunga
May 7, 2004

Grimey Drawer

Space Racist posted:

To distract from the Haswell chat, here's a bit of a dumb question:

I have a Radeon card with Catalyst 12.8 installed and am upgrading to a newer Radeon this weekend. Will I need to do anything other than just swap the cards in and out of the case?

You might want to reinstall the drivers after swapping the cards, but there's no reason you should need to do anything.

H.R. Paperstacks
May 1, 2006

This is America
My president is black
and my Lambo is blue

Space Racist posted:

To distract from the Haswell chat, here's a bit of a dumb question:

I have a Radeon card with Catalyst 12.8 installed and am upgrading to a newer Radeon this weekend. Will I need to do anything other than just swap the cards in and out of the case?

I was just looking through here for the same answer, but for Nvidia. I'm swapping from a GTX 460 to a 660 Ti.

Rosoboronexport
Jun 14, 2006

Get in the bath, baby!
Ramrod XTreme
So, the non-OEM GTX 660 has been benched by TweakTown. It's a new chip, GK106, instead of a binned GK104, so there's a notable difference from the OEM version: the OEM card has ~10% more shaders but a lower clock speed. The memory configuration in the 2 GB models is the same as the 660 Ti's, which means lower bandwidth to the final 512 MB of RAM.
Performance-wise it's 20% slower than the 660 Ti and trades blows with the 7870 and GTX 580. I wonder if they're going to offer it in a 1.5 GB configuration, and whether the card will get by with one PCIe power connector. Depending on the price, that's the card I'm eyeing.
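
For anyone wondering where "lower bandwidth to the final 512 MB" comes from: the commonly cited explanation for these 192-bit, 2 GB cards is that the first 1.5 GB interleaves across all three 64-bit memory controllers while the last 512 MB hangs off just one of them. A quick sketch in C, using the 660 Ti's 6008 MT/s memory clock:

code:
#include <stdio.h>

int main(void) {
    double mts = 6008e6;                    /* GDDR5 transfers per second */
    double full_bus = mts * 192 / 8 / 1e9;  /* all three 64-bit controllers */
    double one_ctrl = mts * 64 / 8 / 1e9;   /* final 512 MB: one controller */

    printf("first 1.5 GB: ~%.0f GB/s\n", full_bus);  /* ~144 GB/s */
    printf("final 512 MB: ~%.0f GB/s\n", one_ctrl);  /* ~48 GB/s  */
    return 0;
}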

Berk Berkly
Apr 9, 2009

by zen death robot

Rosoboronexport posted:

So, the non-OEM GTX 660 has been benched by TweakTown. It's a new chip, GK106, instead of a binned GK104, so there's a notable difference from the OEM version: the OEM card has ~10% more shaders but a lower clock speed. The memory configuration in the 2 GB models is the same as the 660 Ti's, which means lower bandwidth to the final 512 MB of RAM.
Performance-wise it's 20% slower than the 660 Ti and trades blows with the 7870 and GTX 580. I wonder if they're going to offer it in a 1.5 GB configuration, and whether the card will get by with one PCIe power connector. Depending on the price, that's the card I'm eyeing.

I kind of doubt it. Card makers always want to make bigger margins by splashing on extra memory and charging more.

That's why we have plenty of 3GB GTX 660 Tis but no svelte 1.5GB models.

Berk Berkly fucked around with this message at 11:45 on Sep 13, 2012

Boten Anna
Feb 22, 2010

Dang, thanks for the explanation, Factory! To tl;dr your tl;dr: I'm getting that that guy is a little bit full of poo poo, in that the direct memory access he describes doesn't really exist in iGPUs; however, a fundamental architecture shift might be needed in a future generation to get to the next level of performance.

TwoKnives
Dec 25, 2004

Horrible, horrible shoes!
If Broadwell delivers another doubling of GPU power, which discrete cards in today's market would that make it roughly equivalent to?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
This is just a guesstimate, so I'll spell out the logic.

HD 4000 benches Skyrim 1366x768 medium detail at 46 FPS. Haswell GT3 manages about the same if not a little better (exact FPS not disclosed) at 1920x1080 High detail. So the question is "What pushes Skyrim on High at 1080p around 90 FPS?"

And gently caress if I'm doing that rigorously by reverse-engineering benchmarks to get res and detail scaling for Skyrim, so 90 FPS, minus about a third to match up with MSAA-enabled benchmarks...

GeForce 460/Radeon 6850/Radeon 7770.
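
Spelled out as arithmetic (the 46 FPS figure, the straight doubling, and the one-third MSAA haircut are all assumptions from the post above, not measurements):

code:
#include <stdio.h>

int main(void) {
    double hd4000  = 46.0;             /* HD 4000, 1366x768 medium */
    double gt3     = hd4000;           /* GT3: ~same FPS at 1080p high, per the demo */
    double broadwl = 2.0 * gt3;        /* assume another straight doubling */
    double vs_msaa = broadwl * 2/3.0;  /* knock off ~a third to compare against
                                          MSAA-enabled review benchmarks */

    printf("Broadwell guess: ~%.0f FPS at 1080p/high\n", broadwl);  /* ~92 */
    printf("vs. MSAA benches: ~%.0f FPS\n", vs_msaa);               /* ~61 */
    return 0;
}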

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Factory Factory posted:

With lowered detail settings and AA in some titles, sure. Triple screen and S3D is a ridiculous workload, though. If you want the "full details, AA, solid 60" experience, then you have to talk quad SLI.

Also, I'd be looking at 7970 GEs for the 3GB VRAM, or 680 4GBs, for sure.
Not that I would recommend this, because of the crazy requirements and heat, but it is what it is.

TwoKnives
Dec 25, 2004

Horrible, horrible shoes!

Factory Factory posted:

This is just a guesstimate, so I'll spell out the logic.

HD 4000 benches Skyrim 1366x768 medium detail at 46 FPS. Haswell GT3 manages about the same if not a little better (exact FPS not disclosed) at 1920x1080 High detail. So the question is "What pushes Skyrim on High at 1080p around 90 FPS?"

And gently caress if I'm doing that rigorously by reverse-engineering benchmarks to get res and detail scaling for Skyrim, so 90 FPS, minus about a third to match up with MSAA-enabled benchmarks...

GeForce 460/Radeon 6850/Radeon 7770.

Perfect, thank you. I was thinking and hoping that it would be near to a 7770, but wasn't sure how to work it out.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech has their review of the GeForce GTX 660 up. While it slots in well in nVidia's lineup, it simply can't compete with AMD on value. The GTX 660 delivers performance between the Radeon HD 7850 and 7870, but it's priced above the 7870, which is insane. I guess nVidia is hoping people don't know about the Radeon cards' bundle offerings? Even if you don't care about the bundle at all and eBay it for a fraction of the retail price, you're still coming in $10 ahead of the slower GTX 660 (or $30 after rebate).

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Alereon posted:

Anandtech has their review of the GeForce GTX 660 up. While it slots in well in nVidia's lineup, it simply can't compete with AMD on value. The GTX 660 delivers performance between the Radeon HD 7850 and 7870, but it's priced above the 7870, which is insane. I guess nVidia is hoping people don't know about the Radeon cards' bundle offerings? Even if you don't care about the bundle at all and eBay it for a fraction of the retail price, you're still coming in $10 ahead of the slower GTX 660 (or $30 after rebate).

Ever since AMD's aggressive price cuts, the 7850 has been the card to get price/performance-wise, and this changes nothing.

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.
I had a 7850 briefly, and after looking at extensive benchmarks and the current prices, I really think the 7870 is the best price/performance pick if you don't want or need to splurge on the very upper tier. I've seen it as low as $240 before rebate, which is loving wacky value. The 7850 is a nice little card though, especially as it encroaches on 560 Ti prices from mere months ago.

spasticColon
Sep 22, 2004

In loving memory of Donald Pleasance
I got a 7850 for my main rig back in April and one for my HDTV rig just last week. Both cards overclock to 1GHz without issue, so I'd think that gets them close to 7870 speeds.

One thing I noticed with my newer 7850: it's also a Sapphire card like the first one I got, but the cooler is a little different and the stock voltage is a bit higher.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

spasticColon posted:

I got a 7850 for my main rig back in April and one for my HDTV rig just last week. Both cards overclock to 1GHz without issue, so I'd think that gets them close to 7870 speeds.

One thing I noticed with my newer 7850: it's also a Sapphire card like the first one I got, but the cooler is a little different and the stock voltage is a bit higher.

I have a Sapphire 7850 that has coil whine like a motherfucker - does yours make a noise when you put it under load?

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.

spasticColon posted:

I got a 7850 for my main rig back in April and one for my HDTV rig just last week. Both cards overclock to 1GHz without issue, so I'd think that gets them close to 7870 speeds.

One thing I noticed with my newer 7850: it's also a Sapphire card like the first one I got, but the cooler is a little different and the stock voltage is a bit higher.
Mine was OCed (the CCC cap is really low for the series, unfortunately), but the gap in power between the two models is more significant than an overclock can bridge.

The Illusive Man
Mar 27, 2008

~savior of yoomanity~
Speaking of 7850s, I got mine up and running last night, though it didn't go as smoothly as I was expecting - I swapped in the new card and Windows said my driver was invalid, so I uninstalled and reinstalled Catalyst 12.8 and then it was fine. The thing is a beast (to me), and what really blows me away is how efficient it is compared to my 6850: way more power at similar wattage, and it runs about 20 degrees cooler. Plus, even though it has dual fans (on Sapphire's card at least), I swear it's slightly quieter than my 6850 was.

In related news, while I doubt anyone in this thread is interested, I'll be throwing my lightly-used 6850 up on SA Mart this weekend.

Alpha Mayo
Jan 15, 2007
hi how are you?
there was this racist piece of shit in your av so I fixed it
you're welcome
pay it forward~
I can't think of any reason someone would buy an Nvidia card right now - their pricing is just wacky. Maybe a slight exception for the 670Ti if that really is your price range ($360-400), though the 7970 can be had for not much more.

Basically it seems like +$10-20 gets you the higher-tier AMD card, which blows away the Nvidia card below it. Or you can OC AMD's directly competing card, and OC vs. OC, AMD will still win.

The 660 would be much more appealing with BL2 bundled. Kind of sad they didn't extend the bundle to it - I mean, it's still an enthusiast card.

Lowclock
Oct 26, 2005
I wonder if OEM coolers are ever going to not suck. I had a 660 Ti with the reference cooler on it, and it was loud as hell and 20°C hotter than the Asus one I replaced it with. I realize one blows air out the back and the other blows it around inside the case, but geez.

Rigged Death Trap
Feb 13, 2012

BEEP BEEP BEEP BEEP

Meta Ridley posted:

I can't think of any reason someone would buy an Nvidia card right now - their pricing is just wacky. Maybe a slight exception for the 670Ti if that really is your price range ($360-400), though the 7970 can be had for not much more.

Basically it seems like +$10-20 gets you the higher-tier AMD card, which blows away the Nvidia card below it. Or you can OC AMD's directly competing card, and OC vs. OC, AMD will still win.

The 660 would be much more appealing with BL2 bundled. Kind of sad they didn't extend the bundle to it - I mean, it's still an enthusiast card.

Uhh what?
The 660 Ti pretty much matches the 7950 at resolutions of 1920x1200 and under, and it's the same story across the tiers.
They're pretty neck and neck, OC or not.

After that it just comes down to personal preferences and circumstances. As of now the 660 Ti is great value, since it comes with a great game for free.

Unless you're talking about the compute abilities of the cards - then AMD wins hands down.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

Meta Ridley posted:

I can't think of any reason someone would buy an Nvidia card right now - their pricing is just wacky. Maybe a slight exception for the 670Ti if that really is your price range ($360-400), though the 7970 can be had for not much more.

Basically it seems like +$10-20 gets you the higher-tier AMD card, which blows away the Nvidia card below it. Or you can OC AMD's directly competing card, and OC vs. OC, AMD will still win.

The 660 would be much more appealing with BL2 bundled. Kind of sad they didn't extend the bundle to it - I mean, it's still an enthusiast card.

There are other considerations besides price and performance.

I see many more games with Nvidia branding on them than AMD branding... in fact, the only AMD-branded games I can think of are Deus Ex and Sleeping Dogs.

As I've mentioned, I currently have a 5970. My previous computer had quad-SLI 9800 GX2s. In all the time I was running Nvidia cards, the only game I ever had an issue with was WoW (I had to disable SLI to keep it from crashing), and I believe WoW works better on Nvidia these days, although I haven't played in years.

Since I bought my 5970, I've had a good half dozen release-day issues. Dragon Age: Origins will crash 50% of the time you open the inventory screen unless you disable Crossfire; the game came out in 2009 and it's never been fixed. Crysis 2 was unplayable at launch till AMD updated their drivers. When Battlefield 3 and Rage came out, both had serious issues, and AMD released separate drivers optimised for each game, forcing you to pick which one you wanted to be able to play. Deus Ex, which is supposed to be AMD-optimised, needs Crossfire disabled or it flickers during cutscenes.

I know what brand I'm going to stick with when I buy a new PC.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
FWIW, my CF 6850s have worked very well for me. Other than a hiccup or two in Skyrim they're pretty problem-free. CF and SLI both mature with each generation - CF gets more compatible and gets more ins for zero-day support, and SLI's performance scaling improves.

And Nvidia's issues with BF3, the 560 Ti, and now with the Turbo clocking GPUs deserve a little more weight.

Because there was no set reference clock for GF114, GeForce 560 Ti cards crashed all over the Goddamn place when BF3 came out - it stressed them hard enough to expose stability errors. It affected almost every non-minimum-clock 560 Ti, and people looking to play one of the biggest releases that year just got black screens, artifacting, TDR driver restarts, and BSODs. There was no fix other than manually overvolting the cards or playing the refurb lottery for a chip that could take the load at stock.

Now with Turbo-clocking GPUs, you're seeing the same problems across every single GK104 SKU, because nobody has yet worked out a way to reliably validate and bin GPUs with Boost in the mix. Stock cards with locked voltage see overclocking-style failures at stock frequencies and power targets. It's not as widespread in terms of hitting a single product so hard, but if you get a GK104-based GPU, it basically has the same DOA instability chance as a hard drive. And you may not find out until months later, when you load that one game that stresses the chip just right.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

FWIW, my CF 6850s have worked very well for me. Other than a hiccup or two in Skyrim they're pretty problem-free. CF and SLI both mature with each generation - CF gets more compatible and gets more ins for zero-day support, and SLI's performance scaling improves.

And Nvidia's issues with BF3, the 560 Ti, and now with the Turbo clocking GPUs deserve a little more weight.

Because there was no set reference clock for GF114, GeForce 560 Ti cards crashed all over the Goddamn place when BF3 came out - it stressed them hard enough to expose stability errors. It affected almost every non-minimum-clock 560 Ti, and people looking to play one of the biggest releases that year just got black screens, artifacting, TDR driver restarts, and BSODs. There was no fix other than manually overvolting the cards or playing the refurb lottery for a chip that could take the load at stock.

Now with Turbo-clocking GPUs, you're seeing the same problems across every single GK104 SKU, because nobody has yet worked out a way to reliably validate and bin GPUs with Boost in the mix. Stock cards with locked voltage see overclocking-style failures at stock frequencies and power targets. It's not as widespread in terms of hitting a single product so hard, but if you get a GK104-based GPU, it basically has the same DOA instability chance as a hard drive. And you may not find out until months later, when you load that one game that stresses the chip just right.

It's really not that bad with GK104. It was earlier on, but the issues haven't persisted like they did with the 560 Ti. You're quite right, though, that it's amazing how you really never know what's going to be the stick that pokes your overclock in the groin. SC2, a game that people routinely use to bench CPUs because it's so famously not GPU-bound, cost me 4MHz core and 5MHz VRAM off a previously stable overclock.

S.T.A.L.K.E.R. DX9/DX10/DX11, Metro 2033 DX10/DX11, Batman: AC DX9/DX11, Crysis 2 DX9/DX11, and every benching utility ever made left to run (except Furmark - that's really only useful for a few minutes of testing before TDP throttling kicks in; totally the Linpack of this generation's GPUs). All solid. And if I'm leaving anything out, it's because I forgot it, not because I haven't tested with it - you don't spend this much on graphics capability not to experience what the high end looks like :v: Then Starcraft 2, a DX9 game that shouldn't be especially demanding, came along and popped a few MHz off.

If you'll forgive me pointing out one flaw in your reasoning, though, it's that AMD cards have the same problem. Any of them that dynamically manage clock speed to get ideal power and performance as needed will be just as prone to being run past limits from the factory because of the difficulty of ascertaining the exact OC point. The only difference is AMD was initially quite conservative in clocking Tahiti cards, especially, while nVidia's reference designs (and partner adjustments) are more aggressive about utilizing the full card potential right out of the gate, leaving less expected OC headroom.

A difference in expectations doesn't change the underlying nature of the hardware, though, and both companies have encouraged manufacturers to take advantage of better-binned chips by making aggressively cooled cards with premium power delivery components (is there even a reference design for the HD 7970GHz Edition?).

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I had to look it up: AMD produced a reference 7970 GHz design, but all the board partners decided to stick the higher-binned GPU straight into their existing 7970 semi-custom and custom designs. I guess the reference 7970 GHz went the way of the Z75 PCH.

:shrug:

DoctorOfLawls
Mar 2, 2001

SA's Brazilian Diplomat
Cross-posting from the recommendations thread - here's a personal anecdote in the hope it helps someone in the future: I had a 560 Ti and wasn't very happy with its performance, even on relatively older games/console ports (Assassin's Creed II/Brotherhood, Batman were all so-so, etc.). I got a GTX 680 to test out, and much to my surprise, performance was the same if not worse. Intrigued, I decided to check what CPU-Z/GPU-Z would report.

It turns out the card was running at PCI-E 2.0 x1 rather than x16 - all this time, the performance was subpar because of that. After a motherboard BIOS update and reseating the card, I got it running at PCI-E 2.0 x16, and it's like I installed a video card from the future. :-)
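
For the Linux-inclined, you can check the negotiated link width without GPU-Z by reading it straight out of sysfs. A minimal sketch in C (the PCI address is hypothetical - look your card's up with lspci):

code:
#include <stdio.h>

int main(void) {
    /* Hypothetical address - find your GPU's with `lspci | grep VGA`. */
    const char *path = "/sys/bus/pci/devices/0000:01:00.0/current_link_width";
    char width[16];

    FILE *f = fopen(path, "r");
    if (!f) { perror("fopen"); return 1; }
    if (fgets(width, sizeof width, f))
        printf("current PCIe link width: x%s", width);  /* want "16", not "1" */
    fclose(f);
    return 0;
}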

real_scud
Sep 5, 2002

One of these days these elbows are gonna walk all over you

Factory Factory posted:

I had to look it up: AMD produced a reference 7970 GHz design, but all the board partners decided to stick the higher-binned GPU straight into their existing 7970 semi-custom and custom designs. I guess the reference 7970 GHz went the way of the Z75 PCH.

:shrug:
So would that be a good or a bad thing, that I have an AMD reference 7970?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Entirely neutral? Pros: it was inexpensive, it's a known-stable config, and the cooler will work in any chassis. Cons: it might not have the most overclocking headroom, and it's louder and runs hotter than a custom-cooled model.

Palladium
May 8, 2012

Very Good
✔️✔️✔️✔️

Meta Ridley posted:

I can't think of any reason someone would buy an Nvidia card right now - their pricing is just wacky. Maybe a slight exception for the 670Ti if that really is your price range ($360-400), though the 7970 can be had for not much more.

Basically it seems like +$10-20 gets you the higher-tier AMD card, which blows away the Nvidia card below it. Or you can OC AMD's directly competing card, and OC vs. OC, AMD will still win.

The 660 would be much more appealing with BL2 bundled. Kind of sad they didn't extend the bundle to it - I mean, it's still an enthusiast card.

The worst offenders from Nvidia have been in the ~$100 segment, where their price/performance is terrible: the GTS 450, the 550 Ti, and now the 650 to complete the unholy trinity.

FSMC
Apr 27, 2003
I love to live this lie

Meta Ridley posted:

I can't think of any reason someone would buy an Nvidia card right now - their pricing is just wacky. Maybe a slight exception for the 670Ti if that really is your price range ($360-400), though the 7970 can be had for not much more.

Basically it seems like +$10-20 gets you the higher-tier AMD card, which blows away the Nvidia card below it. Or you can OC AMD's directly competing card, and OC vs. OC, AMD will still win.

The 660 would be much more appealing with BL2 bundled. Kind of sad they didn't extend the bundle to it - I mean, it's still an enthusiast card.

I got the 660 Ti, and only quickly glanced at the ATI cards; they seemed like they might be better at the price point, but I ignored them.

My decision was probably based on ignorance and stupidity. RAGE never really worked well on ATI. (RAGE is OpenGL, and Valve is getting games to run faster under OpenGL with the Linux port.) I figured I'd save myself the hassle and just get an Nvidia card, which will likely just work with games and might be more future-proof. It's not worth spending hours researching or looking at it in too much detail over $20 or a few FPS.
