Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Weeeelll, actually the 570 probably beats the 670 by a respectable margin in clock-for-clock rendering performance. You'll never get a 570 to 670 clock speeds without killing the card, though, and a decent-sample 670 is a pretty salty overclocker on its own that can stay within 5%-10% of a GTX 680 when both cards are overclocked, so the clock-for-clock edge is more trivia than application - but it is true. No Fermi card is going to match the texel performance of Kepler cards, not by a long shot, but despite early thoughts to the contrary (related to concerns over the VRM), you can overclock a 570 to run really well - at least as well as a lightly overclocked 580 - if you're aggressive with the fan control and keep it cool.

90ºC is hot, gigantic die and 3 billion transistors or not. I would expect temperatures closer to the low 80ºC range in extreme graphical applications. For reference, when I was using my GTX 580 for graphics instead of PhysX (:suicide:) I was able to get it to 920MHz core/1840MHz shaders/4200MHz VRAM on 1.138V, and with a custom fan profile it never got much above 70ºC. My case does have really good airflow, so heat's never just hanging around inside it, but nonetheless that's what I got with a GTX 580 - a hotter-running card than the 570, by a smidge.

To get that overclock with the stock blower, I set my fan profile to match fan speed to temperature up to about 65ºC and then jump to max at around 68ºC. The fan would crank up playing games like Crysis or Metro or other demanding titles but I never suffered instability and I never had excessive temperature issues.
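In case it helps to see that as an actual curve, here's a minimal sketch in Python of the fan profile I'm describing. The exact percentages are illustrative assumptions, not my literal settings - tune them to your own card in whatever tool you use (Afterburner, Precision, etc.).

code:
# Rough sketch of the fan profile described above: roughly match fan % to
# temperature up to ~65C, then jump to max around 68C. Numbers are
# illustrative assumptions only.

def fan_speed_percent(temp_c):
    """Return a target fan speed (%) for a given GPU temperature (C)."""
    if temp_c >= 68:
        return 100          # panic mode: max fan to hold the overclock
    if temp_c >= 65:
        return 85           # bridge zone between "match temp" and "max"
    if temp_c <= 40:
        return 40           # idle floor so the fan never stops
    return temp_c           # roughly match fan % to temperature

if __name__ == "__main__":
    for t in (35, 50, 64, 66, 70):
        print(t, "C ->", fan_speed_percent(t), "% fan")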

This is a bit rambly, so here's a quick take-home summary:

1. That's a high temperature to be running at for any extended period of time and it's a good idea to get it down. Don't attempt overclocking unless you can lower stock speed and voltage temperatures to sub-70ºC with your fan profile.

2. You can overclock a 570, especially if you're willing to get more aggressive with the fan profile, though factors within the case may be limiting if you've got an airflow problem, or high ambient temperatures to begin with.

3. It is worth overclocking your card if you can safely do so. For Fermi, even with the 570's somewhat narrower memory bus and smaller amount of VRAM, memory bandwidth is less of an issue than core and shader performance. Keep memory clocks low - one of the main weaknesses of GF110 cards is that the memory controller just isn't that robust and doesn't tolerate much of a memory overclock. Use the headroom available to you to get the core up to a stable clock at reasonable temperatures (rough outline of that process in the sketch after this list). You can damage the chip if you overclock it and can't get temps under control, but if you manage to raise the temperature ceiling some, grab the free performance. Why not?
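And here's the "ease the core up while watching temps" routine from point 3, sketched out in Python. To be clear, find_stable_core, set_core_clock, run_stress_test and read_temp are hypothetical placeholders standing in for your overclocking tool, a looping benchmark and a temperature monitor - this shows the shape of the process, not working tuning software.

code:
# Sketch of the "raise core in small steps, keep temps in check" routine.
# set_core_clock(), run_stress_test() and read_temp() are hypothetical
# placeholders for your overclocking tool / benchmark / monitoring app.

def find_stable_core(start_mhz, step_mhz, temp_limit_c,
                     set_core_clock, run_stress_test, read_temp):
    # Assumes the starting clock is already known to be stable.
    clock = start_mhz
    best = start_mhz
    while True:
        set_core_clock(clock)
        stable = run_stress_test(15)           # e.g. 15 minutes of a looping benchmark
        if not stable or read_temp() > temp_limit_c:
            return best                        # back off to the last good clock
        best = clock
        clock += step_mhz                      # small bumps, e.g. 10-15 MHz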


HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Agreed posted:

For Fermi,

Ah yes, Fermi, not Kepler. Silly me.

HalloKitty fucked around with this message at 08:43 on Jul 20, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Huh? Sorry, I don't follow - I just bolded that for emphasis, since Kepler is really bandwidth limited and you have to balance the overclock to spread the hard TDP limit between the GDDR5 and the core/shaders, or you won't see a meaningful improvement. I didn't mean to call you out, just drawing a distinction that applies to him :shobon:

movax
Aug 30, 2008

So, I feel bad for folks having issues with SR3 and AMD cards...I'm running it on my 460 @ 2560x1600, and with AA off, it looks great and is incredibly fluid. I don't know how much better I could get it running on a 670/680, but I am planning on upgrading "soon".

real_scud
Sep 5, 2002

One of these days these elbows are gonna walk all over you

movax posted:

So, I feel bad for folks having issues with SR3 and AMD cards...I'm running it on my 460 @ 2560x1600, and with AA off, it looks great and is incredibly fluid. I don't know how much better I could get it running on a 670/680, but I am planning on upgrading "soon".
I'm running a 7970 and haven't had any kind of issues with SR3 and I run it at max everything @ 1920x1080

298298
Aug 14, 2011

by Y Kant Ozma Post
I sold my old system to my cousin which had a 5870 in it, it went out and XFX replaced it with a 6950 1gb. In my newest system I'm using a 560 Ti (EVGA superclocked 1gb), but I miss ATI's default color settings and cousin is annoyed at XFX taking over a month to get the new video card here, so he's willing to trade me the 6950 for the 560 Ti.

Good or bad trade? I'm ultimately planning to be gaming at high resolutions, watching movies, etc.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

298298 posted:

I sold my old system to my cousin which had a 5870 in it, it went out and XFX replaced it with a 6950 1gb. In my newest system I'm using a 560 Ti (EVGA superclocked 1gb), but I miss ATI's default color settings and cousin is annoyed at XFX taking over a month to get the new video card here, so he's willing to trade me the 6950 for the 560 Ti.

Good or bad trade? I'm ultimately planning to be gaming at high resolutions, watching movies, etc.

If you prefer ATI, seems like a good trade to me.

Overclock it a lot.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
AnandTech has reviewed EVGA's top GeForce 680 SKU, the GTX 680 Classified. It's a fully custom design, which means that it's just the slightest bit crazy. We're talking 14+3 phase VRM (vs. 4+2 reference), and a power target of 250W +/- 32% (vs. 170W stock). It also has an 80mm radial cooling fan that's limited to 55% speed by default for noise reasons and which, despite its efficacy, still can't handle an actual 300 watts flowing through a GK104 GPU without toasting the chip.

Of course, if 300W isn't enough power for you, you can flip a tri-mode BIOS switch and disable power targets entirely, which lets the card draw up to the full 375W its plugs allow.

This is not a purchase recommendation in the slightest.
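For anyone who wants the arithmetic behind those figures, a quick back-of-the-envelope sketch in Python, using the numbers from the review (170W reference target, 250W +/- 32% for the Classified) and the usual PCIe budget assumption of 75W from the slot plus 150W per 8-pin plug:

code:
# Back-of-the-envelope for the GTX 680 Classified power limits quoted above.
# Connector budget assumes the usual spec: 75W slot + 150W per 8-pin plug.

reference_target_w = 170
classified_target_w = 250
classified_max_w = classified_target_w * 1.32      # +32% slider -> 330W
connector_budget_w = 75 + 150 + 150                # slot + two 8-pin = 375W

print("Max power target with slider:", round(classified_max_w), "W")
print("Absolute connector budget:   ", connector_budget_w, "W")
print("Headroom unlocked by the BIOS switch:",
      connector_budget_w - round(classified_max_w), "W")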

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

I'd say in fact definitely don't get it. The performance improvement from a massive overvolt and overclock is not at all commensurate with the added price, heat, noise, etc. In fact I'm pretty surprised: it turns in numbers that are pretty well on par with my card, and if I had to guess why, I'd say it's down to the relatively low memory overclock leaving the core fairly bandwidth limited despite its high clock. You've got to balance them - my 680's GDDR5 runs at 6.8GHz, with the core at/around 1260-1290MHz depending on what's going on (it's Kepler, so it's not as straightforward as it used to be). I know I lucked out and got a good sample with higher clocks than you'd normally expect from a "mere" EVGA GTX 680 SC+, but they're advertising this as a very heavily binned part with all kinds of freaky poo poo going on in the layout and nature of the circuit, only for it to end up bandwidth limited and turning in results that can't take advantage of its key features?
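The bandwidth arithmetic, if you want to sanity-check it yourself - a quick sketch assuming GK104's 256-bit bus, where bandwidth is just the effective GDDR5 data rate times the bus width in bytes:

code:
# Quick GDDR5 bandwidth check for GK104's 256-bit bus.
# bandwidth (GB/s) = effective data rate (Gbps per pin) * bus width (bits) / 8

def gddr5_bandwidth_gbs(effective_gbps, bus_width_bits=256):
    return effective_gbps * bus_width_bits / 8

print("Stock GTX 680 (6.0 Gbps):", gddr5_bandwidth_gbs(6.0), "GB/s")   # 192.0
print("My card (6.8 Gbps):      ", gddr5_bandwidth_gbs(6.8), "GB/s")   # 217.6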

Even if it weren't just a bad investment to begin with based on cost, it certainly is based on relative performance.

real_scud
Sep 5, 2002

One of these days these elbows are gonna walk all over you

Factory Factory posted:

AnandTech has reviewed EVGA's top GeForce 680 SKU, the GTX 680 Classified. It's a fully custom design, which means that it's just the slightest bit crazy. We're talking 14+3 phase VRM (vs. 4+2 reference), and a power target of 250W +/- 32% (vs. 170W stock). It also has an 80mm radial cooling fan that's limited to 55% speed by default for noise reasons and which, despite its efficacy, still can't handle an actual 300 watts flowing through a GK104 GPU without toasting the chip.

Of course, if 300W isn't enough power for you, you can flip a tri-mode BIOS switch and disable power targets entirely, which lets the card draw up to the full 375W its plugs allow.

This is not a purchase recommendation in the slightest.
All the more interesting that boutique PC makers are already putting them in their systems and convincing people to order the upgrade.

edit: Though now that I think about it, of course it makes sense that they are. They want to make $$ on the upgrade.

movax
Aug 30, 2008

Factory Factory posted:

AnandTech has reviewed EVGA's top GeForce 680 SKU, the GTX 680 Classified. It's a fully custom design, which means that it's just the slightest bit crazy. We're talking 14+3 phase VRM (vs. 4+2 reference), and a power target of 250W +/- 32% (vs. 170W stock). It also has an 80mm radial cooling fan that's limited to 55% speed by default for noise reasons and which, despite its efficacy, still can't handle an actual 300 watts flowing through a GK104 GPU without toasting the chip.

Of course, if 300W isn't enough power for you, you can flip a tri-mode BIOS switch and disable power targets entirely, which lets the card draw up to the full 375W its plugs allow.

This is not a purchase recommendation in the slightest.

14 phases is loving ridiculous; I tried looking for more high-res pics of the 680C to see which voltage regulator they were using, but couldn't find one. You can count all the low-ESR bulk caps for each phase on the front of the AnandTech pic, though (Sanyo POSCAPs or some competitor, I guess). Fourteen inductors as well, of course, plus 28 FETs. I wonder how many layers the damned board is.
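Out of curiosity, the per-phase numbers as a rough sketch: this assumes the card really pulls the full 375W at roughly 1.2V core (both assumptions) and ignores conversion losses and the separate memory/aux phases, so treat it as ballpark only.

code:
# Rough per-phase current estimate for a 14-phase core VRM.
# Assumes ~375W total draw at ~1.2V core voltage (both assumptions);
# ignores conversion losses and the memory/aux phases, so ballpark only.

total_power_w = 375
core_voltage_v = 1.2
phases = 14

core_current_a = total_power_w / core_voltage_v
print("Total core-side current:", round(core_current_a), "A")       # ~312 A
print("Per phase:", round(core_current_a / phases, 1), "A")         # ~22.3 A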

Maxwell Adams
Oct 21, 2000

T E E F S
Do all new video cards have the power connectors on the top edge of the card? I have an awkward HTPC case, and it's more convenient if they're at the back.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
They're sometimes at the back, and some video cards have no connectors at all. For typical HTPC use, you'd want one of the connector-less low-power cards anyway. Unless you're doing ridiculous GPGPU post-processing of 4k video, graphics oomph beyond a low-end card is wasted unless you're gaming.

tijag
Aug 6, 2002

Agreed posted:

If you're not overclocking it, there's no real point to setting a higher-than-100% power target. And while it does very much love to be kept cool and overclocks best if you can keep it from ever seeing 70ºC, you can probably relax a bit from "fan hits max speed at 60ºC" as that's a little overkill. For a non-overclocked card, to keep it cool and relatively quiet, you could set it to run with, say, 5% less fan than the temperature, with the "crank it up" big fan speed boost coming at 68ºC to keep it from hitting 70ºC. With 2ºC of hysteresis that should be ample (once it hits 68ºC, it won't reduce the fan speed until it's at 66ºC, allowing for demanding sections to get the cooling you need without ramping the fan up quite so aggressively for what is in general not an aggressively tuned setup).

What brand and model is the card? I'm generally wary of blaming drivers for crashes, because Windows has a pretty robust mechanism in place for recovering from a too-aggressive overclock (it'll say something like "the display driver stopped working and has recovered"). Next time you get a BSOD, write down the failure code and look it up. Check your error logs as well. It could be you're looking for the wrong culprit.

I have an EVGA, the stock/standard model. I got it at launch.

BLOWTAKKKS
Feb 14, 2008

I recently bought a GTX 690 and now have two 570s laying around doing nothing. I'd like to throw one into my computer as a physx card just for the hell of it but I'm not sure if my PSU will explode. I have a Corsair HX850 which according to reviews can actually provide up to 1000w. It was running the two 570s perfectly (which drain more power than a single 690) but I'm afraid this might be pushing it over the edge.

LRADIKAL
Jun 10, 2001

Fun Shoe
You could compare the stated wattage on the old stuff you had with the new. Also there are many power supply calculators on the internet.
http://lmgtfy.com/?q=power+supply+calculator&l=1
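The crude version of what those calculators do is just summing TDPs and comparing against the PSU with some margin; here's a sketch. The GPU numbers are published board-power figures, but the CPU and "everything else" entries are placeholder assumptions - swap in your own parts.

code:
# Crude power-budget check: sum the big TDPs, add margin, compare to the PSU.
# GPU TDPs below are published board-power figures (690: 300W, 570: 219W);
# the CPU and "everything else" numbers are placeholder assumptions.

parts_w = {
    "GTX 690": 300,
    "GTX 570 (PhysX)": 219,
    "CPU (e.g. a 95W quad)": 95,
    "Board, drives, fans, etc.": 75,
}

total = sum(parts_w.values())
psu_capacity = 850
headroom = psu_capacity - total

print("Estimated load:", total, "W")            # ~689W
print("Headroom on an HX850:", headroom, "W")   # ~161W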

Porkchop Express
Dec 24, 2009

Ten million years of absolute power. That's what it takes to be really corrupt.

BLOWTAKKKS posted:

I recently bought a GTX 690 and now have two 570s laying around doing nothing. I'd like to throw one into my computer as a physx card just for the hell of it but I'm not sure if my PSU will explode. I have a Corsair HX850 which according to reviews can actually provide up to 1000w. It was running the two 570s perfectly (which drain more power than a single 690) but I'm afraid this might be pushing it over the edge.

Since you have two, if you are looking to sell one I might be interested! (I assume its something similar to this: http://www.newegg.com/Product/Product.aspx?Item=N82E16814130593 )

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

BLOWTAKKKS posted:

I recently bought a GTX 690 and now have two 570s laying around doing nothing. I'd like to throw one into my computer as a physx card just for the hell of it but I'm not sure if my PSU will explode. I have a Corsair HX850 which according to reviews can actually provide up to 1000w. It was running the two 570s perfectly (which drain more power than a single 690) but I'm afraid this might be pushing it over the edge.

I hope you have good AC, because you're creating an oven.

I couldn't imagine why you'd need more graphics cards in your machine than a single 690.

I'd just ebay/sa-mart the 570s.

BLOWTAKKKS
Feb 14, 2008

Jago posted:

You could compare the stated wattage on the old stuff you had with the new. Also there are many power supply calculators on the internet.
http://lmgtfy.com/?q=power+supply+calculator&l=1

I tried a few of them and some said I would need more power than I have just to run the setup I was running before. Oh well, I only have a P67 motherboard, so if I installed a PhysX card I'd have to drop to x8 PCIe. Not worth it, I guess.

Porkchop Express posted:

Since you have two, if you are looking to sell one I might be interested! (I assume its something similar to this: http://www.newegg.com/Product/Product.aspx?Item=N82E16814130593 )

Thanks for the offer, but I don't think I'd get anything near what I paid for them since I also added some Accelero Xtreme Plus coolers to them. Also I've never sold anything online because I'm really lazy and hate mailing things.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

BLOWTAKKKS posted:

I recently bought a GTX 690 and now have two 570s laying around doing nothing. I'd like to throw one into my computer as a physx card just for the hell of it...


Important note - everything that follows starts from the assumption that you must have PhysX. So when I say that something "makes sense," I don't mean it's a good idea, I mean it's a good way to achieve that specific goal. It's a horrible waste of money if you don't already have the PhysX card lying around and a power supply that can handle a lot of wattage already installed.

So, PhysX and multi-card setups. I can speak to that, as I've got a 580 for CUDA/OpenCL & PhysX, and a 680 for GraphX. :rimshot:

Kepler's compute performance is less than stellar, whereas a 570 has sufficient memory bandwidth and great compute performance. While it isn't the most cost- and power-effective card for the job (that'd be the GTX 560Ti-448), as a dedicated CUDA processor it'll tear through PhysX and keep in sync with a 690 no problem.

That said, even with Kepler's miserable compute performance, PhysX is a very specific kind of load and isn't as hard on the SMs (Fermi) or SMXs (Kepler) as some other calculations. While it's prettier than what a CPU can do in realtime, it's still low precision in the grand scheme of things, and Kepler can do low-precision operations pretty fast. It's even quite optimized for certain kinds, with kickass tessellation performance, for example. Really is a DX11 monster.

If you have TWO 690s, I don't think you need an external card for PhysX. Also, I'd be concerned about getting bandwidth limited if you're running three cards... With a single 670 or 680, it "makes sense" in that a GF110 card will do PhysX like crazy, allowing the Kepler to render and the Fermi to compute without any lag time between the two. They sync up really well and you get the best of both worlds. But with a 690, you're so far over the rainbow that unless you're actually getting close to maxing the 690's power draw/GPU load, you can almost certainly afford to spare some of the card's SMX/CUDA Cores for PhysX processing.


I decided to put together a quick list of pros and cons of each architecture just for shits and giggles, maybe somebody will get something out of it.


Fermi GF110 Pros:
1. The shader cores (AKA CUDA Cores) are hot-clocked at double the GPU's core speed, allowing for double the executions per core clock cycle, and can perform a number of specialized operations quickly in a given cycle.

2. Double-precision operations on GF110-based chips (GTX 580, 570, 560Ti-448) are "only" artificially limited to 1/8th the speed of single-precision. Which is a poo poo-ton faster than Kepler consumer cards, which iirc run a mere 8 FP64 units per SMX. (Big Kepler is going to raise that number to 64 FP64 units per SMX.)

Fermi GF110 Cons:
1. The downside to pro number two is that the Fermi GF110 transistor budget, 3 billion transistors, is fundamentally optimized for workstation usage, even in non-workstation cards. And since it's a 40nm process, the chip is freaking huge at 552mm². The upshot is a much higher-than-necessary transistor count for videogames, and hotter-running, more power-hungry cards across the GF110 lineup.

2. Characteristically, nVidia went with a conservative amount of GDDR5, which means VRAM limitations will hit these cards sooner in future games than ATI's reference 2GB GDDR5 cards. Further, the consumer cards' relatively low VRAM also limits outright CUDA performance, though this does not affect PhysX specifically.



Kepler GK104 Pros
1. SMX units highly optimized for DX9 through DX11.1 shader operations, resources allocated efficiently to play videogames well. Totally destroys GF110's texel performance, while upping the ante significantly in pixel performance too.

2. Efficient transistor budget fits 3.5 billion transistors on a 28nm process into 294 mm². Lower temperatures, less power consumption, smaller cards (okay, the 690 doesn't count, but for a dual-GPU card it kinda does!), lots of clever hardware tricks with software integration to control performance on an as-needed basis... Intel could learn a thing or two about clock modulation from nVidia at this point when it comes to power savings.

3. These 3.5 billion transistors are gaming-oriented and don't "waste" transistors on GPGPU/CUDA performance that most gamers will never need. They've changed from integrating one design into both consumer and workstation cards to providing products that fit each market separately and perform well in their respective roles. GK110 (Big Kepler) is going to be a compute monster.

Kepler GK104 Cons
1. poo poo GPGPU. If you expected three times the CUDA cores to offer three times the performance... nope! With a mere 8 FP64 functional units per SMX, FP64 performance is 1/24th of single-precision. That's garbage. It's on purpose - they don't intend to spend transistors on non-videogame-related stuff here - but it's still a negative if you wanted GPGPU from the card. AMD's Tahiti architecture has roughly a 7x advantage in double-precision throughput, continuing a proud tradition of kicking nVidia in the nuts while nobody but Bitcoin miners and password crackers buys their cards for GPGPU. :smith: (Rough numbers in the sketch after this list.)

2. I guess it's harder to predictably overclock, especially at the factory where they have to crank 'em out quickly? Stability testing is easy in the comfort of your home, where you can dick around with the power target, clock rate and memory clock, and if you get driver-has-crashed-but-recovered you know you're overdoing it. At the factory, though, they tend to base the OC on quick tools that end up throttling at TDP, and so products ship that don't actually run at their stated clock speeds. This leads to ridiculous poo poo like the EVGA GTX 680 Classified, which is so overbuilt it's not even funny, yet makes component compromises to maintain a decent profit margin and barely guarantees higher performance than a stock model once overclocking is taken into account.

3. Hard-coded performance drops at 70ºC, 80ºC and 90ºC. It's not much, and it's kind of confusing why it happens at all, honestly, but you lose 13MHz at each break-point.
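To put rough numbers on the FP64 gap mentioned above (the sketch promised after the list): peak single-precision is shader cores x 2 ops per clock x clock speed, and double-precision is that divided by the ratio. Reference clocks are used here, so real cards and boost behaviour will differ.

code:
# Rough peak-FLOPS comparison to put the FP64 ratios in perspective.
# peak FP32 = cores * 2 ops/clock * clock (GHz); FP64 = FP32 / ratio.
# Reference clocks used; shipping cards and boost behaviour will differ.

cards = [
    # (name, shader cores, shader clock GHz, FP64 ratio)
    ("GTX 580 (GF110)",  512, 1.544, 8),
    ("GTX 680 (GK104)", 1536, 1.006, 24),
    ("HD 7970 (Tahiti)", 2048, 0.925, 4),
]

for name, cores, ghz, ratio in cards:
    fp32 = cores * 2 * ghz
    print(f"{name}: ~{fp32:.0f} GFLOPS FP32, ~{fp32 / ratio:.0f} GFLOPS FP64")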

BLOWTAKKKS posted:

... but I'm not sure if my PSU will explode. I have a Corsair HX850 which according to reviews can actually provide up to 1000w. It was running the two 570s perfectly (which drain more power than a single 690) but I'm afraid this might be pushing it over the edge.

Yeah, they source that from a high-quality CWT platform which stays at over 80% efficiency (nearly 85%, actually) with good test figures at 1000W or so. The HX750 is a similarly overspecced supply, also CWT-sourced, and can run over 900W at high efficiency. Both are gold-rated according to 80Plus, but Corsair did something cool and released them with specs drawn from an actual, well-tested (higher-temperature) rating.

Remember that the 570, if used as a PhysX card, will default to stock voltage and can't have its core or shaders overclocked - only its memory (which you might want to do, if you can, since you'll want to raise the memory bandwidth to get to a substantial portion of the 690's, what, 400GB/sec+).

Temps become a concern, placement of the cards becomes a concern, and hopefully you're running a PCI-e 3.0 board, because you'll need at least PCI-e 3.0 x8 to avoid losing performance on the 690 by installing a PhysX card in the other 16x-if-single slot. If you're on a PCI-e 2.0 board, I'd say forget about it, because you need the bandwidth of PCI-e 2.0 x16 to avoid bottlenecking the 690.
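The PCI-e arithmetic, for anyone who wants to check the 3.0 x8 vs. 2.0 x16 claim - a sketch using the per-direction link rates (2.0 is 5 GT/s with 8b/10b encoding, 3.0 is 8 GT/s with 128b/130b):

code:
# Per-direction PCIe bandwidth, to show why 3.0 x8 is roughly 2.0 x16.
# PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b.

def pcie_gbs(gen, lanes):
    rate_gt, encoding = {2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}[gen]
    return rate_gt * encoding * lanes / 8   # GT/s -> GB/s per lane, times lanes

print("PCIe 2.0 x16:", round(pcie_gbs(2, 16), 2), "GB/s")  # 8.0
print("PCIe 3.0 x8: ", round(pcie_gbs(3, 8), 2), "GB/s")   # ~7.88
print("PCIe 2.0 x8: ", round(pcie_gbs(2, 8), 2), "GB/s")   # 4.0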

It's a pretty bad idea, but that's never stopped me from at least exploring the concept... And I don't have room to judge, so I just infodump instead.

Agreed fucked around with this message at 01:17 on Jul 24, 2012

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

tijag posted:

I have an EVGA, the stock/standard model. I got it at launch.

If you're certain your case is well-ventilated (take the side off and see if temps drop - if so, you've got an airflow problem), I'd consider getting in touch for an RMA. Remember, with Kepler, you lose 13MHz at 70ºC, 80ºC, and 90ºC. So cumulatively, your card is underperforming its specified clock rate (really just losing some of the built-in Kepler boost, but still). But only go down that route if you're certain it isn't high ambient temperature within the case or poor ventilation causing the problem, because depending on what you did with the card you'll be out the card for some time while they process the RMA - and if you get another one and it does the same thing, it was all for nothing.
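If it helps to see the cumulative loss, here's that throttling behaviour as a tiny sketch - the 13MHz-per-breakpoint figure is the one discussed above, and real GPU Boost layers TDP and voltage limits on top of this, so it's only the temperature piece:

code:
# Cumulative Kepler boost reduction from the temperature breakpoints
# mentioned above (13 MHz shaved off at 70C, 80C and 90C).

def boost_penalty_mhz(temp_c, breakpoints=(70, 80, 90), step_mhz=13):
    return step_mhz * sum(1 for bp in breakpoints if temp_c >= bp)

for t in (65, 72, 83, 91):
    print(f"{t} C -> losing {boost_penalty_mhz(t)} MHz of boost")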

Most common temperatures reported in reviews for the GTX 680 range between the mid 70s to the low 80s using the default fan control. If you're exceeding that substantially then there's probably something going on. Just make sure you eliminate all the variables you can before deciding that what's going on requires a trip back to EVGA.

spasticColon
Sep 22, 2004

In loving memory of Donald Pleasance
I know that nvidia cards have PhysX but does AMD have any hardware accelerated physics engines that run on their cards? And if so, do any games actually support it?

Magog
Jan 9, 2010

spasticColon posted:

I know that nvidia cards have PhysX but does AMD have any hardware accelerated physics engines that run on their cards? And if so, do any games actually support it?

Nope, only Nvidia has a physics solution.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Goes back to the acquisition of Ageia, though back then it was done very differently. There was a period in the 9800 GTX/Radeon HD whatever-it-was-then (X series?) era where you could use it on ATI hardware with a hacked driver.

Rough timeline goes:

The physics calculations were originally done via PPU extensions on Ageia physics cards (literally, physics processing units), then nVidia acquired Ageia and moved in a more general GPGPU direction, then you could kinda hack it in for ATI cards, but then they went back in a more explicitly proprietary direction for GPU-accelerated PhysX and now the closest you can get to PhysX on an ATI card is a pretty unstable hack setup with an nVidia PhysX-dedicated card alongside an ATI graphics card.

Josh Lyman
May 24, 2009


spasticColon posted:

I know that nvidia cards have PhysX but does AMD have any hardware accelerated physics engines that run on their cards? And if so, do any games actually support it?
There's so few games that support PhysX that it should not be a factor in any purchasing decisions.

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride

Josh Lyman posted:

There's so few games that support PhysX that it should not be a factor in any purchasing decisions.

What if I'm a huge Mirror's Edge fan? :ohdear:

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Josh Lyman posted:

There's so few games that support PhysX that it should not be a factor in any purchasing decisions.

On the positive side I own all of them and the physics in them is SO GOOD

...

sigh

Edit: Although, and this is such a technicality that I'm a shithead for pointing it out, PhysX is a massively widespread physics API! Tons of games use it. Hell, it's in the Unreal Engine - that's about half the games right there :v: They just don't use GPU-accelerated PhysX for the extra fancy stuff, because it requires more hardware than most people have, and development time/budget rarely stretches to marginal features. But it is genuinely badass in the games that support it, few though they are.

Porkchop Express
Dec 24, 2009

Ten million years of absolute power. That's what it takes to be really corrupt.
Perhaps someone could help me understand how this works, because I am not 100% clear. What is the difference between a budget & performance card in the same pixel/resolution category? Are the budget cards just older generations of cards?

Also are the resolutions the cards are listed under set in stone? IE: If I got a 550 Ti can I still set the maximum screen resolution to 1920x1080 but just set it lower for games?

I ask because Newegg put this on sale:

http://www.newegg.com/Product/Produ...-_-14130625-L0D


And I have no idea how good it is. I always found video cards to be a complicated affair back when only Nvidia & ATI made them, but now it confuses me even more.

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride
lovely, you would want to buy a 7770 or something in that price range, I think, or spend more on a 560Ti/wait for a 660Ti... someone correct me if I'm wrong on the $100-150 recommendation.

I do know that the 550Ti has never been a 'good' card, though.

unpronounceable
Apr 4, 2010

You mean we still have another game to go through?!
Fallen Rib

Porkchop Express posted:

IE: If I got a 550 Ti can I still set the maximum screen resolution to 1920x1080 but just set it lower for games?
Yes.

quote:

I ask because Newegg put this on sale:

http://www.newegg.com/Product/Produ...-_-14130625-L0D

As Dogen mentioned, the 550Ti is simply a bad value. You can get better performance with this 7770 for $15 less after rebate.

Porkchop Express
Dec 24, 2009

Ten million years of absolute power. That's what it takes to be really corrupt.

unpronounceable posted:

As Dogen mentioned, the 550Ti is simply a bad value. You can get better performance with this 7770 for $15 less after rebate.

My board only has PCI-E 2.0 x16; I assume this would just run at the lower speed with no issues?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
That's right.

Also, it's still the case that only AMD and Nvidia make discrete graphics cards. Intel only does integrated GPUs that you get "for free" with a CPU, not on a card of their own.

And the difference between "budget" and "performance" categories is pretty much "Budget will run the game at good detail settings, performance will run the game with most details cranked up and probably some antialiasing, too."

Porkchop Express
Dec 24, 2009

Ten million years of absolute power. That's what it takes to be really corrupt.

Factory Factory posted:

That's right.


And the difference between "budget" and "performance" categories is pretty much "Budget will run the game at good detail settings, performance will run the game with most details cranked up and probably some antialiasing, too."


Good to know, thanks.


Factory Factory posted:

Also, it's still the case that only AMD and Nvidia make graphics cards. Intel only does integrated GPUs that you get "for free" with a CPU that aren't on their own card.

Well I meant when they were putting the cards out branded as Nvidia & ATI, before you could get them branded by all of these other companies.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Bit of trivia: you still sometimes see nVidia-branded nVidia cards. I don't recall AMD/ATI doing anything similar these days barring review samples, but we've had posters here who had straight-up nVidia cards, made by nVidia :v:

Doesn't get much more reference than that.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
There were some like that in the HD 5000 series, though I think they were actually OEM'd by someone else on contract, and destined for Big Box machines. About a year ago they sold for super cheap because all the Buttcoin miners were selling off their mining farms.

Porkchop Express
Dec 24, 2009

Ten million years of absolute power. That's what it takes to be really corrupt.
Haha, well, for reference: until the past couple of weeks, the last time I did anything with PC parts, AGP slots were the ticket and PCI-E hadn't even been introduced yet, so I've had to try to do some catching up. Beware the perils of having only Mac hardware for years!

Tunga
May 7, 2004

Grimey Drawer

Agreed posted:

Trivia, you still sometimes see nVidia-branded nVidia cards. I don't recall AMD/ATI doing anything similar these days barring review samples, but we've had posters here who had straight up nVidia cards, made by nVidia :v:

Doesn't get much more reference than that.

Aren't/weren't these actually made by Palit? I could be wrong but I have some vague memory of that being the case.

TheRationalRedditor
Jul 17, 2000

WHO ABUSED HIM. WHO ABUSED THE BOY.

Agreed posted:

Trivia, you still sometimes see nVidia-branded nVidia cards. I don't recall AMD/ATI doing anything similar these days barring review samples, but we've had posters here who had straight up nVidia cards, made by nVidia :v:

Doesn't get much more reference than that.
Didn't Sapphire originally start out as an ATi partner or something, though? I vaguely recall that they produced some really old timey ATi reference boards. Just for trivia.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Glen Goobersmooches posted:

Didn't Sapphire originally start out as an ATi partner or something, though? I vaguely recall that they produced some really old timey ATi reference boards. Just for trivia.

I think PowerColor was an ancient buddy-buddy partner with ATI; it's certainly possible Sapphire was too. Isn't Sapphire owned by the same company that owns Zotac, a brand that has been aggressively pursuing a closer partnership with nVidia lately?


Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Agreed posted:

I think PowerColor was an ancient buddy-buddy partner with ATI, it's certainly possible Sapphire is. Isn't Sapphire owned by the same company that owns Zotac, a brand which has been aggressively pursuing higher partnership with nVidia lately?

It's a little fuzzier than what some guy on [H] said, I think. Zotac is one of the brands of a group called PC Partner, Ltd. PC Partner has some affiliation with Sapphire, but at least as of 2006, Sapphire publicly considered itself quite independent.

However, in 2010, Kitguru.com seemed to treat Zotac and Sapphire as semi-independent brands of PC Partner.

:iiam:

Factory Factory fucked around with this message at 04:26 on Jul 25, 2012
