Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Pretty much.

hobbesmaster
Jan 28, 2008

Wouldn't you need to buy a Quadro card to see any kind of performance for double precision? I thought the consumer cards were essentially constrained to single precision floats for sales purposes (similar to OpenGL performance and CAD).

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
GF110 is so much faster at CUDA than GF114 that the GeForce cards based on GF110 (e.g. GeForce 580) are competitive with GF116/GF106-based Quadros (e.g. Quadro 2000) in DP float, even with the throttling, and smash them at every other workload.

Magog
Jan 9, 2010
Hey guys, quick question: could I run a GTX 670 on a Seasonic M12II 520W? A Google search turned up a dude or two saying yes, but I wanted a little more certainty.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Assuming your CPU isn't an overclocked i7-920 or Bulldozer chip, yes. The card needs 170W for itself.
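If you want the back-of-the-envelope version, here's a rough sketch; the GPU number is the one above, but the CPU and "everything else" figures are my own ballpark assumptions, not measurements:

```c
/* Rough PSU head-room check for a GTX 670 on a 520W unit.
 * GPU figure from the post above; the CPU and "rest of system"
 * numbers are deliberately generous guesses.                   */
#include <stdio.h>

int main(void) {
    int psu_watts  = 520;  /* Seasonic M12II 520W rating     */
    int gpu_watts  = 170;  /* GTX 670, worst case            */
    int cpu_watts  = 125;  /* generous for a stock quad-core */
    int rest_watts = 75;   /* board, RAM, drives, fans, USB  */

    int load = gpu_watts + cpu_watts + rest_watts;
    printf("~%dW of %dW (%d%%)\n", load, psu_watts, 100 * load / psu_watts);
    return 0;
}
```

That pads every number and still lands around 370W, which is why the only real worry is a CPU that blows way past its rated power when overclocked.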

Magog
Jan 9, 2010

Factory Factory posted:

Assuming your CPU isn't an overclocked i7-920 or Bulldozer chip, yes. The card needs 170W for itself.

Alrighty then, don't need a new PSU then. Thanks dude. :)

Endymion FRS MK1
Oct 29, 2011

I don't know what this thing is, and I don't care. I'm just tired of seeing your stupid newbie av from 2011.
I'm surprised this hasn't been brought up:
http://www.techpowerup.com/167711/AMD-Radeon-HD-7970-GHz-Edition-quot-Tahiti-XT2-quot-Detailed.html

Shipping with a new chip design, 1100MHz core and lower voltages than a normal 7970. Maybe this can compete much better with the 680?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

AMD isn't losing on performance, they're losing on price. If they can make that competitive, they'll be back in the game. Otherwise, they won't.

Edit: Here's both of them in Anandtech's GPU Bench 2012 to demonstrate what I mean - remember that these are stock settings, and nVidia's card sort of auto-overclocks, so it edges higher than the stated clock would suggest. What people should really look at, though, is overclocked performance, since the HD 7970 is an amazing overclocker (even more so than the 680 overall). Nothing's guaranteed when running out of spec, but consider that there's a clock-for-clock discrepancy in the 7970's favor, enough of one that an overclocked 7970 run alongside a very overclocked GTX 680 starts to outperform it.

So it's not that the 7970 is bad. As usual some games do better for nVidia, some for ATI. It's just that it's too expensive and doesn't make sense as a value prospect when nVidia's got the 670 that runs like a 680 and can be found in stock at around $400. "Performs similarly, costs more" is not a good look.

Agreed fucked around with this message at 05:53 on Jun 17, 2012

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast
Do remember that in the low to mid-range, AMD's cards are very good propositions, and NVIDIA only has slow or outdated solutions until they release their 660 series and so on.

Incessant Excess
Aug 15, 2005

Cause of glitch:
Pretentiousness
Does anyone have any opinions on 'Galaxy' as a brand? They offer a card here in Europe, under the 'KFA2' name, that is pretty similar to the Asus D2CU Top except that it is actually in stock at some vendors. It looks pretty good in that one review I was able to find:

http://www.hardwareluxx.de/index.php/artikel/hardware/grafikkarten/22686-test-3x-nvidia-geforce-gtx-670.html?start=5

That's power draw, temperature and noise respectively.

future ghost
Dec 5, 2005

:byetankie:
Gun Saliva

Biggest human being Ever posted:

Does anyone have any opinions on 'Galaxy' as a brand? They offer a card here in Europe, under the 'KFA2' name, that is pretty similar to the Asus D2CU Top except that it is actually in stock at some vendors. It looks pretty good in that one review I was able to find:

http://www.hardwareluxx.de/index.php/artikel/hardware/grafikkarten/22686-test-3x-nvidia-geforce-gtx-670.html?start=5

That's power draw, temperature and noise respectively.
I've heard decent things about them on other forums. They're not top-tier, but their cards are generally reasonably priced and they don't skimp on components (VRMs/minor heatsinks). I don't know anything about their warranty service, however.

movax
Aug 30, 2008

grumperfish posted:

I've heard decent things about them on other forums. They're not top-tier, but their cards are generally reasonably priced and they don't skimp on components (VRMs/minor heatsinks). I don't know anything about their warranty service, however.

I think Galaxy has a US RMA warehouse/address at this point, so at least they've got that going for them now.

Rap Game Goku
Apr 2, 2008

Word to your moms, I came to drop spirit bombs


movax posted:

I think Galaxy has a US RMA warehouse/address at this point, so at least they've got that going for them now.

Isn't Galaxy a partner or something of Sapphire? Galaxy for nVidia cards and Sapphire for AMD.

EDIT: vvv I might be mistaken and have Galaxy mixed up with Zotac.

Rap Game Goku fucked around with this message at 04:17 on Jun 18, 2012

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I believe that's Zotac, as in Zotac and Sapphire are owned by the same holding company. But maybe they have Galaxy, too?

td4guy
Jun 13, 2005

I always hated that guy.

grumperfish posted:

they don't skimp on components (VRMs/minor heatsinks).
Speaking of minor heatsinks, I was surprised to see that my GTX 680's RAM chips are naked. Is VRAM cooling not really a big deal?

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

td4guy posted:

Speaking of minor heatsinks, I was surprised to see that my GTX 680's RAM chips are naked. Is VRAM cooling not really a big deal?
No, memory chips produce very little heat at all, several watts each at most.

Tunga
May 7, 2004

Grimey Drawer

Alereon posted:

No, memory chips produce very little heat at all, several watts each at most.
If Corsair made graphics cards the top of the card would have giant comedy plastic spikes along it as a VRAM cooler.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Crossposting this because it's relevant to some of this thread's interests:

Factory Factory posted:



Well, this has been a long time in the making. More precisely, six years of research into x86-based GPUs and highly parallel processors. That thing is 50 original Pentium cores with added 16-wide vector and FP64 hardware.

And as you might notice, there are no video outputs. Intel is not pretending that's a GPU; it's a supercomputer co-processor designed to compete with Nvidia's Tesla.

AnandTech

Dogen
May 5, 2002

Bury my body down by the highwayside, so that my old evil spirit can get a Greyhound bus and ride
New nvidia beta driver release, more performance improvements for 400/500/600 cards and some 600 specific fixes

Boten Anna
Feb 22, 2010

Factory Factory posted:

Crossposting this because it's relevant to some of this thread's interests:

What do they mean exactly by "Pentium 1 cores"? Just that the core lacks all the fancy extensions (MMX, etc.) or is it a literal Pentium 1 just slapped on 22nm process so it's now faster? Kinda both?

nmfree
Aug 15, 2001

The Greater Goon: Breaking Hearts and Chains since 2006

Factory Factory posted:

Crossposting this because it's relevant to some of this thread's interests:
I'm guessing that's going to cost somewhere in the range of $Texas, which is too bad because I'd definitely think about getting one for F@H.

MeruFM
Jul 27, 2010

nmfree posted:

I'm guessing that's going to cost somewhere in the range of $Texas, which is too bad because I'd definitely think about getting one for F@H.

You probably should just donate to F@H at that point.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Boten Anna posted:

What do they mean exactly by "Pentium 1 cores"? Just that the core lacks all the fancy extensions (MMX, etc.) or is it a literal Pentium 1 just slapped on 22nm process so it's now faster? Kinda both?
P54C (pre-MMX Pentium) cores according to Intel. That said, I'm a bit surprised/skeptical they're not really using Atom cores, as those are essentially a Pentium tweaked for efficiency and with support for the current ISA bolted on.

nmfree
Aug 15, 2001

The Greater Goon: Breaking Hearts and Chains since 2006

MeruFM posted:

You probably should just donate to F@H at that point.
Yes, but that wouldn't allow me to show off my epenis++++

Not Wolverine
Jul 1, 2007
Probably good for Bitcoin mining...

Grim Up North
Dec 12, 2011

Colonel Sanders posted:

Probably good for Bitcoin mining...

I know you said that in jest, but I'd expect Bitcoin mining to be a purely integer-based computation, and a scientific co-processor, with its focus on double-precision floating-point math, not to be cost-effective. And Bitcoin mining is all about cost efficiency. On the other hand, I don't really know what the Pentium cores are there for.
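For anyone wondering why it's integer work: the whole mining loop is double SHA-256 over the block header, and SHA-256 is nothing but 32-bit adds, rotates and XORs. A minimal sketch of the per-round primitives (not a working miner, just to show there's no floating point anywhere):

```c
/* Bitcoin mining = double SHA-256 over the block header, built
 * entirely from 32-bit integer operations. Sketch of the per-round
 * primitives only; this is not a working miner.                   */
#include <stdint.h>

static inline uint32_t rotr(uint32_t x, int n)                 { return (x >> n) | (x << (32 - n)); }
static inline uint32_t ch(uint32_t x, uint32_t y, uint32_t z)  { return (x & y) ^ (~x & z); }
static inline uint32_t maj(uint32_t x, uint32_t y, uint32_t z) { return (x & y) ^ (x & z) ^ (y & z); }
static inline uint32_t sigma0(uint32_t x) { return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22); }
static inline uint32_t sigma1(uint32_t x) { return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25); }

/* A miner hashes the header twice (SHA-256d) per nonce, dozens of
 * rounds of the above each time, then compares against the target.
 * Integer throughput per watt is the whole game; FP64 hardware
 * would just sit there idle.                                       */
```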

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
The articles go into detail, but basically they're there to steal some of Nvidia's thunder by offering a highly parallel HPC processor cluster on which you can re-use familiar x86 code and standard Intel systems-architecture optimizations.

Many desktop computing tasks take good advantage of having a few large processing cores available that are complex and flexible. The chipmaker adds complexity by packing more and more transistors onto the silicon, so more and more complex tasks can be completed in a single clock cycle.

But there are plenty of computing workloads that do not need as much per-core complexity. A lot of statistical modeling, scientific simulation, heavy-duty content creation (like 3D rendering or bulk video transcoding), and graphics rendering tasks don't have a huge range of extremely complex calculations. Rather, they have tons and tons of the same calculations that have to be run over and over on a ton of data. In such workloads, a ton of simple execution cores is much more effective than a few complex cores at getting the work done.

There's more to it, but that's the gist.

So this Intel Xeon Phi is just a highly parallel processor. It's an add-in to a system with complex regular Xeons the way a GPU would be - computing power optimized for different things. Actually, Xeon Phi is a full system-on-a-board, but Intel isn't selling it that way.

It's just that tech has moved on from where it used to be, so the cheap and simple processor core of 2012 is an augmented top-of-the-line model from 1993. Nvidia and ATI/AMD came to HPC computing by a different route, starting with tons of extremely simple execution cores and building up complexity.
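To make the "tons of the same calculation over tons of data" point concrete, here's a toy sketch in OpenMP-flavored C (purely illustrative, not Intel's actual Xeon Phi toolchain): every iteration is independent, so the work scales with how many simple cores and vector lanes you can throw at it, not with how clever any single core is.

```c
/* SAXPY: the same multiply-add applied independently to millions of
 * elements. This is the shape of workload that favors many simple
 * cores over a few big, complex ones. Build with: gcc -O2 -fopenmp */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const size_t n = 50 * 1000 * 1000;
    float a = 2.0f;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    if (!x || !y) return 1;

    for (size_t i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* No iteration depends on any other, so this one line is all it
     * takes to spread the loop across every available core.        */
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %.1f\n", y[0]);  /* prints 4.0 */
    free(x);
    free(y);
    return 0;
}
```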

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Dogen posted:

New nvidia beta driver release, more performance improvements for 400/500/600 cards and some 600 specific fixes

Wow, these drivers are... really something. I know, beta means what it says, but it's screwing with GPGPU capability in 670s/680s... Like, turning it the hell off... And some of the features aren't all the way there. They've got a ways to go with the FXAA selectivity functionality, but that's understandable given how many applications don't need to be FXAA'd up.

What I'd like is for different levels of FXAA to be selectable as global or specific options, though. Have a more aggressive FXAA that does more post-sharpening, for example, some games would strongly benefit from that. CSAA, as cool as the technology is, interferes with the rendering on some in-house deferred rendering engines in a really noticeable way (like, do not use CSAA with Diablo III, it breaks the visuals).

I'm surprised they singled out S.T.A.L.K.E.R. CoP for performance increase, since it already ran like CRAZY before - DX11, everything maxed, Absolute Nature 3, Atmosfear 3, forced SSAA and SGSSAA and FXAA and it pretty much chills out at the un-boosted clockrate and never hits above ~70% of the power target even indoors with tons of interactive shadows. Maybe if it were a higher resolution, I dunno.

I wonder what the vsync fix means. I noticed some weirdness with adaptive vsync especially, hopefully the new drivers take care of that...

Tunga
May 7, 2004

Grimey Drawer

Agreed posted:

I'm surprised they singled out S.T.A.L.K.E.R. CoP for performance increase, since it already ran like CRAZY before - DX11, everything maxed, Absolute Nature 3, Atmosfear 3, forced SSAA and SGSSAA and FXAA and it pretty much chills out at the un-boosted clockrate and never hits above ~70% of the power target even indoors with tons of interactive shadows. Maybe if it were a higher resolution, I dunno.

Same for Batman. My 670 already runs it completely maxed out without any issues, I can't imagine the 680 needs another 13% performance or whatever it was.

Maybe for 3D?

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

There's a new pointless e-peen benchmark in town, ladies and gentlemen.

http://origin-www.geforce.com/whats-new/articles/put-your-system-through-its-paces-download-the-pla-directx-11-and-physx-benchmark/

It's an nVidia-centric benchmark in that it lets you test PhysX at various levels. It's a little unintuitive to get running correctly: you have to hit escape to access the settings button, and even starting it in DX11 mode on high, it fires up at 720p with most settings decidedly not high. To compare against other scores, manually set it to 1920x1080 and make sure PhysX is set to High (that requires at least as many CUDA cores as a 560Ti has, iirc, or it'll force a non-1:1 "Medium" PhysX mode that's okay but not nearly as impressive in how it's used).

I'm pleased as punch at the score my overclocked-as-hell 680 turns in, using the GTX 580 for PhysX. Strongly beats nVidia's benched score and compares favorably to 580 SLI scores others are turning in. :rock:

Since I'm on a P67 motherboard running Sandy Bridge, to use both the 680 and the 580 means that each has to run at PCI-e 2.0 8x; that does sacrifice some bandwidth, anywhere between 2%-5%, and it does show at 1920x1080 with current-gen cards. But two of them working in tandem with a minor performance penalty still comes out majorly ahead when considering the performance penalty of GPU-accelerated PhysX. It's really, almost surprisingly intensive. Shame more games don't use it so the rest of the time I've just got a 680 going 2%-5% slower than it could for no reason :v:

On that note, you know, 1920x1080 may be commodity when it comes to panels, but that's really not all that low of a resolution. I don't know why we tend to shrug at 1080p and only treat 1440p/1600p/surround resolutions as genuinely high resolutions when looking for really high performance at max or near-max settings.

Tunga
May 7, 2004

Grimey Drawer

Agreed posted:

I'm pleased as punch at the score my overclocked-as-hell 680 turns in, using the GTX 580 for PhysX.

[...]

Shame more games don't use it so the rest of the time I've just got a 680 going 2%-5% slower than it could for no reason :v:
You really need to stop posting about this, I keep seriously considering buying another card just for PhysX :swoon: and it's your fault!

If more games actually supported PhysX, I'd be all over this. It seems like it's basically Crossfire/SLI without the ridiculous limitations (fullscreen only) and flaws (microstutter).

But yeah...I wonder what the cheapest card would be to make this worthwhile...

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Tunga posted:

You really need to stop posting about this, I keep seriously considering buying another card just for PhysX :swoon: and it's your fault!

If more games actually supported PhysX, I'd be all over this. It seems like it's basically Crossfire/SLI without the ridiculous limitations (fullscreen only) and flaws (microstutter).

But yeah...I wonder what the cheapest card would be to make this worthwhile...

FactoryFactory and I kicked the ball around on that and we figured, given the bandwidth of current-gen cards, and given that PhysX is a specific kind of compute workload that even the hamstrung GK104 parts are good at, you really don't need, or substantially benefit from, the amazing compute performance of a GF110 (GTX 560Ti-448, GTX 570, GTX 580) part. You just need a bunch of CUDA cores. When they're not being shaders, PhysX gives the SMs (the clusters of CUDA cores) a very zen-like workload. They munch through it no problem. The only thing is you need to match like to like to some degree or you'll have the rendering card chilling out waiting on the processing card to catch up. It's impossible to generalize like "a last-gen card should be fine," because while that is 100% true with Kepler's very gaming-focused performance (compared to Fermi's "amazing at both!... poo poo this is expensive" approach), it would not have been true last-gen. A top-end G80 (8800 GTX) would slow a top-end GT200 (GTX 280) down. A top-end GT200 might be able to keep up with a GTX 560Ti, but it could slow a GF110-based part down. So it goes.

After a long session of bullshitting about it we figured, eh, 560Ti would be a very safe bet. You could probably get away with a standard GTX 560. That's to not handicap a GTX 670/GTX 680's rendering speed. When using a card as a dedicated PhysX processor you can only overclock the memory. It is advisable to do so, you want bandwidth. CUDA workloads (and that includes PhysX) are almost all about bandwidth - tons of tiny parallelized processors working in tandem, hungry for all the bandwidth they can eat.

One thing's for sure, a GF110 part (let alone the top-tier version) is almost certainly dramatic overkill for the task at hand, it's just what I've got.

Agreed fucked around with this message at 11:16 on Jun 20, 2012

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
That's certainly a better reply than the 500 pixel :geno: I was considering posting.

I have eye candy lust, but nowhere near enough to drop big bones on a PhysX assist card. Agreed worked very hard to earn a pass from my raised eyebrow of high-horse scorn.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Factory Factory posted:

That's certainly a better reply than the 500 pixel :geno: I was considering posting.

I have eye candy lust, but nowhere near enough to drop big bones on a PhysX assist card. Agreed worked very hard to earn a pass from my raised eyebrow of high-horse scorn.

Yeah, I should reiterate that I'm only hanging onto the 580 because it's an -AR part and I bought the advance RMA for it. I am keeping it until it dies, so I can get something current-gen with respectable performance to fill a similar role. EVGA's change to their warranty makes a lot of sense, imo, they've moved from one tenable model to another, both of them allow for actual warranty issues (failure within three years is better than the previous KR or unregistered AR's 2-year limited). Past that, they're only obliged to replace it with something that performs similarly to the card they're replacing. So, for example, let's say my GTX 280 had been bought from EVGA instead of BFG (who said gently caress LIFETIME WARRANTIES and also YOU). It dies today. Oh no!... Okay, I send it off, they determine that it performs a lot like a GTX 560. That's more or less true, and more or less sucks for me, since while that card cost $500 they don't go by cost. :v:

The interesting part is the two options for extending the warranty (which are also prerequisites for the step-up, so it's a more complex racket). You can gamble on buying two extra years to stay within some kind of likely performance margin. The blowers on 'em are rated for something like 9 years, and that's the part you're hoping is going to fail early if you're in it for the opportunity cost gamble, so the house has an advantage, but they always do. Even though it's still going to be similar to the above scenario, if my GTX 580 died some time a couple years from now and they decide that something like a GTX 760Ti (this is make-believe, bear with me) is a proper replacement, I'd probably be okay with that. Even though it's downgrading from the top notch part, it'll have whatever fancy technology is then-current, and, bonus, new, so not likely to die.

---

Shorter, more relatable version: nobody needs this, I just happen to have it because I used to use CUDA and now I really don't, and I splurged on a 680 because SHINY poo poo RULES. I did a bunch of digging to find out exactly what my power supply can do and good rough figures on system power usage of two cards like this, and... Now I've got a setup that I would not recommend anyone buy.

What'd you say, it's like SLI without the hassle? More like "all the cost of SLI without the majority of the benefits," really, it's still about $1000 of graphics cards, but only PhysX accelerated games see any benefit from it, and then only when using PhysX. Whereas with SLI you've got scalable rendering that, sure, might not be perfect, but if you're running a three-monitor setup, having one 680 rendering graphics and one 580 doing nothing most of the time except sucking up its idle power usage for no reason is literally worthless. Or worse than worthless, if you're not on PCI-e 3.0, since PCI-e 2.0 at 8x costs some performance.

This message has been brought to you by the Scared Straight Vis A Vis GPU Opulence program.

Agreed fucked around with this message at 11:47 on Jun 20, 2012

Tunga
May 7, 2004

Grimey Drawer
Thanks for the writeup. I'm not really going to spend money on it, especially not £130-170 when there are currently only about three games with decent PhysX support. But I have to say that I find the idea intriguing. To me there is more value in being able to offload a certain type of processing to a lesser card than there is in sharing between identical cards in standard SLI, because it's quite possible that you already have just such a card when upgrading. It still requires them to match up in some kind of sensible performance ratio though, as you say.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

When a game does support PhysX though it rules so hard. Last word. Honest.

Tunga
May 7, 2004

Grimey Drawer

quote:

What'd you say, it's like SLI without the hassle? More like "all the cost of SLI without the majority of the benefits," really, it's still about $1000 of graphics cards, but only PhysX accelerated games see any benefit from it, and then only when using PhysX. Whereas with SLI you've got scalable rendering that, sure, might not be perfect, but if you're running a three-monitor setup, having one 680 rendering graphics and one 580 doing nothing most of the time except sucking up its idle power usage for no reason is literally worthless.

Maybe I will run SLI and offload PhysX :colbert: .

Enjoy your crazy setup!

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Tunga posted:

Maybe I will run SLI and offload PhysX :colbert: .

Enjoy your crazy setup!

I'll drink to that. :tipshat:

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice
Anandtech has their review of the Geforce GT 640 2GB up. It's a pretty sweet option for HTPCs, though gaming performance is significantly worse than the expectations nVidia set due to the extremely low memory bandwidth. It would be interesting to see how a similar card performed with GDDR5.
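The bandwidth gap is easy to put a number on: peak bandwidth is just effective data rate times bus width. A quick sketch with approximate reference-card figures (exact clocks vary by board, and the GDDR5 line is a hypothetical, so treat these as ballpark):

```c
/* Peak memory bandwidth = data rate (MT/s) x bus width (bits) / 8.
 * Clock figures are approximate reference numbers, for illustration. */
#include <stdio.h>

static double gbs(double megatransfers, int bus_bits) {
    return megatransfers * bus_bits / 8.0 / 1000.0;  /* GB/s */
}

int main(void) {
    printf("GT 640 DDR3, 128-bit @ ~1800 MT/s:      %5.1f GB/s\n", gbs(1800, 128));
    printf("Hypothetical GDDR5 GT 640 @ ~5000 MT/s: %5.1f GB/s\n", gbs(5000, 128));
    printf("Radeon HD 7750, 128-bit @ 4500 MT/s:    %5.1f GB/s\n", gbs(4500, 128));
    return 0;
}
```

Roughly 29 GB/s against 70-80 GB/s, which is why the DDR3 card falls so far short of where the shader count suggests it should land.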

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

A hundred dollars for that level of performance is sort of a joke when the AMD 7750-900 eats its lunch for about twenty five bucks more.

Don't eat fast food one day, double your graphics performance for the current generation of cards.
