Grim Up North
Dec 12, 2011

I'm looking for an entry-level NVIDIA GPU to develop double-precision CUDA software on. Now, this is not for production, so I don't really need much horsepower (and I have no use for the actual graphics part at all), but I don't want to get a card with an out-of-whack price/performance ratio either. However, I don't really know where to look for any kind of general double-precision CUDA benchmarks.

Would a GTS 450 GDDR5 (shipped for 80€) be an OK buy? Thanks.
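For reference, the kind of thing I need to run is just basic double-precision kernels. Here's a minimal sketch (assuming the CUDA toolkit is installed; double precision needs compute capability 1.3 or newer, which any Fermi card like the GTS 450 has):

code:

#include <cstdio>
#include <cuda_runtime.h>

// Minimal double-precision kernel: y = a*x + y (daxpy).
__global__ void daxpy(int n, double a, const double *x, double *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    // Print the compute capability so you can see whether doubles are supported.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s, compute capability %d.%d\n", prop.name, prop.major, prop.minor);

    const int n = 1 << 20;
    double *x, *y;
    cudaMalloc(&x, n * sizeof(double));
    cudaMalloc(&y, n * sizeof(double));
    cudaMemset(x, 0, n * sizeof(double));
    cudaMemset(y, 0, n * sizeof(double));

    // Real code would copy input data in with cudaMemcpy first.
    daxpy<<<(n + 255) / 256, 256>>>(n, 2.0, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}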

Grim Up North
Dec 12, 2011

Factory Factory posted:

CUDA price/performance ratios usually kick in when comparing GeForce 560 Ti-448, 570, or 580 to a Fermi-based Quadro. Anything else and CUDA price/performance is just awful since the CUDA performance is so low. At that point, basically anything works for "Will it run?" testing.

Thanks, I'd like to be able to do "Will it run reasonably fast?" testing as well, so I should probably go for a 570. (The 560 Ti-448 prices are almost the same as the 570 prices.)

Am I right in assuming that Kepler is completely uninteresting for double precision computing until the GK110 (Tesla K20) is released?

Grim Up North
Dec 12, 2011

Colonel Sanders posted:

Probably good for bitcoin mining. . .

I know you said that in jest, but I'd expect Bitcoin mining to be a purely integer-based computation, so a scientific co-processor with its focus on double-precision floating-point math wouldn't be cost effective. And Bitcoin mining was all about cost efficiency. On the other hand, I don't really know what the Pentium cores are there for.
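To make the integer point concrete, here's a rough sketch (not actual miner code, just the SHA-256 building blocks Bitcoin hashes with) - it's all 32-bit rotates, shifts, XORs and ANDs, with no floating point anywhere:

code:

#include <cstdint>
#include <cstdio>

// SHA-256 round primitives: purely 32-bit integer operations.
__host__ __device__ inline uint32_t rotr(uint32_t x, int n) {
    return (x >> n) | (x << (32 - n));
}
__host__ __device__ inline uint32_t big_sigma0(uint32_t x) {
    return rotr(x, 2) ^ rotr(x, 13) ^ rotr(x, 22);
}
__host__ __device__ inline uint32_t big_sigma1(uint32_t x) {
    return rotr(x, 6) ^ rotr(x, 11) ^ rotr(x, 25);
}
__host__ __device__ inline uint32_t ch(uint32_t x, uint32_t y, uint32_t z) {
    return (x & y) ^ (~x & z);
}
__host__ __device__ inline uint32_t maj(uint32_t x, uint32_t y, uint32_t z) {
    return (x & y) ^ (x & z) ^ (y & z);
}

int main() {
    // A fragment of one SHA-256 round step, using the standard initial hash
    // values as sample inputs - just to show the flavour of the math.
    uint32_t a = 0x6a09e667u, b = 0xbb67ae85u, c = 0x3c6ef372u, e = 0x510e527fu;
    printf("%08x %08x\n", big_sigma0(a) + maj(a, b, c), big_sigma1(e));
    return 0;
}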

Grim Up North
Dec 12, 2011

Whale Cancer posted:

I'm looking to invest about $300 into a card for gaming. I'll be running an i5 3570 chip. I'm currently set on the 560 ti 448. Is there a better option at my price point?

Wait for the GTX 660 Ti to come out in three days. It will initially cost $300 and be quite a bit faster. (Read the posts above yours.)

Grim Up North
Dec 12, 2011

There are rumors on Chinese tech sites that the GTX 660 and the GTS 650 will be released on September 6th.

Grim Up North
Dec 12, 2011


These are actually the GTX 660 OEM specs, and in NVIDIA-land OEM parts and retail parts sometimes have the same name, completely different innards, and wildly different performance. I'm pretty sure we will soon see the retail GTX 660 based on a new chip, as I can't believe they'll release another retail card with yet another GK104 binning.

Grim Up North
Dec 12, 2011

Factory Factory posted:

Zotac - Newer but good brand reputation, not many Goon experiences, reviews well

Just one data point, but I have a Zotac 8800 GTS 512 which is still going strong after god knows how many years.

Grim Up North
Dec 12, 2011

Avocados posted:

What Nvidia cards are out right now that are comparable to the 5870 in terms of performance? I'd rather "replace" the card instead of upgrade, as funds are a little low.

Not out right now, but the GTX 660 should launch in less than a week, be comparable to a Radeon HD 7850, and have an MSRP of $229.

Grim Up North
Dec 12, 2011

Edward IV posted:

Wow. Though I wonder how Intel is dealing with the memory bandwidth issue that makes AMD's APUs need high speed memory.

Intel apparently intends to die-stack some low-power DDR3 directly onto the CPU die, which allows for a very wide bus and would considerably reduce the bandwidth requirements for external RAM.

E: Here's a cool image showing Sony doing it in the PS Vita:

Grim Up North fucked around with this message at 18:06 on Sep 12, 2012

Grim Up North
Dec 12, 2011

Factory Factory posted:

Radeon 7000 through 9800, X300 through X1950, HD 2000 to HD 8000...

Radeon 4K 2870.

Grim Up North
Dec 12, 2011

Did y'all think that your card would survive going from 1080p to 4K? :stonklol: Pretty fun, if way too short, article at AnandTech.

Grim Up North
Dec 12, 2011

Sindai posted:

You should post that in the Xbone or PS4 threads because people have been claiming they probably won't really do 1080p because some previous gen games didn't really do 720p for ages now.

Well, that's based on the decision taken by Crytek to have Ryse render at 900p internally, probably to stick more buffers into the eSRAM.


Grim Up North
Dec 12, 2011

Sir Unimaginative posted:

I don't know if it's just 560 Ti chat, but I ran into a TDR recovery failure bugcheck (0x116) on 327.23.
And it gets worse: Windows Update now points at 327.23, instead of 311.06 - at least, on Windows 8.1 it does. It's no longer a recourse for TDR victims.

Yeah, I know that the 560 Ti is ancient by :pcgaming: standards, but I'm really starting to hate NVIDIA's driver team's blatant disregard for 560 Ti owners.

Grim Up North
Dec 12, 2011

quote is not edit, damnit

Grim Up North
Dec 12, 2011

Yeah, sounds nice, but I'm a bit wary. Will this be a repeat of 3D Vision, where there's really only one display that supports it, and it's great for gaming but poo poo for everything else?

Grim Up North
Dec 12, 2011

Sormus posted:

Currently I've cancelled my order for a reference-cooler 290X and awaiting for an improved design one.

Wait a week, the R9 290 might very well be worth the wait.

Grim Up North
Dec 12, 2011

Does ~Raptr by AMD~ automatically push your data into the cloud? You have to opt in to that, right?

Grim Up North
Dec 12, 2011

You know, apart from hoping that they get my 50 bucks' worth of silicon working again, I'm now really interested to finally hear what the hell is going on with the whole 560 Ti TDR stuff.

Grim Up North
Dec 12, 2011

Pretty, uh, hot card. A loud and hot card. But with a custom cooler on it, it's actually an awesome card. Here's a test with an Accelero Xtreme III: (290 "Uber" is them setting fan speed to 55%)



(Lautstärke being noise level, if the dB(A) didn't tip you off.)

Grim Up North
Dec 12, 2011

HalloKitty posted:

The big question mark here is: what the hell happens to Titan? It's less capable in every single way than 780 Ti. It's a product with a fancy name and no market.

It's still the entry-level compute card, is that no market?

Grim Up North
Dec 12, 2011

mayodreams posted:

Cross posting this from the Wii U demise thread. Someone posted a chart with the relative performance of the consoles, and I added in the 780 TI and 290x.

Interesting, but maybe a 280X would be a more appropriate card, as that would be at least somewhat comparable cost-wise.

Grim Up North
Dec 12, 2011

GrizzlyCow posted:

Join them. Join them with their 780s, their R9 290X's, their aftermarket cooling, and their 3-Way setups. SLI with them, and buy two 120Hz 1440p monitors. Don't look back. Lower than a 770 lies the path of weakness. Be unconstrained by such trivial notions as value or common sense. Burn your money and be free.

Burn, money, burn! On Mount Vesuvius the Titans crush all mortal sense. Burn, money, burn!

Grim Up North
Dec 12, 2011

I don't get it, are there really people who choose one card over another just because someone's lljk handle is printed on it?

Grim Up North
Dec 12, 2011

Agreed posted:

If you're running a 400- or 500-series card, nab this sucker quick-like and see if you can finally play some recent releases worth a drat instead of being stuck on R316 and earlier drivers.

While this driver seems to fix the problem in many, maybe even the majority of, cases, unfortunately in my case it didn't. I've been sending NVIDIA the dump files and hope they keep working on it. Still, if you haven't tried it yet, it's definitely worth a shot.

Grim Up North
Dec 12, 2011

Agreed posted:

They are selling an FPGA that will replace the scalar unit on compatible monitors (can't remember where I saw that specific detail, but it was with an nVidia guy). We don't know which monitors will be compatible yet, though.

To be honest, the whole time I've been wondering if they really want to sell a board where you have to crack open a monitor (made by a third party) that is not meant to be opened by end users. That seems fraught with a whole lot of liability concerns, and even if it's sold as an enthusiast, do-it-at-your-own-risk kit, I could see it leading to negative PR.

Anyways, offer me a Dell Ultrasharp compatible kit and I'll be interested.

Grim Up North
Dec 12, 2011

Nephilm posted:

Maybe we'll start seeing the 800 series in April-June, or they could start releasing more 28nm Maxwell parts before then. Either way, announcements and rumors should start appearing in March.

The last official word I've read was

http://wccftech.com/nvidia-maxwell-geforce-gtx-750-ti-gtx-750-official-specifications-confirmed-60watt-gpu-geforce-800-series-arrives-2014/ posted:

NVIDIA also confirmed during the conference that they are planning to introduce the GeForce 800 series which is fully based on the Maxwell architecture in second half of 2014. This means that we will see the proper high-performance GPUs such as the replacements for GeForce GTX 780, GeForce GTX 770 and GeForce GTX 760 in Q3 2014. We have already noted codenames of the high-end Maxwell chips which include GM200, GM204 and GM206, however NVIDIA didn’t mention what process they would be based on but early reports point out to 20nm.

I'm not sure if NVIDIA just confirmed second half of 2014 or Q3 but I wouldn't expect high-end parts in Q2.

Grim Up North
Dec 12, 2011

Heise (article in German) apparently heard at CeBIT that 20nm GPUs are coming out in August (AMD) at the earliest, while NVIDIA is talking about Q4 '14 or even Q1 '15. Nothing too official, but it seems the 780 Ti-havers itt can pat themselves on the back.

Grim Up North
Dec 12, 2011

Ignoarints posted:

looks better than mine



Holy poo poo, did your PC metastasise? Anyway, I think you should post your room in PYF goon lair, your first picture is very promising.

Grim Up North
Dec 12, 2011

Here's a fun link I haven't seen posted on here:

http://richg42.blogspot.de/2014/05/the-truth-on-opengl-driver-quality.html

It's Rich Geldreich's impression of various OpenGL drivers.

quote:

Vendor A
[...]
Even so, until Source1 was ported to Linux and Valve devs totally held the hands of this driver's devs they couldn't even update a buffer (via a Map or BufferSubData) the D3D9/11-style way without it constantly stalling the pipeline. We're talking "driver perf 101" stuff here, so it's not without its historical faults. Also, when you hit a bug in this driver it tends to just fall flat on its face and either crash the GPU or (on Windows) TDR your system. Still, it's a very reliable/solid driver.

quote:

Vendor B
A complete hodgepodge, inconsistent performance, very buggy, inconsistent regression testing, dysfunctional driver threading that is completely outside of the dev's official control.
[...]
Vendor B driver's key extensions just don't work. They are play or paper extensions, put in there to pad resumes and show progress to managers. Major GL developers never use these extensions because they don't work. But they sound good on paper and show progress. Vendor B's extensions are a perfect demonstration of why GL extensions suck in practice.

quote:

Vendor C
It's hard to ever genuinely get angry at Vendor C. They don't really want to do graphics, it's really just a distraction from their historically core business, but the trend is to integrate everything onto one die and they have plenty of die space to spare.
[...]
These folks actually have so much money and their org charts are so deep and wide they can afford two entirely different driver teams! (That's right - for this vendor, on one platform you get GL driver #1, and another you get GL driver #2, and they are completely different codebases and teams.)

:allears:

Grim Up North
Dec 12, 2011

Factory Factory posted:

Sorry geez! Maybe it's noon Pacific instead. :saddowns:

Wasn't it the 19th for the actual NDA lift/paper launch, and today some general Maxwell stuff? Or was that just rumours?

Grim Up North
Dec 12, 2011

veedubfreak posted:

Guys, I'm allowed to buy a 980 for my triple 1440 set up right? I wouldn't want to get anyone's panties in a bunch.

That's like two 4K displays worth of pixels. Shouldn't you be buying two or more of them? :twisted:

Grim Up North
Dec 12, 2011

Hamburger Test posted:

imgur doesn't accept .svg and I'm too lazy to find a program to convert them unless you really want to see them.

https://mediacru.sh/

I'd be interested.

Grim Up North
Dec 12, 2011

Fajita Fiesta posted:

What happened with that announcement that AMD was teasing on the 25th?

OpenCL 2.0 driver support. :lol:

Grim Up North
Dec 12, 2011

Khagan posted:

Where do these supposed Sisoft Sandra results for a 390X put it in relation to other GPUs?

http://www.sisoftware.eu/rank2011d/show_run.php?q=c2ffccfddbbadbe6deedd4e7d3f587ba8aacc9ac91a187f4c9f9&l=en

If the results are valid and the CUs are the same as in current GCN, it has roughly 50% more shaders. The RAM with the new HBM (4096-bit bus) is a bit harder to talk about - on the one hand bandwidth is roughly double, but on the other I don't think current-gen GCN chips are especially memory-bandwidth starved.

I'd say it should come out at around 50% faster than an R9 290X.

E: Here is a Sisoft entry for big Maxwell (GM200): http://www.sisoftware.eu/rank2011d/show_run.php?q=c2ffccfddbbadbe6deeadbeedbfd8fb282a4c1a499a98ffcc1f9&l=de

+50% shaders, but no HBM, so probably not 50% faster than a 980.
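Back-of-envelope on the bandwidth bit, in case anyone wants to check my numbers (a sketch with assumed data rates: the rumoured ~1 Gbps/pin for first-gen HBM vs. the 5 Gbps/pin GDDR5 on the 290X):

code:

#include <cstdio>

// Peak memory bandwidth in GB/s = bus width in bits / 8 * data rate in Gbps per pin.
static double peak_gbs(int bus_width_bits, double gbps_per_pin) {
    return bus_width_bits / 8.0 * gbps_per_pin;
}

int main() {
    double hbm   = peak_gbs(4096, 1.0); // rumoured first-gen HBM -> ~512 GB/s
    double gddr5 = peak_gbs(512, 5.0);  // R9 290X's GDDR5        -> ~320 GB/s
    printf("HBM:   %.0f GB/s\nGDDR5: %.0f GB/s\nratio: %.2fx\n",
           hbm, gddr5, hbm / gddr5);
    return 0;
}

With those (assumed) clocks it works out to more like 1.6x than a clean 2x, but still a big jump.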

Grim Up North fucked around with this message at 16:40 on Nov 11, 2014

Grim Up North
Dec 12, 2011

Overclock your CPU, that's what the K chips are made for. And since you have a Sandy Bridge CPU and an Asus Z68 board the process will be super easy. Check the "Overclocking for Workgroups" thread.

Grim Up North
Dec 12, 2011

The new JPR numbers for discrete graphics card market share are out, and it's looking bad for AMD. Here is a graphic compiling the last thirteen years of (only) ATI/AMD vs. NVIDIA, with key releases for reference:



I hope AMD can still innovate in the GPU market and doesn't completely give up like they did with CPUs.

Grim Up North
Dec 12, 2011

Gibbo posted:

How long am I going to have to wait for a 3gb 960?

Assuming you want a 4GB one, it seems that Asus has confirmed that they will launch theirs this month. (But ^^^ it might not make sense to buy one.)

Grim Up North
Dec 12, 2011

Here's an interesting forum post on Mantle/Vulkan/DX12 by a somewhat well-known gamedev.

quote:

So why didn't we do this years ago? Well, there are a lot of politics involved (cough Longs Peak) and some hardware aspects but ultimately what it comes down to is the new models are hard to code for. Microsoft and ARB never wanted to subject us to manually compiling shaders against the correct render states, setting the whole thing invariant, configuring heaps and tables, etc. Segfaulting a GPU isn't a fun experience. You can't trap that in a (user space) debugger. So ... the subtext that a lot of people aren't calling out explicitly is that this round of new APIs has been done in cooperation with the big engines. The Mantle spec is effectively written by Johan Andersson at DICE, and the Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a couple guys at Valve into the fold.

[...]

Phew. I'm no longer sure what the point of that rant was, but hopefully it's somehow productive that I wrote it. Ultimately the new APIs are the right step, and they're retroactively useful to old hardware which is great. They will be harder to code. How much harder? Well, that remains to be seen. Personally, my take is that MS and ARB always had the wrong idea. Their idea was to produce a nice, pretty looking front end and deal with all the awful stuff quietly in the background. Yeah it's easy to code against, but it was always a bitch and a half to debug or tune. Nobody ever took that side of the equation into account. What has finally been made clear is that it's okay to have difficult to code APIs, if the end result just works. And that's been my experience so far in retooling: it's a pain in the rear end, requires widespread revisions to engine code, forces you to revisit a lot of assumptions, and generally requires a lot of infrastructure before anything works. But once it's up and running, there's no surprises. It works smoothly, you're always on the fast path, anything that IS slow is in your OWN code which can be analyzed by common tools. It's worth it.

Grim Up North
Dec 12, 2011

Space Gopher posted:

Haha, no it's not.

I thought that was :thejoke: but sometimes I really don't know anymore ...

Grim Up North
Dec 12, 2011

1gnoirents posted:

I hope this isn't true



Seems to be the same prices that heise.de, a usually reliable source, got to see at CeBIT. Both the 390 and the 390X are faster than a 290X, as expected.




Nice to see that they are going for 8GB of HBM VRAM.
