craig588
Nov 19, 2005

by Nyc_Tattoo
It's way cheaper not to have upgradeable VRAM; in pretty much every case where you'd want more or faster video card memory, it's probably time to get a more powerful GPU as well anyway. Support for arbitrary memory on a video card would add so much to development costs with virtually no benefit to anyone.

Rastor
Jun 2, 2001

Palit is joining the 6GB party.

Ignoarints
Nov 26, 2010

craig588 posted:

It's way cheaper not to have upgradeable VRAM; in pretty much every case where you'd want more or faster video card memory, it's probably time to get a more powerful GPU as well anyway. Support for arbitrary memory on a video card would add so much to development costs with virtually no benefit to anyone.

I was just wondering, in the case of the 780, since there seems to be a wide range of VRAM models now.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE
Honestly I think getting a 6GB 780/780 Ti would be a waste of the extra money (likely quite a bit of it), unless you plan on getting a pair of them for 4K gaming.

A thought just occurred to me - we have no release dates for these 6GB cards, so they must be a fair few weeks away. Is it just me, or does this make it seem more likely that Nvidia has a longer wait than we thought before they can get the 20nm Maxwell parts ready? I suspect we won't see 800-series stuff on the desktop till early next year.

The Lord Bude fucked around with this message at 16:15 on Mar 25, 2014

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

The Lord Bude posted:

Honestly I think getting a 6GB 780/780 Ti would be a waste of the extra money (likely quite a bit of it), unless you plan on getting a pair of them for 4K gaming.

A thought just occurred to me - we have no release dates for these 6GB cards, so they must be a fair few weeks away. Is it just me, or does this make it seem more likely that Nvidia has a longer wait than we thought before they can get the 20nm Maxwell parts ready? I suspect we won't see 800-series stuff on the desktop till early next year.

Nvidia is supposed to start talking about maxwell and other poo poo in 10 minutes.

http://www.pcgameshardware.de/GTC-Event-257049/Specials/GTC-2014-Livestream-Nvidia-Maxwell-20-nm-1114823/

Jan
Feb 27, 2008

The disruptive powers of excessive national fecundity may have played a greater part in bursting the bonds of convention than either the power of ideas or the errors of autocracy.
Is there a GPU benchmarking tool with a test that measures bandwidth? It seems that the usual 3dMark/Unigine synthetic benchmarks tend to measure shader and texture processing power with preloaded data, whereas I want to specifically measure the impact of PCIe speed at my monitor's native 1600p.

veedubfreak
Apr 2, 2005

by Smythe
FYI here is where Jacob from EVGA talks about the upcoming fun
http://www.overclock.net/t/1475993/evga-step-up-your-gtx-780-to-6gb-update-6gb-gtx-780-ti-incoming-aswell

Straker
Nov 10, 2005

The Lord Bude posted:

Honestly I think getting a 6GB 780/780 Ti would be a waste of the extra money (likely quite a bit of it), unless you plan on getting a pair of them for 4K gaming.
They know all the savvy high-end gamers bought AMD cards anyway. I would be greatly amused but also unsurprised if their rationale was "let's try to get these shmucks to buy a third or fourth iteration of the same card."

sorry to anyone reading this courtesy of a 780 who isn't an idiot and only has it because they were too slow to get a 290 before prices went up :v:

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Ignoarints posted:

I wonder if we'll ever be able to upgrade vram, or is it too integrated into the specific application for that

It's soldered directly to the board. Upgrading it would be kinda like upgrading your system RAM by replacing the DRAM packages on the memory stick while otherwise keeping the stick.

Because of the big coolers needed for GPUs, you likely won't see VRAM packaged into separate modules. If you had e.g. standard 64-bit DDR3 modules, you'd need to cram eight DIMMs onto a card to populate an R9-290X.
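
Back-of-the-envelope, the bus math works out like this (a rough sketch; the 5 Gbps GDDR5 data rate is an assumed round figure for illustration, not a quoted spec):

#include <cstdio>

int main() {
    // Why a 512-bit card needs so many memory devices, and what that width buys.
    const int bus_width_bits    = 512;  // R9 290X total memory interface
    const int module_width_bits = 64;   // width of one standard DDR3 DIMM
    const double gddr5_gbps     = 5.0;  // assumed effective per-pin data rate

    printf("64-bit modules needed: %d\n", bus_width_bits / module_width_bits);
    printf("theoretical bandwidth: %.0f GB/s\n",
           bus_width_bits * gddr5_gbps / 8.0);  // 512 bits x 5 Gbit/s / 8 = 320 GB/s
    return 0;
}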

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

Straker posted:

sorry to anyone reading this courtesy of a 780 who isn't an idiot and only has it because they were too slow to get a 290 before prices went up :v:

Well some of us were reluctant to stick a small furnace in our PCs, and others of us wanted amusing Nvidia only features such as:

PhysX
Shadowplay
Not having to wait 3 weeks for drivers that make a new game work*

*yes I know, subjective, YMMV, etc but I'm not ready to give AMD a second chance just yet.

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
Welp, the GTC live conference just showed a video that mashed one Titan into another. $3000 for the Titan Z.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

Don Lapre posted:

Nvidia is supposed to start talking about maxwell and other poo poo in 10 minutes.

http://www.pcgameshardware.de/GTC-Event-257049/Specials/GTC-2014-Livestream-Nvidia-Maxwell-20-nm-1114823/

Speaking of which, paraphrasing...

"The first of the technologies we'd like to announce today, as part of our next generation GPU... is nVlink... chip-to-chip communications, with an embedded clock... enables unified memory between the GPU and the CPU, and in the second generation, cache coherency..."

"We're able to [increase PCI-e throughput] by up to 12x. We can use this... between CPU and GPU [or up to 5x faster] GPU and GPU."

"The second technology has to do with memory bandwidth. [It's already big at nearly 300GB/s but] we would love to have more. [We're essentially power-limited at the moment] so we're going to be implementing [heterogeneous chip architecture on the same wafer]."

"We're putting so many bits [from hundreds --> thousands of bits], how do we solve that problem?" Answer is stacked DRAM for 2.5x capacity and greatly increased bandwidth :monocle: (my note: AMD is obv. not the only ones working hard on much more effective memory access, which we knew broadly but this is nicely specific)

"What are we gonna call this chip? ... it incorporates these two enabling technologies... What should we call this chip? ... Pascal [motherfuckers! woooo]"

"At constant power... comparing generations... We can continue to scale with Moore's Law." ( daaamn)

"What are we going to do with all these FLOPS? ... Machine learning... a branch of artificial intelligence... [and] neural nets." (I have a massive nerd boner at this as if I'm lucky I'll be studying neuroscience pretty soon and if they're focusing on machine learning heavily at this level then holy poo poo is that gonna be interesting going forward, please god give me access to MIT's neural net research)

Side note: holy fuuuck yeeeeeesssss

"[Check out this cool trivia about V1 in the brain where our brains have specific neurons for various orientations of edges, cool huh? Neuroplasticity rules and this is why you see clouds everywhere!] Computer scientists... will create a software program... teaching this. [big bunch of stuff about the Google Brain and how it and everything like it is going to look slow as poo poo and uber power hungry, compared to Pascal-powered neural net applications - the same amount of processing power can be harnessed from Pascal with drastically less time, power, money, etc. :smuggo:" (I am so excited about this holy crap, nVidia what are you doing?)

[some chuckling about how it's not "YodaFLOPS"]

*Google Brain level stuff can be done with GPU-accelerated servers for 1/100th the energy cost. (Not totally sure if this is Kepler era technology, or what... Thinking yes? Pascal will be two generations away from that, so presumably, given the further optimized stuff above, way more impressive even yet? We'll see I guess!)

[some demo stuff with simple neural network learning quite quickly, :golfclap: for nVidia from a lot of people who aren't rabidly obsessed with neuroscience or machine learning, amazingly enthusiastic cheers from those who are]

*New product introduction: Titan Z. I can't tell immediately whether this will be a next-gen product or just an even more badass GK110-based Titan that starts sneakin' up big time on the top end Quadro (didn't see a mention of ECC GDDR5, but it's packing 12GB of VRAM). Costs $3K, though, so we sure as shootin' aren't at entry level anymore. Edit: Pretty sure this is 100% Kepler, or, more accurately, 200% - a mix of two Titans? Huh. Yeah, I can't afford a $3K graphics card but for professionals this will be an excellent alternative to a comparably priced Quadro provided that they don't need ECC GDDR5... Wonder which driver track this will default to, standard or workstation?

[amazing tech demo of a crazy good looking scene rendered in real-time, why not]
[another amazing tech demo of what physics will be capable of in real-time - grid-free, sparse-matrix, this sure as poo poo ain't your grandpa's PhysX]
[ANOTHER badass physics demo showing how sophisticated the calculations have become in realtime - look these up on Youtube later, this poo poo is :aaaaa:]
[yet another unbounded complex emitter with no grid requirement, poo poo this is cool]
[oh hey a voxel demo, of fire! 32,000,000 voxels, each simulating fuel, density, and the actual point of flammability, so that adjacent voxel models with this level of sophistication can affect - and effect - each other in real-time, wow...]


And now the stream dies for me and nothing I do will bring it back, so hopefully somebody else is enjoying it you lucky fuckers they haven't even really got to Maxwell yet god drat it

It's back!

I missed the UE4 in situ/game demo, will have to pick that up later but the tail end of it sure looked nice :3:

*IRAY VCA - Up to 8 GPUs in tandem per IRAY VCA, efficiently? God drat. Seriously? God drat.
*Focus on true global illumination allowing for unbelievably high quality graphics when absolute FPS isn't the goal, but rather photorealism. A billion rays all traced from zero to just a mega poo poo-ton of bounces. 1 PETAFLOP demo w/ 19 cards in tandem. This is scalability that is frankly heretofore unmatched.
*IRAY VCA reduces a $300,000 workstation level of performance to a $50K cost instead. Very cool.

Next tech...

GRID - decoupling display output from the computer (directly)...

I have to duck out, I'll catch up on all this later, but somebody who does have time, please please please please please keep an eye out for Maxwell details? :kiddo:

Agreed fucked around with this message at 18:23 on Mar 25, 2014

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Just tell me when i can buy an 880.

Wistful of Dollars
Aug 25, 2009

Sidesaddle Cavalry posted:

Welp, the GTC live conference just showed a video that mashed one Titan into another. $3000 for the Titan Z.

I swear to Kali that when this is released we'll see people buying it and then playing on 1080p screens. :downs:

Don Lapre posted:

Just tell me when i can buy an 880.

Until I hear otherwise from the horse's mouth, if it's released this year it'll probably be Q4.

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map
Watching that self-piloted Audi was pretty freaky.

Ignoarints
Nov 26, 2010
My mind is chugging through that post man. 1 petaflops sounds pretty cool

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map

Agreed posted:

I have to duck out, I'll catch up on all this later, but somebody who does have time, please please please please please keep an eye out for Maxwell details? :kiddo:

No specific Maxwell details except for announcement of codename Erista, the next step after Tegra K1 (Kepler) based on Maxwell. Whoo, mobile chips based on Kepler/Maxwell.

lol everyone in the conference got a free Shield

Star War Sex Parrot
Oct 2, 2003

Don Lapre posted:

Just tell me when i can buy an 880.
Pretty much. I sold my GTX 680 2 weeks ago because there's nothing on my gaming radar, and I'll consider replacing it at the end of the year if NVIDIA has a new high-end card around then.

movax
Aug 30, 2008

NVLink looks pretty interesting; I'm curious what they're doing to increase the bandwidth over vanilla PCIe. Good to see they are retaining the programming model, and cache coherency. Should give SLI / multi-chip solutions a disgusting amount of bandwidth.

"Differential with embedded clock" could mean a few things; regular PCIe still supplies a REFCLK to each slot / device so the receiver can perform CDR as needed. I'd guess a change in the line coding + embedding the clock gives them the edge they need for increased performance.

Wonder if they will open it up to FPGA vendors and such (probably not).

Yaos
Feb 22, 2003

She is a cat of significant gravy.
Is there an archive of today's Nvidia stream up yet? I see it was on Twitch but I can't tell if it was archived.

Edit: Stream URL. It's down until tomorrow. http://www.gputechconf.com/page/live-stream-source2.html

Edit 2: Here we go! It's about 2 hours long http://www.twitch.tv/nvidia/c/3951220

Yaos fucked around with this message at 20:27 on Mar 25, 2014

Delusibeta
Aug 7, 2013

Let's ride together.

Sidesaddle Cavalry posted:

Welp, the GTC live conference just showed a video that mashed one Titan into another. $3000 for the Titan Z.

For gaming purposes, it's :retrogames:. For computational work, I can see the attraction.

Beautiful Ninja
Mar 26, 2009

Five time FCW Champion...of my heart.

Jan posted:

Is there a GPU benchmarking tool with a test that measures bandwidth? It seems that the usual 3dMark/Unigine synthetic benchmarks tend to measure shader and texture processing power with preloaded data, whereas I want to specifically measure the impact of PCIe speed at my monitor's native 1600p.

Almost certain that your monitor resolution doesn't matter when it comes to PCIe bandwidth use, as long as you aren't using something like vsync and limiting your FPS. You should be pushing the same amount of data at 1080p running at some insane amount of FPS compared to 1600p at whatever FPS you get there. For what it's worth, AMD has specifically mentioned that they have tested R9 290Xs down to PCI-E 2.0 8x and say that even that speed does not bottleneck their GPUs. I also watched a LinusTechTips video where they were testing PCIe bandwidth and found cards like the Radeon HD 6990 were not bandwidth limited on even PCI-E 2.0 4x.

EoRaptor
Sep 13, 2003

by Fluffdaddy
Well this happened
http://arstechnica.com/gaming/2014/03/facebook-purchases-vr-headset-maker-oculus-for-2-billion/

I don't get where Facebook is going with a bunch of recent purchases. Hopefully they don't pull a Google with this company: stop all its projects, reassign or lay off the staff, and the technology is never heard from again.

Ignoarints
Nov 26, 2010
I knew they bought it but I didn't know it was for 2 fuckin billion :lol:.

I know nothing about it really, but a lot of non-gamer people are getting really drat excited over it. Maybe they can pump some money into it and turn it into something worthwhile.

Yaos
Feb 22, 2003

She is a cat of significant gravy.

Agreed posted:

I have to duck out, I'll catch up on all this later, but somebody who does have time, please please please please please keep an eye out for Maxwell details? :kiddo:
For those of you who don't want to watch two hours of graphical goodies, in addition to what Agreed posted, Nvidia has done some other cool things. Nvidia has partnered with VMware to provide GPU power through VMware products. Tegra now shares architecture with the regular full-sized GPUs. The next mobile chip they will release (Tegra K1, this year) will run at about 327 GFLOPS. They announced a new $192 developer kit for the new mobile chip, along with a new API called VisionWorks that provides picture recognition for the mobile GPU. They showed off a driverless car from Audi that uses a replaceable Tegra K1 module to drive the car. The module looked to be about the same size as a desktop GPU.

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

Beautiful Ninja posted:

Almost certain that your monitor resolution doesn't matter when it comes to PCIe bandwidth use, as long as you aren't using something like vsync and limiting your FPS. You should be pushing the same amount of data at 1080p running at some insane amount of FPS compared to 1600p at whatever FPS you get there. For what it's worth, AMD has specifically mentioned that they have tested R9 290Xs down to PCI-E 2.0 8x and say that even that speed does not bottleneck their GPUs. I also watched a LinusTechTips video where they were testing PCIe bandwidth and found cards like the Radeon HD 6990 were not bandwidth limited on even PCI-E 2.0 4x.
This may have been true in the past but with modern cards at least 8GB/sec is required for optimal performance, which could be provided by either a PCIe 2.0 x16 link or a PCIe 3.0 x8 link. Most games didn't see a significant performance drop-off by going down to 4GB/sec, but some saw as much as a 20% penalty.
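
For anyone who wants the per-lane arithmetic behind those figures, a quick sketch (effective rates after line-coding overhead, so treat the numbers as approximate):

#include <cstdio>

int main() {
    // Approximate usable throughput per lane after line-coding overhead:
    //   PCIe 1.1: 2.5 GT/s, 8b/10b    -> ~250 MB/s per lane
    //   PCIe 2.0: 5.0 GT/s, 8b/10b    -> ~500 MB/s per lane
    //   PCIe 3.0: 8.0 GT/s, 128b/130b -> ~985 MB/s per lane
    struct Link { const char* name; double mb_per_lane; int lanes; };
    const Link links[] = {
        {"PCIe 1.1 x4",  250.0,  4},   // ~1 GB/s
        {"PCIe 2.0 x16", 500.0, 16},   // ~8 GB/s
        {"PCIe 3.0 x8",  985.0,  8},   // ~7.9 GB/s
        {"PCIe 3.0 x16", 985.0, 16},   // ~15.8 GB/s
    };
    for (const Link& l : links)
        printf("%-12s ~%.1f GB/s\n", l.name, l.mb_per_lane * l.lanes / 1000.0);
    return 0;
}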

Jan
Feb 27, 2008

The disruptive powers of excessive national fecundity may have played a greater part in bursting the bonds of convention than either the power of ideas or the errors of autocracy.

Beautiful Ninja posted:

not bandwidth limited on even PCI-E 2.0 4x.

Yeah, well, I'm dealing with PCIe 1.1 4x when in Crossfire vs 2.0 16x when single GPU.

Chance
Apr 28, 2002

Ok I'm thinking about buying a new card for what feels like almost silly reasoning, so I wanted to run it by here to see if you guys think I'm missing anything or am fully crazy.

Right now I have a GTX 460 1GB in my much newer 4770K machine (putting me in the boat of trying to hold out for an 880). Gaming performance in what I play is ok enough (Planetside 2, some BF4, Hawken, at 1920x1200, basically everything GPU limited of course). And I use it for GPU rendering in Blender for hobbyist use. But for how this thing performs it really sucks up the power, and is the noisiest thing in my case by far.

I'm pondering picking up a 750 Ti with 2GB of memory. From what I've seen (the 750 and Ti aren't in Anand's bench lookup yet, and these cards are generations apart) I could expect roughly similar gaming performance (maybe better or worse for Planetside, which actually caps out my card's memory). And then, other factors depending, get 10-15% better rendering speed in Blender, all while drawing way less power. Also the current set of them comes with some free-to-play currency cards I can just eBay since I don't care.

I'd probably keep the 460 just to see how Blender handles it; I know it can do multiple GPUs without Crossfire/SLI. Then eventually sell the 460 as well, and farther down the line get said 880 or 860 to actually be my big gaming/perf upgrade, but keep the 750 Ti around for the same Blender extra oomph.

Am I crazy? I mean even if I am, $200 isn't going to kill me, even if it was just some extra render power. But am I missing anything? Will it suck a lot more than I'm expecting game-wise?

Zero VGS
Aug 16, 2002
ASK ME ABOUT HOW HUMAN LIVES THAT MADE VIDEO GAME CONTROLLERS ARE WORTH MORE
Lipstick Apathy

Chance posted:

Ok I'm thinking about buying a new card for what feels like almost silly reasoning, so I wanted to run it by here to see if you guys think I'm missing anything or am fully crazy.

Right now I have a GTX 460 1GB in my much newer 4770K machine (putting me in the boat of trying to hold out for an 880). Gaming performance in what I play is ok enough (Planetside 2, some BF4, Hawken, at 1920x1200, basically everything GPU limited of course). And I use it for GPU rendering in Blender for hobbyist use. But for how this thing performs it really sucks up the power, and is the noisiest thing in my case by far.

I'm pondering picking up a 750 Ti with 2GB of memory. From what I've seen (the 750 and Ti aren't in Anand's bench lookup yet, and these cards are generations apart) I could expect roughly similar gaming performance (maybe better or worse for Planetside, which actually caps out my card's memory). And then, other factors depending, get 10-15% better rendering speed in Blender, all while drawing way less power. Also the current set of them comes with some free-to-play currency cards I can just eBay since I don't care.

I'd probably keep the 460 just to see how Blender handles it; I know it can do multiple GPUs without Crossfire/SLI. Then eventually sell the 460 as well, and farther down the line get said 880 or 860 to actually be my big gaming/perf upgrade, but keep the 750 Ti around for the same Blender extra oomph.

Am I crazy? I mean even if I am, $200 isn't going to kill me, even if it was just some extra render power. But am I missing anything? Will it suck a lot more than I'm expecting game-wise?

The 750 Ti is great for the money, but don't buy the EVGA FTW model; the fans have a super loud minimum speed even after they patched the BIOS. The MSI Gaming model tested the quietest; otherwise get the cheapest one, because overclocks on the low-end models almost catch up to overclocks on the most expensive ones. Keep in mind there is no SLI for the card, so you'd want to sell it or use it for PhysX when it's upgrade time. The F2P currency cards only sell for $20-30 right now, but it helps.

Edit: with medium settings you should easily hit 60 fps at that res in most of those games, with occasional dips

Zero VGS fucked around with this message at 13:48 on Mar 26, 2014

Ignoarints
Nov 26, 2010

Chance posted:

Ok I'm thinking about buying a new card for what feels like almost silly reasoning, so I wanted to run it by here to see if you guys think I'm missing anything or am fully crazy.

Right now I have a GTX 460 1GB in my much newer 4770K machine (putting me in the boat of trying to hold out for an 880). Gaming performance in what I play is ok enough (Planetside 2, some BF4, Hawken, at 1920x1200, basically everything GPU limited of course). And I use it for GPU rendering in Blender for hobbyist use. But for how this thing performs it really sucks up the power, and is the noisiest thing in my case by far.

I'm pondering picking up a 750 Ti with 2GB of memory. From what I've seen (the 750 and Ti aren't in Anand's bench lookup yet, and these cards are generations apart) I could expect roughly similar gaming performance (maybe better or worse for Planetside, which actually caps out my card's memory). And then, other factors depending, get 10-15% better rendering speed in Blender, all while drawing way less power. Also the current set of them comes with some free-to-play currency cards I can just eBay since I don't care.

I'd probably keep the 460 just to see how Blender handles it; I know it can do multiple GPUs without Crossfire/SLI. Then eventually sell the 460 as well, and farther down the line get said 880 or 860 to actually be my big gaming/perf upgrade, but keep the 750 Ti around for the same Blender extra oomph.

Am I crazy? I mean even if I am, $200 isn't going to kill me, even if it was just some extra render power. But am I missing anything? Will it suck a lot more than I'm expecting game-wise?

Admittedly I have no actual experience with the 460, nor have I really seen it in benchmarks today, but just after a quick look at the numbers it would seem the 750 Ti would outperform it in virtually every practical way, in some ways by a large margin. The 460 has a larger memory bus, but half the memory and much slower RAM. I have no idea if that makes up for it, but I'd guess so. And this is stock for stock; the 750 Ti is supposed to be easy as cake to overclock, which is in line with pretty much all video cards today. There was some speculation early on about whether the 750 Ti had anything left on the table to overclock to, but it does.

Edit: If you do want to spend $200, I'm pretty sure the R9 270 is better (although it uses the more "traditional" amount of wattage).

I'm noticing some microstuttering return when playing BF4. Its intermittent behavior is bugging the hell out of me. I'm starting to lean towards DPC latency just because it seems abnormally high (although I had a really hard time figuring out if it was or not; I'm still not really sure), but just in case - is there really anything else I can do besides a really clean driver install (which I've done), frame limiting, etc.? I've tried every form of vsync on and off there is available to me. I'm starting to believe it's two different issues. Initially I had some serious FPS drops, but with frame limiting and basically any form of vsync it's now butter smooth. The intermittent stuttering, however, is so short-lived that it hardly ever even registers on the in-game FPS monitor, but that really doesn't poll very often. The only time I can really replicate it is if I notice it's starting to happen randomly; I can spin around really fast in game and it gets slightly choppy on the first spin. However, any other spinning after that in the same spot returns to smoothness.

Super scientific, I know, but could this also indicate some kind of memory issue?

Ignoarints fucked around with this message at 15:20 on Mar 26, 2014

Chance
Apr 28, 2002

AMD cards can't do GPU rendering in Blender yet, and I wanted to give stuff like Shadowplay a shot. I was looking at the MSI one, which I should have clarified is $190 CDN, so ballpark 200 after tax: http://products.ncix.com/detail/msi-geforce-gtx-750-ti-b5-94307.htm

Thanks for the input guys.

ShaneB
Oct 22, 2002


If I want to cheap out and get a used 7870 to crossfire with my 270X, will I have to down-clock my 270X from where it is right now? It doesn't seem like 7870s really like to overclock to where my 270X is.

Ignoarints
Nov 26, 2010

Chance posted:

AMDs can't GPU render in blender yet and I wanted to give a shot to stuff like shadowplay. I was looking at the MSI one, which I should have clarified is $190 CDN, so ballpark 200 after tax http://products.ncix.com/detail/msi-geforce-gtx-750-ti-b5-94307.htm

Thanks for the input guys.

Shadowplay rocks. It really is like a DVR you hardly ever notice, until you need (want) it. The FPS hit with a single 660 ti was 3-4.

Unfortunately it doesn't seem to play nicely with SLI yet.

Rastor
Jun 2, 2001

Animal posted:

Wasn't the next NVIDIA GPU going to contain an ARM processor? What's up with that, and what would be the use?

Factory Factory posted:

It might be a Big-Maxwell-only feature, since it's compute-oriented. The idea behind it is that there are some lightly-threaded tasks that are rear end-slow on a GPU, but PCIe being what it is, it's slower to send those back to the CPU than just process them locally on a wimpy ARM core.

Ignoarints posted:

And one day the GPU card will get so large that it becomes the computer itself, moving the CPU socket to the GPU PCB, and the motherboard will wither away to nothing but a faint memory, making future generations wonder why they still plug the GPU into something.

Really though, a processor on the GPU seems like a good idea, eliminating the need to clog the bus pipes with more trivial tasks. You can literally watch this happen with a latency monitor thousands of times a second.

(I really do think what I said above will happen :P)

Anand has a nice article up about nVidia's plans for changing the motherboards and interlinks, currently scheduled for 2016:

http://www.anandtech.com/show/7900/nvidia-updates-gpu-roadmap-unveils-pascal-architecture-for-2016

Menacer
Nov 25, 2000
Failed Sega Accessory Ahoy!

Jan posted:

Is there a GPU benchmarking tool with a test that measures bandwidth? It seems that the usual 3dMark/Unigine synthetic benchmarks tend to measure shader and texture processing power with preloaded data, whereas I want to specifically measure the impact of PCIe speed at my monitor's native 1600p.
If you're willing to build it yourself, the SHOC Benchmark Suite includes a collection of bandwidth benchmarks. level0/DeviceMemory, for instance, tests bandwidth to the GPU's memory from the GPU.

level0/BusSpeedDownload, level0/BusSpeedUpload, and level1/Triad all relate to the PCIe bandwidth

This info's likely too low-level for what you want to know about displaying graphics at your desired resolution, but you did ask for a bandwidth measurement test.
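
And if building the whole suite is overkill, here's a bare-bones CUDA sketch of roughly what the BusSpeed tests boil down to - timing host-to-device copies. This isn't SHOC's actual code, just an approximation; pinned vs. pageable host memory makes a noticeable difference, so both are shown:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Time one host->device copy and return the achieved bandwidth in GB/s.
static double h2dBandwidthGBs(const void* src, void* dst, size_t bytes) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    cudaMemcpy(dst, src, bytes, cudaMemcpyHostToDevice);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return (bytes / 1.0e9) / (ms / 1000.0);
}

int main() {
    const size_t bytes = 256u << 20;   // 256 MiB test buffer
    void* devBuf = nullptr;
    cudaMalloc(&devBuf, bytes);

    void* pageable = malloc(bytes);    // ordinary host allocation
    void* pinned = nullptr;
    cudaMallocHost(&pinned, bytes);    // page-locked host allocation

    printf("pageable host -> device: %.2f GB/s\n", h2dBandwidthGBs(pageable, devBuf, bytes));
    printf("pinned   host -> device: %.2f GB/s\n", h2dBandwidthGBs(pinned,   devBuf, bytes));

    cudaFreeHost(pinned);
    free(pageable);
    cudaFree(devBuf);
    return 0;
}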

CFox
Nov 9, 2005

ShaneB posted:

If I want to cheap out and get a used 7870 to crossfire with my 270X, will I have to down-clock my 270X from where it is right now? It doesn't seem like 7870s really like to overclock to where my 270X is.

Going from my experience with Crossfire, you'd have to downclock even if you got another 270X. My two cards can both handle overclocks with Crossfire turned off that will then crash all day long when I turn Crossfire back on. It's just another price you pay to do the Crossfire thing.

Jan
Feb 27, 2008

The disruptive powers of excessive national fecundity may have played a greater part in bursting the bonds of convention than either the power of ideas or the errors of autocracy.

Menacer posted:

If you're willing to build it yourself, the SHOC Benchmark Suite includes a collection of bandwidth benchmarks. level0/DeviceMemory, for instance, tests bandwidth to the GPU's memory from the GPU.

level0/BusSpeedDownload, level0/BusSpeedUpload, and level1/Triad all relate to the PCIe bandwidth

This info's likely too low-level for what you want to know about displaying graphics at your desired resolution, but you did ask for a bandwidth measurement test.

Sounds promising, but I'd guess that multi-GPU compute works differently from multi-GPU rendering. The latter pretty much needs both GPUs to work in lockstep, whereas the former will just take up all the bandwidth it can on each bus.

Basically, I'm trying to profile the source of the stutters I get when running in Crossfire. Since the culprit of the moment is Thief, I ran some benchmarks with its built-in tool. I'm observing frequent and severe stutters with Crossfire, resulting in a much lower minimum FPS, even if the average frame rate is higher. And this isn't microstuttering but actual hitching; I suspect it happens whenever the GPUs need to cache new textures or otherwise load data. Hence my suspecting bandwidth.

Even at 1.1 4x, I still don't see why it would outright hitch, but maybe this board's crummy Crossfire design somehow ends up running both buses at different speeds, resulting in an unexpected hitch when the secondary GPU doesn't load its resources in time. But I doubt there is any tool capable of looking into the black box that is Crossfire, hence my looking for a benchmark utility that is strictly bandwidth bound, in order to compare the results and see if the difference is in the orders of magnitude I'd expect.
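
For whatever it's worth, if you can dump per-frame times from whatever you're running (FRAPS-style frame-time logging, for instance), a dumb little stats pass like this makes hitching show up as a huge gap between the average and the worst frames, whereas steady microstutter looks more uniformly jittery. The file name and one-number-per-line format here are just placeholders - adjust to whatever your logger actually writes:

#include <algorithm>
#include <cstdio>
#include <fstream>
#include <vector>

int main() {
    // Placeholder input: one frame time in milliseconds per line.
    std::ifstream in("frametimes.txt");
    std::vector<double> ms;
    for (double t; in >> t; ) ms.push_back(t);
    if (ms.empty()) { std::printf("no samples\n"); return 1; }

    std::vector<double> sorted(ms);
    std::sort(sorted.begin(), sorted.end());

    double avg = 0.0;
    for (double t : ms) avg += t;
    avg /= ms.size();

    const double p99   = sorted[static_cast<size_t>(0.99 * (sorted.size() - 1))];
    const double worst = sorted.back();

    std::printf("avg %.2f ms (%.1f fps), 99th percentile %.2f ms, worst %.2f ms\n",
                avg, 1000.0 / avg, p99, worst);
    // A 99th percentile or worst case far above the average points at occasional
    // long hitches (e.g. resource loads) rather than constant microstutter.
    return 0;
}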

ShaneB
Oct 22, 2002


CFox posted:

Going from my experience with Crossfire, you'd have to downclock even if you got another 270X. My two cards can both handle overclocks with Crossfire turned off that will then crash all day long when I turn Crossfire back on. It's just another price you pay to do the Crossfire thing.

Ehhhh more reason to just save up for something that can handle 1440p by itself I guess.

veedubfreak
Apr 2, 2005

by Smythe

CFox posted:

Going from my experience with Crossfire, you'd have to downclock even if you got another 270X. My two cards can both handle overclocks with Crossfire turned off that will then crash all day long when I turn Crossfire back on. It's just another price you pay to do the Crossfire thing.

That is pretty much what I have found with my setup too. I was able to push a single 290 to 1190 with no issues; in Crossfire, even going to 1050 starts causing driver crashes.

Also, I installed that fancy new power supply and cleaned up the wiring a bit.

Ignoarints
Nov 26, 2010
Nice setup man, that looks otherworldly to me lol.

Is that common with SLI too? I kind of assumed it would be, but at least for me I was able to achieve about the same clock speeds overclocked in SLI. Although they are never the same, which sort of bugs me more than it should. One might pop up to 1241 while the other stays steady down at 1215, or one will stay steady at 1228 while the other goes from 1202 to 1228 back and forth. It's never the same card either; they'll switch clock speeds and tendencies at random.

It kind of sucks too because I will definitely crash eventually at 1241, but when I set the offset lower it kicks one to 1202 pretty consistently.
