atomicthumbs
Dec 26, 2010



Factory Factory posted:

Z also allows PCIe lane bifurcation for SLI and CrossFire graphics configurations (or really, for two or three high-performance PCIe devices regardless of use). H boards could be hacked the same way with a PCIe bridge, but that's more expensive than just using a Z in the first place.

That reminds me... will a single-GPU setup suffer from performance degradation if I put it in a PCIe 3.0 x8 slot instead of an x16 slot? I'm driving a 1080p monitor with it.

Also, are modern SSDs likely to saturate the connection in a PCIe 2.0 x2 slot, or are they not limited by the connection?


SourKraut
Nov 20, 2005

Liberty Cabbage


atomicthumbs posted:

That reminds me... will a single-GPU setup suffer from performance degradation if I put it in a PCIe 3.0 x8 slot instead of an x16 slot? I'm driving a 1080p monitor with it.

Also, are modern SSDs likely to saturate the connection in a PCIe 2.0 x2 slot, or are they not limited by the connection?
While I don't think performance is going to suffer, why not just use the x16 slot?

Someone can confirm this but I believe PCIe 3.0 x8 bandwidth is equal to the bandwidth of PCIe 2.0 x16 (though ultimately it depends on how the lane was electrically wired). And I don't think PCIe 2.0 x16 was very limiting to single card situations. There was some concern that R9 290X cards in Crossfire might be limited by PCIe 3.0 when the slots are operating as x8 but I'm not sure if that bore out (and obviously in this instance this doesn't matter).

SourKraut fucked around with this message at Jul 26, 2014 around 19:45

GokieKS
Dec 15, 2012

Mostly Harmless.


Sidesaddle Cavalry posted:

This, though what was their reasoning for making all those [insert badass name here]-Z boards for Z68 again? Was it just to have new stuff for the initial launch of Ivy Bridge? I remember Z77 came out really quickly afterwards.

Z68 instead of P67, back when "can OC but can't use iGPU" was somehow seen as a meaningful segment differentiator.

(Yes, I'm aware there are other differences, but that was the primary one, and pretty much everyone who didn't already buy a P67 board just went with a Z68 for OCing).

Alereon
Feb 6, 2004

For me but LEFTHANDED

atomicthumbs posted:

That reminds me... will a single-GPU setup suffer from performance degradation if I put it in a PCIe 3.0 x8 slot instead of an x16 slot? I'm driving a 1080p monitor with it.
Not really, but keep in mind that the only time you see an x8 slot is when an x16 link is being split across two slots. If you are only using one of the two slots, you get all of the bandwidth.

quote:

Also, are modern SSDs likely to saturate the connection in a PCIe 2.0 x2 slot, or are they not limited by the connection?
Yes. The Samsung XP941, a first-gen OEM M.2 PCIe 2.0 x4 SSD, delivers about 1400 MB/s reads. That's without any of the fancy performance-enhancing technologies that come in retail SSDs or the new NVMe interface.
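
As a rough sanity check on the original x2 question, the back-of-the-envelope math works out like this (a sketch assuming ~500 MB/s usable per PCIe 2.0 lane after 8b/10b encoding, and ignoring further protocol overhead):

code:
# ~500 MB/s usable per PCIe 2.0 lane: 5 GT/s * (8/10) encoding / 8 bits per byte.
PCIE2_LANE_MB_S = 5000 * (8 / 10) / 8

def link_headroom(ssd_mb_s, lanes):
    link = PCIE2_LANE_MB_S * lanes
    return link, ssd_mb_s >= link

for lanes in (2, 4):
    link, saturated = link_headroom(1400, lanes)  # 1400 MB/s = the XP941 read figure above
    verdict = "saturated" if saturated else "still has headroom"
    print(f"PCIe 2.0 x{lanes}: ~{link:.0f} MB/s link, {verdict} with a 1400 MB/s SSD")
# PCIe 2.0 x2: ~1000 MB/s link, saturated with a 1400 MB/s SSD
# PCIe 2.0 x4: ~2000 MB/s link, still has headroom with a 1400 MB/s SSD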

BobHoward
Feb 13, 2012

Special Operations Executive
Q Section




SourKraut posted:

While I don't think performance is going to suffer, why not just use the x16 slot?

Someone can confirm this but I believe PCIe 3.0 x8 bandwidth is equal to the bandwidth of PCIe 2.0 x16 (though ultimately it depends on how the lane was electrically wired).

I'm not sure what you mean by 'how it was wired', but yes, PCIe 3.0 x8 is about the same as PCIe 2.0 x16. PCIe 2.0 uses a 5 GT/s line rate with 8b/10b line coding, giving 5*(8/10) = 4 Gbps per lane. 3.0 is 8 GT/s with 128b/130b encoding, or 7.88 Gbps per lane.
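
For anyone who wants to plug other link widths into that arithmetic, here's the same math as a quick Python sketch (spec line rates only; it ignores packet/protocol overhead, so real-world throughput is somewhat lower):

code:
# Usable bandwidth per PCIe lane = line rate (GT/s) * encoding efficiency.
# Same numbers as above: 2.0 is 5 GT/s with 8b/10b, 3.0 is 8 GT/s with 128b/130b.
ENCODING = {
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

def link_gbps(gen, lanes):
    rate, efficiency = ENCODING[gen]
    return rate * efficiency * lanes

for gen, lanes in (("2.0", 16), ("3.0", 8), ("3.0", 16)):
    gbps = link_gbps(gen, lanes)
    print(f"PCIe {gen} x{lanes}: {gbps:.1f} Gbps (~{gbps / 8:.1f} GB/s per direction)")
# PCIe 2.0 x16: 64.0 Gbps (~8.0 GB/s per direction)
# PCIe 3.0 x8: 63.0 Gbps (~7.9 GB/s per direction)
# PCIe 3.0 x16: 126.0 Gbps (~15.8 GB/s per direction)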

Sidesaddle Cavalry
Mar 15, 2013

Yep, that shirt says ほ alright.


GokieKS posted:

Z68 instead of P67, back when "can OC but can't use iGPU" was somehow seen as a meaningful segment differentiator.

(Yes, I'm aware there are other differences, but that was the primary one, and pretty much everyone who didn't already buy a P67 board just went with a Z68 for OCing).

That solves my mystery, and makes me wonder why Intel never bothered to develop at least a lovely not-on-die GPU to integrate onto X-series chipsets for utility purposes.

SourKraut
Nov 20, 2005

Liberty Cabbage


BobHoward posted:

I'm not sure what you mean by 'how it was wired'
I didn't state that very well. I meant that if a slot is only wired for x8, then even though a PCIe 3.0 x8 slot has bandwidth roughly equivalent to a PCIe 2.0 x16 slot, a PCIe 2.0 card inserted into it will operate at PCIe 2.0 x8, since there are only 8 electrical lanes.
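
In other words, the link trains down to whatever generation and width both ends support. A toy Python sketch of that negotiation (the negotiated_link function and the per-lane figures are just illustrative, not how any driver actually reports it):

code:
# Usable Gbps per lane after encoding overhead, per PCIe generation.
GEN_GBPS_PER_LANE = {1: 2.0, 2: 4.0, 3: 7.88}

def negotiated_link(card_gen, card_width, slot_gen, slot_width):
    """The link trains at the highest gen and widest width both ends share."""
    gen = min(card_gen, slot_gen)
    width = min(card_width, slot_width)
    return gen, width, GEN_GBPS_PER_LANE[gen] * width

# A PCIe 2.0 x16 card dropped into a PCIe 3.0 slot wired for x8 electrical lanes:
gen, width, gbps = negotiated_link(2, 16, 3, 8)
print(f"Link runs at PCIe {gen}.0 x{width}, ~{gbps:.0f} Gbps")  # PCIe 2.0 x8, ~32 Gbps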

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down


atomicthumbs posted:

That reminds me... will a single-GPU setup suffer from performance degradation if I put it in a PCIe 3.0 x8 slot instead of an x16 slot? I'm driving a 1080p monitor with it.

Also, are modern SSDs likely to saturate the connection in a PCIe 2.0 x2 slot, or are they not limited by the connection?

Well, I've got a GTX 780Ti on a P67 motherboard, the fourth card to pull main rendering duty (GTX 580-->680-->780-->780Ti), and it has scaled pretty much 1:1 with expected results - that is, comparing benchmarks etc. with other people using overpriced graphics cards - and my graphics performance in general shows no difference if I pull my PhysX card (ahahaha): PCI-e 2.0 x8 with it installed, x16 without the physics coprocessor.

I'm fairly confident you'll be fine at PCI-e 3.0 x8 if I'm fine at PCI-e 2.0 x8.

atomicthumbs
Dec 26, 2010



SourKraut posted:

While I don't think performance is going to suffer, why not just use the x16 slot?

Someone can confirm this but I believe PCIe 3.0 x8 bandwidth is equal to the bandwidth of PCIe 2.0 x16 (though ultimately it depends on how the lane was electrically wired). And I don't think PCIe 2.0 x16 was very limiting to single card situations. There was some concern that R9 290X cards in Crossfire might be limited by PCIe 3.0 when the slots are operating as x8 but I'm not sure if that bore out (and obviously in this instance this doesn't matter).

I think if I put an SSD in the second long slot on a VII Gene it'd knock the first one down from x16 to x8. I'm trying to figure out how much that'd hurt performance-wise.

edit: and the reason I'd be doing that is complicated and involves m.2, miniPCIe, and a video capture card

atomicthumbs fucked around with this message at Jul 27, 2014 around 09:39

Factory Factory
Mar 19, 2010

I can do sex. It's just alien sex.


Basically not at all. For some reason I can't Google up literally any of the review articles on this, but even the beefiest single-GPU video cards lose barely 1% on average when dropping from PCIe 3.0 x16 all the way down to x4 (or PCIe 2.0 x8).

At 1080p you likely won't even see a full 1% drop, as it shows up more at higher resolutions like 2560x1440.

BobHoward
Feb 13, 2012

Special Operations Executive
Q Section




For most game-like loads, the performance hit will be nearly nothing so long as the GPU has enough local memory to hold all the textures and static vertex data. GPU command traffic doesn't need much bandwidth.

If you're doing something GPGPU-ish, that kind of thing usually depends more on PCIe bandwidth.

japtor
Oct 28, 2005
WELL ARNT I JUST MR. LA DE FUCKEN DA. oh yea and i suck cocks too


Factory Factory posted:

Basically not at all. For some reason I can't Google up literally any of the review articles on this, but even the beefiest single-GPU video cards lose barely 1% on average when dropping from PCIe 3.0 x16 all the way down to x4 (or PCIe 2.0 x8).

At 1080p you likely won't even see a full 1% drop, as it shows up more at higher resolutions like 2560x1440.
They show up if you look up "pcie scaling" or something to that effect. Here's one from two years ago:

http://www.techpowerup.com/reviews/...xpress_Scaling/

1gnoirents
Jun 28, 2014


Agreed posted:

Well, I've got a GTX 780Ti on a P67 motherboard, the fourth card to pull main rendering duty (GTX 580-->680-->780-->780Ti), and it has scaled pretty much 1:1 with expected results - that is, comparing benchmarks etc. with other people using overpriced graphics cards - and my graphics performance in general shows no difference if I pull my PhysX card (ahahaha): PCI-e 2.0 x8 with it installed, x16 without the physics coprocessor.

I'm fairly confident you'll be fine at PCI-e 3.0 x8 if I'm fine at PCI-e 2.0 x8.

Is it worth having a physx card?

edit: as in, is it even worth the wattage? I doubt I'm SLI'ing anytime soon so I can throw down on some $50 craigslist gpu, but I know very little about the practical benefits of a physx card. The annoying part is the power consumption of old cards

edit2: well, I found a gtx 750 for $50, so I guess I'll be doing it anyways

1gnoirents fucked around with this message at Jul 28, 2014 around 14:18

Cardboard Box A
Jul 18, 2004
VTech Asian Pride

1gnoirents posted:

Is it worth having a physx card?

edit: as in, is it even worth the wattage?
No.

1gnoirents
Jun 28, 2014



Is it even worth... 50 watts? If I'm better off buying a steak and beer instead I'm content with that but if there is any reason at all

KillHour
Oct 28, 2007

Wake up and
smell the murder.



1gnoirents posted:

Is it even worth... 50 watts? If I'm better off buying a steak and beer instead I'm content with that but if there is any reason at all

Buy a steak and beer. Any card that draws only 50 watts isn't going to improve your performance at all, and the beefy cards just aren't worth it for how few games really use PhysX. You're better off putting that money into getting a more powerful primary card.

1gnoirents
Jun 28, 2014


KillHour posted:

Buy a steak and beer. Any card that draws only 50 watts isn't going to improve your performance at all, and the beefy cards just aren't worth it for how few games really use PhysX. You're better off putting that money into getting a more powerful primary card.

I have a 780Ti (for 1440p) and the card I saw is a $50 GTX 750. But it's looking like steak and beer more and more.

(just realized this wasn't the GPU thread, sorry)

Hace
Feb 13, 2012

don't call me a fruit

KillHour posted:

Buy a steak and beer. Any card that draws only 50 watts isn't going to improve your performance at all, and the beefy cards just aren't worth it for how few games really use PhysX. You're better off putting that money into getting a more powerful primary card.

For what it's worth, the 750 is able to handle PhysX pretty well on its own.

The Lord Bude
May 23, 2007

I'M DISAPPOINTED THAT CORTANA WILL BE A CIRCLE AND NOT THE ACTUAL SEXY WOMAN FROM THE GAME.


1gnoirents posted:

Is it even worth... 50 watts? If I'm better off buying a steak and beer instead I'm content with that but if there is any reason at all

I have never once, with my GTX 680 or now with my 780Ti, encountered a situation where I wished I had a separate PhysX card. Barely anything uses it, and when it does, you just turn it on and your FPS is perfectly fine.

1gnoirents
Jun 28, 2014


I've seen the light, thanks lol.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast


The Lord Bude posted:

I have never once, with my GTX 680 or now with my 780Ti, encountered a situation where I wished I had a separate PhysX card. Barely anything uses it, and when it does, you just turn it on and your FPS is perfectly fine.

Honestly, NVIDIA is screwing themselves over by refusing to work with AMD cards, because I bet they'd sell a few 750s to people who already have a high-end Radeon. I don't actually get the logic.

It's a nice-to-have, but if you already got a sweet deal on a 290 or something, you're not going to spend a bunch getting a 780 instead just for a locked-in feature; you might, however, think about the aforementioned used $50 750.

Krailor
Nov 2, 2001
I'm only pretending to care

1gnoirents posted:

Is it even worth... 50 watts? If I'm better off buying a steak and beer instead I'm content with that but if there is any reason at all

In games that are optimized for PhysX, it looks like you could get around a 20% performance increase, so it might make sense if:

1. You play a lot (I mean a LOT) of games that are optimized for PhysX

and

2. You love playing with advanced PhysX

and

3. You aren't happy with your current framerates

The only real article I could find on it: http://alienbabeltech.com/main/usin...ted-physx-card/

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down


It impacts minimum framerates more than maximum framerates, because there's some difficulty in the load balancing/role switching. That role-switching and its impact on FPS were worse with Fermi, going from memory, and SMXes also seem to just be plain nutty better at PhysX calculations (which are just an nVidia-optimized GPGPU workload, kind of a "CUDA lite"). But I still see performance improvements in the few games that run it, despite using a 780Ti as my main card. I'm using a 650Ti non-boost for the PhysX card, for reference, and it generally doesn't get higher than 20-30% utilization tops even under heavy workloads, but since it's dedicated to doing 'em, there is way less juggling going on and it's overall a much better experience.

I previously had a headless GTX 580 for CUDA stuff, and that's what led to me caring about physics co-processors at all. I personally think it's an antiquated idea, and by its very nature it's graphics, not gameplay. I look forward to non-proprietary GPGPU physics engines becoming more prevalent as the console generation matures and developers start taking advantage of the rather substantial compute power (... for a console!), and that'll ... trickle up? I hope?

I personally think the Bullet physics engine is a good open-source GPGPU engine, but regardless: no, it is not worth it to have a PhysX card unless you play a shitload of PhysX games, which you deductively cannot, because you don't need very many hands to count the good games that use PhysX.

Also, that god damned pepper shaker in Alice Returns is still going to tank your FPS (probably).

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.


Probably better to sell your existing card and buy a higher end card with the money you would spend on a physx card.

1gnoirents
Jun 28, 2014


^^Hmm, I'd typically agree. However, in my situation I can't really do that. After looking at the games that use it and actually finding more than one I play(ed), I guess I don't have a lot to lose, since I can just sell it. Perhaps the UT4 engine will use PhysX like UT3 did. Thanks everyone, and sorry again for the derail.


Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down


1gnoirents posted:

^^Hmm, I'd typically agree. However, in my situation I can't really do that. After looking at the games that use it and actually finding more than one I play(ed), I guess I don't have a lot to lose, since I can just sell it. Perhaps the UT4 engine will use PhysX like UT3 did. Thanks everyone, and sorry again for the derail.

Pros:
Borderlands games
Batman games (is this still a pro? ... that was a pretty wicked lookin' batmobile in the trailer...)
uhhh Mafia II, Alice Returns, and the two Metro games

Cons:
You bought a kind of expensive luxury item for a really silly reason
Dummy
This hits close to home
