atomicthumbs
Dec 26, 2010


We're in the business of extending man's senses.

Factory Factory posted:

Z also allows PCIe lane bifurcation for SLI and CrossFire graphics configurations (or really, for two or three high-performance PCIe devices regardless of use). H boards could be hacked the same way with a PCIe bridge, but that's more expensive than just using a Z in the first place.

That reminds me... will a single-GPU setup suffer from performance degradation if I put it in a PCIe 3.0 x8 slot instead of an x16 slot? I'm driving a 1080p monitor with it.

Also, are modern SSDs likely to saturate the connection in a PCIe 2.0 x2 slot, or are they not limited by the connection?


Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



atomicthumbs posted:

That reminds me... will a single-GPU setup suffer from performance degradation if I put it in a PCIe 3.0 x8 slot instead of an x16 slot? I'm driving a 1080p monitor with it.

Also, are modern SSDs likely to saturate the connection in a PCIe 2.0 x2 slot, or are they not limited by the connection?
While I don't think performance is going to suffer, why not just use the x16 slot?

Someone can confirm this, but I believe PCIe 3.0 x8 bandwidth is equal to PCIe 2.0 x16 bandwidth (though ultimately it depends on how the lanes were electrically wired). And I don't think PCIe 2.0 x16 was very limiting for single-card situations. There was some concern that R9 290X cards in CrossFire might be limited by PCIe 3.0 when the slots are operating as x8, but I'm not sure if that bore out (and obviously it doesn't matter in this instance).

Canned Sunshine fucked around with this message at 20:45 on Jul 26, 2014

GokieKS
Dec 15, 2012

Mostly Harmless.

Sidesaddle Cavalry posted:

This, though what was their reasoning for making all those [insert badass name here]-Z boards for Z68 again? Was it just to have new stuff for the initial launch of Ivy Bridge? I remember Z77 came out really quickly afterwards.

Z68 instead of P67, back when "can OC but can't use iGPU" was somehow seen as a meaningful segment differentiator.

(Yes, I'm aware there are other differences, but that was the primary one, and pretty much everyone who didn't already buy a P67 board just went with a Z68 for OCing).

Alereon
Feb 6, 2004

Dehumanize yourself and face to Trumpshed
College Slice

atomicthumbs posted:

That reminds me... will a single-GPU setup suffer from performance degradation if I put it in a PCIe 3.0 x8 slot instead of an x16 slot? I'm driving a 1080p monitor with it.
Not really, but keep in mind that the only time you see an x8 slot is when you are splitting an x16 slot two ways. If you are only using one of the two slots, you will get all of the bandwidth.

quote:

Also, are modern SSDs likely to saturate the connection in a PCIe 2.0 x2 slot, or are they not limited by the connection?
Yes. The Samsung XP941, a first-gen OEM M.2 PCIe 2.0 x4 SSD, delivers about 1400 MB/s reads. That's without any of the fancy performance-enhancing technologies that come in retail SSDs or the new NVMe interface.
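
To put rough numbers on the x2 question, here's a quick back-of-the-envelope sketch (my own assumed figures; it ignores PCIe protocol overhead, so the real ceiling is a bit lower):

code:
# Rough check: can a fast SSD saturate a PCIe 2.0 x2 link?
# PCIe 2.0: 5 GT/s per lane with 8b/10b encoding -> ~4 Gbps (~500 MB/s) per lane.
lanes = 2
link_mb_s = 5e9 * (8 / 10) / 8 / 1e6 * lanes   # ~1000 MB/s, before protocol overhead
xp941_read_mb_s = 1400                         # sequential read figure quoted above
print(f"PCIe 2.0 x{lanes} ceiling: ~{link_mb_s:.0f} MB/s vs. XP941 reads: {xp941_read_mb_s} MB/s")
print("Saturated" if xp941_read_mb_s > link_mb_s else "Headroom left")
So yes, a PCIe 2.0 x2 link would be the bottleneck for a drive like that.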

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

SourKraut posted:

While I don't think performance is going to suffer, why not just use the x16 slot?

Someone can confirm this, but I believe PCIe 3.0 x8 bandwidth is equal to PCIe 2.0 x16 bandwidth (though ultimately it depends on how the lanes were electrically wired).

I'm not sure what you mean by 'how it was wired', but yes, PCIe 3.0 x8 is about the same as PCIe 2.0 x16. PCIe 2.0 uses a 5 GT/s line rate with 8b/10b line coding, giving 5 * (8/10) = 4 Gbps per lane. 3.0 is 8 GT/s with 128b/130b encoding, or about 7.88 Gbps per lane.
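
If anyone wants to sanity-check that arithmetic, here's a quick sketch of the line-rate-times-encoding math (raw link numbers only, assumed by me; packet and protocol overhead would shave a bit more off in practice):

code:
# Rough PCIe bandwidth estimate: line rate (GT/s) * encoding efficiency * lane count.
def pcie_gbps(gen: str, lanes: int) -> float:
    line_rate_gt, coding = {
        "2.0": (5.0, 8 / 10),     # 8b/10b encoding
        "3.0": (8.0, 128 / 130),  # 128b/130b encoding
    }[gen]
    return line_rate_gt * coding * lanes

for gen, lanes in [("2.0", 16), ("3.0", 8), ("3.0", 16)]:
    gbps = pcie_gbps(gen, lanes)
    print(f"PCIe {gen} x{lanes}: {gbps:.1f} Gbps (~{gbps / 8:.2f} GB/s)")
PCIe 2.0 x16 and PCIe 3.0 x8 both land at roughly 63-64 Gbps (~8 GB/s), which is why they're treated as equivalent.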

Sidesaddle Cavalry
Mar 15, 2013

Oh Boy Desert Map

GokieKS posted:

Z68 instead of P67, back when "can OC but can't use iGPU" was somehow seen as a meaningful segment differentiator.

(Yes, I'm aware there are other differences, but that was the primary one, and pretty much everyone who didn't already buy a P67 board just went with a Z68 for OCing).

That solves my mystery, and makes me wonder why Intel never bothered to develop at least a lovely not-on-die GPU to integrate onto X-series chipsets for utility purposes :downs:

Canned Sunshine
Nov 20, 2005

CAUTION: POST QUALITY UNDER CONSTRUCTION



BobHoward posted:

I'm not sure what you mean by 'how it was wired'
I didn't state that very well. What I meant is that if a slot is only wired for x8 and someone inserts a PCIe 2.0 card into that PCIe 3.0 x8 slot, the card will operate at PCIe 2.0 x8, since there are only 8 electrical lanes, even though the slot's bandwidth is approximately equivalent to a PCIe 2.0 x16 slot's.
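
In other words, the link just trains down to whichever generation and lane count both ends support. A tiny sketch of that negotiation logic (with assumed round per-lane numbers, overhead ignored):

code:
# A PCIe link negotiates min(card gen, slot gen) and min(card lanes, slot electrical lanes).
PER_LANE_GBPS = {"2.0": 4.0, "3.0": 7.88}  # approx, after 8b/10b and 128b/130b encoding

def negotiated_link(card_gen, card_lanes, slot_gen, slot_lanes):
    gen = min(card_gen, slot_gen)        # string compare is fine for "2.0" vs "3.0"
    lanes = min(card_lanes, slot_lanes)
    return gen, lanes, PER_LANE_GBPS[gen] * lanes

# PCIe 2.0 x16 card in a PCIe 3.0 slot wired x8: trains as 2.0 x8 (~32 Gbps).
print(negotiated_link("2.0", 16, "3.0", 8))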

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

atomicthumbs posted:

That reminds me... will a single-GPU setup suffer from performance degradation if I put it in a PCIe 3.0 x8 slot instead of an x16 slot? I'm driving a 1080p monitor with it.

Also, are modern SSDs likely to saturate the connection in a PCIe 2.0 x2 slot, or are they not limited by the connection?

Well, I've got a GTX 780Ti on a P67 motherboard - the fourth card to pull main rendering duty there (GTX 580 --> 680 --> 780 --> 780Ti) - and it has scaled pretty much 1:1 with expected results, that is, comparing benchmarks etc. with other people using overpriced graphics cards. My graphics performance in general shows no difference if I pull my PhysX card (ahahaha), and that's on PCI-e 2.0 x8 (x16 without the physics coprocessor :v:).

I'm fairly confident you'll be fine at PCI-e 3.0 x8 if I'm fine at PCI-e 2.0 x8.

atomicthumbs
Dec 26, 2010


We're in the business of extending man's senses.

SourKraut posted:

While I don't think performance is going to suffer, why not just use the x16 slot?

Someone can confirm this, but I believe PCIe 3.0 x8 bandwidth is equal to PCIe 2.0 x16 bandwidth (though ultimately it depends on how the lanes were electrically wired). And I don't think PCIe 2.0 x16 was very limiting for single-card situations. There was some concern that R9 290X cards in CrossFire might be limited by PCIe 3.0 when the slots are operating as x8, but I'm not sure if that bore out (and obviously it doesn't matter in this instance).

I think if I put an SSD in the second long slot on a VII Gene it'd knock the first one down from x16 to x8. I'm trying to figure out how much that'd hurt performance-wise.

edit: and the reason I'd be doing that is complicated and involves M.2, mini-PCIe, and a video capture card

atomicthumbs fucked around with this message at 10:39 on Jul 27, 2014

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Basically not at all. For some reason I can't Google up literally any of the review articles on this, but even the beefiest single-GPU video cards lose barely 1% on average dropping from PCIe 3.0 x16 all the way down to x4 (or PCIe 2.0 x8).

At 1080p you likely won't even see a full 1% drop, as the difference shows up more at higher resolutions like 2560x1440.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
For most game-like loads, the performance hit will be nearly nothing so long as the GPU has enough local memory to hold all the textures and static vertex data. GPU command traffic doesn't need much bandwidth.

If you're doing something GPGPU-ish, that kind of workload usually depends much more on PCIe bandwidth.
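
For a rough sense of scale (toy numbers I'm assuming here, not a benchmark): shuffling a multi-gigabyte GPGPU working set over the bus takes long enough to care about link width, while a frame's worth of game command traffic is tiny by comparison.

code:
# Toy estimate: time to move a GPGPU working set across PCIe vs. a 60 fps frame budget.
buffer_gb = 2.0                                     # hypothetical working set size
link_gb_per_s = {"3.0 x8": 7.9, "3.0 x16": 15.8}    # rough raw link rates, overhead ignored
for link, bw in link_gb_per_s.items():
    print(f"PCIe {link}: ~{buffer_gb / bw * 1000:.0f} ms to transfer {buffer_gb} GB")
print("Frame budget at 60 fps: ~16.7 ms; game command streams are a few MB per frame.")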

japtor
Oct 28, 2005

Factory Factory posted:

Basically not at all. For some reason I can't Google up literally any of the review articles on this, but even the beefiest single-GPU video cards lose barely 1% on average dropping from PCIe 3.0 x16 all the way down to x4 (or PCIe 2.0 x8).

At 1080p you likely won't even see a full 1% drop, as the difference shows up more at higher resolutions like 2560x1440.
They show up if you look up "pcie scaling" or something to that effect. Here's one from two years ago:

http://www.techpowerup.com/reviews/Intel/Ivy_Bridge_PCI-Express_Scaling/

1gnoirents
Jun 28, 2014

hello :)

Agreed posted:

Well, I've got a GTX 780Ti on a P67 motherboard - the fourth card to pull main rendering duty there (GTX 580 --> 680 --> 780 --> 780Ti) - and it has scaled pretty much 1:1 with expected results, that is, comparing benchmarks etc. with other people using overpriced graphics cards. My graphics performance in general shows no difference if I pull my PhysX card (ahahaha), and that's on PCI-e 2.0 x8 (x16 without the physics coprocessor :v:).

I'm fairly confident you'll be fine at PCI-e 3.0 x8 if I'm fine at PCI-e 2.0 x8.

Is it worth having a PhysX card?

edit: as in, is it even worth the wattage? I doubt I'm SLI'ing anytime soon, so I can throw down on some $50 Craigslist GPU, but I know very little about the practical benefits of a PhysX card. The annoying part is the power consumption of old cards.

edit2: well, I found a GTX 750 for $50, so I guess I'll be doing it anyway :v:

1gnoirents fucked around with this message at 15:18 on Jul 28, 2014

Assepoester
Jul 18, 2004
Probation
Can't post for 11 years!
Melman v2

1gnoirents posted:

Is it worth having a PhysX card?

edit: as in, is it even worth the wattage?
No.

1gnoirents
Jun 28, 2014

hello :)

Is it even worth... 50 watts? If I'm better off buying a steak and beer instead, I'm content with that, but if there is any reason at all...

KillHour
Oct 28, 2007


1gnoirents posted:

Is it even worth... 50 watts? If I'm better off buying a steak and beer instead, I'm content with that, but if there is any reason at all...

Buy a steak and beer. Any card that draws only 50 watts isn't going to improve your performance at all, and the beefy cards just aren't worth it for how few games really use PhysX. You're better off putting that money into getting a more powerful primary card.

1gnoirents
Jun 28, 2014

hello :)

KillHour posted:

Buy a steak and beer. Any card that draws only 50 watts isn't going to improve your performance at all, and the beefy cards just aren't worth it for how few games really use PhysX. You're better off putting that money into getting a more powerful primary card.

I have a 780Ti (for 1440p) and the card I saw is a $50 GTX 750. But it's looking like steak and beer more and more.

(just realized this wasn't the GPU thread, sorry)

Hace
Feb 13, 2012

<<Mobius 1, Engage.>>

KillHour posted:

Buy a steak and beer. Any card that draws only 50 watts isn't going to improve your performance at all, and the beefy cards just aren't worth it for how few games really use PhysX. You're better off putting that money into getting a more powerful primary card.

For what it's worth, the 750 is able to handle PhysX pretty well on its own.

The Lord Bude
May 23, 2007

ASK ME ABOUT MY SHITTY, BOUGIE INTERIOR DECORATING ADVICE

1gnoirents posted:

Is it even worth... 50 watts? If I'm better off buying a steak and beer instead, I'm content with that, but if there is any reason at all...

I have never once, with my GTX 680 or now with my 780Ti, encountered a situation where I wished I had a separate PhysX card. Barely anything uses it, and when it does, you just turn it on and your FPS is perfectly fine.

1gnoirents
Jun 28, 2014

hello :)
I've seen the light, thanks lol.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

The Lord Bude posted:

I have never once, with my GTX 680 or now with my 780Ti, encountered a situation where I wished I had a separate PhysX card. Barely anything uses it, and when it does, you just turn it on and your FPS is perfectly fine.

Honestly, NVIDIA is screwing themselves over by refusing to work with AMD cards, because I bet they'd sell a few 750s to people who already have a high-end Radeon. I don't actually get the logic.

It's a nice-to-have: if you already got a sweet deal on a 290 or something, you're not going to spend a bunch more on a 780 instead just for a locked-in feature, but you might think about the aforementioned used $50 750.

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

1gnoirents posted:

Is it even worth... 50 watts? If I'm better off buying a steak and beer instead, I'm content with that, but if there is any reason at all...

In games that are optimized for PhysX it looks like you could get around a 20% performance increase, so it might make sense if:

1. You play a lot (I mean a LOT) of games that are optimized for PhysX

and

2. You love playing with advanced PhysX

and

3. You aren't happy with your current framerates

Only real article I could find on it: http://alienbabeltech.com/main/using-maxwells-gtx-750-ti-dedicated-physx-card/

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

It impacts minimum framerates more than maximum framerates, because there's some difficulty in the load balancing/role switching. That role-switching and its impact on FPS was worse with Fermi, going from memory, and SMXes also just seem to be plain better at PhysX calculations (which are just nVidia-optimized GPGPU calculations, kind of a "CUDA lite" workload). But I still see performance improvements in the few games that run it, despite using a 780Ti as my main card. I'm using a 650Ti non-boost for reference, and it generally doesn't get higher than 20-30% utilization even under heavy workloads, but since it's dedicated to doing 'em, there is way less juggling going on and it is overall a much better experience.

I previously had a headless GTX 580 for CUDA stuff, and that's what led to me caring about physics co-processors at all. I personally think it's an antiquated idea, and by its very nature it's graphics, not gameplay. I look forward to non-proprietary GPGPU physics engines becoming more prevalent as the console generation matures and developers start taking advantage of the rather substantial compute power (... for a console!), and that'll ... trickle up? I hope?

I personally think the Bullet physics engine is a good open-source GPGPU option, but regardless: no, it is not worth it to have a PhysX card unless you play a shitload of PhysX games, which you deductively cannot, because you don't need very many hands to count the good games that use PhysX.

Also, that goddamned pepper shaker in Alice Returns is still going to tank your FPS (probably).

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
Probably better to sell your existing card and buy a higher-end card with the money you would spend on a PhysX card.

1gnoirents
Jun 28, 2014

hello :)
^^Hmm, I'd typically agree. However, in my situation I can't really do that. After looking at the games that use it and actually finding more than one I play(ed), I guess I don't have a lot to lose, since I can just sell it. Perhaps the UT4 engine will use PhysX like UT3 did. Thanks everyone, and sorry again for the derail.

Agreed
Dec 30, 2003

The price of meat has just gone up, and your old lady has just gone down

1gnoirents posted:

^^Hmm, I'd typically agree. However, in my situation I can't really do that. After looking at the games that use it and actually finding more than one I play(ed), I guess I don't have a lot to lose, since I can just sell it. Perhaps the UT4 engine will use PhysX like UT3 did. Thanks everyone, and sorry again for the derail.

Pros:
Borderlands games
Batman games (is this still a pro? ... that was a pretty wicked lookin' batmobile in the trailer...)
uhhh Mafia II, Alice Returns, and the two Metro games

Cons:
You bought a kind of expensive luxury item for a really silly reason
Dummy
This hits close to home

Welmu
Oct 9, 2007
Metri. Piiri. Sekunti.
Delidding i7-5960X

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.

Welmu posted:

Delidding i7-5960X



That'll buff right out.

r0ck0
Sep 12, 2004
r0ck0s p0zt m0d3rn lyf

Don Lapre posted:

That'll buff right out.

It should line back up when you reseat the CPU in the socket.

1gnoirents
Jun 28, 2014

hello :)

Panty Saluter
Jan 17, 2004

Making learning fun!

Welmu posted:

Delidding i7-5960X



I GIS'd this image, and if nothing else it was just a sample and not something someone bought :v: Also, it seems Intel learned from the issues they had with Haswell and the lids before, so...

http://www.kitguru.net/components/cpu/anton-shilov/intel-core-i7-5960x-haswell-e-de-lidded-interesting-discoveries-found/

beejay
Apr 7, 2002

Couldn't Intel have given them a heads-up about the soldering? Although it may just be done for website hits, it is a striking image after all these years of non-soldered heatspreaders.

1gnoirents
Jun 28, 2014

hello :)
Has anybody tried to solder a CPU themselves?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

1gnoirents posted:

Has anybody tried to solder a CPU themselves?

No. Lots of people have floated the idea, but once you pass out of the Dunning-Kruger zone doing research, you find that low-temperature solder that joins aluminum to glass and ceramics is very difficult to find and very difficult to work with.

Why glass and ceramics, you ask? That's what the outer layers of a CPU are made of: silicon glass and ceramic protective layers over the gooey silicon-and-metal-bits core.

Lord Windy
Mar 26, 2010
If one of those CPUs has disabled cores, is it possible to enable them?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
Generally, no. Intel especially will lock off unused parts by cutting links with a laser.

When this isn't done, as was the case with many of AMD's Athlon II and Phenom II era CPUs and a couple of their GPUs (like the Radeon 6950), you can theoretically activate the locked-off parts using custom firmware or a BIOS/UEFI that exposes such controls. However, the parts are usually locked off for a good reason, and such unlocks often fail.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

Factory Factory posted:

Generally, no. Intel especially will lock off unused parts by cutting links with a laser.

When this isn't done, as was the case with many of AMD's Athlon II and Phenom II era CPUs and a couple of their GPUs (like the Radeon 6950), you can theoretically activate the locked-off parts using custom firmware or a BIOS/UEFI that exposes such controls. However, the parts are usually locked off for a good reason, and such unlocks often fail.

or a pencil

Lord Windy
Mar 26, 2010
A 12-core CPU would be pretty awesome though :(

(I like programming with concurrency)

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

Lord Windy posted:

A 12-core CPU would be pretty awesome though :(

(I like programming with concurrency)

Most people who buy an i7 would rather have 8 cores and the higher clock speed, and those who need many more cores can achieve that with extremely expensive Xeons.

(Although those are the ridiculous 8-way socket models; it was more fun to pick those out.)

HalloKitty fucked around with this message at 13:19 on Jul 30, 2014


Lord Windy
Mar 26, 2010

HalloKitty posted:

So buy one. That's what Xeon workstations are for. Of course, it's Ivy not Haswell, but I imagine Haswell Xeons will follow soon.

Most people who buy an i7 would rather have 8 cores and the higher clock speed.

Oh, I spot-price AWS CPU servers whenever I want to get my kicks in with heaps of CPUs. I'm honestly pretty happy with my 4-core, 8-thread CPU. They will probably release a 12-core version of that with a reduced clock speed, which I won't buy because I simply don't need it for everyday computing.
