BlankSystemDaemon
Mar 13, 2009



priznat posted:

They might have some junk cluttering up the motherboard where a x16 edge would bang into it I guess.

I’ve done it for work stuff on slots that didn’t have parts in the way and just snipping out the plastic worked best with some fine cutters. A lot of enterprise boards are more likely to have the open ended slots too.
Ah, yeah that's true, there could be stuff in the way.

Workstation/server boards are how I know about open-ended daughterboard slots.

Anime Schoolgirl
Nov 28, 2002

Asrock tends to leave the x1 slots open-ended on about half of their mid-high end consumer boards. I've only ever seen ASUS do it for their workstation-marketed and enterprise things.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!
In terms of PCIe slots the best board I found was the Asus ProArt B650-Creator - notably better than the X670 ProArt if you need a third slot with more than two lanes. The B650 will do x8/x8/x4 PCIe 4.0, which might limit upgrades in the future, but the current-gen Nvidia cards only do 4.0 and my NIC is PCIe 3.0, so I needed the lanes. In practice I'm only able to do about 20Gbps with iperf on a 40GbE NIC, instead of the 32ish I was hoping for, but that might be limited by the CPU at the other end and a lack of proper offload somewhere. The X670 ProArt will do x8/x8 PCIe 5.0, but then the third slot is stuck at two lanes of PCIe 4.0.

edit: Oh yeah there are MSI boards that will also do x8/x8/x4 but I had excluded them due to historically bad IOMMU support from MSI. If you're not concerned about that, the MSI boards will probably be even better.
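For reference, that "32ish" figure is just the link-rate math for a Gen3 card in the x4 slot (a sketch; it assumes 128b/130b encoding for Gen3+ and ignores TLP/protocol overhead, so achievable throughput is a bit lower still):

```python
# Quick PCIe usable line-rate math (sketch: 128b/130b encoding, no
# protocol/TLP overhead, so real throughput lands somewhat below this).
GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}  # GT/s per lane per PCIe generation

def pcie_gbps(gen: int, lanes: int) -> float:
    """Usable bandwidth of a link in Gbit/s after encoding overhead."""
    return GT_PER_LANE[gen] * lanes * (128 / 130)

# A Gen3 card in the x4 third slot: ~31.5 Gbit/s -- the "32ish" above.
print(round(pcie_gbps(3, 4), 1))
# The same card at x8 would have headroom for the full 40GbE line rate.
print(round(pcie_gbps(3, 8), 1))
```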

Desuwa fucked around with this message at 16:08 on Apr 22, 2024

Bjork Bjowlob
Feb 23, 2006
yes that's very hot and i'll deal with it in the morning


The MSI MPG X670E Carbon Wifi could be another option, it will do x8/x8/x4 with the first two x8s at PCIe 5.0. I've been using this board populated with a 3080 in the first slot, P420 SAS controller in the second, and a X520 2x 10Gb NIC in the third without any issues so far.

BlankSystemDaemon
Mar 13, 2009



Bjork Bjowlob posted:

The MSI MPG X670E Carbon Wifi could be another option, it will do x8/x8/x4 with the first two x8s at PCIe 5.0. I've been using this board populated with a 3080 in the first slot, P420 SAS controller in the second, and a X520 2x 10Gb NIC in the third without any issues so far.
Heck yea, P420 and X520 HBA buddy!

Kivi
Aug 1, 2006
I care
You'd also need to consider power delivery: wider slots can carry more power, so you'd need to make sure these cute small open-ended slots can handle cards that draw up to 75 watts.

Klyith
Aug 3, 2007

GBS Pledge Week

Kivi posted:

You'd also need to consider power delivery: wider slots can carry more power, so you'd need to make sure these cute small open-ended slots can handle cards that draw up to 75 watts.

PCIe power is all on that front stubby bit, which is the same on every size of slot. Every slot needs 75 watts by spec.



My guess would be that open-end slots are much easier to break or damage. Server stuff gets put together and then shoved into racks and nobody touches it until it fails or is obsolete. DIYers are always monkeying with their PCs. And if someone puts a heavy x16 GPU into an open-end x4 slot and then is moving or shipping the PC, it probably ends in tears.

Thus the general absence in consumer boards and prevalence in server and pro-grade stuff.

Tuna-Fish
Sep 13, 2017

Klyith posted:

PCIe power is all on that front stubby bit, which is the same on every size of slot.
True.

Klyith posted:

Every slot needs 75 watts by spec.
Not true. By spec, a x16 slot must be able to provide 75W, while a x1 or x4 must be able to provide at least 25W. A small slot may optionally provide up to 75W, but is not required to be able to by the spec. (There is a protocol for configuring a card for high power that allows for the host to communicate to the cards how much power they may draw.)
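Those budget rules are easy to sketch in a few lines (numbers taken from the post above; the actual CEM spec has more cases, e.g. lower initial budgets before a card is configured for high power):

```python
# Minimum power a slot must supply by spec, per the post above (watts).
REQUIRED_SLOT_WATTS = {1: 25, 4: 25, 8: 25, 16: 75}
OPTIONAL_MAX_WATTS = 75  # smaller slots *may* optionally be wired for 75W

def card_fits(card_watts: float, slot_lanes: int, slot_does_75w: bool = False) -> bool:
    """True if a card's slot-power draw is within the slot's budget."""
    budget = OPTIONAL_MAX_WATTS if slot_does_75w else REQUIRED_SLOT_WATTS[slot_lanes]
    return card_watts <= budget

print(card_fits(26, 4))        # a 26W NIC in a bare-minimum x4 slot: False
print(card_fits(26, 4, True))  # same NIC in a 75W-capable x4 slot: True
```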

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
25W ought to be enough. Even the most inefficient 400Gbit ConnectX-7 barely goes past that, at 26W, and other models are less power hungry.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Mixing different types of memory of similar specs is still a no-go?

I want to add more RAM to my NAS; turns out they've since switched from Micron E-die to Hynix C-die on the (mostly) specific model of module.

Klyith
Aug 3, 2007

GBS Pledge Week

Combat Pretzel posted:

Mixing different types of memory of similar specs is still a no-go?

Do you mean like one stick of each, or adding another pair of sticks?

Mixed sticks in a single pair, your problem is the XMP/AMP values might be slightly different for the two. When loading XMP it just looks at one stick; it doesn't do any comparison or smarts. It should work fine at JEDEC, or with the speed backed down one notch from the rated value. But to fully OC you may need to set timings & voltage manually, because you have to target the worse value of the two sticks.

Adding a second pair, it doesn't matter because even 4 perfectly matched sticks will get trained to slower timings, if they can run at rated speed at all.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Going from two to four sticks, at JEDEC speeds. They're DDR4-3200 CL22 out of the box. Well, four of them are gonna run at most at 2666 from what the mainboard manual says.

hobbesmaster
Jan 28, 2008

Combat Pretzel posted:

Mixing different types of memory of similar specs is still a no-go?

I want to add more RAM to my NAS; turns out they've since switched from Micron E-die to Hynix C-die on the (mostly) specific model of module.

It’s a NAS, run JEDEC speeds.

Combat Pretzel posted:

Going from two to four sticks, at JEDEC speeds. They're DDR4-3200 CL22 out of the box. Well, four of them are gonna run at most at 2666 from what the mainboard manual says.

…well, if the mobo's original release predates Zen 2, I bet you could get 3200 CL22, which is the fastest JEDEC speed.
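The practical cost of dropping a speed bin is easy to put numbers on (a sketch; DDR4-3200 CL22 is from the posts above, while CL20 at 2666 is just an assumed JEDEC bin for comparison — what the mixed sticks actually train to may differ):

```python
def cas_ns(mt_s: int, cl: int) -> float:
    """First-word CAS latency in ns: CL cycles at the DDR clock (MT/s / 2)."""
    return cl / (mt_s / 2) * 1000

print(round(cas_ns(3200, 22), 2))  # 13.75 ns -- DDR4-3200 CL22
print(round(cas_ns(2666, 20), 2))  # ~15.0 ns -- assumed CL20 bin at 2666
```

So the latency hit from running 2666 is small; bandwidth is where the slower bin mostly costs you.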

Kivi
Aug 1, 2006
I care

Tuna-Fish posted:

True.

Not true. By spec, a x16 slot must be able to provide 75W, while a x1 or x4 must be able to provide at least 25W. A small slot may optionally provide up to 75W, but is not required to be able to by the spec. (There is a protocol for configuring a card for high power that allows for the host to communicate to the cards how much power they may draw.)
Also, the power needs to come from somewhere: if you populate every slot (6 or 7) with a 75-watt card you end up with almost 500 watts through the main ATX connector. There's a reason why higher-end boards have extra PCIe or EPS power connectors near the slots if they're kitted out with enough full-length slots.
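The back-of-envelope math behind that (a sketch; ~500W is the post's own figure, and the actual per-rail rating of a 24-pin connector varies by board and pin gauge):

```python
# Worst case from the post above: every full-length slot pulling its 75W budget,
# all of it sourced through the main ATX connector if no auxiliary inputs exist.
slots = 7
watts_per_slot = 75
total = slots * watts_per_slot
print(total)  # 525 -- hence the extra PCIe/EPS inputs on kitted-out boards
```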
