CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Greatest Living Man posted:

T just being a tower version?

Yes, and the tower versions generally come with quieter fans, since they're often sitting out in an office rather than a server room.

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues

Paul MaudDib posted:

If your rigs are physically close together, InfiniBand QDR is definitely the way to go for the moment. You can pick up a pair of dual-port IB cards and some cables for less than a single 10GbE adapter card, and if you end up having to lose your $100 investment then oh well, you had 40 gbps networking for 2 years or w/e at $50 per year. Switches are cheap too, a 24- or 36-port switch should run you around $125.

Having a lot of 10 GbE switching capacity gets real expensive, it's kinda hard to justify in a homelab setting past a trunk connection to a NAS, and IB still does better IOPS there. I would go so far as to say that if you need longer than a 7m run it might be worth looking into a retarded setup like a 10 GbE card bridged to its own InfiniBand port to handle the longer runs between your switches or something. Right now, for computers that are physically close, the economics of Infiniband on a per-adapter basis are just fantastic.

Any recommendations for eBayable 40Gb InfiniBand gear that works well in FreeNAS and Windows 10?

Mr Shiny Pants
Nov 12, 2012

admiraldennis posted:

Any recommendations for eBayable 40Gb InfiniBand gear that works well in FreeNAS and Windows 10?

ConnectX-3 cards should work.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

admiraldennis posted:

Any recommendations for eBayable 40Gb InfiniBand gear that works well in FreeNAS and Windows 10?

The ones I'm using are Mellanox MHQH29B-XTR, a ConnectX-2 part with a pair of QDR x4 ports, and they're working OK between my Ubuntu NAS and my Win10 Pro box. Not sure of the difference - they might not be as good as the 3-series - but they were $40 each so :shrug:

Mine don't deliver anywhere close to their theoretical throughput - iperf was giving me about 6 Gbit/s real speed to my server (though it still seemed like a huge improvement in IOPS) - but that's good enough for my uses that I haven't bothered digging.

Do keep an eye on whether it comes with the standard-height bracket or the low-profile bracket; I think the low-profile part number ends in -XSR or something. Mine came with swappable brackets, but they tend to get separated from their cards.

Then you just need a QSFP cable. QSFP can run up to 7m passive; longer runs need active optical cables (transducers + fiber), which get expensive.

Be aware that there are two modes you can run in. RDMA mode is faster, but applications need to be coded for it - Samba is, though there are supposedly some driver issues on the Windows side. TCP/IP mode (IPoIB) just works with everything, but has more overhead. In TCP/IP mode you cannot bond connections, so there's no need to buy two cables unless you want a spare or plan to connect a third machine.
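
If it helps, here's roughly what the TCP/IP (IPoIB) setup looks like on the Linux side - just a sketch, assuming the stock Mellanox/OFED drivers are loaded, that the card shows up as ib0, and that you put it on its own made-up subnet; your names and addresses will differ:

code:
# one side needs a subnet manager for the link to come up if your switch
# doesn't already run one (unmanaged/second-hand switches often don't)
sudo apt install opensm && sudo systemctl enable --now opensm

# load the IPoIB module and check that the port trains up
sudo modprobe ib_ipoib
ibstat                                    # port state should read "Active"

# give the IB interface its own little subnet, separate from your LAN
sudo ip addr add 10.10.10.1/24 dev ib0
sudo ip link set ib0 up

# then sanity-check throughput from the other machine
iperf -c 10.10.10.1 -P 4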

edit: Here's the manual (with a model listing).

Paul MaudDib fucked around with this message at 00:57 on Sep 21, 2017

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Samba doesn't do RDMA. Yet. Microsoft's own proper implementation of SMB does it, but there's some bullshit with the Windows 10 release. It's unclear whether Pro does SMB Direct or not, and if it does, what's going to happen, given there's talk that it's going to be shifted towards the Pro for Workstations edition.

Also, maximum performance is only attainable with multiple connections; how you get them depends on the protocol. With iperf, you have to tell it to run parallel streams (that's the -P 4 below):

code:
PS C:\temp\iperf> .\iperf.exe -c 172.17.0.22 -w 425984 -P 4
------------------------------------------------------------
Client connecting to 172.17.0.22, TCP port 5001
TCP window size:  416 KByte
------------------------------------------------------------
[  4] local 172.17.0.18 port 2636 connected with 172.17.0.22 port 5001
[  3] local 172.17.0.18 port 2635 connected with 172.17.0.22 port 5001
[  6] local 172.17.0.18 port 2638 connected with 172.17.0.22 port 5001
[  5] local 172.17.0.18 port 2637 connected with 172.17.0.22 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  13.0 GBytes  11.2 Gbits/sec
[  3]  0.0-10.0 sec  10.6 GBytes  9.12 Gbits/sec
[  6]  0.0-10.0 sec  10.7 GBytes  9.20 Gbits/sec
[  5]  0.0-10.0 sec  11.6 GBytes  10.0 Gbits/sec
[SUM]  0.0-10.0 sec  46.0 GBytes  39.5 Gbits/sec
PS C:\temp\iperf>
With Samba, you have to enable multichannel, I think. Not sure about iSCSI (it depends on the target); that probably needs MPIO.
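
For reference, the Samba side is (as far as I know) just a couple of smb.conf lines - treat this as a sketch, since multichannel was still marked experimental in Samba versions of this era and the FreeNAS UI location is from memory:

code:
# add these to the [global] section of smb.conf
# (on FreeNAS: the SMB service's auxiliary-parameters box):
#
#   server multi channel support = yes
#   aio read size = 1
#   aio write size = 1
#
# then poke the running smbd to re-read its config
sudo smbcontrol smbd reload-config
Windows should open the extra connections on its own after that; I believe Get-SmbMultichannelConnection on the client will show whether it actually did.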

I managed to shovel 28 Gbit/s using a single thread once. I forget the conditions under which that worked.

--edit: Also, Windows doesn't do iSER (iSCSI Extensions for RDMA). If you have the Windows 10 Feedback app installed, go upvote that poo poo:

https://aka.ms/Dxg2mv

--edit: If you're willing to do some driver-signature-enforcement-disabling fuckery and run old drivers, you can use SRP (SCSI RDMA Protocol). Not sure what target to run on FreeNAS for that; on Linux you can use LIO.

Combat Pretzel fucked around with this message at 00:53 on Sep 21, 2017

alecm
Aug 20, 2004

Lorraine, I'm your density. I mean . . . your destiny.

Moey posted:

I've been keeping my eye on them, it seems about once or twice a month.

Looks like these WD easystore 8TBs are again at Best Buy for $180 this week. (https://www.bestbuy.com/site/wd-easystore-8tb-external-usb-3-0-hard-drive-black/5792401.p?skuId=5792401) I'm interested in buying maybe eight and using them in a rackmount case with a backplane. If I get one or more of the white label Thai drives, is there any way to interrupt the 3.3v line to use them? Is cutting the trace on the drive itself the only option? :stonk:

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues

alecm posted:

Looks like these WD easystore 8TBs are again at Best Buy for $180 this week. (https://www.bestbuy.com/site/wd-easystore-8tb-external-usb-3-0-hard-drive-black/5792401.p?skuId=5792401) I'm interested in buying maybe eight and using them in a rackmount case with a backplane. If I get one or more of the white label Thai drives, is there any way to interrupt the 3.3v line to use them? Is cutting the trace on the drive itself the only option? :stonk:

You can maybe mod the backplane, or the power supply feeding the backplane, if you don't want to mod the drives... I also wouldn't be surprised if some backplanes don't even provide 3.3V.

I devised a (probable) way to tell white vs. red labels without opening the shrinkwrap. No idea how globally consistent it is, but it's held across the 14 total I've shucked from various Best Buys near me. I'm hesitant to share this outside of these here forums, to help slow eBay shuckers from buying up the last supply of red labels...

Look up the warranty expiration for the serial number on the bottom of the box. If it ends on or after 09/01/19 and the box is Made in Thailand, it's going to be a white label. The latest Red I got had a warranty ending 08/26/2019 - if it's on or before that date, in my experience at least, it's always a Red label. No idea if this date applies to Made in China boxes, or if there are Made in China white label ones at all (AFAIK I haven't heard evidence of one). The Made in China red label is actually the "standard" 128MB-cache Red drive sold at retail (as opposed to the Made in Thailand version with 256MB cache).

A good buying rule is to grab the earliest-dated boxes you can find. Registering the drive with WD bumps the warranty to two years from your purchase date, so no worries about ending up with a shorter warranty.

You can also tell for sure without shucking the drive by using SMART over USB which correctly relays the model number of the drive.
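
Something like this should do it - a sketch assuming smartmontools and that the enclosure shows up as /dev/da1 or whatever on your box; adjust the device name to taste:

code:
# -d sat = use SAT passthrough through the USB-SATA bridge; -i prints the
# identity page, including the internal drive's real model number
smartctl -d sat -i /dev/da1

# the "Device Model" line is the giveaway: it reports the bare drive inside
# the enclosure (Red vs. white label) without cracking the case open
CrystalDiskInfo on Windows should show the same model string if you'd rather not use a shell.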

Good luck and happy shucking



edit: also, let me know if you need any pointers - I've done a bunch of these now (all cases intact).

admiraldennis fucked around with this message at 00:01 on Sep 25, 2017

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues
FreeNAS/zfs question:

When I last set up a fileserver (using md) I always made the RAID member partitions smaller than the drive itself (like 0.5% smaller than the rated size) to account for variances in actual drive size, in case I needed to replace a drive with a different model down the road. Is this still something that needs to be done? Does FreeNAS/ZFS handle it automatically? Or are drive sizes standardized these days?
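
(For anyone who hasn't seen the old md-era trick I'm describing, it was just something like this - the 0.5% figure and device names are only illustrative:)

code:
# leave ~0.5% of each disk unused so a slightly smaller replacement still fits
parted --script /dev/sdb mklabel gpt mkpart primary 1MiB 99.5%
parted --script /dev/sdc mklabel gpt mkpart primary 1MiB 99.5%

# build the array out of the partitions rather than the raw disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1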

alecm
Aug 20, 2004

Lorraine, I'm your density. I mean . . . your destiny.

admiraldennis posted:

Immensely helpful information.

Thank you so much for this. I managed to find eight drives across two different Best Buys, all within the date ranges you mentioned and all red label Thai drives. Strangely, I saw no Chinese drives at all.

I found this PDF guide on a very straightforward way to open the case, and was able to do the first one in about 5 minutes. I couldn't find the original post to attribute credit, but it seems to have originated on the [H]ard forums. It's the clearest, most concise breakdown of what to do that I've seen. At the very least, it's better than watching some dude narrate his 15-minute struggle to open the case over the course of a YouTube video.

lol internet.
Sep 4, 2007
the internet makes you stupid
Looking to get smart real quick on Fibre Channel multipath with Hyper-V clusters. I understand how Fibre Channel works, but once you start adding multiple connections my head tends to spin when a colleague explains it to me.

Any good references out there? Physical FC equipment seems rather expensive; has anyone used a software simulator before? I found this in a quick Google search: http://www.simsans.org/demo.htm

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Are there any quad NVMe adapter boards (i.e. that let you put 4x NVMe drives onto a single x16 slot) that have an onboard PLX chip (rather than relying on bifurcation) and take the standard consumer M.2 drive form factor?

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Paul MaudDib posted:

Are there any quad NVMe adapter boards (i.e. that let you put 4x NVMe drives onto a single x16 slot) that have an onboard PLX chip (rather than relying on bifurcation) and take the standard consumer M.2 drive form factor?

Yeah just saw this one:

http://www.tomshardware.com/reviews/aplicata-m.2-nvme-ssd-adapter,5201.html

Boo PLX though! ;)

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

What's wrong with PLX? Are there limitations/drawbacks I should be aware of?

Yeah, that's the only one I ran into. There are a few SuperMicro ones with PLX chips, but it looks like they're designed for units with some kind of cabled interface (possibly optical), and there are some others from various companies that don't use PLX chips (but then you need a mobo that supports bifurcation). Also, none of the PLX offerings appear to handle more than 6.4 GB/s.

Greatest Living Man
Jul 22, 2005

ask President Obama
So I took the plunge and bought a Supermicro CSE-825 (2U) with an X8DTU-F motherboard, 2x Xeon E5620, and eight 3.5" drive bays for cheap on eBay. I bought 6x Kingston 8GB PC3-8500R modules for it, which are compatible with the processors from what I can tell.

From my understanding from the motherboard layout (http://www.supermicro.com/products/motherboard/QPI/5500/X8DTU-F.cfm), this specific board has 6x SATA ports, 2x PCI-E 2.0 x8, and 1x PCI-E 2.0 x16. If I want to add more SATA ports, can I use something that I already have like this: https://www.amazon.com/gp/product/B00AZ9T3OU/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 in one of the x8 slots? Is it advisable to mix up controllers, e.g. use the motherboard's internal SATA ports with the Marvell card's ports, especially across something like FreeNAS?

I'm kind of confused by the idea of "riser cards," UIO and WIO. Supermicro has a page they refer to: http://www.supermicro.com/products/nfo/UIO.cfm but I can't really tell what I'm supposed to do with this information. Should I just look at the UIO riser card support? If I end up needing more PCI-e slots, can I use a riser card like this? http://www.ebay.com/itm/Supermicro-...rkAAOSwLghZusFP I haven't received the actual machine yet, so I don't know if it already includes a riser card or not, but I'm assuming it doesn't. Most of this I will probably be able to figure out after getting the machine and messing around with it, but it would be nice to have some input.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Greatest Living Man posted:

From my understanding from the motherboard layout (http://www.supermicro.com/products/motherboard/QPI/5500/X8DTU-F.cfm), this specific board has 6x SATA ports, 2x PCI-E 2.0 x8, and 1x PCI-E 2.0 x16. If I want to add more SATA ports, can I use something that I already have like this: https://www.amazon.com/gp/product/B00AZ9T3OU/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 in one of the x8 slots? Is it advisable to mix up controllers, e.g. use the motherboard's internal SATA ports with the Marvell card's ports, especially across something like FreeNAS?

There's no problem at all with that. You'll want to run in "IT mode" so your OS just sees a bunch of drives that it hands over to soft-RAID or LVM or ZFS or whatever (aka JBOD - "just a bunch of disks").

What is actually problematic is when you have hardware RAID, since in that case you can't span the RAID across drives that aren't on the same controller. The cheaper hard-RAID options tend to be a lot more funky/unreliable/less capable than software-level management these days.

Generally LSI is considered more solid/reputable/compatible than Marvell but they can be spendy.
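
To make that concrete, once the HBA is in IT mode the drives just show up as ordinary disks and you hand them straight to ZFS - a minimal sketch, with the pool name and device names made up for illustration:

code:
# the HBA in IT mode exposes each disk individually (da0..da7 on FreeBSD/FreeNAS)
camcontrol devlist

# build a pool of four mirrored pairs out of them - ZFS does the redundancy,
# the controller is just a dumb pile of ports
zpool create tank \
  mirror da0 da1 \
  mirror da2 da3 \
  mirror da4 da5 \
  mirror da6 da7
zpool status tank
Swap the layout for raidz2 or whatever suits you - the point is the HBA isn't doing any of the RAID work.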

quote:

I'm kind of confused by the idea of "riser cards," UIO and WIO. Supermicro has a page they refer to: http://www.supermicro.com/products/nfo/UIO.cfm but I can't really tell what I'm supposed to do with this information. Should I just look at the UIO riser card support? If I end up needing more PCI-e slots, can I use a riser card like this? http://www.ebay.com/itm/Supermicro-...rkAAOSwLghZusFP I haven't received the actual machine yet, so I don't know if it already includes a riser card or not, but I'm assuming it doesn't. Most of this I will probably be able to figure out after getting the machine and messing around with it, but it would be nice to have some input.

Basically a riser card is a way to take a PCIe slot that comes straight off the motherboard and relocate it, so cards can fit into the chassis more easily. Some are "hard" PCB risers, some are "soft" risers with flexible ribbon cables; the latter can do things like fit 4 PCIe devices into a thin 1U/2U chassis. Once you're into OEM cases you need to look at what's compatible with the specific case - SuperMicro should list a replacement part number or be able to tell you via tech support.

Next hurdle: by default a PCIe slot is only designed to have one device plugged into it, so even if the riser has 2 slots you may only be able to use one card in it. Some motherboards support "PCIe bifurcation", which lets you (eg) slice an x8 slot into 2x x4 slots, or an x16 into 4x x4. SuperMicro will tell you if your mobo supports that. Motherboards that don't have this capability can use an adapter with what's called a "PLX switch" - a chip that acts as its own PCIe switch and can re-route traffic between multiple cards behind a single slot (IIRC motherboards with bifurcation mostly just have these onboard). It's an expensive add-on (~$80 just for the chip iirc), so not all boards or adapters have one.

However, you have to pay real close attention to bandwidth, because this doesn't actually make the slot any faster. If you have four M.2 drives that can each talk at 4 GB/s but your switch's uplink can only do 8 GB/s total, then in theory you can max out the switch with only 2 drives' worth of load. In practice you usually aren't maxing every single device out at once - the bigger problem is that you're hurting for lanes, especially with legacy devices (cheapo IB QDR cards are usually PCIe 2.0 x8), since lanes are the basic increment of bandwidth allocation regardless of bus clocks.
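
If you want to see what you're actually getting, lspci will show both what a card can do and what it negotiated - quick sketch on Linux, with the device address being whatever yours turns out to be:

code:
# LnkCap = what the card supports, LnkSta = what it actually negotiated.
# A "legacy" IB QDR card will show something like 5GT/s x8 (PCIe 2.0 x8),
# and a drive stuck behind a starved switch/PCH may train at fewer lanes.
sudo lspci -vv -s 03:00.0 | grep -E 'LnkCap|LnkSta'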

The PCH/chipset acts like a PLX switch for most of the motherboard's peripheral capabilities - but it only has 3.0 x4 speed (~4 GB/s) total for everything that hangs off it. This is one of the places where consumer processors could really stand to improve; it's a really convenient spot to hang M.2 or high-speed networking (onboard or add-in), but it can't handle both of those at once. And the reality is that 10 GbE is just getting cheaper and cheaper, let alone ghetto 2007 poo poo like IB QDR.

Intel VROC probably isn't far from the mark: give it a JBOD mode that can bifurcate into 4x lane slices for cheapo NVMe scale-out and you'll cash in bigtime. $100 extra for cutting out a bunch of PLX chips, yes please.

Paul MaudDib fucked around with this message at 02:53 on Sep 27, 2017

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I don't like PLX because they are the competition ;)

Also quite a few motherboards bifurcate slots now, ASRock Rack stuff is very good for this. Aplicata has a x16 to quad x4 passive card (without switch).

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

priznat posted:

I don't like PLX because they are the competition ;)

Also quite a few motherboards bifurcate slots now, ASRock Rack stuff is very good for this. Aplicata has a x16 to quad x4 passive card (without switch).

Please update the Rpi's GPU, it's terrible now. Also, SATA in the next incarnation too. It's impressive how much Broadcom has squandered (even hindered) the immense publicity from the Rpi Foundation. Intel wishes Edison would have been the Raspberry Pi.

I haven't used it but I've been super impressed with what I've read about Asrock Rack's stuff. Both they and SuperMicro do very good work for the white-box community. When I filter a PcPartPicker search down to something absurd I'm not at all surprised to see their products left in my results, and the reviews are invariably pretty good. Price-conscious but sensible white-box gear built for some odd niches.

Surprisingly I've actually gotten great pre-sale technical responses from Asus within reasonable windows too. I have been drooling over the Asus X99M-WS for a power-user build in a U-NAS NSC810a, and I asked them "does this support 128GB with 4x32 GB RDIMMS? the ATX version (X99E-WS) does, and the memory support list hasn't been updated since 2015..." and "does this support bifurcation?" and both times the response has been "from our documentation it looks like no, but let me ask our engineering team" within like 12 hours and then I get a definitive "no" from the engineering team within a week or so. Props, that's a legit support response to someone who hasn't even given them money yet (and may not).

It really looks like PCIe 3.0 x8 is the best you can do at the moment - I guess that's probably the speed of the cheapest PLX chip or something.

Paul MaudDib fucked around with this message at 03:36 on Sep 27, 2017

redeyes
Sep 14, 2002

by Fluffdaddy
FWIW I've built 4 ASRock Rack servers and all have been painless: great BIOSes and, most importantly, stable. I wouldn't mind a workstation based on one of their boards if I could pull it off. Gamer garbage LEDs and whatever do nothing for me.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

redeyes posted:

Gamer garbage LEDs and whatever do nothing for me.

Seriously. I am not joking about putting high-speed connectivity on the board. Infiniband QDR to my fileserver owns pretty hard. I'm bottlenecked by SATA and/or my drives at the moment. Now give me a couple NVMe SSDs to write from (same rate, ~4 GB/s ea) and I'm super happy. Doesn't matter whether they go in my machine itself or on the NAS on a personal level, but as far as scaling it out to multiple machines (for OS/app drives) I could see cache/dedup scaling better.

IMO this is the next front to tackle now that Moore's Law is really hitting the wall. Let's see if we can deliver way faster IOPS in an economical way. 10 GbE is pretty much here even as it is, and that's a commodity product, not surplus sales like IB QDR.

Without increasing PEG/graphics lanes this is really the next front. It's too expensive to put into every board at the moment, but it needs to be an option that OEMs can fill.

I really couldn't give a poo poo about LEDs. I would rather have a windowless case, which is actually getting hard to find these days. Spend those pennies on other poo poo on the board.

Paul MaudDib fucked around with this message at 03:53 on Sep 27, 2017

IOwnCalculus
Apr 2, 2003





Greatest Living Man posted:

So I took the plunge and bought a Supermicro CSE-825 (2U) with an X8DTU-F motherboard, 2x Xeon E5620, and eight 3.5" drive bays for cheap on eBay. I bought 6x Kingston 8GB PC3-8500R modules for it, which are compatible with the processors from what I can tell.

From my understanding from the motherboard layout (http://www.supermicro.com/products/motherboard/QPI/5500/X8DTU-F.cfm), this specific board has 6x SATA ports, 2x PCI-E 2.0 x8, and 1x PCI-E 2.0 x16. If I want to add more SATA ports, can I use something that I already have like this: https://www.amazon.com/gp/product/B00AZ9T3OU/ref=oh_aui_search_detailpage?ie=UTF8&psc=1 in one of the x8 slots? Is it advisable to mix up controllers, e.g. use the motherboard's internal SATA ports with the Marvell card's ports, especially across something like FreeNAS?

I'm kind of confused by the idea of "riser cards," UIO and WIO. Supermicro has a page they refer to: http://www.supermicro.com/products/nfo/UIO.cfm but I can't really tell what I'm supposed to do with this information. Should I just look at the UIO riser card support? If I end up needing more PCI-e slots, can I use a riser card like this? http://www.ebay.com/itm/Supermicro-...rkAAOSwLghZusFP I haven't received the actual machine yet, so I don't know if it already includes a riser card or not, but I'm assuming it doesn't. Most of this I will probably be able to figure out after getting the machine and messing around with it, but it would be nice to have some input.

Did you buy the motherboard and chassis together or separately? If you bought it as a complete system it should hopefully have whatever riser card is needed. Supermicro UIO boards are meant to fit their specific chassis, as opposed to the generic xATX form factor boards and chassis that they also manufacture.

And yes a board like that is absolutely designed to have a riser card plugged into it, generally without any PCIe switch logic needed.

On the drive controller front, since you don't mind used hardware, get an LSI controller instead. Dead reliable and extremely well supported.

Djarum
Apr 1, 2004

by vyelkin

Paul MaudDib posted:

I really couldn't give a poo poo about LEDs. I would rather have a windowless case, which is actually getting hard to find these days. Spend those pennies on other poo poo on the board.

Amen. I have no idea when it became the thing to make your computer look like a loving Daft Punk concert, but man, it looks embarrassing. I have had to go completely out of my way to find decent, well-made, windowless cases for all my machines these days.

And Jesus gently caress if I get one more thing with a blue LED that I can see from space I am going to lose it.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Djarum posted:

Amen. I have no idea when it became the thing to make your computer look like a loving Daft Punk concert, but man, it looks embarrassing. I have had to go completely out of my way to find decent, well-made, windowless cases for all my machines these days.

And Jesus gently caress if I get one more thing with a blue LED that I can see from space I am going to lose it.

Seriously, it is GPU Box Art for the teenies. It's so loving cringe; just give me something understated with as much loving expandability as you can hammer into every cubic inch, regardless of case volume. Fractal Design, DAN, and Raven are doing Satan's work here - nothing is worse than an exquisitely designed case that kickstarts for $400 and ships in 2019. :shepspends:

I tried to buy an F31 Suppressor and it turns out they're basically out of production now. I complained after 2 months of backorder when Amazon still had no ship date; now the only windowless one you can find is the F51 Suppressor.

Paul MaudDib fucked around with this message at 05:12 on Sep 27, 2017

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

Paul MaudDib posted:

Please update the Rpi's GPU, it's terrible now. Also, SATA in the next incarnation too. It's impressive how much Broadcom has squandered (even hindered) the immense publicity from the Rpi Foundation. Intel wishes Edison would have been the Raspberry Pi.

Oh I'm not a RPi person, I work on PCIe switches so PLX is my deadly rival: https://www.microsemi.com/product-directory/ics/3724-pcie-switches

Right now the main focus is tier 1/2 customers, but it's going wider and should show up on motherboards and AICs eventually. The main advantages over PLX are flexibility in bifurcation and lane count, plus more customization through firmware.

The PAX series is especially cool: it's a massive fabric where many hosts each get their own individual domain(s) of drives, and can even share multi-function drives through SR-IOV. There's some really cool stuff, like a PCIe backplane in a rack where you can pull out a bay full of drives, push in a blade server, and it will swap between them on the fly without causing a hiccup in the rest of the system. It's cool as hell.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Paul MaudDib posted:

Please update the Rpi's GPU, it's terrible now. Also, SATA in the next incarnation too.

Do you not understand that the Broadcom chip in the RPi wasn't designed for the RPi? It was designed for things like low-spec cellphones and set-top boxes.

BlankSystemDaemon
Mar 13, 2009



I like SBCs, and it looks like companies are finally looking to do server platforms with them.

BlankSystemDaemon fucked around with this message at 10:49 on Sep 27, 2017

Mr Shiny Pants
Nov 12, 2012
The problem with ARM seems to be that a lot of manufacturers are talking about releasing something, but when you actually want to buy a board, nothing is available.

I would like a 24-core ARM SoC with PCIe and room for 64GB of RAM. That would be an awesome machine.

Greatest Living Man
Jul 22, 2005

ask President Obama

Paul MaudDib posted:

There's no problem at all with that. You'll want to run in "IT mode" so your OS just sees a bunch of drives that it hands over to soft-RAID or LVM or ZFS or whatever (aka JBOD - "just a bunch of disks").

What is actually problematic is when you have hardware RAID, since in that case you can't span the RAID across drives that aren't on the same controller. The cheaper hard-RAID options tend to be a lot more funky/unreliable/less capable than software-level management these days.

Generally LSI is considered more solid/reputable/compatible than Marvell but they can be spendy.


So, like an LSI 9210-8i with two SAS-to-4x-SATA splitters? That would allow me to run 8 drives mirrored in ZFS. Then I could use the onboard SATA ports to run my OS hard drives?

e: there's no problem with using >2TB drives with these, is there? I've been seeing that pop up in forums.

IOwnCalculus posted:

Did you buy the motherboard and chassis together or separately? If you bought it as a complete system it should hopefully have whatever riser card is needed. Supermicro UIO boards are meant to fit their specific chassis, as opposed to the generic xATX form factor boards and chassis that they also manufacture.

And yes a board like that is absolutely designed to have a riser card plugged into it, generally without any PCIe switch logic needed.

On the drive controller front, since you don't mind used hardware, get an LSI controller instead. Dead reliable and extremely well supported.

The mobo and chassis / CPUs come pre-assembled together. We'll see in the next couple of days I guess.

Greatest Living Man fucked around with this message at 15:26 on Sep 27, 2017

eightysixed
Sep 23, 2004

I always tell the truth. Even when I lie.
Newegg.com has 2-Pack 4TB Seagate IronWolf NAS 3.5" Internal Hard Drive (ST4000VN008) on sale for $209.99 with free shipping. That's only $105 for the 4TB NAS drive - although I have no idea about the model. Would it be advisable to buy 4 of these? Does anyone know about the Seagate IronWolf NAS drives? :shobon:

edit: 3 year warranty

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
The reliability of Seagate IronWolf drives is an open question, but the price isn't bad--it's about $10/drive lower than the lowest Amazon has had them for.

Whether it's worth buying 4 of them depends on what you're doing with them, particularly since you can still pick up 8TB WD Reds for ~$180 via the BestBuy external shucking shenanigans.

IOwnCalculus
Apr 2, 2003





Greatest Living Man posted:

So, like an LSI 9210-8i with two SAS-to-4x-SATA splitters? That would allow me to run 8 drives mirrored in ZFS. Then I could use the onboard SATA ports to run my OS hard drives?

e: there's no problem with using >2TB drives with these, is there? I've been seeing that pop up in forums.


The mobo and chassis / CPUs come pre-assembled together. We'll see in the next couple of days I guess.

Yes, a 9210-8i - or any SAS2008 card - supports large drives fine. It's the older 1064/1068 controllers that have problems with drives over 1.5TB. The main thing you want is to be able to flash it in IT mode.

I would run your OS drives on the onboard SATA to make booting easier, but the ZFS array can be split across any usable SATA / SAS ports in your system. At one point my array was split across the onboard controller and two separate SAS2008 controllers.
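
If it helps, checking what firmware a SAS2008 card is currently running is quick - a sketch assuming LSI's sas2flash utility is on hand (there are FreeBSD and Linux builds); the actual crossflash procedure varies by card, so this isn't the whole recipe:

code:
# list every LSI controller and its firmware; you want "IT" in the firmware
# product/version string rather than "IR" (integrated RAID) firmware
sas2flash -listall
sas2flash -c 0 -list
If it reports IR firmware, follow a crossflash guide for your exact model before trusting it with a pool.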

eightysixed posted:

Newegg.com has 2-Pack 4TB Seagate IronWolf NAS 3.5" Internal Hard Drive (ST4000VN008) on sale for $209.99 with free shipping. That's only $105 for the 4TB NAS drive - although I have no idea about the model. Would it be advisable to buy 4 of these? Does anyone know about the Seagate IronWolf NAS drives? :shobon:

edit: 3 year warranty

Anecdote != data, but I've had five IronWolf drives in my hands. 100% failure rate - one DOA and the other four racked up massive SMART faults (and terrible performance) immediately. They were even from two different manufacturing batches, but they did all come from Amazon. They also omitted the middle drive holes.

I replaced them with Toshiba drives in the exact same cables / controllers / drive trays and they've been flawless.

IOwnCalculus fucked around with this message at 19:01 on Sep 27, 2017

emocrat
Feb 28, 2007
Sidewalk Technology

admiraldennis posted:



You can also tell for sure without shucking the drive by using SMART over USB which correctly relays the model number of the drive.

Good luck and happy shucking


edit: also, let me know if you need any pointers - I've done a bunch of these now (all cases intact).

OK, based on this post I just ran out and bought 2; both had the 08/26/2019 date and were from Thailand. Best Buy says I've got 15 days to return them, so I figured I'd also verify using the SMART thing you mentioned above before prying them open. So, since you offered pointers, can you briefly tell me how to do that? Is this with CrystalDiskInfo or what? What info am I looking for? Thanks!

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

IOwnCalculus posted:

Yes, a 9210-8i - or any SAS2008 card - supports large drives fine. It's the older 1064/1068 controllers that have problems with drives over 1.5TB. The main thing you want is to be able to flash it in IT mode.

I would run your OS drives on the onboard SATA to make booting easier, but the ZFS array can be split across any usable SATA / SAS ports in your system. At one point my array was split across the onboard controller and two separate SAS2008 controllers.


Anecdote != data, but I've had five IronWolf drives in my hands. 100% failure rate - one DOA and the other four racked up massive SMART faults (and terrible performance) immediately. They were even from two different manufacturing batches, but they did all come from Amazon. They also omitted the middle drive holes.

I replaced them with Hitachi drives in the exact same cables / controllers / drive trays and they've been flawless.



My continual NAS/ZFS upgrade strategy is to buy a drive a month until I've got enough to resize one of my pools. I also like to split between manufacturers.

My next pool due for an upgrade consists of 5x 2TB drives. So far I've got two 8TB WD Reds. It's really hard to justify getting anything but Reds from those shuckable EasyStores, but if I do, I think I'm going to get a Hitachi and then a Toshiba next, because even though you don't hear about them much, everything I do hear is good.

The 8TB Seagates seem like they're OK-ish, but look at those 4TB models!

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Not to be a Seagate apologist, but it's worth noting that those terrible Seagates are generic desktop drives, while the WD's are all Reds.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

DrDork posted:

Not to be a Seagate apologist, but it's worth noting that those terrible Seagates are generic desktop drives, while the WD's are all Reds.

Yeah, I wasn't really making any argument, I was just being amazed at the failure rate.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
abloo bloo bloo but Seagate failure rates are actually really good if you ignore "specific models" (that make up about 80% of the lineup, and that you won't be able to identify until 6-12 months down the road) :qq:

I mean surely having 30x the failure rate of comparable models can be attributed to NAS branding, despite that branding being widely agreed to make essentially no difference to things like UBE failure rates, and in some cases coming off literally the exact same production lines, while competing brands show no difference whatsoever. That definitely makes tons of sense.

Yup, certainly no widespread problem with Seagate products that goes back literally a decade. You'll definitely win the shell game this time!

Paul MaudDib fucked around with this message at 19:13 on Sep 27, 2017

IOwnCalculus
Apr 2, 2003





I'm honestly glad mine all poo poo the bed literally immediately. Made it very easy to just ship them back to Amazon when I hadn't even been able to load any real data on them. Trying to copy anything, they'd be fast until the cache filled up and then they'd be absurdly slow.

Also, I was a goddamn idiot in my last post: I replaced them with Toshibas, not Hitachis. They've still been dead reliable.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

Thermopyle posted:

Yeah, I wasn't really making any argument, I was just being amazed at the failure rate.

How are they calculating that failure rate column? Is it related to operating hours? Because 5 failed ST4000DM001 out of 400 installed is 1.25%, not 30.4%.

(Spot checked a couple HGST drives and those numbers are also inflated relative to the straightforward percentage calculation, though not by as much.)

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

BobHoward posted:

How are they calculating that failure rate column? Is it related to operating hours? Because 5 failed ST4000DM001 out of 400 installed is 1.25%, not 30.4%.

(Spot checked a couple HGST drives and those numbers are also inflated relative to the straightforward percentage calculation, though not by as much.)

They're annualized total failure rates, calculated on the assumption that a failed drive will be replaced with another that has an equivalent chance of failure. So in this calculation it's possible to have failure rates >100%, all that means is that you have 100 drives but over the course of a year you needed (eg) 130 replacements. There is a PDF explaining it in this ZIP (and their data is also available if you want to look at it).
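
Roughly, the arithmetic looks like this (the ~15-day average service time is just a number I picked to show how 5 failures out of 400 drives can annualize to ~30%; the real drive-day counts are in that data):

code:
# annualized failure rate = failures / drive-days * 365
# e.g. 400 drives that have only averaged ~15 days in service each:
awk 'BEGIN { printf "%.1f%%\n", 5 / (400 * 15) * 365 * 100 }'
# -> 30.4%
So a fleet of nearly-new drives with a few early deaths annualizes to a scary-looking number even though only a handful actually failed.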

eightysixed
Sep 23, 2004

I always tell the truth. Even when I lie.

DrDork posted:

The reliability of Seagate IronWolf drives is an open question, but the price isn't bad--it's about $10/drive lower than the lowest Amazon has had them for.

Whether it's worth buying 4 of them depends on what you're doing with them, particularly since you can still pick up 8TB WD Reds for ~$180 via the BestBuy external shucking shenanigans.

I was going to use 2 disks for data and 2 disks for parity with unRAID. Is this a bad drive to do this with? y/n :shobon:

IOwnCalculus
Apr 2, 2003





Paul MaudDib posted:

They're annualized total failure rates, calculated on the assumption that a failed drive will be replaced with another that has an equivalent chance of failure. So in this calculation it's possible to have failure rates >100%, all that means is that you have 100 drives but over the course of a year you needed (eg) 130 replacements. There is a PDF explaining it in this ZIP (and their data is also available if you want to look at it).

Valid point - so by this metric I had a 125% failure rate, since one of the five was a replacement for the first DOA drive :haw:
