fatman1683
Jan 8, 2004
How would one go about building an eSATA DAS? I'd like to have an external RAID box connected to my ESXi host, but the cheap off-the-shelf options don't have any kind of useful management interface, and there's a huge gap between the $200 models and the $2000+ models where there's basically nothing.

Presumably you'd need some kind of controller card in the storage box that can act as a disk and present the array in the box to the host's controller, but I have no idea if such a thing exists as a standalone product, and I haven't found anything that advertises that capability.

Anyone know if this is possible?


fatman1683
Jan 8, 2004

PitViper posted:

I think that depends. Do you want the box to handle the RAID management, or one of the VMs?

Basically I want to have a standalone box that creates the array and presents it to the hypervisor as a single volume. Then I can RDM it to the fileserver guest, and have a native filesystem on the storage box that isn't inside a datastore. This'll make recovery easier if the hypervisor ever dies, or if I just want to upgrade the hyp without having to do a host-side data migration.

And yes, I'm aware that there are off-the-shelf solutions that do this. However, the inexpensive ones I've found have no management at all, and basically only allow you to create a single array from a set of buttons on the front of the box. The expensive, enterprise-grade DASes do have management, naturally, but there's no middle-ground of reasonably priced storage enclosures with enough management to make them useful.

So if I could figure out how the 'device' side of a SATA interface is implemented and find a way to put a RAID array behind it, I could build my own storage server with an OS that presents to the hyp as a single volume over eSATA.

fatman1683
Jan 8, 2004
I'm thinking about building a thing:

Silverstone DS380 8-bay hotswap Mini-ITX Case

ASRock E3C224D4I-14S board (8-channel LSI 2308 onboard, 3x SFF-8087)

Flex ATX power supply

Custom Flex ATX to SFX power supply adapter bracket

The reasoning behind this build is as follows:

Lots of people who want small form-factor NASes are looking at the ASRock Atom boards like the C2750D4I and its 2550-based little brother:


At first glance these look like pretty nice boards: Mini-ITX, Avoton, ECC support, four full-sized DIMM slots, and a massive 12 SATA ports. Unfortunately, these boards have a major issue.

The twelve SATA ports on the D4I boards come from three different controllers. One is the on-package Avoton controller; the other two are consumer-grade Marvells. I'm not a fan of Marvell hardware in general, and the idea of running a large contiguous disk pool across three separate and completely different controllers (the two Marvells are different models) makes very little sense to me when there is a workable alternative.

Enter the alternative:


The ASRock Rack E3C224D4I-14S. An 'Extended-ITX' version of the regular E3C224D4I, this board is longer than the Mini-ITX specification by about an inch and a half, and that extra space is put to very good use: the board includes an onboard 8-channel LSI 2308 controller providing 8 SAS ports via two SFF-8087 multilane connectors, in addition to the C224 chipset's four SATA lanes, also broken out through a third SFF-8087 port. At ~$260, this board is a bargain considering that a standalone controller like an M1015 or 9201-8i will run you at least $100 used, $200+ new. Additionally, having the 8 SAS ports onboard saves the single PCIe x8 slot, which I plan on using for a 10GbE NIC at some point in the future, when 10GBASE-T switch prices come down out of the stratosphere.

Unfortunately, there's a big problem with this motherboard: it won't fit in the Silverstone case. That extra inch and a half intrudes into the mounting area for the SFX power supply by about an inch, making it impossible to mount the board and the power supply in the case at the same time without some extraordinary measures.

Enter extraordinary measures:


I designed this bracket in FreeCAD. It mounts to the case using the standard SFX mounting holes, and the Flex-ATX power supply mounts to it. This lifts the Flex-ATX power supply above the board, providing around 20mm of clearance underneath it where the edge of the board can sit comfortably (I hope). The new power supply stays mostly within the space that would be occupied by the SFX part, meaning that it shouldn't interfere with the PCIe slot or any of the board features. The only place where it runs out of that space is in length: a Flex-ATX power supply is about 25mm longer than an SFX, but based on the images I can find it looks like there's a fair bit of unused space in that direction and it shouldn't be an issue.

I've sent the bracket design to ProtoCase and am waiting on a quote. It won't be cheap just to buy one of them, but I'm hoping I can find some other people who want to build this system and set up a group buy to bring down the cost. If you're interested in this let me know.

e:

Skandranon posted:

You can probably find a machine shop locally that will do that for you cheaply. Or try a local college that has a machine shop course, they need things to do.

I haven't found a machine shop around here that does low-volume fabrication, but I'm still looking. I didn't think about the school angle, I'll look into that one. Thanks.

fatman1683 fucked around with this message at 20:22 on Nov 16, 2015

fatman1683
Jan 8, 2004

Thanks Ants posted:

What's the lip for? Remove that and it's just a CNCd plate. Alternatively have your hole CNCd out and leave the lip as something that gets folded out of the plate. At the moment it looks like the part needs welding or machining.

Supporting the weight of the power supply. The case itself has a similar lip. I thought about doing a bend there, but my OCD got the better of me since that edge could not then be recessed like the rest of it.

vvv I'm not a MechEng or anything, I don't know if the power supply's weight actually needs that lip to support it, but given how thin the upper section is (to get the maximum separation between the power supply and the board), I was worried about sagging, so I added the lip to allow the lower portion of the bracket to do more of the work.

If anyone has actual engineering creds and wants to chime in, I'd really love some informed feedback on that.
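Not an engineer either, but a rough cantilever-beam estimate suggests the static load probably isn't the problem. Every dimension below is a guess (shelf width, overhang, and PSU weight are made up, not measured from the bracket), so this is just an order-of-magnitude sketch:

```python
# Rough cantilever deflection estimate for the unsupported shelf edge.
# delta = F * L^3 / (3 * E * I), with I = b * t^3 / 12 for a flat plate.
# All dimensions are assumptions, not measurements of the actual bracket.

E = 200e9    # Young's modulus of steel, Pa
t = 1.52e-3  # 16 gauge cold-rolled steel thickness, m (~0.0598 in)
b = 0.100    # assumed shelf width, m
L = 0.050    # assumed unsupported overhang, m
F = 10.0     # assumed PSU weight on the edge, N (~1 kg)

I = b * t**3 / 12                    # second moment of area, m^4
deflection = F * L**3 / (3 * E * I)  # worst-case tip deflection, m

print(f"deflection ~ {deflection * 1000:.3f} mm")
```

If the static deflection really is in the tens of microns, the lip is belt-and-suspenders; vibration and handling during drive swaps would be the better argument for keeping it.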

fatman1683 fucked around with this message at 20:58 on Nov 16, 2015

fatman1683
Jan 8, 2004

IOwnCalculus posted:

Might be more cost-effective to just make the base material thicker, but I'm no ME either.

I specced it out at 16 gauge cold-rolled steel, if that's not enough I can go thicker but I don't know how thick is too thick.

e: Just got my quote back from ProtoCase:

~$65 for the bracket in 16ga with the lip, $70 in setup. I think I'm going to order this as a prototype and see how it works.

fatman1683 fucked around with this message at 21:51 on Nov 16, 2015

fatman1683
Jan 8, 2004

Skandranon posted:

Some parts are cheaper than others, and if he can reduce the design to "drill holes here, cut here" it can be done much more cheaply. Hell, if he does it with aluminum he can probably use a hack-saw and do it himself in an hour or two.

The first version I sent ProtoCase, which didn't have the lip and was specced to 18ga steel, was still over $100. I knew this wasn't going to be cheap going in, and I'm actually pretty happy with the price.

On this version I've told ProtoCase that the lip is to be welded, haven't discussed seam vs spot welding yet. I put in the order deposit yesterday and I'm waiting to hear from one of their engineers to help me finalize the design. The case and power supply should arrive this week so I can do some final measurements.

fatman1683
Jan 8, 2004

ElehemEare posted:

I built a home server/NAS combo last year and I was a dumb dumb; now Stablebit Scanner is flipping out because the load cycle count on the 2x2TB WD Greens I meant to replace is now at twice the threshold limit. I'm going to pick up an 8TB Red on Black Friday sale, but I'm also out of SATA ports. I want to throw a four-port SATA controller in so that I can actually hit full capacity of my case eventually, but I'm unsure on what to pick up.

I'm using Drivepool and folder duplication for redundancy so I don't need any RAID functionality built in. Is there any appreciable difference between a C$28 Syba PEX40064 or C$49 Syba PEX40057 and a C$100 Highpoint 640L? Am I just overthinking this?

I guess more appropriately: should I just be avoiding Marvell controllers in favor of something else?

Most of the difference between HBAs is going to be in driver support and compatibility. The lowest I would reasonably go is one of the low-end LSI HBAs; here's one for under $100.

fatman1683
Jan 8, 2004

Sheep posted:

Budget is under 500. I'm just trying to get a bunch of disks to present to an attached computer as raw devices so I can put them into an mdadm array. I've got this, which does exactly what I want, except it only holds four drives. While I could always just get a second one, I'd prefer to have them all in one enclosure for sanity's sake.

Not looking for any sort of NAS situation since I have Infiniband running here and I'm not at all interested in trying to shoehorn an IB card into some NAS setup.

https://www.amazon.com/gp/offer-listing/B005GYDMYG/ref=dp_olp_0?ie=UTF8&condition=all&qid=1424442321&sr=8-1

e: Newegg has a much better deal on this one
https://www.newegg.com/Product/Product.aspx?Item=N82E16817576012

e again: this is the version with the controller, but it says it supports JBOD
https://www.amazon.com/Mediasonic-ProRaid-H8R2-SU3S2-External-Enclosure/dp/B005GYDMYQ


https://www.amazon.com/Sans-Digital...0_&dpSrc=detail

fatman1683 fucked around with this message at 02:18 on Jun 2, 2018

fatman1683
Jan 8, 2004
I posted this in one of the IT threads, but then I realized that you guys might have an opinion as well:

I've been tasked with setting up a sort of searchable document library, mostly for PDFs but possibly some other formats too. I've researched a bunch of document management systems, but I don't need 90% of the functionality they offer, since I'm not doing change tracking or access control or anything like that.

Can anyone recommend a product that will let me set up a catalog of documents with full-text and metadata search, that I can make accessible over a simple web page? Can be on-premise or hosted, free or paid. I really don't want to build something myself at this point.

fatman1683
Jan 8, 2004
What are the largest drives that would be considered 'safe' to use in an 8-drive, RAIDZ2 vdev? Planning to finally get off my rear end and build a FreeNAS box this spring.

fatman1683
Jan 8, 2004

D. Ebdrup posted:

There's also a calculator that can do Mean Time To Resilver and Mean Time To Data Loss calculations based on Mean Time Between Drive Failure and Mean Time To Physical Replacement.

Thanks for this. With drive MTBFs in the hundreds of thousands of hours, it seems like I'd have to get up into two-digit numbers of 10TB drives before MTTDL drops into the ~10 year range, which I feel pretty ok about.
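For anyone curious where numbers like that come from: the classic (and very idealized) MTTDL formula for a double-parity group is MTBF³ / (N·(N−1)·(N−2)·MTTR²), i.e. data is lost only if a third drive dies while two rebuilds overlap. A sketch with made-up inputs (the 600k-hour MTBF and 48-hour rebuild window are assumptions, not vendor figures):

```python
# Idealized MTTDL for a double-parity (RAIDZ2-style) vdev.
# Ignores UREs and correlated failures, so treat it as an upper bound.

def mttdl_double_parity(mtbf_h, n_drives, mttr_h):
    return mtbf_h**3 / (n_drives * (n_drives - 1) * (n_drives - 2) * mttr_h**2)

mtbf = 600_000  # assumed drive MTBF, hours
mttr = 48       # assumed time to replace + resilver, hours
years = mttdl_double_parity(mtbf, 8, mttr) / (24 * 365)
print(f"8-drive RAIDZ2 MTTDL ~ {years:.2e} years")
```

The absurdly large result is exactly why MTTDL-only calculators run optimistic: in practice, read errors during resilver dominate the risk long before triple whole-drive failure does.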

I've also been looking at Xpenology, doing Btrfs over SHR-2, anyone have opinions about this setup? I'm mostly interested in the easy expandability (ZFS is probably still a couple of years away from having that in stable) and the more-refined user experience provided by a 'commercial' product.

fatman1683
Jan 8, 2004
Any opinions on whether a pair of E5-2603v4s are enough CPU for a medium (~20TB) FreeNAS box? I'm going to be building a new ESXi host and converting my old one into a standalone NAS. 12 cores at 1.7GHz has been marginal for virtualization, but I'm hoping it'll be enough grunt to manage storage.

fatman1683
Jan 8, 2004

Paul MaudDib posted:

:words:
If you want to stay on that system I would think strongly about upgrading your CPUs.

Thanks, I'm aware of the single-threaded elements of SMB and other such file-sharing protocols, but I don't have any experience working with them on such a significantly CPU-constrained system. Right now a RAID1 of WD Reds is my main bottleneck, so I haven't hit any point at which having more CPU would matter, but when I expand out to a large array I'm anticipating that I'll hit that point rather quickly. I'm probably going to hold off on the upgrade until the next generation of Xeon Scalables drops and hope that the price comes down on v4 E5s.

edit: Alternatively, I could try to pick up a used v2 system on the cheap from one of the refurb houses and sell my v4 kit. Odds on getting a decent price for a pair of 2603v4s, an X10 motherboard, and 64GB of DDR4?

fatman1683 fucked around with this message at 03:03 on Jan 8, 2020

fatman1683
Jan 8, 2004

Paul MaudDib posted:

what about swapping your processors out for 2640v3s? they're $75 a pop on ebay and that gets you 2.8 to 3.4 GHz on 8 cores. Or you can get 2637v3 for $65 and that gets you 3.6 to 3.7 (but only 4 cores). Max of 768GB on all these.

It's been a while since I've bought hardware on eBay, so I'm not sure I'd want to roll those dice, but one of the refurb places might have some v3 CPUs with a warranty of sorts. I'll look into it, thanks.

e:

Paul MaudDib posted:

Broadwell doesn't seem like a super important generation to hold out for to me. Early 14nm was baaaadddd (this is not 14+++++++) and clocks on most of the higher core count parts are abysmal unless you go for the very tippy top SKUs which are probably never going to be cheap. It is a bit lower power than Haswell, it supports 1.5 TB vs 768GB on Haswell, but it's way more expensive and for home users it doesn't seem like a big deal.

Yeah, noted. At the time I wanted to get whatever was current in the hopes of keeping it for as long as possible, and prices on Haswell hadn't started to drop yet. It's worked well enough for my purposes so far, but I definitely need to revisit it for this next round of upgrades.

fatman1683 fucked around with this message at 03:24 on Jan 8, 2020

fatman1683
Jan 8, 2004
Where are people getting used Supermicro chassis these days? I need to upgrade and I'd like to get out of my POS Rosewill and into a CSE-846.

fatman1683
Jan 8, 2004
ZFS peeps, what's generally considered the disk/array size threshold for stepping up from RAIDZ to RAIDZ2/3? This tool seems to indicate that a 12-disk array of 4TB disks with a 10^15 URE rating would be safe on RAID-5; does the same logic apply to RAIDZ? Are there other considerations besides disk failure during rebuild that would influence the choice of RAIDZ over a higher level?
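The RAID-5 comparison those tools make is basically a URE-during-rebuild calculation, which is easy to reproduce. Assuming '10e15' means one unrecoverable read error per 10^15 bits read (my reading of the spec, not the tool's documentation), the probability of hitting at least one URE while rebuilding the 12-disk array looks like:

```python
import math

# Probability of at least one URE while reading every surviving drive
# during a rebuild, treating UREs as independent per-bit events.
ure_rate = 1e-15                   # assumed: 1 error per 10^15 bits read
surviving = 11                     # 12-disk array with one failed drive
bits_read = surviving * 4e12 * 8   # 4TB per disk, in bits

p_ure = 1 - math.exp(-bits_read * ure_rate)
print(f"P(>=1 URE during rebuild) ~ {p_ure:.0%}")
```

On plain RAID-5 that would mean a failed rebuild; raidz instead flags the affected files and keeps going, which is part of why raidz doesn't map cleanly onto RAID-5 numbers.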

fatman1683
Jan 8, 2004

BlankSystemDaemon posted:

I wouldn't trust this calculator: raidz is not equivalent to raid5, because raid5 will die if a URE happens during a rebuild, whereas raidz will mark the affected file(s) as unrecoverable and keep on working.

There is an MTTDL RAID Reliability Calculator over at ServeTheHome which (ought to be called an availability calculator, and) lets you set MTBF, URE, capacity, sector size, disk quantity, number of volumes (ought to be vdevs), and the expected rebuild speed.
That should get you a much better answer, and it also includes both raidz2 and raidz3 explicitly.

Thanks! This looks like a much more comprehensive tool. Is there a good method for estimating rebuild speed? I know it's affected by a lot of factors, but is there a 'safe' number for 7.2k SAS disks I can use?
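As a first-order sanity check, you can at least bracket the resilver time between a sequential-write best case and an all-random worst case. The speeds below are round assumed figures for 7.2k drives, not benchmarks:

```python
# Resilver time bounds: capacity of the replaced disk divided by an
# assumed sustained rate. Real resilvers land somewhere in between.
capacity_bytes = 10 * 1e12  # hypothetical 10TB replacement disk

best_mb_s = 150   # assumed sequential write speed of a 7.2k SAS disk
worst_mb_s = 10   # pessimistic everything-is-random figure

best_hours = capacity_bytes / (best_mb_s * 1e6) / 3600
worst_hours = capacity_bytes / (worst_mb_s * 1e6) / 3600
print(f"best ~ {best_hours:.1f} h, worst ~ {worst_hours:.0f} h")
```

Plugging the pessimistic end into an MTTDL calculator as the rebuild speed gives a conservative risk estimate; anything the real pool does better than that is margin.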

fatman1683
Jan 8, 2004

BlankSystemDaemon posted:

It's more a question of the CPU in the system and the ZFS version than anything else.

If you check my post history ITT you'll see quite a few links to ways that the operations involved in resilvering have been vectorized in ZFS. If the ZFS implementation and CPU is new enough, it's entirely possible that the resilver speed will be limited by the write speed of the disk you're resilvering onto.
Anecdotally, real-world experience suggests to me that raidz3 rebuilds are no more than 3-4 times slower than that achievable by simple mirroring, but that's just based on half-remembered stuff from when I ran storage servers professionally.
Whether that's the number you want to aim for, though, is harder to judge. It might be worth trying to use the worst possible estimates for everything - and even in the worst-possible case where every read from every disk is random, you're still going to get about 10MBps.

Thanks! One thing I couldn't find an answer to is whether the resilvering operation is multithreaded. My current plan is to turn my old ESX box into the FreeNAS server, which is running on a pair of E5-2603 V4s, 1.7GHz 6-core. Slow as dogshit, but enough cores to be functional. Do you think this is going to be a significant bottleneck to the resilver and worth an upgrade, or should it be capable of capping disk write speed?


BlankSystemDaemon posted:

That, by the way, is why draid exists; it uses distributed spares in addition to distributed parity - so the resilver speed is much faster.

Ok, so according to these numbers, an 11-drive RAIDZ3 vdev seems like it would be a good balance of performance, capacity, and redundancy. Would adding a 12th drive as a draid spare be a good idea here? I could theoretically build a stripe set of two 11-drive Z3s, each with a spare, and fill up a 24-bay chassis. e: Looks like I misunderstood how draid works, and it seems like it's not really intended for this use case.

fatman1683 fucked around with this message at 01:24 on May 7, 2022

fatman1683
Jan 8, 2004

BlankSystemDaemon posted:

So it depends on how old the ZFS implementation in FreeNAS is (which I don't know), but AVX2-vectorized raidz resilver was added back on Nov 29, 2016, so even if it is single-threaded, it shouldn't be taking up much CPU time since your CPU has AVX2.

Again, I have to reiterate that I don't think "capping disk write speed" is what you should be expecting during a resilver. It assumes that every single record in your pool is written sequentially, that there's not a single stray read from anything else on the system, that all disks are 100% functional, and that they don't have any malignancies in their firmware.

I'm definitely not expecting to reach that speed, but if it's theoretically achievable I can use a conservative figure derived from that as a basis for calculating risk of data loss. I probably will run it as-is and do some benchmarks on the pool before I move data over, if it's not adequate I can upgrade the CPUs at that point.

BlankSystemDaemon posted:

As for draid, 12-disk raidz3 is over the point at which I'd be considering switching, as I think the recommendation is to have raidz go no wider than 9 disks.
I have two 15-wide draid3:11d:1s vdevs in a pool that I use as a local offline backup (the server in question also acts as a buildserver, occasionally, when I'm working on FreeBSD, because it has 2x Xeon E5-2667v2 and 260GB memory).

If you wanna read more about it, I suggest zpoolconcepts(7).

Ok thanks, I'll do some more research. I'm still a few months away from building this (hooray for unemployment!).


fatman1683
Jan 8, 2004
I'm working on my backup procedures for the new home servers I'm building over the next few months, and I think I've settled on a rotating pair of hard drives that I swap out weekly or so. The sticking point now is how to actually present them to the backup server. I'll most likely be running Veeam in a VM, and all of the removable hard drive docks I've looked at appear to just be straight SATA passthroughs, which I think means that I'd have to redo the RDM every time I swap the drives, which would be a hassle.

Is there such a thing as a hard drive dock that presents itself to the host, instead of the underlying disk, so the RDM will persist across swaps? The alternative would be to use a tape drive, but that's far more money than I really want to spend.
