|
How would one go about building an eSATA DAS? I'd like to have an external RAID box connected to my ESXi host, but the cheap off-the-shelf options don't have any kind of useful management interface, and there's a huge gap between the $200 models and the $2000+ models where there's basically nothing. Presumably you'd need some kind of controller card in the storage box that can act as a disk and present the array in the box to the host's controller, but I have no idea if such a thing exists as a standalone product, and I haven't found anything that advertises that capability. Anyone know if this is possible?
|
# ¿ Apr 14, 2013 04:12 |
|
|
PitViper posted:I think that depends. Do you want the box to handle the RAID management, or one of the VMs?

Basically I want a standalone box that creates the array and presents it to the hypervisor as a single volume. Then I can RDM it to the fileserver guest and have a native filesystem on the storage box that isn't inside a datastore. That'll make recovery easier if the hypervisor ever dies, or if I just want to upgrade the hypervisor without having to do a host-side data migration.

And yes, I'm aware that there are off-the-shelf solutions that do this. However, the inexpensive ones I've found have no management at all, and basically only let you create a single array from a set of buttons on the front of the box. The expensive, enterprise-grade DASes do have management, naturally, but there's no middle ground of reasonably priced storage enclosures with enough management to make them useful. So if I could figure out how the 'device' side of a SATA interface is implemented and find a way to put a RAID array behind it, I could build my own storage server with an OS that presents to the hypervisor as a single volume over eSATA.
|
# ¿ Apr 15, 2013 20:54 |
|
I'm thinking about building a thing:

Silverstone DS380 8-bay hotswap Mini-ITX case
ASRock E3C224D4I-14S board (8-channel LSI 2308 onboard, 3x SFF-8087)
Flex ATX power supply
Custom Flex ATX to SFX power supply adapter bracket

The reasoning behind this build is as follows: lots of people who want small form-factor NASes are looking at the ASRock Atom boards like the C2750D4I and its 2550-based little brother. At first glance these look like pretty nice boards: Mini-ITX, Avoton, ECC support, four full-sized DIMM slots and a massive 12 SATA ports. Unfortunately, they have a major issue: the twelve SATA ports on the D4I boards come from three different controllers. One is the on-package Avoton controller; the other two are consumer-grade Marvells. I'm not a fan of Marvell hardware in general, and the idea of trying to run a large contiguous disk pool across three separate and indeed completely different controllers (the two Marvells are different models) makes very little sense to me when there is a workable alternative.

Enter the alternative: the ASRock Rack E3C224D4I-14S. An 'Extended-ITX' version of the regular E3C224D4I, this board is longer than the Mini-ITX specification by about an inch and a half, and that space is very profitably filled by an onboard 8-channel LSI 2308 controller providing 8 SAS ports via two SFF-8087 multilane connectors, in addition to the C224 chipset's four SATA ports, also available through an SFF-8087 connector. At ~$260, this board is a bargain considering that a standalone controller like an M1015 or 9201-8i will run you at least $100 used, $200+ new. Additionally, having the 8 SAS ports onboard saves the single PCIe x8 slot, which I plan on using for a 10GbE NIC at some point in the future, when 10GBASE-T switch prices come down out of the stratosphere.

Unfortunately, there's a big problem with this motherboard: it won't fit in the Silverstone case.
That extra inch and a half intrudes into the mounting area for the SFX power supply by about an inch, making it impossible to mount the board and the power supply in the case at the same time without some extraordinary measures.

Enter extraordinary measures: I designed this bracket in FreeCAD. It mounts using the standard SFX mounting holes, and a Flex-ATX power supply mounts to it. This lifts the Flex-ATX power supply above the board, providing around 20mm of clearance underneath it where the edge of the board can sit comfortably (I hope). The new power supply stays mostly within the space that would be occupied by the SFX part, meaning it shouldn't interfere with the PCI slot or any of the board features. The only place where it leaves that space is in length: a Flex-ATX power supply is about 25mm longer than an SFX, but based on the images I can find it looks like there's a fair bit of unused space in that direction, so it shouldn't be an issue.

I've sent the bracket design to ProtoCase and am waiting on a quote. It won't be cheap to buy just one of them, but I'm hoping I can find some other people who want to build this system and set up a group buy to bring down the cost. If you're interested in this, let me know.

e: Skandranon posted:You can probably find a machine shop locally that will do that for you cheaply. Or try a local college that has a machine shop course, they need things to do.

I haven't found a machine shop around here that does low-volume fabrication, but I'm still looking. I didn't think about the school angle, I'll look into that one. Thanks.

fatman1683 fucked around with this message at 20:22 on Nov 16, 2015 |
# ¿ Nov 16, 2015 20:16 |
|
Thanks Ants posted:What's the lip for? Remove that and it's just a CNCd plate. Alternatively have your hole CNCd out and leave the lip as something that gets folded out of the plate. At the moment it looks like the part needs welding or machining.

Supporting the weight of the power supply. The case itself has a similar lip. I thought about doing a bend there, but my OCD got the better of me, since that edge could not then be recessed like the rest of it.

vvv I'm not a MechEng or anything, and I don't know if the power supply's weight actually needs that lip to support it, but given how thin the upper section is (to get the maximum separation between the power supply and the board), I was worried about sagging, so I added the lip to let the lower portion of the bracket do more of the work. If anyone has actual engineering creds and wants to chime in, I'd really love some informed feedback on that.

fatman1683 fucked around with this message at 20:58 on Nov 16, 2015 |
# ¿ Nov 16, 2015 20:49 |
|
IOwnCalculus posted:Might be more cost-effective to just make the base material thicker, but I'm no ME either.

I specced it out at 16 gauge cold-rolled steel; if that's not enough I can go thicker, but I don't know how thick is too thick.

e: Just got my quote back from ProtoCase: ~$65 for the bracket in 16ga with the lip, plus $70 in setup. I think I'm going to order this as a prototype and see how it works.

fatman1683 fucked around with this message at 21:51 on Nov 16, 2015 |
# ¿ Nov 16, 2015 21:26 |
|
Skandranon posted:Some parts are cheaper than others, and if he can reduce the design to "drill holes here, cut here" it can be done much more cheaply. Hell, if he does it with aluminum he can probably use a hack-saw and do it himself in an hour or two.

The first version I sent ProtoCase, which didn't have the lip and was specced to 18ga steel, was still over $100. I knew this wasn't going to be cheap going in, and I'm actually pretty happy with the price. On this version I've told ProtoCase that the lip is to be welded; we haven't discussed seam vs spot welding yet. I put in the order deposit yesterday and I'm waiting to hear from one of their engineers to help me finalize the design. The case and power supply should arrive this week so I can do some final measurements.
|
# ¿ Nov 17, 2015 19:12 |
|
ElehemEare posted:I built a home server/NAS combo last year and I was a dumb dumb; now Stablebit Scanner is flipping out because the load cycle count on the 2x2TB WD Greens I meant to replace is now at twice the threshold limit. I'm going to pick up an 8TB Red on Black Friday sale, but I'm also out of SATA ports. I want to throw a four-port SATA controller in so that I can actually hit full capacity of my case eventually, but I'm unsure on what to pick up.

Most of the difference between HBAs is going to be in driver support and compatibility. The lowest I would reasonably go would be one of the low-end LSI HBAs. Here's one for under $100.
|
# ¿ Nov 24, 2016 18:09 |
|
Sheep posted:Budget is under 500. I'm just trying to get a bunch of disks to present to an attached computer as raw devices so I can put them into an mdadm array.

I've got this, which does exactly what I want, except it only holds four drives. While I could always just get a second, I'd prefer to have them all in one enclosure for sanity's sake.

e: Newegg has a much better deal on this one https://www.newegg.com/Product/Product.aspx?Item=N82E16817576012

e again: this is the version with the controller, but it says it supports JBOD https://www.amazon.com/Mediasonic-ProRaid-H8R2-SU3S2-External-Enclosure/dp/B005GYDMYQ https://www.amazon.com/Sans-Digital...0_&dpSrc=detail

fatman1683 fucked around with this message at 02:18 on Jun 2, 2018 |
# ¿ Jun 1, 2018 21:07 |
|
I posted this in one of the IT threads, but then I realized that you guys might have an opinion as well: I've been tasked with setting up a sort of searchable document library, mostly for PDFs but possibly some other formats too. I've researched a bunch of document management systems, but I don't need 90% of the functionality they offer, since I'm not doing change tracking or access control or anything like that. Can anyone recommend a product that will let me set up a catalog of documents with full-text and metadata search that I can make accessible over a simple web page? It can be on-premise or hosted, free or paid. I really don't want to build something myself at this point.
|
# ¿ Aug 8, 2018 15:18 |
|
What are the largest drives that would be considered 'safe' to use in an 8-drive RAIDZ2 vdev? Planning to finally get off my rear end and build a FreeNAS box this spring.
|
# ¿ Dec 28, 2019 19:17 |
|
D. Ebdrup posted:There's also a calculator that can do Mean Time To Resilver and Mean Time To Data Loss calculations based on Mean Time Between Drive Failure and Mean Time To Physical Replacement.

Thanks for this. With drive MTBFs in the hundreds of thousands of hours, it seems like I'd have to get up into two-digit numbers of 10TB drives before MTTDL drops into the ~10-year range, which I feel pretty OK about.

I've also been looking at Xpenology, running Btrfs over SHR-2; anyone have opinions on that setup? I'm mostly interested in the easy expandability (ZFS is probably still a couple of years away from having that in stable) and the more refined user experience of a 'commercial' product.
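For anyone who wants to sanity-check those figures without the calculator, here's a back-of-envelope sketch of the textbook Markov-model MTTDL approximation. This is my own rough math, not necessarily what the linked calculator implements, and the drive count, MTBF, and repair window below are made-up example numbers:

```python
def mttdl_raidz1(n, mtbf_h, mttr_h):
    # Single parity: data loss needs a second drive failure during
    # the repair window of the first (classic Markov approximation).
    return mtbf_h**2 / (n * (n - 1) * mttr_h)

def mttdl_raidz2(n, mtbf_h, mttr_h):
    # Double parity: three overlapping failures are needed.
    return mtbf_h**3 / (n * (n - 1) * (n - 2) * mttr_h**2)

HOURS_PER_YEAR = 24 * 365

# Example figures: 8 drives, 1M-hour rated MTBF,
# 48-hour replacement-plus-resilver window.
for name, fn in (("RAIDZ1", mttdl_raidz1), ("RAIDZ2", mttdl_raidz2)):
    years = fn(8, 1_000_000, 48) / HOURS_PER_YEAR
    print(f"{name} MTTDL: {years:,.0f} years")
```

Note that rated MTBF is wildly optimistic for consumer drives, and this model ignores UREs and correlated failures entirely, so treat the output as an ordering tool (RAIDZ2 vs RAIDZ1, wide vs narrow), not a prediction.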
|
# ¿ Dec 29, 2019 02:06 |
|
Any opinions on whether a pair of E5-2603v4s are enough CPU for a medium (~20TB) FreeNAS box? I'm going to be building a new ESXi host and converting my old one into a standalone NAS. 12 cores at 1.7GHz has been marginal for virtualization, but I'm hoping it'll be enough grunt to manage storage.
|
# ¿ Jan 8, 2020 01:42 |
|
Paul MaudDib posted:
Thanks, I'm aware of the single-threaded elements of SMB and other such file-serving protocols, but I don't have any experience working with them on such a significantly CPU-constrained system. Right now a RAID1 of WD Reds is my main bottleneck, so I haven't hit any point at which having more CPU would matter, but when I expand out to a large array I'm anticipating that I'll hit that point rather quickly. I'm probably going to hold off on the upgrade until the next generation of Xeon Scalables drops and hope that the price comes down on v4 E5s.

edit: Alternatively, I could try to pick up a used v2 system on the cheap from one of the refurb houses and sell my v4 kit. Odds on getting a decent price for a pair of 2603v4s, an X10 motherboard and 64GB of DDR4?

fatman1683 fucked around with this message at 03:03 on Jan 8, 2020 |
# ¿ Jan 8, 2020 02:59 |
|
Paul MaudDib posted:what about swapping your processors out for 2640v3s? they're $75 a pop on ebay and that gets you 2.8 to 3.4 GHz on 8 cores. Or you can get 2637v3 for $65 and that gets you 3.6 to 3.7 (but only 4 cores). Max of 768GB on all these.

It's been a while since I've bought hardware on ebay, and I'm not sure I'd want to roll those dice, but one of the refurb places might have some v3 CPUs with a warranty of sorts. I'll look into it, thanks.

e: Paul MaudDib posted:Broadwell doesn't seem like a super important generation to hold out for to me. Early 14nm was baaaadddd (this is not 14+++++++) and clocks on most of the higher core count parts are abysmal unless you go for the very tippy top SKUs which are probably never going to be cheap. It is a bit lower power than Haswell, it supports 1.5TB vs 768GB on Haswell, but it's way more expensive and for home users it doesn't seem like a big deal.

Yeah, noted. At the time I wanted to get whatever was current in the hopes of keeping it as long as possible, and prices on Haswell hadn't started to drop yet. It's worked well enough for my purposes so far, but I definitely need to revisit it for this next round of upgrades.

fatman1683 fucked around with this message at 03:24 on Jan 8, 2020 |
# ¿ Jan 8, 2020 03:16 |
|
Where are people getting used Supermicro chassis these days? I need to upgrade and I'd like to get out of my POS Rosewill and into a CSE-846.
|
# ¿ May 4, 2022 00:11 |
|
ZFS peeps, what's generally considered the disk/array size threshold for stepping up from RAIDZ to RAIDZ2/3? This tool seems to indicate that a 12-disk array of 4TB disks with a 1-in-10^15 URE rating would be safe on RAID-5; does the same logic apply to RAIDZ? Are there other considerations besides disk failure during rebuild that would influence the choice of RAIDZ vs a higher parity level?
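The URE side of what those calculators do is easy to reproduce by hand. This is my own back-of-envelope Poisson approximation, not that tool's actual method, and it assumes every surviving disk gets read in full during the rebuild:

```python
import math

def p_ure_during_rebuild(surviving_disks, disk_bytes, bits_per_ure=1e15):
    # Probability of at least one unrecoverable read error while
    # reading all surviving disks end to end (Poisson approximation).
    bits_read = surviving_disks * disk_bytes * 8
    return 1 - math.exp(-bits_read / bits_per_ure)

# 12-disk single-parity array: one failure leaves 11 x 4TB to read back.
p = p_ure_during_rebuild(11, 4e12)
print(f"{p:.0%}")  # prints "30%"
```

Spec-sheet URE rates are worst-case and real errors tend to cluster, so this overstates the risk for healthy drives; it's still a decent argument for RAIDZ2 at that width, even though (as noted later in the thread) a URE costs RAIDZ a file rather than the whole array.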
|
# ¿ May 6, 2022 04:10 |
|
BlankSystemDaemon posted:I wouldn't trust this calculator, as raidz is not equivalent to raid5 because raid5 will die if a URE happens during a rebuild whereas raidz will mark the file(s) as unrecoverable and keep on working.

Thanks! This looks like a much more comprehensive tool. Is there a good method for estimating rebuild speed? I know it's affected by a lot of factors, but is there a 'safe' number for 7.2k SAS disks I can use?
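For a crude starting point (my own rule of thumb, not from any of the linked tools): a healthy 7.2k drive sustains very roughly 100-200 MB/s sequentially, and a resilver on a mostly-full vdev has to move on the order of one full disk's worth of data, so a lower bound is just capacity over throughput:

```python
def resilver_hours(disk_bytes, sustained_mb_per_s):
    # Optimistic floor: one full disk of data moved at a flat
    # sequential rate. Real resilvers are slower (seeks, pool load,
    # fragmentation), often by a large factor on busy pools.
    return disk_bytes / (sustained_mb_per_s * 1e6) / 3600

# A 4TB disk at a conservative 100 MB/s:
print(f"{resilver_hours(4e12, 100):.1f} hours")  # prints "11.1 hours"
```

If you want a 'safe' number to plug into an MTTDL calculator, take this floor and multiply by 2-4x to cover a loaded, fragmented pool plus the time it takes you to physically swap the drive.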
|
# ¿ May 6, 2022 14:52 |
|
BlankSystemDaemon posted:It's more a question of the CPU found in the system depending on the ZFS version, than anything else.

Thanks! One thing I couldn't find an answer to is whether the resilvering operation is multithreaded. My current plan is to turn my old ESX box into the FreeNAS server, which is running on a pair of E5-2603 v4s: 1.7GHz, 6 cores each. Slow as dogshit, but enough cores to be functional. Do you think this is going to be a significant bottleneck to the resilver and worth an upgrade, or should it be capable of capping disk write speed?

BlankSystemDaemon posted:That, by the way, is why draid exists; it uses distributed spares in addition to distributed parity - so the resilver speed is much faster.

fatman1683 fucked around with this message at 01:24 on May 7, 2022 |
# ¿ May 7, 2022 01:15 |
|
BlankSystemDaemon posted:So it depends on how old the ZFS implementation in FreeNAS is (which I don't know), but AVX2 vectorized raidz resilver was added back on Nov 29, 2016, so even if it is single-threaded, it shouldn't be taking up much CPU time since your CPU has AVX2.

I'm definitely not expecting to reach that speed, but if it's theoretically achievable I can use a conservative figure derived from it as a basis for calculating the risk of data loss. I'll probably run it as-is and do some benchmarks on the pool before I move data over; if it's not adequate I can upgrade the CPUs at that point.

BlankSystemDaemon posted:As for draid, 12-disk raidz3 is over the point at which I'd be considering switching, as I think the recommendation is to have raidz go no wider than 9 disks.

Ok, thanks, I'll do some more research. I'm still a few months away from building this (hooray for unemployment!).
|
# ¿ May 7, 2022 16:17 |
|
|
|
I'm working on my backup procedures for the new home servers I'm building over the next few months, and I think I've settled on a rotating pair of hard drives that I swap out weekly or so. The sticking point now is how to actually present them to the backup server. I'll most likely be running Veeam in a VM, and all of the removable hard drive docks I've looked at appear to be straight SATA passthroughs, which I think means I'd have to redo the RDM every time I swap the drives, which would be a hassle. Is there such a thing as a hard drive dock that presents itself to the host, instead of the underlying disk, so the RDM will persist across swaps? The alternative would be a tape drive, but that's far more money than I really want to spend.
|
# ¿ Nov 16, 2022 05:27 |