|
Why do people like to use multi-drive arrays (RAID, in ZFS, whatever) so much anyway, unless they're running a datacenter or are unable to afford that one extra hard drive? Higher failure rate, less safe than a proper offsite backup or even a simple mirrored drive, can't just plug it into another computer. Performance is slightly increased but I don't really see how that matters for a home server or NAS of all things where your lovely network is probably limiting way before you hit the 130mb/s a hgst deathstar can manage.
suck my woke dick fucked around with this message at 12:48 on Dec 20, 2015 |
# ? Dec 20, 2015 12:41 |
|
|
# ? Apr 19, 2024 01:37 |
|
blowfish posted:Why do people like to use multi-drive arrays (RAID, in ZFS, whatever) so much anyway, unless they're running a datacenter or are unable to afford that one extra hard drive? Higher failure rate, less safe than a proper offsite backup or even a simple mirrored drive, can't just plug it into another computer. Performance is slightly increased but I don't really see how that matters for a home server or NAS of all things where your lovely network is probably limiting way before you hit the 130mb/s a hgst deathstar can manage.

If you have many hard drives, using a single drive for parity to protect them all at once is significantly cheaper than having to double your storage in order to keep a mirror. You're taking a risk on a simultaneous double-disk failure, but those are (by definition) quadratically less likely than a single drive failure. So I think RAID/ZFS make a lot of sense for redundancy. Maybe less so now that HDDs are getting bigger, but as long as you have 2+ they're still good.

However, I think consumer users should get into proper backups LONG before they should bother looking into redundancy. Redundancy protects you against hardware failure and nothing else. User error, malware, and general software fuckups are a much bigger concern for the average goon with a collection of movies, pictures and documents. That kind of data also doesn't change all that often, so even a weekly backup is probably perfectly fine.
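The parity-vs-mirror tradeoff being argued here can be put in rough numbers. This is a back-of-envelope sketch, not real reliability math (it ignores correlated failures, rebuild stress, and URE rates), and the 2% per-drive failure probability is an assumed figure for illustration only:

```python
# Back-of-envelope comparison: single-parity array (RAID5/RAID-Z1) vs mirror.
# p = assumed probability that a given drive fails during the at-risk window;
# the value below is illustrative, not measured.

def raidz1_overhead(n_drives):
    """Fraction of raw capacity spent on redundancy with one parity drive."""
    return 1 / n_drives

def mirror_overhead():
    """A mirror always spends half its raw capacity on redundancy."""
    return 0.5

def raidz1_loss_prob(n_drives, p):
    """Probability of losing the array: at least one drive fails, AND at
    least one of the remaining n-1 drives also fails before rebuild ends."""
    first = 1 - (1 - p) ** n_drives
    second = 1 - (1 - p) ** (n_drives - 1)
    return first * second

if __name__ == "__main__":
    p = 0.02  # assumed 2% chance a drive dies in the at-risk window
    for n in (4, 6, 8):
        print(f"{n} drives: overhead {raidz1_overhead(n):.0%}, "
              f"loss probability ~{raidz1_loss_prob(n, p):.4%}")
```

The "quadratically less likely" point shows up as the product of two small probabilities, and the capacity overhead shrinks as the array grows, while a mirror stays at 50%.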
|
# ? Dec 20, 2015 14:20 |
|
blowfish posted:Why do people like to use multi-drive arrays (RAID, in ZFS, whatever) so much anyway, unless they're running a datacenter or are unable to afford that one extra hard drive? Higher failure rate, less safe than a proper offsite backup or even a simple mirrored drive, can't just plug it into another computer. Performance is slightly increased but I don't really see how that matters for a home server or NAS of all things where your lovely network is probably limiting way before you hit the 130mb/s a hgst deathstar can manage.

My use case is having a ton of data in one place, with some level of redundancy in case of failure. I think backups are important (and I'm working on getting those in place but can't afford 20TB in extra drives just yet), but my data's not so critical that it'd ruin my world if I lost it all. It's just an annoyance to lose. Because of that, local drive redundancy is acceptable for my use case. On my old system, I had a whole bunch of drives, totally separate, and data scattered across all of them. If one failed, I lost what was on that drive. If one fails in my array, I swap it, and the array is intact.

I'd love to see statistics on where you're seeing that there are inherently higher failure rates. For anything except a RAID-0, where any one drive failing is a total loss of the array, failure rates that involve actual loss of data should be lower. There's very little extra strain on the drives themselves from being in an array, so it's basically down to individual drive failure rates.

Additionally, because it's ZFS, not hardware RAID, if I have a catastrophic failure that doesn't involve the drives (motherboard fails, for example), I can plug the entire array into any other machine that supports ZFS and be good to go. Hardware RAID that's tied to a specific controller card is the crazy thing for home use.
|
# ? Dec 20, 2015 15:04 |
|
I suspect most large home NAS users are filling them with pirated content. In that case it's pretty inconvenient to have to download stuff again, so making that less likely with parity drives is good, but you can get that stuff again, so full backups aren't really necessary.
|
# ? Dec 20, 2015 15:31 |
RAID - any kind, whether it's hardware via a dedicated HBA with controller or through software like Storage Spaces/ReFS, ZFS, btrfs, or mdadm - is for redundancy, so if you have disk failure(s) you don't lose data (any data on the failed disk(s) with spanned arrays, all data in the array if it's striped). As has been previously established in this very thread (and everywhere on the internet, hopefully): RAID is not backup, and should never be treated as such.

Backing your data up is a completely separate process that, depending on your budget and how critical the data is, can involve anything from just having two external disks that you create a daily backup to and swap out each week, to a multi-step process involving on-site on-line, on-site off-line, off-site on-line, and/or any combination thereof, plus actual testing and verification of backups, and a disaster recovery system that's tested and verified to be working in the worst-case scenario.

Thermopyle posted:I suspect most large home NAS users are filling them with pirated content.

Another point that probably deserves to be made is that RAID doesn't have to be hard or complex. With Storage Spaces or a Synology/QNAP appliance NAS plus network file sharing, having redundant data is very accessible if you just take a little bit of time to look into it without any prior knowledge. Sure, us nerds in the thread take it to a whole other level, but that's kind of to be expected.

BlankSystemDaemon fucked around with this message at 15:46 on Dec 20, 2015 |
|
# ? Dec 20, 2015 15:36 |
D. Ebdrup posted:Faulted devices mean that zfs cannot see the disk at all (and in your case, it's obviously because you removed the disk - whereas when you just offline it, the disk is still present in the system). ZFS Health states are covered here.

Well, it's faulted but I can always put the old ada2 disk back in and then it's back to normal. I got a few nibbles on my NAS4Free thread, still haven't resolved the issue though:
|
|
# ? Dec 20, 2015 17:00 |
fletcher posted:Well it's faulted but I can always put the old ada2 disk back in and then it's back to normal. I got a few nibbles on my NAS4Free thread, still haven't resolved the issue though:

I don't know whether it's possible to change the paths for the disks to use by-id devices instead, but that may also be worth looking into if your only other alternative is re-creating the pool. You're probably best suited following the recommendations in the thread you created on their forum.

BlankSystemDaemon fucked around with this message at 19:04 on Dec 20, 2015 |
|
# ? Dec 20, 2015 18:54 |
|
Yeah I think that last suggestion there - to try pulling the disk and replacing it without shutting it down - might be a good next step. Any open SATA ports so you could try it without the hot swap concerns?
|
# ? Dec 20, 2015 22:40 |
IOwnCalculus posted:Yeah I think that last suggestion there - to try pulling the disk and replacing it without shutting it down - might be a good next step.

It's tempting but my N40L doesn't support hot swap. I can't remember if there's another SATA port; I'll take a look.

fletcher fucked around with this message at 23:34 on Dec 20, 2015 |
|
# ? Dec 20, 2015 23:31 |
|
Don Lapre posted:Xpenology

I dunno, I guess the appeal of the 1813/1815 is that it's low wattage and I literally have to put zero effort into "building" it. If I really wanted a server as a NAS I guess I'd probably go with Nexenta or something.
|
# ? Dec 21, 2015 00:10 |
|
So I realize this is the NAS thread, but I'd prefer not to create a thread in Haus for this. Does anyone have any good data recovery labs they use? Dead hard drive, not recoverable via software. Friend of the family, etc, etc. I thought there was a goon-approved one but I am having trouble finding info in the FAQ threads and I guess the SHSC wiki has been dead for a while?
|
# ? Dec 21, 2015 00:22 |
|
Internet Explorer posted:So I realize this is the NAS thread, but I'd prefer not to create a thread in Haus for this. Does anyone have any good data recovery labs they use? Dead hard drive, not recoverable via software. Friend of the family, etc, etc. I thought there was a goon-approved one but I am having trouble finding info in the FAQ threads and I guess the SHSC wiki has been dead for a while?

I've always sent people to DriveSavers. Not cheap, but they seem to get the job done.
|
# ? Dec 21, 2015 03:32 |
|
fletcher posted:It's tempting but my N40L doesn't support hot swap. I can't remember if there's another SATA port I'll take a look.

Did you flash your BIOS to the hacked one? That one does support hotswapping drives.
|
# ? Dec 21, 2015 09:33 |
Mr Shiny Pants posted:Did you flash your bios to the hacked one? That one does support hotswapping drives.

Nope, I was running the stock BIOS. I don't think it matters anymore though, I can no longer get back to a working pool.
|
|
# ? Dec 21, 2015 09:44 |
Well by some strange miracle my pool is back to working again this morning. This is with 2 new drives and 3 old drives (the last known working config). Afraid I might breathe on it wrong now!
|
|
# ? Dec 21, 2015 23:45 |
|
You might want to see about some different SATA cables, just in case.
|
# ? Dec 22, 2015 02:42 |
G-Prime posted:You might want to see about some different SATA cables, just in case.

This is in an N40L. Would that be mini-SAS? After having to spend this much time messing with it, I'm kinda tempted to just buy a new box for the new drives. That'll end up being way more than I wanted to spend expanding my storage though.
|
|
# ? Dec 22, 2015 07:55 |
There's no need to buy a new server, just find somewhere to migrate your pool to temporarily and set up your pool with persistent device identifiers, or switch to OpenIndiana/Solaris/FreeBSD or some other OS that runs ZFS, which doesn't mess around with device ids.
|
|
# ? Dec 22, 2015 09:14 |
D. Ebdrup posted:There's no need to buy a new server, just find somewhere to migrate your pool to temporarily and set up your pool with persistent device identifiers, or switch to OpenIndiana/Solaris/FreeBSD or some other OS that runs ZFS, which doesn't mess around with device ids.

NAS4Free is FreeBSD, but I don't see any other disk identifiers in /dev. Perhaps I would need to make them? I wonder if a zpool import -d as described here would have fixed my problem when the pool was faulted. I also just remembered I was concerned about this a few years ago.
|
|
# ? Dec 22, 2015 11:12 |
|
fletcher posted:This is in a N40L. Would that be mini-sas? After having to spend this much time messing with it, I'm kinda tempted to just buy a new box for the new drives. That'll end up being way more than I wanted to spend expanding my storage though.

I think so, but do your research and don't quote me on that, as I don't own one. I just know from experience that cables can be the source of instability and faulting. I've got one drive that's got an insane number of CRC errors recorded in SMART, and it was doing that from the moment I powered on the machine for the first time. Swapped the cable, and they came to a dead halt immediately. It's possible that with some jostling from you swapping drives, you've caused some sort of problem with the cables, and that'd be my go-to thing to replace first if you have any more problems with the array.
|
# ? Dec 22, 2015 13:53 |
The saga is over! I am resilvering the new ada2 now. The issue was with the zpool.cache file. The fix was to first mount /cf as read/write: code:
code:
code:
|
|
# ? Dec 23, 2015 21:23 |
|
I need to add a bunch more storage to my network - I'm thinking at least 2x6TB disks, so something like $500. I think I've currently got something like 15 TB and I'm flirting with 90% used capacity. Alternately, I could move a bunch of it to nearline storage. Random access really isn't important to me and hard drives eventually decay. Once you get past the (totally nutso) cost of the drive the marginal cost of LTO-6 is really appealing (~$30/2.5 TB). I saw some half-height Quantum LTO-6 drives on eBay for like $1250 or something, how crazy am I for considering that? Paul MaudDib fucked around with this message at 01:17 on Dec 24, 2015 |
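The tape-versus-disk arithmetic here can be worked out from the numbers in the post. A quick sketch (2015-era prices taken from the post itself; the $1250 drive figure is the eBay guess, and cartridge capacity is native LTO-6, not compressed):

```python
# Rough cost comparison: used LTO-6 drive + cartridges vs. plain HDDs,
# using the (2015-era, illustrative) prices quoted in the post above.

TAPE_DRIVE_COST = 1250.0        # used half-height LTO-6 drive off eBay
TAPE_COST_PER_TB = 30.0 / 2.5   # ~$30 per 2.5 TB native LTO-6 cartridge
HDD_COST_PER_TB = 500.0 / 12.0  # ~$500 for 2x 6 TB disks

def tape_total(tb):
    """Total cost of storing `tb` terabytes on tape, drive included."""
    return TAPE_DRIVE_COST + TAPE_COST_PER_TB * tb

def hdd_total(tb):
    """Total cost of the same capacity on plain hard drives."""
    return HDD_COST_PER_TB * tb

def breakeven_tb():
    """Capacity at which cheap tape media has paid off the drive cost."""
    return TAPE_DRIVE_COST / (HDD_COST_PER_TB - TAPE_COST_PER_TB)

if __name__ == "__main__":
    print(f"break-even: ~{breakeven_tb():.0f} TB")
    for tb in (15, 30, 60):
        print(f"{tb:>3} TB archived: tape ${tape_total(tb):,.0f} "
              f"vs HDD ${hdd_total(tb):,.0f}")
```

By this sketch the drive only pays for itself somewhere north of 40 TB archived, so at ~15 TB it's less about saving money and more about wanting offline, shelf-stable copies.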
# ? Dec 24, 2015 01:12 |
|
Paul MaudDib posted:I need to add a bunch more storage to my network - I'm thinking at least 2x6TB disks, so something like $500. I think I've currently got something like 15 TB and I'm flirting with 90% capacity.

Tape isn't really nearline, unless you have a somewhat automated system for bringing stuff online. It's not crazy though, it depends entirely on your access patterns, and whether you have the drive already or not.
|
# ? Dec 24, 2015 01:15 |
|
Hi Guys, I'm CommieGIR and I'm a storage addict:
|
# ? Dec 24, 2015 03:54 |
Can you post a bigger picture? I can't quite make out that serial number on the back shelf.

Also, can we get a closeup of that super rad cellphone looking thing in the back corner?
|
|
# ? Dec 24, 2015 07:05 |
|
fletcher posted:Also, can we get a closeup of that super rad cellphone looking thing in the back corner?

From the corner of ages: There is a Slimnote laptop hidden in the box in the back. All original packaging. 1MB of RAM.

CommieGIR fucked around with this message at 15:32 on Dec 24, 2015 |
# ? Dec 24, 2015 15:24 |
CommieGIR posted:From the corner of ages:

Very cool! Thanks for sharing. Your storage addiction is quite impressive as well!
|
|
# ? Dec 24, 2015 17:52 |
|
CommieGIR posted:Hi Guys, I'm CommieGIR and I'm a storage addict:

What's your wattage at the wall?
|
# ? Dec 24, 2015 18:12 |
|
Jago posted:What's your wattage at the wall?

Significant to be sure, but it really only works out to ~$20 a month.

fletcher posted:Very cool! Thanks for sharing. Your storage addiction is quite impressive as well!

I also have a DEC AlphaServer, an ancient Compaq ProLiant quad Pentium II, and an HP-UX pizzabox.
|
# ? Dec 24, 2015 21:21 |
|
I was going to say, the old Compaq ProLiant drive caddy got me laughing. And brought back memories... But mostly laughing.
|
# ? Dec 24, 2015 22:29 |
|
I think you should see someone about your electronics hoarding problem
|
# ? Dec 24, 2015 22:34 |
|
BobHoward posted:I think you should see someone about your electronics hoarding problem

I actually just threw out a bunch of machines, but yes, I'm a hoarder. Mostly server stuff.
|
# ? Dec 24, 2015 23:21 |
|
It reminds me of my dorm room in college. I heated it with nothing but computers, mostly discarded Sun and DEC servers/workstations.
|
# ? Dec 25, 2015 00:31 |
|
sharkytm posted:It reminds me of my dorm room in college. I heated it with nothing but computers, mostly discarded Sun and DEC servers/workstations.

I have a Sun SPARCstation 5 that is just around for nostalgia purposes. The DEC AlphaServer works, but it's not worth running.
|
# ? Dec 25, 2015 01:15 |
|
CommieGIR posted:I actually just threw out a bunch of machines, but yes I'm a hoarder.

You seem to have a few 5.25" half-height and full-height drives; this shames me, as the best I can do is a 3.5" half-height monolith (it is all black and very squared off) that was the last generation of HDD sold by Micropolis before they exited the HDD market.

(The only reason I haven't gotten rid of it, tbqh, is that it may or may not have data on it that I'd want to erase, and I don't think I have anything which speaks SCSI anymore. So I'm pretty much a failure at hoarding properly, although I do honestly have way too many old useless computers.)
|
# ? Dec 25, 2015 08:50 |
|
BobHoward posted:(the only reason I haven't gotten rid of it tbqh is that it may or may not have data on it that I'd want to erase and I don't think I have anything which speaks SCSI anymore

Physical destruction is guaranteed backwards-compatible, or so I hear.
|
# ? Dec 25, 2015 09:10 |
|
This reminds me, I have a bunch of PATA drives that relatives gave me to "wipe off", what'd be the best way to destroy them that won't make a huge mess? And that doesn't involve having to unscrew each one to get the platters out. Just curious if there is a quick and easy way to make a drive unreadable anymore. I have nothing that talks to PATA anymore and I don't want to get an adapter..
|
# ? Dec 25, 2015 09:18 |
|
Using them for target practice is a pretty solid method.
|
# ? Dec 25, 2015 09:22 |
|
DrDork posted:Using them for target practice is a pretty solid method.

That'd be messy though... Albeit satisfying.
|
# ? Dec 25, 2015 09:42 |
|
|
|
priznat posted:That'd be messy though.. Albeit satisfying.

I bet 5.56 would work beautifully.
|
# ? Dec 25, 2015 10:15 |