suck my woke dick
Oct 10, 2012

:siren:I CANNOT EJACULATE WITHOUT SEEING NATIVE AMERICANS BRUTALISED!:siren:

Put this cum-loving slave on ignore immediately!
Why do people like to use multi-drive arrays (RAID, in ZFS, whatever) so much anyway, unless they're running a datacenter or are unable to afford that one extra hard drive? Higher failure rate, less safe than a proper offsite backup or even a simple mirrored drive, can't just plug it into another computer. Performance is slightly increased but I don't really see how that matters for a home server or NAS of all things where your lovely network is probably limiting way before you hit the 130MB/s an HGST Deathstar can manage.

suck my woke dick fucked around with this message at 12:48 on Dec 20, 2015


NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

blowfish posted:

Why do people like to use multi-drive arrays (RAID, in ZFS, whatever) so much anyway, unless they're running a datacenter or are unable to afford that one extra hard drive? Higher failure rate, less safe than a proper offsite backup or even a simple mirrored drive, can't just plug it into another computer. Performance is slightly increased but I don't really see how that matters for a home server or NAS of all things where your lovely network is probably limiting way before you hit the 130MB/s an HGST Deathstar can manage.

If you have many hard drives, using a single drive for parity to protect them all at once is significantly cheaper than having to double your storage in order to keep a mirror. You're taking a risk on a simultaneous double-disk failure, but those are (assuming failures are roughly independent) quadratically less likely than a single-drive failure.
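(Back-of-envelope, purely illustrative: if a drive has a ~5% chance of dying in a given year, a lone drive loses your data with that same ~5% chance, while a single-parity array only loses data if a second drive dies during the week or so it takes to replace and resilver the first - call it a fraction of a percent - so the combined risk works out an order of magnitude or two lower. Drives from the same batch in the same box don't fail independently, so take the exact numbers with a grain of salt.)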

So I think RAID/ZFS make a lot of sense for redundancy. Maybe less so now that HDDs are getting bigger, but as long as you have 2+ they're still good.

However, I think consumer users should get into proper backups LONG before they should bother looking into redundancy. Redundancy protects you against hardware failure and nothing else. User error, malware, and general software fuckups are a much bigger concern for the average goon with a collection of movies, pictures and documents. That kind of data also doesn't change all that often, so even a weekly backup is probably perfectly fine.

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

blowfish posted:

Why do people like to use multi-drive arrays (RAID, in ZFS, whatever) so much anyway, unless they're running a datacenter or are unable to afford that one extra hard drive? Higher failure rate, less safe than a proper offsite backup or even a simple mirrored drive, can't just plug it into another computer. Performance is slightly increased but I don't really see how that matters for a home server or NAS of all things where your lovely network is probably limiting way before you hit the 130MB/s an HGST Deathstar can manage.

My use case is having a ton of data in one place, with some level of redundancy in case of failure. I think backups are important (and I'm working on getting those in place but can't afford 20TB in extra drives just yet), but my data's not so critical that it'd ruin my world if I lost it all. It's just an annoyance to lose. Because of that, local drive redundancy is acceptable for my use case. On my old system, I had a whole bunch of drives, totally separate, and data scattered across all of them. If one failed, I lost what was on that drive. If one fails in my array, I swap it, and the array is intact. I'd love to see the statistics behind the claim that arrays have inherently higher failure rates. For anything except RAID-0, where any one drive failing is a total loss of the array, failure rates that involve actual loss of data should be lower. There's very little extra strain on the drives themselves from being in an array, so it's basically down to individual drive failure rates.

Additionally, because it's ZFS, not hardware RAID, if I have a catastrophic failure that doesn't involve the drives (motherboard fails, for example), I can plug the entire array into any other machine that supports ZFS and be good to go. Hardware RAID that's tied to a specific controller card is the crazy thing for home use.
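For anyone who hasn't done it, the move itself is about as short as it sounds. A minimal sketch (the pool name is just a placeholder, and the force flag is only needed if the old box died before you could export cleanly):
code:
# on the old machine, if it still boots
zpool export tank
# move the disks to the new machine, then:
zpool import           # lists any pools ZFS can find on the attached disks
zpool import tank      # or: zpool import -f tank, if the pool was never cleanly exported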

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I suspect most large home NAS users are filling them with pirated content.

In that case it's pretty inconvenient to have to download stuff again, so making that less likely with parity drives is good, but you can get that stuff again, so full backups aren't really necessary.

BlankSystemDaemon
Mar 13, 2009



RAID - any kind, whether it's hardware via a dedicated RAID controller or software like Storage Spaces/ReFS, ZFS, btrfs, or mdadm - is for redundancy, so that disk failure(s) don't cost you data (which would otherwise mean whatever was on the failed disk(s) with a spanned array, or everything in the array if it's striped). As has been previously established in this very thread (and everywhere on the internet, hopefully): RAID is not backup, and should never be treated as such.

Backing your data up is a completely separate process that, depending on your budget and how critical the data is, can involve anything from just having two external disks that you back up to daily and swap out each week, to a multi-step process involving on-site on-line, on-site off-line, and off-site on-line copies (or any combination thereof), plus actual testing and verification of backups and a disaster recovery plan that's been tested and verified to work in the worst-case scenario.
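On ZFS the dead-simple version of that can be a snapshot piped to an external pool; a rough sketch with made-up pool/dataset names and an example device node, so adjust to taste:
code:
# one-off backup pool on an external disk (da1 is just an example)
zpool create backup /dev/da1
# snapshot the data and copy it over
zfs snapshot tank/data@2015-12-20
zfs send tank/data@2015-12-20 | zfs receive backup/data
# next week, only send what changed since the last snapshot
zfs snapshot tank/data@2015-12-27
zfs send -i tank/data@2015-12-20 tank/data@2015-12-27 | zfs receive backup/data
zpool export backup    # so the disk can be unplugged and stored off-site
If the backup dataset has been mounted and touched in between runs, the incremental receive will want -F.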

Thermopyle posted:

I suspect most large home NAS users are filling them with pirated content.
Even if you aren't filling up your drives with linux isos, having RAID and backup is probably the best possible way to treat your data if you don't want to lose it. I know people into amateur photography and music collecting who, despite storing their images as RAW and their music as FLAC, don't fill up a modern disk and still want RAID in addition to their backup solutions.

Another point that probably deserves to be made is that RAID doesn't have to be hard or complex. With Storage Spaces or a Synology/QNAP appliance NAS plus network file sharing, redundant storage is very accessible without any prior knowledge, if you just take a little bit of time to look into it.
Sure, us nerds in the thread take it to a whole other level, but that's kind of to be expected.

BlankSystemDaemon fucked around with this message at 15:46 on Dec 20, 2015

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

D. Ebdrup posted:

Faulted devices mean that zfs cannot see the disk at all (and in your case, it's obviously because you removed the disk - whereas when you just offline it, the disk is still present in the system). ZFS Health states are covered here.

Faulted pool means you're SOL - so I hope it's not that. :ohdear:

Well it's faulted but I can always put the old ada2 disk back in and then it's back to normal. I got a few nibbles on my NAS4Free thread, still haven't resolved the issue though:

BlankSystemDaemon
Mar 13, 2009



fletcher posted:

Well it's faulted but I can always put the old ada2 disk back in and then it's back to normal. I got a few nibbles on my NAS4Free thread, still haven't resolved the issue though:
As I see it, assuming it's because you didn't use persistent device IDs - and take this with a grain of salt, as I've always used FreeBSD (ever since ~15 years ago, when I got scared off by Debian's lack of documentation at the time and was handed the FreeBSD Handbook instead) - you can try importing the disks by ID with "zpool import -d /dev/disk/by-id zfsdata" (EDIT: I don't know if this will work; it may only work for pools created using the by-id method).

I don't know whether it's possible to change the paths for the disks to use by-id devices instead, but that may also be worth looking into if your only other alternative is re-creating the pool.

You're probably best served following the recommendations in the thread you created on their forum.
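On the FreeBSD side, the closest analogue to the Linux by-id directory is /dev/diskid/ (device nodes named after the drive serial numbers), assuming NAS4Free populates it. Another grain-of-salt sketch, using the same placeholder pool name as above:
code:
zpool export zfsdata
zpool import -d /dev/diskid zfsdata
zpool status            # vdevs should now show up by serial number rather than adaX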

BlankSystemDaemon fucked around with this message at 19:04 on Dec 20, 2015

IOwnCalculus
Apr 2, 2003





Yeah I think that last suggestion there - to try pulling the disk and replacing it without shutting it down - might be a good next step.

Any open SATA ports so you could try it without the hot swap concerns?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

IOwnCalculus posted:

Yeah I think that last suggestion there - to try pulling the disk and replacing it without shutting it down - might be a good next step.

Any open SATA ports so you could try it without the hot swap concerns?

It's tempting but my N40L doesn't support hot swap. I can't remember if there's another SATA port; I'll take a look.

fletcher fucked around with this message at 23:34 on Dec 20, 2015

some kinda jackal
Feb 25, 2003

 
 

I dunno, I guess the appeal of the 1813/1815 is that it's low wattage and I literally have to put zero effort into "building" it. If I really wanted a server as a NAS I guess I'd probably go with Nexenta or something.

Internet Explorer
Jun 1, 2005





So I realize this is the NAS thread, but I'd prefer not to create a thread in Haus for this. Does anyone have any good data recovery labs they use? Dead hard drive, not recoverable via software. Friend of the family, etc, etc. I thought there was a goon-approved one but I am having trouble finding info in the FAQ threads and I guess the SHSC wiki has been dead for a while?

Booley
Apr 25, 2010
I CAN BARELY MAKE IT A WEEK WITHOUT ACTING LIKE AN ASSHOLE
Grimey Drawer

Internet Explorer posted:

So I realize this is the NAS thread, but I'd prefer not to create a thread in Haus for this. Does anyone have any good data recovery labs they use? Dead hard drive, not recoverable via software. Friend of the family, etc, etc. I thought there was a goon-approved one but I am having trouble finding info in the FAQ threads and I guess the SHSC wiki has been dead for a while?

I've always sent people to DriveSavers. Not cheap, but they seem to get the job done.

Mr Shiny Pants
Nov 12, 2012

fletcher posted:

It's tempting but my N40L doesn't support hot swap. I can't remember if there's another SATA port; I'll take a look.

Did you flash your BIOS to the hacked one? That one does support hotswapping drives.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Mr Shiny Pants posted:

Did you flash your BIOS to the hacked one? That one does support hotswapping drives.

Nope, I was running the stock BIOS. I don't think it matters anymore though; I can no longer get back to a working pool.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Well by some strange miracle my pool is back to working again this morning. This is with 2 new drives and 3 old drives (the last known working config). Afraid I might breathe on it wrong now!

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
You might want to see about some different SATA cables, just in case.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

G-Prime posted:

You might want to see about some different SATA cables, just in case.

This is in an N40L. Would that be mini-SAS? After having to spend this much time messing with it, I'm kinda tempted to just buy a new box for the new drives. That'll end up being way more than I wanted to spend expanding my storage though.

BlankSystemDaemon
Mar 13, 2009



There's no need to buy a new server; just find somewhere to migrate your pool to temporarily and set it up again with persistent device identifiers, or switch to OpenIndiana/Solaris/FreeBSD or some other OS that runs ZFS and doesn't mess around with device IDs.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

D. Ebdrup posted:

There's no need to buy a new server; just find somewhere to migrate your pool to temporarily and set it up again with persistent device identifiers, or switch to OpenIndiana/Solaris/FreeBSD or some other OS that runs ZFS and doesn't mess around with device IDs.

NAS4Free is FreeBSD; I don't see any other disk identifiers in /dev, though. Perhaps I would need to create them?

I wonder if a zpool import -d as described here would have fixed my problem when the pool was faulted.

I also just remembered I was concerned about this a few years ago :)

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.

fletcher posted:

This is in an N40L. Would that be mini-SAS? After having to spend this much time messing with it, I'm kinda tempted to just buy a new box for the new drives. That'll end up being way more than I wanted to spend expanding my storage though.

I think so, but do your research and don't quote me on that, as I don't own one. I just know from experience that cables can be the source of instability and faulting. I've got one drive that's got an insane number of CRC errors recorded in SMART, and it was doing that from the moment I powered on the machine for the first time. Swapped the cable, and they came to a dead halt immediately. It's possible that with some jostling from you swapping drives, you've caused some sort of problem with the cables, and that'd be my go-to thing to replace first if you have any more problems with the array.
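If anyone wants to check their own drives for that, the counter in question is the UDMA CRC error attribute in SMART; something like the following with smartmontools (device name is just an example), where a raw value that keeps climbing usually points at the cable or backplane rather than the drive itself:
code:
smartctl -A /dev/ada0 | grep -i crc
# 199 UDMA_CRC_Error_Count  ...  the RAW_VALUE column is the running count of interface CRC errors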

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
The saga is over! I am resilvering the new ada2 now. The issue was with the zpool.cache file.

The fix was to first mount /cf as read/write:
code:
fletchn40l: ~ # cat /etc/fstab
/dev/da0a /cf ufs ro 1 1
proc /proc procfs rw 0 0
fletchn40l: ~ # mount -v
/dev/xmd0 on / (ufs, local, noatime, acls, writes: sync 341 async 24, reads: sync 679 async 12, fsid 1c0695543ee2f0e6)
devfs on /dev (devfs, local, multilabel, fsid 00ff007171000000)
/dev/xmd1 on /usr/local (ufs, local, noatime, soft-updates, acls, writes: sync 2 async 21, reads: sync 1121 async 0, fsid 1c0695548d3627ad)
procfs on /proc (procfs, local, fsid 01ff000202000000)
fletch_vdev on /mnt/fletch_vdev (zfs, NFS exported, local, nfsv4acls, fsid d055f767de7bbb48)
/dev/xmd2 on /var (ufs, local, noatime, soft-updates, acls, writes: sync 109 async 542, reads: sync 9 async 0, fsid bd197a562dad175a)
tmpfs on /var/tmp (tmpfs, local, fsid 02ff008787000000)
/dev/da0a on /cf (ufs, local, soft-updates, writes: sync 2 async 0, reads: sync 1 async 0, fsid 500795543a72c3ce)
fletchn40l: ~ # umount -f /cf
fletchn40l: ~ # mount -w /dev/da0a /cf
I also did this in between but I don't think it was necessary:
code:
fletchn40l: ~ # zpool export fletch_vdev
fletchn40l: ~ # zpool import -d /dev fletch_vdev
fletchn40l: ~ # zpool status
  pool: fletch_vdev
 state: ONLINE
  scan: resilvered 2.03M in 0h0m with 0 errors on Mon Dec 21 20:28:26 2015
config:

        NAME          STATE     READ WRITE CKSUM
        fletch_vdev   ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            ada0.nop  ONLINE       0     0     0
            ada1.nop  ONLINE       0     0     0
            ada2.nop  ONLINE       0     0     0
            ada3.nop  ONLINE       0     0     0
            ada4.nop  ONLINE       0     0     0

errors: No known data errors
fletchn40l: ~ # zdb -C
fletch_vdev:
    version: 5000
    name: 'fletch_vdev'
    state: 0
    txg: 20387127
    pool_guid: 4714842937177408258
    hostid: 2142099219
    hostname: 'fletchn40l.local'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 4714842937177408258
        children[0]:
            type: 'raidz'
            id: 0
            guid: 4622716132764498571
            nparity: 2
            metaslab_array: 30
            metaslab_shift: 36
            ashift: 12
            asize: 10001970626560
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 10751594757434406316
                path: '/dev/ada0.nop'
                phys_path: '/dev/ada0.nop'
                whole_disk: 1
                DTL: 189
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 2233689155040162778
                path: '/dev/ada1.nop'
                phys_path: '/dev/ada1.nop'
                whole_disk: 1
                DTL: 188
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 878045480102891058
                path: '/dev/ada2.nop'
                phys_path: '/dev/ada2.nop'
                whole_disk: 1
                DTL: 185
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 869598473616518127
                path: '/dev/ada3.nop'
                phys_path: '/dev/ada3.nop'
                whole_disk: 1
                DTL: 187
                create_txg: 4
            children[4]:
                type: 'disk'
                id: 4
                guid: 6785725410307062142
                path: '/dev/ada4.nop'
                phys_path: '/dev/ada4.nop'
                whole_disk: 1
                DTL: 179
                create_txg: 4
    features_for_read:
        com.delphix:hole_birth
Then delete the zpool.cache file and reboot:
code:
fletchn40l: ~ # rm /cf/boot/zfs/zpool.cache
fletchn40l: ~ # reboot
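Side note, mostly for future me: it looks like ZFS can also be pointed at its cache file via the pool's cachefile property, which might have avoided the delete-and-reboot dance. Untested on NAS4Free though, so grain of salt:
code:
# untested on NAS4Free - should rewrite the current pool config out to that path (with /cf mounted rw)
zpool set cachefile=/cf/boot/zfs/zpool.cache fletch_vdev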

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I need to add a bunch more storage to my network - I'm thinking at least 2x6TB disks, so something like $500. I think I've currently got something like 15 TB and I'm flirting with 90% used capacity.

Alternatively, I could move a bunch of it to nearline storage. Random access really isn't important to me, and hard drives eventually decay. Once you get past the (totally nutso) cost of the drive, the marginal cost of LTO-6 is really appealing (~$30/2.5 TB). I saw some half-height Quantum LTO-6 drives on eBay for like $1250 or something; how crazy am I for considering that?
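(Back-of-envelope on those numbers: the disks come out to roughly $42/TB and the tape to about $12/TB, so the $1250 drive only pays for itself somewhere past ~40TB of archived data - below that, plain disks win on cost before you even count the convenience.)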

Paul MaudDib fucked around with this message at 01:17 on Dec 24, 2015

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Paul MaudDib posted:

I need to add a bunch more storage to my network - I'm thinking at least 2x6TB disks, so something like $500. I think I've currently got something like 15 TB and I'm flirting with 90% capacity.

Alternatively, I could move a bunch of it to nearline storage. Random access really isn't important to me, and hard drives eventually decay. Once you get past the (totally nutso) cost of the drive, the marginal cost of LTO-6 is really appealing (~$30/2.5 TB). I saw some half-height Quantum LTO-6 drives on eBay for like $1250 or something; how crazy am I for considering that?

Tape isn't really nearline, unless you have a somewhat automated system for bringing stuff online. It's not crazy, though; it depends entirely on your access patterns and on whether you already have the drive.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Hi Guys, I'm CommieGIR and I'm a storage addict:

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Can you post a bigger picture? I can't quite make out that serial number on the back shelf :D

Also, can we get a closeup of that super rad cellphone looking thing in the back corner?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

fletcher posted:

Also, can we get a closeup of that super rad cellphone looking thing in the back corner?

From the corner of ages:



There is a Slimnote laptop hidden in the box in the back. All original packaging. 1MB of RAM.

CommieGIR fucked around with this message at 15:32 on Dec 24, 2015

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

CommieGIR posted:

From the corner of ages:



There is a Slimnote laptop hidden in the box in the back. All original packaging. 1MB of RAM.

Very cool! Thanks for sharing. Your storage addiction is quite impressive as well!

LRADIKAL
Jun 10, 2001

Fun Shoe

CommieGIR posted:

Hi Guys, I'm CommieGIR and I'm a storage addict:



What's your wattage at the wall?

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Jago posted:

What's your wattage at the wall?

Significant to be sure, but really only works out to ~$20 a month.


fletcher posted:

Very cool! Thanks for sharing. Your storage addiction is quite impressive as well!

I also have a DEC AlphaServer, an ancient Compaq ProLiant quad Pentium II, and an HP-UX pizzabox.

some kinda jackal
Feb 25, 2003

 
 
I was going to say, the old Compaq ProLiant drive caddy got me laughing. And brought back memories... But mostly laughing.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
I think you should see someone about your electronics hoarding problem

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

BobHoward posted:

I think you should see someone about your electronics hoarding problem

I actually just threw out a bunch of machines, but yes I'm a hoarder.

Mostly server stuff.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib
It reminds me of my dorm room in college. I heated it with nothing but computers, mostly discarded Sun and DEC servers/workstations.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

sharkytm posted:

It reminds me of my dorm room in college. I heated it with nothing but computers, mostly discarded Sun and DEC servers/workstations.

I have a Sun SPARCStation 5 that is just around for nostalgia purposes. The DEC AlphaServer works, but it's not worth running.

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull

CommieGIR posted:

I actually just threw out a bunch of machines, but yes I'm a hoarder.

Mostly server stuff.

You seem to have a few 5.25" half-height and full-height drives; this shames me, as the best I can do is a 3.5" half-height monolith (it is all black and very squared off) that was the last generation of HDD sold by Micropolis before they exited the HDD market.

(the only reason I haven't gotten rid of it tbqh is that it may or may not have data on it that I'd want to erase and I don't think I have anything which speaks SCSI anymore, so I'm pretty much a failure at hoarding properly, although I do honestly have way too many old useless computers)

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

BobHoward posted:

(the only reason I haven't gotten rid of it tbqh is that it may or may not have data on it that I'd want to erase and I don't think I have anything which speaks SCSI anymore

Physical destruction is guaranteed backwards-compatible, or so I hear.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
This reminds me, I have a bunch of PATA drives that relatives gave me to "wipe off". What'd be the best way to destroy them that won't make a huge mess, and that doesn't involve having to unscrew each one to get the platters out?

Just curious if there is a quick and easy way to make a drive unreadable. I have nothing that talks to PATA anymore and I don't want to get an adapter...

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Using them for target practice is a pretty solid method.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.

DrDork posted:

Using them for target practice is a pretty solid method.

That'd be messy though... Albeit satisfying.


PerrineClostermann
Dec 15, 2012

by FactsAreUseless

priznat posted:

That'd be messy though... Albeit satisfying.

I bet 5.56 would work beautifully
