Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Captain Tram posted:

I might not be exactly sure what I'm looking for. I'm trying to expand my total storage space while at the same time giving myself some level of fault tolerance. I plan to connect the drives to my network, but it seems like the cheapest way to do that is to use a DAS and plug it into my existing server, rather than build/buy a whole new server/NAS. This plug would have to be USB2, because that's pretty much all I have available to me in the mini, unless I wanted to invest in FW800.

Ok, so your server is a Mac mini? If that's the case, then yeah, you need either a DAS or a NAS of some sort.

I don't think a DAS is necessarily going to be cheaper than building a cheap NAS, which will give you more flexibility in the long run anyway. I'll leave it up to someone else to recommend parts, but if it were me, I'd build a $200-400 PC, put some flavor of Linux on it, and be done with it. Optionally, OpenIndiana or whatever the best OS for ZFS is nowadays.

Longinus00
Dec 29, 2005
Ur-Quan

jeeves posted:

According to FreeNAS's reporting, all 4GB are being used constantly, with no spiking or anything. Just a constant 4GB, over many hours, with 1GB put in reserve by the system I think.

Yeah, I think it's time for me to buy that second 4GB stick to bump the NAS up to 8GB then.

Hm. Looks like they took the nice 4GB Kingston memory module off the market at Newegg, with only a 2x4GB kit available now. Anyone using a similar ProLiant MicroServer want to buy the extra 4GB I won't need off of me, as I already have a 4GB stick in one slot?

That 4GB in use could mean anything. Actually dig down and figure out where the memory is being used. If the memory use is from a leak or something, buying more RAM will only delay the slowdown a little longer.
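
If it's FreeNAS 8 (FreeBSD underneath), a couple of quick checks would be something like this; note the ZFS ARC will happily grab most free RAM on its own, and that alone isn't a problem:

code:
# processes sorted by resident memory
top -o res

# how many bytes of RAM the ZFS ARC is currently using
sysctl kstat.zfs.misc.arcstats.size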

LamoTheKid posted:

Are BitTorrent downloads directly to a ZFS pool a bad idea? Would I be better served using my 30GB SSD as a ZIL/L2ARC disk? Or should I just not worry about this?

Assuming your BitTorrent client doesn't download sequentially (and it really shouldn't be), its access pattern is going to be hard for a COW filesystem to deal with. If you're only downloading/seeding at regular broadband speeds (i.e. much slower than your disk speeds) then you shouldn't worry about it too much. Just don't let that disk get too full.
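
If it does turn out to hurt, one common mitigation is giving in-progress downloads their own dataset with a smaller recordsize and moving finished files off of it (just a sketch, pool/dataset names made up):

code:
# dedicated dataset for in-progress torrents with a smaller recordsize
zfs create -o recordsize=16K tank/torrents

# moving a finished file to a normal dataset rewrites it sequentially
# at the default 128K recordsize
mv /tank/torrents/some-finished-file /tank/media/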

Corrupt Cypher
Jul 20, 2006
Hello all! I am looking at a backup/centralized storage solution for our small office (5ish people). What I need from the system is:

-Raid 1
-Differential syncing once laptops hit the network
-2TB would be quite adequate

People don't need to be able to work on files stored on the NAS, only grab them and work on them on their own computers. What I've been looking at is the Seagate Black Armor and the LG Super Multi N2A2 NAS based on some reviews I've seen.

My main question though is on the software side. I've looked at SyncBack and it appears that might work, but how is the stuff that comes bundled with these types of units? And if it's good enough, which manufacturer is the best?

Any guidance is much appreciated!

atomjack
May 17, 2003

Background: Ubuntu 11.04 64-bit in a Norco 4220 case, 2x64GB Kingston SSDs for boot (in software RAID-1), 6x1.5TB WD and 6x2TB Samsung F4s.

Got my replacement 2TB Samsung F4 drive in. RAID 5 recovery took about 6-7 hours (unmounted). It seems like with these drives it IS possible to set CCTL (their version of WD's TLER), although it is a volatile setting and loses its value when the system is turned off (but not hot rebooted?). Setting this value does NOT work on the drives while they are in an array, so I first had to stop the array (after deactivating the LVM array on top of it), and then the command worked. So, I added a script to /etc/init.d/ to run 'smartctl -l scterc,70,70 <device>' and ran update-rc.d with a priority of 2; that set the value for all 6 of the Samsung drives (using /dev/disk/by-id/), so hopefully it will set those values on boot, before mdadm assembles the array (I checked, and the init.d script for mdadm has a priority of 3).
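
The script itself is basically just a loop over the drives, something like this (adjust the glob to match how your drives show up under by-id):

code:
#!/bin/sh
# rough sketch of /etc/init.d/set-scterc: set 7-second error recovery
# on each Samsung F4 before mdadm assembles the array
for dev in /dev/disk/by-id/ata-SAMSUNG_HD204UI_*; do
    smartctl -l scterc,70,70 "$dev"
done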

The only problem is the drives don't seem to support using smartctl to GET the ERC value, so I'm just going to have to hope the init script works.

Also, kinda stoked that hot-swapping the drives on the Norco case works fine. When the 2TB drive died I immediately shut it down and ejected all the drives, and had the system back up a few days later (with the drives still not plugged in). After the replacement drive came in, I was able to plug all 6 drives back in, and mdadm assembled the degraded array with 5 of 6 devices automatically.

Next up: Finish transferring all the files from the 6x1.5TB array to the new array, then recreate the 6x1.5TB array (so I can use a 512k chunk size instead of 128k), then add it to the LVM array.

Thought I'd share in case it helps anyone else down the road.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

atomjack posted:

Got my replacement 2TB Samsung F4 drive in. RAID 5 recovery took about 6-7 hours (unmounted). It seems like with these drives it IS possible to set CCTL (their version of WD's TLER), although it is a volatile setting and loses its value when the system is turned off (but not hot rebooted?). Setting this value does NOT work on the drives while they are in an array, so I first had to stop the array (after deactivating the LVM array on top of it), and then the command worked. So, I added a script to /etc/init.d/ to run 'smartctl -l scterc,70,70 <device>' and ran update-rc.d with a priority of 2; that set the value for all 6 of the Samsung drives (using /dev/disk/by-id/), so hopefully it will set those values on boot, before mdadm assembles the array (I checked, and the init.d script for mdadm has a priority of 3).

The only problem is the drives don't seem to support using smartctl to GET the ERC value, so I'm just going to have to hope the init script works.

Hmm. I've been running several F4's in a mdadm RAID5 on Ubuntu 11.04 without bothering with this and haven't ever had a problem. Now I'm wondering if I should bother. When I originally got them, I thought I'd get around to researching if it was necessary, and if so, how to do it...but then I got sidetracked and never did it.

atomjack
May 17, 2003

Thermopyle posted:

Hmm. I've been running several F4's in a mdadm RAID5 on Ubuntu 11.04 without bothering with this and haven't ever had a problem. Now I'm wondering if I should bother. When I originally got them, I thought I'd get around to researching if it was necessary, and if so, how to do it...but then I got sidetracked and never did it.
It's possible it's not needed, but only two days after I got the drives and put them in RAID5, one of the drives popped out of the array. Ran smartctl on it which reported bad sectors. So, it could just be that that drive was bad, or maybe it was bad AND the cctl kicked in and dropped it out of the array too soon. I didn't want to take any chances. I'll check back in in like a week and report on the status of the array.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

atomjack posted:

It's possible it's not needed, but only two days after I got the drives and put them in RAID5, one of the drives popped out of the array. Ran smartctl on it which reported bad sectors. So, it could just be that that drive was bad, or maybe it was bad AND the cctl kicked in and dropped it out of the array too soon. I didn't want to take any chances. I'll check back in in like a week and report on the status of the array.

Well, I'll give you my experience: I've never had a drive get popped from an array, and I've got 3 F4's, two of which have been in there since right about the time the drives came out.

I've also got 6 2TB F3's in the server, and still haven't seen a problem.

I'm guessing the cctl thing isn't needed.

IOwnCalculus
Apr 2, 2003

I think md is pretty forgiving on drive error times. Due to the embarrassingly ghetto cabling / controller setup I have mine living on right now, I pop DMA errors in dmesg every once in a while, but they never drop the drive from the array altogether.

The worry of bit rot / the hilariously bad way this is cabled right now is part of why I really want to do a MicroServer + ZFS build to eventually replace it.

Corb3t
Jun 7, 2003

Can someone explain to me why most network attached drives are so slow to write to? That seems to be one of the biggest complaints about NAS drives.

What's the bottleneck?

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire

Corb3t posted:

Can someone explain to me why most network attached drives are so slow to write to? That seems to be one of the biggest complaints about NAS drives.

What's the bottleneck?

Either slow non-gigabit networks or poor performance from software raids and such.

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

jeeves posted:

Either slow non-gigabit networks or poor performance from software raids and such.

Or even bad cabling.

My wireless performance went from 5MB/s to 11MB/s after I rewired my network, and wired went from 78MB/s to 110MB/s. All from replacing one cable to the media server, really.

movax
Aug 30, 2008

Corb3t posted:

Can someone explain to me why most network attached drives are so slow to write to? That seems to be one of the biggest complaints about NAS drives.

What's the bottleneck?

Poor RAID/pseudo-RAID performance (CPU's fault sometimes in software RAID, check CPU usage on the server when you're writing a ton of data to it), or crappy Ethernet performance usually.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire
Basically there are a lot more possible speed bottlenecks with a NAS than when you just attach a second drive to your computer via a SATA cable. That's why people complain; most are used to the transfer speeds of the latter and ignorant of the former.

Longinus00
Dec 29, 2005
Ur-Quan

Corb3t posted:

Can someone explain to me why most network attached drives are so slow to write to? That seems to be one of the biggest complaints about NAS drives.

What's the bottleneck?

The cheap NAS units you can buy at retail stores are usually powered by a low power ARM SoC. If you want a high performance NAS then it might be more cost effective to make one yourself.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

PopeOnARope posted:

Or even bad cabling.

My wireless performance went from 5MB/s to 11MB/s after I rewired my network, and wired went from 78MB/s to 110MB/s. All from replacing one cable to the media server, really.

Was this a cable that was poorly crimped or possibly just bent too many times where it was dropping packets?

You say your increase came from when you replaced the cable to the media server. Was that increase only to the media server, or was that from a different device on the network to something other than the media server?

PopeOnARope
Jul 23, 2007

Hey! Quit touching my junk!

Moey posted:

Was this a cable that was poorly crimped or possibly just bent too many times where it was dropping packets?

You say your increase came from when you replaced the cable to the media server. Was that increase only to the media server, or was that from a different device on the network to something other than the media server?

The cable was a commercial one with absolutely no noticeable kinks along its length.

The increase was basically in communication between my laptop and the server, both over wired and wireless (the wired line for my laptop stayed the same).

I haven't bothered to test FTP to my PS3 yet. I might later. It used to top out at around 11MB/s. (wired)

conntrack
Aug 8, 2003

by angerbeet
I am also getting slowdowns now that my ZFS pool is getting full.

Getting a lot of CPU spikes that stall writes. Reads are going full gigabit.

ZFS would probably need a defrag and vdev rebalance tool to maintain performance when close to full.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire

conntrack posted:

I am also getting slowdowns now that my ZFS pool is getting full.

Getting a lot of CPU spikes that stall writes. Reads are going full gigabit.

ZFS would probably need a defrag and vdev rebalance tool to maintain performance when close to full.

Apparently ZFS only likes being at less than 80% full.

To be honest, the performance of my NAS with ZFS has been pretty subpar compared to what I'm used to. My recent contract job had me working with/editing files via network shares on a shittier server than my home ProLiant, but running Windows with software RAID-5, and even after adding another 4GB of RAM to max mine out at 8GB, editing/sorting files over the network is much slower.

If I didn't already have ~5.5TB of data and nowhere to copy it to, I'd probably just go with Win2008 + RAID5 on the thing. Oh well.

conntrack
Aug 8, 2003

by angerbeet

jeeves posted:

Apparently ZFS only likes being at less than 80% full.

To be honest, the performance of my NAS with ZFS has been pretty subpar compared to what I'm used to. My recent contract job had me working with/editing files via network shares on a shittier server than my home ProLiant, but running Windows with software RAID-5, and even after adding another 4GB of RAM to max mine out at 8GB, editing/sorting files over the network is much slower.

If I didn't already have ~5.5TB of data and nowhere to copy it to, I'd probably just go with Win2008 + RAID5 on the thing. Oh well.

Did you tweak tcp settings and samba?
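
The usual smb.conf suspects would be something along these lines (values are just a starting point, not gospel):

code:
[global]
    # bigger socket buffers, no Nagle delay
    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    # let Samba use sendfile and async I/O for large transfers
    use sendfile = yes
    aio read size = 16384
    aio write size = 16384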

Trapdoor
Jun 7, 2005
The one and only.
I'm in the process of setting up a Windows Server 2008 primarily for fileserving at home.

In my workstation I have a Highpoint RocketRAID 2320 8-port SATA RAID controller that has been working fine for the past couple of years, with a 3x1TB RAID0 and a 3x500GB RAID5. I am planning to move this RAID controller to the server I'm setting up, but I'm unsure which RAID level would be better suited to the general purpose of just serving files.

I've read up on it and have had suggestions for RAID1+0, but I don't know if I'm willing to sacrifice the cost/storage for the security. I mean, it's just files, right?

The server will be streaming content to various devices in the house, and won't see too much writing, maybe 400-500GB a month, so write-performance isn't really a limiting factor.

These are the pros/cons I have considered, are there any more you can think of?
RAID5:
+ Capacity
+ Price
- Redundancy
- Read/Write performance

RAID1+0:
+ Redundancy
+ Read/Write performance
- Capacity
- Price

I will also be running a RAID1 for files that require some reliability, so that's not an issue.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Trapdoor posted:

These are the pros/cons I have considered, are there any more you can think of?
RAID5:
+ Capacity
+ Price
- Redundancy
- Read/Write performance

RAID1+0:
+ Redundancy
+ Read/Write performance
- Capacity
- Price

You're not quite right there.

RAID5 (3 disks):
* 2 disks' capacity
* Tolerates 1 disk failure
* Stripes data across disks with parity for ~2 striped disks' worth of performance
* Costs 3 disks
* Slow to rebuild
* RAID 5 write hole, if you don't have a UPS

RAID1+0 (4 disks):
* 2 disks' capacity
* Tolerates 1 disk failure for sure, maybe 2 if it's set up as a stripe across two mirrors
* Stripes data with no parity for 2 striped disks' worth of performance
* Costs 4 disks
* Rebuilds more quickly
* No write hole

RAID5 (4 disks):
* 3 disks' capacity
* Tolerates 1 disk failure
* Stripes data across disks with parity for ~3 striped disks' worth of performance
* Costs 4 disks
* Slow to rebuild
* RAID 5 write hole, if you don't have a UPS

RAID 5 is best suited. RAID 0, like you have set up now, is a real :psyduck: because it cannot tolerate any drive failures at all.

Potrod
Aug 9, 2003
The Argus
Has anybody used a slim case or barebones PC when building their own NAS/fileserver? I want something with a 2-drive RAID-1 setup (mostly for backup), but would rather have something physically small that doesn't consume much power. Trouble is, those small cases don't seem to have 2 internal 3.5" bays for RAID 1.

Since my main purpose is backup, I might just get a 2-drive NAS (something like this), but I'm curious about WHS, and since a good NAS/HDD combo is going to be a few hundred bucks anyway, figured I'd see what my options are for a more powerful custom build.

Trapdoor
Jun 7, 2005
The one and only.

Factory Factory posted:

You're not quite right there.

RAID5 (3 disks):
* 2 disks' capacity
* Tolerates 1 disk failure
* Stripes data across disks with parity for ~2 striped disks' worth of performance
* Costs 3 disks
* Slow to rebuild
* RAID 5 write hole, if you don't have a UPS

RAID1+0 (4 disks):
* 2 disks' capacity
* Tolerates 1 disk failure for sure, maybe 2 if it's set up as a stripe across two mirrors
* Stripes data with no parity for 2 striped disks' worth of performance
* Costs 4 disks
* Rebuilds more quickly
* No write hole

RAID5 (4 disks):
* 3 disks' capacity
* Tolerates 1 disk failure
* Stripes data across disks with parity for ~3 striped disks' worth of performance
* Costs 4 disks
* Slow to rebuild
* RAID 5 write hole, if you don't have a UPS

RAID 5 is best suited. RAID 0, like you have set up now, is a real :psyduck: because it cannot tolerate any drive failures at all.

The 3 disk RAID0 I run right now contains files that can sustain a loss.
When you say the RAID1+0 "rebuilds more quickly", how "quicklier" does it rebuild?

What I meant by RAID5 having better capacity is that it grows faster than RAID1+0: you only need to add one drive to increase capacity, compared to two with RAID1+0.

I am probably going to run a UPS on the system, do you know what the price for a suitable UPS would be?

Trapdoor fucked around with this message at 23:03 on Jul 30, 2011

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Trapdoor posted:

The 3 disk RAID0 I run right now contains files that can sustain a loss.
When you say the RAID1+0 "rebuilds more quickly", how "quicklier" does it rebuild?

What I meant by RAID5 having better capacity is that it grows faster than RAID1+0: you only need to add one drive to increase capacity, compared to two with RAID1+0.

I am probably going to run a UPS on the system, do you know what the price for a suitable UPS would be?

2-5 hours instead of 4-24, depending on the controller and how much data there is.

You can get a sufficient one for about $100 for a low-power system with few accessories. APC and Cyberpower are the brands to look for. If you can get one with a USB connector cable, then you can configure your PC to shut down much like you would a laptop, as the UPS appears as an ACPI battery to the computer.

movax
Aug 30, 2008

My eVGA RMA board finally came in, megatron is alive again! Went from E6600 to i7-930, 24GB of RAM. Scrub performance went way up:

code:
~ [ zpool status tank       ] 10:16 PM
  pool: tank
 state: ONLINE
 scan: scrub in progress since Tue Aug  2 22:13:14 2011
    148G scanned out of 19.3T at 896M/s, 7h0m to go
    0 repaired, 0.75% done
Down to 1.41T free though, which sucks. Need to finish acquiring my 5K3000s and toss 'em in to expand the pool. Question for ZFS guys: My 1068E controllers don't do 3TB drives, but I was thinking of acquiring 4 3TB drives and making a RAID-Z1 vdev out of them and adding them to my pool. Will that slay my performance? It would end up as:
1x 6-drive RAID-Z2 (2TB Barracuda LP)
1x 6-drive RAID-Z2 (2TB Barracuda LP)
1x 4-drive RAID-Z1 (3TB Hitachi)

or should I just stick with my original plan and add the last 6-drive RAID-Z2?
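
For reference, tacking that vdev on would just be something like this (device names made up; zpool will likely complain about the mismatched raidz levels without -f):

code:
# add a new 4-disk raidz1 top-level vdev to the existing pool
# (-f because the pool's existing vdevs are raidz2)
zpool add -f tank raidz1 c7t0d0 c7t1d0 c7t2d0 c7t3d0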

e: write speeds went way up, maxing out at 120MB/s writes compared to the previous 70 or so. GigE limited!

movax fucked around with this message at 05:09 on Aug 3, 2011

Longinus00
Dec 29, 2005
Ur-Quan

movax posted:


...

Down to 1.41T free though, which sucks. Need to finish acquiring my 5K3000s and toss 'em in to expand the pool. Question for ZFS guys: My 1068E controllers don't do 3TB drives, but I was thinking of acquiring 4 3TB drives and making a RAID-Z1 vdev out of them and adding them to my pool. Will that slay my performance? It would end up as:
1x 6-drive RAID-Z2 (2TB Barracuda LP)
1x 6-drive RAID-Z2 (2TB Barracuda LP)
1x 4-drive RAID-Z1 (3TB Hitachi)

or should I just stick with my original plan and add the last 6-drive RAID-Z2?

Isn't RAIDZ performance limited to that of a single drive? If that's the case then you're just comparing the performance of the Hitachi vs. the Barracudas.

Also tables.

movax
Aug 30, 2008

Longinus00 posted:

Isn't RAIDZ performance limited to that of a single drive? If that's the case then you're just comparing the performance of the Hitachi vs. the Barracudas.
I don't recall off-hand, was more curious what weirdness would result from a pool with a wildly out-of-place vdev.

quote:

Also tables.

:downs: fixed, sorry!

teamdest
Jul 1, 2007
For anyone that is interested, I've been using ZFS on Linux (http://zfsonlinux.org/) for a little while now, and it seems pretty drat stable, and there doesn't seem to be any performance hit compared to the XFS/mdadm setup that I used to run.

It's obviously still beta software, but so far it seems to work fine, zpool/zfs commands behave as expected. I'm a little annoyed that setting up an encrypted array is a choice between:

cryptsetup each individual drive, enter 5 passwords to mount them all and then ZFS rebuilds it every boot (roughly the sequence sketched below)

-OR-

zpool over the bare drives, make a zvol and encrypt that, which then requires manual work in order to increase the size.
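
The first option ends up being roughly this on every boot (drive names made up):

code:
# unlock each member drive individually -- one passphrase prompt per disk
for d in sdb sdc sdd sde sdf; do
    cryptsetup luksOpen /dev/$d crypt_$d
done

# then import the pool that lives on the dm-crypt mappings
zpool import -d /dev/mapper tank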

I had initially planned to try out btrfs, but it currently doesn't support Raid5/6 style arrays.

Also, what are the current recommendations for SATA/SAS controller cards? My Perc-5i's worked out well for a while, but there were some finicky issues that eventually made me stop using them, and a LOT of raid cards use LSI JMicron chips which are not well supported under linux.

teamdest fucked around with this message at 04:40 on Aug 3, 2011

gregday
May 23, 2003

teamdest posted:

For anyone that is interested, I've been using ZFS on Linux (http://zfsonlinux.org/) for a little while now, and it seems pretty drat stable, and there doesn't seem to be any performance hit compared to the XFS/mdadm setup that I used to run.

It's obviously still beta software, but so far it seems to work fine, zpool/zfs commands behave as expected. I'm a little annoyed that setting up an encrypted array is a choice between:

cryptsetup each individual drive, enter 5 passwords to mount them all and then ZFS rebuilds it every boot

-OR-

zpool over the bare drives, make a zvol and encrypt that, which then requires manual work in order to increase the size.

I had initially planned to try out btrfs, but it currently doesn't support Raid5/6 style arrays.

Also, what are the current recommendations for SATA/SAS controller cards? My Perc-5i's worked out well for a while, but there were some finicky issues that eventually made me stop using them, and a LOT of raid cards use LSI chips which are not well supported under linux.

Good to know. I've been using the ZFS-FUSE implementation and the I/O overhead from FUSE is just abysmal.

conntrack
Aug 8, 2003

by angerbeet
Supposedly each vdev is limited to the performance of one drive.

So to get high performance you need several vdevs, and he has vdevs of similar performance, so I would not worry about the difference in speed? More annoying for performance is that ZFS is unable to automatically rebalance old data over all the vdevs.
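
You can see how I/O is actually being spread across the vdevs with:

code:
# per-vdev (and per-disk) I/O statistics, refreshed every 5 seconds
zpool iostat -v tank 5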

I did a duct tape script to try and rebalance; I don't know if it made things better or worse though. I copied all the folders and deleted the originals. That did some rebalancing, but what it does for fragmentation I don't know.
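
The "script" was really just copy-then-delete per directory, roughly this (paths made up):

code:
# rewrite each top-level directory so its blocks get re-spread over all vdevs
for d in /tank/media/*/; do
    new="${d%/}.new"
    cp -a "$d" "$new" && rm -rf "$d" && mv "$new" "${d%/}"
done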

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
On the topic of ZFS performance:
http://constantin.glez.de/blog/2010/06/closer-look-zfs-vdevs-and-performance
In early 2010 I did a bunch of research to make sure we didn't gently caress up our Thumper at work, and everything I found matches up with what he's saying, so I'd trust that.

That will answer questions about RAIDZ performance and mixed vdevs for movax.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
Last night I rebuilt my NAS, installed OpenIndiana and tried to import my single disk vdev on a USB drive and it throws this error:

code:
  pool: backup
    id: 16987408378758075283
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: [url]http://www.sun.com/msg/ZFS-8000-5E[/url]
config:

        backup      UNAVAIL  insufficient replicas
          c8t0d0p0  UNAVAIL  corrupted data
The USB drive was created with zfs-fuse on Ubuntu 11.04

Here's the output of "zdb -l /dev/dsk/c8t0d0p0"

code:
--------------------------------------------
LABEL 0
--------------------------------------------
    version: 23
    name: 'backup'
    state: 1
    txg: 12984
    pool_guid: 16987408378758075283
    hostid: 1463810815
    hostname: '********'
    top_guid: 2211610224871490532
    guid: 2211610224871490532
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 2211610224871490532
        path: '/dev/disk/by-id/usb-WD_My_Book_1110_574341563539393337333530-0:0'
        whole_disk: 0
        metaslab_array: 23
        metaslab_shift: 33
        ashift: 9
        asize: 999496876032
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 1
--------------------------------------------
    version: 23
    name: 'backup'
    state: 1
    txg: 12984
    pool_guid: 16987408378758075283
    hostid: 1463810815
    hostname: '******'
    top_guid: 2211610224871490532
    guid: 2211610224871490532
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 2211610224871490532
        path: '/dev/disk/by-id/usb-WD_My_Book_1110_574341563539393337333530-0:0'
        whole_disk: 0
        metaslab_array: 23
        metaslab_shift: 33
        ashift: 9
        asize: 999496876032
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 2
--------------------------------------------
    version: 23
    name: 'backup'
    state: 1
    txg: 12984
    pool_guid: 16987408378758075283
    hostid: 1463810815
    hostname: '******'
    top_guid: 2211610224871490532
    guid: 2211610224871490532
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 2211610224871490532
        path: '/dev/disk/by-id/usb-WD_My_Book_1110_574341563539393337333530-0:0'
        whole_disk: 0
        metaslab_array: 23
        metaslab_shift: 33
        ashift: 9
        asize: 999496876032
        is_log: 0
        create_txg: 4
--------------------------------------------
LABEL 3
--------------------------------------------
    version: 23
    name: 'backup'
    state: 1
    txg: 12984
    pool_guid: 16987408378758075283
    hostid: 1463810815
    hostname: '*******'
    top_guid: 2211610224871490532
    guid: 2211610224871490532
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 2211610224871490532
        path: '/dev/disk/by-id/usb-WD_My_Book_1110_574341563539393337333530-0:0'
        whole_disk: 0
        metaslab_array: 23
        metaslab_shift: 33
        ashift: 9
        asize: 999496876032
        is_log: 0
        create_txg: 4
Am I hosed? I think it has something to do with the path pointing to the disk by-id since this is OpenIndiana, but I have no idea if a) that's the problem and b) how to edit that if it's even possible. Anyone have some ideas? I've been googling all morning but it's been a lot to digest.

I tried importing with the -f option and it says there's a problem with the vdev, but the drive looks fine (as stated above).
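
I'm guessing the next thing to try is pointing import at a directory containing just that device, something like this (no idea yet if it actually helps):

code:
# point zpool import at a directory holding only the device in question
mkdir /tmp/usbpool
ln -s /dev/dsk/c8t0d0p0 /tmp/usbpool/c8t0d0p0
zpool import -d /tmp/usbpool -f backup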

frogbs
May 5, 2004
Well well well
So I'm looking to buy/build a 12TB DAS for video editing/archival footage for my small TV station. My only requirements are that it connect via FireWire 800; I don't think a NAS would be right for our facility and a SAN/Fibre Channel setup would be cost prohibitive. The best bang for my buck so far that I've come across would be this offering from OWC (the Mercury Elite Qx2). Does anyone have any other recommendations/thoughts?

Based on my calculations I could build the OWC unit myself for about $100 cheaper if I bought the enclosure and drives separately, but I'd be inclined to spend the extra $100 so that I can call OWC when something goes wrong.

jeeves
May 27, 2001

Deranged Psychopathic
Butler Extraordinaire

frogbs posted:

Based on my calculations I could build the OWC unit myself for about $100 cheaper if I bought the enclosure and drives separately, but I'd be inclined to spend the extra $100 so that I can call OWC when something goes wrong.

If this is for a professional client then yes, the $100 is worth it. Don't ever build things yourself for real jobs like that, especially if the difference is only $100 and you're not paying for it anyhow.

frogbs
May 5, 2004
Well well well

jeeves posted:

If this is for a professional client then yes, the $100 is worth it. Don't ever build things yourself for real jobs like that, especially if the difference is only $100 and you're not paying for it anyhow.

I agree. It's for my employer, so I think it'd make sense to just pay the extra $100 or so.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

frogbs posted:

I agree. It's for my employer, so I think it'd make sense to just pay the extra $100 or so.

How much actual storage do you need? You mentioned 12TB, but that is only a 4-bay unit, so 12TB is before any formatting/redundancy.

Is this just for archival, or are users going to be working from this as well?

frogbs
May 5, 2004
Well well well

Moey posted:

How much actual storage do you need? You mentioned 12TB, but that is only a 4-bay unit, so 12TB is before any formatting/redundancy.

Is this just for archival, or are users going to be working from this as well?
Someone will be working from this as well as storing old footage. I figure we'd use RAID 5 and thus get around 9TB from an initial 12TB of disks. Even 6-8TB after RAID 5 would be sufficient for now.

what is this
Sep 11, 2001

it is a lemur
You should probably use RAID6.

teamdest
Jul 1, 2007

what is this posted:

You should probably use RAID6.

At which point (a 4-drive RAID 6) you might as well use a striped pair of mirrors to avoid the write hole and probably boost performance a bit.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

frogbs posted:

Someone will be working from this as well as storing old footage. I figure we'd use RAID 5 and thus get around 9TB from an initial 12TB of disks. Even 6-8TB after RAID 5 would be sufficient for now.

As well as what others said, are you doing off-site backup of this? If this is for a TV station, the array failing and losing data could be devastating. Also, depending on what kind of work they are doing, I wonder about the performance of that enclosure and whether it will have a negative impact on their work (video editing?).
