movax
Aug 30, 2008

Moey posted:

Yea that's what was lingering in the back of my mind...I just couldn't bring myself to say it.

150GB, not TB? Depending on your budget, you could SSD that poo poo. If not, it would be criminal to do anything less than RAID10 with that small of a dataset (or hell, just do a 3-way mirror using ZFS or something if you don't want to buy hardware; you can even use those super-speedy 1TB disks for room to grow)!


Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Moey posted:

Any thoughts/opinions? Rather avoid buying an expensive Raid card...

Get a copy of NexentaStor, the free trial one that's good to 12TB used. Plop two 100GB SSDs in the fucker as a L2ARC and watch the numbers fly. With modern hardware and an Intel Gigabit NIC, you can probably push 20k IOPS and ~90MB/sec through it easily.
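For what it's worth, adding the SSDs as L2ARC is a one-liner at the zpool level; a rough sketch, assuming a pool named tank and whatever device IDs your SSDs actually show up as:
code:
zpool add tank cache c1t0d0 c1t1d0   # both SSDs become L2ARC (read cache) devices
zpool iostat -v tank 5               # watch the cache devices warm up under load
NexentaStor's web GUI should expose the same thing if you'd rather click than type.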

movax posted:

Yeah, I don't know what the gently caress, they are the first-gen Seagate 1.5TB drives w/ firmware patch. I did upgrade/stop under-volting the CPU, which helped boost write performance. Intel NIC + a PowerConnect I figure is halfway decent network infrastructure, so... :iiam:

So, if I'm settled on 3x6 RAID-Z2 + 2 hot spares...and want to shift my data over, what can I do? Is it possible to zpool export my current pool, and cobble together a machine with 8 SATA ports across mobo + SATA HBA and successfully zpool import the pool and read all the data off it?

Yeah, you can just export/import the zpool after moving it across machines. Also, add is NOT the same as attach, so be very very loving sure which one you use before you gently caress with it.
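Roughly, with made-up pool/disk names (the attach/add lines are alternatives shown side by side just to contrast them, not a sequence to run):
code:
zpool export tank                  # on the old box, before pulling the disks
zpool import                       # on the new box: lists any pools it can see on the attached disks
zpool import tank                  # actually imports it, data and all
zpool attach tank c0t0d0 c0t1d0    # attach = mirror the new disk onto an existing one
zpool add tank c0t1d0              # add = bolt on a new top-level vdev; you can't pull it back out later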

Run iostat -xen 5 and see what your drives are doing as you pull stuff over CIFS/NFS and to /dev/null locally. Each disk should be able to do about 35-50MB/sec, and over the network, you should be able to get ~80-100MB/sec. I just checked mine and it'll do ~85ish over the network using Windows 7 CIFS and a 3Com managed gigabit switch. No jumbo packets yet.

Methylethylaldehyde fucked around with this message at 09:03 on Nov 5, 2010

ClassH
Mar 18, 2008

necrobobsledder posted:

they're within about 20% of each other and writes easily saturate a gigabit ethernet connection at 40MBps, in fact.

Not sure why you think 40MBps is easily saturating a gigabit connection; gigabit tops out around 125MB/s raw, so saturating should be more like 80-100MBps. I have a $30 D-Link switch and onboard Realtek NICs and can push 100MBps without jumbo frames easily.

flyboi
Oct 13, 2005

agg stop posting
College Slice
I want to ditch my tower and go for a Mac mini, but the big thing holding me back is that I need a decent storage solution that will work with both the Mac mini and my HTPC, which runs Windows 7. Currently I just have the public folder on my hackintosh mounted on the HTPC, which seems to work fine. But the mini lacks the expandability that the current tower has.

So I have been looking at NASes and thought "christ, Drobo is expensive!" I am looking at the Drobo models that actually attach to the network, not USB; I want a true NAS, independent of any system in my setup. Reluctant to spend $800 on a diskless setup, I came up with the idea of using unRAID with an Atom. Does anyone know what kind of throughput you can expect with this? Is it decent enough that I could be writing to the drive from the Mac mini while streaming to the HTPC at the same time, or can an Atom just not keep up?

I came across this case yesterday:
http://www.silentpcreview.com/fractal-array

and if you coupled it with this super micro atom board:
http://www.supermicro.com/products/motherboard/ATOM/ICH9/X7SPE-H-D525.cfm

You could essentially have an extremely low-power 6x2TB unRAID setup. Total cost would be $500ish instead of $800 sans disks, and it would still not look completely hideous. So, anyone have experience with unRAID on an Atom? Is it worth pursuing?

Or ZFS? Whatever works I just don't want to spend $800 on a box that holds hard drives.

flyboi fucked around with this message at 15:29 on Nov 5, 2010

movax
Aug 30, 2008

Methylethylaldehyde posted:

Run iostat -xen 5 and see what your drives are doing as you pull stuff over CIFS/NFS and to /dev/null locally. Each disk should be able to do about 35-50MB/sec, and over the network, you should be able to get ~80-100MB/sec. I just checked mine and it'll do ~85ish over the network using Windows 7 CIFS and a 3Com managed gigabit switch. No jumbo packets yet.

Roger, will do when I get back home. I almost forgot to ask: what kind of performance penalty am I looking at for running 16 drives off 2 1068Es, and the last 4 off the mobo SATA controller (though 2 of those 4 will be hot spares)?

And I should have 0 penalties for creating a pool w/ 2 vdevs to start and then adding a 3rd identical vdev in a few months, correct?

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Methylethylaldehyde posted:

Get a copy of NexentaStor, the free trial one that's good to 12TB used. Plop two 100GB SSDs in the fucker as a L2ARC and watch the numbers fly. With modern hardware and an Intel Gigabit NIC, you can probably push 20k IOPS and ~90MB/sec through it easily.

Never heard of NexentaStor before, but it looks interesting. Will do some investigating. Thanks

wang souffle
Apr 26, 2002
Any opinions on the Seagate Barracuda LP drives for low power when working with ZFS and primarily large files? Are they known to cause problems like the WD Green drives?

http://www.newegg.com/Product/Product.aspx?Item=N82E16822148413

movax
Aug 30, 2008

wang souffle posted:

Any opinions on the Seagate Barracuda LP drives for low power when working with ZFS and primarily large files? Are they known to cause problems like the WD Green drives?

http://www.newegg.com/Product/Product.aspx?Item=N82E16822148413

I think if it isn't a 4k-sector drive, and it runs at a fixed 5900rpm without any head-parking or other green crap, it's probably suitable for ZFS use.

e: the datasheet reports 512b sectors, so I think you may be good to go...
e2: stop buying Hitachis you assholes, they keep going out of stock! :argh:

movax fucked around with this message at 17:07 on Nov 5, 2010

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

movax posted:

Roger, will do when I get back home. I almost forgot to ask: what kind of performance penalty am I looking at for running 16 drives off 2 1068Es, and the last 4 off the mobo SATA controller (though 2 of those 4 will be hot spares)?

And I should have 0 penalties for creating a pool w/ 2 vdevs to start and then adding a 3rd identical vdev in a few months, correct?

A single 1068E is overkill bandwidth wise, so no, two of them will not cause any problems.

You can start with two, and move to three later with no performance penalty. All it'll do is add the third vdev to the pool of writable area and start striping writes across it.


Also, you can mix and match vdev sizes, but it's not a good idea to mix vdev types.
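For movax's case that'd just be something like this down the road (controller/target names made up, obviously use whatever your new disks enumerate as):
code:
zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0   # third 6-disk raidz2 vdev
zpool status tank                  # shows it next to the existing vdevs; writes start striping across all three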



Wang Souffle: As long as the drive is either native 4k and not lying about it, or native 512b without the head parking fuckery the WD drives pull, then it's going to be fine with ZFS. As long as your SAS/SATA card is able to see the damned thing, ZFS is going to use it.

Moey: NexentaStor has some snazzy GUIs that make dealing with it a little easier. Great to throw on a flash drive and see if it's what you're looking for. I bet for $2000 you can set up a ZFS box that'll support all those users with capacity to spare. Heaven help you if it takes a poo poo and you can't replace it same day from Fry's or MicroCenter though.

wang souffle
Apr 26, 2002

Methylethylaldehyde posted:

As long as the drive is either native 4k and not lying about it, or native 512b without the head parking fuckery the WD drives pull, then it's going to be fine with ZFS. As long as your SAS/SATA card is able to see the damned thing, ZFS is going to use it.

That's the thing. I've been researching these drives for a couple days and can't find definitive word if they're 4k liars or not. And no idea how to find out about the head parking. Specs on the websites are very sparse for each manufacturer.

Edit: With all major drive makers moving to 4k sectors, you'd figure OpenIndiana would handle this smoothly by now. Or does it, and the misreporting is what's causing all the issues?

movax
Aug 30, 2008

wang souffle posted:

That's the thing. I've been researching these drives for a couple days and can't find definitive word if they're 4k liars or not. And no idea how to find out about the head parking. Specs on the websites are very sparse for each manufacturer.

Edit: With all major drive makers moving to 4k sectors, you'd figure OpenIndiana would handle this smoothly by now. Or does it, and the misreporting is what's causing all the issues?

I looked at the datasheet for your Barracuda LP drives; they are 512-byte sector drives.

wang souffle
Apr 26, 2002

movax posted:

I looked at the datasheet for your Barracuda LP drives; they are 512-byte sector drives.

Strange, this link has a mention of "advanced format" in the bottom right. Way to make it confusing, Samsung.

movax
Aug 30, 2008

wang souffle posted:

Strange, this link has a mention of "advanced format" in the bottom right. Way to make it confusing, Samsung.

Ah, I looked at this: http://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_lp.pdf

But it's possible that the 512 listed there is after emulation...probably the only way to be sure is to e-mail Seagate and ask them. Then post the answer here so that we may all know!
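If you'd rather interrogate the drive than wait on Seagate, hanging it off a Linux box and poking it with hdparm/smartmontools usually tells you something; a rough sketch (device name is a placeholder, and a drive doing 512b emulation can still lie here):
code:
hdparm -I /dev/sdX | grep -i 'sector size'   # newer hdparm prints logical vs physical sector size
smartctl -i /dev/sdX                         # model/firmware info; recent versions also report sector size
smartctl -A /dev/sdX | grep -i load_cycle    # watch Load_Cycle_Count climb to catch aggressive head parking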

Triikan
Feb 23, 2007
Most Loved
I know this isn't a coupons/deals thread, but I figured this is relevant to the subject. Newegg has 2tb Samsung spinpoint 5400rpm drives on sale for $60 each.
http://www.newegg.com/Product/Product.aspx?nm_mc=AFC-SlickDeals&cm_mmc=AFC-SlickDeals-_-NA-_-NA-_-NA&Item=N82E16822152245
coupon code: EMCZYNW48

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Triikan posted:

I know this isn't a coupons/deals thread, but I figured this is relevant to the subject. Newegg has 2tb Samsung spinpoint 5400rpm drives on sale for $60 each.
http://www.newegg.com/Product/Product.aspx?nm_mc=AFC-SlickDeals&cm_mmc=AFC-SlickDeals-_-NA-_-NA-_-NA&Item=N82E16822152245
coupon code: EMCZYNW48

They use the 512b sector emulation fuckery the Western Digitals do. They're poo poo for ZFS. Real cheap though.

Drevoak
Jan 30, 2007

Triikan posted:

I know this isn't a coupons/deals thread, but I figured this is relevant to the subject. Newegg has 2tb Samsung spinpoint 5400rpm drives on sale for $60 each.
http://www.newegg.com/Product/Product.aspx?nm_mc=AFC-SlickDeals&cm_mmc=AFC-SlickDeals-_-NA-_-NA-_-NA&Item=N82E16822152245
coupon code: EMCZYNW48

Would getting two of those be good for a newbie jumping into this? I'm looking at getting the D-Link DNS-323 for media storage and playback.

md10md
Dec 11, 2006

Methylethylaldehyde posted:

They use the 512b sector emulation fuckery the Western Digitals do. They're poo poo for ZFS. Real cheap though.

They might be poo poo but are they at least usable? I was planning on using them for media storage and I'll never need to pull more than 50MB/s through them over GigE. Performance I can deal with but stability and reliability are two things I really can't sacrifice.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

md10md posted:

They might be poo poo but are they at least usable? I was planning on using them for media storage and I'll never need to pull more than 50MB/s through them over GigE. Performance I can deal with but stability and reliability are two things I really can't sacrifice.

As long as you can deal with the array occasionally hardlocking for a few minutes, they work great. Not sure if the Samsung ones do the same poo poo as the WD ones do, but we'll see.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I'm thinking about moving on from WHS.

My main requirements are:
* Handle many (16 right now) disks of varying capacities (500GB to 2TB) with a total of over 17TB of raw disk space.
* One of my favorite features of WHS is not having to worry about multiple partitions...it's just one big pool of storage.
* Some sort of parity to protect from at least 1 (the more the better) drive failure.
* The main purpose of this storage is to store and stream HD video in the home. Streaming isn't too big of a bandwidth hog with hard drives on a gigabit network, but I do copy multi-gigabyte files to/from the array quite often so the closer it comes to saturating gigabit, the better.

Is this raid/lvm/mdadm linux thing still a cool thing to do? Is this guide from the OP still accurate/up-to-date/the best?

I was thinking that a linux thing would be best for me since I do lots of python development, and run several server apps written in python on my server...

The main reservation I have right now is that, while I won't have any problems figuring out how to set this up, I'm not terribly interested in futzing with it every day, and that's one thing WHS has provided me...I set it up and never have to think about it.

Also, I will be running this on a fairly powerful machine (P55 mobo/Core 2 Quad/4GB RAM)...does this have any implications for which distro I should use? I'm most familiar with Ubuntu.

DLCinferno
Feb 22, 2003

Happy

Thermopyle posted:

I'm thinking about moving on from WHS.

My main requirements are:
* Handle many (16 right now) disks of varying capacities (500GB to 2TB) with a total of over 17TB of raw disk space.
* One of my favorite features of WHS is not having to worry about multiple partitions...it's just one big pool of storage.
* Some sort of parity to protect from at least 1 (the more the better) drive failure.
* The main purpose of this storage is to store and stream HD video in the home. Streaming isn't too big of a bandwidth hog with hard drives on a gigabit network, but I do copy multi-gigabyte files to/from the array quite often so the closer it comes to saturating gigabit, the better.

Is this raid/lvm/mdadm linux thing still a cool thing to do? Is this guide from the OP still accurate/up-to-date/the best?

I was thinking that a linux thing would be best for me since I do lots of python development, and run several server apps written in python on my server...

The main reservation I have right now is that, while I won't have any problems figuring out how to set this up, I'm not terribly interested in futzing with it every day, and that's one thing WHS has provided me...I set it up and never have to think about it.

Also, I will be running this on a fairly powerful machine (P55 mobo/Core 2 Quad/4GB RAM)...does this have any implications for which distro I should use? I'm most familiar with Ubuntu.

You should be fine with almost all your assumptions except potentially the actual RAIDing of your drives. Be aware that unlike WHS, which distributes data across any combination of drive sizes, mdadm will require you to choose the smallest drive within the array as the size to use for each of the devices that array is built from. This means if you have one 500GB drive and 15 2TB drives, you'll waste 1.5TB on each of the 15. The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size.

Assuming this isn't too much of a burden, you can proceed with the rest of your plan, and if you use lvm you'll be able to combine all your mdadm arrays into a single big pool. Your computer should be plenty powerful enough to handle this and will probably get fairly close to saturating your gig network during reads, especially with that many spindles. Ubuntu is a fine choice for an OS. For reference, one of my servers is running Ubuntu Server edition with a 4 disk mdadm array of 7200rpm 1TB Seagates and I can get about 80-85MB/s transfer, with maybe a 20% cpu hit on the Core2Duo 2.16GHz (I think that's the cpu if I remember right).

Once you set it up and configure samba or whatever you're going to use to access the data, you can pretty much ignore it and it will just work. Make sure to set up mdadm notifications though; if you lose a disk you'll want to get an email or something right away so you can replace it.
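A rough sketch of the whole stack on Ubuntu, with placeholder device names and one array per group of same-size drives (adjust counts/levels for your actual disks):
code:
# one RAID5 per group of equal-size drives
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]1
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sd[fgh]1
# glue the arrays into one big pool with LVM
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n media storage
mkfs.ext4 /dev/storage/media
# email alerts when a disk drops out
echo "MAILADDR you@example.com" >> /etc/mdadm/mdadm.conf
mdadm --monitor --scan --daemonise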

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
Welp, in the spirit of complicating my life, I have done the following:
1) Set up a server 2008 instance via VirtualBox on my openindiana system
2) While moderately to amazingly drunk, proceeded to figure out how the gently caress to get Active Directory and the various DNS bits working so I could join my solaris box to the AD authentication system
3) Learned how to edit poo poo in Vi.
4) Joined the openindiana box to a domain hosted within the openindiana instance. This bizarre mixture of ouroboros and Matryoshka doll will certainly one day bite me in the rear end hard.
5) Spent 5 hours arguing with idmap, chown and chgrp, ACLs and a case of beer in order to get my newly hosed with AD users to map properly to the solaris users and get permissions set up so I could actually gently caress with stuff properly.


At long last I have fixed the file/folder permissions to the point where I can actually have the rest of my house have their own private folders as well as access the public ones.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

DLCinferno posted:

The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size.
Another way to accomplish the same is to split all the drives into suitably sized partitions, create arrays from the partitions and then combine them with LVM. I'm using an extreme version of this scenario with all my drives split to 10+ partitions. I did it for flexibility when changing and adding drives before RAID expansion was a practical option in Linux.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Methylethylaldehyde posted:

5) Spent 5 hours arguing with idmap, chown and chgrp, ACLs and a case of beer in order to get my newly hosed with AD users to map properly to the solaris users and get permissions set up so I could actually gently caress with stuff properly.
/usr/bin/chmod A=everyone@:rwxpdDaARWcCos:fd:allow zfsnamehere
zfs set sharesmb=on poolnamehere/zfsnamehere

then navigate to the parent (like \\openindiana) right click the folder, and set the required AD permissions.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

adorai posted:

/usr/bin/chmod A=everyone@:rwxpdDaARWcCos:fd:allow zfsnamehere
zfs set sharesmb=on poolnamehere/zfsnamehere

then navigate to the parent (like \\openindiana) right click the folder, and set the required AD permissions.

That was the easy part. The oval office part was trying to trick Vbox into running each separate XP instance as a separate user, so the files it creates keep the user/group permissions required to actually use/delete files it downloads.

DLCinferno
Feb 22, 2003

Happy

Saukkis posted:

Another way to accomplish the same is to split all the drives into suitably sized partitions, create arrays from the partitions and then combine them with LVM. I'm using an extreme version of this scenario with all my drives split to 10+ partitions. I did it for flexibility when changing and adding drives before RAID expansion was a practical option in Linux.

True, but I didn't recommend that because you need to be very clever about how you choose your RAID levels on the partition arrays and which ones go into the same array; otherwise a single drive failing could end up wiping out an entire array.

In a simple example, assume two 500GB drives and one 1TB drive. Partition the 1TB in half and create a RAID5 array across the four partitions. Unfortunately, if that 1TB drive goes down, it will effectively kill two devices in the array and render it useless.

I'd be curious to see what your partition/array map looks like - it must have taken a while to set up properly if you have over ten partitions on some disks?


Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

DLCinferno posted:

You should be fine with almost all your assumptions except potentially the actual RAIDing of your drives. Be aware that unlike WHS, which distributes data across any combination of drive sizes, mdadm will require you to choose the smallest drive within the array as the size to use for each of the devices that array is built from. This means if you have one 500GB drive and 15 2TB drives, you'll waste 1.5TB on each of the 15. The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size.

Assuming this isn't too much of a burden, you can proceed with the rest of your plan, and if you use lvm you'll be able to combine all your mdadm arrays into a single big pool. Your computer should be plenty powerful enough to handle this and will probably get fairly close to saturating your gig network during reads, especially with that many spindles. Ubuntu is a fine choice for an OS. For reference, one of my servers is running Ubuntu Server edition with a 4 disk mdadm array of 7200rpm 1TB Seagates and I can get about 80-85MB/s transfer, with maybe a 20% cpu hit on the Core2Duo 2.16GHz (I think that's the cpu if I remember right).

Once you set it up and configure samba or whatever you're going to use to access the data, you can pretty much ignore it and it will just work. Make sure to set up mdadm notifications though; if you lose a disk you'll want to get an email or something right away so you can replace it.

Thanks. This is helpful. I've got enough 2TB and 1.5TB drives, but I'm going to have a problem with only having two 1TB, two 750GB, one 500GB, and one 400GB.

Hrmph.

DLCinferno
Feb 22, 2003

Happy

Thermopyle posted:

Thanks. This is helpful. I've got enough 2TB and 1.5TB drives, but I'm going to have a problem with only having two 1TB, two 750GB, one 500GB, and one 400GB.

Hrmph.

In that case, you do actually have enough drives to safely do what Saukkis suggested. For example, if you didn't mind losing 150GB, you could create 250GB partitions on each of the drives and build four RAID5 arrays from those partitions. Each array would have only one partition per drive, so you could lose an entire disk without losing any data. A little more complex to set up, but it would work.
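A sketch of that layout with hypothetical device names; the only rules are 250GB partitions on every disk and never more than one partition per physical disk in any one array:
code:
# after carving each drive into 250GB partitions with fdisk/parted:
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[bcdefg]1
mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[bcdef]2
# ...and so on for the remaining partitions, then combine the md devices with LVM as above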


movax
Aug 30, 2008

Methylethylaldehyde posted:

Run iostat -xen 5 and see what your drives are doing as you pull stuff over CIFS/NFS and to /dev/null locally. Each disk should be able to do about 35-50MB/sec, and over the network, you should be able to get ~80-100MB/sec. I just checked mine and it'll do ~85ish over the network using Windows 7 CIFS and a 3Com managed gigabit switch. No jumbo packets yet.

Results:
code:
                          extended device statistics       ---- errors --- 
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    4.3    2.3  281.7   33.3  0.0  0.0    1.1    1.0   0   0   0   0   0   0 c7d0
    0.0    0.0    1.9    0.0  0.0  0.0    1.9    8.9   0   0   0   0   0   0 c7d1
    0.0    0.0    1.3    0.0  0.0  0.0    3.0   10.1   0   0   0   0   0   0 c10d0
    0.5    2.1   23.4   29.4  0.0  0.0    0.1    5.5   0   0   0   0   0   0 c9t0d0
    0.5    2.1   22.8   29.4  0.0  0.0    0.1    6.0   0   0   0   0   0   0 c9t1d0
    0.5    2.1   23.2   29.3  0.0  0.0    0.1    5.5   0   0   0   0   0   0 c9t2d0
    0.5    2.1   22.6   29.4  0.0  0.0    0.1    5.3   0   0   0   0   0   0 c9t3d0
    0.5    2.1   22.9   29.4  0.0  0.0    0.1    5.4   0   0   0   0   0   0 c9t4d0
    0.5    2.1   22.7   29.4  0.0  0.0    0.1    5.6   0   0   0   0   0   0 c9t5d0
    0.5    2.1   23.0   29.3  0.0  0.0    0.1    5.6   0   0   0   0   0   0 c9t6d0
    0.5    2.1   23.0   29.3  0.0  0.0    0.1    5.5   0   0   0   0   0   0 c9t7d0
Not sure what the columns mean; SCP upload from Macbook over GigE (5400rpm disk though). Also, doing iostat -xen 5, every few prints would have all 0.0s printed, which I found odd.

e: SMB from a Mac:
code:
movax@megatron:~$ iostat -xen
                            extended device statistics       ---- errors --- 
    r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b s/w h/w trn tot device
    3.1    2.0  209.9   22.1  0.0  0.0    1.1    0.9   0   0   0   0   0   0 c7d0
    0.0    0.0    1.1    0.0  0.0  0.0    1.9    8.9   0   0   0   0   0   0 c7d1
    0.0    0.0    0.7    0.0  0.0  0.0    3.0   10.1   0   0   0   0   0   0 c10d0
    2.8    4.0  143.6  232.1  0.0  0.0    0.3    6.6   0   1   0   0   0   0 c9t0d0
    2.8    4.0  142.8  232.0  0.0  0.0    0.3    6.9   0   1   0   0   0   0 c9t1d0
    2.9    4.1  142.6  232.0  0.0  0.0    0.2    6.7   0   1   0   0   0   0 c9t2d0
    2.9    4.0  142.2  232.1  0.0  0.0    0.2    6.4   0   1   0   0   0   0 c9t3d0
    2.8    4.0  142.9  232.1  0.0  0.0    0.3    6.7   0   1   0   0   0   0 c9t4d0
    2.8    4.0  142.5  232.1  0.0  0.0    0.3    6.6   0   1   0   0   0   0 c9t5d0
    2.9    4.0  142.5  232.1  0.0  0.0    0.3    6.8   0   1   0   0   0   0 c9t6d0
    2.8    4.0  143.2  232.0  0.0  0.0    0.3    6.8   0   1   0   0   0   0 c9t7d0

movax fucked around with this message at 03:43 on Nov 8, 2010

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

DLCinferno posted:

In that case, you do actually have enough drives to safely do what Saukkis suggested. For example, if you didn't mind losing 150GB, you could create 250GB partitions on each of the drives and build four RAID5 arrays from those partitions. Each array would have only one partition per drive, so you could lose an entire disk without losing any data. A little more complex to set up, but it would work.



Oh yeah, that would work. Thanks!

Now, I just have to work out some sort of plan for moving 12 TB of data from WHS to Ubuntu.

My first thought is to use an older P4 PC as a temporary server, install Ubuntu, move as many hard drives as possible from WHS into it...up to my free space, copy data over the network to the Ubuntu server to fill those up, remove more from WHS, rinse, repeat.

The problem with that plan is that Ubuntu actually needs to end up on my current WHS machine. Are the arrays I create on one machine easily transferable to another machine with different hardware?

DLCinferno
Feb 22, 2003

Happy

Thermopyle posted:

Are the arrays I create on one machine easily transferable to another machine with different hardware?

Sure are. Literally, unplug from one machine, plug into the new one, and run one mdadm --assemble command per array. As long as the computer can see the same physical drives/partitions, it doesn't matter what hardware it's running.
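i.e., on the new box, something along the lines of (array and device names are placeholders):
code:
mdadm --assemble --scan                                              # auto-detects arrays from the on-disk superblocks
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1   # or spell one out explicitly per array
mdadm --detail /dev/md0                                              # sanity-check the state before mounting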

That's one of the main reasons I like ZFS/mdadm at home - no need to buy pricey hardware controllers, but you get most of the same benefits.

Telex
Feb 11, 2003

I may have asked this before and forgotten the answer but I bitched out and went with a Windows Home Server instead of doing the right thing and sucking it up for an OS with a real ZFS implementation...

I'm looking at OpenIndiana now, which looks neat and has a GUI installer; the lack of one was the major stopping point in the FreeBSD install that made me give up and go back to my comfortable Windows.

On the ZFS implementation of OpenIndiana, if I have a group of 5 2TB hard drives right now with the plan to upgrade the remaining space in my case with more 2TB hard drives as funds and time allows (grand total of I think 9 or 10 drives), will I be able to add those drives to a ZFS pool without any data loss or do I have to treat it like a raid 5 and add everything at the same time as I build the raid?

I guess I'm not totally against having a couple of different mounts for this array of drives but I would like to keep things as simple as possible.

Also, with regards to OpenIndiana, is it a fairly easy process to create Windows-compatible shares for hooking up my XBMC and two desktops without requiring a lot of hassle on the user ends? Never done it, totally paranoid. I'm installing on a VM right now to mess around and make sure I understand what I'm doing before I do it for real, but I have a feeling making Windows shares is going to fail spectacularly since I don't have a Windows machine at work to test with anyway. (OSX whee)

eta: Also (also) ZFS and those WD Green WD20EARS 2TB 64MB drives, good/bad? I realize now that they could be problematic. I have 5 of them so far. :(

Telex fucked around with this message at 19:45 on Nov 8, 2010

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Telex posted:

I may have asked this before and forgotten the answer but I bitched out and went with a Windows Home Server instead of doing the right thing and sucking it up for an OS with a real ZFS implementation...

I'm looking at OpenIndiana now, which looks neat and has a GUI installer; the lack of one was the major stopping point in the FreeBSD install that made me give up and go back to my comfortable Windows.

On the ZFS implementation of OpenIndiana, if I have a group of 5 2TB hard drives right now with the plan to upgrade the remaining space in my case with more 2TB hard drives as funds and time allows (grand total of I think 9 or 10 drives), will I be able to add those drives to a ZFS pool without any data loss or do I have to treat it like a raid 5 and add everything at the same time as I build the raid?

I guess I'm not totally against having a couple of different mounts for this array of drives but I would like to keep things as simple as possible.

Also, with regards to OpenIndiana, is it a fairly easy process to create Windows-compatible shares for hooking up my XBMC and two desktops without requiring a lot of hassle on the user ends? Never done it, totally paranoid. I'm installing on a VM right now to mess around and make sure I understand what I'm doing before I do it for real, but I have a feeling making Windows shares is going to fail spectacularly since I don't have a Windows machine at work to test with anyway. (OSX whee)

eta: Also (also) ZFS and those WD Green WD20EARS 2TB 64MB drives, good/bad? I realize now that they could be problematic. I have 5 of them so far. :(

Your best bet would be to create a pool with one vdev, a RAIDZ of your 5 disks. Then when you get five more disks, create another RAIDZ vdev and add that to your pool. Sound confusing? It's really not. I've got the same current setup (5 drives) and will add 5 more when I need to. Here's how I did what I've got:
code:
zpool create storage raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c5t3d0
When I get five more disks I'll do this:
code:
zpool add storage raidz c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t4d0
As for the 'Windows' share, you'll need to install Samba and get that running (and you can test it with your Mac; Macs can connect to Samba shares, Unix can connect to Samba shares, everything can connect to Samba shares!).
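Either way, you can sanity-check the layout afterwards with:
code:
zpool status storage   # should show both raidz vdevs once the second one is added
zpool list storage     # total size grows by the new vdev's usable space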

DLCinferno
Feb 22, 2003

Happy
For anyone looking at or owning any of the 4KB-sector drives, here's a pretty good article on how to compensate for potential performance issues, as well as some discussion about what to expect in the future from drive manufacturers (i.e. more of the same):

http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html?ca=dgr-lnxw074KB-Disksdth-LX

CISADMIN PRIVILEGE
Aug 15, 2004

optimized multichannel
campaigns to drive
demand and increase
brand engagement
across web, mobile,
and social touchpoints,
bitch!
:yaycloud::smithcloud:
I'm checking out 4-6 drive NASes in the < $2500 range for a 20-person office with a couple of servers.

I'm looking for something pre-built; Thecus, NetGear, QNAP, or Synology are the brands that seem to come up. Right now the front runner seems to be the Synology 1010+, given the transfer speeds look very good.

The primary use for the NAS is going to be as a Backup Exec backup destination for the servers, but I'd also like to be able to back up the virtualized servers so I have runnable copies of ESXi VMs on the NAS for disaster recovery purposes.

I know this thread is primarily aimed at the build it yourselfers, but if anyone has any experience with any of the commercial brands I'd love to get some advice.

IOwnCalculus
Apr 2, 2003





DLCinferno posted:

For anyone looking at or owning any of the 4KB-sector drives, here's a pretty good article on how to compensate for potential performance issues, as well as some discussion about what to expect in the future from drive manufacturers (i.e. more of the same):

http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html?ca=dgr-lnxw074KB-Disksdth-LX
Thanks for posting that, some solid info in there.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

FISHMANPET posted:

Your best bet would be to create a pool with one vdev, a RAIDZ of your 5 disks. Then when you get five more disks, create another RAIDZ vdev and add that to your pool. Sound confusing? It's really not. I've got the same current setup (5 drives) and will add 5 more when I need to. Here's how I did what I've got:
code:
zpool create storage raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c5t3d0
When I get five more disks I'll do this:
code:
zpool add storage raidz c4t4d0 c4t5d0 c4t6d0 c4t7d0 c5t4d0
As for the 'Windows' share, you'll need to install Samba and get that running (and you can test it with your Mac; Macs can connect to Samba shares, Unix can connect to Samba shares, everything can connect to Samba shares!).

Doesn't OpenIndiana still have the kernel-level CIFS implementation that kicks all kinds of rear end? Shouldn't he use that over Samba?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Goon Matchmaker posted:

Doesn't OpenIndiana still have the kernel-level CIFS implementation that kicks all kinds of rear end? Shouldn't he use that over Samba?

Do you know how to configure it? Because I certainly don't. As far as I could ever tell, about all you could ever do with it was set it to "on" or "off" and not do nearly as much configuration as you could with a Samba install.

Zhentar
Sep 28, 2003

Brilliant Master Genius
if only there were some kind of documentation.



Edit for my own question: Has anyone here ever successfully used zpool import -d with files larger than 4GB?
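For anyone who wants to poke at it, the basic file-backed-vdev dance looks like this on Solaris-ish systems (paths made up, and no promises about the >4GB behavior being asked about here):
code:
mkfile 8g /tank/vdevs/f1 /tank/vdevs/f2
zpool create filepool mirror /tank/vdevs/f1 /tank/vdevs/f2
zpool export filepool
zpool import -d /tank/vdevs filepool   # -d points import at the directory holding the file vdevs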

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Zhentar posted:

if only there were some kind of documentation.

To each his own. It's such a different paradigm that I'd rather just pull my smb.conf from the Linux box I had before than decipher all that poo poo, only to realize it doesn't support any options I've been using, and give up on it and go back to Samba.


Methylethylaldehyde
Oct 23, 2004

BAKA BAKA
Out of the box, OpenIndiana is retarded simple to set up.

zpool create tank c0t0d0s0
zfs create tank/cifs
zfs set sharesmb=on tank/cifs
passwd god

Log in via Windows XP/7 with fishmanpet/god and go nuts. The rest of the issues are file/folder permissions and ACLs.
