VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
Hi, could you guys help me a bit? I am currently planning to upgrade my Linux box to use some form of RAID for data storage (my root will be on a separate disk). I just bought 2 new 3TB drives to put together as software RAID1; after some shuffling I will also add 2x2TB from old drives.

I am trying to decide between zfsonlinux (I only have 4GB of RAM), btrfs (afraid it might still be wonky), and the classic mdadm+lvm (it's already 20XX).

Could someone here help me decide?
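For reference, the mdadm+lvm route I have in mind would look roughly like this (hypothetical device names, not my actual setup):
code:
# hypothetical device names -- check with lsblk first
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# LVM on top, so the old 2TB pair can be added as another PV later
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -n data -l 100%FREE storage
mkfs.ext4 /dev/storage/data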

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

astr0man posted:

RAM is super cheap; if your box can support some more, just pick some up and use zfs.
I can only upgrade to 8GB. According to some articles that is the minimum for zfs. And I am also using this computer for other stuff than storage.

More importantly, do you know how well zfs works on Linux?
The articles I found give totally different opinions on stability: some say it's fine, some say it's worse than btrfs.
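If RAM stays tight, I could presumably cap the ARC so zfs doesn't take over the box; a minimal sketch for zfsonlinux (the 2GiB value is only an example):
code:
# /etc/modprobe.d/zfs.conf -- cap the ARC at 2 GiB (example value)
options zfs zfs_arc_max=2147483648
# reload the zfs module or reboot afterwards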

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

astr0man posted:

I have 8GB of ram in my microserver for 8TB of storage running Ubuntu 12.04 server and zfsonlinux. I haven't had any performance issues, and I'm also running sab+sickbeard on the machine.

Thanks, that is what I wanted to hear.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
So, one more zfs question.
If I have drives of 3, 3, 2 and 1 TB, I should be able to make a raidz structure with 6TB of space. Hopefully.
If I create a raidz with the 2x3TB disks and then later add the 2 smaller disks, will it integrate them correctly?
Also, do you know how much space I would have if I used raidz2 on those 4 disks?
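If nobody knows offhand, I guess I could just create a pool and look; something like this (placeholder device names):
code:
# placeholder device names
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde
zpool list tank   # raw size of the pool
zfs list tank     # usable space after parity
zpool destroy tank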

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Delta-Wye posted:

If you're talking about a raidz1 setup, 3,3,2,1 TB drives should only give 3TB (1TB from each drive, and then 1TB lost for parity). If you used raidz2, you would have 2TB (1TB from each drive, then 2TB lost for parity data).

However, you're talking about setting up 2x3TB drives, which doesn't really make for a raidz1 setup, so I'm not sure.

I'm pretty sure you can use partitions to form volumes (is this right?). If that is the case, I guess you could make a bunch of 1TB partitions and put those into a raidz2, but while it would get you 7TB of storage, it would be stupid and give you basically zero protection.

Does make me wonder if a mixed 2TB/1TB setup with the 2TB drives split would work okay with a raidz2. If one of the 2TB disks kicks the bucket, you are no worse off than if you were running raidz1 and lost a single disk.
For the raidz1, I should be able to put an LVM volume over the 2 smaller drives, which would let zfs see 3x3TB. That gives me 6TB of space even if it doesn't work directly.
I am going to check this with some virtual drives.
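Roughly how I set up the test drives (LVM volumes on a spare volume group called vg; the exact commands may differ):
code:
# carve test volumes out of a spare volume group called "vg"
lvcreate -L 60G -n 60a vg
lvcreate -L 60G -n 60b vg
lvcreate -L 40G -n 40 vg
lvcreate -L 20G -n 20 vg

# e.g. a raidz pool over them:
zpool create test_raidz raidz /dev/vg/60a /dev/vg/60b /dev/vg/40 /dev/vg/20
zpool list test_raidz
df -h /test_raidz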

OK, with 4 test drives of 60, 60, 40 and 20 GB:
RaidZ: 60GB in "df", 80GB in "zpool list"
RaidZ2: 40GB in "df", 80GB in "zpool list"
btrfs raid1: 180GB in "btrfs fi show", can hold a 90GB file
zfs mirror: 20GB; something is wrong here.


VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
So, for my problem of having 3, 3, 2 and 1 TB disks, I seem to have the following options:

1) Group the 2+1TB disks in LVM and create a zfs raidz over the two 3TB disks plus the virtual volume. Gives 6TB of space and one disk of redundancy. Probably has the worst performance.
2) Partition the 3TB disks into 2+1TB and make two zfs raidz groups. Again 6TB of space and one disk of redundancy. This will also be annoying to upgrade or recover.

3) Partition the disks in a more complicated way and make mirrored pairs in zfs. 4.5TB of space and about 1.1 disks of redundancy. Probably the worst variant overall, though it might also be the most performant.
4) Use btrfs in raid1 mode. Same as 3), only the partitioning gets done automatically. Will probably be the easiest to upgrade, but it uses btrfs instead of zfs.

Anybody got any good ideas? I am leaning a bit towards the btrfs solution right now.
In particular, does anybody have an idea what sorts of problems I might run into with zfs on LVM in variant 1?
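For variant 1 I imagine something roughly like this (the sd* names are placeholders):
code:
# concatenate the 2TB and 1TB disks into one ~3TB logical volume
pvcreate /dev/sdd /dev/sde
vgcreate smallvg /dev/sdd /dev/sde
lvcreate -n virt3tb -l 100%FREE smallvg

# raidz over the two real 3TB disks plus the virtual one
zpool create tank raidz /dev/sdb /dev/sdc /dev/smallvg/virt3tb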

Also: it seems like adding a 2TB disk to a raidz of 4x3TB disks reduces your total storage capacity. Someone seems to have forgotten that the I in RAID stands for inexpensive.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

FISHMANPET posted:

Remember, the I in RAID stands for Independent :science:!

wikipedia posted:

RAID (redundant array of independent disks, originally redundant array of inexpensive disks)
They changed it while I wasn't looking :argh:.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Delta-Wye posted:

Keep in mind that if you run out of space, you can always swap out a disk.

Start with 4x3TB + 1x2TB disks, create a raidz1 and get 8TB of space.

Fill it up.

Replace the 2TB with a 3TB down the road, and your raidz1 turns into a 12TB array; you gain 4TB after resilvering onto the new disk.
But if I toss the 2TB disk from the start, I gain 1TB of space and improve the MTTF.
Also, down the road I won't buy a 3TB drive; by then 5TB will probably be more cost-effective. And again I can't see any efficient way to upgrade this thing.
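For completeness, the swap Delta-Wye describes would presumably just be (hypothetical device names):
code:
zpool set autoexpand=on tank
# swap the old 2TB disk for the new, larger one and let it resilver
zpool replace tank /dev/sdf /dev/sdg
zpool status tank   # watch the resilver; capacity grows once every disk in the vdev is bigger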

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

FISHMANPET posted:

ZFS was designed for the enterprise, where you get 12 or 16 disks at a time and can easily migrate data from a 12x500GB to a 12x4TB array, so disk-by-disk expansion was never a big deal.
Looks that way. So I will probably leave the Enterprise with Starfleet and go for the btrfs variant.
Once it fails embarrassingly, I will report back.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

FISHMANPET posted:

I just saw this: the zfs mirror is 20GB because that's the size of your smallest disk. It's essentially a 4-way mirror of 20GB disks.
Yes, that is what I thought. I also found the command to do it the way I wanted later.

FISHMANPET posted:

Those btrfs numbers sound fishy; if it's giving you literally half the total disk capacity, then it's doing something wrong. For that to work out it would have to be storing both copies of some data on one disk. It's creating two 90GB volumes and mirroring those, but there's no way to make two completely independent 90GB volumes out of those disks; all you can make is an 80 and a 100. To get to 90 you have to put 10GB from the 100 into the 80, and then you've got one disk (the 40GB) holding data for two mirror sets.

How did you make the btrfs mirror?
I made it with the standard:
code:
mkfs.btrfs -m raid1 -d raid1 /dev/vg/60a /dev/vg/60b /dev/vg/40 /dev/vg/20
What it displays is 180GB of total space in the pool. If you write to a raid1 volume, each 1GB of data needs 2GB of pool space.
I think it just puts a mirrored pair of 1GB chunks on the two emptiest drives every time it needs new storage.
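You can apparently watch it do that with the usual tools (the mountpoint is a placeholder):
code:
btrfs filesystem show /mnt/test   # how much is allocated on each device
btrfs filesystem df /mnt/test     # chunk usage per profile (data/metadata, raid1)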

You could replicate this by hand for zfs in mirror mode:
Partitions:
60GB-a -> 50GB-a + 10GB-a
60GB-b -> 50GB-b + 10GB-b
40GB -> 20GB-a + 10GB-c + 10GB-d
20GB -> 20GB-b

then you pair up:
zpool create zfs_test mirror 50GB-a 50GB-b mirror 20GB-a 20GB-b mirror 10GB-a 10GB-c mirror 10GB-b 10GB-d

The difference is that btrfs assigns those in 1GB chunks, automatically. Also, the zfs setup will be annoying to repair or upgrade, especially if the 40GB disk fails.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

fletcher posted:

Trying to set up a cron job on NAS4Free:


Seems to work fine if I run it from a shell, but if I click "Run Now" in the web interface I get "Failed to execute cron job." Anybody know why that might be?

Did you try experimenting with escaping the special characters a bit more? I sometimes have problems going from a direct shell to a script, in that it needs extra backslashes.
Also check whether this is documented in your web interface's manual.
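One classic gotcha, in case it applies: in a crontab a literal % has to be escaped, or cron treats it as a newline. Purely as an illustration (not your actual job):
code:
# % must be written as \% inside a crontab entry
0 3 * * * tar czf /mnt/backup/doc-$(date +\%Y\%m\%d).tar.gz /mnt/doc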

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Delta-Wye posted:

However, on the old machine (solaris), I get:
code:
[me@oldnas /tank/doc]$ du -sk
26242405        .
and on the new nas (freebsd), I get:
code:
[me@newnas /tank/doc]$ du -Ask
25736913	.
[me@newnas /tank/doc]$ du -sk
23929093	.
It seems like something is missing :( Any other techniques/tools I can use to make sure the copy was good? It seems like rsync should have done the job, but I can't shake the suspicion that some files are missing based on the partition sizes.
Rsync should be what is needed, so theoretically the differences in sizes should only be metadata.

Maybe you should run "du -s --apparent-size" on both machines (on FreeBSD, "du -Ask" as in your second command) and compare those.
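If you want to be really sure, a checksum-based dry run should list anything that differs between the two copies; roughly (hosts and paths as in your example):
code:
# dry run (-n), checksums (-c), itemize differences (-i); run from the old machine
rsync -anic --delete /tank/doc/ me@newnas:/tank/doc/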

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Lediur posted:

Also, anyone have recommendations for putting 3.5" hard drives in 5.25" bays? I've run out of hard drive bays in my case but I have a bunch of 5.25" bays I could use.
Something like this? I got some of them in my box.
http://www.amazon.de/gp/product/B001LQG4IO/ref=oh_details_o04_s00_i01
No idea what those are called in English.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Dead Goon posted:

When I get to the part about installing the bootable MBR on the drive, I just get the following error from the terminal:

code:
dd: /dev/rdisk1: Invalid argument
Help!
Did you remember to write 'if=' and 'of=/dev/rdisk' instead of only '/dev/rdisk'?

Or the other way around if you are using some unusual version of dd.
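With a typical dd it would be something roughly like this (the image name is a placeholder):
code:
# image name is a placeholder; rdisk1 as in your error message
sudo dd if=mbr.img of=/dev/rdisk1 bs=1m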

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

IT Guy posted:

I have a problem using FreeNAS.

I have a folder that has a lot of large files in it (multiple hundreds of gigs). When I navigate to this folder, it takes about 5 seconds to populate in Windows Explorer (using cifs) before I can click anything, and the green loading bar goes across the Explorer window as if it's loading something. I have a second identical box that I use just for replication, and when I navigate to the same folder on that machine it's instant. I have both FreeNAS machines set up identically. What could be the issue?
I have a similar situation, but here that large folder loads almost instantly over cifs/smb, and if I copy the same folder to my Windows box it takes several seconds to load.
I assume this is because Windows tries to create thumbnails of my files when it is accessing a local folder, which wastes all that time.
If one of your machines got confused and started treating your cifs share like a local drive in this respect, that would explain the slowdown.

If you find out how to fix that problem (how to turn thumbnails off), please tell me.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

FISHMANPET posted:

btrfs only has mirroring, which rules it out for plenty of us.
Actually, the alpha version of RAID5/6 for btrfs went public around 2 months ago.
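Creating one looks roughly like this, if your kernel and btrfs-progs are new enough (device names are placeholders):
code:
# raid5/6 support is still experimental -- do not trust it with real data yet
mkfs.btrfs -m raid5 -d raid5 /dev/sdb /dev/sdc /dev/sdd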
