|
Hi, could you guys help me a bit? I am currently planning to upgrade my linux-box to use some form of RAID for data storage (my root will be on a separate disk). I just bought 2 new 3TB drives to put together as software RAID1; after some shuffling I will also add 2x2TB from old drives. I am trying to decide between zfsonlinux (I only have 4GB of RAM), btrfs (afraid it might still be wonky) and the classic mdadm+lvm (it's already 20XX). Could someone here help me decide?
|
# ¿ Oct 11, 2012 16:08 |
|
astr0man posted:RAM is super cheap, if your box can support some more just pick some up and use zfs. More importantly, do you know how well zfs works on linux? The different articles I found all give totally different opinions on stability. Some say it's fine, some say it's worse than btrfs.
|
# ¿ Oct 11, 2012 17:31 |
|
astr0man posted:I have 8GB of ram in my microserver for 8TB of storage running Ubuntu 12.04 server and zfsonlinux. I haven't had any performance issues, and I'm also running sab+sickbeard on the machine. Thanks, that is what I wanted to hear.
|
# ¿ Oct 11, 2012 18:19 |
|
So, one more zfs question. If I have drives of 3, 3, 2 and 1 TB size, I should be able to make a RAIDZ structure with 6TB of space. Hopefully. If I create a raidz with the 2x3TB disks, and then later add those 2 smaller disks, will it integrate them correctly? Also, do you know how much space I would have if I used raidz2 on those 4 disks?
|
# ¿ Oct 11, 2012 18:49 |
|
Delta-Wye posted:If you're talking about a raidz1 setup, 3,3,2,1 TB drives should only give 3TB (1TB from each drive, and then 1TB lost for parity). If you used raidz2, you would have 2TB (1TB from each drive, then 2TB lost for parity data). I am going to check this with some virtual drives.
OK, with 4 test drives of 60, 60, 40 and 20 GB:
RaidZ: 60GB in "df", 80GB in "zpool list"
RaidZ2: 40GB in "df", 80GB in "zpool list"
btrfs raid1: 180GB in "btrfs fi show", can hold a 90GB file
zfs mirror: 20GB. Something is wrong here.
VictualSquid fucked around with this message at 22:10 on Oct 11, 2012 |
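A quick sanity check of those "df" numbers as a python sketch, assuming the standard back-of-envelope rules (ZFS sizes every member of a raidz/mirror vdev down to the smallest disk; "zpool list" shows raw blocks before parity, which is why it says 80GB):

```python
# Usable capacity of zfs vdevs built from mixed-size disks.
# Every member gets treated as if it were as small as the smallest disk.

def raidz_usable(disks, parity):
    """Usable GB of a raidz vdev: (n - parity) * smallest member."""
    return (len(disks) - parity) * min(disks)

def mirror_usable(disks):
    """Usable GB of an n-way mirror: just the smallest member."""
    return min(disks)

test_disks = [60, 60, 40, 20]  # the virtual drives from the test above

print(raidz_usable(test_disks, 1))  # raidz1 -> 60, matches "df"
print(raidz_usable(test_disks, 2))  # raidz2 -> 40, matches "df"
print(mirror_usable(test_disks))    # 4-way mirror -> 20
```

Same formula gives 3TB for raidz1 on the real 3,3,2,1 TB disks, which is what Delta-Wye said.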
# ¿ Oct 11, 2012 19:59 |
|
So, for my problem of having 3, 3, 2 and 1 TB disks, I seem to have the following options:
1) Group the 2+1TB drives in lvm and install zfs in raidz on the two 3TB disks plus the virtual volume. Gives 6TB of space and one disk of redundancy. Probably has the worst performance.
2) Partition the 3TB disks into 2+1TB and make two zfs raidz groups. Again 6TB of space, one disk of redundancy. This will also be annoying to upgrade or recover.
3) Partition the disks in a somewhat complicated way and make mirrored pairs in zfs. 4.5TB of space and 1.1 disks of redundancy. Probably the worst variant, though it might also be the most performant.
4) Use btrfs in raid1 mode. The same thing as 3), only the partitioning gets done automatically. Will probably be the easiest to upgrade. But uses btrfs instead of zfs.
Anybody got any good ideas? I am tending a bit towards the btrfs solution right now. Especially, has anybody any idea what sorts of problems I might get with zfs on lvm in variant 1?
Also: it seems like adding a 2TB disk to a raidz of 4x3TB disks reduces your total storage capacity. Someone seems to have forgotten that the I in RAID stands for inexpensive.
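The 4.5TB figure for options 3) and 4) can be checked with the usual rule of thumb for mirrored/raid1 allocation over mixed disks, sketched here in python (this is my own formula, not something btrfs prints out): usable space is half the total, capped when one disk is bigger than all the others combined.

```python
def btrfs_raid1_usable(disks):
    """Rough usable TB under btrfs raid1 (or hand-built zfs mirror pairs):
    every block is stored twice on two different devices, so you get
    total/2, unless one disk dwarfs the rest and some of it goes unused."""
    total = sum(disks)
    return min(total / 2, total - max(disks))

print(btrfs_raid1_usable([3, 3, 2, 1]))  # -> 4.5
```

For the 3,3,2,1 TB set the cap never kicks in (9 - 3 = 6 > 4.5), so the answer is simply 9/2 = 4.5TB.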
|
# ¿ Oct 11, 2012 22:43 |
|
FISHMANPET posted:Remember, the I in RAID stands for Independent! wikipedia posted:RAID (redundant array of independent disks, originally redundant array of inexpensive disks)
|
# ¿ Oct 11, 2012 22:53 |
|
Delta-Wye posted:Keep in mind if you were running out of space in your storage space, you can always swap out a disk. Also, down the road I won't buy another 3TB; by then 5TB will probably be more cost effective. And again, I can't see any efficient way to upgrade this thing.
|
# ¿ Oct 12, 2012 01:33 |
|
FISHMANPET posted:ZFS was designed for Enterprise where you get 12 or 16 disks at a time and you can easily migrate data from a 12x500GB to a 12x4TB array, so disk by disk expansion was never a big deal. Once it fails embarrassingly I will report back.
|
# ¿ Oct 12, 2012 12:19 |
|
FISHMANPET posted:I just saw this, the zfs mirror is 20 GB because that's the size of your smallest disk. It's essentially a 4 way mirror with 20GB disks, because that's the size of the smallest disk. FISHMANPET posted:Those btrfs numbers sound fishy, if it's giving you literally half the total disk capacity, then it's doing something wrong. It's storing both copies of data on one disk for that to work out. It's creating two 90GB volumes and mirroring those, but there's no way to make two completely independent 90GB volumes out of those disks, all you can make is an 80 and a 100. To get to 90 you have to put 10 GB from the 100 into the 80, and then you've got one disk (the 40GB) holding data for two mirror sets.
mkfs.btrfs -m raid1 -d raid1 /dev/vg/60a /dev/vg/60b /dev/vg/40 /dev/vg/20
What it displays is that it has 180GB of total space in the pool. If you write to a raid1 volume, each 1GB of data needs 2GB of pool space. I think it just puts a 1GB mirrored pair of blocks on the two emptiest drives every time it needs new storage. You could replicate this by hand for zfs in mirror mode.
Partitions:
60GB-a -> 50GB-a + 10GB-a
60GB-b -> 50GB-b + 10GB-b
40GB -> 20GB-a + 10GB-c + 10GB-d
20GB -> 20GB-b
Then you pair up:
zpool create zfs_test mirror 50GB-a 50GB-b mirror 20GB-a 20GB-b mirror 10GB-a 10GB-c mirror 10GB-b 10GB-d
The difference is that btrfs assigns those in 1GB chunks, automatically. Also the zfs setup will be annoying to repair or upgrade, especially if the 40GB disk fails.
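That "1GB mirrored pair on the two emptiest drives" guess can be simulated in a few lines of python. This is only a toy model of how I think the btrfs raid1 chunk allocator behaves, not its actual code, but it lands on the same 90GB the real filesystem reported:

```python
def simulate_btrfs_raid1(disks, chunk=1):
    """Toy model of btrfs raid1 chunk allocation: each new chunk is
    written twice, to the two devices with the most free space, until
    no two devices have room left for another pair."""
    free = list(disks)
    usable = 0
    while True:
        free.sort(reverse=True)
        if free[1] < chunk:  # need room on two *different* devices
            break
        free[0] -= chunk
        free[1] -= chunk
        usable += chunk
    return usable

print(simulate_btrfs_raid1([60, 60, 40, 20]))  # -> 90
```

The greedy pairing keeps the drives balanced, so nothing gets stranded: the pool drains all the way down and you get the full 180/2 = 90GB.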
|
# ¿ Oct 12, 2012 17:15 |
|
fletcher posted:Trying to setup a cron job on NAS4Free: Did you try experimenting with escaping the special characters a bit more? I sometimes have problems when going from a direct shell to a script, in that it needs extra backslashes. Check if this is documented in your web interface manual, too.
|
# ¿ Oct 14, 2012 11:20 |
|
Delta-Wye posted:However, on the old machine (solaris), I get: Maybe you should run "du -s --apparent-size" on both machines and compare those.
|
# ¿ Oct 15, 2012 00:00 |
|
Lediur posted:Also, anyone have recommendations for putting 3.5" hard drives in 5.25" bays? I've run out of hard drive bays in my case but I have a bunch of 5.25" bays I could use. http://www.amazon.de/gp/product/B001LQG4IO/ref=oh_details_o04_s00_i01 No idea what those are called in English.
|
# ¿ Nov 1, 2012 01:11 |
|
Dead Goon posted:When I get to the part about installing the bootable MBR on the drive i just get the following error from terminal Or the other way around if you are using some unusual version of dd.
|
# ¿ Nov 5, 2012 01:07 |
|
IT Guy posted:I have a problem using FreeNAS. I assume it is because windows tries to create thumbnails of the files if it is accessing a local folder, which wastes all that time. If one of your machines got confused and started treating your cifs share like a local drive in this manner, that would explain the slowdown. If you find out how to fix that problem, how to turn thumbnails off, please tell me.
|
# ¿ Nov 12, 2012 18:18 |
|
|
FISHMANPET posted:btrfs only has mirroring, which rules it out for plenty of us.
|
# ¿ Mar 19, 2013 16:36 |