fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Megaman posted:

If I set up freenas with, say, 20 drives in raidz2, and the freenas box dies but the drives are fine, can i migrate those to a fresh working freenas machine? And if so, how do I go about this? Wouldn't I need a table configuration of some kind to understand the disk set? Or is it some black magic that automatically would understand the entire set?

You may want to check out the responses to some similar questions that I had a few pages back.

I think having the ability to move the disks to a different machine and throw something like FreeBSD on it is definitely a benefit over a proprietary turn-key device, and worth considering.


evil_bunnY
Apr 2, 2003

FISHMANPET posted:

Do you know how much data is being synced to your client data? I suppose if it's resyncing all the data each night, the blocks could become shifted and all look different. But shouldn't rsync be smart enough to run a full checksum of the file before it sends it over?
You realize this would mean reading all the data on the remote side?

DrDork posted:

RAIDZ2 on ZFS is no more "magic" than any other RAID setup on any other file system.
It's a lot more magic in the sense that it works as expected, and does exhibit a modicum of user friendliness.

evil_bunnY fucked around with this message at 11:07 on Oct 11, 2012

Jolan
Feb 5, 2007

Misogynist posted:

DAAP access from any of your other devices that speak it (Xbox, PS3, etc.) without your computer being on and having iTunes running. The term is kind of a misnomer, since it won't play any DRMed content.

Ah, so it's only really useful when you're using devices other than a computer to access your music, if I'm understanding you correctly. Thank you for the clarification!

Megaman
May 8, 2004
I didn't read the thread BUT...

fletcher posted:

You may want to check out the responses to some similar questions that I had a few pages back.

I think having the ability to move the disks to a different machine and throw something like FreeBSD on it is definitely a benefit over a proprietary turn-key device, and worth considering.

The problem I'm starting to see with the Synology I have, which again is great, is that if the device fails I'm kinda screwed. I don't care so much about the disks since they're in a RAID6. I need the whole device to be modularly replaceable.

Another question: is using RAID cards absolutely necessary? Or can I just plug the drives directly into the motherboard? If not, what are the benefits of RAID cards?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
If you get a good one, RAID cards can offer you better performance, better reliability, and a bunch of extra ports. But you're going to pay for it--that $40 Rosewill RAID card is a step down from whatever's built into your motherboard, and should be avoided like the plague. These days you don't really NEED RAID cards like you did in the past, especially if you're going to go with something like ZFS/RAIDZ where you wouldn't be using the RAID card as anything more than a port multiplier, anyhow.

teamdest
Jul 1, 2007

DrDork posted:

If you get a good one, RAID cards can offer you better performance, better reliability, and a bunch of extra ports. But you're going to pay for it--that $40 Rosewill RAID card is a step down from whatever's built into your motherboard, and should be avoided like the plague. These days you don't really NEED RAID cards like you did in the past, especially if you're going to go with something like ZFS/RAIDZ where you wouldn't be using the RAID card as anything more than a port multiplier, anyhow.

With the caveat that if you need more ports than your motherboard supports, most onboard RAID controllers won't work with a SAS port multiplier.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
Hi, could you guys help me a bit? I am currently planning to upgrade my linux-box to use some form of RAID for data storage (my root will be on a separate disk). Just bought 2 new 3TB drives to put together as software RAID1; after some shuffling I will also add 2x2TB from old drives.

I am trying to decide between zfsonlinux (only got 4GB of RAM), btrfs (afraid it might still be wonky) and the classic mdadm+lvm (it's already 20XX).

Could someone here help me decide?

astr0man
Feb 21, 2007

hollyeo deuroga
RAM is super cheap, if your box can support some more just pick some up and use zfs.

Longinus00
Dec 29, 2005
Ur-Quan

Mantle posted:

I noticed the time stamps on the files in my snapshots have been changing every day and my hypothesis is that rsync is considering that a "file change" and telling the server to duplicate the file with the new stamp. I am going to try running the rsync server with

code:
--size-only             only use file size when determining if a file should be transferred
to make rsync ignore date stamps.

Does this look right?



No, don't do that.

What you actually want is --inplace. Otherwise you overwrite files on sync, breaking COW.

Longinus00
Dec 29, 2005
Ur-Quan

Megaman posted:

The problem I'm starting to see with the Synology I have, which again is great, is that if the device fails I'm kinda screwed. I don't care so much about the disks since they're in a RAID6. I need the whole device to be modularly replaceable.

Another question: is using RAID cards absolutely necessary? Or can I just plug the drives directly into the motherboard? If not, what are the benefits of RAID cards?

The benefit of a raid card is that you're just as hardware-dependent as you were using that synology.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

astr0man posted:

RAM is super cheap, if your box can support some more just pick some up and use zfs.
I can only upgrade to 8GB. According to some articles that is the minimum for zfs. And I am also using this computer for other stuff than storage.

More importantly, do you know how good zfs works on linux?
The different articles I found all give totally different opinions on stability. Some say it's fine, some say it's worse than btrfs.

Mantle
May 15, 2004

Longinus00 posted:

No, don't do that.

What you actually want is --inplace. Otherwise you overwrite files on sync, breaking COW.

What is COW? The other issue is that I don't actually have command line access to the rsync client so all my arguments have to be on the daemon side. Can --inplace be passed on the daemon side too?

And all of this is to work around an issue where my snapshots are taking up the space of a full mirror because the time stamps are being touched. I'm trying to fix it now by preserving time stamps on the rsync side, but is there another way to do it by ignoring time stamps on the snapshot side?

astr0man
Feb 21, 2007

hollyeo deuroga

tonberrytoby posted:

I can only upgrade to 8GB. According to some articles that is the minimum for zfs. And I am also using this computer for other stuff than storage.

More importantly, do you know how good zfs works on linux?
The different articles I found all give totally different opinions on stability. Some say it's fine, some say it's worse than btrfs.
I have 8GB of ram in my microserver for 8TB of storage running Ubuntu 12.04 server and zfsonlinux. I haven't had any performance issues, and I'm also running sab+sickbeard on the machine.

teamdest
Jul 1, 2007
8gb is more than enough for zfs, just don't enable dedup or compression and you should be fine.

Longinus00
Dec 29, 2005
Ur-Quan

Mantle posted:

What is COW? The other issue is that I don't actually have command line access to the rsync client so all my arguments have to be on the daemon side. Can --inplace be passed on the daemon side too?

COW is copy on write. I have no idea about your second question, maybe?

Mantle posted:

And all of this is to work around an issue where my snapshots are taking up the space of a full mirror because the time stamps are being touched. I'm trying to fix it now by preserving time stamps on the rsync side, but is there another way to do it by ignoring time stamps on the snapshot side?

The root cause has nothing to do with timestamps and everything to do with breaking COW on your files. Ignoring timestamps means you'll only backup files when their size changes and not anytime their contents change.

When you take zfs snapshots, zfs uses COW to handle future writes to the file. If your backup solution uses the traditional unix mantra of write->rename(mv) then COW gets broken, because you're not writing into the file, you're actually replacing it with an entirely new one.

As a test, stat the backup before and after a rsync run. If the inodes change then that's probably the problem.
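A quick way to see the inode behavior described above, sketched in plain Python rather than with rsync itself (the filenames here are made up for illustration): write->rename replaces the file's inode, while an in-place update (what rsync's --inplace preserves) keeps it.

```python
import os
import tempfile

# Set up a "backup" file to mutate.
d = tempfile.mkdtemp()
path = os.path.join(d, "backup.dat")
with open(path, "wb") as f:
    f.write(b"original contents")
inode_before = os.stat(path).st_ino

# write -> rename (the COW-breaking pattern): the data lands in a brand-new
# file, so the original inode is thrown away.
tmp = path + ".tmp"
with open(tmp, "wb") as f:
    f.write(b"updated contents!")
os.replace(tmp, path)
inode_after_rename = os.stat(path).st_ino

# in-place update: we open the existing file and write into it, so the
# inode survives.
with open(path, "r+b") as f:
    f.write(b"UPDATED")
inode_after_inplace = os.stat(path).st_ino

print(inode_before != inode_after_rename)         # True: rename replaced the file
print(inode_after_rename == inode_after_inplace)  # True: in-place kept it
```

Running stat on a real backup target before and after a sync, as suggested, is the same check at the filesystem level.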

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

astr0man posted:

I have 8GB of ram in my microserver for 8TB of storage running Ubuntu 12.04 server and zfsonlinux. I haven't had any performance issues, and I'm also running sab+sickbeard on the machine.

Thanks, that is what I wanted to hear.

Bonobos
Jan 26, 2004

astr0man posted:

I have 8GB of ram in my microserver for 8TB of storage running Ubuntu 12.04 server and zfsonlinux. I haven't had any performance issues, and I'm also running sab+sickbeard on the machine.

I am wondering if this will be true for me with 16tb space. I guess I will find out shortly.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
So, one more zfs question.
If I have drives of 3,3,2 and 1 TB size, I should be able to make a RAIDZ structure with 6TB space. Hopefully.
If I create a raidZ with 2x3TB disks, and then later add those 2 smaller disks, will it integrate them correctly?
Also, do you know how much space I would have if I used raidZ2 on those 4 disks?

Delta-Wye
Sep 29, 2005

tonberrytoby posted:

So, one more zfs question.
If I have drives of 3,3,2 and 1 TB size, I should be able to make a RAIDZ structure with 6TB space. Hopefully.
If I create a raidZ with 2x3TB disks, and then later add those 2 smaller disks, will it integrate them correctly?
Also, do you know how much space I would have if I used raidZ2 on those 4 disks?

If you're talking about a raidz1 setup, 3,3,2,1 TB drives should only give 3TB (1TB from each drive, and then 1TB lost for parity). If you used raidz2, you would have 2TB (1TB from each drive, then 2TB lost for parity data).

However, you're talking about starting with just the 2x3TB drives, which doesn't make for a raidz1 setup, so I'm not sure.

I'm pretty sure you can use partitions to form volumes (is this right?). If that's the case, I guess you could make a bunch of 1TB partitions and make those into a raidz2, but while it would get you 7TB of storage it would be stupid and give you basically zero protection.

Does make me wonder if a mixed 2TB/1TB setup with the 2TB drives split would work okay with a raidz2. If one of the 2TB disks kicks the bucket, you are no worse off than if you were running raidz1 and lost a single disk.
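The arithmetic above follows one simple rule: every member of a raidz vdev is treated as the size of the smallest disk, and the parity level's worth of disks is subtracted. A hedged sketch (the function name is mine; real zfs overheads like padding and metadata are ignored):

```python
def raidz_capacity(sizes_tb, parity):
    """Rough usable space of a single raidz vdev, in TB.

    Every member disk counts as the size of the smallest one, and
    `parity` disks' worth of that goes to parity data.
    """
    n = len(sizes_tb)
    if n <= parity:
        raise ValueError("need more disks than parity level")
    return (n - parity) * min(sizes_tb)

print(raidz_capacity([3, 3, 2, 1], parity=1))  # 3 TB, as worked out above
print(raidz_capacity([3, 3, 2, 1], parity=2))  # 2 TB
```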

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Delta-Wye posted:

If you're talking about a raidz1 setup, 3,3,2,1 TB drives should only give 3TB (1TB from each drive, and then 1TB lost for parity). If you used raidz2, you would have 2TB (1TB from each drive, then 2TB lost for parity data).

However, you're talking about starting with just the 2x3TB drives, which doesn't make for a raidz1 setup, so I'm not sure.

I'm pretty sure you can use partitions to form volumes (is this right?). If that's the case, I guess you could make a bunch of 1TB partitions and make those into a raidz2, but while it would get you 7TB of storage it would be stupid and give you basically zero protection.

Does make me wonder if a mixed 2TB/1TB setup with the 2TB drives split would work okay with a raidz2. If one of the 2TB disks kicks the bucket, you are no worse off than if you were running raidz1 and lost a single disk.
For the raidz1 I should be able to put an lvm on the 2 smaller drives which would let the zfs see 3x3TB for the raidz1. Giving me 6TB space even if it doesn't work directly.
I am going to check this with some virtual drives.

OK, with 4 test drives of 60, 60, 40, 20 GB:
RaidZ: 60GB in "df", 80GB in "zpool list"
RaidZ2: 40GB in "df", 80GB in "zpool list"
btrfs raid1: 180GB in "btrfs fi show", can hold a 90GB file.
zfs mirror: 20GB, something is wrong here.

VictualSquid fucked around with this message at 22:10 on Oct 11, 2012

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
In theory you could make a vdev of the 1TB and 2TB essentially RAID 0ed together, but you can't have volumes like that as part of a RAIDZ set.

hifi
Jul 25, 2012

Longinus00 posted:


Does anyone know the status of the open source zfs branch? Can I assume that it's effectively frozen feature wise since anything they add will make it a fork of the official zfs?

It's still under development through Illumos, and the next big thing is feature flags, which you can read about here or if you google it there's a series of youtube videos from a developers' summit thing.

Mantle
May 15, 2004

Longinus00 posted:

The root cause has nothing to do with timestamps and everything to do with breaking COW on your files. Ignoring timestamps means you'll only backup files when their size changes and not anytime their contents change.

When you take snapshots zfs it uses COW to handle future writes to the file. If your backup solution uses the traditional unix mantra of write->rename(mv) then COW gets broken because you're not writing into the file, you're actually replacing it with an entirely new one.

As a test, stat the backup before and after a rsync run. If the inodes change then that's probably the problem.

Thanks for the leads. I think you nailed the problem. Bad news is I discovered I cannot pass the --inplace option on the daemon side and my lovely NAS appliance can't do it on the client side. Good news is that I think I figured out how to reverse the roles of my two NAS so I can make the lovely one my dumb emergency offsite backup.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
So, for my problem of having 3,3,2,1 TB in Disks, I seem to have the following options:

1) Group 2+1TB in lvm and install zfs in raidZ on the 2 3TB disks+the virtual volume. Giving 6TB space and one disk redundancy. Has probably the worst performance.
2) Partition the 3TB disks into 2+1TB and make 2 zfs raidz groups. Again 6TB space, one disk redundancy. This will also be annoying to upgrade or recover.

3) Partition the disks a bit complicated and make mirrored pairs in zfs. 4.5TB of space and 1.1 Disks of redundancy. Probably the worst variant, might also be the most performant.
4) Use btrfs in raid1 mode. Same thing as 3) only the partitioning gets done automatically. Will probably be the easiest to upgrade. But uses btrfs instead of zfs.

Anybody got any good ideas? I am tending a bit towards the btrfs solution right now.
Especially, does anybody have any idea what sorts of problems I might get with zfs on lvm in variant 1?

Also: It seems like adding a 2TB disk to a RaidZ of 4x3TB disks reduces your total storage capacity. Someone seems to have forgotten that the I in Raid stands for inexpensive.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
You can't add a disk to a RAIDZ set after it's created. And you're correct, 4x3TB RAIDZ would have 9 TB whereas 4x3TB + 2TB would be 8 TB. The reason is that in a RAIDZ every disk is treated as the size of the smallest, so it's really a 5x2TB array.

Remember, the I in RAID stands for Independent :science:!

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

FISHMANPET posted:

Remember, the I in RAID stands for Independent :science:!

wikipedia posted:

RAID (redundant array of independent disks, originally redundant array of inexpensive disks)
They changed it while I wasn't looking:argh:.

Delta-Wye
Sep 29, 2005

tonberrytoby posted:

Also: It seems like adding a 2TB disk to a RaidZ of 4x3TB disks reduces your total storage capacity. Someone seems to have forgotten that the I in Raid stands for inexpensive.

Keep in mind that if you're running out of space in your storage pool, you can always swap out a disk.

Start with 4x3TB + 1x2TB disk, create a raidz1 and get 8TB of space.

Fill it up.

Replace the 2TB with a 3TB down the road, and your raidz1 turns into a 12TB array: you gain 4TB after resilvering onto the new disk.
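The numbers above check out under the same smallest-disk rule; a quick sketch (function name is mine, real zfs overheads ignored):

```python
def raidz1_capacity(sizes_tb):
    # raidz1: (n - 1) disks of usable space, every disk truncated
    # to the size of the smallest member.
    return (len(sizes_tb) - 1) * min(sizes_tb)

before = raidz1_capacity([3, 3, 3, 3, 2])  # 4 * 2 = 8 TB
after = raidz1_capacity([3, 3, 3, 3, 3])   # 4 * 3 = 12 TB
print(before, after, after - before)       # 8 12 4
```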

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

Delta-Wye posted:

Keep in mind that if you're running out of space in your storage pool, you can always swap out a disk.

Start with 4x3TB + 1x2TB disk, create a raidz1 and get 8TB of space.

Fill it up.

Replace the 2TB with a 3TB down the road, and your raidz1 turns into a 12TB array: you gain 4TB after resilvering onto the new disk.
But if I chuck the 2TB disk in the beginning, I gain 1TB of space and reduce my MTTF.
Also down the road I won't buy a 3TB, by then 5TB will probably be more cost effective. And again I can't see any efficient way to upgrade this thing.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
You could buy a 5TB and slot it in, and you'd only be using 3 TB of it, then replace your 3TB drives with 8 TB drives or something, etc etc.

ZFS was designed for enterprise use, where you get 12 or 16 disks at a time and can easily migrate data from a 12x500GB to a 12x4TB array, so disk-by-disk expansion was never a big deal.

I bought 6 1.5TB drives and made a 5x1.5 RAIDZ array with 1 hotspare, then later on I stuck my 5x750GB drives into another RAIDZ and attached that on. This is now my second ZFS server, and each time I planned for expansion, but I'm pretty sure next time around I'm going to just get a Norco 4224 and stuff that full of drives, instead of upgrading what I've got now.

foobar
Jul 6, 2002

I just bought a ZyXEL NSA325 to replace my Linksys NAS200 - does anyone know if I can simply pop my existing drives into the ZyXEL from the Linksys, or do I need to reformat them? They are configured as stand-alone drives, no RAID.

EDIT: Okay, I was being lazy when I posted this - a little research showed that the NAS200 formats the disks as EXT2 and the NSA325 uses EXT4. I'm not 100% familiar with *nix file systems but I'm fairly sure that those are incompatible. It all turned out to be moot anyway, since the ZyXEL requires you to run an "initialization wizard" to set up the NAS, and that wipes the drives.

foobar fucked around with this message at 04:38 on Oct 12, 2012

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

FISHMANPET posted:

ZFS was designed for Enterprise where you get 12 or 16 disks at a time and you can easily migrate data from a 12x500GB to a 12x4TB arrary, so disk by disk expansion was never a big deal.
Looks that way. So I will probably leave the enterprise with starfleet and go for the btrfs variant.
Once it fails embarrassingly I will report back.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

tonberrytoby posted:

I am going to check this with some virtual drives.

OK, with 4 test drives of 60, 60, 40, 20 GB:
RaidZ: 60GB in "df", 80GB in "zpool list"
RaidZ2: 40GB in "df", 80GB in "zpool list"
btrfs raid1: 180GB in "btrfs fi show", can hold a 90GB file.
zfs mirror: 20GB, something is wrong here.

I just saw this, the zfs mirror is 20 GB because that's the size of your smallest disk. It's essentially a 4 way mirror with 20GB disks, because that's the size of the smallest disk.

Those btrfs numbers sound fishy, if it's giving you literally half the total disk capacity, then it's doing something wrong. It's storing both copies of data on one disk for that to work out. It's creating two 90GB volumes and mirroring those, but there's no way to make two completely independent 90GB volumes out of those disks, all you can make is an 80 and a 100. To get to 90 you have to put 10 GB from the 100 into the 80, and then you've got one disk (the 40GB) holding data for two mirror sets.

How did you make the btrfs mirror?

Longinus00
Dec 29, 2005
Ur-Quan

FISHMANPET posted:

I just saw this, the zfs mirror is 20 GB because that's the size of your smallest disk. It's essentially a 4 way mirror with 20GB disks, because that's the size of the smallest disk.

Those btrfs numbers sound fishy, if it's giving you literally half the total disk capacity, then it's doing something wrong. It's storing both copies of data on one disk for that to work out. It's creating two 90GB volumes and mirroring those, but there's no way to make two completely independent 90GB volumes out of those disks, all you can make is an 80 and a 100. To get to 90 you have to put 10 GB from the 100 into the 80, and then you've got one disk (the 40GB) holding data for two mirror sets.

How did you make the btrfs mirror?

No it's correct. Btrfs is ensuring that blocks are duplicated across two different devices in its "raid 1" mode, there is no "creating two 90GB volumes".

tonberrytoby posted:

Looks that way. So I will probably leave the enterprise with starfleet and go for the btrfs variant.
Once it fails embarrassingly I will report back.

Make sure you use the newest kernel you can (3.5/3.6) because btrfs gets fixes added constantly.

Jolan
Feb 5, 2007
Can't sign up for the QNAP boards for some reason, so while they're figuring that one out I'll ask a question here: is it possible to change the name of a share? I could do that on my very unfancy Silverstore, so it'd surprise me if you can't on a QNAP. Can't find anything in the manual, and the only somewhat interesting Google results are from the QNAP boards, which suddenly won't even load.

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.

FISHMANPET posted:

I just saw this, the zfs mirror is 20 GB because that's the size of your smallest disk. It's essentially a 4 way mirror with 20GB disks, because that's the size of the smallest disk.
Yes, that is what I thought. I also found the command to do it the way I wanted later.

FISHMANPET posted:

Those btrfs numbers sound fishy, if it's giving you literally half the total disk capacity, then it's doing something wrong. It's storing both copies of data on one disk for that to work out. It's creating two 90GB volumes and mirroring those, but there's no way to make two completely independent 90GB volumes out of those disks, all you can make is an 80 and a 100. To get to 90 you have to put 10 GB from the 100 into the 80, and then you've got one disk (the 40GB) holding data for two mirror sets.

How did you make the btrfs mirror?
I made it with the standard :
mkfs.btrfs -m raid1 -d raid1 /dev/vg/60a /dev/vg/60b /dev/vg/40 /dev/vg/20
What it displays is that it has 180GB of total space in the pool; if you write to a raid1 volume, each 1GB of data needs 2GB of pool space.
I think it just puts a 1GB mirrored pair of blocks on the two emptiest drives every time it needs new storage.

You could replicate this by hand for zfs in mirror mode:
Partitions:
60GB-a -> 50GB-a + 10GB-a
60GB-b -> 50GB-b + 10GB-b
40GB -> 20GB-a + 10GB-c + 10GB-d
20GB -> 20GB-b

then you pair up:
zpool create zfs_test mirror 50GB-a 50GB-b mirror 20GB-a 20GB-b mirror 10GB-a 10GB-c mirror 10GB-b 10GB-d

The difference is that btrfs assigns those in 1GB chunks, automatically. Also the zfs will be annoying to repair or upgrade especially if the 40GB disk fails.
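That allocation rule can be sketched as a toy simulation (my own assumption about the scheme as described, not btrfs's actual allocator): each 1GB of data becomes a mirrored pair of 1GB chunks placed on the two drives with the most free space.

```python
def raid1_usable(free_gb):
    """Toy greedy simulation: repeatedly place a mirrored 1GB chunk pair
    on the two drives with the most free space, until no two distinct
    drives can hold another chunk. Returns usable GB."""
    free = list(free_gb)
    written = 0
    while True:
        free.sort(reverse=True)
        if free[1] < 1:  # need a free chunk on two different drives
            return written
        free[0] -= 1
        free[1] -= 1
        written += 1

print(raid1_usable([60, 60, 40, 20]))  # 90, matching the test above
```

This greedy pairing reaches the general bound min(total/2, total minus the largest drive), which is why the mixed 60/60/40/20 set yields 90GB rather than half of some smaller pairing.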

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So a btrfs mirror is always just a two way, and it spreads the data out in the best way possible? That's pretty cool.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Is there a set of btrfs forward migration tools available? I keep seeing on-disk format changes and that just makes me nervous about ever using btrfs for a large set of data.

Longinus00
Dec 29, 2005
Ur-Quan

necrobobsledder posted:

Is there a set of btrfs forward migration tools available? I keep seeing on-disk format changes and that just makes me nervous about ever using btrfs for a large set of data.

Where have you seen disk format changes?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Longinus00 posted:

Where have you seen disk format changes?
Hrm, mostly seeing feature extensions, and this piece of unintentional FUD gave me that perception. There was a guy in this thread within the past two years (or maybe I'm conflating it with a mailing list) who talked about using btrfs and having to perform some low-level recovery due to certain bugs that could provoke some on-disk format change, but that's unsubstantiated until I can cite it. Given that the on-disk format doesn't seem to have changed since 2009, that matter seems irrelevant, and the more relevant question is how stable btrfs is in terms of handling the same error vectors as zfs.


fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Trying to set up a cron job on NAS4Free:

code:

/usr/local/bin/rsync --stats --progress -vhr -e "ssh -p 12345 -i /.mykey" user@somewhere.net:/some/remote/dir/ /some/local/dir/

Seems to work fine if I run it from a shell, but if I click "Run Now" in the web interface I get "Failed to execute cron job." Anybody know why that might be?

fletcher fucked around with this message at 04:16 on Oct 14, 2012
