Megaman posted:If I set up freenas with, say, 20 drives in raidz2, and the freenas box dies but the drives are fine, can I migrate those to a fresh working freenas machine? And if so, how do I go about this? Wouldn't I need a table configuration of some kind to understand the disk set? Or is it some black magic that automatically would understand the entire set? You may want to check out the responses to some similar questions that I had a few pages back. I think having the ability to move the disks to a different machine and being able to throw something like FreeBSD on it is definitely a benefit over a proprietary turn-key device that is worth considering.
|
|
# ? Oct 11, 2012 07:15 |
|
FISHMANPET posted:Do you know how much data is being synced to your client data? I suppose if it's resyncing all the data each night, the blocks could become shifted and all look different. But shouldn't rsync be smart enough to run a full checksum of the file before it sends it over? DrDork posted:RAIDZ2 on ZFS is no more "magic" than any other RAID setup on any other file system. evil_bunnY fucked around with this message at 11:07 on Oct 11, 2012 |
# ? Oct 11, 2012 10:55 |
|
Misogynist posted:DAAP access from any of your other devices that speak it (Xbox, PS3, etc.) without your computer being on and having iTunes running. The term is kind of a misnomer, since it won't play any DRMed content. Ah, so it's only really useful when you're using devices other than a computer to access your music, if I'm understanding you correctly. Thank you for the clarification!
|
# ? Oct 11, 2012 11:03 |
|
fletcher posted:You may want to check out the responses to some similar questions that I had a few pages back. The problem I'm starting to see with the synology that I have (which, again, is great) is that if the device fails I'm kinda screwed. I don't care so much about the disks since they're in a RAID6. I need the whole device to be modularly replaceable. Another question: is using RAID cards absolutely necessary, or can I just plug the drives directly into the motherboard? If they're not necessary, what are the benefits of RAID cards?
|
# ? Oct 11, 2012 14:01 |
|
If you get a good one, RAID cards can offer you better performance, better reliability, and a bunch of extra ports. But you're going to pay for it--that $40 Rosewill RAID card is a step down from whatever's built into your motherboard, and should be avoided like the plague. These days you don't really NEED RAID cards like you did in the past, especially if you're going to go with something like ZFS/RAIDZ where you wouldn't be using the RAID card as anything more than a port multiplier, anyhow.
|
# ? Oct 11, 2012 14:52 |
|
DrDork posted:If you get a good one, RAID cards can offer you better performance, better reliability, and a bunch of extra ports. But you're going to pay for it--that $40 Rosewill RAID card is a step down from whatever's built into your motherboard, and should be avoided like the plague. These days you don't really NEED RAID cards like you did in the past, especially if you're going to go with something like ZFS/RAIDZ where you wouldn't be using the RAID card as anything more than a port multiplier, anyhow. With the caveat that if you need more ports than your motherboard supports, most onboard RAID won't work with a SAS port multiplier.
|
# ? Oct 11, 2012 16:01 |
|
Hi, could you guys help me a bit? I am currently planning to upgrade my linux-box to use some form of RAID for data storage (my root will be on a separate disk). I just bought 2 new 3TB drives to put together as software RAID1; after some shuffling I will also add 2x2TB from old drives. I am trying to decide between zfsonlinux (I've only got 4GB of RAM), btrfs (afraid it might still be wonky), and the classic mdadm+lvm (it's already 20XX). Could someone here help me decide?
|
# ? Oct 11, 2012 16:08 |
|
RAM is super cheap, if your box can support some more just pick some up and use zfs.
|
# ? Oct 11, 2012 16:30 |
|
Mantle posted:I noticed the time stamps on the files in my snapshots have been changing every day and my hypothesis is that rsync is considering that a "file change" and telling the server to duplicate the file with the new stamp. I am going to try running the rsync server with No, don't do that. What you actually want is --inplace. Otherwise rsync writes each changed file out as a new copy and renames it over the old one, breaking COW.
|
# ? Oct 11, 2012 16:38 |
|
Megaman posted:The problem I'm starting to think about the synology that I have, again which is great, is that if the device fails I'm kinda screwed. I don't care so much about the disks since they're in a RAID6. I need the whole device to be modularly replaceable. The benefit of a raid card is that you're just as hardware dependent as you were with that synology.
|
# ? Oct 11, 2012 16:58 |
|
astr0man posted:RAM is super cheap, if your box can support some more just pick some up and use zfs. More importantly, does anyone know how well zfs works on linux? The different articles I found all give totally different opinions on stability. Some say it's fine, some say it's worse than btrfs.
|
# ? Oct 11, 2012 17:31 |
|
Longinus00 posted:No, don't do that. What is COW? The other issue is that I don't actually have command line access to the rsync client so all my arguments have to be on the daemon side. Can --inplace be passed on the daemon side too? And all of this is to work around an issue where my snapshots are taking up the space of a full mirror because the time stamps are being touched. I'm trying to fix it now by preserving time stamps on the rsync side, but is there another way to do it by ignoring time stamps on the snapshot side?
|
# ? Oct 11, 2012 17:42 |
|
tonberrytoby posted:I can only upgrade to 8GB. According to some articles that is the minimum for zfs. And I am also using this computer for other stuff than storage.
|
# ? Oct 11, 2012 17:46 |
|
8GB is more than enough for zfs; just don't enable dedup or compression and you should be fine.
|
# ? Oct 11, 2012 17:50 |
|
Mantle posted:What is COW? The other issue is that I don't actually have command line access to the rsync client so all my arguments have to be on the daemon side. Can --inplace be passed on the daemon side too? COW is copy on write. I have no idea about your second question, maybe? Mantle posted:And all of this is to work around an issue where my snapshots are taking up the space of a full mirror because the time stamps are being touched. I'm trying to fix it now by preserving time stamps on the rsync side, but is there another way to do it by ignoring time stamps on the snapshot side? The root cause has nothing to do with timestamps and everything to do with breaking COW on your files. Ignoring timestamps means you'll only back up files when their size changes and not any time their contents change. When you take a snapshot, zfs uses COW to handle future writes to the file. If your backup solution uses the traditional unix mantra of write->rename(mv) then COW gets broken, because you're not writing into the file, you're actually replacing it with an entirely new one. As a test, stat the backup before and after an rsync run. If the inodes change then that's probably the problem.
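The stat test above can be sketched in a few lines of python (hypothetical file names; this just illustrates why write-then-rename breaks COW-based snapshots while an in-place write doesn't):

```python
import os
import tempfile

# Work in a scratch directory with a hypothetical backup file.
workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "backup.dat")
with open(path, "w") as f:
    f.write("original contents")
inode_before = os.stat(path).st_ino

# In-place update (what rsync --inplace does): the inode survives,
# so a COW snapshot only has to keep the blocks that changed.
with open(path, "r+") as f:
    f.write("updated")
assert os.stat(path).st_ino == inode_before

# Write-to-temp-then-rename (rsync's default): the file is replaced
# by an entirely new one, so the snapshot must keep every old block.
tmp = path + ".tmp"
with open(tmp, "w") as f:
    f.write("updated contents")
os.rename(tmp, path)
print(os.stat(path).st_ino == inode_before)  # False: new inode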
|
# ? Oct 11, 2012 18:14 |
|
astr0man posted:I have 8GB of ram in my microserver for 8TB of storage running Ubuntu 12.04 server and zfsonlinux. I haven't had any performance issues, and I'm also running sab+sickbeard on the machine. Thanks, that is what I wanted to hear.
|
# ? Oct 11, 2012 18:19 |
|
astr0man posted:I have 8GB of ram in my microserver for 8TB of storage running Ubuntu 12.04 server and zfsonlinux. I haven't had any performance issues, and I'm also running sab+sickbeard on the machine. I am wondering if this will be true for me with 16tb space. I guess I will find out shortly.
|
# ? Oct 11, 2012 18:26 |
|
So, one more zfs question. If I have drives of 3,3,2 and 1 TB size, I should be able to make a RAIDZ structure with 6TB space. Hopefully. If I create a raidZ with 2x3TB disks, and then later add those 2 smaller disks, will it integrate them correctly? Also, do you know how much space I would have if I used raidZ2 on those 4 disks?
|
# ? Oct 11, 2012 18:49 |
tonberrytoby posted:So, one more zfs question. If you're talking about a raidz1 setup, 3,3,2,1 TB drives should only give 3TB (1TB from each drive, and then 1TB lost for parity). If you used raidz2, you would have 2TB (1TB from each drive, then 2TB lost for parity data). However, you're talking about starting with a 2x3TB setup, which doesn't make for a raidz1, so I'm not sure. I'm pretty sure you can use partitions to form volumes (is this right?). If that is the case, I guess you could make a bunch of 1TB partitions and make those into a raidz2, but while it would get you 7TB of storage it would be stupid and give you basically zero protection. Does make me wonder if a mixed 2TB/1TB setup with the 2TB drives split would work okay with a raidz2. If one of the 2TB disks kicks the bucket, you are no worse off than if you were running raidz1 and lost a single disk.
|
|
# ? Oct 11, 2012 19:30 |
|
Delta-Wye posted:If you're talking about a raidz1 setup, 3,3,2,1 TB drives should only give 3TB (1TB from each drive, and then 1TB lost for parity). If you used raidz2, you would have 2TB (1TB from each drive, then 2TB lost for parity data). I am going to check this with some virtual drives. OK, with 4 test drives of 60,60,40,20 GB:
RaidZ: 60GB in "df", 80GB in "zpool list"
RaidZ2: 40GB in "df", 80GB in "zpool list"
btrfs raid1: 180GB in "btrfs fi show", can hold a 90GB file
zfs mirror: 20GB
Something is wrong here. VictualSquid fucked around with this message at 22:10 on Oct 11, 2012
# ? Oct 11, 2012 19:59 |
|
In theory you could make a vdev of the 1TB and 2TB essentially RAID 0ed together, but you can't have volumes like that as part of a RAIDZ set.
|
# ? Oct 11, 2012 20:55 |
|
Longinus00 posted:
It's still under development through Illumos, and the next big thing is feature flags, which you can read about here or if you google it there's a series of youtube videos from a developers' summit thing.
|
# ? Oct 11, 2012 21:34 |
|
Longinus00 posted:The root cause has nothing to do with timestamps and everything to do with breaking COW on your files. Ignoring timestamps means you'll only backup files when their size changes and not anytime their contents change. Thanks for the leads. I think you nailed the problem. Bad news is I discovered I cannot pass the --inplace option on the daemon side and my lovely NAS appliance can't do it on the client side. Good news is that I think I figured out how to reverse the roles of my two NAS so I can make the lovely one my dumb emergency offsite backup.
|
# ? Oct 11, 2012 22:27 |
|
So, for my problem of having 3,3,2,1 TB in disks, I seem to have the following options:
1) Group 2+1TB in lvm and install zfs in raidZ on the two 3TB disks + the virtual volume. Gives 6TB space and one disk of redundancy. Has probably the worst performance.
2) Partition the 3TB disks into 2+1TB and make 2 zfs raidz groups. Again 6TB space, one disk of redundancy. This will also be annoying to upgrade or recover.
3) Partition the disks a bit more intricately and make mirrored pairs in zfs. 4.5TB of space and 1.1 disks of redundancy. Probably the worst variant, might also be the most performant.
4) Use btrfs in raid1 mode. Same thing as 3) only the partitioning gets done automatically. Will probably be the easiest to upgrade. But uses btrfs instead of zfs.
Anybody got any good ideas? I am tending a bit towards the btrfs solution right now. Especially, has anybody any idea what sorts of problems I might get with zfs on lvm in variant 1? Also: It seems like adding a 2TB disk to a RaidZ of 4x3TB disks reduces your total storage capacity. Someone seems to have forgotten that the I in Raid stands for inexpensive.
|
# ? Oct 11, 2012 22:43 |
|
You can't add a disk to a RAIDZ set after it's created. And you're correct, 4x3TB RAIDZ would have 9TB whereas 4x3TB + 2TB would be 8TB. The reason is that in a RAIDZ the smallest disk sets how much it uses from every member, so it's really a 5x2TB. Remember, the I in RAID stands for Independent!
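That capacity rule is easy to sanity-check with a quick sketch (the function name is mine; real pools lose a bit more to metadata and padding):

```python
def raidz_capacity(disks_tb, parity=1):
    """Usable raidz capacity: every member is treated as the size of
    the smallest disk, and `parity` disks' worth goes to parity."""
    usable_disks = len(disks_tb) - parity
    return min(disks_tb) * usable_disks

# 4x3TB raidz1 -> 9TB; adding a 2TB drive shrinks it to 5x2TB -> 8TB.
print(raidz_capacity([3, 3, 3, 3]))        # 9
print(raidz_capacity([3, 3, 3, 3, 2]))     # 8
# The 3,3,2,1 TB mix from earlier in the thread:
print(raidz_capacity([3, 3, 2, 1]))            # 3 (raidz1)
print(raidz_capacity([3, 3, 2, 1], parity=2))  # 2 (raidz2)
```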
|
# ? Oct 11, 2012 22:48 |
|
FISHMANPET posted:Remember, the I in RAID stands for Indendent ! wikipedia posted:RAID (redundant array of independent disks, originally redundant array of inexpensive disks)
|
# ? Oct 11, 2012 22:53 |
tonberrytoby posted:Also: It seems like adding a 2TB disk to a RaidZ of 4x3TB disks reduces your total storage capacity. Someone seems to have forgotten that the I in Raid stands for inexpensive. Keep in mind that if you're running out of space, you can always swap out a disk. Start with 4x3TB + 1x2TB, create a raidz1 and get 8TB of space. Fill it up. Replace the 2TB with a 3TB down the road, and your raidz1 turns into a 12TB array: you gain 4TB after resilvering onto the new disk.
|
|
# ? Oct 12, 2012 01:19 |
|
Delta-Wye posted:Keep in mind if you were running out of space in your storage space, you can always swap out a disk. Also, down the road I won't buy a 3TB; by then a 5TB will probably be more cost effective. And again I can't see any efficient way to upgrade this thing.
|
# ? Oct 12, 2012 01:33 |
|
You could buy a 5TB and slot it in, and you'd only be using 3TB of it; then replace your 3TB drives with 8TB drives or something, etc etc. ZFS was designed for Enterprise where you get 12 or 16 disks at a time and you can easily migrate data from a 12x500GB to a 12x4TB array, so disk by disk expansion was never a big deal. I bought 6 1.5TB drives and made a 5x1.5 RAIDZ array with 1 hotspare, then later on I stuck my 5x750GB drives into another RAIDZ and attached that on. This is now my second ZFS server, and each time I planned for expansion, but I'm pretty sure next time around I'm going to just get a Norco 4224 and stuff that full of drives, instead of upgrading what I've got now.
|
# ? Oct 12, 2012 02:37 |
|
I just bought a ZyXEL NSA325 to replace my Linksys NAS200 - does anyone know if I can simply pop my existing drives into the ZyXEL from the Linksys, or do I need to reformat them? They are configured as stand-alone drives, no RAID. EDIT: Okay, I was being lazy when I posted this - a little research showed that the NAS200 formats the disks as EXT2 and the NSA325 uses EXT4. I'm not 100% familiar with *nix file systems but I'm fairly sure that those are incompatible. It all turned out to be moot anyway, since the ZyXEL requires you to run an "initialization wizard" to set up the NAS, and that wipes the drives. foobar fucked around with this message at 04:38 on Oct 12, 2012 |
# ? Oct 12, 2012 03:10 |
|
FISHMANPET posted:ZFS was designed for Enterprise where you get 12 or 16 disks at a time and you can easily migrate data from a 12x500GB to a 12x4TB arrary, so disk by disk expansion was never a big deal. Once it fails embarrassingly I will report back.
|
# ? Oct 12, 2012 12:19 |
|
tonberrytoby posted:I am going to check this with some virtual drives. I just saw this: the zfs mirror is 20 GB because it's essentially a 4-way mirror, and a mirror is sized to its smallest disk. Those btrfs numbers sound fishy; if it's giving you literally half the total disk capacity, then it's doing something wrong. It's storing both copies of data on one disk for that to work out. It's creating two 90GB volumes and mirroring those, but there's no way to make two completely independent 90GB volumes out of those disks; all you can make is an 80 and a 100. To get to 90 you have to put 10GB from the 100 into the 80, and then you've got one disk (the 40GB) holding data for two mirror sets. How did you make the btrfs mirror?
|
# ? Oct 12, 2012 15:54 |
|
FISHMANPET posted:I just saw this, the zfs mirror is 20 GB because that's the size of your smallest disk. It's essentially a 4 way mirror with 20GB disks, because that's the size of the smallest disk. No, it's correct. Btrfs ensures that each block is duplicated across two different devices in its "raid 1" mode; there is no "creating two 90GB volumes". tonberrytoby posted:Looks that way. So I will probably leave the enterprise with starfleet and go for the btrfs variant. Make sure you use the newest kernel you can (3.5/3.6) because btrfs gets fixes added constantly.
|
# ? Oct 12, 2012 16:06 |
|
Can't sign up for the QNAP boards for some reason, so while they're figuring that one out I'll ask a question here: is it possible to change the name of a share? I could do that on my very unfancy Silverstore, so it'd surprise me if you can't on a QNAP. Can't find anything in the manual and the only somewhat interesting Google results are from the QNAP boards, that suddenly won't even load.
|
# ? Oct 12, 2012 17:01 |
|
FISHMANPET posted:I just saw this, the zfs mirror is 20 GB because that's the size of your smallest disk. It's essentially a 4 way mirror with 20GB disks, because that's the size of the smallest disk. FISHMANPET posted:Those btrfs numbers sound fishy, if it's giving you literally half the total disk capacity, then it's doing something wrong. It's storing both copies of data on one disk for that to work out. It's creating two 90GB volumes and mirroring those, but there's no way to make two completely independent 90GB volumes out of those disks, all you can make is an 80 and a 100. To get to 90 you have to put 10 GB from the 100 into the 80, and then you've got one disk (the 40GB) holding data for two mirror sets. How did you make the btrfs mirror?
mkfs.btrfs -m raid1 -d raid1 /dev/vg/60a /dev/vg/60b /dev/vg/40 /dev/vg/20
What it displays is that it has 180GB of total space in the pool. If you write to a raid1 volume, each 1GB of data needs 2GB of pool space. I think it just puts a 1GB mirrored pair of blocks on the two emptiest drives every time it needs new storage. You could replicate this by hand for zfs in mirror mode:
Partitions:
60GB-a -> 50GB-a + 10GB-a
60GB-b -> 50GB-b + 10GB-b
40GB -> 20GB-a + 10GB-c + 10GB-d
20GB -> 20GB-b
then you pair up:
zpool create zfs_test mirror 50GB-a 50GB-b mirror 20GB-a 20GB-b mirror 10GB-a 10GB-c mirror 10GB-b 10GB-d
The difference is that btrfs assigns those in 1GB chunks, automatically. Also the zfs setup will be annoying to repair or upgrade, especially if the 40GB disk fails.
|
# ? Oct 12, 2012 17:15 |
|
So a btrfs mirror is always just two-way, and it spreads the data out in the best way possible? That's pretty cool.
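That allocation strategy ("put each mirrored 1GB chunk pair on the two drives with the most free space") can be sketched to check the numbers from the virtual-drive tests above. This is a simplified model of btrfs raid1 chunk allocation, not its actual code:

```python
import heapq

def raid1_capacity(drives_gb):
    """Greedily place mirrored 1GB chunk pairs on the two drives with
    the most free space, and count how many pairs fit."""
    heap = [-free for free in drives_gb]  # max-heap via negation
    heapq.heapify(heap)
    chunks = 0
    while len(heap) >= 2:
        a, b = -heapq.heappop(heap), -heapq.heappop(heap)
        if b == 0:  # fewer than two drives have free space left
            break
        chunks += 1
        heapq.heappush(heap, -(a - 1))
        heapq.heappush(heap, -(b - 1))
    return chunks

# The 60,60,40,20 GB test drives: 90GB usable, matching the thread,
# versus 20GB for a zfs 4-way mirror (size of the smallest disk).
print(raid1_capacity([60, 60, 40, 20]))  # 90
print(min([60, 60, 40, 20]))             # 20
```

With a two-drive pool this degrades to a plain mirror (the smaller drive's size), and with one huge drive plus small ones the leftover space on the big drive simply can't be mirrored anywhere.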
|
# ? Oct 12, 2012 17:28 |
|
Is there a set of btrfs forward migration tools available? I keep seeing on-disk format changes and that just makes me nervous about ever using btrfs for a large set of data.
|
# ? Oct 12, 2012 19:05 |
|
necrobobsledder posted:Is there a set of btrfs forward migration tools available? I keep seeing on-disk format changes and that just makes me nervous about ever using btrfs for a large set of data. Where have you seen disk format changes?
|
# ? Oct 12, 2012 20:20 |
|
Longinus00 posted:Where have you seen disk format changes?
|
# ? Oct 12, 2012 21:51 |
|
Trying to set up a cron job on NAS4Free:quote:/usr/local/bin/rsync --stats --progress -vhr -e "ssh -p 12345 -i /.mykey" user@somewhere.net:/some/remote/dir/ /some/local/dir/ It seems to work fine if I run it from a shell, but if I click "Run Now" in the web interface I get "Failed to execute cron job." Anybody know why that might be? fletcher fucked around with this message at 04:16 on Oct 14, 2012
|
# ? Oct 13, 2012 22:28 |