|
Megaman posted:Semi-silly question. I'm buying an Antec case and 8 drives to make a FreeNAS RAIDZ3 array. Is it smarter to use two 5-in-3 ICY Dock bays to fit these, or just put them in the case without the ICY Dock bays? I know people have mentioned the backplanes won't fail, but if they do they can wipe out 5 drives with them. Do people recommend docks over no docks? I've alluded to this concern before; the bays have gotten not-so-great reviews on Newegg. I'm using something similar in mine, but it's just a 4-in-3 adapter thing with no backplane. I did it just to keep the two 4-disk vdevs physically together, so swapping a drive after a failure is easier. I wouldn't use anything with a backplane for this situation; try to find one that's just a cage with no electronics in it. You won't have hot-swap capability, but I don't think that's really worth the money in this instance.
|
# ? Dec 17, 2012 19:07 |
|
chizad posted:The closest equivalent Storage Spaces has to RAID5/RAIDZ1 really works more like Drobo's BeyondRAID. You can throw a bunch of different size disks at it, *magic* happens, and you get a storage pool that both offers redundancy and makes the most efficient use of the disks. (It's not really magic, of course, but how it's carving things up behind the scenes is hidden from you. The BeyondRAID section in that wiki article I linked does a good job of explaining how a Drobo might handle different size disks. I'm assuming Storage Spaces works in a similar way.) There are a few others: - Flexraid (supports pooling, different sized drives, etc. Works out to $60) - Snapraid (snapshot parity, 1 or 2 drive failures, command line, free) - disParity (snapshot only, data is all kept on its own drive, Windows only, free) There are also some that support drive pooling and different drive sizes, but "mirroring" only (as in, two copies of everything you have). Windows 8 Storage Spaces has this (as well as keeping three copies of everything), and there is also Drive Bender ($17.50-$29.95), which also maintains a filesystem on each disk (so you can pull out a drive and access it in any computer, although you won't have the file structure)
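For a rough feel for what that "magic" buys you: the usual rule of thumb for a BeyondRAID-style pool with single-drive redundancy is that usable space is roughly the total capacity minus the largest drive. A quick sketch (this is an approximation of the behavior, not the exact allocation algorithm either product uses):

```python
def usable_single_redundancy(sizes_tb):
    """Rough BeyondRAID-style rule: with single-drive redundancy the pool can
    use everything except the largest drive's worth of space."""
    return sum(sizes_tb) - max(sizes_tb)

# A mixed pool of 1TB + 2TB + 2TB + 3TB drives
print(usable_single_redundancy([1, 2, 2, 3]))  # 5 (TB) usable of 8TB raw
```

A plain RAID5 across those same four disks would have to treat each one as the smallest size (1TB), giving only 3TB usable, which is the whole appeal of this kind of pooling.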
|
# ? Dec 17, 2012 20:10 |
|
HP has started selling the N54L; looks like the only real change is a bump in CPU speed.
|
# ? Dec 17, 2012 20:11 |
|
Wild EEPROM posted:There are a few others: Snapraid isn't bad (the Elucidate GUI for it needs major work, though) and Flexraid is pretty good now; I did a small write-up on it a page or two ago. I haven't tried disParity yet. Storage Spaces was an annoying experience; I ran into some bugs with the reporting tools, and performance with parity was awful.
|
# ? Dec 17, 2012 21:35 |
|
I switched to using the WD Reds when the previous hard drives I was using (Hitachi 7k3000s) doubled in price due to their reclassification as enterprise drives when WD bought Hitachi. So far they have worked great. I am currently using four of them in conjunction with two Hitachi 7k3000s set up in RAID 6 on an Areca 1880ix-12 RAID card. Performance has been great and the SAS support with the case I am using (Norco 4216) has been awesome. I love being able to support all my drives on 4 cables instead of 11 SATA cables. RAID expansion has been a really nice feature, although it does take about 30 hours to add in a new drive. Currently sitting at 18TB (14TB usable), with 12TB (8TB usable) in RAID and 6TB in other drives. Unfortunately the 6TB in free-floating drives are all 1.5TB drives as opposed to the 2TB I use in my array, so I can't add them. TheGreySpectre fucked around with this message at 21:54 on Dec 17, 2012 |
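For anyone checking the math on figures like "12 TB (8 usable)": RAID6 dedicates two drives' worth of capacity to parity, so with equal-size drives the usable space is simply (n - 2) times the drive size. A quick sanity check:

```python
def raid6_usable(n_drives, drive_tb):
    """RAID6 stores two drives' worth of parity, so usable space is n-2 drives."""
    assert n_drives >= 4, "RAID6 needs at least 4 drives"
    return (n_drives - 2) * drive_tb

print(raid6_usable(6, 2))  # 8 (TB) usable from six 2TB drives, 12TB raw
```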
# ? Dec 17, 2012 21:50 |
|
Froist posted:Hijacking this topic, what's the difference between software RAID5 and Storage Spaces (in Windows 8)? I was planning on switching my N40L over from Ubuntu/ZFS to Windows 8/Storage Spaces, but if I can stick to Win7/RAID5 I'd probably be happier with that. I've got no plans to upgrade the capacity of the pool (it's 4x2TB = 6TB with redundancy at the moment), just hadn't realized you could do software RAID in Windows. I guess the performance is better than Storage Spaces too? The performance of software RAID 5 isn't anything special, but it's more than enough to stream media to two other computers while my parents watch poo poo on it in the lounge room. It probably also helps that it's one of the other computers which is the sabnzbd machine and handles most of the more intensive work.
|
# ? Dec 18, 2012 02:25 |
|
The last few pages of this thread have cemented that I'm most likely going to pick up the N40L/N54L with between four and six WD Red 3TB drives running in RAID5 (assuming I can shove them in there), plus a small SSD, running FreeNAS or one of the other flavors recommended in this thread. For someone who only really needs to back up his photos, videos and music, and possibly some minor media streaming, would this be a viable option? Though if people think that building my own fileserver is the best option, I'm open to suggestions. Being an IT guy who administers multiple Xsans and file servers, I feel like a bit of a scrub asking advice and possibly being totally wrong about how many drives I can put in the N40L. pr0digal fucked around with this message at 04:05 on Dec 19, 2012 |
# ? Dec 19, 2012 03:54 |
No need for an SSD; there is an internal USB port that is perfect for a thumbdrive, on which you can install the embedded version of NAS4Free or whatever you end up going with. On the previous page of this thread there is some discussion about how to get a 6th drive in there. Why RAID5 over RAIDZ?
|
|
# ? Dec 19, 2012 04:12 |
|
fletcher posted:No need for an SSD, there is an internal USB port that is perfect for a thumbdrive, on which you can install the embedded version of NAS4Free or whatever you end up going with. In actually reading the thread backwards, I realize my mistake on both counts: RAIDZ looks to beat out RAID5, and I didn't notice the internal USB port until I read back a few pages. Now it's a matter of pricing everything out and deciding if I want to go with the 2TB Red drives or the 3TB Red drives. The 2TB option is cheaper and I don't need the ~15TB that the 3TB Reds offer. And then slowly buying everything and tripping over myself as I dip back into a non-OS X/Windows operating system for the first time in years... My ideal plan is to back up the photos scattered across my multiple external drives (somehow, I don't know how yet) to both the NAS and also to CrashPlan+ (which my family thankfully has a family account for), so I'll have local backups as well as backups on the web. Two years working in TV with an LTO5 system has drilled offsite backups into my head. pr0digal fucked around with this message at 04:35 on Dec 19, 2012 |
# ? Dec 19, 2012 04:26 |
|
If you're going to use 5 or 6 drives you should really consider bumping up to raidz2. Especially if you're going to use 3TB drives.
|
# ? Dec 19, 2012 04:34 |
|
Galler posted:If you're going to use 5 or 6 drives you should really consider bumping up to raidz2. Especially if you're going to use 3TB drives. 5 or 6 drives is my ultra best case scenario, most likely I'll be putting 4x2TB drives in there at least at first due to budget constraints. I would love to do 6x2TB in RAIDZ2 but I'm not totally sure I can afford that...at least at the moment. Looking back a few pages, DrDork's post about the workflow for his girlfriend the photographer looks pretty good: do all my work locally on my system and external drives, then shift it over to the NAS/CrashPlan when I'm done. At least something similar. This is what I get for posting before reading more. pr0digal fucked around with this message at 04:56 on Dec 19, 2012 |
# ? Dec 19, 2012 04:37 |
|
6 drives should definitely be RAID-Z2 IMO. If you absolutely had to only use 5 drives, I'd be OK with RAID-Z1/RAID5, but I live dangerously.
|
# ? Dec 19, 2012 04:37 |
|
Galler posted:If you're going to use 5 or 6 drives you should really consider bumping up to raidz2. Especially if you're going to use 3TB drives. Why is that?
|
# ? Dec 19, 2012 05:25 |
|
crm posted:Why is that? Rebuild time on 3TB will probably take a while and if another drive shits the bed you're boned. Double parity helps eliminate that.
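To put a rough number on "a while": a rebuild has to write every sector of the replacement drive, so capacity divided by sustained throughput gives a hard lower bound. The 100 MB/s rate below is an assumption for illustration; a rebuild on an array that's still serving data will be considerably slower:

```python
def rebuild_hours(capacity_tb, mb_per_s):
    """Lower-bound rebuild time: every sector of the replacement drive written
    at a sustained rate. Real rebuilds on a busy array take longer."""
    total_bytes = capacity_tb * 1000**4          # vendors count decimal TB
    seconds = total_bytes / (mb_per_s * 1000**2)
    return seconds / 3600

print(round(rebuild_hours(3, 100), 1))  # 8.3 hours minimum for a 3TB drive
```

That's a full working day (or longer) of every surviving drive being hammered, which is exactly the window where a second failure kills a single-parity array.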
|
# ? Dec 19, 2012 05:36 |
|
LmaoTheKid posted:Rebuild time on 3TB will probably take a while and if another drive shits the bed you're boned. Double parity helps eliminate that.
|
# ? Dec 19, 2012 06:30 |
|
pr0digal posted:5 or 6 drives is my ultra best case scenario, most likely I'll be putting 4x2TB drives in there at least at first due to budget constraints. I would love to do 6x2TB in RAIDZ2 but I'm not totally sure if I can afford that...at least at the moment e; Also, if you're going to go with a workflow like my girlfriend has, make sure you've got some system set up so there's always at least two copies of the data at every step--either leave the originals on the card while you're working on them locally before pushing them over to the NAS/CrashPlan, or have something like Norton Ghost set up to do nightly/hourly/whatever backups, so if something disastrous happens you've only lost some relatively small amount of work. AND (probably more likely, if your issues are anything like my girlfriend's) so that when you accidentally delete/overwrite something that you needed, you can go back and pull a previous copy easily. DrDork fucked around with this message at 06:38 on Dec 19, 2012 |
# ? Dec 19, 2012 06:35 |
|
DrDork posted:Note that there is no particularly easy way to expand from a 4-drive array to a 5- or 6-drive array, other than copying all your data somewhere else temporarily, so keep that in mind when you're setting things up. After mulling it over, I will be going with the 6-drive array, even if it will take longer to get everything together. Dual parity is great and made my job less stressful when one of our drives failed and we had to wait a few days for a replacement, so there is no reason not to have it in my home solution.
|
# ? Dec 19, 2012 06:37 |
|
Wild EEPROM posted:There are a few others: Thanks. I knew there were others, but BeyondRAID was the only one that came to mind. pr0digal posted:The last few pages of this thread have cemented that I'm most likely going to pick up the N40L/N54L with between four and six WD Red 3TB drives running in RAID5 (assuming I can shove them in there) plus a small SSD running freeNAS or one of the other flavors recommended in this thread. Adding a fifth drive is as simple as getting a 5.25"->3.5" bracket to install it in the optical bay, then a SATA cable and a molex->SATA power adapter. Like someone else said, a few pages back there's a link to an adapter bracket that will apparently let you fit two 3.5" drives in the optical bay, and you can run the sixth off the eSATA port. I actually wish I'd known about this adapter a few weeks ago when I ordered my N40L setup. I thought getting a sixth 3.5" drive required hardware hacking (to make room for it to physically fit), so I only got 5 drives. Oh well, I've still got 8TB usable (5x 3TB WD Reds in a RAID-Z2), and am only using like 15% of it right now, so I should be good for a while.
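Incidentally, the "8TB usable" from five 3TB drives in RAID-Z2 checks out once you account for the two parity drives and the decimal-TB (vendor) vs binary-TiB (OS) difference. A quick sketch of that arithmetic:

```python
def raidz2_usable_tib(n_drives, drive_tb):
    """Usable space for equal-size drives in RAID-Z2: two drives' worth of
    parity, with vendor (decimal) TB converted to the binary TiB the OS shows."""
    tib_per_drive = drive_tb * 1000**4 / 1024**4
    return (n_drives - 2) * tib_per_drive

print(round(raidz2_usable_tib(5, 3), 1))  # ~8.2, matching the "8TB usable" above
```

(This ignores ZFS's own metadata overhead, which shaves off a bit more.)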
|
# ? Dec 19, 2012 06:41 |
|
Ok, so to do a 6-drive RAID-Z2 via FreeNAS on the N40L/N54L, what hardware do I need beyond the machine, 6 hard drives, and a USB stick?
|
# ? Dec 19, 2012 06:55 |
|
You need some spacers like these so you can fit two 3.5" drives in the optical bay. You also need an eSATA-to-SATA cable so you can route one of those drives to the eSATA port on the back. You'll also want to flash a modified BIOS to allow for better speeds on the SATA port that would otherwise have been used for the optical drive. But that's about it. An Intel PRO NIC is also recommended, but that's true regardless of whatever else you feel like putting into the thing.
|
# ? Dec 19, 2012 07:08 |
|
crm posted:Ok, so to do the 6 drive RAID-Z2 via FreeNAS on the N40L/N54L, what hardware do I need beyond the machine, 6 hardrives and a USB stick? In addition to the drive mounting and connecting goodies in the above post, don't forget some memory; ZFS is a RAM hog. Personally I just bought 8GB of DDR3 and didn't even worry about ECC or anything.
|
# ? Dec 19, 2012 07:56 |
|
While you probably don't need ECC for a home rig, it literally costs $1-$2 more for 8/16GB of ECC versus non-ECC RAM, so there's really no reason not to get it.
|
# ? Dec 19, 2012 08:06 |
|
Would Raid 6 in an N40L give you double-parity AND dynamic expansion if you wanted to start with fewer drives?
|
# ? Dec 19, 2012 09:04 |
|
If you're putting 6 drives in an N40L, don't forget a molex to 2x SATA adapter if you don't have one laying around.
|
# ? Dec 19, 2012 14:08 |
|
DrDork posted:While you probably don't need ECC for a home rig, in that it literally costs $1-$2 more for 8/16GB of ECC vice non-ECC RAM, there's really no reason not to get it. I want to live where you get your ECC RAM so cheap. Here it costs 90€ versus 50€ for 16GB of the cheapo ones.
|
# ? Dec 19, 2012 15:47 |
|
DrDork posted:You also need an eSATA to SATA cable so you can route one of those drives to the eSATA port on the back. You'll also want to be picking up a BIOS flash to allow for better speeds on the SATA port that would have otherwise been used for the optical drive. 1) Why the Intel PRO NIC? 2) Is there any reason not to just use a SATA expansion card instead of mucking about with BIOS changes and eSATA -> SATA cables?
|
# ? Dec 19, 2012 15:59 |
|
crm posted:1) why the intel pro nic? 1. I think the reason for getting an Intel NIC is that the FreeBSD driver for the built-in NIC is really lovely, which affects FreeNAS/NAS4Free's performance. 2. Updating your BIOS is free
|
# ? Dec 19, 2012 16:04 |
|
astr0man posted:1. I think the reason for getting an Intel NIC that the FreeBSD driver for the built-in NIC is really lovely, which affects FreeNAS/NAS4Free's performance.
|
# ? Dec 19, 2012 16:17 |
|
porkface posted:Would Raid 6 in an N40L give you double-parity AND dynamic expansion if you wanted to start with fewer drives? Not all RAID is equal. If you want to go that route you need to use md and start with at least 4 drives.
|
# ? Dec 19, 2012 16:53 |
|
Tornhelm posted:The performance of software RAID 5 isn't anything special, but it's more than enough to stream media to two other computers while my parents watch poo poo on it in the lounge room. It probably also helps that it's one of the other computers which is the sabnzbd machine and handles most of the more intensive work. Sorry to keep questioning you, but you seem to have the setup I want. I did a bit of reading around after this, and everything I saw implied that Windows 7 removed the RAID 5 option. Are you using 3rd-party RAID software, or is there a different way of getting it working? One of the "workarounds" I saw mentioned was "Install Windows 7 -> Install VirtualBox -> Install linux VM with MDADM -> Give VM access to drives -> Access from Windows via virtual network" which just sounds hideous.
|
# ? Dec 19, 2012 17:36 |
|
Froist posted:Sorry to keep questioning you, but you seem to have the setup I want . I did a bit of reading around after this and everything I saw implied that Windows 7 removed the RAID 5 option. Are you using a 3rd party RAID software, or is there a different way of getting it working? One of the "workarounds" I saw mentioned was "Install Windows 7 -> Install VirtualBox -> Install linux VM with MDADM -> Give VM access to drives -> Access from Windows via virtual network" which just sounds hideous. If you're going to do this, why wouldn't you just keep your current Linux/ZFS setup?
|
# ? Dec 19, 2012 17:44 |
|
E: nevermind, asked the same thing as ^^^
|
# ? Dec 19, 2012 17:53 |
|
I haven't had any problems with the Broadcom NIC in my N40L (it's not a Realtek). It's not as good as an Intel, but it's likely going to work fine for most people. Most if not all HP servers ship with Broadcom cards and run fine with FreeBSD. If you were trying to push line-rate gigabit worth of 68-byte packets I'd say get an Intel, but that would be a terrible scenario for a NAS. Ninja Rope fucked around with this message at 18:02 on Dec 19, 2012 |
# ? Dec 19, 2012 18:00 |
|
Longinus00 posted:Not all RAID is equal. If you want to go that route you need to use md and start with at least 4 drives. md is software RAID, right? But different from ZFS? Sorry for all the dumb questions. So there's: 1) ZFS/RAIDZ (right?), 2) software RAID5/6 (mdadm?), 3) hardware RAID5/6, 4) ???? Which of these will let me: 1) expand (I set up with 4 drives and want to add 2 more), 2) recover if the machine it's running on blows up, 3) (something else I'm not thinking of)? What's my best option? Probably going to run FreeNAS on it and do nothing but serve up files.
|
# ? Dec 19, 2012 18:18 |
|
crm posted:md is software raid right? But different from ZFS? RAIDZ and mdadm are both software RAID. ZFS/RAIDZ is a copy-on-write filesystem and eliminates the write hole that's present in RAID5. ZFS is a little more elegant than mdadm in that you just give it the raw disks and it takes care of everything for you, whereas with mdadm you build the array and then format it with the filesystem you want, set mount points, etc. I know you can futz around with ZFS too, but it's more automated (and doesn't take forever to build the array; it's just create the pool and off you go). There are fancy ways to do expansion with ZFS, but IMO the best way to expand any array is to copy your data off, destroy, and recreate. However, that's not really possible if you're storing terabytes and terabytes of data. A lot of people talk about using ZFS with partitions but that poo poo never makes sense to me, so I'll let one of them explain it or pick apart my post if I got anything wrong.
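The single-parity math behind both RAID5 and RAID-Z1 is worth seeing once: the parity block is the byte-wise XOR of the data blocks, so any one missing block can be rebuilt from the survivors. A toy sketch (this ignores ZFS's variable stripe widths and checksumming; it's just the core recovery idea):

```python
from functools import reduce

def parity(blocks):
    """Parity block = byte-wise XOR of all the given blocks (RAID5/RAID-Z1 style)."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three "data drives"
p = parity(data)                     # what the parity drive would store

# Lose the middle drive; rebuild its block from the survivors plus parity
rebuilt = parity([data[0], data[2], p])
print(rebuilt)  # b'BBBB'
```

XOR-ing the parity with the surviving blocks cancels them out and leaves the missing block, which is also why losing a second drive is fatal with only one parity block per stripe.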
|
# ? Dec 19, 2012 18:25 |
|
Personal experience... Having run RAID5 at home for the last 8-10 years, I'm pretty comfortable with the limitation of a capped volume size and then building a new system/array in 3-5 years and just copying my data over. Expansion is really only practical for those on an extremely tight budget or anyone building a system with far more room for expansion. By the time you fill a starter volume, it will be nearly impossible to find drives that match the size of today's drives. I can still (barely) find 200GB PATA drives for my oldest array, but the cost and power consumption make that pointless.
|
# ? Dec 19, 2012 19:31 |
|
porkface posted:Personal experience... I got a UPS last year, and a crashplan account and I stopped caring about ZFS at home (moved to Ubuntu and mdadm/raid5), not to mention I really don't have enough important poo poo to worry about a write hole.
|
# ? Dec 19, 2012 19:53 |
|
ZFS on Linux is pretty nice so far, but I have really been spoiled by the in-kernel CIFS server + NFSv4 ACLs on Solaris. Can't see myself migrating my pool over unless a complete meltdown prompts it or something. Not to mention my zpool version is apparently 31, so I may have hosed myself into a corner there. I am totally OK with expanding in stages though; my base unit for my machine was 6-drive 2TB RAID-Z2s, and I just bought 6 at a time until I was all set. I have so many extra drives laying around though; if I can find a cheap chassis to hold them all, I might just build a JBOD box or something to serve as a backup. Got 1.5 and 2TB drives coming out of my butt.
|
# ? Dec 19, 2012 20:21 |
|
crm posted:Which of these will let me md on Linux will let you do #1; ZFS will not. It should be noted that md is for Linux, not BSD. I have no experience with gvinum and it doesn't sound like many people here do either. movax posted:ZFS on Linux is pretty nice so far, but I have really been spoiled by the in-kernel CIFS server + NFSv4 ACLs on Solaris. Can't see myself migrating my pool over unless a complete meltdown prompts it or something. If you have a lot of oddly sized drives and are going to use it as a pure write-once backup, then you should give btrfs a whirl.
|
# ? Dec 19, 2012 21:53 |
|
astr0man posted:If you going to do this why wouldn't you just keep your current Linux/ZFS setup? I've questioned myself too, but basically I want to roll two computers (fileserver + DVR/playback) that sit right next to each other into one. I could stick with ZFS and use MythTV, but I really like Windows Media Center - I've been using it for years. ZFS has been fast/stable as anything for me, but really I don't need blinding speed for playing back video files.
|
# ? Dec 20, 2012 00:52 |