teamdest
Jul 1, 2007
Edit: Information Updated as of 11/15/2012!

Ground Rules: Don't discuss illegal material. Seriously. Check the thread before asking a question, at least the first posts and the last couple of pages. Also, this isn't the enterprise storage thread, so if things get complicated it may be time to call in the professionals.

1. The Basics

RAID (Redundant Array of Independent Disks)

A RAID is a group of hard drives set up so that some amount of data is duplicated amongst them. This prevents a single drive from taking your data with it when it dies. RAIDs are almost always built with equally-sized drives; mixing sizes means each drive will be treated as if it were only as big as the smallest drive used.

RAID-1 (Mirroring) - Two or more drives are written to in parallel. Capacity is one disk; you can lose all but one of the disks without losing information.

RAID-5/6 (Distributed Parity) - Three or more drives distribute the parity information for the data across all drives evenly. Total size is one drive less than your array size (2 less for RAID-6), and you can lose one drive (again, 2 for RAID-6) without losing data. If you lose power after data is written but before the parity information is updated (the "write hole"), you may lose data when the array comes back online. Additionally, rebuilding the array after a drive loss is a disk-intensive process that may cause another drive to fail. If you lose a 2nd (3rd for RAID-6) drive before the rebuild finishes, your data is gone.
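
For example, six 2TB drives in RAID-5 give (6 - 1) x 2TB = 10TB of usable space and survive one drive failure; the same six drives in RAID-6 give (6 - 2) x 2TB = 8TB and survive two.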

Hardware and Software RAID

RAID can be done in hardware (a RAID controller) that presents the disks to the OS as a single drive and handles the work on the back end, or in software such as MDADM, LVM, ZFS RAID-Z, Windows Home Server's duplication tool, or other systems where the OS creates and manages the arrays.

Hardware RAID tends to be more reliable and often faster, since there are dedicated parity chips, write-back cache, internal battery backups, etc. However, it is less flexible and more expensive (most controllers will do some combination of 1, 5/6, maybe 10, or just pass the drives through without making an array), and if the hardware dies you will usually need an identical replacement to get the array working again. There are also "hardware" RAID devices that use the CPU to do the work and are therefore somewhat slower than true hardware.

Software RAID is cheaper than dedicated hardware, and can provide benefits in flexibility and features, such as WHS's ability to mirror only certain portions of drives or folders. Software RAID is coming into its own even in the enterprise sector, since it can do things a controller can't.
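
To give a flavor of the software route, here's a minimal mdadm sketch (device names like /dev/sdb1 are placeholders for your own drives, so adjust before running anything):

code:
# create a 4-drive RAID-5 array out of example partitions
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# watch the initial build/resync progress
cat /proc/mdstat

# put a filesystem on the array and mount it
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/storage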

Spare Drives, Hot Swapping

When a drive fails (when, not if), the array must rebuild onto a new drive. Some controllers allow for drives that are online but dormant, called hot spares; when a drive dies, the controller will automatically rebuild the array onto the spare. If you do not have hot spares, you will either need to power the system off to switch drives or have a system that supports changing them while it is running (called hot swapping). Drives sitting on a shelf outside the system are "cold spares".
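
The software-RAID equivalent is just as simple; a rough mdadm sketch (device names are examples only):

code:
# add a hot spare to an existing array; it sits idle until a member fails
mdadm /dev/md0 --add /dev/sdf1

# check which devices are active and which are spares
mdadm --detail /dev/md0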

Network Attached Storage

This is what most people want, a system on the network that you can read/write to from other computers. These can be simple appliances or custom built solutions, but they all use some fairly common parts and software.

Appliances

Several companies make prebuilt boxes that you can just plug drives into and put on the network. Most common are Synology's Diskstations and QNAP's SOHO Devices.

In this field, special note goes to HP's N40L Microserver, which is far and away more powerful than most appliances and much closer to being a custom device while still retaining some ease of use and standardization. The Microserver has a huge hacking/modding community behind it that has often solved a problem before you could even encounter it.

Custom NAS
If you need more than ~6 hard drives or want to run resource-intensive software then you'll need to build a full-size computer. You'll need an enclosure, controller, drives, an OS, and some skill with configuring things.

CPU, motherboard, and memory are relatively simple: buy quality parts and make sure your motherboard will fit in the case you choose. Common choices here are Intel Core i3s or AMD Phenoms for low/midrange systems, and i7s for performance servers.

Good controllers are ones with dedicated parity and RAID hardware, support for all the drives you intend to use, and compatibility with your system. 3Ware, Adaptec, Areca, and Highpoint are all companies selling reliable hardware RAID cards, but there are a few standouts. Dell and HP, for instance, often use rebranded LSI or Adaptec cards, sometimes available more cheaply in such systems or on eBay. Before you buy, make sure the chip your controller uses is compatible with your choice of OS (Marvell, for instance, has shoddy Linux support on several chips). The big name in this area is the IBM ServeRAID M1015, a controller available cheaply as a used part from many IBM servers, which can be reflashed to the latest LSI firmware and supports the new 3TB drives (which not all controllers can handle).

The case you choose will depend on how many drives you need, but also take time to consider whether it has adequate cooling and whether power will be a problem (not many power supplies have dozens of SATA connectors, so you may need to wire extras up or have room for a second power supply). Some cases come with dual redundant power supplies, which can help prevent sudden power losses.

For the drives themselves, Western Digital and Seagate are the two big names remaining, though others are around here and there. There have been some bad series of drives (Deskstar 75GXP), but each company has its ups and downs. Right now the big thing is the WD Red drives, which are high performance with low power consumption and are designed with servers in mind.

The choice of operating system will be dictated by your knowledge and comfort level, but generally speaking you'll either be using Linux, FreeBSD, or Solaris on some level. Windows Home Server is pretty much dead and Drive Extender is no more.

Linux provides a wide suite of tools and broad hardware and software support. MDADM and LVM are both solid software RAID implementations, there is an enormous variety of server software available, and between CentOS, Debian, and Ubuntu and their respective communities, most questions about setting up a basic NAS have already been answered.
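
As a rough sketch of how the two layers fit together (assuming an md array already exists at /dev/md0; names and sizes are placeholders):

code:
# use the RAID array as an LVM physical volume
pvcreate /dev/md0
vgcreate storage /dev/md0

# carve out a logical volume that can be resized later
lvcreate -L 500G -n shares storage
mkfs.ext4 /dev/storage/shares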

FreeBSD is another common option. The setup here is very similar to a Linux system, just with a different underlying operating system. The big reason to go with a FreeBSD base instead of Linux is more mature support for ZFS.

Solaris is a pretty distant third place at this point owing to the acquisition of Sun by Oracle and the effective cancellation of the OpenSolaris project, so unless you're tied to the system for other reasons there's no point to jumping in now. The killer feature was ZFS, and that is now well-implemented on FreeBSD and Linux.

There are also some custom distributions that create nicer web-based graphical interfaces for the server, much like you would find on the smaller appliances. FreeNAS is the biggest of these by far, and has a pretty large user base to answer questions. There are also systems like OpenFiler and unRAID. Note that unRAID requires a license for servers with more than 3 disks or if you require some features.

User er0k also brought up NASLite, Open Source but $30 for a license.

teamdest fucked around with this message at 07:39 on Nov 15, 2012


teamdest
Jul 1, 2007
FAQs:

If someone asks a question that doesn't seem to fit anywhere else but seems common or comes up often, it'll go here. For now, there are no questions.


Testimonials/Tutorials:

Managing Raid and LVM in Linux

manwh0r3 sounds off on Windows Home Server, including his experience with it.

er0k brings up NASlite, an alternative to OpenFiler/FreeNAS

King Nothing raves about the D-Link DNS-323/343

WindMinstrel pours his heart and soul into this testimonial of ZFS and Raid-Z

McRib Sandwich goes wild and shows how to build a ~$1600 DAS RAID system for ~$800!

Have you had a good experience with a product? A bad one? Had some trouble setting something up but finally got through it and want to let everyone know how to do the same? If you've got something to say, I'll put it in here.




Special thanks to kri kri, CrazyLittle, McRib Sandwich, and sharkytm for good topic ideas, support, and content. If anyone thinks I missed something, that I'm wrong about something, or that I'm generally an idiot, speak up and I'll try my best to correct it!

teamdest fucked around with this message at 04:00 on Mar 19, 2008

CeciPipePasPipe
Aug 18, 2004
This pipe not pipe!!
Just a little nitpicking: Western Digital does sell drives with more than a 1-year warranty; look for the "RAID Edition" drives. I think they have a 3- or 5-year warranty and are also rated for 24/7 usage. I don't know if it's just snake oil or if there is an actual quality difference.

Also, for software RAID on linux, mdadm is the weapon of choice.
Somewhat old Linux Software RAID HOWTO
Linux Software RAID Wiki
MDADM's main developer blog

CeciPipePasPipe fucked around with this message at 17:18 on Mar 18, 2008

IOwnCalculus
Apr 2, 2003





CeciPipePasPipe posted:

mdadm

I love mdadm :swoon:

I used this site when setting up the current iteration of my fileserver:

Managing RAID and LVM with Linux (v0.5)

This one I haven't used directly yet since I haven't yet bumped up against my current RAID's total capacity or close to it, but it looks incredibly interesting and I may give it a shot at some point later this year if I get a good deal on a 500GB/750GB drive:

Growing a RAID5 in MDADM
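
From what I've read, the short version looks something like this (a sketch only, not a substitute for that guide; back up first and expect the reshape to take many hours):

code:
# add the new drive, then tell mdadm to reshape the array onto it
mdadm /dev/md0 --add /dev/sdf1
mdadm --grow /dev/md0 --raid-devices=5

# once the reshape finishes, grow the filesystem to match
resize2fs /dev/md0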

For the record, my fileserver is a pretty simple setup, hardware-wise. Mid-tower Antec case on its side, some cheap Foxconn 945-based motherboard, a Pentium D 2140 CPU, a gig of RAM, and the main array I care about is 4x500GB in RAID5 via MDADM. I also have a secondary array I use as a scratch disk / storage for things easily replaced, which is two of my old 250GB drives in a RAID0.

It went smoothly enough and worked well enough (especially compared to my old abortion of a shitload of ~200GB drives and Windows Server 2003 Dynamic Disk RAID5) that I took what was left of my old fileserver and rebuilt it as another Ubuntu server box with a smaller RAID5 array (3x200GB), added BackupPC, and stuck it at my mom's house as an offsite backup for the files I really, really don't want to lose. Both boxes are set to email me in the event a drive goes down.

IOwnCalculus fucked around with this message at 17:39 on Mar 18, 2008

manwh0r3
Feb 22, 2003
Soy inocente... es mi primer día
From the main Windows Home Server thread, a summary of what Windows Home Server is:

What is WHS?

Windows Home Server is a very powerful home server with a lot of automation for novices and some powerful tools for the inner geek in all of us. It runs atop a simplified Windows Server 2003 R2. It features:
  • File and print sharing
  • Centralized backup
  • Disaster recovery functions
  • Single Instance Store
  • Health monitoring
  • Dynamic disk control (More on that later)
  • Windows Media Connect support
  • Remote access

Interesting. What are the system requirements?
  • Processor - 1GHz Pentium III or higher
  • RAM - 512MB RAM
  • Hard Drive - 80GB Internal Hard Drive
  • CD/DVD - Any Bootable DVD-ROM Drive (There will be no CD-ROM edition. Let it go)
  • Network - 10/100 Network Interface Card

But my linux server ran on a machine half as powerful...

The system requirements are a bit on the steep side, but I did have WHS running fine on a 500MHz P3. The only problem with running it on such a slow/old CPU was that it would take a while to balance the drives, and the web interface was also pretty slow. Apart from that it ran fine. The install will not let you install on a drive smaller than 80GB. Also, they recommend a 64-bit CPU, which I guess means at some point in the future it will only run on 64-bit machines.

Talk up these automated backup features

When you install WHS for the first time, along with the included client software on all the PCs in the house, it will automatically back up those computers overnight. It will also create an image of each hard drive in case of catastrophic failures, which still happen with spyware to this day. I find this feature highly desirable, as sometimes rollbacks are still infected.

In essence you set it and forget it.

The way it backs up the data is quite interesting, as it uses Vista's Single Instance Store, but in a more aggressive manner:

quote:

This means that the same data residing on more than one PC -- from Windows system files, applications and supporting DLLs to movies and MP3 tracks -- is stored just once. This drastically reduces the size of each image, and consequently the server's overall storage requirements. Subsequent backups are incremental, modifying only those clusters which have changed due to a file being added, edited or deleted.

quote:

They're seeing 15-19 TB of data stored in 300 GB or less of backup space

Also note: there is no support for domains.

Forget RAID. Embrace Drive Extender.

quote:

One of the truly unique features of Windows Home Server is Microsoft's home-grown Drive Extender technology which lets you add a hard drive -- internal or external -- and have its space added to the total pool of user storage instead of appearing to users as a new drive letter. It's not RAID, however, and doesn't require the use of identical drives.

Kick that controller to the curb, and kiss the inherent flaws of RAID goodbye. Now you can pool all those old HDs together in one box. No longer will your 4GB drives collect dust; now they can be put to use (until WHS tells you they are about to fail). You'll need two or more drives to allow the data mirroring to happen, but with 500GB drives now at their lowest prices, half-TB to full-TB setups are a possibility with significantly less risk.

The fact that you can use FireWire, USB, IDE, and SATA drives all at once fills me with glee.

Is this different than the disk spanning that's in XP? -Thug Bonnet

An evolution of the idea. Whereas WHS creates redundant copies of the data in case of failure, disk spanning does not. Also, spanning on XP does not support removable drives, such as flash drives or USB/FireWire drives.

Source

Fake edit: Copied mostly from incoherent's first post, but I will update later today/tomorrow with more up-to-date info, like the file corruption bug that affects WHS and my personal experience with it.

CeciPipePasPipe
Aug 18, 2004
This pipe not pipe!!
Also, here's another tip for Linux software RAID: make sure you put your swap partition on RAID as well (instead of striping the swap partition). I learned this the hard way when one of the drives tanked and the machine couldn't do a clean shutdown, since it was unable to swap back in from the failed drive. Fortunately it didn't hurt anything important.
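
The gist is to build a small RAID-1 out of one partition from each disk and swap onto that; a rough sketch (partition names are examples):

code:
# mirror a small partition from two disks and use it as swap
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkswap /dev/md1
swapon /dev/md1
# then point the swap entry in /etc/fstab at /dev/md1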

IOwnCalculus
Apr 2, 2003





CeciPipePasPipe posted:

Also, here's another tip for Linux software RAID: make sure you put your swap partition on RAID as well (instead of striping the swap partition). I learned this the hard way when one of the drives tanked and the machine couldn't do a clean shutdown, since it was unable to swap back in from the failed drive. Fortunately it didn't hurt anything important.

Are you setting yours up with a separate drive for the OS or no? I seem to have a never-ending supply of sub-100GB drives, which are still good for boot / swap disks - and it seems a lot easier to me, in the long run at least, to keep the OS completely separate from the data disks.

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues

IOwnCalculus posted:

Are you setting yours up with a separate drive for the OS or no? I seem to have a never-ending supply of sub-100GB drives, which are still good for boot / swap disks - and it seems a lot easier to me, in the long run at least, to keep the OS completely separate from the data disks.

I agree, I would never put the operating system on a raid array meant for file storage. My current fileserver setup has 5x 500GB drives in a RAID-5 for storage and 2x 20GB fireballs in a RAID-1 for boot. If the storage array goes south, it's invaluable to still have a safely bootable system in order to diagnose things. Keeping boot and data separate is really a must in my opinion.

edit: I'm not sure that's what he's saying, though. I think he's suggesting keeping swap on redundant storage instead of striped over a RAID-0.

admiraldennis fucked around with this message at 18:14 on Mar 18, 2008

Strict 9
Jun 20, 2001

by Y Kant Ozma Post
Any recommendations on an entry-level NAS? I'm trying to compare the cost of something like a D-Link DNS-323 for $160 against building my own system. I've built several gaming systems but I'm not exactly sure what to look for from a server perspective, especially in terms of not overpaying for features I don't need.

Maybe something like this?

Rosewill Case with 400W PSU. It has 4 external bays, and the included PSU is a good savings. Even if the PSU is only halfway decent I should be fine without a big graphics card and as long as I don't go crazy with the number of hard drives.

ASUS M2A-VM AM2. I have no idea. Highly rated board from a good manufacturer with 4 SATA ports. Probably more than I need?

AMD Sempron 64 3400, assuming that processor speed really doesn't matter in this case.

G.SKILL 2GB (2 x 1GB) 240-Pin DDR2 SDRAM DDR2 800. Since when was 2GB of RAM $40?

And then a Western Digital 750GB drive brings this up to about $350. Doesn't sound bad at all for an entry level NAS, and I bet I could make that cheaper if I could find a lower cost motherboard.

Strict 9 fucked around with this message at 19:14 on Mar 18, 2008

alo
May 1, 2005


Does anyone have a suggestion for a low-power motherboard/CPU combination with plenty of PCI slots for loading up with drives? An overclocked Core 2 Duo isn't exactly a requirement for simple RAID5/RAID-Z.

I could get away with having limited PCI slots if there were numerous SATA ports. The idea is mostly to load up a machine with 7+ drives and not have the overhead of a modern power-hungry machine.

Bonus points if it runs Solaris.

CeciPipePasPipe
Aug 18, 2004
This pipe not pipe!!

admiraldennis posted:

I agree, I would never put the operating system on a raid array meant for file storage. My current fileserver setup has 5x 500GB drives in a RAID-5 for storage and 2x 20GB fireballs in a RAID-1 for boot. If the storage array goes south, it's invaluable to still have a safely bootable system in order to diagnose things. Keeping boot and data separate is really a must in my opinion.

edit: I'm not sure that's what he's saying, though. I think he's suggesting keeping swap on redundant storage instead of striped over a RAID-0.

I put the OS on a RAID1 partition across 4 drives (which is way overkill, but useful to keep identical partition tables - also then I could boot from any of the four drives, in theory at least) and allocated the remaining parts of the disks to large RAID5 partitions (and some RAID1 swap again).

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib
I'll chime in with everything I've learned thus far.
1: DAS is great, but a good NAS can come close to USB2's actual (not max) throughput with a decent gigabit switch and good cabling. I've clocked our TeraStation Live 2TB (1.4TB available through RAID5) at 35MB/s, which is better than my DAS USB2 ATA100 drive, which tops out at 30. I just transferred a 19GB file in an hour over a 100Mbit card, connected to a gigabit network, while the computer was being heavily used. Pretty decent, if you ask me.

Check the charts over at smallnetbuilder. They are a great resource, which is why I asked that it be linked in the megathread. I did a bunch of research, and haven't found a single other site that has a consistent testing methodology.

It should be mentioned that Windows Home Server has some major issues, including data overwriting if a file is opened when something else is copied. It should be used with caution until the bugs are worked out. It also should be noted that its RAID/redundancy capabilities are questionable.

er0k
Nov 21, 2002

i like ham.
NASLite

Similar to FreeNAS mentioned in the OP, but it's Linux-based and much more stable and streamlined. It's not free, but it's worth the $30 IMO.

It can boot from hard disk, CD or a USB drive, supports SMB/CIFS, FTP, NFS, AFP, mDNS, Rsync, HTTP, Gigabit, IDE, SATA, SCSI, USB/FireWire and RAID.

It'll run on just about anything, has very low hardware requirements, and boots into an 8MB RAM Disk.

It's meant for low security situations, so if you need something with user management or disk quotas, this is not what you want.

I've been using NASlite for a couple years now, and have never had any problems with it. It's dead easy to set up and maintain, and has a great support community if anything should go wrong.

http://www.serverelements.com/

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

er0k posted:

NASLite

I've been meaning to build one of these for myself, do you mind if I bug you about some questions once I start setting mine up?

I'd also considered unRAID, which is free for the basic (3-drive) version, and $70 or $120 for the Plus (6-drive)/Pro (16-drive).

unRAID: http://lime-technology.com/

Good people, I've emailed one of their guys a few times. The nice thing is that it boots from a USB drive, and even has user level security for all paid versions.

EC
Jul 10, 2001

The Legend
I don't want to turn this into a WHS thread/debate by any means, but I have to say I can't recommend strongly enough that you stay away from it. Between the sluggishness and the current file corruption bug, I'm scared as hell that my data is on it right now. And the constant loving "storage balancing" routine is annoying as hell.

I used Server 2003 and a hardware RAID5 card with 4 IDE drives forever, and then moved into WHS when I needed to expand. So now I'm looking for the following things:

- A redundant file system, in some form or another.
- The ability to share the storage pool as one large share, as opposed to a lot of individual shares.
- The ability to scale fairly easily and well in regards to adding additional drives.

Right now I'm looking at unRAID, which seems like it will fit my needs pretty well. Now all I have to do is worry about migrating over from WHS. Right now I have a 300GB, a 250GB, and a 500GB. My plan is to:

- Buy two 750GB drives.
- Install unRAID, using one of the 750GB drives as parity.
- Detach each drive one at a time from WHS and add the files manually to the shares on the unRAID box.
- As I clear out the files on each drive, I'll add that drive to the array.

Any thoughts to the plan?

teamdest
Jul 1, 2007

Strict 9 posted:

Entry-level NAS hardware stuff

I think the biggest question for you is "how many drives do you ever plan to expand to?" If the answer is 4 or less, get pretty much the cheapest non-lovely motherboard you can find, an Athlon X2 or Sempron, a gig or two of RAM, and then software-RAID it in Linux. If you want to go beyond four drives, you should begin looking at a controller card. For a cheap motherboard, this MSI model seems like overkill for you, and it's cheaper than the one you were looking at.



Also: I've updated the OP a few times so far. I'm trying to make sure I credit everything I add but if I miss something please let me know.



edit: closing quotes are not my friends...

kri kri
Jul 18, 2007

EC, I am in the same boat. I am just not confident about Microsoft fixing their WHS bugs, seeing how long it has taken them so far, especially after reading this article:

http://www.anandtech.com/weblog/showpost.aspx?i=413

I read somewhere that Microsoft is already working on WHS v2, which is based on Server 2008. Hopefully they will have things ironed out by then.

I think I might go unRAID as well - it sounds like you can do multiple drives and use logical shares.

er0k
Nov 21, 2002

i like ham.

sharkytm posted:

I've been meaning to build one of these for myself, do you mind if I bug you about some questions once I start setting mine up?

Sure, send a PM, or my AIM is in my profile.

King Nothing
Apr 26, 2005

Ray was on a stool when he glocked the cow.


I think the D-Link DNS-323 deserves a mention in the OP's NAS section. It's the best value out there for a SOHO NAS, in my opinion.

Out of the box, the DNS-323 provides:

  • Space for two 3.5" SATA drives
  • RAID 0, RAID 1, JBOD, separate drives
  • Internet Access via FTP
  • UPnP AV for Storing and Streaming Media Files
  • iTunes server
  • Gigabit Ethernet Port
  • USB Print Server Port
  • E-mail alerts

It's small (4.1" x 7.8" x 5.2") and comes in an attractive black and silver aluminum case. I have mine sitting in my entertainment center with my DVD player and receiver and it looks like it belongs there. It's pretty much silent, too; I can only tell it's doing something by watching the LEDs blink on the front.

The best part about this thing though, is modding it.

DNS323 wiki/forums

There's this thing called fun_plug. Just by copying some files to one of the drives and rebooting the unit, you can have:

code:
    busybox-1.8.1
    dns323-utils-0.7.176
    uclibc-utils-0.9.28

    zlib-1.2.3
    pcre-7.4
    tcp_wrappers-7.6
    strace-4.5.16
    file-4.21
    e2fsprogs-1.40.2

 Available add-ons are:

  NFS:
    portmap-6.0
    unfs3-0.9.18            (user-space NFS daemon)
    nfs-utils-1.1.0         (kernel NFS daemon, needs kernel support!)

  SSH+SFTP:
    dropbear-0.50
    openssh-sftp-4.6p1

  HTTP:
    lighttpd-1.4.18

  NTP:
    ntp-4.2.4p4

  RSYNC:
    rsync-2.6.9

  MAYBE THESE TOO: (from the 0.3 release)
    strace-4.5.14
    file-4.21
    php-5.2.3

    libpng-1.2.18
    libjpeg-6b
    imagemagick-6.3.4_9
    jhead-2.7

    ruby-1.8.6
I haven't tried most of this, but I have set up rsync. Instead of doing RAID 1, I wanted an actual backup solution. So I put a couple of 500GB SATA drives into my unit and have a cron job set up to rsync the main drive to the backup drive every night. I only just set this up last night so I don't have much to say about it, but rsync only copies the changes, so the backup should be quite fast from now on. This doesn't give me much rewind capability, but I should be able to set up something to accomplish that.
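
For reference, the command the cron job runs can be as simple as the sketch below (the mount points are my guess at the DNS-323's layout, so check yours first):

code:
# nightly mirror of the main drive to the backup drive, run from cron
rsync -a --delete /mnt/HD_a2/ /mnt/HD_b2/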

If you're more experienced with linux, you can also install debian (etch or sarge) onto the DNS-323 and get even more functionality. Since you access it with chroot you keep the existing web interface functionality, and get stuff like AppleTalk/AFP support, better Samba support, encrypted partitions, and pretty much anything you want to compile yourself. The DNS-323 sports a 500 Mhz ARM processor and 64 MB of RAM so it's not a bad little linux box.



If you really want RAID 5 or just room for 4 drives in a single box, the DNS-343 is coming at some point. Most of the information is in German at this point, though; it looks like it will be released in Europe first.

Thread about DNS-343

King Nothing fucked around with this message at 00:49 on Mar 19, 2008

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich
Here's my little contribution to this thread:

Build-Your-Own Portable RAID Appliance!

While NAS is great, even over gigabit Ethernet you're never going to get SATA/eSATA speeds from a NAS box. One of the nicest things about NAS right now is the wide variety of portable offerings available, but why doesn't direct-attached RAID get the same portability love? Well, I wondered that too, and after enough digging, I decided to roll my own. What follows is the buildout I did for a friend that was looking for such a portable RAID solution.

First, the advantages of this approach are considerable:


* DAS blows NAS out of the water for raw speed, any day of the week.

* No computers to configure! This isn't a shuttle PC running linux; the RAID system is completely self-contained. If you don't like fiddling with software RAID implementations, this might just be the way to go. Once you've configured your RAID set, it's just plug-and-play simplicity.

* Separating the network function from the storage function lets you format your RAID set as you require. For me, I maintain a few classic Macs, which requires that I use HFS+ formatted disks if I want to store any old Mac files with resource forks. Very few NAS appliances out there will let you do this, but with a RAID appliance hooked up to my trusty PowerMac / OS X combo, I can have an HFS+ disk set being shared over AFP and SMB simultaneously. Total mixed-environment compatibility = win!

* You don't even have to give up network attachment! As I pointed out above, you can connect your RAID appliance to any networked computer and share to your heart's content. The best part about this is flexibility -- you aren't locked into the network filesystem protocols built into your NAS box.

* You don't even need a computer to add NAS capability! While my particular scenario suits hooking the RAID up to a computer to be shared, there are several bridgeboards available that bring NAS capability to DAS devices as well.



So that's all well and good, but do keep in mind some of the drawbacks and limitations of this approach, as well. Some of these, like the inherent rules and limitations of RAID, apply to both DAS and NAS boxes that offer those RAID levels; teamdest has provided a competent breakdown of RAID in the OP.


* Remember that unlike some "smarter" storage devices, you cannot grow the array size simply by swapping disks in the same set for larger ones down the road. While the RAID hardware considered for building this box is quite capable, even it does not offer this capability. Though that could theoretically change with later firmware, don't quote me on it, and certainly don't embark on building something like this expecting the capability to be added down the line.

* This thing isn't cheap. Entry-level NAS boxes can be had for around $300-$400; competent SOHO/prosumer ones, like the Infrant/Netgear ReadyNAS NV+, are about $800-$850 at the time of this writing. This RAID costs a bit more than even that. Whether or not the benefits of portable DAS RAID are worth it to you should be carefully considered.

Anyway, suffice it to say that I think the portable RAID appliance is a winning proposition. So without further ado, here's how to build this identical-but-overpriced $1600 box:



...for less than 60% of the price. Separately buying the same parts and putting them together yourself comes out to around $940 (shipping not included) at the time of this writing.


Breaking It Down

Believe it or not, the RAID system pictured above can be exactly replicated, part-for-part. It's just a matter of knowing what it's made of, and finding the individual components for the right price. Here is what that sexy portable RAID consists of:


* Areca Technologies ARC-5030 RAID Subsystem -- Lowest price: $662.35 (Froogle, 18 Mar 2008)



This device is GREAT, and comprises the heart of our little system. Areca is one of the big players in hardware RAID (the other at this price point being 3Ware), and the feature set on this subsystem (and the price) proves it. Check the website for full details, but in summary this is a 5-drivebay enclosure on a Marvell SATA II backplane, with dual SATA and IDE outputs for host connection. The system supports RAID levels 0,1,3,5,6 and JBOD and has a dedicated ASIC for parity calculations. Too many RAID features to list here.

Note that if SATA isn't your thing, Areca offers other similar enclosures for SCSI drives too, but don't expect that to be as cost-effective. Other enclosures are outside the scope of this writeup.


* Areca Technologies ARC-1000 Control Panel -- Lowest price: $79.00 (Froogle, 18 Mar 2008)



The control panel for this system is actually an optional purchase, as the RAID subsystem itself supports management over the network. However, its geek value is high. If cost is a concern, skip this and save a few bucks.


* Addonics Technologies Storage Tower - Base Model -- Lowest price: $119 (Addonics, 18 Mar 2008)



You have no idea how hard it actually was to google for this enclosure type, but here it is. This enclosure will support up to 4 5.25" devices, and comes with a ~200W PSU integrated. In this case the Areca subsystem takes up 3U, and the control panel takes up the last bay. The subsystem is a tight fit in the box, but it works. I recommend going with the base model, and adding the appropriate back panel and/or cables separately.

Specifically, the "Port-Multiplier" back panel will give you space to mount two bridgeboards, and a punchout for a SATA to eSATA connector. This is probably the most practical configuration for most people.


This is the other part where personal choice will affect the overall price of this little RAID appliance, as this is the part where we choose the bridgeboard that is best suited to you. Here's what I ended up going with for the buildout.

* FireWire 800 / USB2.0 bridgeboard (Dual SATA) -- Price: $69 (DatOptic, 18 Mar 2008)



This bridgeboard contains dual SATA links and provides a bridge to FireWire 800 (backwards-compatible with FW400) and USB 2.0. The controller is an Oxford 924 chipset, which I highly recommend. There are competing bridgeboards out there based on the Initio chipset, but given my past experiences with them, I'd say avoid them at all costs.

The nice thing about this bridgeboard is that it appears to offer a SATA mode that will simply make the two ports nodal -- that is to say, you can take the SATA out from the Areca controller, connect it to the bridgeboard, and then take a second SATA to eSATA cable and hook it to the remaining port. This lets you avoid having to use the IDE output on the controller for your bridgeboard. Disclaimer: I have not tested this feature, and can't guarantee that this bridgeboard will function the way I've described this setup, though the documentation suggests it should. Don't blame me if the implementation I described doesn't actually work. If you want to play it safe, either don't use eSATA in conjunction with a SATA bridgeboard, or get an IDE bridgeboard to handle the FireWire/USB function.

Other options for boards here include NAS bridgeboards (also available on the DatOptic site).



Whew! Outside of assembly, which should be pretty straightforward, you now have everything you need to build one of these boxes! Like I said, the cost for these components comes to roughly $940. If you want to save money, you can remove the control panel, and even the bridgeboard if you want to run eSATA only; dropping those two items will save you almost $150 more.

I'll update this post as needed, including any clarifying remarks you might have for me. Thanks to teamdest for taking on this megathread!

McRib Sandwich fucked around with this message at 04:11 on Mar 19, 2008

Jerk Burger
Jul 4, 2003

King of the Monkeys
What is the best way to monitor a mdadm Raid-5 array under Linux?

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues

amanvell posted:

What is the best way to monitor a mdadm Raid-5 array under Linux?

Personally, I have a cron job run mdadm in monitor mode every 20 minutes; it will send me an email if something's up:
code:
0,20,40 * * * * root    mdadm --monitor -1 -m user@domain.com --scan
Make sure you have postfix or another MTA properly configured first.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Has anyone used the Panasas pNFS gear? We're looking at it for some HPC applications.

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues

sharkytm posted:

I'll chime in with everything I've learned thus far.
1: DAS is great, but a good NAS can come close to USB2's actual (not max) throughput with a decent gigabit switch and good cabling. I've clocked our TeraStation Live 2TB (1.4TB available through RAID5) at 35MB/s, which is better than my DAS USB2 ATA100 drive, which tops out at 30. I just transferred a 19GB file in an hour over a 100Mbit card, connected to a gigabit network, while the computer was being heavily used. Pretty decent, if you ask me.

Yeah, absolutely. Speed is simply not a concern for NAS over Gigabit Ethernet. On a good day I can pull just shy of 40MB/sec from my software RAID-5, and the network is not the bottleneck.

kapalama
Aug 15, 2007

:siren:EVERYTHING I SAY ABOUT JAPAN OR LIVING IN JAPAN IS COMPLETELY WRONG, BUT YOU BETTER BELIEVE I'LL :spergin: ABOUT IT.:siren:

PLEASE ADD ME TO YOUR IGNORE LIST.

IF YOU SEE ME POST IN A JAPAN THREAD, PLEASE PM A MODERATOR SO THAT I CAN BE BANNED.
It seems like for home-brew solutions, this motherboard/CPU combo should be first post worthy:

http://www.silentpcreview.com/article780-page1.html

WindMinstrel
Jan 20, 2005
I'm a minstrel. Blow me.
I just want to take a few moments to gush over RAID-Z.

I tried using Solaris, but it had a boatload of problems detecting my JMB363 controller, which was driving my boot drives (2x 120GB), so I went to FreeBSD 7-RC1 (at the time).

And it worked. And now I'm at 7.0-RELEASE.

And, goddamn, it is loving impressive.

I built the array with one command, which took about 5 seconds. I then made another pool on the array for backups and told the system to increase the number of drives it stores the backups on. I then made a documents subpool inside the Backups one and turned compression on. I'm getting ~50% compression on the documents, and they're stored across 3 of the 6 drives.

I did all that with five simple commands.
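
For anyone curious, a command sequence along those lines looks roughly like this (a sketch only: disk names are placeholders, and the copies property is my guess at how the "stored on more drives" part was done):

code:
# build a raidz pool out of six drives
zpool create tank raidz da0 da1 da2 da3 da4 da5

# nested filesystems for backups and documents
zfs create tank/backups
zfs set copies=2 tank/backups
zfs create tank/backups/documents
zfs set compression=on tank/backups/documents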

I get mailed a status report every day, and if it detects any errors, it e-mails me and repairs the affected files on the drive.

I really, really have had a perfect experience with ZFS and RAID-Z, and I imagine under solaris it's even better.

You want a fair hunk of RAM (~1GB, which is probably the cheapest amount anyway) and a 64-bit processor if you want maximum performance out of it, but I get ~220MB/s reads off it anyway with 1GB and a 32-bit install of FreeBSD 7-RELEASE, so I can't be bothered. If I did need to reinstall, a simple command exports the zpool, and then I can do whatever the gently caress I want, then just zpool import [poolname] and it's back. Perfect stuff.

Oh, and I re-did all my cabling after I set it up, all the drives were on different ports of the controller, ZFS didn't even blink.

In summary. :love: ZFS, :love: RAID-Z.

All it needs is online capacity expansion and it's superior in every way to mdadm RAID-5.

Edit: Me spelling perfect.

WindMinstrel fucked around with this message at 04:00 on Mar 19, 2008

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues

kapalama posted:

It seems like for home-brew solutions, this motherboard/CPU combo should be first post worthy:

http://www.silentpcreview.com/article780-page1.html

I guess it's nice if you want to build a tiny box and don't ever want to use gigabit ethernet.

2x SATA, 1x IDE, 1x PCI = little chance at a decent raid; 100/10 ethernet = little chance I'd ever use it for NAS

stephenm00
Jun 28, 2006
Fixed

stephenm00 fucked around with this message at 15:22 on Apr 21, 2017

admiraldennis
Jul 22, 2003

I am the stone that builder refused
I am the visual
The inspiration
That made lady sing the blues

WindMinstrel posted:

In summary. :love: ZFS, :love: RAID-Z.

That's it; my next array is going to be a z. I gotta try this.

stephenm00 posted:

Why isn't ZFS and RAID-Z a more common option? There must be some disadvantage for home users, right?

Accessibility. ZFS was written by Sun as a major feature of Solaris. Solaris is a nice operating system, but its hardware compatibility is nowhere near as accommodating as that of Linux or, say, FreeBSD.

Fortunately for us, the latest version of FreeBSD contains an experimental but working port of ZFS.

admiraldennis fucked around with this message at 03:50 on Mar 19, 2008

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

stephenm00 posted:

Why isn't ZFS and RAID-Z a more common option? There must be some disadvantage for home users, right?

Well for starters, in the case of RAID-Z you're basically required to run an operating system that can support it. The strengths of RAID-Z come from the fact that it is so intimate with the OS.

WindMinstrel
Jan 20, 2005
I'm a minstrel. Blow me.

stephenm00 posted:

Why isn't ZFS and RAID-Z a more common option? There must be some disadvantage for home users, right?

Well, until recently it was either use Solaris, which has... interesting hardware support, or use a beta/RC of FreeBSD 7.

Now that FreeBSD 7 is released, I hope to see a bit more uptake.

Because goddamn, ZFS is just amazing.

ParanoiaComplex
Dec 29, 2004

As far as homebrew file server motherboards go, I'd recommend the Gigabyte P35-DS3R:



Aside from the fact that it has 8 on-board SATA connections, the board itself is awesome. From a couple of reviews:

"Looking at our own experience with the GA-P35-DS3R, there is no way to directly link the stability and overclocking performance with Gigabyte's Ultra Durable 2 design, but there is no denying that the GA-P35-DS3R is one hell of a smooth overclocker. Gigabyte plays right into this scenario, offering power users higher quality components with the Ultra Durable 2 design and universally raising the bar for motherboard components and quality." - hardwarezone.com review

"The board is fantastically stable, surviving the full 24 hours of bit-tech's stress test that includes two lots of Prime 95, IOMeter and FarCry looping. Even when setting the BIOS straight to 3-3-3-9 (after Ctrl+F1, remember), we had no problems from the outset. Given an early revision BIOS, Gigabyte has excelled itself in making it work so well. We only wished this was repeated in every board we look at, especially in the inexpensive varieties boards where less effort is often applied." - bit-tech.net review

It's not terribly expensive either: $142.99 at CanadaComputers.com. It will run current and upcoming (45nm) Intel processors, sports high-quality voltage regulation circuitry, the P35 northbridge with the ICH9R southbridge, and solid stability. I bought one after reading reviews and being sold on the fact that I don't have to shell out for a SATA card. Currently it runs an Ubuntu file server with mdadm in RAID-5, but I might experiment with RAID-Z after reading this thread.

Reviews:
xbitlabs.com
bit-tech.net
hardwarezone.com

teamdest
Jul 1, 2007

McRib Sandwich posted:

Holy poo poo you built a prebuilt RAID box

That is seriously amazing! I'm actually somewhat amused that they just used off-the-shelf parts and are packaging it at such a premium, I'm betting most people don't even consider that you could put something like that together on your own.

WindMinstrel
Jan 20, 2005
I'm a minstrel. Blow me.

ParanoiaComplex posted:

As far as homebrew file server motherboards go, I'd recommend the Gigabyte P35-DS3R:

This.

This is the board I'm using, and apart from the JMB363 (purple ports) not being supported under solaris, it's perfect.

It's great.

McRib Sandwich
Aug 4, 2006
I am a McRib Sandwich

teamdest posted:

That is seriously amazing! I'm actually somewhat amused that they just used off-the-shelf parts and are packaging it at such a premium, I'm betting most people don't even consider that you could put something like that together on your own.

:D I was as amazed as you are. That was a buildout I did for a friend, and I was impressed with how well it came together, given that I didn't get any hands-on time with any of the components before they all arrived. I just did a shitload of research, and it paid off!

Shalrath
May 25, 2001

by elpintogrande
I've enjoyed using NFSv3 over Samba for mounting my storage drives over Gig-E. Does anyone have some tips for tweaking NFS and SMB access?

My current setup is about 5 drives concatenated through LVM for a total of about 800GB (which now feels very outmoded, since geeks.com is having a sale on 750GB Barracudas for $124). Nothing special, really. I haven't bothered with any sort of mirroring or striping since most of the drives are different sizes.




What I'd really like is some way of keeping track of all the random things I burn to DVD, i.e., some sort of virtual filesystem that remembers all of the files I've burned and deleted, and which disc I put them on. Anything like that floating around?

teamdest
Jul 1, 2007
Well, since I've got the OP updated, I think I'll take a moment and talk about my own fileserver.

Artemis, as I call her, is built off of:

Asus P5W64 WS Workstation Board
Core 2 Duo E4300
2GB DDR2-800

The base specs are very high for what is essentially an entry-level fileserver, but for a reason: the P5W64 has *4* x16 PCI-E slots (16/16/8 or 8/8/8/8, I THINK), which allows me an eventual grand total of FOUR Dell PERC 5/i's or PERC 5/e's. The PERCs are finicky bastards, but if you choose compatible hardware they are amazingly solid performers with enterprise-grade features for ~$120 on eBay. Two of those are the backbone of my system; each has a pair of 8484 ports, which break out to 8 SATA II/SAS ports per card without edge expanders. I'm hesitant to wholeheartedly recommend PERCs, as they have a serious case of motherboard-compatibility-itis, but for what you pay, you get amazing cards. If you're thinking of using PERC 5's, I may be the guy to talk to about getting one up and running.

Currently I'm only using one card, hooked up to a triplet of 500GB drives and a pair of 750s, but those will soon be joined by another three 750s. All my drives are Seagates; I've lost at least a half-dozen WD drives in the past, so the 1/3-year warranty business kinda scared me off. Seagate's 5-year warranty hopefully means that by the time I lose one, I'll have outgrown it anyway. At the moment those drives are in a frighteningly insecure pair of JBODs, but when the 750s finally come I'll be migrating over to RAID-5s with my current tape and off-site backup routine.


Which reminds me, what does everyone think of appending backup information to this thread as well? It seems like this volume of data would necessitate some decent backup measures, and I don't recall any thread related to that recently.

H110Hawk
Dec 28, 2006

teamdest posted:

RAID-6 (Double Parity)

RAID-Z (Sun makes everything in house, don't they?)

Z is a software RAID implementation in Solaris's ZFS File system.

To expand on this for the OP (please incorporate it if you like it!): ZFS includes most enterprise-level RAID features you would expect. It can do single parity (raidz1) or double parity (raidz2). Within ZFS you create pools for your storage needs; these can be mixes of N-way mirrors, raidz's, etc.
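
A quick sketch of what that mixing looks like (disk names are placeholders):

code:
# one pool built from a double-parity raidz2 group plus a two-way mirror
# (zpool may want -f since the vdevs have different redundancy levels)
zpool create -f tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 mirror c0t4d0 c0t5d0
zpool status tank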

There are also things such as raid4, which is like raid5 but without distributed parity. Normally, parity data is written right alongside regular data on the various disks. If you put all of your parity on a single parity drive, you get raid4. This creates performance issues, as your parity spindle cannot be used in any way for data. Network Appliance has created a double-parity version of raid4, calling it RAID-DP.

teamdest
Jul 1, 2007

H110Hawk posted:

Raid Stuff

Put it in the OP! Thanks for the info. I'm still a little unclear on the ZFS stuff, but I'm planning to give it a try soon; it sounds amazing.


PowerLlama
Mar 11, 2008

I have to second King Nothing's review. I have the D-Link D-[random numbers], and I love it.

I don't use it for everything I could, but it was so easy to set up and go. The FTP server is great.

The only thing I have a problem with is that it's not completely UPnP compliant. Or maybe it's DNA or something, I don't remember. So my PS3 sees that it's a media server, but can't access any of the files on there.

It is pretty awesome, even though mine's not modded.
