teamdest
Jul 1, 2007
Edit: Information Updated as of 11/15/2012!

Ground Rules: Don't discuss illegal material. Seriously. Check the thread before asking a question, at least the first posts and the last couple of pages. Also, this isn't the enterprise storage thread, so if things get complicated it may be time to call in the professionals.

1. The Basics

RAID (Redundant Array of Independent Disks)

A RAID is a group of hard drives set up so that some amount of data is duplicated amongst them. This prevents a single drive from taking your data with it when it dies. RAIDs are almost always built with equally-sized drives; if the sizes differ, each drive is treated as if it were only as big as the smallest drive used.

RAID-1 (Mirroring) - Two or more drives are written to in parallel. Capacity is one disk; you can lose all but one of the disks without losing information.

RAID-5/6 (Distributed Parity) - Three or more drives (four or more for RAID-6) spread the data and its parity information evenly across all drives. Total size is one drive less than your array size (two less for RAID-6), and you can lose one drive (again, two for RAID-6) without losing data. If you lose power after data was written but before the parity information is written, you may lose data when the array comes back online. Additionally, rebuilding the array after a drive loss is a disk-intensive process that may cause another drive to fail. If you lose a second (third for RAID-6) drive before the rebuild finishes, your data is gone.
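
For example, four 2TB drives in RAID-5 give (4 - 1) x 2TB = 6TB of usable space and survive one drive failure; the same four drives in RAID-6 give 4TB and survive two.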

Hardware and Software RAID

RAID can be done by hardware (a RAID controller) that presents the disks to the OS as a single drive and handles the work on the back end, or as software RAID such as MDADM, LVM, ZFS Raid-Z, Windows Home Server's duplication tool, or other systems where the OS creates and manages the arrays.
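
As a rough sketch of what the software route looks like on Linux with MDADM (device names like /dev/sdb are placeholders, adjust for your own drives; the config path is the Debian/Ubuntu one):

  # build a 3-disk RAID-5, format it, and mount it
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
  mkfs.ext4 /dev/md0
  mkdir -p /mnt/storage
  mount /dev/md0 /mnt/storage
  # record the array so it assembles automatically on boot
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf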

Hardware RAID tends to be more reliable and often faster, since there are dedicated parity chips, write-back cache, internal battery backups, etc. However, it is less flexible and more expensive (most controllers will do some combination of 1, 5/6, maybe 10, or just pass the drives through without making an array), and if the hardware dies you will usually need an identical replacement to get the array working again. There are also "hardware" RAID devices that use the CPU to do the work and are therefore somewhat slower than true hardware RAID.

Software RAID is cheaper than dedicated hardware, and can provide benefits in flexibility and features, such as WHS's ability to mirror only certain portions of drives or folders. Software RAID is coming into its own even in the enterprise sector, since it can do things a controller can't.

Spare Drives, Hot Swapping

When a drive fails (when, not if), the array must rebuild with a new drive. Some controllers allow for drives that are online but dormant, called hot spares; when a drive dies, the controller will automatically rebuild the array onto the spare. If you do not have hot spares, you will either need to power the system off to switch drives or have a system that supports changing them while it is running (called hot swapping). Drives kept on the shelf rather than in the system are "cold spares".
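
With Linux software RAID, for instance, adding a hot spare is just a matter of handing mdadm an extra disk (device names are placeholders):

  # on a healthy array the added disk becomes a hot spare, pulled in automatically on failure
  mdadm /dev/md0 --add /dev/sde
  # confirm it shows up as a spare
  mdadm --detail /dev/md0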

Network Attached Storage

This is what most people want: a system on the network that you can read/write to from other computers. These can be simple appliances or custom-built solutions, but they all use some fairly common parts and software.

Appliances

Several companies make prebuilt boxes that you can just plug drives into and put on the network. Most common are Synology's Diskstations and QNAP's SOHO Devices.

In this field, special note goes to HP's N40L Microserver, which is far and away more powerful than most appliances and is much closer to being a custom device while still retaining some ease of use and standardization. The Microserver has a huge hacking/modding community behind it that has often solved a problem before you could even encounter it.

Custom NAS
If you need more than ~6 hard drives or want to run resource-intensive software then you'll need to build a full-size computer. You'll need an enclosure, controller, drives, an OS, and some skill with configuring things.

CPU, motherboard, and memory are relatively simple: buy quality parts and make sure your motherboard will fit in the case you choose. Common choices here are Intel Core i3s or AMD Phenoms for low/midrange systems, and i7s for performance servers.

Good controllers are ones with dedicated parity and RAID hardware, support for all the drives you intend to use, and compatibility with your system. 3Ware, Adaptec, Areca, and Highpoint all sell reliable hardware RAID cards, but there are a few standouts. Dell and HP, for instance, often use rebranded LSI or Adaptec cards, sometimes available more cheaply in such systems or on eBay. Before you buy, make sure the chip your controller uses is compatible with your choice of OS (Marvell, for instance, has shoddy Linux support on several chips). The big name in this area is the IBM ServeRAID M1015, a controller available cheaply as a used part pulled from IBM servers; it can be reflashed to the latest LSI firmware and supports the newer 3TB drives (which not all controllers can handle).

The case you choose will depend on how many drives you need, but also take time to consider whether it has adequate cooling and possible power problems (not many Power Supplies have dozens of SATA connectors, so you may need to wire extras up or have room for a second power supply). Some cases come with dual redundant power supplies, which can be helpful to prevent sudden power losses.

For the drives themselves, Western Digital and Seagate are the two big names remaining, though others are around here and there. There have been some bad series of drives (Deskstar 75GXP), but each company has its ups and downs. Right now the big thing is the WD Red line, low-power drives designed with NAS and server duty in mind.

The choice of operating system will be dictated by your knowledge and comfort level, but generally speaking you'll either be using Linux, FreeBSD, or Solaris on some level. Windows Home Server is pretty much dead and Drive Extender is no more.

Linux provides a wide suite of tools and broad hardware and software support. MDADM and LVM are both solid software RAID implementations, there is an enormous variety of server software available, and between CentOS, Debian, and Ubuntu and their respective communities, most questions about setting up a basic NAS have already been answered.
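
If you go the MDADM + LVM route, the usual layering is an md array with LVM on top, roughly like this (volume and mount names are placeholders):

  # turn the md array into an LVM physical volume, a volume group, and a logical volume you can grow later
  pvcreate /dev/md0
  vgcreate storage /dev/md0
  lvcreate -L 500G -n media storage
  mkfs.ext4 /dev/storage/media
  mkdir -p /srv/media
  mount /dev/storage/media /srv/media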

FreeBSD is another common option. The setup here is very similar to a Linux system, just with a different underlying operating system. The big reason to go with a FreeBSD base instead of Linux is more mature support for ZFS.
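
A minimal Raid-Z sketch on FreeBSD, assuming three disks showing up as da1-da3:

  # single-parity raidz pool, plus a filesystem carved out of it
  zpool create tank raidz da1 da2 da3
  zfs create tank/media
  zfs set compression=on tank/media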

Solaris is a pretty distant third place at this point, owing to the acquisition of Sun by Oracle and the effective cancellation of the OpenSolaris project, so unless you're tied to the system for other reasons there's no point in jumping in now. The killer feature was ZFS, and that is now well implemented on FreeBSD and Linux.

There are also some custom distributions that create nicer web-based graphical interfaces for the server, much like you would find on the smaller appliances. FreeNAS is the biggest of these by far, and has a pretty large user base to answer questions. There are also systems like OpenFiler and unRAID. Note that unRAID requires a license for servers with more than 3 disks or if you require some features.

User er0k also brought up NASLite, Open Source but $30 for a license.

teamdest fucked around with this message at 07:39 on Nov 15, 2012


teamdest
Jul 1, 2007
FAQs:

If someone asks a question that doesn't seem to fit anywhere else but seems common or comes up often, it'll go here. For now, there are no questions.


Testimonials/Tutorials:

Managing Raid and LVM in Linux

manwh0r3 sounds off on Windows Home Server, including his experience with it.

er0k brings up NASlite, an alternative to OpenFiler/FreeNAS

King Nothing raves about the D-Link DNS-323/343

WindMinstrel pours his heart and soul into this testimonial of ZFS and Raid-Z

McRib Sandwich goes wild and shows how to build a ~$1600 DAS RAID system for ~$800!

Have you had a good experience with a product? A bad experience? Had some trouble setting something up but finally got through it, and want to let everyone know how to do the same? If you've got something to say, I'll put it in here.




Special thanks to kri kri, CrazyLittle, McRib Sandwich, and sharkytm for good topic ideas, support, and content. If anyone thinks I missed something, that I'm wrong about something, or that I'm generally an idiot, speak up and I'll try my best to correct it!

teamdest fucked around with this message at 04:00 on Mar 19, 2008

teamdest
Jul 1, 2007

Strict 9 posted:

Entry-level NAS hardware stuff

I think the biggest question for you is "how many drives do you ever plan to expand to?" If the answer is four or fewer, get pretty much the cheapest non-lovely motherboard you can find, an Athlon X2 or Sempron, a gig or two of RAM, and then software-RAID it in Linux. If you want to go beyond four drives, you should begin looking at a controller card. For a cheap motherboard, this MSI model seems like overkill for you, and it's cheaper than the one you were looking at.



Also: I've updated the OP a few times so far. I'm trying to make sure I credit everything I add but if I miss something please let me know.



edit: closing quotes are not my friends...

teamdest
Jul 1, 2007

McRib Sandwich posted:

Holy poo poo you built a prebuilt RAID box

That is seriously amazing! I'm actually somewhat amused that they just used off-the-shelf parts and are packaging it at such a premium, I'm betting most people don't even consider that you could put something like that together on your own.

teamdest
Jul 1, 2007
Well since I've got the OP updated, I think I'll take a moment and talk about my own fileserver.

Artemis, as I call her, is built off of:

Asus P5W64 WS Workstation Board
Core 2 Duo E4300
2GB DDR2-800

The base specs are very high for what is essentially an entry-level fileserver, but for a reason: the P5W64 has *4* x16 PCI-E slots (16/16/8 or 8/8/8/8, I THINK), which allows me an eventual grand total of FOUR Dell PERC 5/i's or PERC 5/e's. The PERCs are finicky bastards, but if you choose compatible hardware they are amazingly solid performers with enterprise-grade features for ~$120 on eBay. Two of those are the backbone of my system; each has a pair of SFF-8484 ports, which break out to 8 SATAII/SAS ports per card without edge expanders. I'm hesitant to wholeheartedly recommend PERCs, as they have a serious case of motherboard-compatibility-itis, but for what you pay, you get amazing cards. If you're thinking of using PERC 5's, I may be the guy to talk to about getting them up and running.

Currently I'm only using one card, hooked up to a triplet of 500GB drives and a pair of 750s, but that will soon be added to with another three 750s. All my drives are Seagates; I've lost at least a half-dozen WD drives in the past, so the 1/3-year warranty business kinda scared me off. Seagate's 5-year warranty hopefully means that by the time I lose one, I'll have outgrown it anyway. At the moment those drives are in a frighteningly insecure pair of JBODs, but when the 750s finally come I'll be migrating over to RAID-5s with my current tape and off-site backup routine.


Which reminds me, what does everyone think of appending backup information into this thread as well? It seems like this volume of data would necessitate some decent backup measures, and I don't recall any thread related to that recently.

teamdest
Jul 1, 2007

H110Hawk posted:

Raid Stuff

Put it in the OP! Thanks for the info; I'm a little unclear on the ZFS stuff still. Planning to give it a try soon though, it sounds amazing.

teamdest
Jul 1, 2007

kri kri posted:

I would be really interested to hear any hands on experiences with UnRaid. I am looking to move away from my WHS (performance, corruption bug) to UnRaid.

I would too; if someone's got a good review I'll throw it into the OP. Also, any comments/suggestions to improve that?

teamdest
Jul 1, 2007

elcapjtk posted:

This is probably a dumb question, but I really haven't had much personal experience with building network storage type things.

I have 10X 160GB PATA hard drives that I want to shove into an array for home use. This won't hold anything really important, so redundancy really isn't that big of a deal for me. What I'd prefer is something cheap; what kinda stuff would I be looking for?

The biggest issues you face are that the number of drives often dictates the price of your product, and that you haven't specified direct- or network-attached. For network attached, you'd probably be best off with a computer and PATA controller cards, unless someone knows of some ridiculously cheap 10-bay PATA NAS device. I don't know as much about direct-attached solutions for that many PATA drives.

teamdest
Jul 1, 2007

cypherks posted:

Not sure if it's the right thread, but I can answer any questions about EMC Clariion and Celerra, iSCSI and Fiber Channel switching. I've done a bunch of SAN with ESX 3.0 and 3.5 installs as well.

Well then, I'll take you up on that: could you give a basic overview of what a SAN is, how it is hooked up/accessed, etc.? I've got my NAS and was thinking of building a SAN, and realized I literally know nothing about them; even iSCSI confuses me when I try to figure out what's going on. I believe I just fundamentally don't understand what a SAN is for.

teamdest
Jul 1, 2007

cypherks posted:

SAN Stuff

If it's free I don't think it's "files" really, so it should be fine. Someone correct me if I'm wrong, though.


Also, the SNIA stuff looks good; I'm reading it over now. I guess my primary questions would be: what do I really need to make a SAN in my "home" (dorm room), can I convert my NAS to a SAN, and would there be benefits beyond learning?

teamdest
Jul 1, 2007

TheChipmunk posted:

And also a rather bad question for RAID-5: Let's say I have two 250-gig hard drives and I put them in a RAID 5 configuration. Do I see 500 gigs available? Or do I only see 250? Seeing 500 gigs available blows my mind if it's true.
(I am obviously not speaking of ACTUAL availability, but is this theoretically sound?)

No. RAID 5 requires same-sized hard drives, and requires three or more. If you have three 250GB drives, you get 500GB of space; four 250GB drives = 750GB of space. You "lose" one drive's worth of capacity, but you gain one drive's worth of failure protection. Get it?

teamdest
Jul 1, 2007

H110Hawk posted:

Close! Most redundant raid implementations merely use the smallest sized drive as your size for all other disks. Meaning, 250/250/500gig RAID5 = 500gigs of available space. If you're doing software raid, you may even be able to carve off the remaining 250gigs from that last disk. But that's a big MAYBE.

Technically correct, sorry. However, you'd be a fool to use a 250 with two 500s, due to the space loss.

teamdest
Jul 1, 2007
I think the PERC 5's do it dynamically, though I'd prefer not to test it :)

It is pretty silly, though. I guess in budget-constrained systems it's pretty nice.

teamdest
Jul 1, 2007

Sh1tF4ced posted:

Wow, the Stacker is almost perfect for what I was looking for, but why the hell is it $240? Yeah, I didn't even think of how loud it would be. I just want to fit as many hard drives in it as I possibly can without spending a ton of money; I don't really care if they're hot-swap.

I use a Stacker for my NAS here in my dorm, and it's a great box: lots of space, even has wheels. It is, however, heavy as HELL, to the point that I never lift it.

teamdest
Jul 1, 2007
here's an interesting question:

I used LVM under Debian to create a volume group, then a logical volume on that volume group. Then some bad poo poo happened, and I had to reinstall. I had consigned myself to the loss of what data was on this logical volume, but on attempting to rebuild it, I was greeted with "Can't initialize physical volume /dev/hda of volume group a3p without -ff".

It gave this error for all four of the disks I had planned to reinitialize. Is there any way (since the metadata seems to be intact) to recover the array? All four disks are fine; the information regarding them has just been destroyed.

teamdest
Jul 1, 2007
The exact occurrence:

Built a volume group out of a 120GB ATA HDD, a 250GB ATA HDD, a 480GB SCSI hardware array, and a 500GB external USB drive. This was a very temporary array, just meant to hold some data while doing a drive shuffle.

~90% of the way through the procedure, the USB hard drive got bumped, shutting it off and freaking the computer out in general. The bump-er (roommate) decided the proper response was to shut the whole server down. When it came back up, it could not load back into Debian, and after much futzing I decided "well, that data is gone, and it is sad, but I'd rather have this computer back online than sitting here while I try vainly to get Debian booting again." So I proceeded to reinstall Debian and set the first two (hardware) arrays back up like normal, then decided to just build the third array from the internal disks (which was the final plan anyway). However, on running pvcreate it gave that error on the first three drives, and on plugging the fourth in, it gave the same error on all four.

Running pvscan tells me that it still sees all of them as part of a3p, type lvm2, and running vgscan returns "Found volume group "a3p" using metadata type lvm2", so it seems promising that I can recover from this comedy of errors.

edit: After trying different Google keywords, I've found my solution! I now have three arrays again and can make another attempt at this! Sorry to bother ya!
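
For reference: he doesn't say what the fix was, but when pvscan and vgscan still report the volume group, the usual recovery is simply to reactivate it rather than re-running pvcreate. Roughly, using his volume group name and a placeholder LV name:

  vgscan                        # rescan for volume groups
  vgchange -ay a3p              # activate every logical volume in the group
  lvs                           # list the LVs that came back
  mount /dev/a3p/<lvname> /mnt  # <lvname> is whatever the original LV was called

pvcreate -ff, by contrast, wipes the LVM metadata and should be a last resort.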

teamdest fucked around with this message at 12:03 on Apr 1, 2008

teamdest
Jul 1, 2007
What kind of card and bus architecture are you using? I'm partial to the PERCs I use, but those are DEFINITELY not something that's easy to recommend.

teamdest
Jul 1, 2007

sund posted:

On a more general note, how do CF cards and other non-SSD flash media devices hold up running a system not specifically designed for them? What I mean is, if I installed a generic Ubuntu server on a USB flash drive, would the normal system logging and temp file stuff cause the disk to use up its write cycles at any appreciable rate? I assume that distros designed for embedded flash systems do their read/write things in ramdisks.

I can't give hard numbers, but the Linux distro my roommate's Soekris board runs off a CF card doesn't write anything to the card except what it absolutely must, for that very reason.
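
If you do want to run a stock distro off a USB stick or CF card, the usual mitigation is to keep the chatty directories in RAM; a minimal /etc/fstab sketch (sizes are arbitrary examples):

  # mount scratch and log directories as tmpfs so routine writes never touch the flash
  tmpfs   /tmp       tmpfs   defaults,noatime,size=256m   0 0
  tmpfs   /var/log   tmpfs   defaults,noatime,size=64m    0 0
  tmpfs   /var/tmp   tmpfs   defaults,noatime,size=64m    0 0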

teamdest
Jul 1, 2007
Alright, try to follow me here, but call me out if something sounds wrong, because I'm new to this:

SATA controllers use a "port multiplier". This is just a 1-in, 5-out type device.

SAS controllers have two different things: edge expanders and a "fan-out" expander. In a given SAS system there can be only one fan-out expander but many edge expanders, and edge expanders can only be hooked up to a fan-out expander or a controller, so you get a controller -> fan-out -> multiple edge situation.

SAS expanders, from what I have seen, are RIDICULOUSLY expensive and come in pre-built enclosures, etc. However, using a SATA port multiplier should work just fine, so long as you are using SATA DRIVES as well. So pick up a 4- or 8-port SAS or SATA controller, and get some SATA port multipliers when you fill it.

Clear? Did I miss something? This is what I've found researching basically this exact problem for my server.

teamdest
Jul 1, 2007

NotHet posted:

Ah, that really clears things up for me. It seems like the best solution for me is to buy a large controller and use port multipliers when I'm forced to.
What I'm looking for in a card: 8x SATAII ports, PCI-E interface, parity calculation and cache on card. $500 is my absolute upper price, less than that is fantastic and preferred.
Now I've obviously looked around and have been investigating cards, but I was wondering if anyone has any experience or advice in this regard.

Well, the PERC 5's have all of that, I believe, but unless your hardware is JUUUUUUST right I don't recommend them; they're very finicky.

teamdest
Jul 1, 2007

that one guy posted:

If I get the D-Link DNS-323, can I use it as an FTP server to serve up pictures, etc. that I would link to on my website?

No, the files would exist on your LOCAL network, and unless you have a good ISP and proper routing set up, you can't normally just host over port 80/21 from home. Also, it's really not a good idea to expose a fileserver to the internet without a really good reason.

that one guy posted:

Would I be able to use it to serve up files to myself (like media files to another computer, and possibly use it as the storage place for files I would serve to the HTPC I keep thinking about but never putting together)?

Yes, provided the hypothetical media server is on your local network, you could map the D-Link as a network drive or just connect to it over the network (via Samba) and watch whatever files are stored on it.
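
On a Linux client, for instance, mounting the NAS share looks roughly like this (the hostname and share name are examples; the DNS-323's actual share name may differ):

  mkdir -p /mnt/nas
  mount -t cifs //dns-323/Volume_1 /mnt/nas -o guest,uid=1000

Windows clients can just map it as a network drive from Explorer.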

that one guy posted:

Using that same device, would I be able to put in various HDDs of different capacities? If I make two of them linked up for RAID do they have to be identical drives?

Depends on the type of array you create. A RAID-1 will just build to the size of the smaller drive. A RAID-0 (don't do this, seriously) will most likely require two same-sized drives. I don't know exactly what a DNS-323 is capable of doing in regards to arrays.

teamdest
Jul 1, 2007

sund posted:

Does anyone know if three really is the minimum number of disks in a linux software raid 5 setup? It seems that with two it would just resemble a slow, overly complicated mirroring setup. I ask because NCIX always has a limit of two on most of their hard drive sales and wouldn't mind spreading the cost out.

... You're aware that RAID-5 requires a drive's worth of parity, correct? Without three drives, there isn't really any way to create redundancy. I mean, yeah, under software RAID 5 you could probably make two half-drive partitions on each drive and make a four-"drive" "RAID-5", but you don't get any redundancy above a mirrored setup. You lose one drive, you're hosed, because two "drives" died. Why exactly would you want to do this?

teamdest
Jul 1, 2007

sund posted:

Thanks. I wanted to end up with a four-disk RAID 5 setup and wanted to start with the cheapest setup I could. I realize it looks like a crazy question, because I always assumed parity was distributed across the disks, not on a dedicated drive.

Is it just me who thinks it's weird that there isn't a widespread standard for distributed parity and multisized volumes? You should be able to throw a 1TB and a 500GB drive in a system and get 500GB of usable space. Throw another 500GB in and it should go to 1 TB.

You have misunderstood. Do me and yourself a favor: read over the OP and hit the section on RAID-5. There's no parity DRIVE, but that doesn't mean you can build it with two drives. THAT IS A MIRROR, just overly-loving-complicated, because rather than a straight copy on drive failure, you have to recalculate the loving parity. And that's even if most software and hardware RAIDs would LET you do it, which to my knowledge they don't, because as stated it is stupid, and also pointless. Get four drives, then build your array when you get them.


edit: And in response to the bottom part of your post, RAID-5 can survive a single drive loss because the parity is distributed. Your plan would involve making two logical drives on the 1TB, and if that goes, then how many drives have you lost? TWO. Ergo, NOT a good idea. That's why you're not supposed to do it.

teamdest
Jul 1, 2007
That sounds like a fine idea. It's theoretically possible for something to go wrong, but I've migrated RAID-1 and even RAID-0 and RAID-5 systems between onboard RAID controllers on different motherboards, so you should be okay. Especially with RAID-1: it's essentially just a split datastream at the chip level, so both should work just fine.

teamdest
Jul 1, 2007
It will probably handle it by going "YOU CANNOT DO THAT!"

teamdest
Jul 1, 2007

vanjalolz posted:

I wanted to try something similar in ZFS but couldn't figure it out. I don't see why high-risk software doesn't exist to migrate a raid1 to raid5 or striped to raidz.

I'm going out on a limb here, but it either doesn't exist or exists only in a poorly-thought-out fashion because *IT IS A STUPID IDEA*. First of all, RAID-1 and RAID-5 have wildly different uses and requirements. Second, there is pretty much *NO* acceptable time to be "high risk" with DATA. You SUPPOSEDLY have it on a redundant drive because it is valuable, so WHY would anyone risk it like that?

teamdest
Jul 1, 2007
It's not, strictly speaking, USELESS. I concede that. What I am attempting to point out, however, is that reading from a drive which contains data you obviously don't want to lose, while not just WRITING to that drive but writing to it by dynamically reallocating its function, is such an edge case that it should be completely unnecessary. That is why there is no such software or system to do it: such a thing would take something meant to be redundant and turn it into a vessel for FAILURE at the slightest hiccup. Ironically, it would be far MORE dangerous for the home user than for the enterprise user, because the home user is more susceptible to power sags, surges, and outages, owing to the common lack of onboard and external battery power supplies, as well as the lack of good RAID hardware that would make the task reasonably quick with parity calculations at every step, which is the only way you could ever accomplish such a thing "safely". I have no doubt that Sun is attempting to make this work, because undoubtedly people have been bothering them for it nonstop, and they appear, at least on their face, to be a big company who will listen to the complaints of people who are the victims of their own poor planning.

As a thought exercise, let's compare the purposes of the three common RAID levels (5 and 6 being so similar as to be considered a single point for this exercise).

Raid-0 - weeds out retards.

Raid-1 - high read speed with high redundancy (you can lose every drive but one without losing data).

Raid-5 - tolerates the failure of one drive (two for 6) without losing data, while not sacrificing nearly as much storage.

Either you know at the beginning what you need (do you need high redundancy for your data, or high capacity with some redundancy?), or you are unlikely to be capable of using a tool that migrates active systems without additional swap drives. In addition, the problem is nontrivial to begin with. That blog you linked to is actually a good indicator: the Sun team has said it's a non-issue for them and that the community should work on it, and the simplest proposed solution involves rewriting the way ZFS calculates the geometry of the stripe, then writing from scratch a transformation that has only just been thought up, THEN PERFORMING AN OPERATION ZFS IS NOT EVEN CAPABLE OF AT THIS TIME and would need to be rewritten to be able to do. And that's not taking into account the "plenty of gaps to fill in" that he mentions, and bear in mind that I do not even claim to understand 100% of what he wrote in that post.

In closing: yes, it would be nice. Even I would like such a thing, but it's delusional to think that it would solve a PROBLEM. It would create more problems, as people attempted something they did not fully understand.

teamdest
Jul 1, 2007
I'll definitely add something like that in; I'll probably go and recheck the prices too when I get a chance.


And yeah, I'm a hostile person by nature.

teamdest
Jul 1, 2007

Stonefish posted:

They keep telling me those PERC cards are an absolute bitch to sort out motherboard compatibility with.

If I've got a motherboard that does already support one, what are the odds I could get a second one going at the same time?
I've previously had the board working with a perc and a rocketraid card in the PCIe16 slots.

I run two PERC 5/i's, and they are a WHORE when it comes to finding a good motherboard. I did, however, have both working just fine; I can't check it now, but I can look up the board I wound up using if you'd like. It's some Asus Pro series board, I know that.

teamdest
Jul 1, 2007

Nam Taf posted:

I'm considering a fileserver, finally, now that I am in dire need of space.

My approach was going to be FreeBSD with Raid-Z over 4 drives, running on a custom built PC based around, most likely, a core 2 duo that I have laying around (in my current PC - I figured I'd upgrade that at the same time).

As far as motherboards go, what do I really want to look at? I don't need hardware RAID, because Raid-Z is software based. Onboard graphics would be nice, I guess, but not crucial as I can always pick up a cheap second-hand graphics card to configure the thing and then rip it out and reboot, running it headless.

What's more, is onboard gigabit LAN considered a better move than gigabit on expansion cards? The upside in my head of expansion cards is that I could get multiple intel ones and bind them together if 4x1.5TB drives pump out more than gigabit capacity (which I highly expect them to do). On the other hand, onboard LAN tends to have better overall throughput, if I recall correctly. Please correct me if I'm mistaken.

Given all that, are there any specific motherboards better suited to this job than others? Are there any features that I do want, and any that aren't worth their while? I have a 965P-DS3 in this computer, which has served me well, and if I choose to shift the CPU and RAM out of this (upgrading this to a better Core 2 or i7, and giving it better/more RAM), would it just be worth keeping that, or is there some feature missing that will cripple me?

With the above question, what should I consider insofar as drivers go on freeBSD? Are some manufacturers likely to leave me cold for driver support on BSD, or should I generally be fine there?

I am also highly unsure as to whether I want to get these 1.5TB seagate drives (AUD$205 each, which is the best price/gig currently), given the recent swag of issues with 7200.11's. The last thing I want to happen is have 2 of my RAID drives go pear-shaped on a boot due to a firmware issue that hasn't been completely resolved in the revisions released to 'fix' it (which seems to be what I'm gathering from the other thread on this issue), because costing me my entire RAID array would be a huge pain in the arse.

What say you guys, oh wise packrats? I've never fiddled with BSD so I'm not 100% sure what I'm in for, but I am able to get myself through most things Linux (being first thrown into Vector Linux on a 386SX and having to build from the ground up), so basic stuff shouldn't concern me too much but if there's any great pits I should be aware of, that'd be grand.

Thanks in advance!

let me preface this by saying i'm not PERSONALLY familiar with ZFS under BSD, I tend to stick to hardware raid or LVM under Linux.

That said, my roommate is building a fileserver at the moment (he'd been borrowing a portion of mine but is moving out), and after messing with FreeBSD and ZFS for several hours he gave up, formatted, and installed OpenSolaris, citing some possible compatibility and stability issues. OpenSolaris was easy to install, had a nice GNOME-based (I think? seemed to be) frontend and graphical configuration for things, and behaves very similarly to BSD/Linux. If you're not dead-set on FreeBSD, the OpenSolaris setup was much easier and got his Raid-Z up and running drat quickly.

Additionally, you might consider a RAID-6, or a 5 with a spare (or a 6 with a spare, which is what I'll be upgrading my raw-content server to sometime in the coming months!), if you're concerned about having drives from the same batch go at the same time. I haven't heard definitively whether the 7200.11 issues are resolved already, but if it concerns you, WD is just as good as Seagate nowadays; I think the warranty is the only real bickering point. And though I can't provide anecdotes, I'd be willing to say that Samsungs, Hitachis, etc. are just as good, considering the homogenization of parts and systems seen in the hardware world today.
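
If you do go the Raid-Z route, the double-parity flavor plus a spare is a one-liner; a sketch assuming five disks with placeholder device names:

  # raidz2 tolerates two simultaneous drive failures; da5 sits idle as a hot spare
  zpool create tank raidz2 da1 da2 da3 da4 spare da5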

teamdest
Jul 1, 2007

Nam Taf posted:


Sorry, I didn't mean to say avoid ZFS; I simply meant that it supposedly has Raid-Z2, which is a RAID-6-style implementation, and you might consider using that.

teamdest
Jul 1, 2007

frogbs posted:

So I'm in a bit of a pickle. I work for a really small TV station (we only have 4 employees). Dealing with video, we can quickly create many, many gigabytes of data. Right now we have a 2TB server storing all of our content, but we have quickly filled that up. I'd like to build something in the 5 to 8 terabyte range to accommodate our future storage needs, but am a bit flummoxed at the variety of options in front of me. As I see it, here are the routes I can go:

- Buy a prebuilt NAS and fill it with drives
- Build a hardware raid box with an areca card or something, fill it with 8 1tb drives using raid-6 for 6tb of storage
- Buy a prebuilt server from HP or something for a ridiculous cost
- Build a box using software raid (can software raid even be done on 8 drives?)

Can anyone offer any advice on any of these options, or at least steer me in the right direction?

If this is business-related, at the very least you should consider a decent backup solution. Additionally, you're probably best off contacting a company to do it right and get a warranty on it. Dell makes nice stuff.

The basic question is "Am I willing to take responsibility for the loss of 8 terabytes of shows we're supposed to be showing tonight or storing for whatever reason?" If not, a prebuilt NAS, software RAID, or hardware RAID-6 are right out. If you're comfortable taking that risk, I'd say you're better off with a good hardware RAID controller with an integral battery (so you don't have write failure on power loss), one of the cases with hot-swap bays in the front (so you know the cooling is done decently), running RAID-6 with hot spares available.

Essentially, what is the failure tolerance allowed here?

None? Redundant NAS's with failover and infrastructure from a company that knows how to do such things, as well as hot and cold backups and offsite backup.

Loss of a day's work to restore from backups? Get a decent NAS and a local backup solution. Backup at least should be done by a professional if you've never handled it before.

Let's hope for the best, but if it goes tits-up then that's that? Just build something out using MDADM or any decent RAID controller, or ZFS if you're feeling adventurous.
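
For a rough idea of what the adventurous route looks like with eight 1TB disks (device names are placeholders): a software RAID-6 with a hot spare leaves about 5TB usable.

  # 7-disk RAID-6 (two-drive fault tolerance) with the 8th disk as a hot spare
  mdadm --create /dev/md0 --level=6 --raid-devices=7 --spare-devices=1 /dev/sd[b-i]
  mkfs.xfs /dev/md0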

teamdest
Jul 1, 2007

adorai posted:

In case anyone is interested, it looks like zfs is going to have deduplication before the end of the summer.

In case anyone is interested, it looks like I have to learn to use Solaris before the end of the summer.

teamdest
Jul 1, 2007

Allistar posted:

sysctl shows that net.inet.tcp.delayed_ack is 0 already.

It RANDOMLY went back up to 16-20MB/s for an hour or so and has dropped back down to 9MB/s.

I'm at the point of trying Cat6 cables instead of Cat5e, though I have read from countless sites that Cat5e is fine for gigabit. Each of these cables is less than 10 feet in length.

Cut them yourself, or premade patch cables? I'd honestly say run some extended packet-transmit tests first and look for loss there. If you're not getting loss on that, it's not your cables, and then it comes down to a bottleneck somewhere between the hard drives on either end.
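
Something like iperf between the two boxes is the easy way to separate network problems from disk problems (hostname is a placeholder):

  # on the NAS end
  iperf -s
  # on the client, a five-minute run with a readout every 10 seconds
  iperf -c nas.example -t 300 -i 10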

teamdest
Jul 1, 2007
Could anyone recommend an Ultra-320 SCSI card with (I think) a 68-pin internal connector, preferably one that is PCI-E?

teamdest
Jul 1, 2007

jonathan posted:

I've been reading through the thread, but the OP is fairly old and I was hoping to get a recommendation based on the current lineup of pre-built Nas's out there.

Is the Netgear ReadyNAS stuff still decent? Basically I need media storage, somewhere around 4TB. Currently I just use a Boxee Box with a USB drive plugged into it.

Problem is, when I transfer an 8GB movie, it transfers from my laptop to the drive at a max of 1.6MB a second. So I was hoping someone could recommend a NAS that will allow for quick network transfers.

I was thinking of upgrading the entire network to 1000mbit/Wireless N. Will that offer a speed upgrade versus my 100mbit/Wireless G setup now?

The OP is ancient, I'm aware. I'm going to update it! Promises! (No Promises)

teamdest
Jul 1, 2007
I flat out don't understand how a client-side bug on a completely unrelated platform turns into root access on a Unix-based file server. That's an outright lie.


edit: I was running NexentaStor Community for a while, but there were a LOT of authentication issues between Linux, Windows, and Nexenta, so I wound up saying "gently caress it" and going back to my old standby of Debian/mdadm/XFS.

teamdest
Jul 1, 2007
Wow. So it actually WAS a problem with OS X. Sorry FreeNAS!

teamdest
Jul 1, 2007

Waffle Conspiracy posted:

I need help making a decision. Our company just got a couple million dollars in EMC storage, so I was told to get rid of our islands of storage in our remote office -- which means I now have a Promise Tech 12-bay enclosure with 12 2TB drives. Along with that I currently also have a TowerRAID 4-bay enclosure with 4 2TB drives.

My problem is I don't have a SCSI card, so my choices are either buy a SCSI card and use the Promise Tech enclosure, or pull the hard drives and place them in a different enclosure (if I do this I'm thinking of getting the 8-bay version of what I already have).

I don't know which is the better way to go. The Promise Tech is less upfront cost, since the SCSI cards are a little cheaper than the 8-bay enclosures. It also comes with more options. But it's loud, uses a lot of power, and is very large.

So if you had a couple hundred to spend, 12 drives, and that Promise Tech enclosure, which way would you go? A SCSI card and the power-hungry enclosure, or a different solution altogether?

(if it makes any difference, I'm running win2008 R2)

I wouldn't be betting the ponies on SCSI-anything in this day and age, personally. Maybe if I were still at college with free power, free AC and a bed I can stick loud things under to make them quiet I'd go for it, but not in a place where I'm paying for everything. Gotta factor the power cost into that "couple hundred" you plan to spend.


teamdest
Jul 1, 2007

movax posted:

+5V and +12V; these Hitachis on my desk at the moment want 430mA @ 5V (2.15W) and 350mA @ 12V (4.20W). Luckily a lot of the PSU guys are moving back to the massive single-rail design, and coupled with staggered startup I wouldn't worry too much about it.

Though, I guess with 45 drives (I only have 22) it may be a good idea to wire in another supply in parallel, hah. 45 5K3000s would need nearly 20A @ 5V (100W) and 16A @ 12V (192W, but 16A on 12V is child's play for most PSUs these days).

20A on the +5V is much harder to come by, I would think. 16A on the 12V should be fairly easy on a monolithic rail, since video cards require a lot of power there too.
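
(For reference, the math in the quote checks out: 45 drives x 0.43A is about 19.4A on the 5V rail, roughly 97W, and 45 x 0.35A is about 15.8A on the 12V rail, roughly 189W.)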
