H110Hawk
Dec 28, 2006

teamdest posted:

RAID-6 (Double Parity)

RAID-Z (Sun makes everything in house, don't they?)

RAID-Z is a software RAID implementation in Solaris's ZFS file system.

To expand on this for the OP (please incorporate if you like it!): ZFS includes most enterprise-level RAID features you would expect. It can do single parity (raidz1) or double parity (raidz2). Within ZFS you create pools for your storage needs; these can be mixes of N-way mirrors, raidz's, etc.

There are also things such as raid4, which is like raid5 but without distributed parity. Normally parity data is written right alongside regular data across the various disks; if you put all of your parity on a single dedicated drive instead, you get raid4. This creates performance issues, as your parity spindle cannot be used for data in any way. Network Appliance has created a double-parity version of raid4, calling it RAID-DP.
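
A minimal sketch of what those ZFS layouts look like at the command line (the pool and device names here are made up):

code:
# single parity (raidz1) -- survives one disk failure
zpool create tank raidz1 c0t0d0 c0t1d0 c0t2d0
# double parity (raidz2) would instead be:
#   zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0
# pools can mix vdev types, e.g. tack a 2-way mirror onto the same pool
# (-f because mixing redundancy levels draws a warning):
zpool add -f tank mirror c1t0d0 c1t1d0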

H110Hawk
Dec 28, 2006

teamdest posted:

No. Raid 5 requires same-sized hard drives, and requires 3 or more.

Close! Most redundant RAID implementations merely treat every disk as if it were the size of the smallest one. Meaning, 250/250/500GB in RAID5 = 500GB of available space: three 250GB shares, minus one share's worth of parity. If you're doing software RAID, you may even be able to carve the leftover 250GB off that last disk. But that's a big MAYBE.

H110Hawk
Dec 28, 2006

teamdest posted:

technically correct, sorry. however you're a fool to use a 250 with 2 500's, due to space loss.

Or you use a controller that allows you to swap the 250 for a 500 at a later date and rebuild to a larger physical array. Then, using something along the lines of gparted, you expand your partition and filesystem to encompass the new free space. Just trying to keep the record straight: it is technically possible, though likely silly.

I believe PERC controllers do this, based upon what I've read on these forums. NetApps do not appear to, at least not in ONTAP 6, though I have never done a complete replacement cycle. Perhaps I shall try that in the coming weeks.
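
For the software-RAID case, the Linux md equivalent is roughly the following (a sketch; the device names are made up, and you'd repeat the swap-and-rebuild for each disk being upgraded):

code:
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1   # retire the 250
mdadm /dev/md0 --add /dev/sdd1                       # add the new 500, wait for the rebuild
mdadm --grow /dev/md0 --size=max                     # once ALL members are big enough
resize2fs /dev/md0                                   # grow the (ext2/3) filesystem into the new space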

H110Hawk
Dec 28, 2006

Digitally Mastered posted:

I've looked at NetApp, but most of what they provided is way above my budget. Do you think they are worth the cost? Have you had good success with the equipment?

New NetApp gear is horribly overpriced, and they are slowly being pushed out of the market by things like ZFS, OnStor, BlueArc, etc. Used NetApp is much more reasonable, and it still gets awesome performance. By some accounts the 3000 series is actually lower performance than the 900 series for the same money. The only gotcha is that the entire 900 series is past EOL, save maybe the 980.

Their hardware is very solid, however, and you will rarely if ever have a problem from which you cannot recover. ZFS has not had nearly as much real world vetting as WAFL.

If you're willing to put up with salespeople, though, you can apparently get some pretty decent deals on the hardware. Just remember that the software licensing adds 100% to the cost, and then you have service contracts on top.

H110Hawk
Dec 28, 2006

oblomov posted:

They are pricey, but if you beat up their sales people you can get a decent deal (i.e. 40-50% off on hardware, 20-25% off on software and support).

Just to expand on this: you should never really pay more than 40-50% of list on any enterprise storage hardware. The price is that high because some people do just sign on the dotted line, but if you waffle at all they'll hack it down to give you such a great deal, saving you millions. The software is where they (NetApp, BlueArc, everyone) nail you to the wall anyway.

I know that for a lot of this gear you really do get what you pay for, but it's not always true. Especially when you can get a licensed, off-lease, 3-year-old NetApp that runs just as well as it did fresh out of the crate, for half off again. They're like cars, really. Keep a few extra spare disks on hand and you're golden.

H110Hawk
Dec 28, 2006

poo poo Copter posted:

Could any point out any good tutorials for setting up md or Z-RAID arrays?

If you just have the one controller and backplane it's easy.

code:
zpool create porn raidz disk disk disk disk disk
To expand, you will need another five disks to maintain even performance.

code:
zpool add porn raidz disk disk disk disk disk
Those "disk"s are the physical devices. You'll wind up with something like:

code:
          raidz      ONLINE       0     0     0
            c0t7d0    ONLINE       0     0     0
            c1t7d0    ONLINE       0     0     0
            c4t7d0    ONLINE       0     0     0
            c5t7d0    ONLINE       0     0     0
            c6t7d0    ONLINE       0     0     0

H110Hawk
Dec 28, 2006

poo poo Copter posted:

Thanks. Upgrading in the future is one concern of mine though. Would I absolutely need another 5 disks in order to upgrade/expand, or could I add say 3 to another pool? How exactly does the expansion process go?

Addendum to everything Toiletbrush said: I was very specific in my wording for a reason. You can add more raidz's, mirrors, etc., but if they don't match your original vdev you'll burn space faster or slower than it does. You cannot expand a parity-based vdev (raidz/raidz2) itself. And please don't be afraid of the zfs create or zfs snapshot commands. It's really fun! (Caveat: beware that NFSv3 can't see past ZFS filesystem boundaries.)

code:
# zfs list -t filesystem | wc
      79     395    6863
# zfs list | wc
     211    1055   17130
# zfs list | tail
pool/homie/backups/capone/418229               97.0K  35.6T  97.0K  /pool/homie/backups/capone/418229
pool/homie/backups/capone/418229@200805131600      0      -  97.0K  -
pool/homie/backups/capone/418239               58.1M  35.6T  58.1M  /pool/homie/backups/capone/418239
pool/homie/backups/capone/418239@200805131600      0      -  58.1M  -
pool/homie/backups/capone/418242               59.3G  35.6T  59.3G  /pool/homie/backups/capone/418242
pool/homie/backups/capone/418242@200805131600   458K      -  10.2G  -
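
The commands behind output like that are about as simple as it gets (the dataset name and snapshot label here are made up):

code:
zfs create pool/homie/backups/capone/418250
zfs snapshot pool/homie/backups/capone/418250@200805131700
zfs list -t snapshot | tail -2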

H110Hawk
Dec 28, 2006

HPL posted:

Good lord, that thing must start glowing from all the heat when all the hard drives are going at once.

They're not so bad; the two rows of fans in the front prevent that. These things sound like freaking turbines when they power up.

Xyratex makes one as well, the 48-in-5u is a pretty common form factor now.

The real bitch is racking them without bending the rails.

H110Hawk
Dec 28, 2006

MrMoo posted:

I'd hardly call the top-end-only models of a few companies common. Side note: the Xyratex website is rear end; the size appears to be 48/4U, like the X4500.

(edit) Neither NetApp nor EMC appears to have any top-loading chassis models.

They aren't the top-end models, though; they're some of the lowest around. Sun considers the X4500 a "server", not a "storage" device. Xyratex is only looking at the reseller market; they don't care to sell to you or me. NetApp buys most (all?) of their disk trays from Xyratex, and I believe OnStor sells the 48-in-4U ones. It's only a matter of time before Supermicro comes out with a chassis + board that can do it, or until NetApp certifies and badges the Xyratex one.

IOwnCalculus posted:

Between this and the power requirements I'd guess it has...yeah, that's one big loving box.

Yup! Three of them fit on a 30A@110V circuit (that's 3,300W, so call it 1,100W a box at the wall).

H110Hawk
Dec 28, 2006

Toiletbrush posted:

As said, the memory requirements depend on what you want to do. For a home file server, even a gigabyte is enough. ZFS makes gratuitous use of memory for functions like cache and especially prefetching (it detects linear and stride reads and depending on the pattern prefetches very large chunks (think 100-200MB)). But it can also run in low memory situations. The cache reacts to memory pressure and shrinks if necessary.

There are also a handful of kernel tunables that let you tell ZFS exactly how much RAM it gets, etc.
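
For example, on Solaris you can cap the ARC in /etc/system (a sketch; the value is whatever you decide, and it takes effect on reboot):

code:
* cap the ZFS ARC at 1GB (value is in bytes)
set zfs:zfs_arc_max = 0x40000000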

H110Hawk
Dec 28, 2006

Farmer Crack-rear end posted:

I remember hearing a couple of years ago that buying hard drives in bulk could net some pretty good deals. Is that still the case? Where would one go to find that sort of thing?

Define bulk. I get more or less the lowest prices around* on disks, near as I can tell, ordering 300-500 spindles a month. Ordering 10-20 will net you a small discount; 20 is a "case" of spindles and normally comes still sealed from the factory. Check out sites like cti-intl.com, though most of the time the hassle isn't worth not just keying it into newegg/mwave/whatever.

Sometimes newegg has some pretty good deals, too, so be on the lookout. While I tend to not be able to take advantage of them due to quantity issues, you can sometimes save another 10%ish off distributor level prices.

* Dell, HPAQ, and similar OEMs not included; they're the tier above me, and it's a VERY large jump.

H110Hawk
Dec 28, 2006

vanjalolz posted:

Interesting, so following that logic I should be able to start a 3 drive array with 0 parity drives, and add one when I'm ready, right?

I'm pretty certain it uses distributed parity, meaning each drive holds a roughly equal share of the parity rather than there being a dedicated parity drive. You cannot do what you are proposing with ZFS.

H110Hawk
Dec 28, 2006

qalnor posted:

Any tips on retailers for a system with 6+ SATA ports? I'm thinking of going windows home server, but I want to have as much room as reasonable to grow.

Supermicro makes plenty of boards with 6+ SATA ports. In reality, though, it doesn't matter, because a 4-12 port SATA card is really quite cheap.

H110Hawk
Dec 28, 2006

vlack posted:

Is anyone using OpenSolaris?

I don't know the answer to all of your questions, but you can almost certainly set up OpenSolaris to be headless the same way you do with Solaris 10.

As for software on Solaris 10, have you checked out Blastwave?

code:
pkgadd -d http://www.blastwave.org/pkg_get.pkg          # bootstrap the pkg-get tool
cp -p /var/pkg-get/admin-fullauto /var/pkg-get/admin    # answer install prompts automatically
/opt/csw/bin/pkg-get -i wget rsync blah                 # then install whatever you need
I would also suggest you install SFWcoreu from Sun's Freeware collection (NOT sunfreeware, which is an unaffiliated site). It gives you useful things like /opt/sfw/bin/cp (GNU cp).

H110Hawk
Dec 28, 2006

NotHet posted:

This isn't always the case, I have a really terrible raid controller (I paid $20 for it) which just keeps chugging.

I'm going to buy three of these http://www.wdc.com/en/products/Products.asp?DriveID=503. An extra $300 :smith:

This is pretty much what's being alluded to above: you have a consumer software-RAID card with consumer hard disks, and it is willing to put up with anything.

NotHet posted:

I've been building a new raid array for myself and I have a $400 lesson that should probably be mentioned in the OP.

High end RAID Controllers should NOT be used with desktop (or 'consumer') grade hard drives.

I'm now trying to pawn a few 1TB Consumer Harddrives off on friends. :p

My config: ARC-1680ix-24, three generic consumer Seagate 1TB hard drives

Yes! This should likely be added to the OP. Consumer hard disks are just that: consumer. There are, however, enterprise firmware images you can load on some of them. I imagine with some hacking you could convert a Seagate 7200.11 into an ES.2 disk, which should give you what you need in terms of error timeouts, etc. I believe you can do the same sort of thing to convert a regular consumer disk into an A/V disk.

This does not account for hardware differences, though. I know the ES versions of Seagate drives weigh substantially more than the non-ES versions. (ST3...NS vs. ST3...AS)

And I don't know about anyone else, but the OP is coming off pretty hostile right now, regardless of the merit of your statements. :)

In theory ZFS is in a pretty good spot to do dynamic restriping of data; however, it is a very risky operation at absolute best, and not something the developers want to be supporting down the road when enterprisey people try it. ZFS has absolute control over both the block and file layout on disk, but of course knows nothing about the disk subsystems underneath. In theory a "dumb" RAID controller with no knowledge of the overlaid FS could do this very safely, given enough unallocated space; to the OS it's just blocks in an LBA device. Copy block, compute+write parity, flush, remap old block.

If you really want to expand an online array, Netapp is a pretty good bet, but have fun with pricing it out, and you should come over to the enterprise thread. :)

The problem with all of this is that home users who are just itching to blow $500 on a RAID controller and then add disks one at a time want that feature, but you are an extremely niche market. Enterprise users who don't bat an eye at a $500 RAID card don't need the feature; they buy all of their spindles up front.

H110Hawk
Dec 28, 2006

deimos posted:

The plan is to buy 3x 1TB external (USB :( ) drives and RAID-Z them with as much gzip compression as I can without becoming CPU-bound. The idea is that the compression will make up some for the fact that I am using USB drives.

Sounds fine. If you're pulling binary groups, though, you're not likely to gain much from compression, since that data is already compressed. If this is all for text groups, just enable lzjb compression, which if I'm remembering correctly is much faster than gzip.
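
Setting it is per filesystem, something like this (the pool/fs names are made up):

code:
zfs set compression=gzip-9 tank/text     # best ratio, heaviest CPU
zfs set compression=lzjb tank/binaries   # the cheap default algorithm
zfs get compressratio tank/text          # see what it's actually buying you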

H110Hawk
Dec 28, 2006

Stonefish posted:

Wiki says 133MB/sec. That's not very quick.

Why does storing data have to be so difficult? I have money, and I want to spend it on hardware. Someone loving try to sell me what I want :v:

Whoa there, it's not that simple. A sales guy will need to come out and discuss why his form of horse-rape is better than the next guy's. Then he will leave you with glossy brochures. If you're lucky, a technical guy was allowed to come along who might even know how to turn the unit on and what the glossy brochure says about the product.

From there you get to discern the truth from the lies, and what you are left with is nothing. You should already know how many disk IOPS you need going into the talks, and know that nothing their proprietary bullshit does will magically give you IOPS those disks don't have. I tended to use SPEC SFS97 numbers, because our workload compared pretty much directly to theirs.

You can then try to get them to tell you a price for their hardware. This price is complete bullshit, and they will try to forbid you from talking to anyone about it. Figure out if it is MSRP; if so, cut it straight in half, and that is the real price. (Note: not their cost, but the price at which they will actually sell you the unit.) Add on the support contract, which costs as much as the unit costs them, and you're golden!

In the meantime you should make them take you and at least one low-level technical guy out to an expensive lunch/dinner. Order fine wine and the steak.

Jaded? Sure. Far from the truth? Hardly. Trying to spend money on hardware/software can be one of the hardest things to do in this industry. Everyone is out to maximize their own bottom line, and some people will fork over MSRP, so why not keep it high?

H110Hawk
Dec 28, 2006

angelfoodcakez posted:

are these the 1.5tb seagates that had all the problems with locking up?

Doesn't matter; the firmware fix was released, last I heard.

H110Hawk
Dec 28, 2006

Sjurygg posted:

I'm setting up a Dell MD1000 in split mode for two video streaming servers. I've been recommended to use RAID-6 instead of our default RAID-5, since it would offer better read performance for large reads -- the way I see traffic going on these servers.

Why would RAID6 offer increased performance over RAID5 for the same number of spindles? Ask them that direct question, and demand a technical answer. The only way I could see this being true is if you were doing pure reads, in which case you are indeed changing the usable-disk:spindle ratio. In reality it sounds like they're trying to sell you an extra hard disk/server/feature.

As soon as you start throwing any writes into the mix, you are doing three writes per block written (data plus two parity) instead of two, and you only have 8 disks' worth of IOPS to spend. This all assumes your controller can even keep up with the doubled parity calculations for whatever write load you have. If it's a streaming server, the write load should admittedly be low.

What IS going to offer you increased performance for large sequential reads? A larger block/stripe size, and read-ahead enabled on your RAID card. If you are only serving up large files, use the largest supported stripe size, and try to make your FS block size match it exactly or be an even multiple.
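
As a sketch of the alignment idea with ext3 (the numbers are assumptions: a 64KB per-disk chunk and 4KB filesystem blocks gives stride = 64/4 = 16):

code:
# stride = (per-disk chunk size) / (fs block size)
mke2fs -j -b 4096 -E stride=16 /dev/sda1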

H110Hawk
Dec 28, 2006

Ceros_X posted:

LSI-8344ELP-MEGARAID

The LSI MegaRAID series, and their straight-up LSI SAS HBAs, are all pretty solid. That is a pretty amazing price point.

H110Hawk
Dec 28, 2006

Interlude posted:

3) OpenSolaris using ZFS and RAID-Z. Not positive if this will support both CIFS and AFP.

Depending on how much you like screwing around with things, this is likely your trendiest bet. Solaris has a native CIFS server built into it, exposed through ZFS. It is as simple as punching in:

zfs set sharesmb=on or similar.

http://docs.sun.com/app/docs/doc/820-2429/createstaticsmbsharezfstask?a=view
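
A sketch of the whole dance, assuming a dataset called tank/media and the CIFS packages installed:

code:
svcadm enable -r smb/server          # turn on the in-kernel CIFS service
zfs create tank/media
zfs set sharesmb=on tank/media
smbadm join -w WORKGROUP             # workgroup mode; the doc above covers domain mode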

If you want to more or less set it and forget it, Windows Home Server might be a good option. And steer clear of XFS on Linux; bad things come to those who use it, at least going by the experiences we've had around the office, both professionally and personally. Eric Sandeen will tell you otherwise, of course. Try ext4; it's pretty slick so far.

H110Hawk
Dec 28, 2006

KennyG posted:

Can anyone instruct me on the rebuild speeds/times for something like an Areca ARC-1280 (http://www.areca.com.tw/products/pcie341.htm)? I am looking at doing some math to decide the amount of redundancy for a large RAID array (16+ total drives), and I'd like to figure it out myself, accounting for things like laziness, lead times in RMA, and other drive-cycle problems to assess my risk level.

Spindle count, spindle size, disk I/O utilization, and how big a performance hit you'll tolerate during the rebuild all factor in here. A rebuild has to read every surviving spindle end to end, so think disk size divided by whatever background rebuild rate the controller can sustain. For low usage, large disks, and a high tolerated performance hit, you're looking at about 1-2 days for a parity-based rebuild.

H110Hawk
Dec 28, 2006

haywire posted:

What sort of build would people recommend for high storage but *low* power usage? It would also need to run things like SABnzbd+, some sort of torrent client, Samba, some sort of server with h.265 streaming support, etc. Basically an upload/download box that my flatmates won't kill me for making our power bill go up. I was thinking Intel Atom processors with ArchLinux, but I have no experience system building with them. It wouldn't need to play video, but streaming it at high speeds would be a must (2+ NICs?). HDDs wouldn't need to be superfast, but lots of space + backups would be excellent.

As long as you don't need to do any transcoding, anything should be fine. Make sure it has plenty of memory, and an Intel NIC if you need network performance. I don't know the performance of the Atoms, but I know you can get a dual-core Atom for trivially more than a single-core, so at least start there. I seriously doubt you need two NICs.

H110Hawk
Dec 28, 2006

haywire posted:

I can't even find Atom processors on ebuyer. The closest things I've found are:
http://www.ebuyer.com/product/156820

And that has only 2x onboard SATA connectors.

http://www.ebuyer.com/product/95863 Is that low power?

Don't buy it for the onboard SATA ports; zero is somewhat of an optimal number there. Remember I said to buy add-on cards with bulk SATA ports?

Not really. Modern parts are even lower; look up the specifications.

quote:

I've also seen things like this:

http://www.codinghorror.com/blog/archives/001156.html

But will it be able to do what it needs, and if I'm using a huge array of external HDDs, will it be saving all that much power anyway?

That is exactly the idea for a basic, basic file server that does minimal thinking. Remember that your hard disks will be there spinning anyway. Grab an embedded OS (FreeNAS?) and slap it on there, figure out an enclosure for your disks, and call it a day. If you put something like that in a big ugly full-tower case with a regular power supply, find a board with standard ATX plugs and just run your disks off that.

Most of your power waste is going to be in the power supply, so the fewer power supplies the better. The disks will all have their amperage requirements printed right on them, something like 5V 1.1A / 12V 0.9A. Watts = volts x amps, so roughly 5*1.1 + 12*0.9 = 16.3 watts per disk. Don't buy a power supply larger than you need.

H110Hawk
Dec 28, 2006

KennyG posted:

Does anyone know of a 3 or 4U rackmount server case with space for a 5.25" bay in the BACK?

To what end do you need the 5.25" bay?

H110Hawk
Dec 28, 2006

KennyG posted:

SAS expander module. I want to link two (or more) rack mount cases for some crazy raid storage.

Can you link it? Why would you use that as opposed to just an LSI SAS3041/SAS3081 card?

http://alrightdeals.com/Item.htm?Id=S0_Controller.Card_Serial.ATA.Controller___LS-3041ES

They rock your vague socks.

H110Hawk
Dec 28, 2006
I have 10ish of those at work, only with the LSI cards. Use something like this:

http://www.wiredzone.com/itemdesc.asp/ic/10017610/store/SUPERMICRO

It's much easier to find cases it can fit into than a 5.25" bay. One 24-bay server hooked up to 3x 24-port JBOD expanders. Just get a backplane that has an out port.

H110Hawk
Dec 28, 2006

Farmer Crack-rear end posted:

I'm about to put together a RAID-6 array and I'd like to really beat the hell out of it for a while, hopefully to get any premature failures or unforeseen incompatibilities out of the way. Does anyone know of a good utility program to do this?

stress, bonnie++, and iozone are decent ways to gently caress around with stuff like this.
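
Something like this, run for a day or two against the mounted array, will shake out most infant mortality (the paths and sizes are made up; bonnie++ wants a non-root user):

code:
cd /mnt/array                         # stress --hdd writes scratch files to the CWD
stress --cpu 4 --io 4 --hdd 2 --timeout 86400 &
bonnie++ -d /mnt/array -u nobody      # bonnie++ refuses to run as root without -u
iozone -a -g 16G -f /mnt/array/testfile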

H110Hawk
Dec 28, 2006

tehschulman posted:

drat these headless installs are difficult. I've got FreeNAS on a bootable thumb drive but dunno how I'm going to change the BIOS options without the aid of a monitor. I have some USB ports and an Ethernet jack, any advice for getting this going?

You don't have any type of VGA/RS-232 output?

H110Hawk
Dec 28, 2006

IOwnCalculus posted:

*(how will they know which one failed at this point? Match serial numbers?)

Since they build them all alike, they know exactly how Linux will enumerate the disks. They probably just have a cheat sheet that shows which slot holds /dev/sdh or whatever.

H110Hawk
Dec 28, 2006

IOwnCalculus posted:

I wouldn't trust that, though; I've had Ubuntu randomly reassign drivemappings from one reboot to the next. I just replaced a drive in my backup server - the failed drive was /dev/sdc, but when it came back up it had remapped something else to /dev/sdc and the replacement for the failed drive to /dev/sdb.

Except that once you've popped out the disk you can validate the serial number; the enumeration just gives you a pretty good first guess. This is one of the reasons we like Solaris for our backup servers: it maps a drive out pretty much the same way every time. You could even have Linux remap the disks every time it boots and update a database if something changes.
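
On Linux you can do the serial check without pulling anything, e.g. (the device name is made up):

code:
ls -l /dev/disk/by-id/ | grep sdc     # udev's persistent names embed the model and serial
smartctl -i /dev/sdc | grep -i serial # or ask the disk directly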

quote:

Don't get me wrong, I like what they've done, but I think they haven't made enough concessions to long-term maintenance and reliability; it seems like for a relatively marginal increase in cost (especially compared to even the next-cheapest solution) they could make things considerably more reliable.

They make most of their money on that marginal increase in cost. I imagine they just have some dude go around to all the devices with a dead disk once a day, or even every few days, power them down, swap the disk, and power them back up. For that price no one is expecting 100% reliability. Just tell people "yeah, your poo poo might go down between 10am and 1pm daily for drive repair and other maintenance." You even have the advantage that people want to do their backups overnight, so you can work during the day.

Having built our storage system to scale past a handful of servers, the cheaper way almost always adds up to the better way. Disks don't fail that often under backup use. One guy, $15/hour, full health benefits, 30 minutes per server per failed disk: you have to beat roughly $15 a swap for "increased reliability" to pay off. You probably get at most 1.2-1.5 failed disks per rack per day. I imagine any increase in cost would be at minimum $100 per server; at 10 servers in a rack that's $1,000/rack, $10,000 total, which buys 666 drive swaps. Are you hoping to bring failures from 1.5 to 1.2 per day reliably? 1.5 to 0.5? Or just reduce the time spent per disk from 30 minutes to 20 or 15?

Now, with that math aside: if you're replicating this data two ways, you can just let one machine wholesale fail before replacing it. Pull the whole unit once it gets past what you consider a safe failure threshold, put a new one in the rack, let it copy the data over automatically, and repair the pulled server offline all at once. This would probably cut the time per disk from 30 to 20 minutes, since there's no long spindown/spinup per disk replaced and no clock ticking. Their threshold for total failure is likely a lot higher than you might imagine, if they work in this manner.

What do you propose the next cheapest solution to be that would beat their cost and margin?

H110Hawk
Dec 28, 2006

roadhead posted:

I've got about $2,000-$2,500 budgeted to build a NAS here shortly (within the next 3 months most likely). I'm planning on FreeBSD with RAID-Z using this hardware; any obvious problems so far?

You are spending far too much on the UPS. Just grab some cheapo thing that can run your machine for 5 minutes and then shut it down via RS-232/USB. This one will give you more holdup for half the price:

http://www.newegg.com/Product/Product.aspx?Item=N82E16842102048

The APC brand isn't doing anything there except costing you money; they all use the same batteries. I found that one by opening the UPS page, where it was the first thing listed. I would target $100 for the box alone, or $125-150 to cover everything in your closet.

You seem to have a need for speed with the CPU, but what realistic throughput are you looking to push from this machine? Is it just for NAS duty, or do you hope to offload HTPC functions like video transcoding? If not, I would go dual-core for half the price and still maintain a lot of flexibility. Also, this onboard chipset likely won't get you very far unless it's always a single stream, and even then:

Onboard LAN
LAN Chipset Realtek 8111DL

If you want to go fast, or have multiple clients accessing it, slap a cheap Intel NIC in there:

http://www.newegg.com/Product/Product.aspx?Item=N82E16833106033

Cheaper ram: http://www.newegg.com/Product/Product.aspx?Item=N82E16820148160

DVD-ROM? Are you going to be backing up your DVDs?

This may seem nit-picky, but for a dedicated NAS box you can now throw in 2 more hard disks for roughly the same amount of money.

H110Hawk
Dec 28, 2006

roadhead posted:

Ok, cheaper UPS.

:) Glad to help

quote:

Yes it has to be able to transcode 1080P x264 and deliver it to my PS3 over the gigabit network. That will probably be the most taxing thing the box does. I would also like to use it for Asterisk, and other fun random projects that I dream up. Also I am a programmer by trade so you never know what this box might end up doing.

The server will be ripping/encoding DVDs to back them up and make getting at them easier.

Cool, as long as you have your reasons! Have fun!

Please add IPv6 to rtorrent if you're looking for a project. ;)
http://libtorrent.rakshasa.no/ticket/1111

H110Hawk
Dec 28, 2006

Horn posted:

I'm getting ready to upgrade my RAID5 array from 3 drives to 4 and I have a question about how I should configure this thing. Currently my motherboard has 4 SATA ports all of which are used (3 for the array, 1 for the system). This is a software RAID setup.

Since I have to buy a controller card I figure I might as well do this right. Would it be better to buy one 4 port card and put all 4 of the array drives on that? Two 2 port cards (with only the system drive plugged into the MB)? One 2 port card and just put the new drive on it leaving the other 3 on the MB?

You're looking to buy a SATA controller card? Or a SATA+RAID controller card? What is your budget?

Just put them all on the same card.

H110Hawk
Dec 28, 2006

Allistar posted:

It doesn't matter, no atom system that I see offers more than 3 integrated SATA ports. I'll just keep my P4 system and measly 8MB/s transfers due to the fact that 3 of the 5 drives are running off of a PCI SATA card. :(

The cost of running that P4 is more than a low-power machine with a PCIe slot + SATA adapter. All things equal you will see a bottom-line savings on your electric bill by going to a new machine, and probably less than a year to recoup your costs if you go low power.

H110Hawk
Dec 28, 2006

stephenm00 posted:

What is the power difference between an intel / amd 65w processor and an atom processor? Is it enough to justify getting the atom for an always on server?

http://en.wikipedia.org/wiki/Intel_Atom#Architecture

H110Hawk
Dec 28, 2006

Gendo posted:

Is this going to work for me or is there something else I should look for? I'm not super concerned about performance as this will be for media storage or maybe this as I will be adding a second enclosure eventually.

I would suggest going software RAID if you're looking at <$100 cards; OpenSolaris ZFS and Windows Home Server are pretty popular options. That way, when your card shits itself you can just throw it away and replace it with a new one.

What OS are you going to be running this on? Would one of those freenas/openfiler/whatever distributions be right up your alley?

H110Hawk
Dec 28, 2006

frogbs posted:

I have no backup solution as of yet. Buy an external 2TB hard drive (about $200) and mirror the array nightly.

Correct. Use rsync --delete. If the files change infrequently, rsync's default quick check (timestamp + size) keeps it from having to read every file. It doesn't really matter how taxed the system is; with no backups you're boned if it dies.
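
A minimal nightly mirror, assuming the array lives at /array and the external drive mounts at /mnt/backup:

code:
# crontab entry: nightly at 3am; --delete keeps the mirror exact
0 3 * * * rsync -a --delete /array/ /mnt/backup/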

H110Hawk
Dec 28, 2006

frogbs posted:

Correct me if I'm wrong, but doesn't rsync require that I set up a host on the volume to be backed up? Unfortunately this server is a vendor supplied "solution" and it may void our warranty if I go mucking around with 'their' server.

Well, how do you get data off the thing normally? If you can export it as anything Linux/Mac/Windows can mount, then you can back it up with rsync. If not, then yeah, just drag and drop the whole drat thing every day or whatever.

H110Hawk
Dec 28, 2006

frogbs posted:

I can drag and drop just fine, it's just that I thought rsync required me to configure rsync on the volume to be backed up. If it doesn't, then that is great for me; perhaps I am just misinformed...?

If you can mount the filesystem on another computer, then there isn't anything you need to do on the server itself for rsync to work. Play with the various "when should I sync" options and --dry-run; it will list out what it wants to do and why. You can also keep an eye on your I/O to make sure it's not doing anything silly like reading every file. If your modified timestamps are real, and not faked or ignored somehow, they're a pretty good indicator of when a file needs syncing.
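
To sanity-check it before the first real run (the mount points here are made up):

code:
rsync -ani --delete /mnt/nas/ /mnt/backup/   # -n = dry run, -i itemizes each change and why
rsync -a --delete /mnt/nas/ /mnt/backup/     # the real run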
