evil_bunnY
Apr 2, 2003

Mewcenary posted:

Also, there seem to be a LOT of posts from people just building their own setups. Any particular reason why this is recommended? Does it really work out a lot cheaper / smaller / less noisy than the 'black box' NAS options?
You can get pretty fancy for not a lot of money. But then you have to manage the fanciness when it breaks, so lots of people decide to live without features so they don't have to worry about poo poo breaking.

necrobobsledder posted:

Because I could probably always use a small, power-sipping box with space for 6 drives, I'll probably keep this as my 24/7 NAS, plus a big huge NAS that I turn on with Wake-on-LAN when I need some particular file from the archives.
Automated Storage Tiering to the people!

evil_bunnY fucked around with this message at 17:05 on Nov 17, 2011


evil_bunnY
Apr 2, 2003

Residency Evil posted:

The main appeal of something new would be the (hopefully) decreased power draw, smaller size, and peace of mind. An ideal solution would be something like a Drobo or the original WHS, which have solutions for redundant storage that seem very appealing. The old WHS probably won't run on this computer and I really can't justify paying for a networked Drobo. This: http://www.amazon.com/Data-Robotics-FireWire-Storage-DR04DD10/dp/B001CZ9ZEE/ref=sr_1_1?ie=UTF8&qid=1321567278&sr=8-1 is tempting, but I imagine connecting it to my Linux system would be pretty miserable.
A microserver would serve you well (with your choice of drives and management system on it).

evil_bunnY
Apr 2, 2003

devmd01 posted:

I went with a massive file server with nine drives because the only things I had to pay for were the case and RMA shipping for the PSU and a couple of hard drives, and Indiana has some of the cheapest municipal electrical rates in the country. Decommissioned company-built whitebox IP video security servers, :snoop:
I have a pair of MD1000s at home and I'm not ashamed.

evil_bunnY
Apr 2, 2003

Hamburglar posted:

Where do you add hdparm -S242 /dev/sdX in a boot script? Or does that only apply to Linux or something (meaning I'll have HDD spindown in Windows 7 anyway)?

sdX should really be sda/sdb or whatever your storage drives are called. Also spindown is for treehuggers.
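
To answer the actual question: a minimal sketch, assuming a Debian-ish Linux box where /etc/rc.local still runs at the end of boot (the device names are just examples, use whatever your data drives are called):

# /etc/rc.local, runs once at the end of boot on many distros
# -S242 means a 1-hour idle spindown timeout
hdparm -S242 /dev/sdb
hdparm -S242 /dev/sdc
exit 0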

evil_bunnY fucked around with this message at 16:36 on Dec 6, 2011

evil_bunnY
Apr 2, 2003

IOwnCalculus posted:

a VM with four cores on that i5 2400 and 12GB assigned to it (though I may just up it to 16GB for shits and grins).
Don't do this. Too much memory doesn't matter if there's no contention, but extra vCPUs will actually slow poo poo down unless they're needed.
Also 12GB is kinda funny, I serve 25TB with 4GB.

evil_bunnY
Apr 2, 2003

Star War Sex Parrot posted:

I wish they'd post a follow-up to that study. It's five years old and based on drives that are even older. Drive manufacturers have done a lot to both the hardware and firmware of hard drives to extend their lifespan and I'm curious if there are tangible results.
The Backblaze guys had some data on HDD failure rates in their Pod 2.0 article:

quote:

We monitor the temperature of every drive in our datacenter through the standard SMART interface, and we’ve observed in the past three years that: 1) hard drives in pods in the top of racks run three degrees warmer on average than pods in the lower shelves; 2) drives in the center of the pod run five degrees warmer than those on the perimeter; 3) pods do not need all six fans—the drives maintain the recommended operating temperature with as few as two fans; and 4) heat doesn’t correlate with drive failure (at least in the ranges seen in storage pods).
http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/

evil_bunnY
Apr 2, 2003

BlackMK4 posted:

Maybe the wrong thread, but is there a USB device that will connect to a server via VGA/DVI and USB or PS/2 connectors so you can manage a headless server without a monitor or keyboard/mouse in the event of ethernet going down?
Not USB. iLO/iDRAC cards do this over ethernet.

evil_bunnY
Apr 2, 2003

error1 posted:

I'm a bit annoyed that I didn't buy a motherboard that supports VT-d. After some of my 2TB disks started crashing I decided to rebuild my fileserver, and it seems ESXi 5.0 runs fine on it. But since I'm missing VT-d I can't pass through the SATA controllers.
Why not just use the virtual disk as local storage? If you have a poo poo SATA RAID controller it's not ESXi's fault. If you're trying to migrate data to a VM this way, it means you don't have it backed up somewhere else and welp.

evil_bunnY
Apr 2, 2003

Odette posted:

Why 3 different services? Wouldn't one suffice?
Quality of life, mostly.

evil_bunnY
Apr 2, 2003

Any hardware reseller should have brackets for less than $10

evil_bunnY
Apr 2, 2003

iSCSI is a block protocol, so if you're going to mount it on n>1 machines you'll need a concurrent-access filesystem, which NTFS isn't.

Share your NAS volumes using a file protocol (SMB/CIFS, NFS, AFP, you name it). Then the server process takes care of locking etc.
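
For example, a bare-bones NFS export on a Linux server looks something like this (the path and subnet are made up for illustration):

# /etc/exports
/srv/media  192.168.1.0/24(rw,sync,no_subtree_check)
# reload exports with: exportfs -ra
# mount from a client with: mount -t nfs nasbox:/srv/media /mnt/media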

evil_bunnY
Apr 2, 2003

fletcher posted:

But after 10 years, even if there are no errors, should the drives be replaced?
After 10 years your array will fit on a single new drive and you won't be able to find the old drive tech anyway.

evil_bunnY
Apr 2, 2003

Are you sure the media you're reading from can actually go faster than that?

evil_bunnY
Apr 2, 2003

I think the only major annoyance is the lack of kernel CIFS.

evil_bunnY
Apr 2, 2003

FISHMANPET posted:

I was super excited about in-kernel CIFS, but at least for the home environment, it's a pain in the rear end. I keep using it out of some moral obligation or something, but gently caress I hate it. Maybe it works really well in an enterprise (AD) environment, but it's pretty bad without.
What don't you like about it?

evil_bunnY
Apr 2, 2003

I don't think I'd choose Linux for a storage box. ZFS on BSD or Solaris/illumos tends to be much less of a headache.

It'll be interesting to see where BTRFS ends up a couple of years down the line.

evil_bunnY
Apr 2, 2003

DEAD MAN'S SHOE posted:

What kind of headaches?

I have no complaints or apparent data loss with headless Ubuntu w/ software RAID 5 after 4 years, multiple reinstalls, and 1 drive failure. TightVNC launching IceWM (or whatever) with whatever else on top has worked out very well, even on 1GB of RAM.
Well, software RAID with a traditional FS on top is a very different animal.

evil_bunnY
Apr 2, 2003

iTunes server probably keeps track of changes on the NAS.

evil_bunnY
Apr 2, 2003

IT Guy posted:

Which is other people having the same issue. They believe it's the CPU maxing out on the encryption of the SSH protocol.
That's probably the issue. SSH wasn't really designed for high-speed file transfers.

Viktor posted:

Eliminating SSH will not correct the problem; the CPU is pegged out with the checksums from rsync.
So I underestimated how anemic those chips are.
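
If it really is rsync's checksumming that's pegging the CPU, one thing worth trying (paths and hostname here are placeholders) is skipping the delta algorithm entirely, since it buys you nothing on a first-time copy over a fast LAN:

# -W / --whole-file transfers files whole instead of computing rolling block checksums
rsync -avW /tank/media/ user@nasbox:/backup/media/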

evil_bunnY
Apr 2, 2003

If you open SMB/CIFS to a routed network you deserve everything you'll get.

evil_bunnY
Apr 2, 2003

titaniumone posted:

I saw someone mention earlier that they had a ZFS file server with multiple NICs so that they wouldn't be bottlenecked by gigabit ethernet.
iSCSI has MPIO, but file protocols tend to be single link.

evil_bunnY
Apr 2, 2003

movax posted:

24GB of RAM and 8 threads, solely for file-serving at the moment.
I love how you blurb for 6 lines before mentioning this gem.

IT Guy posted:

True, unless you LAGG your endpoint as well :)
LACP will only do 1 interface's worth of traffic on a single stream.

evil_bunnY fucked around with this message at 16:39 on Apr 12, 2012

evil_bunnY
Apr 2, 2003

marketingman posted:

Just want to say that I just did this and got pathetic performance on brand new hardware, and while I constantly run into people saying to do it this way, they all magically never seem to see my follow-up post about the poor performance...
Then you break out DTrace. Then you shoot yourself, because you're digging in DTrace at home in your free time.

LmaoTheKid posted:

It must be a REALLY lovely USB card, because aside from swap space and booting, you don't really use the card at all.
I don't think he's talking about root FS IO performance :laugh:

evil_bunnY
Apr 2, 2003

Longinus00 posted:

In-place file modifications can potentially scatter a file all over a device.
Yes, it's copy-on-write. Leave a bit of free space and never worry about it.
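
If you want to keep an eye on that, zpool will tell you how full the pool is (the pool name here is just an example); the usual rule of thumb is to stay under roughly 80%:

# name, size, allocated, free and percent used
zpool list -o name,size,alloc,free,cap tank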

evil_bunnY
Apr 2, 2003

The failure rate of disk drives at 5 years is 10%+. If you're using big arrays and identical drives (and thus similar failure patterns), you're living on borrowed time without RAID 6 or better (ideally something with end-to-end data integrity like RAID-Z2 on ZFS).

[Charts: RAID 5 vs RAID 6]
This is for a big (20-drive) array, but you get the point.

Also a shitton of media errors simply go undetected in traditional RAID setups.

I don't even want to talk about non-parity RAID levels.
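
Back-of-the-envelope, using the usual consumer spec of one unrecoverable read error per 10^14 bits (the spec-sheet number, not measured data): rebuilding a degraded RAID 5 of 2TB drives means reading every surviving drive end to end. Six surviving 2TB drives is ~12TB, about 9.6 x 10^13 bits, so you expect roughly 0.96 UREs during the rebuild, i.e. better-than-even odds of hitting one. RAID 6 / RAID-Z2 survives that single bad sector mid-rebuild; RAID 5 doesn't.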

evil_bunnY fucked around with this message at 12:37 on Apr 16, 2012

evil_bunnY
Apr 2, 2003

wang souffle posted:

Honestly, what is everyone using for backup of large amounts of data (10TB+) these days?
MD1000 filled with 2TB drives.

evil_bunnY
Apr 2, 2003

Yeah I only have that much storage because I can't stop shooting pictures.

evil_bunnY
Apr 2, 2003

nyoron posted:

Say if there's three machines, Workstation A, Workstation B, and Workstation C.
Someone on A wants to copy data from B to C via CIFS.

Would the data go through Workstation A, or would it go straight from B to C?
Through A. It wasn't that long ago that even moving stuff around on a remote host would route everything through the local machine.

evil_bunnY
Apr 2, 2003

IT Guy posted:

Is it a protocol thing or a Windows thing? Does NFS have the same behaviour?
SMB 1 introduced server-side data movement. I don't think NFS can do that yet.

e: NFS 4.2 introduces server-side copy.

evil_bunnY fucked around with this message at 13:39 on Apr 20, 2012

evil_bunnY
Apr 2, 2003

Residency Evil posted:

Out of curiosity, what do you need inside of a microserver to turn it into an HTPC? I'm currently using a Boxee to stream from the microserver, but less is always better.
Decoding capability. So either a CPU that doesn't suck or a hardware video decoder + OS support to accelerate decoding.

evil_bunnY
Apr 2, 2003

net use z: \\nasbox\sharename /persistent:yes

evil_bunnY
Apr 2, 2003

Star War Sex Parrot posted:

Right. drat near every NAS does AFP, but apparently nothing does Home Sharing. Oh well :(
Well, open protocols are easier to implement than "whatever Apple decided you want next".

evil_bunnY
Apr 2, 2003

D. Ebdrup posted:

Alright, I split the difference and set it up to run every 3 weeks (it takes about 16 hours to finish a scrub).
Weirdly, the best practices guide for ZFS suggests weekly for consumer-grade drives and monthly for datacenter-grade drives. Is there really that big of a difference, or is it just being conservative?
In practice there's very little difference in MTBF and media errors between the two, I think.
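
If you want the scrub automated, a cron entry along these lines works (the pool name is an example, and cron can't express "every 3 weeks" directly, so this one is monthly; adjust to taste):

# /etc/crontab: scrub the pool 'tank' at 03:00 on the 1st of each month
0 3 1 * * root /sbin/zpool scrub tank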

evil_bunnY
Apr 2, 2003

adorai posted:

generally the URE rate is 1/10th for enterprise drives.
Right, I think that's what the assumption was, and then it turned out not to be?
It's going to take me 20 min to find the quote again 8(
e: also, if you have evidence to the contrary, I'd love to see it. The numbers I see most often are 10^14 vs 10^15, but never any real-world (not extrapolated) evidence.

evil_bunnY fucked around with this message at 14:34 on Apr 23, 2012

evil_bunnY
Apr 2, 2003

Do you not have an extra connector to make it a hot spare?

evil_bunnY
Apr 2, 2003

movax posted:

Hm, so what niche is this exactly? An HTPC that's also its own media server, in an attractive enough case/quiet enough to leave in your living room?
The niche is me.

Or it would be if I didn't have an MD3000i.

evil_bunnY
Apr 2, 2003

Every time I think of what it'd take using legacy Linux software RAID tools, I cringe a little.

evil_bunnY
Apr 2, 2003

titaniumone posted:

4Gb link to my home fileserver, suckas :smug:
Still pushing 1Gb streams, though.

titaniumone posted:

I have multiple 1Gb clients which will occasionally be reading/writing at full load while other clients are attempting to stream HD video and it sucks for everyone involved. This solves that problem.
:hf:

evil_bunnY
Apr 2, 2003

crm posted:

The appeal of the "ever-growing storage thingy" is great. You can't really do this with ZFS, can you?
You can't add a drive to a raidz vdev, but you can add vdevs to an online pool.
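
Roughly what that looks like (pool and device names made up):

# add another raidz2 vdev of four disks to the existing pool 'tank'
zpool add tank raidz2 sdg sdh sdi sdj
# the extra capacity shows up immediately; existing data isn't rebalanced onto the new vdev
zpool list tank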


evil_bunnY
Apr 2, 2003

crm posted:

With RAID 6, can you expand the array one disk (or more?) at a time? How much trouble is it to do this?
Depends on the controller and driver support.
