|
Mewcenary posted:Also, there seem to be a LOT of posts from people just building their own setups. Any particular reason why this is recommended? Does it really work out a lot cheaper / smaller / less noisy than the 'black box' NAS options? necrobobsledder posted:Because I could probably always use a small, power sipping box with space for 6 drives, I'll probably keep this for my 24/7 NAS and a big huge NAS that I turn on with Wake On LAN when I need some particular file in archives. evil_bunnY fucked around with this message at 17:05 on Nov 17, 2011 |
# ¿ Nov 17, 2011 16:25 |
|
|
Residency Evil posted:The main appeal of something new would be the (hopefully) decreased power draw, smaller size, and peace of mind. An ideal solution would be something like Drobo or the original WHS, which have solutions for redundant storage that seem very appealing. The old WHS probably won't run on this computer and I really can't justify paying for a networked drobo. This: http://www.amazon.com/Data-Robotics-FireWire-Storage-DR04DD10/dp/B001CZ9ZEE/ref=sr_1_1?ie=UTF8&qid=1321567278&sr=8-1 is tempting, but I imagine connecting it to my linux system would be pretty miserable.
|
# ¿ Nov 18, 2011 14:15 |
|
devmd01 posted:I went with a massive file server with nine drives because the only thing I had to pay for was the case and rma shipping for the psu and a couple of hard drives, and Indiana has some of the cheapest municipal electrical rates in the country. Decommissioned company-built whitebox IP video security servers.
|
# ¿ Nov 18, 2011 14:22 |
|
Hamburglar posted:Where do you add hdparm -S242 /dev/sdX in a boot script? Or does that only apply if using Linux or something? (meaning I will have hdd spindown in Windows 7?) sdX should really be sda/sdb or whatever your storage drives are called. Also spindown is for treehuggers. evil_bunnY fucked around with this message at 16:36 on Dec 6, 2011 |
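To sketch an answer to the boot-script part: on Linux one common place is /etc/rc.local (the device names below are examples; check yours with lsblk first):

```shell
# /etc/rc.local (or a systemd unit / udev rule on newer distros)
# -S 242 means spin down after 1 hour idle: values 241-251 map to
# (n - 240) * 30 minutes, so 242 = 60 minutes.
# Point these at your actual data drives, not the OS drive.
hdparm -S 242 /dev/sdb
hdparm -S 242 /dev/sdc
```

On Windows 7 you don't use hdparm at all; the equivalent knob is Power Options, "Turn off hard disk after".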
# ¿ Dec 6, 2011 16:31 |
|
IOwnCalculus posted:a VM with four cores on that i5 2400 and 12GB assigned to it (though I may just up it to 16GB for shits and grins). Also 12GB is kinda funny: I serve 25TB with 4.
|
# ¿ Jan 25, 2012 01:15 |
|
Star War Sex Parrot posted:I wish they'd post a follow-up to that study. It's five years old and based on drives that are even older. Drive manufacturers have done a lot to both the hardware and firmware of hard drives to extend their lifespan and I'm curious if there are tangible results. quote:We monitor the temperature of every drive in our datacenter through the standard SMART interface, and we’ve observed in the past three years that: 1) hard drives in pods in the top of racks run three degrees warmer on average than pods in the lower shelves; 2) drives in the center of the pod run five degrees warmer than those on the perimeter; 3) pods do not need all six fans—the drives maintain the recommended operating temperature with as few as two fans; and 4) heat doesn’t correlate with drive failure (at least in the ranges seen in storage pods).
|
# ¿ Jan 25, 2012 12:55 |
|
BlackMK4 posted:Maybe the wrong thread, but is there a USB device that will connect to a server via VGA/DVI and USB or PS2 connectors so you can manage a headless server without a monitor or keyboard/mouse in the event of ethernet going down?
|
# ¿ Feb 10, 2012 11:29 |
|
error1 posted:I'm a bit annoyed that I didn't buy a motherboard that supports VT-d. After some of my 2TB disks started crashing I decided to rebuild my fileserver, and it seems ESXi 5.0 runs fine on it. But since I'm missing VT-d I can't pass through the SATA controllers.
|
# ¿ Feb 13, 2012 10:52 |
|
Odette posted:Why 3 different services? Wouldn't one suffice?
|
# ¿ Mar 4, 2012 23:25 |
|
Any hardware reseller should have brackets for less than $10
|
# ¿ Mar 27, 2012 18:25 |
|
iSCSI is a block protocol, so if you're going to mount it on n>1 machines you'll need a concurrent-access filesystem, which NTFS isn't. Share your NAS volumes using a file protocol (SMB/CIFS, NFS, AFS, you name it) instead; the server process then takes care of locking and so on.
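For illustration, a minimal sketch of the file-protocol route (the /tank/media path, subnet, and share name are made up; adapt to your box). The point is that the server daemon, not each client, arbitrates access:

```shell
# Publish a directory over NFS read-write to the LAN; the NFS
# server (lockd/NFSv4) handles locking for every client.
exportfs -o rw,sync 192.168.1.0/24:/tank/media

# Or share the same directory over SMB/CIFS via Samba:
net usershare add media /tank/media "" Everyone:F
```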
|
# ¿ Mar 27, 2012 18:56 |
|
fletcher posted:But after 10 years, even if there are no errors, should the drives be replaced?
|
# ¿ Mar 28, 2012 22:42 |
|
Are you sure the read media can do faster than that?
|
# ¿ Mar 29, 2012 11:40 |
|
I think the only major annoyance is the lack of kernel CIFS.
|
# ¿ Mar 31, 2012 20:45 |
|
FISHMANPET posted:I was super excited about in kernel CIFS, but at least for the home environment, it's a pain in the rear end. I keep using it out of some moral obligation or something, but gently caress I hate it. Maybe it works really well in an enterprise (AD) environment, but it's pretty bad without.
|
# ¿ Mar 31, 2012 23:25 |
|
I don't think I'd choose Linux for a storage box. ZFS on BSD or solaris/illumos tends to be much less of a headache. It'll be interesting to see where BTRFS ends up a couple of years down the line.
|
# ¿ Apr 4, 2012 16:09 |
|
DEAD MAN'S SHOE posted:What kind of headaches?
|
# ¿ Apr 5, 2012 10:37 |
|
iTunes server probably keeps track of changes on the NAS.
|
# ¿ Apr 7, 2012 19:43 |
|
IT Guy posted:Which is other people having the same issue. They believe it's the CPU maxing out on the encryption of the SSH protocol. Viktor posted:eliminating SSH will not correct the problem the CPU is pegged out with the checksums from rsync.
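If the box really is checksum-bound, two things worth trying (hostnames and paths below are hypothetical): make sure nothing is forcing -c, and skip the delta algorithm on a LAN, where whole-file transfer is usually cheaper than computing rolling block checksums:

```shell
# -a  archive mode: decides what changed by size+mtime. Do NOT add
#     -c, which forces a full checksum of every file on both ends.
# -W  --whole-file: skip rsync's rolling block checksums, which are
#     pure CPU overhead when the network is faster than the CPU.
rsync -aW /tank/media/ backupbox:/tank/media/
```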
|
# ¿ Apr 9, 2012 23:01 |
|
If you open SMB/CIFS to a routed network you deserve everything you'll get.
|
# ¿ Apr 11, 2012 10:52 |
|
titaniumone posted:I saw someone mention earlier that they had a ZFS file server with multiple NICs so that they wouldn't be bottlenecked by gigabit ethernet.
|
# ¿ Apr 11, 2012 22:08 |
|
movax posted:24GB of RAM and 8 threads, solely for file-serving at the moment. IT Guy posted:True, unless you LAGG your endpoint as well evil_bunnY fucked around with this message at 16:39 on Apr 12, 2012 |
# ¿ Apr 12, 2012 16:32 |
|
marketingman posted:Just want to say that I just did this and got pathetic performance on brand new hardware, and while I constantly run into people saying to do it this way they all magically never seem to see my follow up post about the poor performance... LmaoTheKid posted:It must be a REALLY lovely USB card, because aside from swap space and booting, you don't really use the card at all.
|
# ¿ Apr 13, 2012 16:08 |
|
Longinus00 posted:in place file modifications can potentially scatter a file all over a device.
|
# ¿ Apr 15, 2012 17:56 |
|
The failure rate of disk drives at 5 years is 10%+. If you're using big arrays of identical drives (and thus similar failure patterns), you're living on borrowed time without RAID 6 or better (ideally something with end-to-end data integrity like RAIDZ2 on ZFS). [RAID 5 and RAID 6 rebuild-failure charts] This is on a big (20-drive) array, but you get the point. Also, a shitton of media errors simply go undetected in traditional RAID setups. I don't even want to talk about non-parity RAID levels. evil_bunnY fucked around with this message at 12:37 on Apr 16, 2012
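A quick sanity check on that 10% figure, assuming the failures are independent (which is optimistic for same-batch drives, so reality is worse):

```shell
# Probability that at least one drive in a 20-disk array fails
# within 5 years, given a 10% per-drive 5-year failure rate.
awk 'BEGIN { printf "P(>=1 failure) = %.2f\n", 1 - 0.9^20 }'
```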
# ¿ Apr 16, 2012 12:30 |
|
wang souffle posted:Honestly, what is everyone using for backup of large amounts of data (10TB+) these days?
|
# ¿ Apr 17, 2012 01:48 |
|
Yeah I only have that much storage because I can't stop shooting pictures.
|
# ¿ Apr 17, 2012 20:15 |
|
nyoron posted:Say if there's three machines, Workstation A, Workstation B, and Workstation C.
|
# ¿ Apr 20, 2012 09:33 |
|
IT Guy posted:Is it a protocol thing or a Windows thing? Does NFS have the same behaviour? e: NFS 4.2 introduces server-side copy. evil_bunnY fucked around with this message at 13:39 on Apr 20, 2012 |
# ¿ Apr 20, 2012 13:31 |
|
Residency Evil posted:Out of curiosity, what do you need inside of a microserver to turn it into a HTPC? I'm currently using a boxee to stream from the microserver, but less is always better.
|
# ¿ Apr 20, 2012 15:06 |
|
net use z: \\nasbox\sharename /persistent:yes
|
# ¿ Apr 21, 2012 02:35 |
|
Star War Sex Parrot posted:Right. drat near every NAS does AFP, but apparently nothing does Home Sharing. Oh well
|
# ¿ Apr 22, 2012 09:22 |
|
D. Ebdrup posted:Alright, I split the difference and set it up to run every 3 weeks (it takes about 16 hours to finish a scrub).
|
# ¿ Apr 23, 2012 12:27 |
|
adorai posted:generally the URE rate is 1/10th for enterprise drives. It's going to take me 20 min to find the quote again 8( e: also if you have evidence to the contrary, I'd love to see it. The numbers I see most often are 10^14 vs 10^15, but never any real-world (not extrapolated) evidence. evil_bunnY fucked around with this message at 14:34 on Apr 23, 2012
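For what it's worth, the practical gap between those two quoted specs can be sketched with a back-of-envelope Poisson estimate. This assumes the spec-sheet URE rates actually hold, a 12 TB rebuild read (an arbitrary example size), and independent errors:

```shell
# Chance of hitting at least one unrecoverable read error while
# reading 12 TB (= 12e12 * 8 bits) during a rebuild, at a URE rate
# of 1 per 10^14 bits (consumer spec) vs 1 per 10^15 (enterprise).
awk 'BEGIN {
  bits = 12e12 * 8
  printf "1e14 (consumer):   %.2f\n", 1 - exp(-bits / 1e14)
  printf "1e15 (enterprise): %.2f\n", 1 - exp(-bits / 1e15)
}'
```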
# ¿ Apr 23, 2012 14:16 |
|
Do you not have an extra connector to make it a hot spare?
|
# ¿ Apr 25, 2012 21:19 |
|
movax posted:Hm, so what niche is this exactly? A HTPC that's also its own media server in an attractive enough case/quiet enough to leave in your living room? Or it would be if I didn't have an MD3000i.
|
# ¿ May 1, 2012 00:31 |
|
Every time I think of what it'd take using legacy Linux software raid tools I cringe a little.
|
# ¿ May 3, 2012 02:30 |
|
titaniumone posted:4Gb link to my home fileserver, suckas titaniumone posted:I have multiple 1Gb clients which will occasionally be reading/writing at full load while other clients are attempting to stream HD video and it sucks for everyone involved. This solves that problem.
|
# ¿ May 3, 2012 19:54 |
|
crm posted:The appeal of the "ever growing storage thingy" is great. You can't really do this with ZFS, can you?
|
# ¿ May 7, 2012 16:32 |
|
|
crm posted:With Raid 6, can you expand the array one disk (or more?) at a time? How much trouble is it to do this?
|
# ¿ May 7, 2012 19:00 |