Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Flashing Twelve posted:

Hm, okay. Sounds like I might just pick up 3x 4TB WD Reds, 8TB storage should be more than enough. Maybe flog off the WD Greens to defray the cost. ZFS on Linux sounds like a good solution to me. Thanks for the help :)

Maybe take a look at SnapRAID then? It's a lot more flexible than RAID5/6 (it can have up to 6 parity 'drives'), requires no special hardware or software (it runs on standard disk partitions, and there are versions for both Linux and Windows), can actually detect and fix bit-rot and deleted files, and most importantly, if you have more failures than you have parity, you still have the data on your non-failed drives, which can be read from any other computer. It even works on VMs and virtual disks.

SnapRAID
http://snapraid.sourceforge.net/

Windows GUI to make it somewhat easier to use
https://elucidate.codeplex.com/

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Edward IV posted:

So even 5 disks plus a hotspare isn't advised for RAID-5? I'm thinking of buying a prebuilt NAS for downloads and media streaming with a working capacity between 8 and 12 TB, and it looks like 5-bay NASes aren't that common or cheap. 6-bays, on the other hand, are more common and would let me reach the same capacity with smaller, cheaper drives despite the increased number of drives. For example, 6x 3 TB drives will cost me about $720 while 5x 4 TB drives cost about $800, giving me the same capacity with a hotspare. Of course there's also the price difference for the number of bays the NAS has. I suppose I could still get the same capacity with a hotspare in a 4-bay with 4x 6 TB drives, but those drives are quite expensive. Or should I forego the hotspare?

The problem with RAID5/6 is the likelihood of running into an error DURING recovery, which increases with drive size. If you want to make a RAID5 array with 250gb disks you're much more in the clear, but that's not what you're looking to do. It also has a terrible disaster scenario: if you lose 2 of 3 disks at once, EVERYTHING is gone due to the data striping. Some of the newer methods, like FlexRaid, SnapRaid, or Unraid, avoid this by not striping data, so in the worst-case scenario of multiple disk failures, you still have the disks that did not fail. There is a performance loss, as data striping aids in reading data faster, but most of the storage we're talking about here does not need to service multiple requests at once.
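To put a rough number on that "error during recovery" risk, here's a back-of-envelope sketch. The numbers are my own illustration, assuming the commonly quoted consumer-drive rate of one unrecoverable read error (URE) per 1e14 bits; real drives vary:

```python
# Chance of hitting at least one unrecoverable read error (URE) while
# reading the surviving drives back during a RAID5 rebuild.
# Assumes a URE rate of 1 per 1e14 bits (a common consumer-drive spec).

def rebuild_ure_probability(array_tb, ure_rate_bits=1e14):
    """P(at least one URE) when reading array_tb terabytes sequentially."""
    bits_read = array_tb * 1e12 * 8        # TB -> bits
    p_clean_bit = 1.0 - 1.0 / ure_rate_bits
    return 1.0 - p_clean_bit ** bits_read  # complement of "every bit reads clean"

# Rebuilding over 3x 250gb survivors is fairly safe...
small = rebuild_ure_probability(0.75)
# ...but over 3x 4tb survivors it's closing in on a coin flip.
large = rebuild_ure_probability(12.0)
print(small, large)
```

That's the whole argument for avoiding large striped RAID5 arrays in one function: the failure odds scale with the total bits you must read back perfectly.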

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
Problem with FlexRaid / Unraid is they want money... I've been running an Unraid server for a while now, but I'm thinking of moving my stuff over to SnapRaid, as it has some more features I like and isn't so picky about its environment.

Also, depending on which files you stream, you can easily serve enough simultaneous streams to fill your network capacity by reading files from different disks (movies on disk 1, tv shows on disk 2, cartoons on disk 3, etc)

Skandranon fucked around with this message at 17:00 on Apr 10, 2015

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

The Gunslinger posted:

Without having access I can't really recommend anything else. If the drives are fine physically, it's more likely that someone was farting around on the source and deleted the files, which in turn would cause BT Sync to remove them. A memory problem is not likely to leave you with an intact but blank array.

This is why I never really liked things like Drobo... it's good as long as it keeps working, but your disaster recovery is a nightmare.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

MMD3 posted:

It really wouldn't be that bad at all if I'd had a legitimate up-to-date backup, the problem is I hadn't run a backup to another external drive in the better part of a year :cry:

I don't know how this disaster recovery on my Synology is any more difficult than a straight-up external drive other than the fact that it may take longer to rebuild since it was in RAID.

Well, disaster recovery for RAID5/6 is already stressful enough (needing the same RAID card/enclosure, setting drives in the right order, etc), and it's impossible to recover all your data if you lose more drives than your parity protects. Also, for many of us here, a full backup is not possible: these arrays store enormous media collections, and fully backing them up essentially requires a second, similarly sized array. In these situations, you want more recovery options than standard RAID offers.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

MMD3 posted:

ahhh, in my case we're talking about ~4TB across all of my media types. I can easily back that up to an external drive if I'd been diligent. My photography archive is probably only 1.5TB of that and it's the only thing that seems to have been deleted. I just have no idea how it was deleted.

For something like that, I wouldn't worry too much about assembling a complex array at all. I'd set it up on 2x 4tb drives in RAID-1 and back those up online to CrashPlan Cloud or Backblaze every night. If one drive dies, just replace it and rebuild; if both die, either download everything again from backup, or get them to mail you a HD to get back up and running quicker. You can also keep doing your own backup to an external drive as you already are (were?), but that can just be the 'faster to download' option, instead of the 'last best hope of recovery'.

Skandranon fucked around with this message at 22:05 on Apr 10, 2015

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Junkiebev posted:


Several Questions:

Q1: How does this setup look?
Q2: Why do people run their FreeNAS instances off of USB drives and should I do so?
Q3: It only has PCI-e 8x - is that likely to bite me in the rear end at some point?
Q4: It doesn't have USB 3.0 - is that likely to bite me in the rear end at some point?
Q5: Intel is about to release their 14nm process processors so should I wait until that happens to pull the trigger?
Q6: HGST NAS drives or WD Reds?

Q3: Unlikely. Most SATA RAID controller cards are 4/8x, not 16x, so you'll be fine using that system for a while.
Q4: I don't see why this would be a problem, unless you really want to connect large USB 3.0 hard drives and need them to copy at full speed. Booting from a 2.0 drive will depend more on the quality of the flash drive than on the max bus speed of USB 2.0. And optimizing your boot speed is wasted effort anyway... you should be booting that thing very infrequently.
Q6: If you can get them for a good price, the HGSTs are very nice. Backblaze has some good articles on the failure rates of their drives. That being said, the WD Reds have a fairly good rep as well right now. I went with 4x4tb WD Reds for my latest HD purchase, but that was mainly because I couldn't easily get HGST drives here in Canada, for some strange reason.

Skandranon fucked around with this message at 21:14 on Apr 11, 2015

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Junkiebev posted:

Sorry for the question barrage and thank you for the help so far.

Between these 2 motherboards

http://www.asrockrack.com/general/productdetail.asp?Model=C2550D4I#Specifications [4 core Avoton]

http://www.asrockrack.com/general/productdetail.asp?Model=C2750D4I#Specifications [8 core Avoton]

I'm seeing a $160 premium for:

More cores [4/8]
More L1 Cache [128KB/256KB]
More L2 Cache [2MB/4MB]
More Wattage [14W/20W] - Who cares, that is like 2-3 money units per year
A worse additional SATA controller [Marvel SE9220/SE9172]

I'm leaning towards the (cheaper) C2550D4I - Is my thinking flawed?

If you are only going to use this for your NAS, you don't really need the extra cores. If you might want to virtualize the FreeNAS part and run other things on it as well, the extra cores would help. But if you really want to run a VM hypervisor, you'll probably want something beefier than 8 Atom cores anyway, so you're probably best off with the cheaper one.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Farmer Crack-rear end posted:

The bolded part isn't always true. I moved my file server into a new case a couple of months ago, and accidentally got a couple of SATA cables swapped around; the Areca card still detected and worked with the array perfectly well.

It isn't always true, but when it is true it just adds to the frustration and anxiety, which is not what you want when doing disaster recovery. Especially if you are dealing with someone else's server and they didn't write all this down from the start.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

savesthedayrocks posted:

I checked out the other thread, is there any recommendations either way on brands to avoid or go with?

The Cyberpower Pure Sinewave ones get some pretty good reviews, but I don't think those are the ones on sale.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
Most of this talk about UPSes only really matters when the power goes out and the unit has to switch to providing power. If your power never actually goes out, it's just a very expensive surge protector. It can also condition the power coming in, within limits, so if you aren't getting a proper 110V, it will fix that. Though if that's a chronic issue, you might want to get it properly addressed instead of relying on UPSes to fix it forever.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
http://www.cepro.com/article/the_myth_of_whole_house_surge_protection/ seems to indicate they are largely useless against the kind of surges (from within the house) that are being described.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Don Lapre posted:

This is like, the worst written article ive seen in 14 days.

edit: looks like it's written by a guy who sells power line conditioners and he has similar articles posted all over the internets.

That'll show me to trust the internet. Thanks Obama.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

deimos posted:

The problem with PFC and square waves has to do with the cross voltage at the boost converter capacitor banks; usually these are specced for 400V to accommodate 230VAC, but square-wave 230VAC UPSes will have cross voltages in excess of 400V.

A 120VAC UPS will not have this issue because they have closer to 300V cross voltages (usually 168V and -168V are the voltages they use, unless they have more steps).

There's a ton of discussion on the subject here: http://www.jonnyguru.com/forums/showthread.php?t=3964 and the general consensus is 120VAC is fine.


e: I guess I should've said you're not tripping overcurrent but overvoltage protection.

Sounds like witchcraft to me...

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

MC Fruit Stripe posted:

That's exactly what I expected, thank you :)

That said, I'm kinda talking myself out of it. I don't know, my thing is, I have all the storage I need. I just want to stop losing data to hard drive failures. Have an external drive, it fails. Have another, it fails. So I want an enclosure. Except they're (comparatively) expensive and again, I have as much storage space as I want. I'm not sure my problem isn't solved with a simple external drive or two dedicated to backups. I mean, that's solving my problem with external drives by adding more external drives, but still, it'd be a lot easier to justify buying an enclosure and populating it, if I needed storage AND backups. I only need backups.

Perhaps you could just do a simple 2x 4tb mirror (RAID1) inside your desktop? Windows 7 & 8 can do that easily, with no additional hardware. That gives you 1 drive of redundancy, which protects against any data loss between backups. Also, enclosure hard drives tend to be the lower quality ones, so you're setting yourself up for more drive failures. A mirror is also very simple to recover from, unlike RAID5/6.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

MC Fruit Stripe posted:

I'm the worst person because I know exactly what you mean, but I also didn't do it, and I know that's very goon in a well. Thing is, I thought about my particular situation - I've got all the space I need. I'm not running out of room for years and years. So in my particular case, if I take all this empty space and move some files around - instead of 2 drives that are 35% full, make it 1 that is 70% and free one up. I worked it out and basically with nothing more than a $150 5tb external from Seagate, and some file movement, I can back up everything I'd ever possibly want backed up.

I feel like my situation would have been much different if I wanted to add storage - if I was looking to add 8tb but also ensure that it was redundant, enclosure all the way. But in my spot, where if anything I have too much room, and just need to start backing things up, I'm just going to go with a simple external drive and call it a day.

Anyway I just wanted to follow up and say thanks, all the advice was great of course (and it's hilarious given what I do for a living that I know nothing about the consumer NAS space), but I'm the goon in a well and a simple robocopy to an external drive probably takes care of me.

Good luck and godspeed. May your drives live forever.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
Not really, they expect drives to have actually failed before they'll honor a warranty. However, if you do something like powering the drive up and then shaking it pretty hard, you'll probably make the heads hit the platters and ruin it. Be careful with this though: the gyroscopic force from the spinning platters is non-trivial and could make you drop the drive, and they'll probably notice a large dent and not honor the warranty.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
Can also try running the drive with some powerful magnets stuck to the outside.

Or just put it in a box and have it wiped 24/7 until it fails.

Or do all 3.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
Someone was asking about how to RMA a HD before it actually died naturally.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Bob Morales posted:

We used to put things in the microwave for a couple seconds

That usually causes sparks and burns on the PCB. Someone posted earlier that Seagate is usually pretty chill about it, so perhaps we are fretting over nothing. I'm swapping 4 of them out soon and will see about RMAing them all at once, then either sell them on eBay or keep them around as spares.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Shaocaholica posted:

Umm, so how are you supposed to deal with a NAS hardware failure? Not a drive failure but an actual hardware failure of the NAS? Can you pop the drives out and read them in some other device? What if it was in a raid?

The answer to this is pretty much "replace the hardware and hope for the best". Some NAS/RAID cards handle this well, others not so much.

This is my main objection to using such NAS-type devices: you're tied to the specific hardware implementation, making recovery of the system as a whole more difficult unless you've invested in spares. I'm working on moving my media to a Snapraid array, which has a number of features that make recovering from hardware failure much easier. It also makes it easier to move the drives around to new systems, instead of having to copy all the data over the network from one array to another.

1) Data is stored on standard NTFS partitions, so drives can be moved to any other system and read directly. The same applies with EXT4 if you're using Linux.
2) Config is stored in an easy-to-back-up, readable/modifiable .cfg file, which is transferable to other systems.
3) Supports multiple parity drives (up to 6).
4) If more drives fail at once than parity can protect, you can still read data directly from the drives that did not fail.
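For the curious, a minimal SnapRAID config looks something like this. The paths here are hypothetical examples; the parity/2-parity, content, and data directives are SnapRAID's actual ones:

```
# Two parity files on dedicated drives (up to 6-parity is supported)
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.parity

# Content files hold the array metadata; keep copies on several disks
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content

# The data drives, each a plain NTFS/EXT4 partition readable anywhere
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```

You then run `snapraid sync` after adding files, `snapraid scrub` periodically to catch bit-rot, and `snapraid fix` to recover after a failure.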

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
With just 4 drives, there are really only 2 basic configs that make any sense: a pair of RAID-1 mirrors, or a RAID-5. Those will give you 6tb or 9tb, respectively. The mirrors have a much better recovery procedure, and faster read/write. If you're new to this and don't have significant storage requirements, I'd go with the RAID-1.
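The arithmetic behind those two numbers, as a quick sketch (assuming 4x 3tb drives, which is where 6tb and 9tb come from):

```python
# Usable space for 4 drives: mirrored pairs vs. single-parity RAID5.

def usable_tb(drive_tb, n_drives, layout):
    if layout == "raid1_pairs":
        # Mirrors store every byte twice, so half the raw capacity
        return drive_tb * n_drives / 2
    if layout == "raid5":
        # One drive's worth of capacity goes to parity
        return drive_tb * (n_drives - 1)
    raise ValueError(layout)

print(usable_tb(3, 4, "raid1_pairs"))  # two mirrored pairs
print(usable_tb(3, 4, "raid5"))        # more space, harder recovery
```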

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

eightysixed posted:

I've been thinking about this. How can I move my Xpenology from 4x1TB to 2x4TB or whatever? Is that not possible without moving the data to another source, re-rolling everything, and then copying the data back? Because.... :effort:

Copying to another source and re-rolling everything is the safest route, though most expensive as well. I think ZFS allows you to gradually add larger drives, but you don't get the additional space until they've ALL been upgraded. Some of the newer, non-standard RAID solutions allow you to expand your storage gradually, like Drobo or Unraid. Snapraid is perhaps the easiest to expand, as you simply add more drives to the pool and re-sync. I completely rebuilt my 1st SnapRaid array from 4x4tb to a 4x4tb + 5x3tb array last night, and most of the effort was moving the parts to a new case & installing Windows Server 2012. The config of the array only took a few minutes.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Feenix posted:

I've got a Red 4tb drive I purchased about a month ago. I like it. I hear they are tops!

I bought a Sorbent lay-flat USB3.0 enclosure for it. It's self-powered. I have it connected to my iMac.

It's not the end of the world or anything but I've found it to need "waking up" a lot more than any other drive I think I've ever owned. Sometimes I click on the HDD icon on my desktop and i hear it spin up and it takes 5 seconds before it shows me the window. Once or twice I was watching some media that lives on the Red and it paused for a few seconds and I heard it spin up then continued just fine.

In my Energy Saver in System preferences, I think things are set ok. I don't have "Put Hard Disks to Sleep when possible". set.

Not sure if there's another setting I should be looking for... if it's the drive, or maybe somehow the enclosure?

Thoughts or advice? :)

Sounds more like something to do with it being self-powered (powered via USB). The enclosure itself could be somewhat aggressive about powering it down.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Shaocaholica posted:

Just looking for any recs given I like the WD EX2.

What are you planning on doing with it, beyond a 4 disk array? What is it you need more than 2 USB ports for?

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
Use NTFS and live with the transfer rate, if Windows is your priority. It would be worse to try using EXT or HFS.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
I think, at least for the backup drive, it needs to be easily accessible when plugged into other machines.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Megaman posted:

I am slowly cycling out a ton of 1TB disks for 4TB disks in my 3 RAIDZ arrays, but now I have tons of extra 1TB disks lying around doing nothing that I can't find a use for. What does everyone do their extra disks? I'm trying not to be wasteful. I have about 15 extra at this point.

I'm building a backup array with my old 2tb and 1.5tb disks. I've been using them off and on in Windows RAID1 arrays to provide temp space for my desktops or servers. Maybe take them and make a test Unraid/Snapraid. If you go with Snapraid, you can make it less crap by using 2-3 parity disks.

Selling old drives is tricky though; even if they wipe fine they could still have issues, and if you ship them you don't know whether shipping caused those issues. That means complaints from buyers, so you either sell them with the caveat of "PROBABLY WILL NOT WORK" and take a huge price cut (making it even less worth the time to wipe them), or take your chances with people bitching, shipped-back disks, and refunds.

I plan to wear all my disks down to nubs to avoid the human element, then take them apart for cool magnets and coasters.

Skandranon fucked around with this message at 19:35 on Apr 27, 2015

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Bob Morales posted:

https://www.youtube.com/watch?v=apFJBhqUv_E

Just smashed some old drives we had piling up

It would be easier to use a hammer...

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

I'd buy some new 2tb or 3tb NAS drives and restore onto those. 1tb drives are pretty small these days for an array. If you've been fine with 4x1tb = 3tb of space until now, 4x3tb = 9tb of space will keep you going for a good long time, and the drives will be useful in a future system, provided they don't all die.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Laserface posted:

Im trying to keep cost down, so I was either going to re-use drives or buy 2x 2TB and either stripe until I require more space (with a regular external backup) or run in a Mirror (since I only have around 2.2TB of data at the moment)

Im going to try and borrow the PSU from our netgear at work and get mine going enough to do a more recent backup while I wait for my new box to arrive.

Understood, but don't discount the value of your time in the matter. Also, try to avoid an upgrade that will be obsolete too soon. If you don't need 4 drives now, go with 2x3tb in RAID-1, and then you can add more 3tb drives later on. This gives you an upgrade path to 4x3tb without mucking about with different-sized drives. You want enough breathing room that in 3 months you aren't having to migrate from a 2tb RAID-1 to either some 3tb drives or RAID-5 with the 2tb drives. Migrations are always stressful to some degree, take time, and pose some risk of data loss, so it's worth spending a bit upfront to minimize or avoid them.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
I'd try to keep the storage separate from the video playout, it simplifies things and allows you to specialize.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Tiger.Bomb posted:

Yeah, I understand the reasoning, but then I need two computers.

It's workable, though. My smart tv does have a plex app but it's slow as hell, and I could use my crashing computer as a front end, just move the storage into the nas.

I'd want a nas that I can run sickbeard, plex server, sabnzbd, etc, from. Is that reasonable? If so, any suggestions in the ~$600 range (I already have HDDs). I saw this which might be in a similar range.

You don't want a motherboard/CPU/RAM with issues for a storage array, could end up corrupting your data. I'd use the crashing one for video playback and something else for the NAS.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Laserface posted:

I've been using lovely seagate barracudas (the 1Tb ones from like 6 years ago that seem to just randomly die or get SMART errors after a few months) with no issue. One even has a smart error that a customer returned to me, and I stuck it in my NAS and it's worked fine since.

So if I stand to gain speed by using blacks over reds I'm cool with the risk since I backup anyway.

There is no increased risk in using Black drives over Red, you'll just spend more money for a negligible difference in performance. Also, why are you even keen on putting together an array? You seem fine with the idea of losing it all. You could just throw 1-2 4tb drives into your current desktop and do playback from there.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Tiger.Bomb posted:

Weird I was recommended the same PC in the HTPC thread. I will check it out.

Keep in mind that case has very little room for expansion, and if you have plans to expand, it's a lot cheaper to be able to add more hard drives than upgrading the size of your drives.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

skooma512 posted:

Are 3TBs still rear end when it comes to reliablity? That's what Backblaze says anyway.

Depends which 3TBs. They've put out a few more articles that elaborate on the topic. The 3TB WD Reds are good, as well as the Hitachis. It's the Seagate DM001 3TBs that are terrible, supposedly because they were built with cheap parts after the Thailand floods wiped out HD production a few years back.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

necrobobsledder posted:

I'm cheaping out and am looking at some Toshiba drives instead. RAID6 / RAIDZ2 is for drives being unreliable, right?

Dual parity is more due to the fact that, with very large drives, the chances of another failure during a rebuild go up, so you need more parity to even reliably rebuild at all. With single parity, two drives fail and it all falls apart. If you want to be cheap, I'd rather have good drives in RAID5 than crap drives in RAID6. You don't ever actually want to have to rebuild your array; it's there for the worst-case scenario.
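As a toy model of why the second parity drive matters (my own illustrative numbers, and it assumes drive failures are independent, which they aren't in practice, so treat it as a sketch):

```python
# Probability an array loses data when each of n drives independently
# fails with probability p during the rebuild window, for a given
# number of parity drives. Simple binomial model.
from math import comb

def p_data_loss(n_drives, p_fail, parity):
    """P(more than `parity` of the drives fail)."""
    p_ok = sum(comb(n_drives, k) * p_fail**k * (1 - p_fail)**(n_drives - k)
               for k in range(parity + 1))
    return 1 - p_ok

# e.g. 8 drives, each with a 5% chance of dying under rebuild stress
single = p_data_loss(8, 0.05, 1)  # single parity: 2 failures = gone
dual = p_data_loss(8, 0.05, 2)    # dual parity: takes 3 failures
print(single, dual)
```

With these made-up inputs the second parity drive cuts the loss probability by roughly a factor of ten, which is the whole argument in one number.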

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
In my 2 servers, I have a smaller mirrored array alongside the main array. One is 2x3tb, the other 2x1.5tb. When moving data around like that, I prefer to copy data there, rebuild the new array from scratch, then copy back to the array. That always keeps the data in at least one redundancy-protected array. However, this requires more hardware than you have available. How much data do you have? If you can tolerate some risk, you could copy your data to a single backup drive, then onto your new array.

The correct answer to your question is 'get more drives to move data with', but I suspect you can't/won't do this. All other solutions entail some degree of risk of data loss.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me
I've never been a fan of some of the more complex RAID strategies, like ZFS, Drobo, or even RAID5. If you lose more than your parity, either you lose the entire array or recovery is unimaginably difficult. The same goes for most NAS appliances: if the hardware itself fails, you're again in a sticky spot where you replace the NAS with exactly the same model and hope it will recognize the array.

I prefer either RAID1, as recovery is dead simple (use the drive that still works), or things like Unraid/Snapraid. If you blow through your parity protection, you still have all the drives that work, and they can be read from any system that supports the partition type (Unraid uses ReiserFS, Snapraid uses NTFS/EXT4). You could even read them via a USB enclosure if you wanted.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

DNova posted:

Recovery is actually pretty easy: replace dead hardware, create new array, restore from backup.

That's not recovering your data, that's starting from scratch. It doesn't work if your array is larger than your capacity to back up.
