|
Flashing Twelve posted:Hm, okay. Sounds like I might just pick up 3x 4TB WD Reds, 8TB storage should be more than enough. Maybe flog off the WD Greens to defray the cost. ZFS on Linux sounds like a good solution to me. Thanks for the help

Maybe take a look at SnapRAID then? It's a lot more flexible than RAID5/6 (it supports up to 6 parity 'drives'), requires no special hardware or software (it runs on standard disk partitions, and there are versions for both Linux and Windows), and it can actually detect and fix bit rot and recover deleted files. Most importantly, if you have more failures than you have parity, you still have the data on your non-failed drives, which can be read from any other computer. It even works on VMs and virtual disks.

SnapRAID: http://snapraid.sourceforge.net/
Elucidate, a Windows GUI that makes it somewhat easier to use: https://elucidate.codeplex.com/
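For a sense of how little setup is involved, here's a minimal snapraid.conf sketch; the mount points and disk names are made-up examples, not anything from a real install:

```
# Minimal snapraid.conf sketch -- paths and names are illustrative.
parity /mnt/parity1/snapraid.parity      # parity lives in a plain file on its own disk
content /var/snapraid/snapraid.content   # checksum/state list; keep multiple copies
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/                      # each data disk is just a normal partition
data d2 /mnt/disk2/
data d3 /mnt/disk3/
```

After that, `snapraid sync` updates parity and `snapraid scrub` re-reads the array to catch bit rot. Since it's snapshot-style parity, you run sync on a schedule (or after big changes) rather than having it always-on.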
|
# ¿ Apr 10, 2015 15:38 |
|
Edward IV posted:So even 5 disks plus a hotspare isn't advised for RAID-5? I'm thinking of buying a prebuilt NAS for downloads and media streaming with a working capacity between 8 and 12 TB, and it looks like 5-bay NASes aren't that common or cheap. 6-bays on the other hand are more common and would let me reach the same capacity with smaller, cheaper drives despite the increased drive count. For example, 6x 3 TB drives will cost me about $720 while 5x 4 TB drives cost about $800, for the same capacity with a hotspare. Of course there's also the price difference for the number of bays the NAS has. I suppose I could still get the same capacity with a hotspare in a 4-bay with 4x 6 TB drives, but those drives are quite expensive. Or should I forgo the hotspare?

The problem with RAID5/6 is the likelihood of running into an error DURING recovery, which increases with drive size. If you were making a RAID5 array out of 250gb disks you'd be much more in the clear, but that's not what you're looking to do. It also has a terrible disaster scenario: lose more disks at once than your parity covers, and EVERYTHING is gone due to the data striping. Some of the newer methods, like FlexRAID, SnapRAID, or Unraid, avoid this by not striping data, so in the worst-case scenario of multiple disk failures you still have the disks that did not fail. There is a performance loss, since striping speeds up reads, but most of the storage we're talking about here doesn't need to service multiple requests at once.
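To put a rough number on the rebuild risk, here's a back-of-envelope sketch assuming the common consumer-drive spec of one unrecoverable read error (URE) per 1e14 bits read; real-world rates vary a lot, so treat it as illustrative only:

```
# Chance of hitting at least one URE while reading 8 TB of surviving
# data during a RAID5 rebuild, at a spec'd rate of 1 URE per 1e14 bits.
awk 'BEGIN {
  bits   = 8 * 10^12 * 8                # 8 TB expressed in bits
  p_fail = 1 - exp(-bits * 1e-14)       # Poisson approximation
  printf "P(>=1 URE during rebuild) ~= %.0f%%\n", p_fail * 100
}'
```

That works out to roughly a coin flip (~47%), which is why a single disk of parity starts feeling thin once arrays get into the multi-TB range.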
|
# ¿ Apr 10, 2015 16:42 |
|
Problem with FlexRAID / Unraid is they want money... I've been running an Unraid server for a while now, but I'm thinking of moving my stuff over to SnapRAID, as it has some more features I like and isn't so picky about its environment. Also, depending on which files you stream, you can easily saturate your network capacity by reading files from different disks (movies on disk 1, TV shows on disk 2, cartoons on disk 3, etc). Skandranon fucked around with this message at 17:00 on Apr 10, 2015 |
# ¿ Apr 10, 2015 16:55 |
|
The Gunslinger posted:Without having access I can't really recommend anything else. If the drives are fine physically it's more likely that someone was farting around on the source and deleted the files, which in turn would cause BT Sync to remove them. A memory problem is not likely to leave you with an intact but blank array.

This is why I never really liked things like Drobo... it's good as long as it keeps working, but your disaster recovery is a nightmare.
|
# ¿ Apr 10, 2015 18:53 |
|
MMD3 posted:It really wouldn't be that bad at all if I'd had a legitimate up-to-date backup, the problem is I hadn't run a backup to another external drive in the better part of a year

Well, disaster recovery for RAID5/6 is already stressful enough, what with needing the same RAID card/enclosure, getting the drives in the right order, etc., and it's impossible to recover everything if you lose more drives than your parity protects. Also, for many of us here a full backup just isn't practical; these arrays store enormous media collections, and fully backing one up essentially requires a second, similarly sized array. In these situations you want more recovery options than standard RAID offers.
|
# ¿ Apr 10, 2015 21:08 |
|
MMD3 posted:ahhh, in my case we're talking about ~4TB across all of my media types. I could easily back that up to an external drive if I'd been diligent. My photography archive is probably only 1.5TB of that and it's the only thing that seems to have been deleted. I just have no idea how it was deleted.

For something like that, I wouldn't worry about assembling a complex array at all. I'd set it up on 2x4tb drives in RAID-1 and back those up online to CrashPlan Cloud or Backblaze every night. If one drive dies, just replace it and rebuild; if both die, either download everything again from the backup, or get them to mail you a hard drive to get back up and running quicker. You can also keep doing your own backup to an external drive as you already are (were?), but that becomes the 'faster to restore' option instead of the 'last best hope of recovery'. Skandranon fucked around with this message at 22:05 on Apr 10, 2015 |
# ¿ Apr 10, 2015 22:02 |
|
Junkiebev posted:
Q3: Unlikely. Most SATA RAID controller cards are PCIe 4x/8x, not 16x, so you'll be fine using that system for a while. Q4: I don't see why this would be a problem, unless you really want to connect large USB 3.0 hard drives and need them to copy at full speed. Booting from a USB 2.0 drive will depend more on the quality of the flash drive than on the bus's maximum speed. And optimizing your boot speed is a bad place to spend effort anyway... you should be booting that thing very infrequently. Q6: If you can get them for a good price, the HGSTs are very nice; Backblaze has some good articles on the failure rates of their drives. That said, the WD Reds have a fairly good reputation right now as well. I went with 4x4tb WD Reds for my latest HD purchase, but that was mainly because I couldn't easily get HGST drives here in Canada, for some strange reason. Skandranon fucked around with this message at 21:14 on Apr 11, 2015 |
# ¿ Apr 11, 2015 21:11 |
|
Junkiebev posted:Sorry for the question barrage and thank you for the help so far.

If you're only going to use this for your NAS, you don't really need the extra cores. If you want to virtualize the FreeNAS part and run other things alongside it, the extra cores would help, but if you really want to run a VM hypervisor you'll probably want something beefier than 8 Atom cores anyway, so you're probably best off with the cheaper one.
|
# ¿ Apr 12, 2015 00:11 |
|
Farmer Crack-rear end posted:The bolded part isn't always true. I moved my file server into a new case a couple of months ago, and accidentally got a couple of SATA cables swapped around; the Areca card still detected and worked with the array perfectly well.

It isn't always true, but when it is, it just adds to the frustration and anxiety, which is not what you want when doing disaster recovery. Especially if you're dealing with someone else's server and they didn't write any of this down from the start.
|
# ¿ Apr 13, 2015 15:56 |
|
savesthedayrocks posted:I checked out the other thread, are there any recommendations either way on brands to avoid or go with?

The Cyberpower Pure Sinewave ones get some pretty good reviews, but I don't think those are the ones on sale.
|
# ¿ Apr 14, 2015 02:31 |
|
Most of this talk about UPSes only really matters when the power goes out and the unit has to switch over to providing power. If your power never actually goes out, it's just a very expensive surge protector. A UPS can also condition the incoming power, within limits, so if you aren't getting a proper 110V it will correct that. Though if that's a chronic issue, you might want to get it properly addressed instead of relying on UPSes to paper over it forever.
|
# ¿ Apr 14, 2015 16:30 |
|
This article, http://www.cepro.com/article/the_myth_of_whole_house_surge_protection/, seems to indicate they're largely useless against the kind of surges (from within the house) being described here.
|
# ¿ Apr 14, 2015 21:30 |
|
Don Lapre posted:This is like, the worst written article I've seen in 14 days.

That'll show me to trust the internet. Thanks Obama.
|
# ¿ Apr 14, 2015 22:13 |
|
deimos posted:The problem with PFC and square waves has to do with the cross voltage at the boost converter capacitor banks; usually these are specced for 400V to accommodate 230VAC, but square-wave 230VAC UPSes will have cross voltages in excess of 400V.

Sounds like witchcraft to me...
|
# ¿ Apr 15, 2015 21:03 |
|
MC Fruit Stripe posted:That's exactly what I expected, thank you

Perhaps you could just do a simple 2x4tb mirror (RAID-1) inside your desktop? Windows 7 and 8 can do that easily, with no additional hardware. That gives you one drive of redundancy, which protects against data loss between backups. Also, enclosure hard drives tend to be the lower-quality ones, so going external sets you up for more drive failures. A mirror is also very simple to recover from, unlike RAID5/6.
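If you'd rather script it than click through Disk Management, here's a hedged diskpart sketch; the disk and volume numbers are examples only, so check yours with "list disk" / "list volume" first, and back up before converting anything:

```
rem mirror.txt -- run from an admin prompt with: diskpart /s mirror.txt
rem Disk and volume numbers below are placeholders.
rem Both disks must be dynamic before a mirror can be added.
select disk 0
convert dynamic
select disk 1
convert dynamic
rem Select the simple volume holding your data, then mirror it onto disk 1.
select volume 2
add disk=1
```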
|
# ¿ Apr 16, 2015 15:59 |
|
MC Fruit Stripe posted:I'm the worst person because I know exactly what you mean, but I also didn't do it, and I know that's very goon in a well. Thing is, I thought about my particular situation - I've got all the space I need. I'm not running out of room for years and years. So in my particular case, if I take all this empty space and move some files around - instead of 2 drives that are 35% full, make it 1 that is 70% and free one up. I worked it out and basically with nothing more than a $150 5tb external from Seagate, and some file movement, I can back up everything I'd ever possibly want backed up. Good luck and godspeed. May your drives live forever.
|
# ¿ Apr 17, 2015 04:26 |
|
Not really, they expect the drive to have actually failed before they'll honor a warranty. However, if you do something like powering the drive up and then shaking it pretty hard, you'll probably make the heads hit the platters and ruin it. Be careful with this though: the gyroscopic force from the spinning platters is non-trivial and could make you drop the drive, and they'll probably notice a large dent and refuse the RMA.
|
# ¿ Apr 18, 2015 19:25 |
|
Can also try running the drive with some powerful magnets stuck to the outside. Or just put it in a box and have it wiped 24/7 until it fails. Or do all 3.
|
# ¿ Apr 18, 2015 21:10 |
|
Someone was asking about how to RMA a HD before it actually died naturally.
|
# ¿ Apr 18, 2015 22:08 |
|
Bob Morales posted:We used to put things in the microwave for a couple seconds

That usually causes sparks and burns on the PCB. Someone posted earlier that Seagate is usually pretty chill about it, so perhaps we're fretting over nothing. I'm swapping 4 of them out soon and will see about RMAing them all at once, then either sell them on eBay or keep them around as spares.
|
# ¿ Apr 19, 2015 23:24 |
|
Shaocaholica posted:Umm, so how are you supposed to deal with a NAS hardware failure? Not a drive failure but an actual hardware failure of the NAS? Can you pop the drives out and read them in some other device? What if it was in a raid?

The answer to this is pretty much "replace the hardware and hope for the best". Some NAS/RAID cards handle this well, others not so much. This is my main objection to NAS-type devices: you're tied to the specific hardware implementation, which makes recovery of the system as a whole more difficult unless you've invested in spares. I'm working on moving my media to a SnapRAID array, which has a number of features that make recovering from a hardware failure much easier, and also make it easy to move the drives to a new system instead of copying everything over the network from one array to another.
1) Data is stored on standard NTFS partitions, so a disk can be moved to any other system and read directly. (The same applies with EXT4 if you're using Linux.)
2) The config is a single, easy-to-back-up .cfg file that's transferable to other systems.
3) It supports multiple parity drives (up to 6).
4) If more drives fail at once than parity can protect, you can still read the data directly from the drives that did not fail.
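The drive-replacement case is only a couple of commands, too. A hedged sketch of rebuilding a dead data disk (the disk name d2 and the log path are placeholders):

```
# Swap in a blank disk, format it, and mount it at the dead disk's old path.
# Then have SnapRAID rebuild that disk's contents from parity:
snapraid -d d2 -l fix.log fix
# Verify the recovered files against the stored checksums:
snapraid -d d2 check
```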
|
# ¿ Apr 20, 2015 21:05 |
|
With just 4 drives, there are really only 2 basic configs that make sense: a pair of RAID-1 mirrors, or a RAID-5. With 3tb drives that gives you 6tb or 9tb of usable space, respectively. The mirrors have a much better recovery procedure and faster writes (no parity to compute). If you're new to this and don't have significant storage requirements, I'd go with the RAID-1.
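On Linux, for example, the mirror option is a one-liner with mdadm; a hedged sketch with placeholder device names (this destroys whatever is on those disks):

```
# Create a 2-disk mirror from two blank drives, then put a filesystem on it.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage
```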
|
# ¿ Apr 21, 2015 02:02 |
|
eightysixed posted:I've been thinking about this. How can I move my Xpenology from 4x1TB to 2x4TB or whatever? Is that not possible without moving the data to another source, re-rolling everything, and then copying the data back? Because....

Copying to another source and re-rolling everything is the safest route, though the most expensive as well. I believe ZFS lets you gradually swap in larger drives, but you don't get the additional space until they've ALL been upgraded. Some of the newer, non-standard RAID solutions, like Drobo or Unraid, let you expand storage gradually. SnapRAID is perhaps the easiest to expand: you simply add more drives to the pool and re-sync. I completely rebuilt my first SnapRAID array from 4x4tb to 4x4tb + 5x3tb last night, and most of the effort was moving the parts to a new case and installing Windows Server 2012; the array config itself only took a few minutes.
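For the SnapRAID case, "add more drives and re-sync" really is the whole procedure; a hedged sketch with made-up paths and names:

```
# 1. Mount the new disk, then add one line to snapraid.conf:
#      data d5 /mnt/disk5/
# 2. Recompute parity so the new disk is covered:
snapraid sync
```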
|
# ¿ Apr 21, 2015 19:58 |
|
Feenix posted:I've got a Red 4tb drive I purchased about a month ago. I like it. I hear they are tops! Sounds more like something to do with it being self-powered (powered via USB). The enclosure itself could be somewhat aggressive about powering it down.
|
# ¿ Apr 21, 2015 20:15 |
|
Shaocaholica posted:Just looking for any recs given I like the WD EX2. What are you planning on doing with it, beyond a 4 disk array? What is it you need more than 2 USB ports for?
|
# ¿ Apr 27, 2015 01:29 |
|
Use NTFS and live with the transfer rate, if Windows is your priority. It would be worse to try using EXT or HFS.
|
# ¿ Apr 27, 2015 05:32 |
|
I think, at least for the backup drive, it needs to be easily accessible when plugged into other machines.
|
# ¿ Apr 27, 2015 14:51 |
|
Megaman posted:I am slowly cycling out a ton of 1TB disks for 4TB disks in my 3 RAIDZ arrays, but now I have tons of extra 1TB disks lying around doing nothing that I can't find a use for. What does everyone do with their extra disks? I'm trying not to be wasteful. I have about 15 extra at this point.

I'm building a backup array with my old 2tb and 1.5tb disks, and I've been using them off and on in Windows RAID1 arrays to provide temp space for my desktops and servers. Maybe take yours and build a test Unraid/SnapRAID box; if you go with SnapRAID, you can make a pile of old disks less scary by using 2-3 parity disks. Selling old drives is tricky though: even if they wipe fine they could still have issues, and if you ship them you don't know whether they picked those issues up in transit. That means lots of complaints from customers, so you either sell them with the caveat of "PROBABLY WILL NOT WORK" and take a huge price cut (making it even less worth the time spent wiping them), or take your chances with people bitching, shipping disks back, and wanting refunds. I plan to wear all my disks down to nubs to avoid the human element, then take them apart for cool magnets and coasters. Skandranon fucked around with this message at 19:35 on Apr 27, 2015 |
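If you do go the wear-them-down route, something like this scrubs and stresses an old disk at the same time. Hedged sketch: /dev/sdX is a placeholder you must triple-check, since both commands destroy everything on the target:

```
# Destructive 4-pattern write/verify pass over the whole disk:
badblocks -wsv /dev/sdX
# Or a simpler single-pass zero wipe:
dd if=/dev/zero of=/dev/sdX bs=1M status=progress
```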
# ¿ Apr 27, 2015 19:33 |
|
Bob Morales posted:https://www.youtube.com/watch?v=apFJBhqUv_E It would be easier to use a hammer...
|
# ¿ Apr 27, 2015 22:17 |
|
Laserface posted:Thoughts?

I'd buy some new 2tb or 3tb NAS drives and restore onto those; 1tb drives are pretty small these days for an array. If you've been fine with 4x1tb = 3tb of usable space until now, 4x3tb = 9tb will keep you going for a good long time, and the drives will be useful in a future system, provided they don't all die.
|
# ¿ Apr 28, 2015 17:08 |
|
Laserface posted:I'm trying to keep cost down, so I was either going to re-use drives or buy 2x 2TB and either stripe until I require more space (with a regular external backup) or run in a mirror (since I only have around 2.2TB of data at the moment)

Understood, but don't discount the value of your time. Also, try to avoid an upgrade that will be obsolete too soon. If you don't need 4 drives now, go with 2x3tb in RAID-1; you can then add more 3tb drives later, which gives you an upgrade path to 4x3tb without mucking about with mixed drive sizes. You want enough breathing room that 3 months from now you aren't migrating from a 2tb RAID-1 to either bigger drives or a RAID-5 on the 2tb disks. Migrations are always somewhat stressful, take time, and pose some risk of data loss, so it's worth spending a bit up front to minimize or avoid them.
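If the box ends up running Linux with mdadm, that upgrade path can even be done in place later with no copy-off. A hedged sketch (device names are examples, reshapes take hours, and you want a verified backup before touching anything):

```
# Convert the existing 2-disk RAID-1 to RAID-5, then grow onto a 3rd drive.
mdadm --grow /dev/md0 --level=5           # a 2-disk RAID-5 is layout-compatible
mdadm --add /dev/md0 /dev/sdd             # the new drive joins as a spare
mdadm --grow /dev/md0 --raid-devices=3    # reshape spreads data over all 3
```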
|
# ¿ Apr 29, 2015 00:07 |
|
I'd try to keep the storage separate from the video playback box; it simplifies things and lets each machine specialize.
|
# ¿ May 2, 2015 00:07 |
|
Tiger.Bomb posted:Yeah, I understand the reasoning, but then I need two computers.

You don't want a motherboard/CPU/RAM with issues behind a storage array; it could end up corrupting your data. I'd use the crashing machine for video playback and something else for the NAS.
|
# ¿ May 2, 2015 01:58 |
|
Laserface posted:I've been using lovely seagate barracudas (the 1Tb ones from like 6 years ago that seem to just randomly die or get SMART errors after a few months) with no issue. One even has a smart error that a customer returned to me, and I stuck it in my NAS and it's worked fine since.

There's no increased risk in using Black drives over Reds; you'll just spend more money for a negligible difference in performance. Also, why are you even keen on putting together an array? You seem fine with the idea of losing it all. You could just throw 1-2 4tb drives into your current desktop and play back from there.
|
# ¿ May 2, 2015 05:40 |
|
Tiger.Bomb posted:Weird. I was recommended the same PC in the HTPC thread. I will check it out.

Keep in mind that case has very little room for expansion, and if you have plans to grow, it's a lot cheaper to add more hard drives than to replace them with bigger ones.
|
# ¿ May 3, 2015 03:51 |
|
skooma512 posted:Are 3TBs still rear end when it comes to reliability? That's what Backblaze says anyway.

Depends which 3TBs. Backblaze has put out a few more articles that elaborate on the topic. The 3TB WD Reds are good, as are the Hitachis; it's the Seagate DM001 3TBs that are terrible, supposedly because they were built with cheap parts after the Thailand flooding knocked out HD production a few years back.
|
# ¿ May 6, 2015 21:40 |
|
necrobobsledder posted:I'm cheaping out and am looking at some Toshiba drives instead. RAID6 / RAIDZ2 is for drives being unreliable, right?

Dual parity is more about the fact that, with very large drives, the chances of a second failure during a rebuild go up, so you need more parity just to be able to rebuild reliably; with single parity, two failed drives and it all falls apart. If you want to be cheap, I'd rather have good drives in RAID5 than crap drives in RAID6. You don't ever actually want to have to rebuild your array; the parity is there for the worst-case scenario.
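For reference, the dual-parity setup itself is cheap to create on ZFS; a hedged example with placeholder device and pool names:

```
# 6-disk RAIDZ2 pool: any 2 drives can die without data loss.
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
zpool status tank     # confirm layout and health
```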
|
# ¿ May 7, 2015 23:29 |
|
In my 2 servers, I keep a smaller mirrored array alongside the main array: one is 2x3tb, the other 2x1.5tb. When moving data around like this, I prefer to copy the data onto the mirror, rebuild the new array from scratch, then copy back; that way the data always sits on at least one redundancy-protected array. However, that requires more hardware than you have available. How much data do you have? If you can tolerate a little risk, you could copy your data to a single backup drive, then onto your new array. The correct answer to your question is "get more drives to move the data with", but I suspect you can't/won't do that, and all other solutions entail some risk of data loss.
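If you do risk the single-drive shuffle, rsync at least makes the copy steps repeatable and verifiable; a hedged sketch with made-up paths:

```
# Stage everything onto the spare drive, preserving attributes and hardlinks:
rsync -aHv --progress /mnt/oldarray/ /mnt/spare/
# ...destroy and rebuild the array, copy back, then do a paranoid
# checksum-only comparison pass (-c checksums, -n dry-run):
rsync -aHv --progress /mnt/spare/ /mnt/newarray/
rsync -aHcnv /mnt/spare/ /mnt/newarray/
```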
|
# ¿ May 8, 2015 18:45 |
|
I've never been a fan of the more complex RAID strategies, like ZFS, Drobo, or even RAID5. If you lose more than your parity, either the entire array is gone or recovery is unimaginably difficult. The same goes for most NAS appliances: if the hardware itself fails, you're in the sticky spot of replacing the NAS with the exact same model and hoping it recognizes the array. I prefer either RAID1, where recovery is dead simple (use the drive that still works), or things like Unraid/SnapRAID. Even if you blow past your parity protection, you still have all the drives that survived, and they can be read from any system that supports the partition type (Unraid uses ReiserFS, SnapRAID uses NTFS/EXT4). You could even read them through a USB enclosure if you wanted.
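That last point is worth stressing: a surviving SnapRAID data disk is just a normal filesystem. A hedged example of reading one on a rescue machine (the device name is a placeholder):

```
# Plug the disk into any Linux box (SATA or USB enclosure) and mount it:
mount -t ext4 /dev/sdb1 /mnt/rescue
ls /mnt/rescue        # your files, readable with no array metadata needed
```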
|
# ¿ May 9, 2015 16:48 |
|
DNova posted:Recovery is actually pretty easy: replace dead hardware, create new array, restore from backup.

That's not recovering your data, that's starting from scratch. It also doesn't work if your array is larger than your capacity to back it up.
|
# ¿ May 9, 2015 17:08 |