|
Good: Snagged a Norco 4020 chassis last night for $100 on Craigslist, and thanks largely to 8087-to-SATA fanout cables, it wasn't too bad to cable up even though it doesn't have proper SAS backplanes. Bad: I got the whole thing built and cabled and only then realized the seller didn't include the screws for the drives, which need to be flathead at minimum and possibly with even smaller heads than usual. Ordered some off of Amazon with same day delivery in hopes that they'll work, but I'm not confident. Worst case, if they don't work (and can't be made to work with some creative fabricobbling) the actual Norco screws are only about $12 shipped, they just won't be here for a while.
|
# ? Jul 14, 2016 18:18 |
|
|
I think my Constellations are going after 17k+ hours. Zpool status alerted me to errors on one disk, and smartctl spits out 12k reallocated sectors for /dev/sdb, plus unrecoverable read/write errors on two more disks, albeit with no errors or reallocated sectors logged on those. Since I'm a little disappointed at having to write down €600+ of disks in under two years, what are some hard disk recommendations from the fine folks in this thread I can use to replace my current ones? For reference, I currently have 4x 3TB Constellation CSs hooked up to a flashed M1015. Will the controller take 4TB (or even 5TB) drives? I'm almost out of space too, so this is a decent time for me to think about upgrading.
|
# ? Jul 16, 2016 12:59 |
|
Hambilderberglar posted:I think my constellations are going after 17k+ hours. I won't personally buy anything but Hitachi NAS 4TB or above.
|
# ? Jul 16, 2016 14:32 |
|
M1015 controllers are popular because they are roughly the cheapest controllers that will correctly map disks larger than 4 TB. I run a mixture of hard disk manufacturers and never buy them in batches of more than 3, to spread out mishandling risks: 2 Western Digital 4 TB Reds, 6 Toshiba 4 TB disk drives. On my backup array I have several Western Digital 2 TB Green drives and 4 2 TB Samsung disks (all had their firmware or boot settings modified to minimize head parking and other nonsense that's bad and kills drives early). My 2 TB based array has been running 24x7 for... 4 years now, and aside from 3 within-warranty failures spread across them, I've had no problems. So yeah, I have 30 TB usable on 16 hard disks..... yeah....
|
# ? Jul 16, 2016 14:38 |
|
redeyes posted:I won't personally buy anything but Hitachi NAS 4TB or above. I know n=1 doesn't count, but how long have you had yours?
|
# ? Jul 16, 2016 15:59 |
|
I'm working on getting my entire music library onto my WD Mycloud PR4100. For complicated reasons (@#$@#$@#$ Olive Musica) what I have is a tarball that will need to be extracted on the far side. For anybody else with a Mycloud, you can SSH into a Unix shell on the machine; I believe it's running Debian Wheezy. To do this, you have to turn on ssh, which is a hard-to-find setting. Details are here. At that point, you can walk around the file system, run commands, and in general treat it like any other Linux box.
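If it helps anyone, the extraction step is just a tar invocation once you're SSHed in. The paths below are stand-ins so the example is self-contained; your actual share layout on the PR4100 will differ:

```shell
# Stand-in paths for the demo -- on the real NAS, cd into your share instead.
# Build a tiny tarball first so this runs anywhere:
mkdir -p /tmp/mycloud-demo/src /tmp/mycloud-demo/Music
echo "fake track" > /tmp/mycloud-demo/src/song.flac
tar -czf /tmp/mycloud-demo/music.tar.gz -C /tmp/mycloud-demo/src .

# The part you'd actually run on the NAS: extract into the target share.
tar -xzf /tmp/mycloud-demo/music.tar.gz -C /tmp/mycloud-demo/Music
ls /tmp/mycloud-demo/Music
```

The -C flag saves you from cd'ing around, and GNU tar's -z handles the gzip layer.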
|
# ? Jul 16, 2016 20:01 |
|
Hambilderberglar posted:I know n=1 doesn't count, but how long have you had yours? Let's see, about 2.5 years. No bad sectors and no errors, and I do surface scans every few months. I am using 8 of them, but I have sold another 10ish and no failures as of yet. But really I recommended those drives because of Backblaze's stats: https://www.backblaze.com/blog/hard-drive-reliability-stats-q1-2016/
|
# ? Jul 16, 2016 20:33 |
|
Yeah, I'm probably going to spend the extra coin for HGST next time I need drives. The Reds have been solid, but those HGST Backblaze numbers are unreal. In Norco news, despite at least one Amazon review to the contrary, the Supermicro screws for their hot swap bays are small enough to clear the Norco 4020 as well. Edit: also, the noise level is significant, but not nearly as loud as I expected. The 80mm fans are not quiet, especially since mine all have some bearing noise, but they are waaaay quieter than the 40mm screamers you'll find in a 1U rack mount. IOwnCalculus fucked around with this message at 00:29 on Jul 17, 2016 |
# ? Jul 16, 2016 21:57 |
|
Again, personal anecdotes aren't worth a whole lot but that said I've had nothing but great success with Hitachi NAS drives. I've got three 2TBs from 2011 still rocking hard and 7 4TBs that are 2014-2016. They've all been run nonstop and have been migrated from a Drobo to a Rackable SE3016 hooked up to FreeNAS and now to a Lenovo SA120. The 2TBs have been migrated to another site and are being used as a replication target. When I start replacing the 4TBs with 6TB units then the 2TBs will be phased out. All in all, very impressed.
|
# ? Jul 17, 2016 00:42 |
|
I've been running the whole sabnzbd/sonarr/couchpotato/plex thing forever now on my PC using a Windows Storage Space JBOD array with a few random hard drives. One finally died, I lost everything, and now I'm slowly rebuilding my library. Losing all your poo poo is a pretty loving terrible experience, so I've been looking into getting a NAS.

Previously I had about 7TB worth of junk, so a two drive solution doesn't seem great as I don't have much room for inevitable expansion. I was looking at the Synology product lines, and the DS1515+ seems like it'll fit the bill nicely as I can just get two 8TB drives to start with and slowly add more as I need them. Problem is, the DS1815+ is only $150 more, so from a forward-looking standpoint it seems silly not to get more expansion slots considering the external expansion bay for Synology NASes is $500.

I keep getting cold feet on pulling the trigger on this because the DS1815+ and 2x WD Red 8TB drives is going to run me $1,500ish, which feels like a lot to have a redundant Plex library and to offload the stuff my PC is already doing. Right now I'm down to a single 6TB WD Green drive in my PC, so I suppose my other option is to buy two more of those and do a Windows Storage Space with parity to have redundancy and only be spending around $500 on drives using existing hardware I have. My concern with that is, I have no idea how Storage Spaces work if I need to format my PC for some reason, and I'd have the potential to packrat enough poo poo that moving it to something else is going to be a real hassle... Which makes the dedicated NAS thing way more appealing, as I feel like I can just throw a DS1815+ in my basement and forget about it until I need more space.

If I ever manage to fill it up with 8x 8TB drives, presumably that'd be so far in the future that other storage solutions would have come along? Is buying a DS1815+ dumb for my uses?
|
# ? Jul 18, 2016 05:54 |
|
io_burn posted:I've been running the whole sabnzbd/sonarr/couchpotato/plex thing forever now on my PC using a Windows Storage Space JBOD array with a few random hard drives. One finally died, I lost everything, and now I'm slowly rebuilding my library. Losing all your poo poo is a pretty loving terrible experience, so I've been looking into getting a NAS. Previously I had about 7TB worth of junk, so a two drive solution doesn't seem great as I don't have much room for inevitable expansion. I was looking at the Synology product lines, and the DS1515+ seems like it'll fit the bill nicely as I can just get two 8TB drives to start with and slowly add more as I need them. Problem is, the DS1815+ is only $150 more, so from a forward-looking standpoint it seems silly to not get more expansion slots considering the external expansion bay for Synology NAS's is $500. If you've been happy with how everything has been running on your windows box and you just need some parity then instead of storage spaces take a look at DrivePool. It's pretty intuitive and will even let you set up different parity levels for different folders, only limited by the number of drives you have. I've been using it for about a year and it's been rock solid.
|
# ? Jul 18, 2016 06:10 |
|
Any good recommendations for a low power motherboard/cpu combo? I'd like something that is like the appliances from Qnap/Synology where they consume ~35W under load and normally barely anything. Not much luck finding decent N3150/J1900 motherboards with more than 2 SATA3 ports.
|
# ? Jul 18, 2016 07:20 |
|
Krailor posted:If you've been happy with how everything has been running on your windows box and you just need some parity then instead of storage spaces take a look at DrivePool. It's pretty intuitive and will even let you set up different parity levels for different folders, only limited by the number of drives you have. I've been using it for about a year and it's been rock solid. Totally agree. DrivePool f'n rocks. I think the main dev used to be on Microsoft's HomeServer team.
|
# ? Jul 18, 2016 15:56 |
|
priznat posted:Any good recommendations for a low power motherboard/cpu combo? I'd like something that is like the appliances from Qnap/Synology where they consume ~35W under load and normally barely anything. Your drives are going to consume more power than your mobo / CPU, and I'd suggest that the simpler option might be to find a B150/H110/H170 motherboard you like and put a Pentium G4400 or similar on it.
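For a rough sense of scale (the wattages below are ballpark assumptions, not measurements of any particular hardware):

```shell
# Ballpark figures in watts -- assumptions for illustration only.
DRIVE_IDLE_W=4        # a typical 3.5" drive, spinning but idle
BOARD_CPU_IDLE_W=10   # low-end desktop board + Pentium-class CPU at idle

n_drives=4
total=$(( n_drives * DRIVE_IDLE_W + BOARD_CPU_IDLE_W ))
echo "${total}W at idle"   # 26W -- the drives already dominate at 4 disks
```

By the time you're at 6-8 drives, shaving a few watts off the board barely registers.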
|
# ? Jul 18, 2016 16:01 |
|
I had a storage spaces box that poo poo the bed after only one of the drives died too - with mirrored spaces, at that.
|
# ? Jul 18, 2016 16:40 |
|
Quick question that hopefully belongs in this thread and not tech support. I recently bought some Seagate 4TB drives to set up a raid for non-critical storage, but after I had it up and working I was periodically hearing a strange 'chirping' noise from one of the drives, so I broke the RAID and set all the drives up as separate drives so I have now isolated which drive it is. Symptom: Periodic 'chirp' sound, usually quite brief, but on occasion can be half a second or so in duration, sort of like chirp then brief fingernails on a chalkboard. The interval that it happens can be 15-20 minutes apart, or many hours. Some days I won't hear it at all. Drive - ST4000DM000 The question - Is this a 'drive is going to fail' issue, or is it nothing to worry about? Drive performance seems fine other than this, and drive temperature is the same as the other drives, typically 32c-36c.
|
# ? Jul 19, 2016 07:54 |
|
That is a noise I absolutely return drives for. Maybe it'll work for years and be ok but I'm not going to find out.
|
# ? Jul 19, 2016 07:56 |
|
phosdex posted:That is a noise I absolutely return drives for. Maybe it'll work for years and be ok but I'm not going to find out. Cool, that's sort of what I needed to know. I'll exchange it. Thank you.
|
# ? Jul 19, 2016 07:58 |
|
New drives arrived! One question: I'm using ZFS on Linux (Gentoo). What's the proper way to replace a disk in a ZFS pool when I have no extra drive slots available to me? The zpool replace syntax looks to be something like zpool replace <pool> /dev/sdb [new-device], but having no slot to plug the new drive into, it doesn't seem sensible for me to specify a new device? Since the new disks are also a terabyte larger, I want to replace them all one by one.
|
# ? Jul 20, 2016 10:46 |
Hambilderberglar posted:New drives arrived! One question, I'm using ZFS on linux (gentoo), what's the proper way to replace a disk in a ZFS pool when I have no extra drive slots available to me? Zpool replace syntax looks to be something like zpool replace /dev/sdb [new-device] but having no slot to plug it into, it doesn't seem sensible for me to specify a new device? Since the new disks are also a terabyte larger, I want to replace them all one by one. I believe you can find the serial number of the disk (via 'smartctl -a <device>'), use 'zpool offline <pool> <device>' on the disk, shut down the server, replace the disk (that's why you want to find the serial number), start up the server, replace the disk with 'zpool replace <pool> <olddevice> <newdevice>', and then wait for it to resilver. Do any of you people happen to know whether a Xeon E3-1235L V5 can make use of the iGPU for QSV transcoding on a motherboard that has no video outputs beside the one provided by the BMC, like a Supermicro X11SSH-F?
|
|
# ? Jul 20, 2016 14:59 |
|
The Locator posted:Cool, thats sort of what I needed to know. I'll exchange it. Thank you. Seconding, I'd exchange that in a heartbeat. D. Ebdrup posted:I believe you can find the serial number of the disk (via 'smartctl -a <device>'), use 'zpool offline <pool> <device>' on the disk, shut down the server, replace the disk (that's why you want to find the serial number), start up the server, replace the disk with 'zpool replace <pool> <olddevice> <newdevice>', and then wait for it to resilver. This is the proper way to do it. However, if a drive drops completely for any reason (you pulled it, it goes completely dead, etc) ZFS will show that "device" as some long string of numbers, probably followed by (was /dev/sdX). It won't treat a new drive in that same /dev/sdX position as a pool member until you add / replace it.
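Putting the two posts above together as a command sequence (the pool name "tank" and the device nodes are placeholders, and this is a sketch of the procedure rather than a drop-in script):

```shell
# Placeholders throughout -- adjust pool and device names to your system.
smartctl -a /dev/sdb | grep -i serial   # note the serial so you pull the right disk
zpool offline tank /dev/sdb             # take the old disk out of the pool
# ...shut down, physically swap the disk, boot back up...
zpool replace tank /dev/sdb /dev/sdb    # same slot usually comes back as the same node
zpool status tank                       # watch the resilver progress
```

If the drive dropped dead rather than being offlined cleanly, substitute the long numeric GUID from zpool status for the old device name in the replace command.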
|
# ? Jul 20, 2016 15:28 |
|
gently caress my life, didn't pay attention and got loving coolspin (5900rpm) drives. But I'm too lazy to return them. As long as they don't fail within 3 years I'll be okay. Thanks Ebdrup and IOC for confirming my ZFS stuff. Hambilderberglar fucked around with this message at 17:30 on Jul 20, 2016 |
# ? Jul 20, 2016 17:14 |
|
Feh, I can still saturate gigabit on reads all day long with 9x Reds in RAIDZ2, and I think those are 5900RPM as well.
|
# ? Jul 20, 2016 17:44 |
|
It does depend on number of drives in your VDEV, but yeah, your HDDs are very unlikely to be the bottleneck in a GbE network. My ZFS fileserver drives are a mix of Samsung F4 2TBs (5400), WD Red 3TBs (5400), and HGST NAS 6TB (7200).
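Quick back-of-envelope to show why (the per-drive figure is an assumed conservative sequential rate, not a benchmark of any specific model):

```shell
# Rough numbers only -- per-drive rate is a conservative assumption.
GBE_MBPS=125          # gigabit line rate: 1000 Mbit / 8 = ~125 MB/s
DRIVE_SEQ_MBPS=100    # conservative sequential read for a 5400/5900rpm disk

# RAIDZ2 sequential reads stripe across the (n - 2) data disks:
n_drives=9
ceiling=$(( (n_drives - 2) * DRIVE_SEQ_MBPS ))
echo "~${ceiling} MB/s from the vdev vs ${GBE_MBPS} MB/s on the wire"
```

Even with pessimistic drive numbers, the array ceiling is several times what GbE can carry, so spindle speed mostly matters for seek-heavy loads, not bulk streaming.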
|
# ? Jul 20, 2016 18:09 |
|
Watermelon Daiquiri posted:It went well! Since I wanted to use it to stream steam games as well (no sense buying an extra 100+ machine when I can just stick it on this since everything is low use and I want to use ds4 controllers), that limited me to only a few possible linux distributions. I just stuck ubuntu on there and installed it as lightweight as possible, getting samba, plex, and steam all set up, using zpool to set up my drives. Oh good to hear! Out of curiosity, how did you set up your zpool? The last of my parts came this week and I finally finished building today (first build so took me forever and made me feel incredibly dumb in the process). I managed to get FreeNAS up and running from a USB flash drive and then stalled when I got to trying to create a zpool. Honestly, after spending the last 4 hours or so reading about zpools, vdevs and the various permutations of RAIDZ and alternatives, I think I'm more confused than when I started. Right now, I've only got a pair of WD Red 3TB drives to work with for storage (I did get a separate SSD to run the plex jail). As I accumulate more stuff, I will probably add more drives but at the moment I've probably only got around 1-1.5 TB of data, so I didn't want to load up on more than 2 drives to start. Based on this post and this other post it sounds like mirrored stripes (or is it striped mirrors?! and are these the same as mirrored vdevs?!) would be the way to go. Am I just going down completely the wrong path?
|
# ? Jul 22, 2016 16:18 |
|
With 2 drives your only real choice is a mirror, this is a mirrored vdev. You can then later add 2 more drives of the same size to become a striped mirror. You can continue adding pairs of drives to this.
|
# ? Jul 22, 2016 16:49 |
|
I'm running out of space on the 6 drive RAIDZ2 NAS I set up, but upgrading means buying a ton of 4tb drives...
|
# ? Jul 22, 2016 18:40 |
|
Leng posted:Right now, I've only got a pair of WD Red 3TB drives to work with for storage (I did get a separate SSD to run the plex jail). As I accumulate more stuff, I will probably add more drives but at the moment I've probably only got around 1-1.5 TB of data, so I didn't want to load up on more than 2 drives to start. If you have two disks and they are mirrored, you end up with 3tb. If one of the two disks fails, your data is recoverable. If you have four disks, and they are striped, you end up with 12tb. If any one disk fails, your data is gone. If you have four disks, you can also do the following: two mirrored disks striped with two more mirrored disks. You end up with 6tb, using 4 drives. One disk in each mirrored vdev can fail at the same time and your data will still be okay (iirc). https://en.wikipedia.org/wiki/Nested_RAID_levels This has flashier graphs if you're a visual person. Unless you like to live dangerously, a mirror is the only sensible thing to do with a two disk configuration. With mdadm you could create a software raid with "missing" disks, but I don't know if I'd use that for anything I care about myself.
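Sanity-checking those capacities in a couple of lines (3tb drives assumed throughout, raw capacity, ignoring filesystem overhead):

```shell
# Usable capacity in TB for the layouts described above.
# Assumes identical 3 TB drives; ignores formatting/overhead.
DRIVE_TB=3

mirror_tb=$DRIVE_TB                     # 2-way mirror: every disk holds the same copy
stripe4_tb=$(( 4 * DRIVE_TB ))          # 4-disk stripe: all capacity, no redundancy
striped_mirrors_tb=$(( 2 * DRIVE_TB ))  # 2 mirrored pairs: each pair adds one disk's worth

echo "mirror=${mirror_tb} stripe=${stripe4_tb} striped-mirrors=${striped_mirrors_tb}"
```

Each additional mirrored pair you stripe in adds another DRIVE_TB of usable space, which is why the pair-at-a-time expansion path works so nicely.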
|
# ? Jul 22, 2016 18:44 |
|
To be honest, considering the limitations of ZFS VDEVs in terms of expansion, the answer to "how should I set up 2 drives" really should be "get a third drive and set it up as RAID-Z".
|
# ? Jul 22, 2016 23:50 |
|
phosdex posted:With 2 drives your only real choice is a mirror, this is a mirrored vdev. You can then later add 2 more drives of the same size to become a striped mirror. You can continue adding pairs of drives to this. Thanks for the confirmation, this sounds like the setup that would work the best for me. Hambilderberglar posted:Unless you like to live dangerously, a mirror is the only sensible thing to do with a two disk configuration. With mdadm you could create a software raid with "missing" disks, but I don't know if I'd use that for anything I care about myself. Your explanation plus the snazzy diagrams finally made it click - I don't mind living a little dangerously now and then but not when it comes to data. Thanks for taking the time to go through it. Now, to finish building!
|
# ? Jul 22, 2016 23:51 |
|
Oops, I killed a drive earlier. No great loss - it was only a 320GB Hitachi drive from 2011, and I only used it for storing music and videos. Basically, I noticed that there were a few old temp files that VLC was picking up on, and that I couldn't delete or, in fact, see, so I decided to format the drive. Backed up everything to another drive, the format went smoothly, and I copied everything back over. When I rebooted, however, the drive started making noises, and the task manager was showing it at 100% utilisation, even though I could see that nothing was writing to it. Decided to reboot another time for good measure, and when that failed to do anything, unplugged the drive and tried another port. At this point it stopped working completely. I'm not too annoyed because it was a really old drive and was probably going to die soon anyway, but it's still a bit of a shame. Here's a bonus clip of the abhorrent noises the drive was making in its death throes: https://www.youtube.com/watch?v=REYhzs9r8f8
|
# ? Jul 26, 2016 09:27 |
|
Anyone know if/when the prices on WD Reds are going to drop?
|
# ? Jul 26, 2016 21:47 |
|
I know the price for 3TB Reds hasn't moved at all in over 2 years. I wouldn't really expect anything soon.
|
# ? Jul 26, 2016 22:20 |
|
I'd expect some form of decent deal come Black Friday, but yeah there's not a whole lot of pressure for them to drop prices. I don't think any of the technologies that are allowing the 8TB/10TB drives on the market to exist are also lowering the cost of the 2-4TB drives.
|
# ? Jul 26, 2016 22:45 |
|
IOwnCalculus posted:I'd expect some form of decent deal come Black Friday, but yeah there's not a whole lot of pressure for them to drop prices. I don't think any of the technologies that are allowing the 8TB/10TB drives on the market to exist are also lowering the cost of the 2-4TB drives. Any chance SSDs would catch up anytime soon?
|
# ? Jul 26, 2016 23:42 |
|
Cockmaster posted:Any chance SSDs would catch up anytime soon? In price they will probably come down some more, but I don't think they'll reach parity with hard drives. The 850 Evo is at roughly $0.31/GB right now for most versions (500 GB, 1TB, 2TB). The 4TB 850 Evo comes out in a few days but it's at $0.37/GB, or $1,499.
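The $/GB works out from the quoted list price like so (4 TB taken as a round 4000 GB for the sake of the arithmetic):

```shell
# $/GB from the figures quoted above: $1,499 for the 4TB 850 Evo.
per_gb=$(awk 'BEGIN { printf "%.2f", 1499 / 4000 }')
echo "\$${per_gb}/GB"
```

Run the same division against whatever the going HDD price is when you read this and you can see exactly how far apart the two still are.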
|
# ? Jul 27, 2016 00:17 |
Cockmaster posted:Any chance SSDs would catch up anytime soon? I'm also curious about this. My current NAS has five 6TB Reds in it. I'll have to replace the drives in ~4 years. I'm thinking my next NAS could be all SSDs, at the rate prices are falling?
|
|
# ? Jul 27, 2016 00:25 |
|
fletcher posted:I'm also curious about this. My current NAS has five 6TB Reds in it. I'll have to replace the drives in ~4 years. I'm thinking my next NAS could be all SSDs as the current rate prices are falling? Hard to say, as the price of HDDs may still keep pace. Large, 'slow' SSDs for mass storage would be very tempting.
|
# ? Jul 27, 2016 01:25 |
|
Skandranon posted:Hard to say, as the price of HDDs may still keep pace. Large, 'slow' SSDs for mass storage would be very tempting. I think we are still limited by fabrication capacity. Even if we took every fab on the planet and had it cranking out the highest density flash, we would still not be producing an amount of storage equal to what is produced in regular hard drives. Hard drives have lost the desktop wars, and are losing the server wars, but they aren't going away as a storage medium anytime soon, they are just too cheap.
|
# ? Jul 27, 2016 03:38 |
|
|
I kinda feel we're at the point where the majority of average users have more hard drive space than they need. So the low end of storage has come down (and will keep coming down) to a price where they can buy a drive for not too much and be happy. The larger-size drives, though, are able to maintain a higher price point simply from overall lack of demand from consumers.
|
# ? Jul 27, 2016 03:44 |