Nulldevice
Jun 17, 2006
Toilet Rascal
Had something strange pop up on my home server that I haven't seen before. I noticed that webmin was showing two of my RAID disks with errors. At first I thought it was hddtemp being strange, but I decided to check smartctl and see what it was seeing, and sure enough two disks are reporting errors. There are no errors in the SMART counters, just ABRT errors in the error log. What's also strange is that it's two disks sequentially, one very new, one nearly a year old. Here's the SMART output from both disks:
code:
[root@homenas ~]# smartctl -a /dev/sda
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-431.23.3.el6.x86_64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD30EFRX-68EUZN0
Serial Number:    WD-WCC4N0886241
LU WWN Device Id: 5 0014ee 2b4a5e7b8
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   8
ATA Standard is:  ACS-2 (revision not indicated)
Local Time is:    Thu Aug 28 14:19:20 2014 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (40680) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 408) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x703d) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   183   181   021    Pre-fail  Always       -       5833
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       33
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   098   098   000    Old_age   Always       -       1976
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       33
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       23
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       696
194 Temperature_Celsius     0x0022   119   114   000    Old_age   Always       -       31
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
ATA Error Count: 2
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 2 occurred at disk power-on lifetime: 399 hours (16 days + 15 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 51 01 00 00 00 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d5 01 e1 4f c2 00 08      00:16:25.397  SMART READ LOG

Error 1 occurred at disk power-on lifetime: 399 hours (16 days + 15 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 51 01 00 00 00 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d5 01 e1 4f c2 00 08      00:16:24.990  SMART READ LOG
  b0 d5 01 e1 4f c2 00 08      00:16:24.990  SMART READ LOG

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

[root@homenas ~]# smartctl -a /dev/sdb
smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-431.23.3.el6.x86_64] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Device Model:     WDC WD30EFRX-68AX9N0
Serial Number:    WD-WCC1T0777250
LU WWN Device Id: 5 0014ee 2b303d22d
Firmware Version: 80.00A80
User Capacity:    3,000,592,982,016 bytes [3.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   8
ATA Standard is:  ACS-2 (revision not indicated)
Local Time is:    Thu Aug 28 14:22:52 2014 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (38460) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 386) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x70bd) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
  3 Spin_Up_Time            0x0027   179   177   021    Pre-fail  Always       -       6033
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       54
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   088   088   000    Old_age   Always       -       9472
 10 Spin_Retry_Count        0x0032   100   253   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       54
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       35
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       18
194 Temperature_Celsius     0x0022   117   107   000    Old_age   Always       -       33
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

SMART Error Log Version: 1
ATA Error Count: 2
        CR = Command Register [HEX]
        FR = Features Register [HEX]
        SC = Sector Count Register [HEX]
        SN = Sector Number Register [HEX]
        CL = Cylinder Low Register [HEX]
        CH = Cylinder High Register [HEX]
        DH = Device/Head Register [HEX]
        DC = Device Command Register [HEX]
        ER = Error register [HEX]
        ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 2 occurred at disk power-on lifetime: 7896 hours (329 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 51 01 00 00 00 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d5 01 e1 4f c2 00 08      00:18:00.813  SMART READ LOG
  b0 d5 01 e1 4f c2 00 08      00:18:00.802  SMART READ LOG

Error 1 occurred at disk power-on lifetime: 7896 hours (329 days + 0 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  04 51 01 00 00 00 00  Error: ABRT

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  b0 d5 01 e1 4f c2 00 08      00:18:00.802  SMART READ LOG
  b0 d5 01 e1 4f c2 00 08      00:18:00.791  SMART READ LOG

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
My first concern is whether these disks are a ticking time bomb or whether this is a false positive. There aren't any pending sectors or reallocations occurring on the disks. I'm going to take a look at the logs on the server and see if there's anything that coincides with these errors.


Nulldevice
Jun 17, 2006
Toilet Rascal
Wonder why all the drives didn't log that error. Guess smartmon gave up after two drives. Go figure. I've gone ahead and replaced the cables to those drives just to be safe, based on the research I did into the error on my own. I'm not that up to date on the standards SMART uses, so I didn't think it could be a bogus command. Thanks for the info!

Nulldevice
Jun 17, 2006
Toilet Rascal

BobHoward posted:

By any chance are all the rest of your drives something other than WD30EFRX firmware version 80.00A80? My guess is that it's an issue specific to that drive model and/or firmware rev. Note that smartctl reported "Not in smartctl database" for both of them. The database is a list of known quirks, capabilities, parameter interpretation methods, and so forth. It's not unheard of for the generic fallback support to have a few minor issues (which is why the database exists).

The problem with SMART is that it's a much bigger and messier interface than the main ATA command set which actually moves user data around. It's harder to implement 100% correctly, and for that matter lots of it is left as manufacturer defined so there is no such thing as 100% correct. Since you don't need 100% debugged SMART for a drive to work well and be useful, this is a recipe for minor SMART related bugs to make it into shipping products surprisingly often.

code:
[root@homenas ~]# smartctl -a /dev/sda | grep Firmware
Firmware Version: 80.00A80
[root@homenas ~]# smartctl -a /dev/sdb | grep Firmware
Firmware Version: 80.00A80
[root@homenas ~]# smartctl -a /dev/sdc | grep Firmware
Firmware Version: 80.00A80
[root@homenas ~]# smartctl -a /dev/sdf | grep Firmware
Firmware Version: 80.00A80
[root@homenas ~]# smartctl -a /dev/sdd | grep Firmware
Firmware Version: 80.00A80
[root@homenas ~]# smartctl -a /dev/sde | grep Firmware
Firmware Version: 80.00A80
All the same firmware revision.

code:
[root@homenas ~]# smartctl -a /dev/sda | grep 'Device Model'
Device Model:     WDC WD30EFRX-68EUZN0
[root@homenas ~]# smartctl -a /dev/sdb | grep 'Device Model'
Device Model:     WDC WD30EFRX-68AX9N0
[root@homenas ~]# smartctl -a /dev/sdc | grep 'Device Model'
Device Model:     WDC WD30EFRX-68AX9N0
[root@homenas ~]# smartctl -a /dev/sdd | grep 'Device Model'
Device Model:     WDC WD30EFRX-68AX9N0
[root@homenas ~]# smartctl -a /dev/sde | grep 'Device Model'
Device Model:     WDC WD30EFRX-68AX9N0
[root@homenas ~]# smartctl -a /dev/sdf | grep 'Device Model'
Device Model:     WDC WD30EFRX-68EUZN0
Two different device models had the error. So it doesn't seem specific to a firmware revision or specific model really. Just completely random.
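Quick aside: rather than running smartctl six times, a little loop pulls the same info from every drive at once (a sketch, assuming the drives really are sda through sdf):
code:
for d in /dev/sd{a..f}; do
    echo "== $d =="
    smartctl -i "$d" | grep -E 'Device Model|Firmware Version'
done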

Nulldevice
Jun 17, 2006
Toilet Rascal
great, i've got nine of the wd red 3TB drives between two servers. how about those low failure rates on the hitachis tho? did not expect that.

Nulldevice
Jun 17, 2006
Toilet Rascal
Should we tell him or let him find out the hard way?

Nulldevice
Jun 17, 2006
Toilet Rascal

ufarn posted:

Bummer. There has to be some way of migrating.

Yep. Copy your data to an external source like a backup hard drive and then migrate your disks over, then copy your data over to the new NAS. That's about the only way this is going to work.

Nulldevice
Jun 17, 2006
Toilet Rascal

ufarn posted:

Can you do this from the drives alone, or do they need to be connected to/in the NAS?

My World is giving me a ton of access grief, so if I can access the drive directly, that would be great.

Since they're two drives in RAID0, can I just pull them out, format one, and transfer the other, and then format that too?

Then I can just put them in the Synology one at a time.

If they're in RAID0, both drives must remain in the NAS at all times during the data copy. I don't know if you can hook the external up to the My World or not, so you'll have to look into that. (Also be aware of what filesystem the My World formats the external as; that could make the difference in whether you can hook the external directly to the Synology, as I'm not sure what formats the Synology can mount.) Alternatively, you could hook the external up to your PC and copy the data through the PC to the external. Once you've copied everything over, verify it's all there, remove the disks from the My World, place them in the Synology, and perform the initialization/format procedure on the Synology. Then you could possibly hook the external up directly to the Synology and copy the data to the new system. Speeds may vary but it should work. Anyone correct me if I've got this wrong.

Nulldevice
Jun 17, 2006
Toilet Rascal

NihilCredo posted:

I've had the realisation that, since I plan to leave my file server off most of the time and only WakeOnLAN it as necessary - meaning idle power consumption is not really an issue - it might not make much economic sense to build a separate machine from my desktop... hardware-wise, that is.

Software-wise, of course, is probably another matter, since the desktop must run Windows. Hypothetically, what would be the least terrible option available for using a Windows machine as a file server/torrent box/streaming host? Is ZFS off the table or are there drivers for it? Would it be as silly as it sounds to run FreeNAS in a VM under Windows?

You really don't want to run FreeNAS in a VM unless you're doing hardware passthrough of a drive controller or the drives themselves. FreeNAS (ZFS in particular) wants full control of the disks. Virtual disk performance is also awful on top of that, and there can be serious data errors (so I've heard; I ran FreeNAS as a test in ESXi and the disk write performance was very poor). If your dataset isn't changing much, i.e. a large static media collection, you could look into something like Snapraid. Just send the torrents to a scratch disk before they get loaded into the media collection. There's also Flexraid, but I don't know much about it and I believe there's a cost associated with it.

Have you considered a NAS appliance like a Synology or QNAP? I've used both and they're great devices. They're also very low power for the most part and compact, so they don't take up much space. Using your desktop as a server has some drawbacks; I did it years ago and ran into all sorts of limitations and bugs, which have probably long since been fixed. For instance, Windows 7 could run out of memory for file shares, and that one required a registry hack to fix. Also, if you want a local backup of your data, a NAS is handy (but remember to have an offsite solution too). Keeping your local backups on the same machine is asking for trouble: say the power supply goes tits up and takes all the hard drives with it, there go your backups. A NAS appliance like a Synology also comes with great tech support.

Anyway, just some jumbled thoughts.

Nulldevice
Jun 17, 2006
Toilet Rascal
Seagate 3TB NAS drives http://www.newegg.com/Product/Produ...-_-22178392-S0A $89.99 with code ESCEHHF22 through 11:59 PDT tonight.

Nulldevice
Jun 17, 2006
Toilet Rascal
Correct. It will essentially create a mirror of the first drive. You are adding redundancy at that point, not capacity.

Nulldevice
Jun 17, 2006
Toilet Rascal

uhhhhahhhhohahhh posted:

drat, thought it only used part of the drive for parity, not the full one. Looks like I'll have to buy 2 more then, down to 300gb!!





As an aside, I have my xpenology box routeable, as well as Sonar/NZBGet/Couchpotato/PhotoStation (through a DDNS) so I can get on it when I'm out on my phone or at work... everything is passworded obviously but I'm wondering if it's still a security risk? Should I just set up the VPN on my router and connect to that from my phone so I don't have all these ports forwarded?

Personally, this is how I do things. I just run OpenVPN on my server and port forward to the OpenVPN server. I do have some services forwarded through a reverse proxy, but on random high ports, over SSL, and password protected; that's maybe two services. The VPN is nice because it allows for more access without all the port forwarding. More access is nice.

Nulldevice
Jun 17, 2006
Toilet Rascal

Furism posted:

I have an older Synology 4 slots NAS that's been acting up recently. It takes about 20-30 presses on the power button to turn it on and I don't take this as a good sign. It's 4 or 5 years old so not under warranty anymore. I was thinking to replace it before something more important breaks (like the CF card or whatever they store the OS on - happened to me in the past, on a Synology as well).

I was thinking to get something with a x86 processor because, as far as I know, soft RAIDs are faster with those than ARM? I don't need any fancy feature (web UI, Bittorent, DLNA, ...), just SSH (for rsync) and CIFS/NFS. Is there any brand or model that's basically "simple but sturdy" ?

Alternatively I could go for a DIY solution but those seemed quite expensive. I need 4 x3.5" slots and 1x2.5 (for the OS, but I guess I could boot from a USB drive... not sure how reliable those would be in the long term though) if possible on a very small motherboard and case, and very low power consumption as well. Mini-ITX motherboards are incredibly expensive apparently. Note that I live in Europe so Neweggs and websites like that don't work.

So what would be the best solution for my problem?

A QNAP might not be a bad fit for what you're looking for. I've got one at work that I use for various purposes. It's similar to the Synology but doesn't cost as much; mine is a four-bay unit. It supports SSH login and has an rsync client/daemon. I think I paid about $220 for it on sale. It does have an ARM processor in it, but it performs very well: I'm currently running a RAID6 array on it, there's no real latency, and I get near wire speed out of it. This is a TS-431+. As always, YMMV, but I think they're worth looking into.

Nulldevice
Jun 17, 2006
Toilet Rascal

KinkyJohn posted:

What are the differences between WD red/green/blue/purple again? I know I would want red for storage, but why are the others so terrible again?

Red: NAS/storage drives, equipped with TLER for RAID use.
Green: Low-power drives; they park their heads aggressively, which can limit lifespan.
Blue: Standard consumer drives. 1 or 2 year warranty.
Purple: Video-recording (surveillance) drives. You wouldn't use these in a typical storage situation.
Black: High-performance drives. 5 year warranty.

Nulldevice
Jun 17, 2006
Toilet Rascal
I'm fairly certain the 3TB Seagate problems were limited to a specific model/run, the ones made after the Thailand flooding if I'm remembering correctly. Those drives were absolute poo poo and failed at extremely high rates, and they're responsible for the high failure rates seen in the Backblaze graphs. The data has simply been skewed by a bad production run. Remove those drives and you'd see a more realistic view of Seagate's current 3TB drives.

Nulldevice
Jun 17, 2006
Toilet Rascal

EconOutlines posted:

I'm going to be cross posting with the main PC thread a bit but I thought I'd get my ducks in a row here first. I'd like to re-purpose my soon-to-be 5 year old PC as a Plex NAS when I upgrade my main one.

Currently working with:

-i7-2600K
-32 GB DDR3 @ 798MHz (10-10-10-27) which I may carry over to my new build if its compatible
-P67A-D3-B3 (Socket 1155) Mobo (6 SATA slots)
-4TB, 3TB, 2TB Seagate, 3 x 2TB WD Greens. Might replace some of the 2TBs or just add some more drives

Power supply is the only thing I'm going to replace, so it doesn't take my system down when it dies. I'm having difficulty on deciding what to use for an OS/RAID solution due to the need to transcode with Plex. I'd rather my family/friends not be hitting my main system's CPU and instead re-purpose the 2600K.

It seems like some sort of RAID solution on Windows Server or Linux would be best since their actual NAS OS options are limited to commercial hardware except for unRaid.

Any and all suggestions are welcome. :)

You're not really limited to commercial hardware (if by that you mean enterprise) on Linux. I run a home-built CentOS 6 NAS with mdadm RAID6 on off-the-shelf hardware. For your situation, where you have different-sized disks, you're probably best off using a Linux distro with Snapraid. I've used Snapraid in my setup before and it's pretty good; it handles everything from the command line or cron jobs. Basically, set your largest disk(s) as parity and put your data on the remaining disks. If you lose a disk you only lose the data on that one disk, and with Snapraid you can get that data back by replacing the disk and running a command to rebuild it. It also only spins up the disks needed for your operations, as opposed to a traditional RAID where all disks are spinning. It's also free, unlike unRaid or other solutions, and it runs on Windows in the event you want to go with a Windows OS. It's what I'd recommend for this situation.
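For a rough idea of the day-to-day, it boils down to a couple of commands along these lines (a sketch; "d1" stands in for whatever name you gave the dead disk in snapraid.conf):
code:
# build/refresh parity after loading or changing data
snapraid sync

# after replacing a failed disk and mounting it at the same mount point,
# rebuild its contents from parity (d1 = the disk name from snapraid.conf)
snapraid -d d1 -l /var/log/snapraid-fix.log fix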

Nulldevice
Jun 17, 2006
Toilet Rascal

Skandranon posted:

All of this is good, but for your setup, I'd suggest getting another 4tb drive so you can have 2 parity drives with Snapraid. After that, you can gradually swap out the remaining 3tb & 2tb drives as they die with 4tb ones as you feel makes sense.

Good call on the secondary parity. It has been a while since I really dug into Snapraid, so its multiple-parity-disk capabilities had slipped my mind. It can have up to what...six now?

Nulldevice
Jun 17, 2006
Toilet Rascal

Guni posted:

Hi all-knowledgeable NAS goons! I've asked a few questions in this thread before and got some great advice (which I ultimately have never used due to various reasons). But I have a new set of questions so I can confirm what I'm thinking. I'm about to go balls to the wall with my mini-ITX build and remove my 3.5" bay; so that won't be an option to consider.

Basically, I want to store GoPro videos of me and my dad motorbike riding, store music and videos on something that I can easily access (as I'm planning on getting a 1TB SSD to accompany my current, 250GB SSD in my build, it soon won't be feasible to store all this stuff in my actual computer). So all of the stuff I want to do is pretty simple and I know almost any NAS will be able to do it, but here's my questions:

1) My home internet is absolutely woeful (it took 3 days to upload a 15 minute video to YouTube), am I correct in assuming that once this stuff is downloaded, the transfer to the NAS is actually limited by the slowest local piece of equipment (likely my modem/router)? I.e. The transfer of data will be a lot quicker?

2) Assuming the above is correct, do I actually have to have my NAS connected via Ethernet to my modem, or can it all be done via wifi?

3) Do I still need to actually back this data up, or will having the NAS in (insert whatever array# appropriate) be sufficient?

4) Assuming I'm correct on points 1 and 2, what's a good 2 bay NAS?

Thanks in advance goons :)

1) The NAS will transfer the data around your network at the speeds of your internal network connections. So if you've got gigabit ethernet you can expect up to gigabit ethernet speeds. Your network speed is not limited by your internet speed. The only time this is true is when you're transferring data over the internet.

2) Best to connect everything through a switch. If your modem/router has a built in switch, make sure it's at least gigabit. You can transfer data over wifi, but it's best to have the NAS plugged in via wired ethernet.

3) RAID is NOT backup. RAID is for minimizing downtime. Anything you sync to your NAS is a committed change. If you screw up a document or picture and sync it to the NAS, that change is forever and you don't get your old data back. You should consider a backup solution such as an external hard drive or a cloud backup service. Personally I use a second NAS server and 2 external drives and amazon cloud for backup. Total overkill for my linux ISOs but I'd like to make sure I don't lose anything I've been collecting over the years. Yes, you can use your NAS to backup your PCs and other devices, but as far as it being the only target for your data goes, have a working, tested backup.

4) Synology and QNap both make excellent 2-bay units. I run a pair of the old Synology 212js at work and they're decent, but don't expect rocketing speeds out of them. These are also older models, so I don't know what the newer ones can do. I've also got a 4 bay Qnap that I picked up for about $220 on Woot that has dual gigabit ports and I can get great speeds out of it. Currently running 4x3TB WD Red Pros in Raid6. That's 6TB of usable space with 2 disk redundancy. Again, RAID is NOT backup.

Nulldevice
Jun 17, 2006
Toilet Rascal

Cockmaster posted:

Does this mean I'd be all right with a single-drive NAS plus an external hard drive to back up anything I couldn't easily replace (plus Google Drive for important documents)?

I was looking at the Synology DS216play or DS216j.

If you're going with the DS216 model you've got room for two disks, so you're best off using Synology Hybrid RAID or RAID1 plus an external drive and cloud backup. This gives you the best possible protection against downtime and data loss. Remember, with hard drives it's not a matter of 'if' a drive will fail, it's a matter of 'when'. Just remember that any data that isn't backed up off-site is vulnerable to loss from fire, flood, plague of locusts, etc. Make sure you have a regularly scheduled backup.

Nulldevice
Jun 17, 2006
Toilet Rascal

Skandranon posted:

I've been having issues with Crashplan as well. About at the 1.5tb uploaded part and it struggles to upload anything more. Haven't really looked further into it as I'm frustrated enough with it and would rather be doing other things. I was pretty excited about Amazon Drive until I saw it has a 2gb filesize cap.

I'm using Amazon Cloud Drive and I've got files as large as 14GB stored there. I'm using rclone to get them up to the site tho, so I don't know where this 2GB limitation is coming from.
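For reference, the uploads are basically just rclone copy jobs, something like this (assuming a remote named acd: already configured via rclone config; the paths here are made up):
code:
rclone copy /mnt/storage/backups acd:backups --transfers 8 --checkers 16 -v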

Nulldevice
Jun 17, 2006
Toilet Rascal

Lichy posted:

I have an old toshiba laptop that I want to try and use as a local area network drive through my router. I was thinking of installing Fedora server or something like that on it. The goal is for it to be useable as network drive for file transfer between different laptops in the house. It need not be connected to the Internet, however that might work. Any guides on how to set up something similar?

You'd hook the laptop up to the network and set up Samba to share files to the Windows clients, and NFS to the Linux clients. Check out the documentation for Fedora for how to set something like this up.
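As a rough starting point, the sharing side comes down to a couple of config snippets like these (a sketch; the path, user, and subnet are placeholders):
code:
# /etc/samba/smb.conf -- minimal share for the Windows clients
[shared]
    path = /srv/share
    read only = no
    valid users = youruser

# /etc/exports -- NFS export of the same directory for the Linux clients
/srv/share 192.168.1.0/24(rw,sync)
On Fedora that's roughly the samba and nfs-utils packages plus enabling the smb and nfs-server services, but check the current docs since package and service names shift between releases.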

Nulldevice
Jun 17, 2006
Toilet Rascal
Going to have to second DrDork here. The Backblaze data is skewed heavily against Seagate because of the batch of bad 3TB drives that hit the market after the flooding in Thailand; huge failure rate on one specific model of drive. I've been using some Seagate 5TB externals for about a year and they are rock solid, so I'd definitely say their build quality is fine. Currently experimenting with Toshiba 4TB desktop drives in a backup NAS that I built over the weekend. Since it's just a backup host I'm not too concerned about the system, but I'm going to keep an eye on the drives and see how they perform overall. I dumped about 7TB of data on them over about 2.5 days, averaging 800Mb/s, so they do perform very well, and these are desktop-class drives. My go-to drives for a NAS are usually WD Reds, but I'd definitely take a stab at the Seagate NAS drives in the future should I decide to upgrade or replace existing drives. No reason not to; they're generally a few bucks cheaper than the Reds, and that can add up when you're buying multiple drives for a project.

Nulldevice
Jun 17, 2006
Toilet Rascal

Mr. Crow posted:

I'm new to the NAS world and working on building my first system (technically still deciding what I want to do). Plan on setting up a server with ESXi and running multiple VMs, including a NAS. Been looking a lot at ZFS as this thread and most NAS blogs seem to have a hard-on for it; but it kind of seems like overkill for a home media server. I don't like the inflexibility and general requirements it has, at least from a home use scenario.

I was also looking at mergeFS + snapRAID, and to be honest it seems like a much better and robust solution for my needs, I was wondering what experiences y'all have had with them?

Here is an interesting article on using them on a media server https://www.linuxserver.io/2016/02/02/the-perfect-media-server-2016/

So I thought I'd try out the mergerFS and snapraid setup in a VM on my main server. Obviously this isn't going to perform as well as it would on bare metal, but it does give a pretty good idea of how it would function. I built a Debian 8.7.1 install on a 20GB drive because I didn't plan to put much on the main system. Afterward I created six 100GB SCSI drives (KVM, VirtIO) and added them to the VM, partitioned them, and formatted them XFS. I mounted them as follows:

code:
/mnt/disk1
/mnt/disk2
/mnt/disk3
/mnt/disk4
/mnt/disk5
/mnt/parity1
I used UUIDs in the fstab for the disks to keep it clean. I created the same directory structure on each /mnt/disk*.
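The fstab entries end up looking something like this (the UUIDs below are obviously made up; pull the real ones from blkid):
code:
# /etc/fstab
UUID=aaaaaaaa-0000-0000-0000-000000000001  /mnt/disk1    xfs  defaults  0 0
UUID=aaaaaaaa-0000-0000-0000-000000000002  /mnt/disk2    xfs  defaults  0 0
UUID=aaaaaaaa-0000-0000-0000-000000000006  /mnt/parity1  xfs  defaults  0 0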
Next step was to install mergerfs and snapraid. Neither of these is in the Debian repos, so I had to grab the Jessie 64-bit .deb package for mergerfs and download the snapraid 11.0 source code. I installed make and gcc via apt-get so I could compile and install snapraid, and used dpkg -i to install mergerfs.

Mount line for mergerfs: I placed it in /etc/rc.local because, for whatever reason (probably my misunderstanding of something), I could not get it to work in /etc/fstab, so I used this command.
code:
mergerfs -o direct_io,minfreespace=10G,allow_other,use_ino,category.create=eplfs,moveonenospc=true /mnt/disk1:/mnt/disk2:/mnt/disk3:/mnt/disk4:/mnt/disk5 /mnt/storage
Next was to configure snapraid.conf
code:
parity=/mnt/parity1/snapraid.parity

content=/mnt/disk1/snapraid.content
content=/mnt/disk2/snapraid.content

data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/
data d4 /mnt/disk4/
data d5 /mnt/disk5/

exclude *.unrecoverable
exclude /tmp/
exclude /lost+found/
The sample snapraid.conf file has a lot of other options that you can use, but this is the basic configuration that I used to get this system working.

Next was to install Samba, so apt-get install samba. I'm not going to get really in depth here, as the provided configuration file contains plenty of examples for setting up a share. You're going to want to point your share at '/mnt/storage/folder'. Make sure the user connecting has the right permissions on the folder; setting them from the /mnt/storage directory will apply them across all of the disks/folders.

Next is to load some data. Copying data from a Windows 10 Pro machine to this server, I saw peaks of 100MB/s or higher. One thing I did notice that felt extremely strange: when I overwrote a file with the same data, the rate dropped to 3-5MB/s. Overall, the speed was quite acceptable for a virtual machine. I copied 100GB of data to the virtual server via rsync/scp and it hovered around 60MB/s between the host server and the VM. The source of the data and the target VM disk were on the same RAID6, so I think the overall speed there was acceptable.

Next step was to run snapraid sync to write out the parity data. Probably because the source and target disks were all on one array, things ran a little slowly. Also, this VM only has 2 cores and 2GB of RAM allocated to it, so it isn't a powerhouse. I think it took about 30 minutes to sync up 100GB of data.
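If I were running this for real rather than as a test, I'd just schedule the sync (and an occasional scrub) from cron, roughly like this (a sketch; times, log path, and install location are arbitrary):
code:
# /etc/cron.d/snapraid
# nightly parity sync at 03:00
0 3 * * *  root  /usr/local/bin/snapraid sync  >> /var/log/snapraid.log 2>&1
# weekly scrub of ~10% of the array
0 5 * * 0  root  /usr/local/bin/snapraid scrub -p 10  >> /var/log/snapraid.log 2>&1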

Here's what it looks like now:
code:
root@newnas-x:/etc# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdg1        19G  1.8G   16G  11% /
udev             10M     0   10M   0% /dev
tmpfs           403M  5.8M  397M   2% /run
tmpfs          1006M     0 1006M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs          1006M     0 1006M   0% /sys/fs/cgroup
/dev/sdc1       100G  177M  100G   1% /mnt/disk3
/dev/sdb1       100G   40M  100G   1% /mnt/disk2
/dev/sda1       100G   40M  100G   1% /mnt/disk1
/dev/sde1       100G   87G   14G  87% /mnt/disk5
/dev/sdf1       100G   87G   14G  87% /mnt/parity1
/dev/sdd1       100G   31G   70G  31% /mnt/disk4
1:2:3:4:5       500G  118G  383G  24% /mnt/storage
Overall impression is that it's a pretty decent setup if you aren't changing a lot of data. Snapraid warns about this particular setup because I've got five data disks and only one parity disk; it recommends two parity disks for five data disks. Anyway, that's my report on this type of build. I think it's plenty viable. Read up on the mergerfs and snapraid options and configurations; they'll explain a lot about how and why I've configured things the way I did.

Nulldevice
Jun 17, 2006
Toilet Rascal

Twerk from Home posted:

How strong is the suggestion that you shouldn't use ext4 on volumes larger than 16TB? I can't tell if it's just a minor performance thing, or if I should not even consider ext4 on large volumes and look at xfs instead.

https://www.unix-ninja.com/p/Formatting_Ext4_volumes_beyond_the_16TB_limit

It can be done with current tools. The 16TB ceiling comes from the default 32-bit block addressing in ext4; creating the filesystem with the ext4 64bit feature enabled (which recent e2fsprogs supports) takes care of the issue. Personally I just use XFS rather than loving with it.
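If you do want to stick with ext4, the gist of that article is just to create the filesystem with the 64bit feature turned on, something like this (device name is hypothetical):
code:
# older e2fsprogs defaults to 32-bit block addressing, so ask for 64bit explicitly
mkfs.ext4 -O 64bit /dev/md0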

Nulldevice
Jun 17, 2006
Toilet Rascal

jawbroken posted:

1 parity drive is fine for drives that large.

:stonklol:

Nulldevice
Jun 17, 2006
Toilet Rascal

fletcher posted:

Anybody else using restic? I'm looking for a linux CLI client to backup my NAS to B2. It looks pretty nice, and it has a lot of activity on github.

I'm using rclone with B2 and it's really good. Not what you're asking about, but I figured I'd offer an alternative. I uploaded 8TB on my gigabit connection in about three days, with the speed manually throttled to 700Mbps. B2 likes lots of simultaneous connections, so I run 32 concurrent transfers and 64 checkers on my uploads, and I set retention on the B2 side to keep a single version since I have versioned backups at home. Figured it'd save space on the cloud side and keep costs low. I've been using rclone for a while now and it's been good to me.
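The actual command is nothing special, roughly this, with the bucket name made up (--bwlimit is in MBytes/s, so ~87M is about 700Mbps):
code:
rclone sync /mnt/storage b2:my-nas-backup --transfers 32 --checkers 64 --bwlimit 87M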

Nulldevice
Jun 17, 2006
Toilet Rascal

Greatest Living Man posted:

Is anyone here familiar with setting up OpenVPN on FreeNAS? I can now connect to my VPN from an outside computer, but I can't access any intranet sites (like 192.168.1.232, my freeNAS WebUI). I know it has something to do with routing but I'm not really sure where to go from here.

code:
push "route 192.168.1.0 255.255.255.0"
doesn't make my VPN client see anything on my intranet, and I don't think traffic is actually being routed through. I've tried this on my phone with 4G as well so I'm pretty sure it's not a weird LAN incompatibility.

You need a route pointing your OpenVPN subnet back to the server, or else your router won't know what to do with the traffic. It should be as simple as adding a static route on your router. As for pushing all of your traffic out the VPN's default gateway, you need a specific statement in the server config for that: push "redirect-gateway def1" will send client traffic out through the tunnel. I've tested this extensively with clients in places like the UK, France, and China.
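In server.conf terms it's just a couple of push directives, something like this (a sketch; swap in your own LAN subnet):
code:
# let VPN clients reach the LAN behind the server
push "route 192.168.1.0 255.255.255.0"
# optionally send ALL client traffic through the tunnel
push "redirect-gateway def1"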

edit: you'll also need DNS in there as well.
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 8.8.4.4"

-N


Nulldevice
Jun 17, 2006
Toilet Rascal

Greatest Living Man posted:

I added a static route to my router with the host: 192.168.1.254 (my openvpn jail IP address) netmask: 255.255.255.0 gateway: 192.168.1.1 (router IP) metric: 2 and type: WAN. Is this the correct way of thinking about it or should I be creating a static route with the IP that my openVPN assigns? (10.8.0.6)
Not getting any sense of an inter/intranet connection currently.

You should have a route for 10.8.0.0/24 (or whatever your VPN subnet is) pointing to the OpenVPN server, not a route for the server's own LAN address. That's what lets your VPN clients talk to the other hosts on your network.
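On a Linux-based router that's a one-liner; on consumer firmware it's the same thing entered in the static-routes page (10.8.0.0/24 and the jail IP here are taken from your posts above):
code:
ip route add 10.8.0.0/24 via 192.168.1.254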

Nulldevice
Jun 17, 2006
Toilet Rascal

SlowBloke posted:

Hi, I've just finished setting up my new NAS(a QNAP 1253bu) i wanted to ask in case there are any qnap users here:
Is there any advantages/disadvantages to keep a single datavol rather than setting up multiple ones by media type?

What if you run out of room in one of your media partitions for a particular type of media? Kinda leaves you hosed. Just stick with one large volume and use folders and permissions to manage everything. Less chance of poo poo going boom. (oh, and backups, always have backups. raid is not backup.)

Nulldevice
Jun 17, 2006
Toilet Rascal

Incessant Excess posted:

Couldn't you just re-format your disk and then copy the files back from the NAS?

Well the cryptolocker will likely use your own saved network credentials to also gently caress up your backups. That's pretty common these days, so the use of a network share isn't all that safe.

Nulldevice
Jun 17, 2006
Toilet Rascal

Farmer Crack-rear end posted:

Set the network share you're backing up to to be non-writable from your normal credentials, and configure your backups to run under a separate set of credentials.

What I was referring to is that it can access all of your saved network credentials. If you've saved any network credentials in Windows, it's highly likely a competent cryptolocker will be able to use them; everything is stored in the same place. You can see this in Credential Manager in the Control Panel, where locations and credentials are stored. The malware will simply mount the share using the stored credentials and wipe out the backups if possible. Things like File History would be easily wiped out.

The way I've gotten around this is a little different. I share out my directories to my server, and the server mounts each directory using automount and does an rsync diff of the home directory and anything else important (the share is also read-only) and keeps it on the server. All of my NAS shares are read-only, with one exception that is just scratch space / a drop-off location. All of my download work is done directly on the server, using CentOS as a base and various programs to handle the downloads (rtorrent/rutorrent/nzbget); then I log into the server and use custom scripts to manage downloaded content. I have a second server that is used as a backup target and has no shares; everything is loaded via ssh/rsync nightly, with 12 days worth of backups on auto rotation. I also have on-demand mounted external drives, and the final backup is Backblaze B2 via rclone for catastrophic failures.

I put a lot of thought into 'what if', probably to the level of extreme paranoia. Even with all of this I make no assertion that I'm bulletproof; anything can happen. I think I'm pretty well protected as-is, but I'm always looking for ways to improve the situation.
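The nightly rotation is nothing fancy; conceptually it's just rsync with --link-dest so unchanged files become hard links and only 12 dated snapshots are kept. This is a rough sketch rather than my actual script, and the paths are placeholders:
code:
#!/bin/bash
SRC=/mnt/shares/home          # read-only mount of the share being backed up
DEST=/backup                  # local backup pool on the second server
TODAY=$(date +%F)

# hard-link against the previous run so unchanged files cost no extra space
rsync -a --delete --link-dest="$DEST/latest" "$SRC/" "$DEST/$TODAY/"
ln -sfn "$DEST/$TODAY" "$DEST/latest"

# prune snapshots older than 12 days
find "$DEST" -mindepth 1 -maxdepth 1 -type d -mtime +12 -exec rm -rf {} +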

Nulldevice
Jun 17, 2006
Toilet Rascal

Richard M Nixon posted:

I'm sorry in advance for being lovely with research, but I'm having a hard time sorting through very out-of-date info.

I'm rebuilding my 2009-era home NAS after getting lovely after a very bad 2x simultaneous device failure in my raid5 array. I'm looking for alternatives to RAID but a lot of the info I'm reading is from the first half of this decade or I'll find conflicting posts on reddit or other tech sites.

My requirements are:
Linux based
Allow me to pool an arbitrary number of disks together
Simple to add new disks to the pool
Better fault tolerance than 1x disk failure
I don't care much about performance - I'll be streaming media from the disks but only to a single device at a time

From what I was reading, SnapRAID was the leading contender, but its weird scheduled parity building throws me, and my understanding is that the number of failures it can withstand is == the number of parity drives.

I'm also seeing UnRAID (same limitation of parity) and FlexRaid (apparently a lovely dev?). Are there other contenders?

Snapraid is pretty easy to set up and get going, honestly. The configuration file is very well explained. You can sync up the parity as many times a day as you want; when I used it I did a sync nightly. The advantage is that if you delete a file by accident, as long as you didn't sync right afterward, you can undelete it. Snapraid supports up to six parity disks (maybe more now, I haven't checked), so it could conceivably withstand a six-disk failure. The only restriction is that the parity disks must be as big as, or larger than, your data disks.

I created a prototype server using Snapraid and MergerFS to create a disk pool with parity. To make the pool consistent I used the same directory structure on every data disk that was going into the MergerFS pool. Once combined, all the data appears in one location. You can set MergerFS to only fill a disk to a certain percentage before moving to the next disk in the pool. Performance seemed pretty decent even in a VM (I think bare metal would do even better), and functionality was good. I created a Samba share pointing to the merged directories as needed and had no trouble. I did this using Debian 8; I believe all of the packages are in apt or can be added pretty easily. As far as setup difficulty goes, I'd rate it as relatively easy if you're patient and pay attention to detail. It's also very easy to add disks to the system: just make sure they're smaller than or equal in size to the parity disks, and add them to snapraid.conf and the mergerfs mount command. Here's some output from the system:

Disk space (small VM, so the disks are small, but it gets the point across). You'll notice the parity file is as large as the most space consumed by data on any single disk:
code:
/dev/sdd1        99G   81G   14G  86% /storage0/parity
/dev/sdc1        99G   81G  9.9G  90% /storage0/disk3
/dev/sdb1        99G   52G   40G  57% /storage0/disk2
/dev/sda1        99G   80G   12G  88% /storage0/disk1
1:2:3           295G  211G   61G  78% /storage0/main
Parity disk (this is a small system, so single parity disk is all that's needed right now)
code:
/dev/sdd1 on /storage0/parity type ext4 (rw,relatime,data=ordered)
These are the data disks.
code:
/dev/sdc1 on /storage0/disk3 type ext4 (rw,relatime,data=ordered)
/dev/sdb1 on /storage0/disk2 type ext4 (rw,relatime,data=ordered)
/dev/sda1 on /storage0/disk1 type ext4 (rw,relatime,data=ordered)
This is the MergerFS Pool
code:
1:2:3 on /storage0/main type fuse.mergerfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
Mount command for merger:
code:
mergerfs -o defaults,allow_other,direct_io,use_ino,category.create=eplfs,moveonenospc=true,minfreespace=10G /storage0/disk1:/storage0/disk2:/storage0/disk3 /storage0/main
The snapraid config (look at the uncommented portions):
code:
# Example configuration for snapraid

# Defines the file to use as parity storage
# It must NOT be in a data disk
# Format: "parity FILE [,FILE] ..."
#parity /mnt/diskp/snapraid.parity
parity /storage0/parity/snapraid.parity

# Defines the files to use as additional parity storage.
# If specified, they enable the multiple failures protection
# from two to six level of parity.
# To enable, uncomment one parity file for each level of extra
# protection required. Start from 2-parity, and follow in order.
# It must NOT be in a data disk
# Format: "X-parity FILE [,FILE] ..."
#2-parity /mnt/diskq/snapraid.2-parity
#3-parity /mnt/diskr/snapraid.3-parity
#4-parity /mnt/disks/snapraid.4-parity
#5-parity /mnt/diskt/snapraid.5-parity
#6-parity /mnt/disku/snapraid.6-parity

# Defines the files to use as content list
# You can use multiple specification to store more copies
# You must have least one copy for each parity file plus one. Some more don't hurt
# They can be in the disks used for data, parity or boot,
# but each file must be in a different disk
# Format: "content FILE"
content /var/snapraid.content
content /storage0/disk1/snapraid.content


# Defines the data disks to use
# The name and mount point association is relevant for parity, do not change it
# WARNING: Adding here your /home, /var or /tmp disks is NOT a good idea!
# SnapRAID is better suited for files that rarely changes!
# Format: "data DISK_NAME DISK_MOUNT_POINT"
data d1 /storage0/disk1/
data d2 /storage0/disk2/
data d3 /storage0/disk3/

# Excludes hidden files and directories (uncomment to enable).
#nohidden

# Defines files and directories to exclude
# Remember that all the paths are relative at the mount points
# Format: "exclude FILE"
# Format: "exclude DIR/"
# Format: "exclude /PATH/FILE"
# Format: "exclude /PATH/DIR/"
exclude *.unrecoverable
exclude /tmp/
exclude /lost+found/

# Defines the block size in kibi bytes (1024 bytes) (uncomment to enable).
# WARNING: Changing this value is for experts only!
# Default value is 256 -> 256 kibi bytes -> 262144 bytes
# Format: "blocksize SIZE_IN_KiB"
#blocksize 256

# Defines the hash size in bytes (uncomment to enable).
# WARNING: Changing this value is for experts only!
# Default value is 16 -> 128 bits
# Format: "hashsize SIZE_IN_BYTES"
#hashsize 16

# Automatically save the state when syncing after the specified amount
# of GB processed (uncomment to enable).
# This option is useful to avoid to restart from scratch long 'sync'
# commands interrupted by a machine crash.
# It also improves the recovering if a disk break during a 'sync'.
# Default value is 0, meaning disabled.
# Format: "autosave SIZE_IN_GB"
#autosave 500
autosave 50
# Defines the pooling directory where the virtual view of the disk
# array is created using the "pool" command (uncomment to enable).
# The files are not really copied here, but just linked using
# symbolic links.
# This directory must be outside the array.
# Format: "pool DIR"
#pool /pool

# Defines a custom smartctl command to obtain the SMART attributes
# for each disk. This may be required for RAID controllers and for
# some USB disk that cannot be autodetected.
# In the specified options, the "%s" string is replaced by the device name.
# Refers at the smartmontools documentation about the possible options:
# RAID -> https://www.smartmontools.org/wiki/Supported_RAID-Controllers
# USB -> https://www.smartmontools.org/wiki/Supported_USB-Devices
#smartctl d1 -d sat %s
#smartctl d2 -d usbjmicron %s
#smartctl parity -d areca,1/1 /dev/sg0
#smartctl 2-parity -d areca,2/1 /dev/sg0
Really, all of this is configured in two places to get it up and running: snapraid.conf and your mergerfs mount command (I keep the mergerfs mount in rc.local). Just decide what size disks you want to max out at, buy some of those for parity and some for data (smaller disks are fine for data too). Once you dump all of your data on the system you'll need to do the first sync, which will take a while.

I found this build described on the internet in a couple of places and decided to try it out. It's pretty solid.

Nulldevice
Jun 17, 2006
Toilet Rascal

Harik posted:

Why are people transcoding in 2018? I haven't had a media box in years that can't play anything i throw at it.

Kodi on fire TV, built-in player on my roku, etc.

Plex shouldn't be a problem just streaming files as-is.

Phone watching, tablet watching. There are all sorts of devices people may want to use that require transcoding; not every device can play back content as-is.

Nulldevice
Jun 17, 2006
Toilet Rascal

eightysixed posted:

What's a good/the recommended processor/mobo combo with the most SATA ports available? Nothing heavy duty, but I'm finally going to migrate to unRAID and ditch Xpenology. I have 4x4TB's brand new, never opened and my old 5x1TB from the Xpen box.

Well, really, how many SATA ports do you ideally want? The pickings get slim the higher you go; the most common count is 6. I looked at 10-port boards and pretty much everything was either gamered out or cost over $500. As far as a processor goes, an i3 will be adequate for your needs unless you plan on running a lot of virtual machines or containers. What I would probably do is go for an 8th-gen i3, since it has four physical cores, plus a decently priced 6-to-8-port board.

https://www.newegg.com/Product/Product.aspx?item=N82E16813144162 - $68 MSI motherboard with 6 SATA ports and a 16X PCIE port if you want to drop in a SAS controller to add more drives. Supports 8th gen processors.
https://www.newegg.com/Product/Product.aspx?Item=N82E16819117822&cm_re=core_i3-_-19-117-822-_-Product -- 4c/4t Core i3 8100 CPU (300 series, compatible with motherboard)
https://www.newegg.com/Product/Product.aspx?Item=N82E16820148983 - 8GB DDR4 RAM
-or-
https://www.newegg.com/Product/Product.aspx?item=N82E16820148985 - 16GB DDR4 RAM

Nulldevice
Jun 17, 2006
Toilet Rascal

Tamba posted:

I have an old 160 GB SSD that I don't use anymore, and my FreeNAS server has an empty SATA port.
Can I add that as L2ARC, or is there any reason why I shouldn't do that?

I'll let the experts explain the whys but from my experience unless you're running an enterprise system there is almost no reason to run an L2ARC on a home system. It'll pretty much never see use. You have to be exhausting tons of ARC to get to that point. Basically it serves no benefit.

Nulldevice
Jun 17, 2006
Toilet Rascal

Discussion Quorum posted:

I am going to piggyback off the poster above asking about DIY.

I need something reasonably compact, low-power, and quiet (our home office is becoming a nursery, so it will live next to the TV until we move somewhere with a dedicated office). Main uses will be file sharing and backup of personal files, consolidated scheduled remote backup to Backblaze B2, and running a bunch of Docker containers (4-6, mostly lightweight stuff like OpenHab, Mosquitto, and PiHole). AES-NI would be nice for filesystem encryption and maybe offloading OpenVPN from my router. Possibly some low-usage (max 2 streams) Plex/Emby. Currently some of this is running on a stack of RPis which I would like to cut down or replace.

Would something like the QNAP TS-453Be be the right solution for me, or would I do better to build something with a mITX embedded board with the same J3455? How is the experience running plain-Jane Linux after QNAP support ends?

Anything to be careful of/avoid if shopping for a used QNAP or MicroServer?

I think the only thing that would hold you back on the QNAP is the 8GB memory limitation. I haven't owned a QNAP in a long time, so I don't know about B2 integration, but I'm guessing you've looked into this. I would venture to say the QNAP may suit your needs, but read on for my opinions.

I would weigh the cost of the QNAP hardware (diskless) against going DIY and see which gets you more horsepower for the money.

For DIY you could go with an 8th gen Celeron (2c/2t) or Pentium (2c/4t) and 16GB DDR4 RAM to handle the Docker containers. Get a board with sufficient SATA ports or grab an HBA from ebay for less than $50, and install the OS of your choice on a drive connected to the motherboard. For the living room I'd find a case with a low noise profile. For the container side, see the sketch below.
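Once Docker is installed, the lightweight stuff you listed is only a couple of commands. A rough sketch only; the ports and timezone are just examples, but eclipse-mosquitto and pihole/pihole are the official images:
code:
# home-automation containers -- adjust ports/volumes to taste
docker run -d --name mosquitto --restart unless-stopped -p 1883:1883 eclipse-mosquitto
docker run -d --name pihole --restart unless-stopped \
  -p 53:53/udp -p 53:53/tcp -p 8080:80 \
  -e TZ=America/New_York pihole/pihole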

Anyway I hope my rambling helps a little.

Nulldevice
Jun 17, 2006
Toilet Rascal

meinstein posted:

I'm new to this and I'm looking to build something to run Open Media Vault - I think? It looks like that's the way I should be going.

Can someone help me with the build I have below? Something tells me I should just scrap all of it and buy an appliance (a friend recommended Drobo), but here are my possibly stupid reasons for wanting something I have full control over and OpenMediaVault in particular:

1. I'm fairly comfortable with linux. I maintain a few .deb packages and debian-based distros are my go-to for machines where I care about stability.
2. I'll end up running a plex server off it if I have the computing power, so I don't need to turn on my Good Gaming Rig to watch my home movies
3. I have some home-automation ideas, I work as a software engineer at an IoT shop and I've been putting too many things on too many raspis. It would be nice to just start churning out docker images and having them live happily together
4. I need to be able to back up from macOS, Linux, and Windows

I'm pretty sure in particular that CPU is overkill in both processing power and power consumption, but I'm not sure where I should be going instead, especially when it's so cheap to buy. I've been looking at boards with integrated Atom processors, but I'm quite attached to that case and finding a good mini-ITX combo is difficult.

It's my understanding I don't need to worry about any sort of hardware raid with OMV -- right?

Thank you for any and all suggestions!

PCPartPicker Part List

CPU: Intel - Celeron G4900 3.1 GHz Dual-Core Processor ($56.50 @ Walmart)
CPU Cooler: be quiet! - Pure Rock Slim 35.14 CFM CPU Cooler ($29.88 @ OutletPC)
Motherboard: ASRock - H370M-ITX/ac Mini ITX LGA1151 Motherboard ($108.88 @ SuperBiiz)
Memory: G.Skill - Aegis 8 GB (1 x 8 GB) DDR4-2666 Memory ($34.99 @ Newegg)
Storage: SanDisk - SSD PLUS 120 GB 2.5" Solid State Drive ($27.95 @ Amazon)
Storage: Western Digital - Red 4 TB 3.5" 5400RPM Internal Hard Drive ($123.72 @ Newegg)
Storage: Western Digital - Red 4 TB 3.5" 5400RPM Internal Hard Drive ($123.72 @ Newegg)
Storage: Western Digital - Red 4 TB 3.5" 5400RPM Internal Hard Drive ($123.72 @ Newegg)
Case: Fractal Design - Node 304 Mini ITX Tower Case ($87.40 @ Newegg)
Total: $716.76
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2019-04-28 20:47 EDT-0400

This looks like a good starter build for an OMV box. You won't need any hardware RAID for it since you'll be using mdadm for software RAID. If you're going to run Plex, though, you might want a beefier CPU.
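For what it's worth, the mdadm side is only a couple of commands (OMV can also do it from the web UI). A sketch only; your device names will differ:
code:
# build a 3-disk RAID5 out of the three 4TB Reds and put a filesystem on it
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0

# save the array definition so it assembles on boot, then watch the initial resync
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
cat /proc/mdstat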

Nulldevice
Jun 17, 2006
Toilet Rascal

Brain Issues posted:

Anybody else have shucked 14TBs yet?

I'm wondering if these are SMR, because my 2 14TB drives have consistently 3-4x Utilization% reported by my Synology.

This is with no data transfer currently happening except for seeding some torrents (at only 500KB/s).

edit: Am I an idiot, and it's just because those 2 14TB drives are probably being used as the 2 parity drives?

In Synology units parity is striped across all drives; there are no dedicated parity drives like there are in systems such as unRAID, SnapRAID, or RAID 4. There may be another reason why the drives are showing activity. How long have the drives been in the system? Synology uses mdadm and (optionally) btrfs to build arrays, so if the drives are showing activity it may still be building the array. You could try using the console and checking /proc/mdstat (cat /proc/mdstat) to see if any of the arrays are still being built.

Nulldevice
Jun 17, 2006
Toilet Rascal

Brain Issues posted:

I put the two 14tb drives in about 2 months ago, and converted from SHR-1 to SHR-2. The conversion took 3 weeks to finish building the array.

It is no longer building the array.

code:
admin@freenas:/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] 
md5 : active raid0 sdge1[0]
      976757952 blocks super 1.2 64k chunks [1/1] [U]
      
md2 : active raid6 sda5[0] sdgc5[11] sdgb5[10] sdf5[6] sde5[9] sdd5[7] sdc5[2] sdb5[8]
      23413124736 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
      
md4 : active raid6 sdd7[0] sdgc7[4] sdgb7[3] sde7[2] sdb7[1]
      5857183296 blocks super 1.2 level 6, 64k chunk, algorithm 2 [5/5] [UUUUU]
      
md3 : active raid6 sdf6[0] sdgc6[5] sdgb6[4] sde6[3] sdb6[2] sdd6[1]
      15627995648 blocks super 1.2 level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]
      
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4] sdf2[5]
      2097088 blocks [6/6] [UUUUUU]
      
md0 : active raid1 sda1[0] sdb1[1] sdc1[3] sdd1[2] sde1[4] sdf1[5]
      2490176 blocks [6/6] [UUUUUU]
      
unused devices: <none>
The only activity currently happening is seeding torrents, no Smart Tests are running, no indexing, etc.

Do you have some documentation supporting that Synology doesn't use dedicated parity drives for SHR-1/SHR-2? Because it doesn't seem like that is what Synology is doing to me.

It's just the underlying technology. You're seeing it right there in the mdstat output: each array is showing either raid6 or raid1, not raid4 (dedicated parity). Synology doesn't give you the option to use a dedicated parity drive. Which disks are serving up/downloading the torrents?

Nulldevice
Jun 17, 2006
Toilet Rascal

Brain Issues posted:

How can I tell? They're all part of 1 volume.

Hmmm, iostat, if it's available in the shell, could tell you which disks are servicing your torrent traffic. Unfortunately my Synology is at my folks' place 550 miles away so I can't experiment. I haven't heard anything about the 14TB drives being SMR at this time, so I wouldn't worry about it just yet. I know my 1019+ with 5x12TB is always doing something (I can hear the disk activity) but it's low-level stuff, similar to or lower than your 14TB drives. I just don't worry about it. When I retrieve the unit I'm going to see what the activity is.
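Something along these lines is what I'd try; a sketch only, since I'm not sure which DSM versions ship iostat (it comes from the sysstat package), and the device names are just examples:
code:
# extended per-device stats every 5 seconds -- watch the %util column per disk
iostat -x 5

# fallback if iostat isn't installed: raw kernel counters for the whole disks
grep -E ' sd[a-z]+ ' /proc/diskstats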


Nulldevice
Jun 17, 2006
Toilet Rascal

Sneeze Party posted:

I have two Seagate 8TB Ironwolf drives in my 2-bay Synology NAS. I think they're SMR. Does that mean if one of them fails, I won't be able to rebuild the array?

Ironwolf drives are all CMR.

efb
