necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Perhaps people should leave the distinction at "portable" vs. "non-portable", because that's what people really care about in the end, I think, rather than whether it's done in hardware, a proprietary device driver, or software anymore.


grunthaas
Mar 4, 2003

Over the years I've kept a home server of some kind and have bought drives to top up the storage. Recently I've ended up with a big old tower with 8 drives in it, ranging from 250GB to 1.5TB. I've decided that it's too big and power hungry, so I've just got myself an N40L and a 3TB drive. The drives I want to keep using are 1TB, 1.5TB and 3TB, leaving 1 spare slot in the chassis for the future. I've got an old 160GB as a boot drive in the ODD bay.

I'll be running Linux on the server as I'm pretty familiar with it and run it on most of my other machines. On the current server I have 4x250GB in a RAID5 array for a bit of redundant storage. I'd like to have something similar on the new box, but I'm not sure what the best way to set it up would be. Given that I've got differing size drives, I think my options are BTRFS or ZFS (thanks to error1 for mentioning ZFSonLinux). As far as I can tell, at the moment BTRFS doesn't have RAID5 functionality, so it's going to have to be ZFS.

I've googled around and found info about setting up ZFS with redundancy on different size drives by partitioning them and then building the filesystem on the partitions. This info is generally on older pages and seems like a workaround rather than the proper way to do it. So, my question: what's the best way for me to set up ZFS on my 3 drives?

grunthaas fucked around with this message at 23:37 on Oct 5, 2012

evil_bunnY
Apr 2, 2003

The best way is to buy drives that are the same size in the first place.

Longinus00
Dec 29, 2005
Ur-Quan

grunthaas posted:

Over the years I've kept a home server of some kind and have bought drives to top up the storage. Recently I've ended up with a big old tower with 8 drives in it, ranging from 250GB to 1.5TB. I've decided that it's too big and power hungry, so I've just got myself an N40L and a 3TB drive. The drives I want to keep using are 1TB, 1.5TB and 3TB, leaving 1 spare slot in the chassis for the future. I've got an old 160GB as a boot drive in the ODD bay.

I'll be running Linux on the server as I'm pretty familiar with it and run it on most of my other machines. On the current server I have 4x250GB in a RAID5 array for a bit of redundant storage. I'd like to have something similar on the new box, but I'm not sure what the best way to set it up would be. Given that I've got differing size drives, I think my options are BTRFS or ZFS (thanks to error1 for mentioning ZFSonLinux). As far as I can tell, at the moment BTRFS doesn't have RAID5 functionality, so it's going to have to be ZFS.

I've googled around and found info about setting up ZFS with redundancy on different size drives by partitioning them and then building the filesystem on the partitions. This info is generally on older pages and seems like a workaround rather than the proper way to do it. So, my question: what's the best way for me to set up ZFS on my 3 drives?

So let's ignore btrfs/zfs for a second and think about how you would even set this up using mdadm. If you care about redundancy (and it sounds like you do) then you could create a 1TB raid5 across all 3 disks and a 0.5TB raid1 between the 1.5TB and 3TB drives, giving you 2.5TB total available space. With btrfs you can just add all the disks into a btrfs-style raid1 and you'd also get 2.5TB total. With zfs you could probably create a 1TB raidz (giving you 2TB usable now) and just expand it as you swap out your small disks for larger ones. Trying to add a 4th disk to your setup would be easy with btrfs, easy-ish with md, and very painful with zfs.

tl;dr: btrfs is the only sane solution for throwing heterogeneous disk sizes at, but you don't currently get redundancy above mirroring. Alternatively, solutions from Synology and Drobo will handle heterogeneous disk sizes without much thought, but you obviously need their hardware for that.
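Longinus00's md layout can be made concrete. A sketch, assuming hypothetical device names (sdb = 1TB, sdc = 1.5TB, sdd = 3TB), each with a 1TB first partition, and a 0.5TB second partition on the two larger disks:

```shell
# 1TB RAID5 across all three disks: 2TB usable
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# 0.5TB RAID1 between the 1.5TB and 3TB disks: 0.5TB usable
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
# total: 2.5TB redundant space; the remaining 1.5TB of the 3TB disk
# has no partner to mirror against, so it can only be used unprotected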

Longinus00 fucked around with this message at 03:43 on Oct 6, 2012

grunthaas
Mar 4, 2003

Thanks for the suggestions.

evil_bunnY posted:

The best way is to buy drives that are the same size in the first place.

I agree, but I don't want to buy 2 or 3 new drives and end up with 2.5TB going spare; that seems like a waste to me.

Longinus00 posted:

So let's ignore btrfs/zfs for a second and think about how you would even set this up using mdadm. If you care about redundancy (and it sounds like you do) then you could create a 1TB raid5 across all 3 disks and a 0.5TB raid1 between the 1.5TB and 3TB drives, giving you 2.5TB total available space. With btrfs you can just add all the disks into a btrfs-style raid1 and you'd also get 2.5TB total. With zfs you could probably create a 1TB raidz (giving you 2TB usable now) and just expand it as you swap out your small disks for larger ones. Trying to add a 4th disk to your setup would be easy with btrfs, easy-ish with md, and very painful with zfs.

tl;dr: btrfs is the only sane solution for throwing heterogeneous disk sizes at, but you don't currently get redundancy above mirroring. Alternatively, solutions from Synology and Drobo will handle heterogeneous disk sizes without much thought, but you obviously need their hardware for that.

Thanks for this, maybe I'll look more at BTRFS. I guess it will eventually get RAID5-style redundancy, and for the moment I could set up some partitions with mirroring and leave the rest of the space for less important stuff.

evil_bunnY
Apr 2, 2003

Jesus don't run btrfs for poo poo you care about.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Given how long BTRFS has been in development now, you'd think it'd start to get stable. It must be five years at this point; ZFS was already relatively stable at that age.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
How do I determine which USB port my UPS is plugged into for the UPS configuration in NAS4Free? And is there a way I can check that it's talking to the UPS?

edit: It printed it out to the terminal when I plugged in the UPS.

For an APC BE750G I used



Status page was under Diagnostics->Information->UPS.
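For anyone with a similar unit: NAS4Free's UPS service is NUT under the hood, and a USB APC model like the BE750G typically uses the generic `usbhid-ups` driver with `port = auto`. These are the common defaults, an assumption here, not necessarily the exact values fletcher entered:

```
[apc]
    driver = usbhid-ups
    port = auto
```

With NUT running you can also confirm it's talking to the UPS from a shell with `upsc apc@localhost`, which dumps battery charge, runtime, and status.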

fletcher fucked around with this message at 01:17 on Oct 7, 2012

Longinus00
Dec 29, 2005
Ur-Quan

grunthaas posted:

Thanks for the suggestions.


I agree, but I don't want to buy 2 or 3 new drives and end up with 2.5TB going spare; that seems like a waste to me.


Thanks for this, maybe I'll look more at BTRFS. I guess it will eventually get RAID5-style redundancy, and for the moment I could set up some partitions with mirroring and leave the rest of the space for less important stuff.

So while btrfs is the most sane solution for heterogeneous disks, that doesn't necessarily mean it's the solution for you. It's still technically labeled experimental, and you'll want to run the newest kernel available (3.5/3.6). The "RAID is not backup" mantra applies here.
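For what it's worth, the all-mirrored btrfs setup described above is a one-liner with 3.5/3.6-era tools; device names are hypothetical:

```shell
# data and metadata both raid1: every block lives on two different disks.
# With 1TB + 1.5TB + 3TB that caps out around 2.5TB usable, since the 3TB
# disk can only mirror against the two smaller ones combined.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
mount /dev/sdb /mnt/storage          # mounting any member mounts the whole fs
btrfs filesystem df /mnt/storage     # shows the raid1 allocation
```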


Combat Pretzel posted:

Given how long BTRFS has been in development now, you'd think it'd start to get stable. It must be five years at this point; ZFS was already relatively stable at that age.

It's relatively stable now, especially in 3.5/3.6, and some companies (e.g. Oracle) even officially support it. Comparing the age of btrfs to ZFS isn't exactly fair, because at the start btrfs was just the pet project of one person and didn't get all that much support from Oracle. Since it was developed in the open, however, lots of companies (e.g. Fujitsu, Red Hat) are contributing to it now, so it's improved a lot recently.

Does anyone know the status of the open source ZFS branch? Can I assume that it's effectively frozen feature-wise, since anything they add will make it a fork of the official ZFS?

evensevenone
May 12, 2001
Glass is a solid.
I'd like a NAS box thingie that does the following things:

* Time Machine
* Windows Backup
* Bittorrent (with upnp)
* Decent performance (it seems like this is a problem for the cheaper ones?)
* $100-$200 range (diskless)

Don't need:
* newzbin
* ZFS/whatever super-duper file system (raid 1 is fine)

It seems like most everything has that, but there are also a lot of reports of various ones being unstable or randomly slow. What is the cheapest solid option?

D-Link DNS-320 ($99)
Zyxel NSA320 ($120)
BUFFALO LinkStation Pro Duo ($120)
Synology ds212J ($200)
QNAP 212 ($180)

I guess basically the question is: are the QNAP/Synology a lot better? They all seem to have similar processors and around 256MB of RAM.

For the bittorrent client, ideally I'd like it to be able to handle a couple hundred (idle) torrents at once and have a fairly simple interface for adding new ones. If it picked up everything saved to a specific folder, that would be best, and some support for moving torrents would be really helpful.

bobfather
Sep 20, 2001

I will analyze your nervous system for beer money
A few hundred torrents may be taxing for a NAS box like that.

With that said, I have a DS212J and I run Transmission on it. It works well, but I never have multiple torrents going, and in fact I find Usenet a whole lot better (and almost never use torrents).

The QNAP hardware is solid, but the software isn't great. Of all the listed options, the DS212J is best.

Thanks Ants
May 21, 2004

#essereFerrari


Seconding Synology - they are very nice little boxes for the sort of usage you'd put on them at home.

Bonobos
Jan 26, 2004
So I ended up getting those 4TB drives after all.

I was reading online that I should test out the drives before putting them in use in my array. Since I haven't really done anything like that, what would be the best way to thoroughly test out a drive to make sure I have one without issues?

Online, as always, opinions are mixed: some suggest a full format, others suggest using Windows' built-in chkdsk command to look for bad blocks.

Or should I be using Hitachi's Drive Fitness Test, since the drives are HGST/Hitachi drives?

What is the recommended course of action for prepping/testing drives prior to using them for your data?

Longinus00
Dec 29, 2005
Ur-Quan

Bonobos posted:

So I ended up getting those 4TB drives after all.

I was reading online that I should test out the drives before putting them in use in my array. Since I haven't really done anything like that, what would be the best way to thoroughly test out a drive to make sure I have one without issues?

Online, as always, opinions are mixed: some suggest a full format, others suggest using Windows' built-in chkdsk command to look for bad blocks.

Or should I be using Hitachi's Drive Fitness Test, since the drives are HGST/Hitachi drives?

What is the recommended course of action for prepping/testing drives prior to using them for your data?

If the advice was related to the often-repeated mantra of "hard drive failures follow a bathtub distribution" (even though some high profile research has found no such correlation; indeed, the research casts doubt on many commonly held beliefs about hard drive failures), then the idea is that hard drives tend to fail either early or late in their life. In that case the point is to stress the hard drive early, so that if it does happen to suffer "infant mortality" you can get it replaced quickly and under warranty. To weed out any early failures, just load the drives as much as you can for a couple of months and see if any of them fail. I highly recommend you read the Google paper (it's not super technical; print it out and read it on the toilet) and make up your own mind.

If you're too lazy to do even that, here are two blog posts that summarize a bunch of the more relevant points:
http://storagemojo.com/2007/02/19/googles-disk-failure-experience/
http://storagemojo.com/2007/02/20/everything-you-know-about-disks-is-wrong/
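A typical burn-in along the lines Longinus00 describes, for a blank drive before it goes into the array (the badblocks pass is destructive; device name hypothetical):

```shell
smartctl -t short /dev/sdX   # quick electrical/mechanical sanity check
badblocks -wsv /dev/sdX      # destructive 4-pattern write+verify of every sector
                             # (expect this to take days on a 4TB drive)
smartctl -t long /dev/sdX    # full surface read scan
smartctl -a /dev/sdX         # then check Reallocated_Sector_Ct and
                             # Current_Pending_Sector; nonzero = RMA candidate
```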

Longinus00 fucked around with this message at 02:43 on Oct 9, 2012

phorge
Jan 10, 2001
I got banned for not reading the Leper Colony. Thanks OMGWTFBBQ!
OK, I'm 99% sure I know the solution to my problem, but I really don't like it and don't want to do it if I don't have to, so I'm looking for a goon hail mary... any input would be appreciated.

My home server consists of a 12-drive RAID6 mdadm array with a 13th drive as a hot spare (and a 14th cold spare). All drives are Seagate ST31500341AS (1.5TB) drives (1 is currently out of warranty, 6 expire by the end of 2012, the remainder in 2013). The filesystem is ext4, and at the time of the original array creation I didn't know about the ext4 16TB limit. Over the past few years I've had a handful of disks fail; no big deal, I have a hot spare... it rebuilds, I RMA the bad drive and the replacement becomes my new hot spare. I've also expanded the array and filesystem a few times over the years. Flash back to this past weekend... I notice I have less than 300GB free on my array. I decide to take my cold spare drive and add it to the array. I expand the array +1 and ~30 hours or so later it successfully finishes. I run an e2fsck and all looks good, so I expand the filesystem... all looks good. Remount the array and now have a 15TB array... all is well. I then convert what used to be the cold spare to a hot spare and go on with my day thinking I have everything covered. Yesterday afternoon I get an email from the server that a drive in the array has failed, but it's rebuilding to the hot spare. This morning the rebuild was ~80% complete, and then mdadm sent an email that yet another drive has failed... now I'm getting nervous. I'm at work, but the initial rebuild should be complete by now; the array is running degraded because that 2nd drive dropped out this morning. I should still have valid data should another drive drop out... but I'm obviously not comfortable about what's going on (seriously, if anyone mentions backups I'll cut you... full-on duplication of data is not realistic in this type of home situation; RAID6 and a couple of spare drives is all I was willing to commit to).

So, I'm looking at what my options are here... I can pay through the nose to buy another couple of the same drives in the array to keep limping along as-is (after all, as I understand it, I cannot expand this array past 16TB because of some sort of ext4 issue... please tell me I'm wrong with this point)... or build a whole new server with larger drives and some filesystem that doesn't have a 16TB limit (xfs maybe?) and copy all the data to that new server. Obviously going the second route is going to cost lots of money, but I really don't see a way around it because of the ext4 quagmire I've gotten myself into. I would love it if I could gradually replace my drives with larger ones and expand past 16TB... but I don't see a way to accomplish this. I think mdadm supports gradually replacing existing drives with larger ones (one at a time, rebuilding each time), but if I did this, I don't think I would realize any more space until replacing all drives (and if I start buying 3TB or 4TB drives, I really don't need to start off with 13 drives... 7 drives and another Norco enclosure would probably be cheaper).

So, fellow goons... is there any way short of a new server / array in my future? Sure, in the short term I can power the server off (hopefully staving off any additional failures), RMA the current two failed drives (each still has a valid warranty), then power it back on and let it rebuild with the replacements... but I think I'm stuck at the current array size of ~15TB. What about in 6 months when I'm running out of space again? First world problems, I know...

Hoping someone with a deeper understanding of mdadm and ext4 might have some bright idea that will help me out without dropping $1500+ on parts to build another server.
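For the record, the gradual-replacement route works mechanically with mdadm; a sketch with hypothetical device names, which runs headlong into the ext4 limit at the last step:

```shell
# one member at a time: fail it out, swap in the bigger disk, re-add
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
# ...physically replace the disk, partition it at the larger size, then:
mdadm /dev/md0 --add /dev/sdb1     # full rebuild each time; repeat per disk
# no extra space appears until every member is larger:
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0                 # and this is where ext4 stops at 16TB
```

So phorge's reading is right: the array itself can grow, but the filesystem can't follow it past 16TB with current e2fsprogs.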

evil_bunnY
Apr 2, 2003

Is $5/month really unaffordable?

Jolan
Feb 5, 2007
Does anyone know roughly how long it should take to get 4 new 3TB drives set up for RAID5?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell


You're screwed. There's no easy way to migrate from ext4 to xfs. I'm in the same boat with my 16TB mdadm+LVM+ext4 setup.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

evil_bunnY posted:

Is $5/month really unaffordable?

What is the $5/mo a reference to? Online backup service?

phorge
Jan 10, 2001
I got banned for not reading the Leper Colony. Thanks OMGWTFBBQ!
evil_bunnY: Yes, that's completely affordable (Backblaze, I assume, is what you're alluding to), but let's not pretend we don't know what a server this size would be used for in a home setting. I have 100Mbit internet at my house, but the upload is only 5Mbit. For every 8GB file I download I'd need to turn around and send it back up a much thinner pipe... I'm shocked my ISP hasn't ever contacted me about bandwidth usage already, but I'm sure if most of my downloads turned around and went back out my upstream they would say something pretty quick. Not to mention that 14TB at 5Mbps would take roughly 260 days to do the initial sync (not counting any additions along the way). It's just not realistic... plus I have reservations about Backblaze sticking to $5 per month for "unlimited" storage when dealing with this much data. Sure, none of this is data I can't live without... but that doesn't change the fact that this sucks. I understand RAID is not a backup... I opted to go with dual-parity RAID6 plus a hot and a cold spare. It's not infallible, but it was the most realistic plan for my situation (I'd argue that most people don't even have that much drive redundancy in a home server / NAS). I just didn't foresee the 16TB issue until I was too far along in the process.

Thermopyle: That's what I was afraid of... thanks for chiming in though. Any thoughts on whether you'll be switching to mdadm and XFS or something else (ZFS, etc.)? I had been eyeballing btrfs for the past year or so, after I realized I was going to hit the 16TB limit, but that still isn't ready for primetime either.
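The upload arithmetic above is easy to sanity-check; a quick sketch assuming decimal terabytes and zero protocol overhead, which comes in a shade under the estimate:

```shell
# initial-sync time for 14 decimal TB up a 5 Mbit/s pipe, in whole days:
# bits = 14e12 * 8; seconds = bits / 5e6; days = seconds / 86400
days=$(( 14 * 1000000000000 * 8 / (5 * 1000000) / 86400 ))
echo "$days days"    # prints "259 days", the best part of a year
```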

evil_bunnY
Apr 2, 2003

Thermopyle posted:

You're screwed. There's no easy way to migrate from ext4 to xfs. I'm in the same boat with my 16TB mdadm+LVM+ext4 setup.
You can online-convert to btrfs! Totally production ready!

fletcher posted:

What is the $5/mo a reference to? Online backup service?
Yes. There's no excuse for not having backups.

phorge posted:

evil_bunnY: Yes, that's completely affordable (Backblaze, I assume, is what you're alluding to), but let's not pretend we don't know what a server this size would be used for in a home setting.
Ah yes, the infamous hand-curated, 10TB porno collection.

evil_bunnY fucked around with this message at 20:44 on Oct 9, 2012

evil_bunnY
Apr 2, 2003

.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

evil_bunnY posted:

Yes. There's no excuse for not having backups.

16TB would take me 3 years to upload!

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

fletcher posted:

What is the $5/mo a reference to? Online backup service?

If so (or even if not so), I'm wondering if anyone has pushed really high storage amounts up to Crashplan.

I've got a couple TB uploaded on a Family+ Unlimited plan. Anyone have more? Just wondering if they ever get to the point where they tell you to cut it out.

evil_bunnY
Apr 2, 2003

fletcher posted:

16TB would take me 3 years to upload!
It wouldn't be that unreasonable if you'd started at the same time as the porno hoarding, buddy!

If it were me, I'd put together a ZFS-on-BSD rig using higher capacity drives, then sell the 15+ drives you're running now to help finance it.

IOwnCalculus
Apr 2, 2003





Thermopyle posted:

If so (or even if not so), I'm wondering if anyone has pushed really high storage amounts up to Crashplan.

I've got a couple TB uploaded on a Family+ Unlimited plan. Anyone have more? Just wondering if they ever get to the point where they tell you to cut it out.

I'm at 4.6TB from my fileserver up to Crashplan's servers, plus maybe 500GB from other devices. They haven't told me to gently caress off yet, and I've even done a restore of most of that data.

Mantle
May 15, 2004

Can someone please help me interpret this table?

http://awesomescreenshot.com/054iqv5ae

I think the "used" column is how much space the snapshot is taking up. What is the "refer" column? Also, it looks like the snapshots are taking up the space of an entire image of the dataset! Why is this? I only want it to take up the incremental increase in space of the changed files.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I believe that Used is how much data is exclusive to that dataset (aka filesystem, snapshot, or clone). Refer is how much data that dataset is actually pointing to, but other datasets could be pointing to that as well.

Though that could be wrong, as by that interpretation, your client_projects snapshots are churning by nearly 100% each time, meaning there's almost no common data between them.

E: I think I'm correct, here's the official word from Oracle:

refer posted:

Read-only property that identifies the amount of data accessible by a dataset, which might or might not be shared with other datasets in the pool.
When a snapshot or clone is created, it initially references the same amount of disk space as the file system or snapshot it was created from, because its contents are identical.

The property abbreviation is refer.

used posted:

The used property is a read-only property that identifies the amount of disk space consumed by this dataset and all its descendents. This value is checked against the dataset's quota and reservation. The disk space used does not include the dataset's reservation, but does consider the reservation of any descendent datasets. The amount of disk space that a dataset consumes from its parent, as well as the amount of disk space that is freed if the dataset is recursively destroyed, is the greater of its space used and its reservation.

When snapshots are created, their disk space is initially shared between the snapshot and the file system, and possibly with previous snapshots. As the file system changes, disk space that was previously shared becomes unique to the snapshot and is counted in the snapshot's space used. The disk space that is used by a snapshot accounts for its unique data. Additionally, deleting snapshots can increase the amount of disk space unique to (and used by) other snapshots. For more information about snapshots and space issues, see Out of Space Behavior.

The amount of disk space used, available, and referenced does not include pending changes. Pending changes are generally accounted for within a few seconds. Committing a change to a disk using the fsync(3c) or O_SYNC function does not necessarily guarantee that the disk space usage information will be updated immediately.

The usedbychildren, usedbydataset, usedbyrefreservation, and usedbysnapshots property information can be displayed with the zfs list -o space command. These properties identify the used property into disk space that is consumed by descendents. For more information, see Table 6-1.

http://docs.oracle.com/cd/E19963-01/html/821-1448/gazss.html
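The breakdown the Oracle doc mentions is easiest to see directly. The dataset name here follows Mantle's screenshot, but the pool name is hypothetical:

```shell
# one line per dataset: avail, used, and used broken down into snapshots
# (USEDSNAP), the live dataset (USEDDS), refreservation, and children
zfs list -o space tank/client_projects
# per-snapshot view: `used` is space unique to that snapshot,
# `refer` is everything it points at, shared or not
zfs list -t snapshot -o name,used,refer -s creation -r tank/client_projects
```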

Mantle
May 15, 2004

FISHMANPET posted:

Though that could be wrong, as by that interpretation, your client_projects snapshots are churning by nearly 100% each time, meaning there's almost no common data between them.

OK, that's what I suspected as well. The problem is that the snapshots shouldn't usually be churning at all. If you look at the October 10 snapshot, it looks like 0 bytes have changed, but then when the October 11 snapshot comes along, all the October 10 data gets duplicated. Any idea what's going on?

Mantle fucked around with this message at 21:19 on Oct 10, 2012

Jolan
Feb 5, 2007
I just started using my QNAP TS412, and I can set up an iTunes server on it. Could someone enlighten me about what the difference would be between such an iTunes server and just putting my music in a folder and adding that folder in iTunes?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Jolan posted:

I just started using my QNAP TS412, and I can set up an iTunes server on it. Could someone enlighten me about what the difference would be between such an iTunes server and just putting my music in a folder and adding that folder in iTunes?
DAAP access from any of your other devices that speak it (Xbox, PS3, etc.) without your computer being on and having iTunes running. The term is kind of a misnomer, since it won't play any DRMed content.

Mantle
May 15, 2004

I noticed the timestamps on the files in my snapshots have been changing every day, and my hypothesis is that rsync is treating that as a "file change" and telling the server to duplicate the file with the new timestamp. I am going to try running the rsync server with

code:
--size-only             only use file size when determining if a file should be transferred
to make rsync ignore timestamps.

Does this look right?
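One caution, hedged because the rsync options in use aren't visible in the screenshot: `--size-only` hides the symptom (a same-size file with a new timestamp is skipped) rather than curing it. If the sender isn't preserving modification times, the usual fix is `-a` (which implies `-t`); and on a ZFS target, `--inplace` matters too, because rsync's default write-to-temp-then-rename makes every updated file a brand-new file that shares no blocks with the previous snapshot:

```shell
# -a preserves mtimes (among other things), so unchanged files stop
# looking changed; --inplace overwrites files in place so snapshots
# keep sharing the untouched blocks (paths hypothetical)
rsync -a --inplace /client_projects/ backupserver:/tank/client_projects/
```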

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Do you know how much data is being synced into your client datasets? I suppose if it's re-syncing all the data each night, the blocks could become shifted and all look different. But shouldn't rsync be smart enough to run a full checksum of the file before it sends it over?

Megaman
May 8, 2004
I didn't read the thread BUT...
I have a question about FreeNAS, since it seems my until-now awesome Synology 1511+ is on the fritz (looks like a reboot loop? not sure yet...).

If I set up FreeNAS with, say, 20 drives in raidz2, and the FreeNAS box dies but the drives are fine, can I migrate those to a fresh working FreeNAS machine? And if so, how do I go about it? Wouldn't I need some kind of configuration table to make sense of the disk set? Or is it some black magic that automatically understands the entire set?

Galler
Jan 28, 2008


The drives themselves contain all the information needed to make the array work. It's really as easy as plugging everything in and using the import command.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
RAIDZ2 on ZFS is no more "magic" than any other RAID setup on any other filesystem. All you'd have to do is plug all the drives in, boot up FreeNAS (or pretty much any other ZFS OS, like NAS4Free), and import the array. Done. poo poo, if you kept a backup firmware/settings image, you could load that onto the new box and probably wouldn't even have to use import. Just plug and go.
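Concretely, the import Galler and DrDork describe is two commands on the replacement box (pool name hypothetical):

```shell
zpool import          # scans the attached disks and lists importable pools
zpool import tank     # imports by name; add -f if the old box died without
                      # cleanly exporting the pool
```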

Mantle
May 15, 2004

FISHMANPET posted:

Do you know how much data is being synced into your client datasets? I suppose if it's re-syncing all the data each night, the blocks could become shifted and all look different. But shouldn't rsync be smart enough to run a full checksum of the file before it sends it over?

I don't know the answers to these questions, but I thought rsync would be smart enough to do that. Hopefully an rsync wizard can jump in here and clear things up.

Megaman
May 8, 2004
I didn't read the thread BUT...

Galler posted:

The drives themselves contain all the information needed to make the array work. It's really as easy as plugging everything in and using the import command.

Would you recommend FreeNAS over a proprietary product such as a Synology? It seems that if the hardware fails you could be screwed, whereas FreeNAS seems to be very modular.

Megaman
May 8, 2004
I didn't read the thread BUT...
Also, another question: how do most people add SATA drives? Do you shut the whole system down every time you want to add a drive, or do you plug them in on the fly? I know Linux handles hot-plugging SATA just like USB, but I don't know if it's recommended, and I certainly don't know how FreeBSD would handle it. Anyone have experience with this?


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Megaman posted:

Would you recommend FreeNAS over a proprietary product such as a Synology? It seems that if the hardware fails you could be screwed, whereas FreeNAS seems to be very modular.
FreeNAS/NAS4Free have the advantage of being, well, free. They're also modular, have a growing list of plug-ins and 3rd-party programs, and get frequent updates/improvements. Synology and the like are quite expensive, often offer fewer features, and don't update nearly as often. However, you're getting a finished, professional device up front, guaranteed to work well (no hardware issues, much less loving with configuration, etc.), and professionally supported.

It's kinda the difference between building your own PC and just buying an Apple. Neither are right or wrong, they're just different and aimed at different people.

Megaman posted:

Also, another question: how do most people add SATA drives?
Hot-plugging varies with OS and hardware. The N40L, for example, does not support hot-plugging at the hardware level, no matter the OS. However, if you have hardware and software that support it, it's not like it hurts the drive or anything. I might be concerned if I were popping the drive in and out all the time (especially removal, since you're moving the disk while it may still be spinning), but if it's just an occasional thing, don't worry about it.
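On the Linux side, when the controller does support hot-plug, the kernel usually notices a new disk on its own; if it doesn't, you can ask the SCSI layer to rescan (host number hypothetical; FreeBSD/FreeNAS does the equivalent with `camcontrol rescan`):

```shell
# force host adapter 0 to rescan its buses for new devices
echo "- - -" > /sys/class/scsi_host/host0/scan
dmesg | tail          # the new disk should appear, e.g. as /dev/sde
```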

DrDork fucked around with this message at 04:17 on Oct 11, 2012
