Froist
Jun 6, 2004

I decided I needed to drag my archaic backup system into something more modern as one of my drives is dying, so I ordered a N40L before the cashback offer finished. Previously I had a matching-size drive for each of my 'live' drives which I'd periodically plug in and do a manual diff (not as bad as it sounds!) and copy, then stash under my bed again.

I'm reasonably techy but not all that experienced with linux, so it's a whole new world of things to learn and I was hoping for some sanity checking of what I've come up with. I'm basically planning:
  • Keep the 250gb drive for the OS but move it up to the top bay
  • Ubuntu headless server/SSH admin
  • ZFS over 4x2tb in RAIDZ, should give me 6tb useable space and protection should a drive fail
  • Samba sharing to Macs + PCs around the house
  • Possibly Time machine backups - though it seems I would need AFP for this
  • DLNA server, maybe with CouchPotato etc down the line
  • For the moment just sticking with 2GB ram - will this actually become a limiting factor for a pure-server use case?

It's ZFS (the most critical part) that I have the main worries about due to lack of experience. I tried it in a VM and it seemed far too easy - to the point I assumed I was missing something. It basically came down to:
code:
sudo apt-get install zfs-fuse
sudo zpool create media -m /storage raidz /dev/sda /dev/sdb /dev/sdc /dev/sdd
I understand I need to set up a schedule to scrub it, but is there anything else I may be missing? Also, I'll probably use the kernel version of ZFS rather than Fuse for the real thing - it sounds like the performance difference is worth the extra effort to set it up.
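
For reference, the native route looks to be roughly this, going from the guides I've read - the PPA and package names are from memory, so treat them as approximate:
code:
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
# pulls in the DKMS kernel modules plus the usual zpool/zfs userland tools
sudo apt-get install ubuntu-zfs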

I'll also be installing Apache and planning to write a mobile-friendly web interface for browsing the server/viewing stats/etc. I've seen Webmin - are there any other web interfaces people use?

Froist
Jun 6, 2004

DNova posted:

Sounds fine, but I'd push you towards FreeNAS on a 2-4gb USB stick with 8gb of RAM rather than ZFS on linux on the 250gb drive.

Thanks for the backup. The only reason I'd gone this route was that I'd heard it was hard to extend FreeNAS to cover other use cases that might come up, whereas with a standard linux install I can just SSH in and install whatever. I was thinking of trying out Ubuntu from a flash drive rather than the supplied disk, but then saw the idea mentioned that the spare space on this drive could be used for in-progress downloads, allowing the main array to stay spun down except when in use.

I really need to just start playing around with it, but it's going to take a while of shuffling data around before I can free up all the disks I intend to use, and I want to get the ZFS side straight before I do this so the data I copy back to it is safe - I don't want it sitting around on the 'temporary' disks for too long without a backup.

Froist
Jun 6, 2004

Thanks for all the pointers guys. I think I'm going to stick with Ubuntu rather than FreeBSD as I know it better. I did some reading on the ZFS-on-Linux newsgroup and lots of people seemed to be praising its state, saying they're on the verge of putting it into production.

titaniumone posted:

2GB of ram is not enough for a functional ZFS system. Your performance will be bad, and if you're unlucky, you may experience kernel panics.
Hmm. UK memory prices don't seem to be quite as cheap as in the US at the moment (especially for ECC), but I think I'll grab a 4GB stick to put in the other slot, taking it up to 6GB. If that still turns out to be a limiting factor down the line I can always get another 4GB to max the system out. It's below the "1GB of RAM per 1TB of storage" rule of thumb discussed here, but should be a whole load better than just 2GB.

Wheelchair Stunts posted:

Oh, man. You don't have to use fuse! Check this out. Especially since you use Ubuntu, it even has its own little PPA setup. Performance was a lot better for me through this than fuse.

Yeah, before you posted this I decided to do some tests with two old mismatched drives I had lying around. I'm thinking the actual performance might increase with my 'real' drives, but it should be a fair comparison between the two methods.

With Fuse I was getting 33MB/s write and 54MB/s read, with it consuming almost all of my memory (leaving 77MB free). With the ZFS-on-Linux setup I got 98MB/s write and 148MB/s read, leaving about 230MB of memory free. This was dd-ing a 10GB file from /dev/zero for the write test and back out to /dev/null for the read test. I thought it would be more fiddly to set up, but having tried it, it's definitely worth it.
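
The tests themselves were nothing fancy - roughly along these lines, with the path just being wherever the pool is mounted:
code:
# write test: push 10GB of zeroes onto the pool
dd if=/dev/zero of=/storage/ddtest bs=1M count=10240
# read test: pull the same file back out to /dev/null
dd if=/storage/ddtest of=/dev/null bs=1M
rm /storage/ddtest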

Final question..

The two drives I used for my test were 500GB and 320GB. I initialised the zpool with the command:
code:
sudo zpool create -f mypool -m /storage raidz /dev/sda /dev/sdb
I know 2 drives doesn't really constitute raidz - I'm planning to use 4 of the same size for real. However, my understanding was that in this configuration I should end up with a pool with a capacity of 320GB, based on the smallest drive. When I run "zfs list" I get:
code:
NAME     USED  AVAIL  REFER  MOUNTPOINT
mypool  10.0G   740G  10.0G  /storage
Why is the pool showing up with ~750GB of capacity? Have I done something wrong here? This is how my pool shows up:
code:
sudo zpool status
  pool: mypool
 state: ONLINE
 scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	mypool      ONLINE       0     0     0
	  sda       ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

Froist
Jun 6, 2004

Leb posted:

You're just striping two drives together, without any redundancy.

Either add the disks as a mirror or add a 3rd or 4th disk to achieve raidz1.

Ah, thanks. So when I set it up with 4 disks for real, that 'create' command will automatically add redundancy?

Froist
Jun 6, 2004

FISHMANPET posted:

No, you need to specify what redundancy you want. If you don't specify anything you get a stripe. I don't remember the exact commands, but I think it's something like zpool create poolname raidz dev1 dev2 dev3 dev4

Yeah, sorry, I meant the 'create' command in my earlier post that included 'raidz'. I guess it just didn't add redundancy in my test case because I didn't include enough disks.
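
So for the real thing, if I've understood right, it should just be a case of something like this (device paths are placeholders - I gather using /dev/disk/by-id is safer than sdX in case the drives get shuffled):
code:
sudo zpool create -f -m /storage media raidz \
    /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
# zpool status should then show a raidz1-0 vdev containing all four disks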

Thanks for all the help guys, I'll stop crapping up the thread with inane questions now :)

Froist
Jun 6, 2004

evilhacker posted:

Do you need to have all of your drives when you create your ZFS volume or is there a way to migrate from a single drive to a mirror to a zRaid?

I saw a page today where someone described expanding from a 2 drive mirror up to a 4 drive raid-z. It definitely sounded a bit sketchy but he said it worked. I can't find the link now, it's on my work PC's history, but I'll look it up tomorrow.

Froist
Jun 6, 2004

evilhacker posted:

Do you need to have all of your drives when you create your ZFS volume or is there a way to migrate from a single drive to a mirror to a zRaid?

Froist posted:

I saw a page today where someone described expanding from a 2 drive mirror up to a 4 drive raid-z. It definitely sounded a bit sketchy but he said it worked. I can't find the link now, it's on my work PC's history, but I'll look it up tomorrow.

Here's the page I was thinking of: http://unix4lyfe.org/zpool/

Froist
Jun 6, 2004

Wheelchair Stunts posted:

I have had luck with Ubuntu and zfs on linux. You just add the ppa and aptitude install what you need. It does it through a "portability layer" which is just them putting compatibility for Solaris APIs in the kernel. This taints the kernel from a licensing perspective, so don't expect this to be distributed ever. I seemed to be able to get decent performance especially in comparison to ZFS over Fuse. I would still probably go with FreeBSD or more likely a Nexenta/Illumos/OpenSolaris type.

Another +1 for ZFS on Linux here. I set it up a couple of months ago on my N40L as I'm only really familiar with Ubuntu. It's been flawless since day 1 and can pretty much max out my gigabit network. Definitely a huge improvement over the Fuse version that I briefly tested.

Froist
Jun 6, 2004

kapalama posted:

So does that ZFS work by just taking advantage of all the storage space you got, and dynamically resizing and all that as you add random new drives?

No, it's not as clever as Drobo/BeyondRaid. I threw 4x2TB drives in it giving me 6tb with redundancy - given the form factor of the machine I can't expand the storage beyond that.

In general a zpool (which is what the OS writes to) is made up of one or more vdevs, which are in turn made up of multiple physical disks. You can't change a vdev once it's created, but you can add more vdevs to a zpool to expand its capacity. So if you start with 3x1tb (giving 2tb with redundancy) you could later add another 3x2tb to take the zpool's total up to 6tb of usable space (original 2tb + new 4tb).
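
In command terms that later expansion would look something like this (device names are just examples):
code:
# initial pool: a single 3-disk raidz vdev
sudo zpool create tank raidz sda sdb sdc
# later on: add a second raidz vdev to grow the same pool
sudo zpool add tank raidz sdd sde sdf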

Froist
Jun 6, 2004

kapalama posted:

Performance in the sense of speed, right?
What I am doing now, software RAID of externals on Mac OS X is slow as a pig anyway.

Not having to think about extending it would be worth it, but not more than a grand(!). I guess I should have checked the price.

On the other hand, being able to not have to buy sets might still make it worth it.

I've no experience with the 8-bay Drobo, but I did have a 4-bay for a while. It was horrifically slow (to the point of ruining my media center experience - when it had to refresh the DVD covers for all my shows it took minutes) and at one point it corrupted the filesystem, nearly losing all the data on it. The ease of use with the Drobo is cool, but the whole point of the device is data security, and after my experience I couldn't trust it again. Plus it's proprietary, so if the Drobo itself dies there's no way to read the data except with another Drobo.

As far as buying pairs of drives goes, I'm talking off the top of my head here, but I can't see how 8 bays vs 4 would make a difference there? Unless you only half-fill it to begin with and slot new individual drives into the empty bays.

Froist
Jun 6, 2004

kapalama posted:

How long ago did you own it?

I am not sure if there is a NAS system which would take 8 drives to begin with, and if there is the price starts to climb to near drobo land.

Isn't the big problem with NAS RAID sets that if one drive fails, and you can no longer match the drive, you have to buy another set of 4 or whatever?

(Plus I kind of don't want to have to do setups and learn how to run a system just to access drives.)

I had it for less than a year before selling it again on eBay and going back to manually mirrored backups onto external drives. Far more primitive but it felt safer to me, albeit without up-to-the-minute backups.

I've no idea about off-the-shelf NAS systems, but it'd be easy to find a tower case that could take 8 disks and roll your own. I know ZFS doesn't need identical replacement drives; I'm not sure about software RAID, though I'd heard some hardware RAID setups do want matching drives.

Depending on your technical level you could look into building your own system to do it - it's likely easier than you think. I set mine up with very little previous linux experience (none with ZFS) after about a week's reading up, and as mentioned earlier it's been running perfectly for a couple of months.

D0N0VAN posted:

This scares me... I have had a 4 bay drobo for about a year and a bit now, it has worked flawlessly (although a little slow) but I always worry that one day it could just fizzle and destroy all my data.

I didn't want to scare anyone because it was likely very much a fringe case, but from that point it was a no-go for me personally. I'd been using HFS+ as the filesystem (rather than the probably more common NTFS) and had switched a few times between using it with the DroboShare and directly connecting to my Mac over USB. One day neither of them would read the filesystem any more - the only way I got the data off it was a last-ditch attempt of using MacDrive on a Windows PC to read it.

Froist
Jun 6, 2004

Not strictly NAS related, but I'm sure people in here will know the answer.

I set up my N40L with Ubuntu/ZFS on Linux and it's been ticking over fine for a couple of months, but I remembered I should really have a scrub/snapshot schedule set up. I set these entries in my crontab (sudo crontab -e):
code:
0 3 * * 6 /sbin/zpool scrub mypool
0 0 * * * /sbin/zfs snapshot mypool@`date +%Y-%m-%d-%H-%M`.auto
The scrub schedule works fine once a week, but the daily snapshots are never created. If I run the command manually (sudo /sbin/zfs snapshot mypool@`date +%Y-%m-%d-%H-%M`.auto) it does create a snapshot for that date/time. Is there something obvious I'm missing, being reasonably new to linux?
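
One thing I've since read that might be the culprit: apparently cron treats a bare % as the end of the command (everything after it gets fed to stdin), so the date format would need escaping, something like:
code:
0 0 * * * /sbin/zfs snapshot mypool@`date +\%Y-\%m-\%d-\%H-\%M`.auto
Can anyone confirm that's all it is?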

Also, is taking a snapshot daily overkill? What's the consensus on frequency? As it's media storage my files don't change all that regularly, so weekly would probably be fine, but snapshots seem low-overhead and there's little disadvantage to doing them daily.

Froist
Jun 6, 2004

Has anyone got any experience with Windows 8 Storage Pools and could help me make a decision?

I've got an N40L that's about 6 months old, running Ubuntu Server with ZFS and 4x2tb drives, that just ticks over every day as a media storage box with usenet tools. About a foot away I have a 5-year-old (but still going strong) Windows 7 Media Center box that runs as a PVR and streams any other media from the N40L.

Pushed into it by an impending house move, I've realised this is a bit wasteful and could (hopefully) be consolidated into the N40L doing both jobs in a smaller, neater and quieter package. I really like WMC so don't want to move to Ubuntu/MythTV, and it seems like for this use case Storage Pools should cover my needs as ZFS does now.

I've done some reading around and Storage Pools performance doesn't seem stellar, but past the initial data migration I don't think the poor write performance will matter for me, as it's just a media playback device - I very rarely copy files to it across the network, it just ticks over and downloads things at its own pace overnight. It looks like read speed from a parity pool isn't too bad and only write speed really suffers? And even if it does, it shouldn't be affected to the point it can't smoothly play back video? I also read it isn't great at adding storage to a parity pool later, but really 6tb should see me through for a long time (famous last words) - at the moment the total of my years of hoarding hasn't reached 3tb. The other upside is that I'd feel a bit more secure with my data on a "supported" solution. ZFS just works, but I'm no linux expert and not confident in my ability to save the data if a drive ever did die, whereas Storage Pools should be a bit easier.

Can anyone see any drawbacks that I'm missing, or does this seem like a good idea? The only things stopping me trying it out are the investment in a new TV tuner (as the N40L only takes PCI-E) and the week's worth of data shuffling needed to clear the drives for a new install.

Froist
Jun 6, 2004

PitViper posted:

How is the ZFS-on-Linux port? I see there's the FUSE project, as well as the native kernel port. Wondering if there are benefits or drawbacks to either in particular. My old box-of-random-drives finally kicked the bucket, so I'll be rebuilding things properly next week when the new hardware shows up.

I've been using it on Ubuntu for about 6 months and not had a single problem. I guess the real test is how it performs when something goes wrong, but unfortunately (or luckily!) I can't attest to that. I did try out the FUSE version at first and performance was pretty poor, so switched to the kernel version and it blew FUSE out of the water.

Froist
Jun 6, 2004

Tornhelm posted:

It works fine as an HTPC. I've got one in my parents' lounge running Win 7, software RAID 5 and using XBMC as the front end. All you need is a cheap passively cooled video card to run audio over HDMI.

Hijacking this topic: what's the difference between software RAID 5 and Storage Spaces (in Windows 8)? I was planning on switching my N40L over from Ubuntu/ZFS to Windows 8/Storage Spaces, but if I can stick with Win7/RAID 5 I'd probably be happier with that. I've got no plans to upgrade the capacity of the pool (it's 4x2tb == 6tb with redundancy at the moment); I just hadn't realised you could do software RAID in Windows. I guess the performance is better than Storage Spaces too?

Froist
Jun 6, 2004

Tornhelm posted:

The performance of software RAID 5 isn't anything special, but it's more than enough to stream media to two other computers while my parents watch poo poo on it in the lounge room. It probably also helps that it's one of the other computers that is the sabnzbd machine and handles most of the more intensive work.

Sorry to keep questioning you, but you seem to have the setup I want :). I did a bit of reading around after this and everything I saw implied that Windows 7 removed the RAID 5 option. Are you using third-party RAID software, or is there a different way of getting it working? One of the "workarounds" I saw mentioned was "Install Windows 7 -> Install VirtualBox -> Install linux VM with MDADM -> Give VM access to drives -> Access from Windows via virtual network", which just sounds hideous.

Froist
Jun 6, 2004

astr0man posted:

If you're going to do this, why wouldn't you just keep your current Linux/ZFS setup?

I've questioned myself too, but basically I want to roll two computers (fileserver + DVR/playback) that sit right next to each other into one. I could stick with ZFS and use MythTV, but I really like Windows Media Center - I've been using it for years. ZFS has been fast/stable as anything for me, but really I don't need blinding speed for playing back video files.

Froist
Jun 6, 2004

SilentGeek posted:

I'm putting together an N40L with FreeNAS 8 and 3 x 2TB WD Red drives and am planning on setting it up with Raid-Z. I was curious if it will be possible in the future to add a couple extra drives to the RAID without having to rebuild the entire RAID. Sorry if this has been asked already.

Also if I wanted to password protect only certain files on the NAS would I need to setup a separate dataset on the RAID volume to hold the data I want password protected? I'm putting this together so I can stream videos and such across the house but I also want to keep all of my working documents on it so I can access them from anywhere in the house. I don't care who has access to the videos and music but would rather that only I had access to my documents.

Short answer: ZFS doesn't let you add disks to an existing raidz vdev to expand its capacity later - you can only grow a pool by adding a whole new vdev (i.e. another set of disks).

If you're only accessing the files over the network, i.e. no direct access to the box/SSH accounts for the other people in your house, you can just configure separate shared folders with different permissions. I do exactly this with my N40L/Ubuntu/ZFS box. I've never tried using NFS, only Samba. With Samba you can either just password-protect the share (so people can browse your shares and see "SilentGeek's dirty videos" but not get into it) or hide it from view completely so you need to know the share name to access it.
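
As a rough idea of what that looks like in smb.conf (share names, paths and the user are just examples):
code:
[media]
   # open share - anyone on the network can browse and read it
   path = /storage/media
   guest ok = yes
   read only = yes

[documents]
   # password-protected and hidden from the browse list entirely
   path = /storage/documents
   valid users = silentgeek
   browseable = no
   read only = no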

Froist
Jun 6, 2004

moron posted:

My ~2 year old Microserver has developed an intermittent gentle humming sound. Tapping the case tends to get it to stop temporarily, so I'm inclined to think that one of the fans has developed dodgy bearings. I've tried to identify whether it's the large rear exhaust fan, or the fan in the power supply, which is to blame, but I've not been able to pinpoint the source.

Has anyone else encountered this?

Mine (an N40L coming up on a year old) has always had an intermittent vibration sound, if that's the same thing. I think it's just something rattling on the right side of the case - the lightest touch there will stop it - but I've not yet figured out what it is or how to stop it happening. I'm pretty sure it's not one of the fans though: the PSU is on the opposite side of the case, and I replaced the main fan with a quieter version a few months back.

Froist
Jun 6, 2004

Mr Shiny Pants posted:

If you just want a Nas why not get a Microserver? They practically give the N54L away and if you want more horsepower you can get the GEN8 version.

I have the older one and it runs Ubuntu like a champ with ZFS as my filesystem. It saturates Gbit Ethernet easily and is rock solid.

Snap, I've got the N40L with ZFS and Ubuntu and it's been rock solid/fast for 2.5 years.

You really don't need much know-how to set it up either - I bundled my way through it with a few guides and a little Linux experience. I did get a bit of a scare this week when I did an apt-get upgrade and my pools disappeared after a reboot, but I just needed to explicitly update the ZFS packages.
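
For anyone else who hits the same thing, the fix was roughly along these lines (the exact package name depends on how you installed ZFS - mine came from the PPA):
code:
sudo apt-get install --only-upgrade ubuntu-zfs   # rebuild/update the kernel modules
sudo modprobe zfs
sudo zpool import -a   # bring the pools back in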

A few months ago I dipped in here, saw the discussion about xpenology and almost got carried away into formatting the whole thing to set that up, but came to my senses and stuck with stability :)

Froist
Jun 6, 2004

Megaman posted:

What would be the reason for choosing Ubuntu over FreeNAS? It's the same thing, except FreeNAS comes with a webUI and it's completely brainless, whereas with Ubuntu you're forced to use the command line the entire time, at least as far as I know. Thoughts?

Meh, I don't have a good answer. Back then I guess I was in the stage of my life where I thought it was "cooler" to set things up myself. If I was doing it now I'd probably go with something more "managed", but there's no real reason to change things now. Although I guess in theory I could format the OS drive and import the ZFS pool into another setup if I really wanted to..

On a tangentially related subject, I'm now getting the following from zpool status:

quote:

status: The pool is formatted using a legacy on-disk format. The pool can still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the pool will no longer be accessible on software that does not support feature flags.

I guess there's no real risk to going ahead?
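
The command itself looks simple enough - as I understand it, it's just:
code:
sudo zpool upgrade -v       # show what the installed version supports
sudo zpool upgrade mypool   # one-way upgrade of the pool's on-disk format
with the only real catch being the one in the message: older ZFS implementations won't be able to import the pool afterwards.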

Froist
Jun 6, 2004

I have a question that may get me chased out of this thread with pitchforks on principle. Here goes..

Current setup: A few years ago I set up a NAS in a HP N40L with 4x2tb data drives using ZFS, plus a smaller drive for the OS (Ubuntu). I've been using it with one of the drives for parity (so 6tb usable in one zpool), running Sickbeard etc, and it's been ticking over with zero hassle since the start. A bit more recently I got on board with the "raid != backup" train, and bought 3x2tb external drives to occasionally rsync the data to in categories and (ideally, though I'm lax on it in practice) keep off-site.

Issue: Predictably I'm beginning to creep close to my current storage limits. One of my externals (and the one for which the data category grows the fastest) is currently 96% full, and the whole zpool itself is 82% full.

Controversial plan: As there's no way for me to easily grow this pool without buying a full array of new disks, and I already have an external "occasional yet good enough" backup of the data, I'm thinking of throwing caution to the wind: switch to a non-redundant storage method so I "gain" another 2tb of internal space, and buy one more external drive to cover the shortfall in external backup space. Nothing I have is particularly irreplaceable (except around 400gb of raw GoPro footage, which I may throw into Glacier but would more likely be better off losing as I'll never look at it again anyway).

Desired features:
  • I'd like to have something where I could pool data spanning multiple drives into single network shares, without linking the actual storage in a way that means one dead disk takes out all the data. I'm fine with some extra effort shuffling the data around when required to achieve this, but the key factor is that the total of some data I would like to keep "merged" will soon grow too big to be stored on a single disk.
  • SSH/MySQL server/Sickbeard etc
  • Ability to run cron jobs/custom scripts from the shell, mainly to run backup scripts to my externals (ExFAT/NTFS formatted)
  • To support the above, some form of scripting language (python etc)
  • Time Machine support would be a nice bonus but far from a requirement
  • Some sort of VPN server so that I don't have to expose different services to the web would be great, but I've been living without it this long

Is this the kind of thing I could achieve with Xpenology? Am I right in thinking Xpenology is Debian under the hood, so would allow extra tinkering/functionality beyond what is provided as stock/with plugins? I don't mind a chunk of time and effort setting this up in the short term.

Froist fucked around with this message at 16:26 on May 6, 2016

Froist
Jun 6, 2004

I don't really know what advice I'm looking for here, just spitballing naive ideas.

I've got an N40L that's been my stable NAS for 5 years, running ZFS on Ubuntu 14.04. It's got 4x2tb drives for the main storage pool which is now 96% full, and another 1.5tb drive for the OS which is basically empty. My rate of consumption has gone down drastically, so I guess I'd rather make use of what I have than make an investment in upgrading it right now.

I have the data mirrored onto 3x2tb offline, external drives using some rsync scripts I put together years back. But I'd been getting a bit negligent about keeping these up to date, and while I'd prefer it to be plug-and-auto-backup (which at one point it was), it recently took me a couple of hours to piece together how it all worked and get them updated. Combined with the fact it's still running a 6-year-old OS and a bunch of services I no longer use, it feels like time for a fresh start.

So I'm thinking about wiping the lot and rebuilding it with Xpenology, utilising the 5th drive to give me a bit of extra headroom before I have to bite the bullet and do a proper upgrade. I've read that with the N40L I'd need a NIC to be able to run the latest DSM versions, but beyond that how much headache is it running an Xpenology box?

I was also thinking of setting it up as a Plex server to keep my media database centralised, but I'd only be playing on the same LAN and have read that Plex transcodes by default, so maybe I shouldn't go down that route. I've been using Kodi on my smart TV (and previously a NUC for playback), but I'd prefer the database to live with the media than on the playback device. I know you can do that by tinkering with Kodi config files, but in general I feel like I'm 5+ years out of touch with the most appropriate solutions.

Any general advice?

Froist
Jun 6, 2004

Thanks for the tips! I had been using a single 5.25" -> 3.5" adapter to mount in the top bay; I didn't realise the double-drive versions existed. The modified BIOS rings a bell, but it's so long ago that I set it up that it's worth me checking again.

The 3x2tb externals I have are all 2.5" drives, so now I'm thinking maybe I should just get a single huge external for my offline mirror (rather than trying to juggle it between multiple drives), then crack my existing externals open to make it 7x2tb internal. That would double my usable space from 6tb to 12tb, which should keep me going for the foreseeable future.

Any opinions on a frankenstein card like this? They don't seem that common, but it seems like it would be useful to add USB 3 ports to transfer that amount of data in and out of the box, as well as adding the internal SATA connections.

Froist
Jun 6, 2004

Minty Swagger posted:

Brief googling seems to say that card has worked ok before. Give it a try!

If you're planning on putting 2.5 drives into your NAS why not get one of those 5.25 -> 4x 2.5 adapters? Just put 4 in for a total of 9 drives? You might be getting close to the max your PSU can handle though without upgrading that too, so be careful. The original PSU is 150W, but there are ~300W ones floating around online you could upgrade to, if you were so inclined.

This is actually what I went with in the end, though it’ll be 7x2tb rather than 9 drives (not sure I could manage 9 anyway?).

I do have a spare 1tb 2.5” I could put in the last slot, or a small SSD that may be useful for caching (but I don’t think any of my workloads would make use of that). I figure the PSU should be fine like this as these exact (bus powered) drives have all been plugged in at once before, and I’m removing a 3.5” drive along the way.

Thanks for the tips! Weekend project sorted, if I can get all the data shuffled around by then..

Froist
Jun 6, 2004

It turned into a bit of a messy saga, but I got my minimal-cost storage upgrade on my N40L (from a few pages ago) up and running.

I bought the 4x2.5" mount and only found out afterwards that the external drives I had (Samsung M3s) aren't shuckable and have a proprietary connector inside, so that's winging its way back to Amazon now. In the end I got Xpenology with DSM 6.17 running using the onboard NIC, 4x2tb internal drives, and 3x2tb externals mounted inside and connected up via a USB3 hub/PCIe card, with DSM tricked into thinking they're internal drives. I know this is far from ideal from a performance perspective, but the server sits in my loft connected via powerline adapters so the whole thing isn't setting speed records anyway. And I still have a full mirror of the data onto a (new) external drive.

Having said that, network performance on Xpenology seems pretty bad compared to what I'm used to from my old Ubuntu + ZFS setup. Copying to an SMB share I get around 3MB/s transfer rate, and copying from the same share I get around the same rate but CPU usage jumps to 50%. Having seen this I tried AFP and got about the same results, but without the CPU usage spike. I tried sticking my old OS drive back in there (i.e. exactly the same network conditions) and got 10MB/s both ways.

While the "internal USB drives" setup is a bit messy I'm pretty sure that isn't the bottleneck - running a 'dd' test directly on the Xpenology install shows 271mb/s write and 477mb/s read performance from the volume. With hindsight I probably should have run benchmarks like this before copying 6TB back to the box, lesson learned I guess.

I like having the nice DSM interface, but if there isn't a quick fix for this I'm tempted to switch to Unraid or FreeNAS or something before I get too much set up on here.

tl;dr: Any ideas why network throughput would be less than half that of my previous setup after moving to Xpenology?

Edit: I booted Xpenology back up to try iPerf after taking some benchmarks on my old install, and everything seemed to be performing similarly. So I tried some file copies and those had improved too, without the CPU spike. Not sure what the problem was, but it appears to be solved.
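
For the record, the iPerf check is just a one-liner on each end - something like this, with the IP being a placeholder for the server's address:
code:
iperf3 -s                  # on the server
iperf3 -c 192.168.1.100    # on a client, pointing at the server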

Leaving the post here so you can still laugh at my hideous, slow setup. But at least it's no worse than it was before :)

Froist fucked around with this message at 15:00 on May 27, 2020

Froist
Jun 6, 2004

I get this goes against the ethos of this thread, but I'm looking for advice about moving away from a NAS.

I've been running a home NAS for probably 15 years, from a Drobo, to the N40L running Ubuntu+ZFS, to the same N40L running Xpenology. I have 12tb of redundant capacity in there, but anything of irreplaceable value is backed up on other cloud services. Right now this is all really aging hardware and a bit of a Heath Robinson setup - I have four 2tb internal SATA drives (some of them may be the whole 15 years old), and when I was running low on space a couple of years back but didn't want to invest a lot, I mounted three 2tb 2.5" USB drives I already had inside the case (not even shucked, just hooked up to a USB hub). Given the age of the hardware and the hackiness of this setup I also have an external 12tb drive, which I plug in occasionally and mirror the lot to as a cold backup using Synology's USB Copy feature.

With the combination of recently becoming a dad, moving to an (inefficient) house with lots of improvements to make, and rising energy costs (UK), I'm finding it harder to justify the expense/complication of running an extra box 24/7 for the amount I actually need to access it. The first two factors also mean I don't really want to spend over the odds on a replacement solution.

I'm thinking of just moving back to a single big (12tb) drive in my PC that I can just boot/WoL when I need to access something. Obviously I'd be losing redundancy here, but would be hoping to still do something similar to the USB Copy for a cold backup. Is there a Windows feature/3rd party software that can replace Synology's USB Copy feature? i.e. not doing a full disk clone or taking images, but mirroring delta updates to an external filesystem.

Alternatively, are there any "better" solutions that would markedly reduce the energy footprint (~60W) without being a large investment? My option B was to pick up a 2-bay Synology DS220+, but that would go from "£200 for a 12tb drive" to "£500 for a 12tb drive and new enclosure", and while it would be better for redundancy I'd lose my cold backup unless I spent another £200 on yet another drive. Even if the DS220+ ran at 15W (the quoted figure, though I don't think that includes the drives), it would take 5.5 years for the energy savings to pay back that outlay.

Froist
Jun 6, 2004

Klyith posted:

2. If you can nerd out with a little bit of command line, robocopy or rsync can do this type of thing. I rolled my own backup system based on robocopy for several years (then I quit windows this year). Probably I should have been using rsync the whole time -- it has built-in options to do what I was doing without all the batch hackery. But my thing worked well enough.

Good reminder - when I ran this as an Ubuntu box I had my own rsync scripts, so I could probably do the same again.
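
The core of those scripts was really just one rsync call per category, something like (paths are examples):
code:
# mirror one category out to the external drive, deleting anything that no longer exists on the source
rsync -avh --delete /storage/photos/ /media/external1/photos/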

Klyith posted:

Re: redundancy, if you want to get 2 drives you can mirror them using ntfs raid or storage spaces in windows.

Another good point, thanks. Maybe I could just pick up a matching 12tb, shuck them both and stop worrying about the cold backup..

Thanks Ants posted:

Does the NAS have to be on 24x7? I have Unraid running on a newer Microserver and cut the energy consumption to a third of what it was by running a shutdown script each day at about 3am and then using a £10 TP-Link smart plug to turn it back on each evening, with the BIOS configured to boot whenever AC power is restored.

I'd have to think about whether I could make this work. The times I end up accessing it are a bit unpredictable - I work from home, so I occasionally want to access it during the day and can't just leave it off until the evening to automatically save two-thirds of the day. But maybe I could just try a scheduled shutdown every day so it doesn't stay on unused, and then WoL it when I do want to access it. I do have a spare smart plug, but I don't think I'd need it when WoL works fine.

Thanks for the ideas!
