|
I decided I needed to drag my archaic backup system into the modern era, as one of my drives is dying, so I ordered an N40L before the cashback offer finished. Previously I had a matching-size drive for each of my 'live' drives, which I'd periodically plug in, manually diff (not as bad as it sounds!) and copy, then stash under my bed again. I'm reasonably techy but not all that experienced with Linux, so it's a whole new world of things to learn, and I was hoping for some sanity checking of what I've come up with. I'm basically planning:
It's ZFS (the most critical part) that I have the main worries about, due to lack of experience. I tried it in a VM and it seemed far too easy - to the point I assumed I was missing something. It basically came down to a couple of commands.
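For anyone curious, the whole VM experiment boiled down to something like this (pool and device names are just what I picked; yours will differ):

```shell
# One command to build a raidz pool out of four disks...
zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
# ...and one to carve out a filesystem, which gets mounted automatically:
zfs create tank/media
```

That really is it - no partitioning, no mkfs, no fstab entries - hence my suspicion that I was missing something.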
I'll also be installing Apache and planning to write a mobile-friendly web interface for browsing the server/viewing stats/etc. I've seen Webmin, are there any other web interfaces people use?
|
# ¿ Apr 29, 2012 22:04 |
|
DNova posted:Sounds fine, but I'd push you towards FreeNAS on a 2-4gb USB stick with 8gb of RAM rather than ZFS on linux on the 250gb drive.

Thanks for the feedback. The only reason I'd gone this route was that I'd heard it was hard to extend FreeNAS to cover other use cases that might come up, whereas with a standard Linux install I can just SSH in and install whatever. I was thinking of trying out Ubuntu from a flash drive rather than the supplied disk, but then saw mentioned the idea that the spare space on this drive could be used for in-progress downloads, allowing the main array to be spun down except when in use. I really need to just start playing around with it, but it's going to take a while of shuffling data around until I can free up all the disks I intend to use, and I want to get the ZFS side straight before I do this so the data I copy back to it is then safe - I don't want it sat around on the 'temporary' disks for too long without backup.
|
# ¿ Apr 30, 2012 00:38 |
|
Thanks for all the pointers guys. I think I'm going to stick with Ubuntu rather than FreeBSD as I know it better. I did some reading on the ZFS-on-Linux newsgroup and lots of people seemed to be praising its state, saying they're on the verge of putting it into production.

titaniumone posted:2GB of ram is not enough for a functional ZFS system. Your performance will be bad, and if you're unlucky, you may experience kernel panics.

Wheelchair Stunts posted:Oh, man. You don't have to use fuse! Check this out. Especially since you use Ubuntu, it even has its own little PPA setup. Performance was a lot better for me through this than fuse.

Yeah, before you posted this I decided to do some tests with two old mismatched drives I had lying around. I'm thinking the actual performance might increase with my 'real' drives, but it should be a fair comparison between the two methods. With FUSE I was getting 33MB/s write and 54MB/s read, with it consuming almost all of my memory (leaving 77MB free). With the ZFS-on-Linux setup I got 98MB/s write and 148MB/s read, leaving about 230MB of memory free. This was dd-ing a 10GB file from /dev/zero for the write test, then reading it back to /dev/null. I thought it would be more fiddly to set up, but having tried it, it's definitely worth it.

Final question: the two drives I used for my test were 500GB and 320GB. I initialised the zpool with this command:
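Going from memory, the commands were along these lines (device names are from my test box):

```shell
# Pool the two mismatched drives together:
zpool create tank /dev/sdb /dev/sdc
# Throughput tests: write 10GB of zeros in, then stream it back out:
dd if=/dev/zero of=/tank/testfile bs=1M count=10240
dd if=/tank/testfile of=/dev/null bs=1M
```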
|
# ¿ May 2, 2012 11:53 |
|
Leb posted:You're just striping two drives together, without any redundancy. Ah, thanks. So when I set it up with 4 disks for real, that 'create' command will automatically add redundancy?
|
# ¿ May 2, 2012 13:13 |
|
FISHMANPET posted:No, you need to specify what redundancy you want. If you don't specify anything you get a stripe. I don't remember the exact commands, but I think it's something like zpool create poolname raidz dev1 dev2 dev3 dev4

Yeah, sorry, I meant the 'create' command in my earlier post that included 'raidz'. I guess it just didn't add redundancy in my test case because I didn't include enough disks. Thanks for all the help guys, I'll stop crapping up the thread with inane questions now.
|
# ¿ May 2, 2012 14:50 |
|
evilhacker posted:Do you need to have all of your drives when you create your ZFS volume or is there a way to migrate from a single drive to a mirror to a zRaid?

I saw a page today where someone described expanding from a 2-drive mirror up to a 4-drive raid-z. It definitely sounded a bit sketchy, but he said it worked. I can't find the link now - it's in my work PC's history - but I'll look it up tomorrow.
|
# ¿ May 2, 2012 21:18 |
|
evilhacker posted:Do you need to have all of your drives when you create your ZFS volume or is there a way to migrate from a single drive to a mirror to a zRaid?

Froist posted:I saw a page today where someone described expanding from a 2 drive mirror up to a 4 drive raid-z. It definitely sounded a bit sketchy but he said it worked. I can't find the link now, it's on my work PC's history, but I'll look it up tomorrow.

Here's the page I was thinking of: http://unix4lyfe.org/zpool/
|
# ¿ May 3, 2012 15:14 |
|
Wheelchair Stunts posted:I have had luck with Ubuntu and zfs on linux. You just add the ppa and aptitude install what you need. It does it through a "portability layer" which is just them putting compatibility for Solaris APIs in the kernel. This taints the kernel from a licensing perspective, so don't expect this to be distributed ever. I seemed to be able to get decent performance especially in comparison to ZFS over Fuse. I would still probably go with FreeBSD or more likely a Nexenta/Illumos/OpenSolaris type.

Another +1 for ZFS on Linux here. I set it up a couple of months ago on my N40L, as I'm only really familiar with Ubuntu. It's been flawless since day one and can pretty much max out my gigabit network. Definitely a huge improvement over the FUSE version that I briefly tested.
|
# ¿ Jun 9, 2012 13:23 |
|
kapalama posted:So does that ZFS work by just taking advantage of all the storage space you got, and dynamically resizing and all that as you add random new drives?

No, it's not as clever as Drobo/BeyondRaid. I threw 4x2tb drives in it, giving me 6tb with redundancy - given the form factor of the machine I can't expand the storage beyond that. In general a zpool (which is what the OS writes to) is made up of one or more vdevs, each of which is in turn made up of multiple physical disks. You can't change a vdev once it's created, but you can add more vdevs to a zpool to expand its capacity. So if you start with 3x1tb (giving 2tb with redundancy) you could later add another 3x2tb to take the zpool's total up to 6tb of usable space (original 2tb + new 4tb).
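To put numbers on that - a raidz1 vdev loses one disk's worth of space to parity, so the arithmetic is just:

```shell
# Usable space of a raidz1 vdev = (disks - 1) * disk size
echo $(( (4 - 1) * 2 ))                 # my 4x2tb vdev: 6tb usable
echo $(( (3 - 1) * 1 ))                 # a 3x1tb vdev: 2tb usable
echo $(( (3 - 1) * 1 + (3 - 1) * 2 ))   # after adding a 3x2tb vdev: 6tb total
```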
|
# ¿ Jun 9, 2012 14:06 |
|
kapalama posted:Performance in the sense of speed, right?

I've no experience with the 8-bay Drobo, but I did have a 4-bay for a while. It was horrifically slow (to the point of destroying my media center experience - when it had to refresh the DVD covers for all my shows it took minutes to do so), and at one point it corrupted the filesystem, nearly losing all the data on it. The ease of use with the Drobo is cool, but the whole point of the device is data security, and after my experience I couldn't trust it again. Plus it's proprietary, so if the Drobo itself dies there's no way to read the data except with another Drobo.

As for buying pairs of drives, I'm talking off the top of my head here, but I can't see how 8-bay vs 4-bay would make a difference there? Unless you only half-fill it to begin with and slot new individual drives into the empty bays.
|
# ¿ Jun 10, 2012 12:33 |
|
kapalama posted:How long ago did you own it?

I've no idea about NAS systems, but it'd be easy to find a tower case that could take 8 disks and roll your own. I know ZFS doesn't need identical replacement drives; I'm not sure about software RAID, but I had heard similar about some hardware RAID setups. Depending on your technical level you could look into building your own system to do it - it's likely easier than you think. I set mine up with very little previous Linux experience (none with ZFS) after about a week of reading up, and as mentioned earlier it's been running perfectly for a couple of months.

D0N0VAN posted:This scares me... I have had a 4 bay drobo for about a year and a bit now, it has worked flawlessly (although a little slow) but I always worry that one day it could just fizzle and destroy all my data.

I didn't want to scare anyone, because it was likely very much a fringe case, but from that point on it was a no-go for me personally. I'd been using HFS+ as the filesystem (rather than the probably more common NTFS), and had switched a few times between using it with the DroboShare and connecting directly to my Mac over USB. One day neither of them would read the filesystem any more - the only way I got the data off was a last-ditch attempt at using MacDrive on a Windows PC to read it.
|
# ¿ Jun 10, 2012 17:21 |
|
Not strictly NAS related, but I'm sure people in here will know the answer. I set up my N40L with Ubuntu/ZFS on Linux and it's been ticking over fine for a couple of months, but I remembered I should really have a scrub/snapshot schedule set up, so I've added entries to my crontab (sudo crontab -e):
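The entries look something like this (pool name and timings are mine; note that % signs have to be escaped in a crontab):

```shell
# Scrub the pool in the early hours of every Sunday:
30 2 * * 0 /sbin/zpool scrub tank
# Take a daily snapshot named after the date:
0 3 * * * /sbin/zfs snapshot tank@daily-$(date +\%Y-\%m-\%d)
```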
Also, is taking a snapshot daily overkill? What's the consensus on frequency? As it's media storage my files don't change that regularly, so weekly would probably be fine, but snapshots seem low-overhead with little disadvantage to taking them daily.
|
# ¿ Jul 5, 2012 18:56 |
|
Has anyone got any experience with Windows 8 Storage Pools and could help me make a decision? I've got an N40L that's about 6 months old, running Ubuntu Server with ZFS and 4x2tb drives, that just ticks over every day as a media storage box with usenet tools. About a foot away I have a 5-year-old (but still going strong) Windows 7 Media Center box that runs as a PVR and streams any other media from the N40L.

Pushed into it by an impending house move, I've realised this is a bit wasteful and could (hopefully) be consolidated into the N40L doing both jobs in a smaller, neater and quieter package. I really like WMC so don't want to move to Ubuntu/MythTV, and it seems like for this use case Storage Pools should cover my needs as ZFS does now.

I've done some reading around and Storage Pools performance doesn't seem stellar, but past the initial data migration I don't think the poor write performance will matter for me, as it's just a media playback device - I very rarely copy files to it across the network; it just ticks over and downloads things at its own pace overnight. It looks like read speed isn't too bad from a parity pool, and only write speed is affected? And even if it is, it shouldn't be affected to the point it can't smoothly play back video? I also read it isn't great at adding storage to a parity pool later, but really 6tb should see me through for a long time (famous last words) - at the moment the total of my years of hoarding hasn't reached 3tb.

The other upside is that I'd feel a bit more secure with my data on a "supported" solution. ZFS just works, but I'm no Linux expert and not confident in my abilities to save the data if a drive ever did die, whereas Storage Pools should be a bit easier. Can anyone see any drawbacks that I'm missing, or does this seem like a good idea?
The only things stopping me trying it out are the investment in a new TV tuner (as the N40L only takes PCI-E) and the week's-worth of data shuffling to be able to clear the drives for a new install.
|
# ¿ Nov 17, 2012 12:43 |
|
PitViper posted:How is the ZFS-on-Linux port? I see there's the FUSE project, as well as the native kernel port. Wondering if there's benefits or drawbacks to either in particular. My old box-of-random-drives finally kicked the bucket, so I'll be rebuilding things properly next week when the new hardware shows up.

I've been using it on Ubuntu for about 6 months and haven't had a single problem. I guess the real test is how it performs when something goes wrong, but unfortunately (or luckily!) I can't attest to that. I did try out the FUSE version at first and performance was pretty poor, so I switched to the kernel version and it blew FUSE out of the water.
|
# ¿ Dec 13, 2012 16:01 |
|
Tornhelm posted:It works fine as a htpc. I've got one in my parents' lounge running Win 7, software RAID 5 and using XBMC as the front end. All you need is a cheap passively cooled video card to run audio over HDMI.

Hijacking this topic: what's the difference between software RAID 5 and Storage Spaces (in Windows 8)? I was planning on switching my N40L over from Ubuntu/ZFS to Windows 8/Storage Spaces, but if I can stick to Win7/RAID 5 I'd probably be happier with that. I've got no plans to upgrade the capacity of the pool (it's 4x2tb = 6tb with redundancy at the moment); I just hadn't realised you could do software RAID in Windows. I guess the performance is better than Storage Spaces too?
|
# ¿ Dec 17, 2012 14:18 |
|
Tornhelm posted:The performance of software raid 5 isn't anything special, but it's more than enough to stream media to two other computers while my parents watch poo poo on it in the lounge room. It probably also helps that it's one of the other computers which is the sabnzbd machine and handles most of the more intensive work.

Sorry to keep questioning you, but you seem to have exactly the setup I want. I did a bit of reading around after this, and everything I saw implied that Windows 7 removed the RAID 5 option. Are you using 3rd-party RAID software, or is there a different way of getting it working? One of the "workarounds" I saw mentioned was "Install Windows 7 -> Install VirtualBox -> Install Linux VM with mdadm -> Give VM access to drives -> Access from Windows via virtual network", which just sounds hideous.
|
# ¿ Dec 19, 2012 17:36 |
|
astr0man posted:If you're going to do this, why wouldn't you just keep your current Linux/ZFS setup?

I've questioned myself too, but basically I want to roll two computers (fileserver + DVR/playback) that sit right next to each other into one. I could stick with ZFS and use MythTV, but I really like Windows Media Center - I've been using it for years. ZFS has been fast/stable as anything for me, but I really don't need blinding speed for playing back video files.
|
# ¿ Dec 20, 2012 00:52 |
|
SilentGeek posted:I'm putting together an N40L with FreeNAS 8 and 3 x 2TB WD Red drives and am planning on setting it up with Raid-Z. I was curious if it will be possible in the future to add a couple extra drives to the RAID without having to rebuild the entire RAID. Sorry if this has been asked already.

Short answer: ZFS doesn't let you add disks to an existing raidz vdev to expand it - you can only grow the pool by adding a whole new vdev (another set of disks).

If you're only accessing the files over the network, i.e. no direct access to the box/SSH accounts for the other people in your house, you can just configure separate shared folders with different permissions. I do exactly this with my N40L/Ubuntu/ZFS box. I've never tried using NFS, only Samba. With Samba you can either just password-protect the share (so people can browse your shares and see "SilentGeek's dirty videos" but not get into it) or hide it from view completely, so you need to know the share name to access it.
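For reference, the "hidden share" version in smb.conf is only a couple of lines - share name, path and user here are made up:

```ini
[private]
    path = /tank/private
    valid users = silentgeek
    ; browseable = no hides the share from the server's listing entirely,
    ; so you have to know the name to connect to it
    browseable = no
```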
|
# ¿ Jan 31, 2013 22:08 |
|
moron posted:My ~2 year old Microserver has developed an intermittent gentle humming sound. Tapping the case tends to get it to stop temporarily, so I'm inclined to think that one of the fans has developed dodgy bearings. I've tried to identify whether it's the large rear exhaust fan, or the fan in the power supply, which is to blame, but I've not been able to pinpoint the source.

Mine (an N40L coming up on a year old) has always had an intermittent vibration sound, if that's the same thing. I think it's just something rattling on the right side of the case - the lightest touch there will stop it, but I've not yet figured out what it is or how to stop it happening. I'm pretty sure it's not one of the fans though - the PSU is on the opposite side of the case, and I replaced the main fan with a quieter version a few months back.
|
# ¿ Feb 23, 2013 12:31 |
|
Mr Shiny Pants posted:If you just want a Nas why not get a Microserver? They practically give the N54L away and if you want more horsepower you can get the GEN8 version.

Snap - I've got the N40L with ZFS and Ubuntu, and it's been rock solid/fast for 2.5 years. You really don't need much know-how to set it up either - I bumbled my way through it with a few guides and a little Linux experience. I did get a bit of a scare this week when I did an apt-get upgrade and my pools disappeared after a reboot, but I just needed to explicitly update the ZFS packages. A few months ago I dipped in here, saw the discussion about Xpenology and almost got carried away into formatting the whole thing to set that up, but came to my senses and stuck with stability.
|
# ¿ Jul 12, 2014 12:40 |
|
Megaman posted:What would be the reason for choosing Ubuntu over FreeNAS? It's the same thing, except FreeNAS comes with a webUI, and it's completely brainless, whereas Ubuntu you're forced to use the command line the entire time, at least as far as I know. Thoughts?

Meh, I don't have a good answer. Back then I guess I was at the stage of my life where I thought it was "cooler" to set things up myself. If I was doing it now I'd probably go with something more "managed", but there's no real reason to change things now. Although I guess in theory I could format the OS drive and import the ZFS pool into another setup if I really wanted to.

On a tangentially related subject, I'm now getting the following from zpool status:

quote:status: The pool is formatted using a legacy on-disk format. The pool can still be used, but some features are unavailable.

I guess there's no real risk to going ahead?
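Assuming the rest of the status message suggests the usual fix, going ahead is just this (pool name is mine):

```shell
zpool upgrade          # lists any pools still on an older on-disk version
zpool upgrade tank     # upgrades 'tank' to the current format in place
```

The one gotcha I've read about is that it's one-way - once upgraded, the pool can't be imported by older ZFS implementations.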
|
# ¿ Jul 12, 2014 19:25 |
|
I have a question that may get me chased out of this thread with pitchforks on principle. Here goes...

Current setup: A few years ago I set up a NAS in an HP N40L with 4x2tb data drives using ZFS, plus a smaller drive for the OS (Ubuntu). I've been using it with one of the drives for parity (so 6tb usable in one zpool), running Sickbeard etc, and it's been ticking over with zero hassle since the start. A bit more recently I got on board with the "RAID != backup" train, and bought 3x2tb external drives to occasionally rsync the data to in categories and (ideally, though I'm lax on it in practice) keep off-site.

Issue: Predictably, I'm beginning to creep close to my current storage limits. One of my externals (and the one whose data category grows the fastest) is currently 96% full, and the whole zpool itself is 82% full.

Controversial plan: As there's no way for me to easily grow this pool without buying a full array of new disks, and I already have an external "occasional yet good enough" backup of the data, I'm thinking of throwing caution to the wind: switch to a non-redundant storage method so I "gain" another 2tb of internal space, and buy one more external drive to cover the shortfall in external backup space. Nothing I have is particularly irreplaceable (except around 400gb of raw GoPro footage, which I may throw into Glacier - though more likely I'd be better off losing it, as I'll never look at it again anyway).

Desired features:
Is this the kind of thing I could achieve with Xpenology? Am I right in thinking Xpenology is Debian under the hood, so would allow extra tinkering/functionality beyond what is provided as stock/with plugins? I don't mind a chunk of time and effort setting this up in the short term. Froist fucked around with this message at 16:26 on May 6, 2016 |
# ¿ May 6, 2016 16:23 |
|
I don't really know what advice I'm looking for here - just spitballing naive ideas. I've got an N40L that's been my stable NAS for 5 years, running ZFS on Ubuntu 14.04. It's got 4x2tb drives for the main storage pool, which is now 96% full, and another 1.5tb drive for the OS, which is basically empty. My rate of consumption has gone down drastically, so I guess I'd rather make use of what I have than invest in upgrading it right now.

I have the data mirrored onto 3x2tb offline external drives using some rsync scripts I put together years back. But I'd been getting a bit negligent in keeping these up to date, and while I'd prefer it was plug-and-auto-backup (which at one point it was), it took me a couple of hours to piece together how it all worked and get them updated. Combined with the fact it's still running a 6-year-old OS and a bunch of services I no longer use, it feels like time for a fresh start.

So I'm thinking about wiping the lot and rebuilding it with Xpenology, utilising the 5th drive to give me a bit of extra headroom before I have to bite the bullet and do a proper upgrade. I've read that with the N40L I'd need a new NIC to be able to run the latest DSM versions, but beyond that, how much headache is running an Xpenology box?

I was also thinking of setting it up as a Plex server to keep my media database centralised, but I'd only be playing on the same LAN and have read that Plex transcodes by default, so maybe I shouldn't go down that route. I've been using Kodi on my smart TV (and previously a NUC for playback), but I'd prefer the database to live with the media rather than on the playback device. I know you can do that by tinkering with Kodi config files, but in general I feel like I'm 5+ years out of touch with the most appropriate solutions. Any general advice?
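The Kodi tinkering I mean is pointing every install at a shared MySQL database via advancedsettings.xml - something like this, where the host and credentials are placeholders for whatever you set up:

```xml
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>192.168.1.10</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </videodatabase>
</advancedsettings>
```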
|
# ¿ May 19, 2020 18:40 |
|
Thanks for the tips! I had been using a single 5.25" -> 3.5" adapter to mount in the top bay; I didn't realise the double-drive versions existed. The modified BIOS rings a bell, but it's so long since I set it up that it's worth me checking again. The 3x2tb externals I have are all 2.5" drives, so now I'm thinking maybe I should just get a single huge external for my offline mirror (rather than trying to juggle it between multiple drives), then crack my existing externals open to make it 7x2tb internal. That would double my usable space from 6tb to 12tb, which should keep me going for the foreseeable future.

Any opinions on a frankenstein card like this? They don't seem that common, but it seems like it would be useful to add USB 3 ports to transfer that amount of data in and out of the box, as well as adding the internal SATA connections.
|
# ¿ May 20, 2020 11:56 |
|
Minty Swagger posted:Brief googling seems to say that card has worked ok before. Give it a try! This is actually what I went with in the end, though it’ll be 7x2tb rather than 9 drives (not sure I could manage 9 anyway?). I do have a spare 1tb 2.5” I could put in the last slot, or a small SSD that may be useful for caching (but I don’t think any of my workloads would make use of that). I figure the PSU should be fine like this as these exact (bus powered) drives have all been plugged in at once before, and I’m removing a 3.5” drive along the way. Thanks for the tips! Weekend project sorted, if I can get all the data shuffled around by then..
|
# ¿ May 21, 2020 00:21 |
|
It turned into a bit of a messy saga, but I got my minimal-cost storage upgrade on my N40L (from a few pages ago) up and running. I bought the 4x2.5" mount and only found out afterwards that the external drives I had (Samsung M3s) aren't shuckable and have a proprietary connector inside, so that's winging its way back to Amazon now.

In the end I got Xpenology with DSM 6.1.7 running using the onboard NIC, 4x2tb internal drives, and 3x2tb externals mounted inside and connected up via a USB3 hub/PCIe card, with DSM tricked into thinking they're internal drives. I know this is far from ideal from a performance perspective, but the server sits in my loft connected via powerline adapters, so the whole thing isn't setting speed records anyway. And I still have a full mirror of the data onto a (new) external drive.

Edit: I booted Xpenology back up to try iperf after taking some benchmarks on my old install, and everything seemed to be performing similarly. So I tried some file copies, and those had improved too, without the CPU spike. Not sure what the problem was, but it appears to be solved. Leaving the post here so you can still laugh at my hideous, slow setup - but at least it's no worse than it was before. Froist fucked around with this message at 15:00 on May 27, 2020 |
# ¿ May 27, 2020 14:20 |
|
I get that this goes against the ethos of this thread, but I'm looking for advice about moving away from a NAS. I've been running a home NAS for probably 15 years - from a Drobo, to an N40L running Ubuntu+ZFS, to the same N40L running Xpenology. I have 12tb of redundant capacity in there, but anything of irreplaceable value is backed up on other cloud services.

Right now this is all really aging hardware and a bit of a Heath Robinson setup - I have four 2tb internal SATA drives (some of which may be the whole 15 years old), and when I was running low on space a couple of years back but not wanting to invest a lot, I mounted three 2tb 2.5" USB drives I already had inside the case (not even shucked, just hooked up to a USB hub). Given the age of the hardware and the hackiness of this setup, I also have an external 12tb drive which I plug in occasionally, using Synology's USB Copy feature to mirror the lot to a cold backup.

With the combination of recently becoming a dad, moving to an (inefficient) house with lots of improvements to make, and rising energy costs (UK), I'm finding it harder to justify the expense/complication of running an extra box 24/7 for the amount I actually need to access it. Factors 1 and 2 also mean I don't really want to spend over the odds on a replacement solution.

I'm thinking of just moving back to a single big (12tb) drive in my PC that I can boot/WoL when I need to access something. Obviously I'd be losing redundancy here, but I'd hope to still do something similar to the USB Copy for a cold backup. Is there a Windows feature or 3rd-party software that can replace Synology's USB Copy feature? i.e. not doing a full disk clone or taking images, but mirroring delta updates to an external filesystem. Alternatively, are there any "better" solutions that would markedly reduce the energy footprint (~60w) without being a large investment?

My option B was to pick up a 2-bay Synology DS220+, but that would go from "£200 for a 12tb drive" to "£500 for a 12tb drive and new enclosure", and while it would be better for redundancy I'd lose my cold backup unless I spent another £200 on another drive. Even if the DS220+ ran at 15w (the quoted figure, though I don't think that includes the drives), it would take 5.5 years to pay for itself on that outlay.
|
# ¿ Nov 28, 2022 02:27 |
|
Klyith posted:2. If you can nerd out with a little bit of command line, robocopy or rsync can do this type of thing. I rolled my own backup system based on robocopy for several years (then I quit windows this year). Probably I should have been using rsync the whole time -- it has built-in options to do what I was doing without all the batch hackery. But my thing worked well enough.

Good reminder - when I ran this as an Ubuntu box I had my own rsync scripts, so I could probably do the same again.

Klyith posted:Re: redundancy, if you want to get 2 drives you can mirror them using ntfs raid or storage spaces in windows.

Another good point, thanks. Maybe I could just pick up a matching 12tb, shuck them both and stop worrying about the cold backup...

Thanks Ants posted:Does the NAS have to be on 24x7? I have Unraid running on a newer Microserver and cut the energy consumption to a third of what it was by running a shutdown script each day at about 3am and then using a £10 TP-Link smart plug to turn it back on each evening, with the BIOS configured to boot whenever AC power is restored.

I'd have to think about whether I could make this work. The times I end up accessing it are a bit unpredictable - I work from home, so I occasionally want to access it during the day and can't just leave it off until the evening to automatically save two-thirds of the day. But maybe I could just schedule a shutdown every day so it doesn't stay on unused, and then WoL it when I do want to access it. I do have a spare smart plug, but I don't think I'd need it if WoL works fine. Thanks for the ideas!
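For the delta-mirror part, the core of my old Ubuntu-era scripts was essentially one rsync line per category/drive - something like this (paths are made up):

```shell
# Mirror a category onto its external drive; --delete removes files that
# have gone away on the source, so the backup tracks it exactly:
rsync -av --delete /tank/tv/ /mnt/external1/tv/
```

On Windows, robocopy's /MIR flag should be the rough equivalent.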
|
# ¿ Nov 28, 2022 11:32 |