|
Combat Pretzel posted:If you have ZFS and use it via some "managed" system a la FreeNAS, you can set up automatic snapshots to contain fat finger deletes to some degree. You can set up mixed schedules of periods and retention, too. Say keep one snapshot per hour over the last three hours, then one per day over the last three days and then say one every week over the last four weeks. I have a retarded matryoshka doll setup of Windows file shares being stored on iSCSI LUNs published via COMSTAR, which is using ZFS pools as the data store. But it does allow me to set up a poo poo simple set of VSS scripts to contain a lot of that accidental deletion stuff. I just put together a 10 disk RAID-Z2 pool made of the 8 TB Ironwolf drives; the latest Backblaze drive survey showed pretty good numbers for the 4+ TB Seagates, so I figured why not? That and my old pools were getting kinda super full.
|
# ? Jan 24, 2017 04:06 |
|
If you're using samba shares you can also just enable the recycle bin. I do that and have a cron script that runs once a day that deletes anything in there after 7 days.
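A minimal sketch of that cleanup script; the recycle-folder path it gets pointed at is up to you (the /tank example in the comment is hypothetical):

```shell
#!/bin/sh
# Purge files older than 7 days from a Samba recycle directory.
# cron would call this once a day, e.g.: purge_recycle /tank/share/.recycle
purge_recycle() {
    dir="$1"
    # -mtime +7 matches files last modified more than 7 days ago
    find "$dir" -type f -mtime +7 -delete
    # drop now-empty subdirectories, but leave the top level in place
    find "$dir" -mindepth 1 -type d -empty -delete
}
```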
|
# ? Jan 24, 2017 04:30 |
|
Paul MaudDib posted:So if you have 5 drives in your array, you have a 0.95^5 = 77% chance of your array failing. 0.95^5 is the chance of all five surviving, not failing. So: 23% chance of failing, 77% chance of surviving.
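The arithmetic, for anyone following along (the 0.95 per-drive survival probability over the period is the number from the quoted post; drives are assumed to fail independently):

```shell
# A stripe of 5 drives with no parity survives the period only if
# every single drive does, so the survival probability is 0.95^5.
awk 'BEGIN {
    p = 0.95; n = 5
    survive = p ^ n                  # 0.95^5 = 0.7738...
    printf "survive: %.0f%%  fail: %.0f%%\n", survive * 100, (1 - survive) * 100
}'
# prints: survive: 77%  fail: 23%
```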
|
# ? Jan 24, 2017 09:18 |
people posted:about the pros and cons of RAID5/6 Combat Pretzel posted:If you have ZFS and use it via some "managed" system a la FreeNAS, you can set up automatic snapshots to contain fat finger deletes to some degree. You can set up mixed schedules of periods and retention, too. Say keep one snapshot per hour over the last three hours, then one per day over the last three days and then say one every week over the last four weeks. Methylethylaldehyde posted:I have a retarded matryoshka doll setup of Windows file shares being stored on iSCSI LUNs published via COMSTAR, which is using ZFS pools as the data store. Thermopyle posted:ata-WDC_WD50EFRX-68L0BN1_WD-WXB1HB4JY3KC
|
|
# ? Jan 24, 2017 10:42 |
|
D. Ebdrup posted:Did it default to using this drive identifier for you? I've been considering taking the time to boot a FreeBSD live cd (well, USB flash disk) and renaming all my eli devices for my encrypted root on zpool to something more useful, and that naming scheme is basically perfect because it identifies how the disks are connected, what make and model they are, and its serial number (?) - so if you've got as much OCD as me and label your disks with their SN on the outward-facing side, you can quickly identify which drive to pull based on zpool status. Looks like Linux's /dev/disk/by-id: code:
|
# ? Jan 24, 2017 14:03 |
|
Phosphine posted:23%. 77% chance of surviving. Whoops, fixed.
|
# ? Jan 24, 2017 15:24 |
|
Another problem is resilver time. You can easily be looking at days+ to rebuild a high-capacity RAID array if you lose a drive. If this is a business-critical system, ask yourself what you would do in the meantime. Do you have alternatives, or are you losing money? That will drive your design here too.

The tl;dr is for a simple use-case I would forget about RAID. You should probably go with either LVM or ZFS, with either one or two volume groups/pools. Then you create one big volume for each volume group/pool, so you have either one big spanned volume across 4 drives or two volumes spanned across four drives. Then you serve them with Samba and clone them across every night using rsync. Use FreeNAS.

You will still benefit from using ZFS's data integrity checks in this situation, and it gives you a lot of future options for expansion/performance increase/etc. You just won't be able to use some of the features like snapshotting unless you increase your disk space first, but that will just be a matter of plugging in more drives; you won't have to dump and rebuild like you would to change your underlying filesystem.

Either way I think you would get more for your money by building your own machine (again, I like this build), but of course that takes time. You could really do worse than buying a ThinkServer TS140 or something similar, too. They are nice little boxes.

I still do recommend ECC RAM if at all feasible, regardless of what hardware you get. It's not a must-have but it's pretty high on the importance list. Note that your motherboard/CPU also need to support it, and different machines use different types (registered/unregistered/fully-buffered/load-reduced/etc). You also want at least 8 or 16 GB of RAM in this, especially if you are going with ZFS.

Paul MaudDib fucked around with this message at 16:20 on Jan 24, 2017 |
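The nightly rsync clone can be as small as one crontab line; the pool paths, the log location, and the 03:30 schedule here are all assumptions:

```shell
# Nightly one-way clone of the primary pool to the backup pool.
# -a preserves permissions/ownership/times; --delete makes the mirror
# exact, which also means it mirrors your mistakes overnight - that's
# why snapshots on the source side still matter.
30 3 * * * rsync -a --delete /pool1/ /pool2/ >> /var/log/nightly-rsync.log 2>&1
```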
# ? Jan 24, 2017 16:10 |
|
D. Ebdrup posted:Did it default to using this drive identifier for you? This guy is right: Tevruden posted:Looks like linux's /dev/disk/by-id: The two machines I have zfs pools on are running ZFS-On-Linux and both seem to default to using /dev/disk/by-id.

Paul MaudDib posted:Another problem is resilver time. You can easily be looking at days+ to rebuild a high-capacity RAID array if you lose a drive. For reference, I just resilvered a RAIDZ1 pool with four 2 TB drives in it (which has a capacity of 5.17 TB) and it took 17 hours. The real lovely thing is that after finishing that I did a reboot and it started the resilvering all over again... currently trying to figure out what's up with that. Thermopyle fucked around with this message at 17:08 on Jan 24, 2017 |
# ? Jan 24, 2017 17:06 |
|
D. Ebdrup posted:Doesn't OpenSolaris or whatever derivative of it you're using (Illumos?) support kernel iSCSI targets? FreeBSD has that, and I thought that was the last Unix-like to implement it. I'm using OmniOS, and COMSTAR is the kernel-level iSCSI implementation, or at least the management frontend for it.
|
# ? Jan 24, 2017 17:14 |
|
D. Ebdrup posted:To be fair to everything not-FreeNAS, this is just a cronjob so basically everything that can run ZFS drat well also better implement cron or it ain't worth using in the first place.
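For the non-FreeNAS crowd, the cronjob in question might look roughly like this; the "tank/data" dataset name and the retention tiers are made up for illustration. Each tier recycles its slot by destroying the old snapshot before taking the new one, and % is escaped because cron treats a bare % specially:

```shell
# Hourly tier: 24 rolling slots named hourly-00 .. hourly-23
0 * * * * zfs destroy tank/data@hourly-$(date +\%H) 2>/dev/null; zfs snapshot tank/data@hourly-$(date +\%H)
# Daily tier: 7 rolling slots named daily-1 .. daily-7 (day of week)
0 0 * * * zfs destroy tank/data@daily-$(date +\%u) 2>/dev/null; zfs snapshot tank/data@daily-$(date +\%u)
```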
|
# ? Jan 24, 2017 17:31 |
|
What's my best option for backups if I'm running my NAS as a VM under a hypervisor? My understanding is that that rules out ZFS (technically doable, but from my research there seem to be too many gotchas)?
|
# ? Jan 24, 2017 18:36 |
|
The only real gotcha is that if you pass raw or file-backed devices through, there's possibly no way for ZFS to force flushes (maybe your hypervisor of choice passes any requested flushes on the virtual disk down to the real one, in the case of raw devices; might be worthwhile to find out), which a big part of its data integrity is based on. If you can pass through an HBA via PCIe passthrough, then the situation is different.
|
# ? Jan 24, 2017 18:55 |
|
Mr. Crow posted:What's my best option for backups if I'm running my NAS as a VM under a hypervisor? My understanding is that rules out ZFS (technically doable but seems there are too many gotchas from my research)? The absolute best case here is to run a hypervisor / hardware setup that works with Intel VT-d or AMD IOMMU, and directly pass the hardware controller(s) with the NAS disks attached to it to the NAS VM. The NAS VM then gets baremetal access to the drives and you could physically remove them and plug them into a physical box with the same OS, no problem.
|
# ? Jan 24, 2017 18:58 |
|
How do I move forward with my NAS (main use is Plex) when I'm running out of space? Currently I'm running Xpenology with six 4 TB drives in SHR2 and I'm about 80% full. I'm also running another 6 USB drives of varying sizes as backups. At my current pace, I expect that I'll run out of space within the year. Do I buy a new synology-type unit and use it only for storage, mount the drives under xpenology and then go from there? Do I try to find a SAS card that Xpenology accepts and move everything in to a new case with new hard drives? Do I just blow everything away and start over with new drives under FreeNAS?
|
# ? Jan 24, 2017 20:43 |
Thermopyle posted:The two machines I have zfs pools on are running ZFS-On-Linux and both seem to default to using /dev/disk/by-id. Methylethylaldehyde posted:I'm using OmniOS, and COMSTAR is the kernel level iscsi implementation, or at least the management frontend for it. Combat Pretzel posted:The cron job becomes mildly complex when it comes to properly tracking these snapshots and making sure the oldest/right ones get deleted. I'd rather use a "battle proven" interface to it instead of jerryrigging cron jobs or hoping that third party scripts function properly. One such smart solution that I can recall off-hand, because I heard about it recently, is when Allan Jude described how he moved from a UFS FreeBSD installation to a ZFS installation with no more than one reboot and about a reboot's worth of downtime - it went something like this:
1) Add disks, partition to add bootcode, swap, and GELI partitions for root on encrypted zpool, as well as a gzero device (unfortunately it's one of a few handfuls of stuff not documented in a manpage, but basically it's a +2EB device that writes to /dev/null and returns 0 when read from, which can be used as a stand-in for the existing disk with the UFS partition so long as it isn't read from)
2) Set up a zpool on the GELI partitions along with the various datasets and install FreeBSD into it
3) Sync all files over to the zpool with rsync, then do it again to get the changed files, then go to single-user mode to prevent file changes and rsync a third time
4) Reboot to the zpool, wipe the UFS disk, add bootcode and swap to the remaining disk, and set up a GELI partition, then replace the gzero device with the remaining disk.
Mind you, he's not only the guy who wrote the bootcode for FreeBSD's bootloader to support booting from both MBR and GPT to a zpool and an encrypted zpool (written in x86 assembly, that is), he's also written several books on FreeBSD and ZFS with Michael W Lucas, so it's not surprising that he comes up with stuff that's genius.
|
|
# ? Jan 24, 2017 20:53 |
|
suddenlyissoon posted:How do I move forward with my NAS (main use is Plex) when I'm running out of space? Currently I'm running Xpenology with 6, 4tb drives in SHR2 and I'm about 80% full. I'm also running another 6 USB drives of varying sizes as backups. At my current pace, I expect that I'll run out of space within the year. If you have the ability to add additional drives to your existing system through some sort of SAS/HBA, that would certainly be the easiest solution. XPenology supports most of the popular LSI cards that you find on eBay, and there are plenty of guides around the net on how to flash them to IT mode. Another option is to gradually upgrade your current drives to larger ones by replacing one drive at a time and letting XPenology resilver the new drive. This isn't as cost-effective, since you're now left with perfectly good 4 TB drives that you can't use. Another option is to build a new XPenology system and set it up as an expansion unit for your main system.
|
# ? Jan 24, 2017 21:25 |
|
suddenlyissoon posted:How do I move forward with my NAS (main use is Plex) when I'm running out of space? Currently I'm running Xpenology with 6, 4tb drives in SHR2 and I'm about 80% full. I'm also running another 6 USB drives of varying sizes as backups. At my current pace, I expect that I'll run out of space within the year. Well, what do you want out of your upgrade? Do you actually need RAID performance? If so, there's no way around it, you're buying more drives or you're buying a new set of larger drives one way or another. There's no free lunch here. You could also use LVM/ZFS, set up a volume group on a single new drive, transfer a drive's worth of crap over, nuke the old drive, grow the volume group, rinse/repeat. That would let you get off RAID (i.e. get more out of your existing drives) without nuking what you've got, and you can grow your volume group in the future as you add disks or upgrade your old disks. And again, you can get RAID-like performance out of ZFS various ways, like a cache SSD or RAID-style striping (LVM can do striping too). A cache SSD still uses a SATA/SAS port (unless you have M.2/mSATA slots available?) but you get more out of your existing HDDs so it would balance out. Paul MaudDib fucked around with this message at 03:02 on Jan 25, 2017 |
# ? Jan 25, 2017 02:51 |
|
Paul MaudDib posted:And again, you can get RAID-like performance out of ZFS various ways, like a cache SSD or RAID-style striping (LVM can do striping too). A cache SSD still uses a SATA/SAS port (unless you have M.2/mSATA slots available?) but you get more out of your existing HDDs so it would balance out. Frankly, assuming you're restricted to GigE networks like most sane people, you don't even have to worry about performance because even a simple RAIDZ1 with 3+ drives will handily saturate your network without any need to resort to things like caching or whatever, and RAID-anything for random access is already going to be restricted by single-drive access speeds for virtually any practical application, so you're not losing on that front, either. Things are a little different if you're talking enterprise-level multi-user setups, but for a home Plex server it's absolutely irrelevant.
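Back-of-the-envelope on that, where the 150 MB/s per-drive sequential figure is an assumption for a typical modern 7200rpm disk:

```shell
# GigE tops out around 125 MB/s of payload; even a minimal RAIDZ1
# streams sequential reads from its data drives far faster than that.
awk 'BEGIN {
    gige = 1000 / 8                        # 1000 Mbit/s -> ~125 MB/s
    drives = 3; parity = 1; per_drive = 150
    pool = (drives - parity) * per_drive   # data drives carry the stream
    printf "network: %.0f MB/s  pool: %.0f MB/s\n", gige, pool
}'
# prints: network: 125 MB/s  pool: 300 MB/s
```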
|
# ? Jan 25, 2017 04:35 |
|
I'm thinking about getting something like this USB3/eSATA 4 bay drive enclosure to use with maybe snapraid or one of those types of solutions. I'm just planning on storing re-obtainable media files on it. Anything wrong with that enclosure or those types of enclosures? Any recommendations for a different one?
|
# ? Jan 25, 2017 17:02 |
|
D. Ebdrup posted:
I use Napp-it to manage my hobo-san, it makes it a ton less annoying to actually do anything. I no longer have to google a blog post on how to configure a target view, the little menus let me set one up nice and quick.
|
# ? Jan 25, 2017 23:35 |
|
Thermopyle posted:I'm thinking about getting something like this USB3/eSATA 4 bay drive enclosure to use with maybe snapraid or one of those types of solutions. I'm just planning on storing re-obtainable media files on it. I've had two of those. One was fine, the other one had errors that made btrfs checksums fail and then eventually ate the entire array, but I bought both of them in 2014. Edit: the door is a little weird and has a tendency to fall off when you open it.
|
# ? Jan 26, 2017 00:10 |
|
I am running Ubuntu 16.04 and I need to image my boot drive to a new 256 GB SSD. The BIG issue is that the boot drive is a 256 GB USB drive, and I know there are some issues when you image between device types. The USB has been configured to keep as much cache off of it as possible, but I am still worried about the finite writes on USB. SSD would be better, and that's the reason I am doing this. What would be the advice of everyone to start from? Some things I am thinking of and need some feedback on:
1) Which utility to use? rsync (which I am still not good at understanding the huge amount of options; only used it for simple directory moves) or simple dd? There are two partitions on the USB disk using ext4; one for boot and one for everything else. Worried about dd finding an error on the USB and reducing the speed to slower than slow; an option with dd is to use ddrescue, which ignores bad blocks until it's done moving the rest of the good blocks.
2) Probably safer to export the ZFS drives and redo the zpool again, but I would love if I didn't have to do this since I would have to reconfigure Plex.
3) grub is going to be poo poo to redo.
4) dd can't copy from a bigger drive to a smaller drive. I need to find out the number of blocks the SSD has to see if I can even dd it.
EVIL Gibson fucked around with this message at 20:40 on Jan 28, 2017 |
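The dd route from (1) can be rehearsed safely file-to-file before touching real disks; the /dev names in the comments are placeholders, and for (4) `blockdev --getsize64` reports a device's size in bytes so you can compare the two drives first:

```shell
#!/bin/sh
# The dd leg of the migration, wrapped so it can be tried on plain files.
# On the real run, src would be the USB stick and dst the SSD, after
# checking sizes with:  blockdev --getsize64 /dev/sdX
image_copy() {
    src="$1"; dst="$2"
    # conv=noerror,sync keeps going past read errors (ddrescue handles
    # bad blocks better, as noted above); bs=1M keeps throughput sane
    dd if="$src" of="$dst" bs=1M conv=noerror,sync 2>/dev/null
}
```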
# ? Jan 28, 2017 20:37 |
|
Just use Clonezilla. It can do the whole deal for you, no big deal.
|
# ? Jan 29, 2017 02:56 |
|
EVIL Gibson posted:
Just zpool export the pool and zpool import it on the new system. This works a treat.
|
# ? Jan 29, 2017 20:45 |
|
Are there any issues with adding drives that have different RPM into a software RAID? I'm not worried about performance all too much, I just don't want my data to get lost.
|
# ? Jan 30, 2017 09:54 |
As far as I know, there isn't really any way to get screwed over and lose data when it comes to mixing spindle speed, block size (512 vs 512e vs 4096), number of LBAs, or cache size, but it will affect performance of the RAID - and probably more than you expect. It's only when you add/replace drives in an array that you have to be careful about the number of LBAs.
|
|
# ? Jan 30, 2017 11:59 |
|
Actuarial Fables posted:Are there any issues with adding drives that have different RPM into a software RAID? I'm not worried about performance all too much, I just don't want my data to get lost. No, you'll be fine. At worst, your RAID will perform to the level of the slowest drive you have in the array. So in that sense you're potentially wasting the extra performance of the faster drive, but it's not gonna hurt anything.
|
# ? Jan 30, 2017 14:19 |
|
Thanks! I'm planning on getting two Seagate and two WD NAS drives to start with, and the Seagate ones run at 5900 RPM, so I got a little confused.
|
# ? Jan 30, 2017 15:07 |
|
Backblaze published its 2016 hard drive reliability stats https://www.backblaze.com/blog/hard-drive-benchmark-stats-2016/
|
# ? Jan 31, 2017 23:25 |
|
borkencode posted:Backblaze published its 2016 hard drive reliability stats https://www.backblaze.com/blog/hard-drive-benchmark-stats-2016/ Well, that does seem to support the idea that Seagate, on the whole, ain't bad, but has particular model numbers which are just dumpster fires. 13.5% failure is, uh, not good. Outside that, I think it shows what a lot of people have been saying: outside a few problematic models, HGST/Toshiba/Seagate/WD are all pretty close over the long run, with failure rates low enough that other factors can be taken into consideration when selecting drives.
|
# ? Feb 1, 2017 01:03 |
|
I've used freenas on my old pc for a few years now, and it is good, but there's a lot you can't do in BSD and much of what you can do you have to build from source in a jail and gently caress that noise. Installing or upgrading anything from ports is such a goddamn waste of time it makes me want to admin BSD for a living so I can get paid to sit here and press enter every minute or two while every dependency on earth compiles from source Anyway, I was looking at the flood of cheap old EOL'd intel workstations on ebay, got carried away and made a lowball offer on an HP Z400 w/ a beefy 6 core xeon and 24 gigs of ECC, which I won. It's got 4 pcie slots (2x8 and 2x16) and it can accommodate 5 hard drives with an adapter for the 3x5.25" bays (a lot of comparable machines don't have an oldschool 3-stack of 5.25" bays and would be hard pressed to accommodate 5 drives). It also supports VT-D, which lets me pass through devices directly to VMs. So now I've got a freenas VM running in there under esxi 6.5 with my old reflashed Dell H100 SAS card passed through to it, I've imported my zpools and everything I used to do in jails or plugins are running alongside in linux, as god intended, installed in seconds with binaries from a goddamned package manager. And I've still got a lot of headroom for more VM nonsense. I may even be able to pass a GPU through to something and run the media center end of things from the same box. I've never done anything in esxi and the whole setup was faster and easier than getting deluge compiled and working from ports in a jail. It's even using less power than the old machine. I wish I'd done this a while ago. My next post, presumably, will be about loving it up and losing my zpools. Stay tuned!! poverty goat fucked around with this message at 16:22 on Feb 4, 2017 |
# ? Feb 4, 2017 16:17 |
|
DrDork posted:Well, that does seem to support the idea that Seagate, on the whole, ain't bad, but has particular model numbers which are just dumpster fires. 13.5% failure is, uh, not good. Maybe you shouldn't buy the brand that has problematic models every 2-3 years though. I mean, if you cherrypick the numbers you can get whatever results you want. Out of my hard drives that haven't failed, 100% are still working, let's see you beat that!
|
# ? Feb 4, 2017 16:29 |
|
poverty goat posted:there's a lot you can't do in BSD how sure are you about this? That doesn't seem right, are you sure we're talking about the same thing, the devil's OS, right? quote:Anyway, I was looking at the flood of cheap old EOL'd intel workstations on ebay, got carried away and made a lowball offer on an HP Z400 w/ a beefy 6 core xeon and 24 gigs of ECC, which I won. It's got 4 pcie slots (2x8 and 2x16) and it can accommodate 5 hard drives with an adapter for the 3x5.25" bays (a lot of comparable machines don't have an oldschool 3-stack of 5.25" bays and would be hard pressed to accommodate 5 drives). It also supports VT-D, which lets me pass through devices directly to VMs. I have pretty much that machine right now as my Lubuntu working machine, but with 16 GB and 4 cores. Good luck with the power supply though, mine wouldn't handle even half of its rated load and the 24-pin connector is non-standard. quote:So now I've got a freenas VM running in there under esxi 6.5 with my old reflashed Dell H100 SAS card passed through to it, I've imported my zpools and everything I used to do in jails or plugins are running alongside in linux, as god intended, installed in seconds with binaries from a goddamned package manager. And I've still got a lot of headroom for more VM nonsense. I may even be able to pass a GPU through to something and run the media center end of things from the same box. Good luck and don't forget to hail satan, friend. [but really good luck, and make sure everything flushes through as quickly as possible] Paul MaudDib fucked around with this message at 16:34 on Feb 4, 2017 |
# ? Feb 4, 2017 16:30 |
|
Paul MaudDib posted:Good luck with the power supply though, mine wouldn't handle even half of its rated load and the 24-pin connector is non-standard. I'm aware. I've looked into how to adapt a standard ATX PSU for it, if needed, and I'm pretty sure this is just the thing if you trust dhgate
|
# ? Feb 4, 2017 16:42 |
|
Paul MaudDib posted:Maybe you shouldn't buy the brand that has problematic models every 2-3 years though. I mean, if you cherrypick the numbers you can get whatever results you want. Out of my hard drives that haven't failed, 100% are still working, let's see you beat that! Outside of the problematic model, though, Seagate has better reliability than WD, so arguably if you stay away from the drives that have 1-2 stars on Amazon/Newegg, you should come out on top! The point is that super-unreliable drives are easy to identify for the most part, and everything else is within a standard deviation of each other, so it makes more sense to buy based off other factors (price, spindle speed, noise, warranty, etc) than lock yourself into one manufacturer in the mistaken assumption that their product is head and shoulders better than the others. DrDork fucked around with this message at 16:44 on Feb 4, 2017 |
# ? Feb 4, 2017 16:42 |
|
poverty goat posted:I'm aware. I've looked into how to adapt a standard ATX PSU for it, if needed, and I'm pretty sure this is just the thing if you trust dhgate Assuming China isn't going to burn my house down, that looks like exactly what I need. Ugh, I've been waiting for someone to do that for years. Off topic though, what's up with DHGate's HTTPS? HTTPS Everywhere is really upset with it for some reason.
|
# ? Feb 4, 2017 17:03 |
poverty goat posted:I've used freenas on my old pc for a few years now, and it is good, but there's a lot you can't do in BSD and much of what you can do you have to build from source in a jail and gently caress that noise. Installing or upgrading anything from ports is such a goddamn waste of time it makes me want to admin BSD for a living so I can get paid to sit here and press enter every minute or two while every dependency on earth compiles from source

Jails (which are a type of container, though not the first and certainly not the last) exist, as the original paper's title hints at, to confine root. You can completely ignore using jails, unless you want to either: 1) isolate something for security purposes (say, if you have a httpd listening on port 80 and you don't want an exploit against that to affect your entire system), 2) confine something for a clean build environment for poudriere (more on this later), or 3) take something in production and deploy it to a test environment so that you can test without affecting production.

Ports are a collection of Makefiles (think recipe) that describe where to get the original source and what to do with it in order to build all manner of software not included in the base system, but you can build any POSIX compatible software on FreeBSD. Most ports come with many build options, but the defaults are generally rather conservative. Packages are built from ports via poudriere by FreeBSD, and are made available from pkg.freebsd.org (which itself uses SRV records to ensure you use as local a package repo as possible, plus it allows for failover). Out of the over 26,000 ports in the tree, only 11 failed to build for the last quarterly build for AMD64/x86-64 (the one that you're on, unless you changed the default behaviour). Additionally, packages built by FreeBSD for pkg.freebsd.org always use default values for ports (which is part of, but not the whole of, the reason why ports have conservative default build options). 
Interestingly, ports and packages have been the inspiration for Gentoo's emerge system and Debian's package system, respectively. This means that if you're happy with the default options for packages, you can simply use pkg to grab them - and if you want custom packages, you can do what FreeBSD does and set up poudriere (DigitalOcean has a rather nice article on how), which can then regularly check whether any given port has been updated, build it for you with whatever options you want, inform you if it fails, and can even be used to build packages for platforms which aren't tier 1 and therefore aren't guaranteed to have all packages built for them (such as ARM and MIPS, something I personally do).

You can do both of what FreeNAS and pfSense can do by using FreeBSD, with much less complexity, more flexibility, and do it faster (since ipfw is considerably faster than pf nowadays), plus a lot of other things like encrypted root on zpool with up-to-triple-parity or n-disk mirroring (depending on whether you need storage space or IOPS) and boot environments that you can create in a few milliseconds, which mean that if your system somehow breaks because of an update, you can be right back to a working system with a single reboot (and your user files weren't touched, only system files), as well as jails plus an actual hypervisor (bhyve, like what ESXi offers) which can both leverage ZFS with all the cloning, templating, quick deployment and resilience that that offers. Also, as an aside, it's fairly simple to run a Linux userland on top of the FreeBSD kernel in a jail.

What you've done is scrapped approximately 20 years of computing advancement by using a hypervisor which you then use to completely segment your server's resources in such a fashion that you can almost never regain them again, on top of which you're then running multiple types of containers - that's effectively what happened back in the 90s when x86 computers couldn't yet do proper SMP. 
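If anyone wants to try the pkg-versus-poudriere split described above, the flow is roughly this (FreeBSD-only, so treat it as a transcript sketch; "www/nginx", the jail name, and the 11.0-RELEASE version are just example choices):

```shell
# Default-option binaries, straight from pkg.freebsd.org:
pkg install nginx

# Custom options: build your own package repo with poudriere instead.
poudriere jail -c -j builder -v 11.0-RELEASE    # create a clean build jail
poudriere ports -c -p default                   # check out a ports tree
echo "www/nginx" > /usr/local/etc/poudriere.d/pkglist
poudriere options -p default -f /usr/local/etc/poudriere.d/pkglist
poudriere bulk -j builder -p default -f /usr/local/etc/poudriere.d/pkglist
```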
Containers on bare metal, where you don't segment your compute and storage, is something that the industry is JUST NOW thinking of (after Amazon introduced the idea of segmented compute and storage with EC2 and S3), and that Solaris/Illumos and FreeBSD have been doing for ages. Mind you, I'm not knocking virtualisation - it's wonderful for certain things; but the major downsides are almost never outweighed by its upsides. BlankSystemDaemon fucked around with this message at 19:00 on Feb 4, 2017 |
|
# ? Feb 4, 2017 18:32 |
|
D. Ebdrup posted:FreeNAS is an appliance NAS OS based on FreeBSD (ie. it's meant to do one thing and do it well - similar to how pfSense is an appliance router/firewall OS based on FreeBSD), and cannot be compared to real FreeBSD, but aside from that, I think you might've misunderstood the point of jails on FreeBSD, what ports is for, and what packages are for. You're right, I'm cursing BSD when I should be cursing FreeNAS. It forces you to use jails to install anything, which is sensible but I ended up using ports for everything because pkg was broken in jails for a while. I generally had a bad experience all around with plugins or linux compatibility layer stuff under freeNAS, not for lack of trying. e: and obviously I was pushing the bounds of what freeNAS is designed for at this point and I have no interest in giving up any of the convenience and quality of life features in freeNAS to go full bsd It's a home server, everything I wish to back up is backed up to crashplan and nothing of irl importance depends on any of it. I don't want a second server running 24/7 in my home. Also I want an honest to god linux VM, and a VM with a GPU and HDMI and a pony. I'm sure it's not optimal but this is the only way to fit it all into one box and I think my chances of catastrophic failure are within acceptable bounds poverty goat fucked around with this message at 19:21 on Feb 4, 2017 |
# ? Feb 4, 2017 19:05 |
|
FreeNAS 10 (still in beta, IIRC) is moving away from jails in favor of Docker containers running in a Linux VM (via bhyve). It's not available yet (as a stable release anyway) but when you get sick of dealing with VM pass through, it will probably be ready to deploy. So you have that to look forward to. I'm waiting with bated breath, personally. Sure, I can make anything build on BSD, but I'd rather reuse someone else's work configuring the build and deployment process, and if I don't like a decision they made, Dockerfiles are easy to tweak and images are easy to build. Meanwhile, I'd rather run my NAS on bare metal, so I'm deploying new services on my Windows machine via Docker/Hyper-V, with plans to move them over when 10 drops. (After a few months for it to get a good shakedown cruise, of course. )
|
# ? Feb 4, 2017 21:32 |
|
I love the way you old fucks treat computing like a battle scene in Star Trek. Doesn't matter how mundane the activity or device or system, you find a way to turn something easy into a soap opera for absolutely no reason. It's loving awesome, never change.
|
|
# ? Feb 4, 2017 23:59 |