|
apropos man posted:Yeah. I think I jumped to a conclusion there before I'd had a chance to look into it. I'm gonna have a read up about it after work and I'll probably go with one big pool like you said, but somehow partitioning it off to give a discrete amount of storage to each VM. I'm looking forward to tinkering with ZFS. Cheers! You can use zvols as partitions, or create datasets with quotas. I would just use KVM qcow files or something that are sparsely created and have a maximum size that you like. For my VMs I just have a mirrored zpool of 2 SSDs; every VM is stored in qcow2 files. When I was using Infiniband I had zvols that were exported as raw SCSI devices from my server to my Windows machine. Both work well, but zvols are a bit of a black box because they don't have a filesystem that you can browse from your NAS; they are raw block devices.
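For reference, the two approaches mentioned above look something like this. The pool name `tank` and the sizes are placeholders, not anyone's actual setup:

```shell
# A zvol: a raw 20G block device, exposed under /dev/zvol/tank/
zfs create -V 20G tank/vm1-disk

# A dataset with a quota: a browsable filesystem capped at 20G
zfs create tank/vm1-data
zfs set quota=20G tank/vm1-data
```

The zvol can be handed to a VM or exported over iSCSI as-is; the quota'd dataset stays visible and browsable from the host.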
|
# ? Jun 23, 2017 12:42 |
|
|
# ? Apr 18, 2024 22:05 |
|
apropos man posted:I'm looking forward to tinkering with ZFS. Cheers! One of us! One of us!
|
# ? Jun 23, 2017 12:45 |
|
DrDork posted:One of us! One of us! The problem with ZFS is that once you start using it, everything else is just meh.
|
# ? Jun 23, 2017 12:46 |
|
Mr Shiny Pants posted:The problem with ZFS is that once you start using it, everything else is just meh. I'll take that as a further recommendation! I want to use full disk encryption and I shall be using SELinux extended attributes. This is copacetic with ZFS, yes? I'd also like the storage to be independent of the VM disk. Someone suggested adding it to the thin-provisioned VM storage, but I'd like separation in case I tear the VM down. Edit: actually, qcows might work as individual storage units. I'll have a look in a couple of hours. apropos man fucked around with this message at 14:10 on Jun 23, 2017 |
# ? Jun 23, 2017 13:53 |
|
Mr Shiny Pants posted:The problem with ZFS is that once you start using it, everything else is just meh. I regret using it. For the guy making multiple filesystems: think long and hard about your reasons for doing it. It can make things more difficult in the future when you run out of space. What are you gaining that's useful to you that isn't served by just using a couple folders?
|
# ? Jun 23, 2017 14:08 |
|
The main reason I've decided to segregate and attach to separate VMs is all the trouble I had in setting up NFS and/or samba with Plex and/or Emby. I think I'll use ZFS as an overall pool and attach a 500GB qcow directly to my media serving VM, then attach a 20GB qcow directly to my 'important files' VM. All other ZFS space will be fair game for whatever I decide to do with it later.
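The plan above maps onto a couple of qemu-img/virsh commands; this is just a sketch, and the paths and domain name are made up for illustration:

```shell
# qcow2 images are thin provisioned: they only consume space as data is written
qemu-img create -f qcow2 /tank/vms/media.qcow2 500G
qemu-img create -f qcow2 /tank/vms/important.qcow2 20G

# Attach one to a libvirt guest (appears as /dev/vdb inside the guest)
virsh attach-disk media-vm /tank/vms/media.qcow2 vdb \
    --driver qemu --subdriver qcow2 --persistent
```

The 500G figure is a maximum, not an allocation, so the rest of the pool stays fair game as described.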
|
# ? Jun 23, 2017 14:14 |
|
Meh, that FreeNAS doesn't do Infiniband/SRP kind of irks me. Illumos is an idea, but I'd like a frontend; napp-it, however, gives me '90s vibes. I'm not sure what to make of Nexenta. It looks like what I want, but the Community Edition seems to have become a red-headed stepchild. Also, gently caress Intel for not doing RDMA on their 10GbE adapters.
|
# ? Jun 23, 2017 14:20 |
|
I'm having a nightmare with SoftRAID on my Mac. The previous dot revision works perfectly fine when running the Easy Setup until the very end, when it gives an Unexpected Error warning and no other detail. The absolute newest version, even as a trial, refuses to recognise that Easy Setup is in the same folder as SoftRAID itself. Right now I can't work out if it's the drives (new to me, but fully tested by Don Lapre on here before sale, so I don't think it's them), my Mobius 2-bay enclosure (new to me, an Amazon Warehouse deal because of a bit of damage to the packaging, but it appeared to be brand new inside), or SoftRAID on a brand new macOS install. I'm tempted to just run the disks using the Mobius as the RAID handler (I'm doing a straight mirror between the two drives, and someone who had a unit fail said the data integrity between the two drives was bit perfect and he had no problem getting his data off) and save myself the headache. Anyone ever had a similar issue and might be able to point out where the problem lies?
|
# ? Jun 23, 2017 14:32 |
|
Would running a qcow inside ZFS constitute nested copy on write, since ZFS does it anyway? Maybe I'm paying too much credence to acronyms, here.
|
# ? Jun 23, 2017 14:37 |
|
Thermopyle posted:I regret using it. Why? You doing multiple VDEVS with the need to expand? Or rebalancing issues?
|
# ? Jun 23, 2017 15:32 |
|
Mr Shiny Pants posted:Why? You doing multiple VDEVS with the need to expand? Or rebalancing issues? Expansion is the big one; rebalancing can be a problem as well, and so are the system requirements. For my usage (bulk media storage), I just don't get any benefits out of ZFS that outweigh the fact that I've got to spend a thousand dollars each time I want more storage. I wish I was using SnapRAID and my next big upgrade will be to move over to that, but I've got something like 40TB I'll have to copy off of ZFS, so it's going to be expensive. ZFS is undoubtedly cooler in a nerdy kind of way.
|
# ? Jun 23, 2017 16:26 |
Combat Pretzel posted:Meh, that FreeNAS doesn't do Infiniband/SRP kind of irks me. If you need RDMA for 10G, you need to consider an OS that's better at pushing bits. FreeBSD can do both Infiniband/SFP+ and almost saturate 100G TCP with ~40% CPU utilization (~30% once a change from Netflix is upstreamed), and can do root on ZFS with beadm. apropos man posted:Would running a qcow inside ZFS constitute nested copy on write, since ZFS does it anyway? Maybe I'm paying too much credence to acronyms, here. All that being said, ZFS is neat and cool, and it also has certain downsides if you're not willing to put up with the fact that it was designed for an entirely different market segment, despite scaling well.
|
|
# ? Jun 23, 2017 17:11 |
|
D. Ebdrup posted:Snip I really want to like FreeBSD, but in my opinion it's almost in the same boat as Solaris: all the cool stuff is coming to Linux first. Which is a shame, really.
|
# ? Jun 23, 2017 17:18 |
|
It's ok to like FreeBSD Some cool stuff may be implemented on Linux first, but when FreeBSD releases it, it's generally quite polished. Occasionally, the cool stuff is on FreeBSD first. SamDabbers fucked around with this message at 17:43 on Jun 23, 2017 |
# ? Jun 23, 2017 17:37 |
|
SamDabbers posted:It's ok to like FreeBSD All the cool stuff is on OpenBSD first. Sometimes FreeBSD adopts it, changes it into an unrecognizable and unmergeable mess, and lets it rot (e.g. pf). Sometimes, if the stars align, it makes it into Linux. The only thing that the NAS people would want that isn't in OpenBSD (for good reason) is ZFS. For me that's perfectly fine, but for some it isn't.
|
# ? Jun 23, 2017 18:33 |
SamDabbers posted:It's ok to like FreeBSD Volguus posted:All the cool stuff is on OpenBSD first. Sometimes FreeBSD adopts it, changes into an unrecognizable and unmergeable mess and lets it rot (e.g. pf). Sometimes, if the stars align, makes it into Linux. The only thing that the NAS people would want that isn't in OpenBSD (for good reason) is ZFS. For me that's perfectly fine, but for some it isn't.
|
|
# ? Jun 23, 2017 19:13 |
|
D. Ebdrup posted:I'm hugely biased because I've been using FreeBSD as my primary (non-gaming) OS since 2001, but there are plenty of good reasons for using FreeBSD outside of a NAS/server solution. Yup. For one, it's free. For two, it sounds like a real trippy psychedelic. For free. How much better can you get!?
|
# ? Jun 23, 2017 19:56 |
|
D. Ebdrup posted:Some people, mostly Linux and OpenBSD zealots, think differently. I used it too as my primary OS back in the 4.x days. Early 2000s something, it was. Had to go back to Linux a few months after 5.0 got released; those were not good times. On the home gateway, though, OpenBSD only. Since... forever (3.something version). From my point of view the BSDs always felt more coherent. More... engineered. Things, everything, just blended and worked together really nicely. As opposed to Linux distributions, where I always had the feeling that it's just a bunch of programs thrown together that may or may not like each other depending on the time of day.
|
# ? Jun 23, 2017 20:35 |
DrDork posted:Yup. For one, it's free. For two, it sounds like a real trippy psychedelic. For free. How much better can you get!? Volguus posted:I used it too as my primary OS back in the 4.x days. Early 2000s something, it was. Had to go back to Linux a few months after 5.0 got released; those were not good times. On the home gateway, though, OpenBSD only. Since... forever (3.something version). From my point of view the BSDs always felt more coherent. More... engineered. Things, everything, just blended and worked together really nicely. As opposed to Linux distributions, where I always had the feeling that it's just a bunch of programs thrown together that may or may not like each other depending on the time of day. Speaking of differences between Linux and BSD, there are two apocryphal quotes about it, namely: "BSD is designed while Linux is grown" and "BSD is what you get when a bunch of Unix hackers sit down to try to port a Unix system to the PC. Linux is what you get when a bunch of PC hackers sit down and try to write a Unix system for the PC." Both illustrate the strengths of either process to anyone familiar with both systems, and anyone who isn't has the option of discovering what exactly those quotes mean.
|
|
# ? Jun 23, 2017 21:17 |
|
I installed ZFS and made a mirrored zpool featuring two volumes. When I tried to add it to virsh in CentOS 7.3 I got this: code:
I decided to tear everything down (export the zpool) and install Fedora Server, because it's got more up-to-date packages than CentOS. I get the same error on Fedora. I've been configuring this server all week; now it's back to a plain Fedora Server install with one plain VM. I'm calling it a night. I'd like to know if Debian or something allows 'virsh pool-define-as ... --type zfs' and I'll make the switch. I'd try to compile it myself if I could find a relatively simple guide, as I'm not used to compiling stuff. Some of the posts I saw mentioning this problem were a year old. ZFS 'pool-define-as' support must be compiled into one of the mainstream distros' versions of libvirt by now, surely? Tired and frustrated with this now.
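For anyone wanting to check whether their distro's libvirt was built with the ZFS storage backend, a one-liner like the failing command above is the quickest test (pool and dataset names here are just examples):

```shell
# Fails with "unknown storage pool type zfs" if libvirt lacks the ZFS backend
virsh pool-define-as --name zfspool --type zfs --source-name tank/vms
```

If it errors out immediately on the pool type rather than on the source, the build simply doesn't include the backend and no amount of configuration will help.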
|
# ? Jun 24, 2017 00:12 |
|
Seems like libvirt isn't compiled with ZFS support by default because the support is still experimental. You'll have to compile it yourself. I'd just create a new dataset for VM images, mount it, and use it as a directory pool. Using a dataset instead of ZVOLs (as I imagine libvirt uses) also saves you a decent chunk of space when creating ZFS snapshots due to the different guarantees ZFS makes for datasets and ZVOLs.
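A minimal sketch of the directory-pool workaround suggested here, assuming a pool named `tank` (adjust names and mountpoints to taste):

```shell
# Dedicated dataset for VM images, mounted where libvirt can see it
zfs create -o mountpoint=/var/lib/libvirt/zfs-images tank/vm-images

# Register it with libvirt as a plain directory pool
virsh pool-define-as --name vmimages --type dir \
    --target /var/lib/libvirt/zfs-images
virsh pool-start vmimages
virsh pool-autostart vmimages
```

From libvirt's point of view it's just a directory of qcow2 files, but snapshots, compression, and checksumming all still happen at the dataset level underneath.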
|
# ? Jun 24, 2017 02:19 |
|
FYI: It is indeed possible to import a ZFS pool from FreeNAS to ZFS-on-Linux, since the multi_vdev_dump (or whatever it's called) flag is not typically actually used by FreeNAS. Annoyingly, one of my Reds is starting to throw errors now. Edit: I did end up having to do this to get it to actually import everything properly and not freak out every reboot. code:
IOwnCalculus fucked around with this message at 06:36 on Jun 24, 2017 |
# ? Jun 24, 2017 06:09 |
|
IOwnCalculus posted:FYI: It is indeed possible to import a ZFS pool from FreeNAS to ZFS-on-Linux, since the multi_vdev_dump (or whatever it's called) flag is not typically actually used by FreeNAS. Yeah. Before I tore everything down last night I did the export zpool thing and then imported it back into my new Fedora installation. It was very easy. I made sure I'd created my mount points for the datasets exactly the same before I imported, so that may have helped. I've had a few hours sleep now and ready to batter away at this again. Next is to make myself aware of the difference between a dataset and a zvol...
|
# ? Jun 24, 2017 06:46 |
|
apropos man posted:Next is to make myself aware of the difference between a dataset and a zvol... A zvol would be used if you're going to be using iSCSI and handing control of the file system to some other OS. You can have a dataset and a zvol (or multiples of both) on the same zpool.
|
# ? Jun 24, 2017 07:14 |
|
phosdex posted:A zvol would be used if you're going to be using iscsi and control of the file system to some other os. You can have a dataset and a zvol (or multiples of both ) on the same zpool. Thanks. I've successfully created a zvol on my host system and mounted it on the guest: On the host I did: code:
So are there any other options I should be specifying for my zvol apart from 'compression=on'? How is my zvol handled by ZFS internally on the actual drives? Do I have all the benefits of a dataset or have I mitigated some of the benefits by using a zvol instead of a dataset? Can ZFS still see the individual files being written to my mirror pool or are they just seen as a block device?
|
# ? Jun 24, 2017 07:40 |
|
apropos man posted:How is my zvol handled by ZFS internally on the actual drives? Do I have all the benefits of a dataset or have I mitigated some of the benefits by using a zvol instead of a dataset? Can ZFS still see the individual files being written to my mirror pool or are they just seen as a block device? A ZVOL is a type of dataset, though typically I've seen dataset used interchangeably with file system, and I'm guilty of doing that myself most of the time. You'll have most of the benefits a normal file system would have, including copy on write and snapshots, but ZFS can only see it as a block device. The one major gotcha with ZVOLs compared to file systems is that snapshots take more space (though they're still incremental, so a hundred snapshots won't take a hundred times the space) since ZFS has to guarantee that every bit in that ZVOL can be changed, unlike a file system, which will just consume free space until it runs out. Since you're only using a 10GB ZVOL on 1TB disks this isn't really a problem. Other things you're going to want to do on ZFS are set up weekly/monthly scrubs (verify all data in the pool against the checksums) and figure out how you want to set up snapshots, since they're not automatic and ZFS has no built-in way to automate them. I've got a little script that emails me the status of my pool every night, and I run weekly scrubs.
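The scrub schedule and nightly status mail described above can be done with plain cron; this is only a sketch, with the pool name and email address as placeholders rather than the poster's actual script:

```shell
# /etc/cron.d/zfs-maintenance (assumes a pool called 'tank')

# Weekly scrub: verify all data against checksums, Sunday at 03:00
0 3 * * 0   root  /sbin/zpool scrub tank

# Nightly status mail at 23:30 (requires a working local MTA)
30 23 * * * root  /sbin/zpool status tank | mail -s "zpool status" admin@example.com
```

Scrubs run in the background and throttle themselves against foreground I/O, so a weekly run on a small mirror is cheap insurance.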
|
# ? Jun 24, 2017 08:08 |
|
IOwnCalculus posted:FYI: It is indeed possible to import a ZFS pool from FreeNAS to ZFS-on-Linux, since the multi_vdev_dump (or whatever it's called) flag is not typically actually used by FreeNAS. For shits and grins, here's what shows at the end of a 'sudo zpool get all tank': code:
|
# ? Jun 24, 2017 08:31 |
|
Desuwa posted:A ZVOL is a type of dataset, though typically I've seen dataset used interchangeably with file system, and I'm guilty of doing that myself most of the time. You'll have most of the benefits a normal file system would have, including copy on write and snapshots, but ZFS can only see it as a block device. Hmm. If I'm gonna get the full benefits of ZFS by using datasets on my guests then I'm prepared to compile the latest libvirt, even if it means starting again with a CentOS installation. The reason I'm thinking this way is that my current setup is gonna be fine for small zvols under 10GB, but what happens with my big 500GB media zvol when it gets snapshotted inside a 1TB mirror pair? I'll maybe start getting tight for space somewhere down the line. Gonna think about it and look for guides on compiling libvirt. I'm typically a package-management kind of guy and not used to compiling stuff.
|
# ? Jun 24, 2017 08:38 |
|
apropos man posted:Hmm. If I'm gonna get the full benefits of ZFS by using datasets on my guests then I'm prepared to compile the latest libvirt. Even if it means starting again with a CentOS installation. The reason I'm thinking this way is that my current setup is gonna be fine for small zvols under 10GB but what happens with my big 500GB media zvol when it gets snapshotted inside a 1TB mirror pair? I'll maybe start getting tight for space somewhere down the line. I would just use a regular dataset if you are not exporting stuff to another server. It makes things much easier and more efficient. If you are exporting something like iSCSI LUNs, zvols are really awesome because they are essentially block devices.
|
# ? Jun 24, 2017 08:41 |
|
Yeah. I like the way I can just mount a zvol in a VM guest that I created on (and can also fill with data from) the host. What's holding me back from just using a zvol is that as far as ZFS is concerned it's just seeing and mirroring a lump of storage, without all the lovely nuance and granular file-level coordination that comes with a dataset. That's how I'm seeing it, although I could be looking at this wrongly.
|
# ? Jun 24, 2017 09:30 |
|
apropos man posted:Hmm. If I'm gonna get the full benefits of ZFS by using datasets on my guests then I'm prepared to compile the latest libvirt. Even if it means starting again with a CentOS installation. The reason I'm thinking this way is that my current setup is gonna be fine for small zvols under 10GB but what happens with my big 500GB media zvol when it gets snapshotted inside a 1TB mirror pair? I'll maybe start getting tight for space somewhere down the line. I don't think libvirt will do anything for you besides automate the creation and destruction of ZVOLs as you create and destroy VMs; it's not going to change how ZVOL snapshots work. If you're storing 500GB of media you should probably just be creating a regular file system and sharing it with the VM through NFS/SMB or virtio_9p, not making a 500GB virtual disk for the media VM. I assumed you were keeping the VMs separate from the data storage; I would highly recommend keeping VMs and media in separate datasets. Do you really want to roll your media back if you accidentally trash your VM? Only use ZVOLs where there's a need for them, such as VM images that need to be formatted with a different file system. Even then I'd personally just create a dataset and use qcow2 images. The only ZVOL I use on my server is a 1TB encrypted volume, because ZFS has no native encryption yet, but even that will be unnecessary soon(tm). My opinion is you should keep it simple and only share things once, using one method, so if you need SMB to share things with Windows/mobile clients, just run Samba on the system that's running ZFS and use that to share it with everything else, local and remote. If you don't like that from a security perspective you could do something more involved with Samba in a chroot or VM, but one way or another you're giving Samba read/write access to all of your files. That's where ZFS snapshots come in; any malware will need root permissions on the system running ZFS to destroy your snapshots.
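The "share once, from the ZFS host" approach could look like this, using ZFS's built-in NFS property. The dataset name, subnet, and hostname are all assumptions for the sake of the sketch:

```shell
# Media lives in its own dataset, separate from the VM images
zfs create tank/media

# Let ZFS manage the NFS export (the host's NFS server must be running)
zfs set sharenfs="rw=@192.168.1.0/24" tank/media

# Inside a guest, mount it like any other NFS share
mount -t nfs nas.local:/tank/media /mnt/media
```

Snapshots of `tank/media` then protect the data independently of whatever happens to the media VM.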
|
# ? Jun 24, 2017 10:22 |
|
apropos man posted:Yeah. I like the way I can just mount a zvol in a VM guest that I created on (and can also fill with data from) the host. I'm horrified by what's being implied here. You do not want to mount a device of any kind on both the guest and the host at the same time, and ZFS is not going to protect you from the data corruption you can cause by doing this. Just share a dataset (as in a ZFS file system) "normally" with NFS/SMB/9p/whatever and mount that inside the guest. e: It's not actually clear to me what you're doing. What I'm imagining is that you've got a single ZVOL formatted with XFS/ext4 with your VM installed on it, and you're also mounting that block device on the host to fill it with media, which is about the worst possible way to share storage. Desuwa fucked around with this message at 10:45 on Jun 24, 2017 |
# ? Jun 24, 2017 10:26 |
|
Where did I say I was mounting them at the same time?
|
# ? Jun 24, 2017 10:34 |
|
D. Ebdrup posted:If you need RDMA for 10G, you need to consider an OS that's better at pushing bits. More a matter of keeping CPU utilization down on the client-side. Also, it's nice that it can saturate 100G, but under what conditions, i.e. core count and whether it's pure TCP throughput or with something else on top.
|
# ? Jun 24, 2017 16:40 |
Combat Pretzel posted:More a matter of keeping CPU utilization down on the client-side. Also, it's nice that it can saturate 100G, but under what conditions, i.e. core count and whether it's pure TCP throughput or with something else on top. From what I can see researching this, block-based throughput goes up a lot going from iSCSI to enabling iSER or changing to SRP. It's very rare to find anything pushing 100Gbps, though, because almost everything capable of those speeds is done in FPGAs or ASICs.
|
|
# ? Jun 24, 2017 17:53 |
|
Let's see you hoover up all that RAM now, Crashplan. Running Crashplan inside of docker also makes it insanely easy to migrate. My Crashplan config files were inside the ZFS array itself, so even though I blew everything else away, I just had to redeploy it on the "new" server with the new paths mapped to where they already had been inside of the old container. It fired up as if it had just been rebooted.
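For anyone curious, the migration trick described above works because everything stateful lives in bind-mounted paths on the ZFS array. A docker run along these lines illustrates the idea; the image name and paths are assumptions, not the poster's exact setup:

```shell
# Config and backup data both live on the ZFS pool, not inside the container,
# so the container itself is disposable
docker run -d --name crashplan \
    -v /tank/appdata/crashplan:/config \
    -v /tank/backups:/storage \
    some/crashplan-image   # hypothetical image name
```

Redeploying on a new host is then just re-running the same command with the same bind mounts, and CrashPlan picks up where it left off.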
|
# ? Jun 24, 2017 20:35 |
IOwnCalculus posted:Let's see you hoover up all that RAM now, Crashplan. Nice! How much are you backing up and what sort of heap size did you give it? I've got 8TB and gave it 8GB heap but it seems to spin out with crazy CPU usage sometimes.
|
|
# ? Jun 24, 2017 22:15 |
|
About 14TB, and apparently something like 800k files? Hard to say because I have a few different sets and the files I care about most are represented in multiple sets (local backup, backup to my rackmount at work, backup to Crashplan's cloud). I have the Java app set to 12GB max RAM and haven't run out since I did that.
|
# ? Jun 24, 2017 22:24 |
|
EssOEss posted:Then I got two BSODs during the optimization. IRQL_NOT_LESS_OR_EQUAL. In 90% of the cases where I have seen this in the past it has been a driver issue, but hard disks don't have drivers, do they? At least I can't find anything on the WD website to install. Update: I found drivers for my SATA controller! They had a 2013 date on them, which made me hesitate, but after installing them the BSODs went away (at least it seems so for now).
|
# ? Jun 25, 2017 08:08 |
|
|
|
So I spent a while trying to get the latest libvirt compiled on CentOS. This was my method:code:
So I've torn the whole thing down (again) and installed Ubuntu 16.04 Server, since Ubuntu ships the (CDDL-licensed, non-GPL) ZFS modules in its repositories. It just works out of the box once the repository ZFS packages are installed. code:
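The command output above didn't survive the archive, but the Ubuntu 16.04 sequence is roughly this (pool name assumed to be the same `tank` from earlier posts):

```shell
# ZFS ships in Ubuntu's own repos; no third-party kernel modules needed
apt-get install zfsutils-linux

# Import the pool created on the previous install and check it came back clean
zpool import tank
zpool status tank
```

Since the pool was cleanly exported before the reinstall, the import brings back datasets, mountpoints, and properties as they were.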
|
# ? Jun 25, 2017 09:08 |