Mr Shiny Pants
Nov 12, 2012

apropos man posted:

Yeah. I think I jumped to a conclusion there before I'd had a chance to look into it. I'm gonna have a read up about it after work and I'll probably go with one big pool like you said but somehow partitioning it off to give a discrete amount of storage to each VM. I'm looking forward to tinkering with ZFS. Cheers!

You can use zvols as partitions, or create datasets with quotas. I would just use KVM qcow files or something similar that are sparsely created and have a maximum size that you like.

For my VMs I just have a mirrored zpool of 2 SSDs, and every VM is stored in qcow2 files. When I was using Infiniband I had zvols that were exported as raw SCSI devices from my server to my Windows machine. Both work well, but zvols are a bit of a black box because they don't have a filesystem that you can browse from your NAS; they are raw block devices.
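
If it helps, a rough sketch of both approaches (pool, dataset, and file names are made up, adjust to taste):

code:
# a dataset per VM, capped with a quota
zfs create -o quota=100G pool0/vm-data
# or a sparse qcow2 image with a 100G ceiling; it only uses space as it's written
qemu-img create -f qcow2 /pool0/vm-data/disk0.qcow2 100G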


DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

apropos man posted:

I'm looking forward to tinkering with ZFS. Cheers!

One of us! One of us!

Mr Shiny Pants
Nov 12, 2012

DrDork posted:

One of us! One of us!

The problem with ZFS is that once you start using it, everything else is just meh.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Mr Shiny Pants posted:

The problem with ZFS is that once you start using it, everything else is just meh.

I'll take that as further recommendation!

I want to use full disk encryption and I shall be using SELinux extended attributes. This is copacetic with ZFS, yes?

I'd also like the storage to be independent of the VM disk. Someone suggested adding it to the thin provisioned VM storage but I'd like separation in case I tear the VM down.

Edit: actually, qcows might work as individual storage units. I'll have a look in a couple hours.

apropos man fucked around with this message at 14:10 on Jun 23, 2017

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Mr Shiny Pants posted:

The problem with ZFS is that once you start using it, everything else is just meh.

I regret using it.

For the guy making multiple filesystems: think long and hard about your reasons for doing it. It can make things more difficult in the future when you run out of space. What are you gaining that's useful to you that isn't served by just using a couple folders?

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
The main reason I've decided to segregate and attach to separate VMs is all the trouble I had in setting up NFS and/or samba with Plex and/or Emby.

I think I'll use ZFS as an overall pool and attach a 500GB qcow directly to my media serving VM, then attach a 20GB qcow directly to my 'important files' VM.

All other ZFS space will be fair game for whatever I decide to do with it later.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Meh, that FreeNAS doesn't do Infiniband/SRP kind of irks me. Illumos is an idea, but I'd like a frontend; napp-it, however, gives me '90s vibes. I'm not sure what to make of Nexenta. It looks like what I want, but the Community Edition seems to have become a red-headed stepchild.

Also, gently caress Intel for not doing RDMA on their 10GbE adapters.

EL BROMANCE
Jun 10, 2006

COWABUNGA DUDES!
🥷🐢😬



I'm having a nightmare with SoftRaid on my Mac. The previous dot revision runs the easy setup fine until the very end, when it gives an Unexpected Error warning and no other detail. The absolute newest version, even as a trial, refuses to recognise that Easy Setup is in the same folder as SoftRaid itself.

Right now I can't work out if it's the drives (new to me, but fully tested by Don Lapre on here before sale so I don't think it's them), my Mobius 2 Bay enclosure (new to me, Amazon warehouse deal because of a bit of damage to the packaging but appeared to be brand new inside), or SoftRaid on a brand new macOS install. I'm tempted to just run the disks using the Mobius as the RAID handler (I'm doing a straight mirror between the two drives, and someone who had a unit fail said the data integrity between the two drives was bit perfect and had no problem getting his data off) and save myself the headache. Anyone ever had a similar issue and might be able to point out where the problem lies?

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Would running a qcow inside ZFS constitute nested copy on write, since ZFS does it anyway? Maybe I'm giving too much credence to acronyms here.

Mr Shiny Pants
Nov 12, 2012

Thermopyle posted:

I regret using it.

For the guy making multiple filesystems: think long and hard about your reasons for doing it. It can make things more difficult in the future when you run out of space. What are you gaining that's useful to you that isn't served by just using a couple folders?

Why? You doing multiple VDEVS with the need to expand? Or rebalancing issues?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Mr Shiny Pants posted:

Why? You doing multiple VDEVS with the need to expand? Or rebalancing issues?

Expansion is the big one; rebalancing can be a problem as well, and so can the system requirements.

For my usage (bulk media storage), I just don't get any benefits out of ZFS that outweigh the fact that I've got to spend a thousand dollars each time I want more storage. I wish I was using snapraid, and my next big upgrade will be to move over to that, but I've got something like 40TB I'll have to copy off of ZFS, so it's going to be expensive.

ZFS is undoubtedly cooler in a nerdy kind of way.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

Meh, that FreeNAS doesn't do Infiniband/SRP kind of irks me.
Also, gently caress Intel for not doing RDMA on their 10GbE adapters.
I wonder if that's reserved for the TrueNAS thing that iXsystems sell - wouldn't surprise me if it was.
If you need RDMA for 10G, you need to consider an OS that's better at pushing bits.

FreeBSD can do both Infiniband/SFP+ and almost saturate 100G TCP with ~40% CPU utilization (~30% once a change from Netflix is upstreamed), and can do root on ZFS with beadm.

apropos man posted:

Would running a qcow inside ZFS constitute nested copy on write, since ZFS does it anyway? Maybe I'm giving too much credence to acronyms here.
I don't see how doing COW on top of ZFS can hurt, but I'd recommend a lot of testing before you put anything into its final configuration.


All that being said, ZFS is neat and cool, but it also has certain downsides if you're not willing to put up with the fact that it was designed for an entirely different market segment, despite scaling well.

Mr Shiny Pants
Nov 12, 2012

I really want to like FreeBSD, but in my opinion it is almost in the same boat as Solaris: all the cool stuff is coming to Linux first. Which is a shame, really.

SamDabbers
May 26, 2003



It's ok to like FreeBSD :shobon:

Some cool stuff may be implemented on Linux first, but when FreeBSD releases it, it's generally quite polished. Occasionally, the cool stuff is on FreeBSD first.

SamDabbers fucked around with this message at 17:43 on Jun 23, 2017

Volguus
Mar 3, 2009

SamDabbers posted:

It's ok to like FreeBSD :shobon:

Some cool stuff may be implemented on Linux first, but when FreeBSD releases it, it's generally quite polished. Occasionally, the cool stuff is on FreeBSD first.

All the cool stuff is on OpenBSD first. Sometimes FreeBSD adopts it, changes it into an unrecognizable and unmergeable mess, and lets it rot (e.g. pf). Sometimes, if the stars align, it makes it into Linux. The only thing that the NAS people would want that isn't in OpenBSD (for good reason) is ZFS. For me that's perfectly fine, but for some it isn't.

BlankSystemDaemon
Mar 13, 2009



SamDabbers posted:

It's ok to like FreeBSD :shobon:
Some people, mostly Linux and OpenBSD zealots, think differently.

Volguus posted:

All the cool stuff is on OpenBSD first. Sometimes FreeBSD adopts it, changes it into an unrecognizable and unmergeable mess, and lets it rot (e.g. pf). Sometimes, if the stars align, it makes it into Linux. The only thing that the NAS people would want that isn't in OpenBSD (for good reason) is ZFS. For me that's perfectly fine, but for some it isn't.
I'm hugely biased because I've been using FreeBSD as my primary (non-gaming) OS since 2001, but there are plenty of good reasons for using FreeBSD outside of a NAS/server solution.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

D. Ebdrup posted:

I'm hugely biased because I've been using FreeBSD as my primary (non-gaming) OS since 2001, but there are plenty of good reasons for using FreeBSD outside of a NAS/server solution.

Yup. For one, it's free. For two, it sounds like a real trippy psychedelic. For free. How much better can you get!? :catdrugs:

Volguus
Mar 3, 2009

D. Ebdrup posted:

Some people, mostly Linux and OpenBSD zealots, think differently.

I'm hugely biased because I've been using FreeBSD as my primary (non-gaming) OS since 2001, but there are plenty of good reasons for using FreeBSD outside of a NAS/server solution.

I used it too as my primary OS back in the 4.x days. Early 2000s something, it was. Had to go back to Linux a few months after 5.0 got released. Those were not good times. On the home gateway though, OpenBSD only. Since ... forever (3.something version). From my point of view the BSDs always felt more coherent. More ... engineered. Things, everything, just blended and worked together really nicely. As opposed to Linux distributions, where I always had the feeling that it's just a bunch of programs thrown together that may or may not like each other depending on the time of day.

BlankSystemDaemon
Mar 13, 2009



DrDork posted:

Yup. For one, it's free. For two, it sounds like a real trippy psychedelic. For free. How much better can you get!? :catdrugs:
That's certainly one more reason. Good thing they didn't call it FreefallBSD, or a lot of people might've thrown themselves out of airplanes to acquire it.

Volguus posted:

I used it too as my primary OS back in the 4.x days. Early 2000s something, it was. Had to go back to Linux a few months after 5.0 got released. Those were not good times. On the home gateway though, OpenBSD only. Since ... forever (3.something version). From my point of view the BSDs always felt more coherent. More ... engineered. Things, everything, just blended and worked together really nicely. As opposed to Linux distributions, where I always had the feeling that it's just a bunch of programs thrown together that may or may not like each other depending on the time of day.
5.0 was definitely rough, but it's not as if FreeBSD is exactly what it was then.
Speaking of differences between Linux and BSD, there are two apocryphal quotes about it, namely: "BSD is designed while Linux is grown" and "BSD is what you get when a bunch of Unix hackers sit down to try to port a Unix system to the PC. Linux is what you get when a bunch of PC hackers sit down and try to write a Unix system for the PC." Both illustrate the strengths of either process to anyone familiar with both systems, and anyone who isn't has the option of discovering what exactly those quotes mean.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I installed zfs and made a mirrored zpool featuring two volumes.

When I tried to add it to virsh in CentOS 7.3 I got this:

code:
# virsh pool-define-as --name zpool0 --source-name pool0 --type zfs
error: Failed to define pool zpool0
error: internal error: missing backend for pool type 11 (zfs)
I'm really liking the way zfs handles the two datasets I've currently built. I searched for how to update libvirt on CentOS to accommodate zfs but couldn't find anything apart from a couple of posts about compiling libvirt.

I decided to tear everything down (export the zpool) and install Fedora server, because it's got more up-to-date packages than CentOS.

I get the same error on Fedora. I've been configuring this server all week. Now it's back to a plain Fedora server install with one plain VM.

I'm calling it a night. I'd like to know if Debian or something allows 'virsh pool define.... -type zfs' and I'll make the switch. I'd try and compile it myself if I could find a relatively simple guide, as I'm not used to compiling stuff.

Some of the posts I saw mentioning this problem were a year old. ZFS 'pool-define-as' must be compiled into one of the mainstream distros' versions of libvirt by now, surely?

Tired and frustrated with this now.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!
Seems like libvirt isn't compiled with ZFS support by default because the support is still experimental. You'll have to compile it yourself.

I'd just create a new dataset for VM images, mount it, and use it as a directory pool. Using a dataset instead of ZVOLs (as I imagine libvirt uses) also saves you a decent chunk of space when creating ZFS snapshots due to the different guarantees ZFS makes for datasets and ZVOLs.
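
Something along these lines should do it (pool and dataset names are just examples):

code:
# make a dataset for VM images and let ZFS mount it at /tank/vm-images
zfs create tank/vm-images
# point libvirt at it as a plain directory pool
virsh pool-define-as --name vm-images --type dir --target /tank/vm-images
virsh pool-start vm-images
virsh pool-autostart vm-images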

IOwnCalculus
Apr 2, 2003





FYI: It is indeed possible to import a ZFS pool from FreeNAS to ZFS-on-Linux, since the multi_vdev_dump (or whatever it's called) flag is not typically actually used by FreeNAS.

Annoyingly, one of my Reds is starting to throw errors now.

Edit: I did end up having to do this to get it to actually import everything properly and not freak out every reboot.

code:
sudo zpool export tank
sudo zpool import -d /dev/disk/by-id tank

IOwnCalculus fucked around with this message at 06:36 on Jun 24, 2017

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

IOwnCalculus posted:

FYI: It is indeed possible to import a ZFS pool from FreeNAS to ZFS-on-Linux, since the multi_vdev_dump (or whatever it's called) flag is not typically actually used by FreeNAS.

Annoyingly, one of my Reds is starting to throw errors now.

Edit: I did end up having to do this to get it to actually import everything properly and not freak out every reboot.

code:
sudo zpool export tank
sudo zpool import -d /dev/disk/by-id tank

Yeah. Before I tore everything down last night I did the export zpool thing and then imported it back into my new Fedora installation. It was very easy. I made sure I'd created my mount points for the datasets exactly the same before I imported, so that may have helped.

I've had a few hours sleep now and ready to batter away at this again. Next is to make myself aware of the difference between a dataset and a zvol...

phosdex
Dec 16, 2005

apropos man posted:

Next is to make myself aware of the difference between a dataset and a zvol...

A zvol would be used if you're going to be using iSCSI and handing control of the file system to some other OS. You can have a dataset and a zvol (or multiples of both) on the same zpool.
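
For example, something like this on a Linux box with targetcli (names made up, and you'd still need to set up the target/ACLs afterwards):

code:
# block device to hand to the initiator
zfs create -V 50G tank/vm-lun0
# export it with targetcli (LIO)
targetcli /backstores/block create name=vm-lun0 dev=/dev/zvol/tank/vm-lun0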

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

phosdex posted:

A zvol would be used if you're going to be using iSCSI and handing control of the file system to some other OS. You can have a dataset and a zvol (or multiples of both) on the same zpool.

Thanks. I've successfully created a zvol on my host system and mounted it on the guest:

On the host I did:
code:
# zfs create -ps -o compression=on -V 10G pool0/newzvol
# virsh
virsh # attach-disk guestname --source /dev/pool0/newzvol --target vdb
It appeared (as /dev/vda) on the guest and I created a suitable mount point and fstab entry, so I have a working dedicated storage passthrough from host to guest, which is what I was fannying around all last night trying to do. :cawg:
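The guest side is just the usual format-and-mount, roughly like this (device name and mount point will vary with your setup):
code:
# inside the guest: format the new disk and mount it
mkfs.xfs /dev/vdb
mkdir -p /srv/data
echo '/dev/vdb  /srv/data  xfs  defaults  0 0' >> /etc/fstab
mount /srv/data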

So are there any other options I should be specifying for my zvol apart from 'compression=on'?

How is my zvol handled by ZFS internally on the actual drives? Do I have all the benefits of a dataset or have I mitigated some of the benefits by using a zvol instead of a dataset? Can ZFS still see the individual files being written to my mirror pool or are they just seen as a block device?

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

apropos man posted:

How is my zvol handled by ZFS internally on the actual drives? Do I have all the benefits of a dataset or have I mitigated some of the benefits by using a zvol instead of a dataset? Can ZFS still see the individual files being written to my mirror pool or are they just seen as a block device?

A ZVOL is a type of dataset, though typically I've seen dataset used interchangeably with file system, and I'm guilty of doing that myself most of the time. You'll have most of the benefits a normal file system would have, including copy on write and snapshots, but ZFS can only see it as a block device.

The one major gotcha with ZVOLs compared to file systems is that snapshots take more space (though they're still incremental, so a hundred snapshots won't take a hundred times the space) since ZFS has to guarantee that every bit in that ZVOL can be changed, unlike a file system, which will just consume free space until it runs out. Since you're only using a 10GB ZVOL on 1TB disks this isn't really a problem.


Other things you're going to want to do on ZFS are set up weekly/monthly scrubs (verify all data in the pool against the checksums) and figure out how you want to set up snapshots, since they're not automatic and ZFS has no built in way to automate them. I've got a little script that emails me the status of my pool every night and I run weekly scrubs.
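
A bare-bones version of that, assuming cron, a pool called tank, and a working local mail setup (all of which are just examples):

code:
# /etc/cron.d/zfs-maintenance
# weekly scrub, Sunday 03:00
0 3 * * 0  root  /sbin/zpool scrub tank
# nightly snapshot of the data filesystem, named by date
0 1 * * *  root  /sbin/zfs snapshot tank/data@auto-$(date +\%F)
# morning status mail
30 8 * * *  root  /sbin/zpool status tank | mail -s "zpool status" you@example.com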

IOwnCalculus
Apr 2, 2003





IOwnCalculus posted:

FYI: It is indeed possible to import a ZFS pool from FreeNAS to ZFS-on-Linux, since the multi_vdev_dump (or whatever it's called) flag is not typically actually used by FreeNAS.

For shits and grins, here's what shows at the end of a 'sudo zpool get all tank':

code:
tank  feature@async_destroy                         enabled                                       local
tank  feature@empty_bpobj                           active                                        local
tank  feature@lz4_compress                          active                                        local
tank  feature@spacemap_histogram                    active                                        local
tank  feature@enabled_txg                           active                                        local
tank  feature@hole_birth                            active                                        local
tank  feature@extensible_dataset                    enabled                                       local
tank  feature@embedded_data                         active                                        local
tank  feature@bookmarks                             enabled                                       local
tank  feature@filesystem_limits                     enabled                                       local
tank  feature@large_blocks                          enabled                                       local
tank  unsupported@org.illumos:skein                 inactive                                      local
tank  unsupported@org.illumos:sha512                inactive                                      local
tank  unsupported@com.joyent:multi_vdev_crash_dump  inactive                                      local

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Desuwa posted:

A ZVOL is a type of dataset, though typically I've seen dataset used interchangeably with file system, and I'm guilty of doing that myself most of the time. You'll have most of the benefits a normal file system would have, including copy on write and snapshots, but ZFS can only see it as a block device.

The one major gotcha with ZVOLs compared to file systems is that snapshots take more space (though they're still incremental, so a hundred snapshots won't take a hundred times the space) since ZFS has to guarantee that every bit in that ZVOL can be changed, unlike a file system, which will just consume free space until it runs out. Since you're only using a 10GB ZVOL on 1TB disks this isn't really a problem.


Other things you're going to want to do on ZFS are set up weekly/monthly scrubs (verify all data in the pool against the checksums) and figure out how you want to set up snapshots, since they're not automatic and ZFS has no built in way to automate them. I've got a little script that emails me the status of my pool every night and I run weekly scrubs.

Hmm. If I'm gonna get the full benefits of ZFS by using datasets on my guests then I'm prepared to compile the latest libvirt. Even if it means starting again with a CentOS installation. The reason I'm thinking this way is that my current setup is gonna be fine for small zvols under 10GB, but what happens with my big 500GB media zvol when it gets snapshotted inside a 1TB mirror pair? I'll maybe start getting tight for space somewhere down the line.

Gonna think about it and look for guides on compiling libvirt. I'm typically a package management kind of guy and not used to compiling stuff.

Mr Shiny Pants
Nov 12, 2012

apropos man posted:

Hmm. If I'm gonna get the full benefits of ZFS by using datasets on my guests then I'm prepared to compile the latest libvirt. Even if it means starting again with a CentOS installation. The reason I'm thinking this way is that my current setup is gonna be fine for small zvols under 10GB, but what happens with my big 500GB media zvol when it gets snapshotted inside a 1TB mirror pair? I'll maybe start getting tight for space somewhere down the line.

Gonna think about it and look for guides on compiling libvirt. I'm typically a package management kind of guy and not used to compiling stuff.

I would just use a regular dataset if you are not exporting stuff to another server. Makes it much easier and more efficient. If you are exporting something like iSCSI LUNs, zvols are really awesome because they are essentially block devices.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Yeah. I like the way I can just mount a zvol in a VM guest that I created on (and can also fill with data from) the host.

What's holding me back from just using a zvol is that as far as ZFS is concerned it's just seeing and mirroring a lump of storage, without all the lovely nuance and granular file-level coordination that comes with a dataset.

That's how I'm seeing it, although I could be looking at this wrongly.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

apropos man posted:

Hmm. If I'm gonna get the full benefits of ZFS by using datasets on my guests then I'm prepared to compile the latest libvirt. Even if it means starting again with a CentOS installation. The reason I'm thinking this way is that my current setup is gonna be fine for small zvols under 10GB, but what happens with my big 500GB media zvol when it gets snapshotted inside a 1TB mirror pair? I'll maybe start getting tight for space somewhere down the line.

Gonna think about it and look for guides on compiling libvirt. I'm typically a package management kind of guy and not used to compiling stuff.

I don't think libvirt will do anything for you besides automate the creation and destruction of ZVOLs as you create and destroy VMs. It's not going to change how ZVOL snapshots work. If you're storing 500GB of media you should probably just be creating a regular file system and sharing it with the VM either through NFS/SMB or virtio_9p, not making a 500GB virtual disk for the media VM. I assumed you were keeping the VMs separate from the data storage; I would highly recommend keeping VMs and media in separate datasets. Do you really want to roll your media back if you accidentally trash your VM?

Only use ZVOLs where there's a need for it, such as VM images that need to be formatted with a different file system. Even then I'd personally just create a dataset and use qcow2 images. The only ZVOL I use on my server is a 1TB encrypted volume because ZFS has no native encryption yet, but even that will be unnecessary soon(tm).
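
The general shape of an encrypted zvol is LUKS (or similar) on top of the block device; a minimal sketch with made-up names:

code:
# sparse zvol, then dm-crypt/LUKS and a normal filesystem on top
zfs create -s -V 1T tank/crypt
cryptsetup luksFormat /dev/zvol/tank/crypt
cryptsetup open /dev/zvol/tank/crypt cryptvol
mkfs.ext4 /dev/mapper/cryptvol
mkdir -p /secure && mount /dev/mapper/cryptvol /secure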

My opinion is you should keep it simple and only share things once, using one method, so if you need SMB to share things with Windows/mobile clients, just run Samba on the system that's running ZFS and use that to share it with everything else, local and remote. If you don't like that from a security perspective you could do something more involved with Samba in a chroot or VM, but one way or another you're giving Samba read/write access to all of your files. That's where ZFS snapshots come in; any malware will need root permissions on the system running ZFS to destroy your snapshots.
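
As a sketch of the "share it once from the ZFS box" idea (dataset, path, and user are made up; the NFS bit assumes the NFS server is installed):

code:
# NFS straight from the ZFS property
zfs set sharenfs=on tank/media

# or a plain Samba share of the dataset's mountpoint, in /etc/samba/smb.conf
[media]
   path = /tank/media
   read only = no
   valid users = youruser
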

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

apropos man posted:

Yeah. I like the way I can just mount a zvol in a VM guest that I created on (and can also fill with data from) the host.

I'm horrified by what's being implied here. You do not want to mount a device of any kind on both the guest and the host at the same time and ZFS is not going to protect you from the data corruption you can cause by doing this. Just share a dataset (as in ZFS file system) "normally" with NFS/SMB/9p/whatever and mount that inside the guest.

e: It's not actually clear to me what you're doing. What I'm imagining is that you've got a single ZVOL formatted with XFS/ext4 with your VM installed on it, and you're also mounting that block device on the host to fill it with media, which is about the worst possible way to share storage.
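
If the host does need normal filesystem access to the same data, virtio-9p is the straightforward way to hand a dataset's mountpoint to a guest; roughly (path and mount tag are made up):

code:
# host: add to the guest's XML with 'virsh edit guestname'
<filesystem type='mount' accessmode='mapped'>
  <source dir='/tank/media'/>
  <target dir='media'/>
</filesystem>

# guest: mount the tag as a 9p filesystem
mount -t 9p -o trans=virtio,version=9p2000.L media /mnt/media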

Desuwa fucked around with this message at 10:45 on Jun 24, 2017

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Where did I say I was mounting them at the same time?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

D. Ebdrup posted:

If you need RDMA for 10G, you need to consider an OS that's better at pushing bits.

FreeBSD can do both Infiniband/SFP+ and almost saturate 100G TCP with ~40% CPU utilization (~30% once a change from Netflix is upstreamed), and can do root on ZFS with beadm.
More a matter of keeping CPU utilization down on the client-side. Also, it's nice that it can saturate 100G, but under what conditions, i.e. core count and whether it's pure TCP throughput or with something else on top. From what I can see researching this, block based throughput goes up a lot going from iSCSI to enabling iSER or changing to SRP.

BlankSystemDaemon
Mar 13, 2009



Combat Pretzel posted:

More a matter of keeping CPU utilization down on the client-side. Also, it's nice that it can saturate 100G, but under what conditions, i.e. core count and whether it's pure TCP throughput or with something else on top. From what I can see researching this, block based throughput goes up a lot going from iSCSI to enabling iSER or changing to SRP.
From memory, because I'm not sure any of this has been published and it's mostly stuff I've heard told to people at conferences, Netflix are doing ~94Gbps TCP with TLS in-kernel (because of DRM fulfillment) to ~20k customers per box with their OpenConnect CDN servers (which host their actual video content, each running on an E5-2697v3 - content is plain HTTPS with HTML5 video; i.e. h264 and h265), and Chelsio has demonstrated ~95Gbps at 8% sending load and 1% receiving load, as well as 100Gbps iSCSI (through offload).

It's very rare to find anything pushing 100Gbps though, because almost everything capable of those speeds is done in an FPGA or ASIC.

IOwnCalculus
Apr 2, 2003





Let's see you hoover up all that RAM now, Crashplan. :smuggo:



Running Crashplan inside of docker also makes it insanely easy to migrate. My Crashplan config files were inside the ZFS array itself, so even though I blew everything else away, I just had to redeploy it on the "new" server with the new paths mapped to where they already had been inside of the old container. It fired up as if it had just been rebooted.
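
For anyone wanting to replicate that, the important bit is just that the config directory is a bind mount living on the pool; roughly (image name and paths are hypothetical):

code:
# config/cache live on the ZFS pool so a rebuilt container picks up where it left off
docker run -d --name crashplan \
  -v /tank/appdata/crashplan:/config \
  -v /tank/media:/data:ro \
  some/crashplan-image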

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

IOwnCalculus posted:

Let's see you hoover up all that RAM now, Crashplan. :smuggo:



Running Crashplan inside of docker also makes it insanely easy to migrate. My Crashplan config files were inside the ZFS array itself, so even though I blew everything else away, I just had to redeploy it on the "new" server with the new paths mapped to where they already had been inside of the old container. It fired up as if it had just been rebooted.

Nice! How much are you backing up and what sort of heap size did you give it? I've got 8TB and gave it 8GB heap but it seems to spin out with crazy CPU usage sometimes.

IOwnCalculus
Apr 2, 2003





About 14TB, and apparently something like 800k files? Hard to say because I have a few different sets and the files I care about most are represented in multiple sets (local backup, backup to my rackmount at work, backup to Crashplan's cloud). I have the Java app set to 12GB max RAM and haven't run out since I did that.

EssOEss
Oct 23, 2006
128-bit approved

EssOEss posted:

Then I got two BSODs during the optimization. IRQL_NOT_LESS_OR_EQUAL. 90% of the time when I have seen this in the past it has been a driver issue, but hard disks don't have drivers, do they? At least I can't find anything on the WD website to install.

Update: I found drivers for my SATA controller! They had a 2013 date on them, which made me hesitate, but after installing them the BSODs went away (at least it seems so for now).


apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
So I spent a while trying to get the latest libvirt compiled on CentOS. This was my method:

code:
Make sure original libvirtd is running and all dependencies met by creating a dummy VM guest.
...
Grab latest libvirt tarball from http://libvirt.org/sources
I got 3.4.0:
wget http://libvirt.org/sources/libvirt-3.4.0.tar.xz

Untar it and cd to it.
Iteratively run ./configure whilst installing dependencies to get a list of what's needed:

Requirements I needed to install on a fresh CentOS 7.3 system:

yum install gcc libxml2-devel yajl-devel device-mapper-devel libpciaccess-devel libnl-devel

Once all dependencies are satisfied, run this:

./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc

make
make install
ldconfig
Rebooting and checking with 'libvirtd --version' confirmed I was running 3.4.0 instead of 2.0.0 which currently comes with CentOS. Unfortunately I was still getting a failure due to backend incompatibility when I tried to import a zpool into virsh with 'pool-define-as --name zfspool4kvm --source-name pool0 --type zfs'.

So I've torn the whole thing down (again) and installed Ubuntu 16.04 server, since Ubuntu comes with ZFS licenses and presumably the non-GPL version of ZFS.
It just works out of the box, once the repository zfs packages are installed.
code:
virsh # pool-define-as --name zfspool4kvm --source-name pool0 --type zfs
Pool zfspool4kvm defined
Now I just need to turn off automounting of the zpool in my host and attach it to my guest. I'm presuming that compiling the latest for CentOS isn't working for one of two reasons: either it's not libvirtd that's incompatible, or the ZFSonLinux project just doesn't yet support that function.
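
Turning off the automount should just be a dataset property, something like:
code:
# stop the host from mounting the pool's filesystem at boot
zfs set canmount=noauto pool0
# or, if the host never needs it mounted at all
zfs set mountpoint=none pool0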
