Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

apropos man posted:

Now I just need to turn off automounting of the zpool in my host and attach it to my guest. I'm presuming that compiling the latest for CentOS isn't working for one of two reasons: either libvirtd is incompatible, or the ZFSonLinux project just doesn't yet support that function.

It's not that libvirt is incompatible; it's that most distros don't build it with ZFS support by default (which is why you're getting an error about a specific backend being missing rather than an unknown/invalid type). Ubuntu does build it with ZFS support enabled because they've started shipping ZoL in the base system, based on their particular interpretation of the CDDL and GPL. You'd need to have configured it with --enable-zfs, or whatever the particular feature is called, before you built it.
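
If you go that route, the build is roughly this (3.2.0 is just an example version, and double-check the exact flag name in ./configure --help for your release):

code:
# fetch and build libvirt with the ZFS storage backend compiled in
# (the flag may be spelled differently on your version)
wget https://libvirt.org/sources/libvirt-3.2.0.tar.xz
tar xf libvirt-3.2.0.tar.xz && cd libvirt-3.2.0
./configure --with-storage-zfs
make && sudo make install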


Again, I'm not certain exactly what you're trying to do, but if I do understand it, it's not going to work, and if I don't, it sounds far more complicated than necessary. You can't have a VM running from a ZVOL on an unimported pool, or have the VM somehow import the pool it's running on. I think you're approaching this the wrong way; it will be much easier to have one system own the entire pool, use ZVOLs only for VM images, and have that system share the file systems with everything else. Having the host import the pool will be easiest by far and means you won't need to nest VMs or otherwise depend on the ZFS VM being up to run the others.

Since you require SMB I'd just run samba on the host, but if you want to keep samba in a VM I'd share everything over the virtual network with NFS and give the samba VM the minimum access it needs.
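
As a rough sketch of the host-side NFS route (dataset path is a placeholder; 192.168.122.0/24 is just libvirt's default NAT network):

code:
# /etc/exports on the host: expose one dataset to the guests' virtual network
/tank/videos  192.168.122.0/24(rw,sync,no_subtree_check)

# host: reload the export table
sudo exportfs -ra

# guest: mount it (192.168.122.1 is the host on libvirt's default network)
sudo mount -t nfs 192.168.122.1:/tank/videos /mnt/videos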


If you really don't want the host to be able to see any files then you might be better off falling back to LVM to cut both disks into equal partitions and creating multiple pools, but that's an administrative mess: it sets the space available to each pool in stone, and it complicates growing the system later by just installing larger drives, if you ever wanted to.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

I still can't get what I'm trying to do to work in Ubuntu.

What I want is this (I may have got confused by all of this over the last couple of days, but my intention has always remained the same):

One host running Linux with virtualisation/KVM/ZFS.

This host is on an SSD, taking up the entire SSD.

Two WD Reds of the same size in a mirrored ZFS pool. The mirror pool spans the entire disk capacity and contains two datasets to start with; more will probably be added later. The pool is added to KVM.

Once the mirror is added to KVM it's not mounted on the host anymore. It is just an active ZFS pool, being hosted by the host OS.

Two guest VMs, both running from the host and using /var/lib/libvirt/images/xxxx.qcow2 on the SSD as their system drives. One dataset passed to each one independently by the host: one has videos on it and the other has docs.

-------------------------------------------------------------------------------------------

If I get this right then I add a third VM/dataset3, then a fourth VM/dataset4, etc. VM3 would have its system on the SSD as a qcow2, as would VM4 and so on.

I've managed to get everything working apart from the fact that I can't pass a ZFS dataset through to KVM/virsh. I don't know if there's a GUI that allows this, as I've been doing it over ssh.

I can easily pass a zvol through to KVM and I could have had all this wrapped up and done yesterday if I hadn't been so stubborn and insisted to myself that I need a dataset, not a zvol, passed through.

apropos man fucked around with this message at Jun 25, 2017 around 10:29

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

apropos man posted:

I still can't get what I'm trying to do to work in Ubuntu.

As far as I can tell what you're trying to do is not actually possible. It's akin to trying to mount the same device in the host and the guest at the same time.

apropos man posted:

Once the mirror is added to KVM it's not mounted on the host anymore. It is just an active ZFS pool, being hosted by the host OS.

A pool can only be imported on one machine at a time, and you can't interact with the pool without importing it. You may not be mounting the file systems on the host but the pool can't be imported on a VM without exporting it from the host.
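
To make that concrete, moving a pool between machines is always an explicit handoff ("tank" being a placeholder pool name):

code:
zpool export tank   # on the current owner; refuses if datasets are in use
zpool import tank   # on the machine taking ownership
zpool import        # with no name, just lists importable pools on attached disks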

If you're absolutely unwilling to share the ZFS file systems from the host to the guests with NFS/SMB/9p then you'll probably want to slice up the disks into equal partitions, pass pairs of partitions to each VM, and create mirrored pools inside each VM from those partitions. That's unnecessarily complicated, less flexible, and harder to administer because you've got to do everything two or three times, but it will maintain all the advantages of ZFS while also making whatever it is you want to do possible. I recommend against this.
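
For completeness, that discouraged route would look something like this (device names and sizes are hypothetical, and each VM builds its own mirror from the partitions it's given):

code:
# host: same two-slice layout on both disks
sgdisk -n 1:0:+2T -n 2:0:0 /dev/sda
sgdisk -n 1:0:+2T -n 2:0:0 /dev/sdb
# pass sda1+sdb1 to VM1 and sda2+sdb2 to VM2, then inside each guest:
zpool create tank mirror /dev/vdb /dev/vdc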

If you're content with the security from passing partitions through to a VM then you should be perfectly happy using NFS instead. NFS also serves to isolate ZFS from the VMs so that even if root gets compromised in a VM it can't access your snapshots, which only the host can see.

apropos man posted:

If I get this right then I add a third VM/dataset3, then a fourth VM/dataset4, etc. VM3 would have its system on the SSD as a qcow2, as would VM4 and so on.

I've managed to get everything working apart from the fact that I can't pass a ZFS dataset through to KVM/virsh. I don't know if there's a GUI that allows this, as I've been doing it over ssh.

I can easily pass a zvol through to KVM and I could have had all this wrapped up and done yesterday if I hadn't been so stubborn and insisted to myself that I need a dataset, not a zvol, passed through.

Virsh is not doing what you think it's doing. When you add the zfs pool with virsh, all you're doing is telling libvirt you want to store VM images on ZVOLs instead of another option, like qcow2 files. You'll have zpool/vm1-zvol instead of vm1.qcow2, not in addition to it.

It can't do anything to pass ZFS file systems through to VMs, and this comes down to the difference between devices and file systems. A ZVOL is effectively a block device, like one you might create in LVM, but a file system dataset is like ext4/XFS: it's what you'd put on that block device. For example, you can share /dev/sda1 with a VM, but you can't share the file system on /dev/sda1 alone.
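
If you do want the ZVOL-backed images, the whole interaction is about this (assuming an existing pool named "tank"):

code:
virsh pool-define-as zfspool zfs --source-name tank
virsh pool-start zfspool
virsh vol-create-as zfspool vm1-zvol 20G   # creates the block device tank/vm1-zvol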


e: QEMU has a built-in SMB server, which makes it really easy if all you want to do is share a single directory tree read/write with a VM. I'd honestly forgotten about it. That's probably the easiest way to do what you want.
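
It hangs off user-mode networking, something like this (paths are placeholders, and it needs smbd installed on the host since QEMU just spawns it):

code:
qemu-system-x86_64 -m 2G -drive file=vm1.qcow2 \
  -netdev user,id=net0,smb=/srv/share \
  -device virtio-net-pci,netdev=net0
# the guest then sees the share at \\10.0.2.4\qemu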

Desuwa fucked around with this message at Jun 25, 2017 around 12:27

evol262
Nov 30, 2010
#!/usr/bin/perl

apropos man posted:

Two guest VMs, both running from the host and using /var/lib/libvirt/images/xxxx.qcow2 on the SSD as their system drives. One dataset passed to each one independently by the host: one has videos on it and the other has docs.

There's basically no reason to do what you're describing. The performance gain is negligible, and the zvols won't be accessible elsewhere unless you use a clustered filesystem (or sanlock) to mediate access.

If you want it accessible from multiple systems, use nfs or smb or gluster or cephfs.

If you want performance, just use a zvol as a libvirt data pool and create qcows on it.

If you want native performance, export as iscsi LUNs from the host and use those. You can still do this with zvols and snapshot them, though using a regular zfs filesystem would still be better.

If you really, really need raw disks (Oracle, mostly), proceed with zvols. I'd still use iscsi...
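
The iSCSI route with targetcli is roughly this (IQN and names are made up for the example; you'd also add an ACL for your initiator):

code:
# host: export a zvol as an iSCSI LUN
targetcli /backstores/block create name=vm1disk dev=/dev/zvol/tank/vm1
targetcli /iscsi create iqn.2017-06.lan.host:vm1
targetcli /iscsi/iqn.2017-06.lan.host:vm1/tpg1/luns create /backstores/block/vm1disk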

What's the use case?

EssOEss
Oct 23, 2006
128-bit approved

Say, is there a better way to share a directory between a Windows (server) machine and a Linux (client) machine than SMB? I have a VM where I run some Linux stuff and I do not want to build a filesystem barrier between the two. So far, I have gotten around okay-ish with SMB but I notice the SMB connection seems to randomly drop once a day or so, crashing the lovely apps that try to use it and lack proper retry logic. I would prefer my shared directories to remain permanently available to Linux, without these mysterious drops.

Is there some alternative I could try? Or is there some way to make the SMB client on Linux more robust and automatically/transparently handle whatever issue it is having? This is Ubuntu Server 16.04 but I don't really care what brand of Linux I use - I am open to alternatives if others have better features in this regard.

This is a Hyper-V VM, so no builtin directory sharing features.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll

Windows can mount NFS filesystems with the Windows Services for NFS feature, which has been available for several Windows versions now. The primary problem with NFS (which still comes up even if you're using Linux or any other POSIX OS, for that matter) is uids and gids, but that can be mapped around in Windows if you think it through a little bit.
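
From an elevated prompt on Windows it's something like this (share path is a placeholder; the registry trick for pinning the uid/gid is the part worth knowing):

code:
REM mount an NFS export once "Client for NFS" is enabled in Windows Features
mount -o anon \\fileserver\tank\videos Z:
REM the anonymous uid/gid it presents can be set via the AnonymousUid and
REM AnonymousGid DWORDs under
REM HKLM\SOFTWARE\Microsoft\ClientForNFS\CurrentVersion\Default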

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Desuwa posted:


loads of handy info


Thanks. I've just spent another couple of hours looking at 9p sharing. I think I did this at the very start, before heading towards nfs, then smb, then zfs.

I feel like I've gone round in a week-long circle and come back to the idea of just mounting the two different datasets as an nfs share for each guest.

Gonna finally commit to something tonight that actually works!

evol262 posted:

There's basically no reason to do what you're describing. The performance gain is negligible, and the zvols won't be accessible elsewhere unless you use a clustered filesystem (or sanlock) to mediate access.

If you want it accessible from multiple systems, use nfs or smb or gluster or cephfs.

If you want performance, just use a zvol as a libvirt data pool and create qcows on it.

If you want native performance, export as iscsi LUNs from the host and use those. You can still do this with zvols and snapshot them, though using a regular zfs filesystem would still be better.

If you really, really need raw disks (Oracle, mostly), proceed with zvols. I'd still use iscsi...

evol262 posted:

What's the use case?

Home server with some torrents and a few personal documents on it.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell


You're making this way more complicated than it needs to be!

eames
May 9, 2009



isn't that the point of a hobby?

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Thermopyle posted:

You're making this way more complicated than it needs to be!

I know. I know. I've now torn it down and rebuilt (for the last time??).

I have this running:

CentOS 7 with virt host utils and a ZFS mirror pool. Each dataset in the pool mounted under /mnt on the host.
So /mnt/zdocuments, /mnt/zvideos, /mnt/zmusic etc.

I have one CentOS guest running at the moment, with an NFS mount:
host:/mnt/zvideo --> guest:/home/foo/videos.

Currently copying loads of videos over to it and installing Plex.

Then I will sort out the other VM shares.

At least this way I get full advantage of the ZFS datasets.

Mr Shiny Pants
Nov 12, 2012


apropos man posted:

I know. I know. I've now torn it down and rebuilt (for the last time??).

I have this running:

CentOS 7 with virt host utils and a ZFS mirror pool. Each dataset in the pool mounted under /mnt on the host.
So /mnt/zdocuments, /mnt/zvideos, /mnt/zmusic etc.

I have one CentOS guest running at the moment, with an NFS mount:
host:/mnt/zvideo --> guest:/home/foo/videos.

Currently copying loads of videos over to it and installing Plex.

Then I will sort out the other VM shares.

At least this way I get full advantage of the ZFS datasets.

With this I take it you mean you have a 9p mount running inside the guest? That is good; I have the same setup for my Plex machine and it works very well.

Edit: NFS? Why? You can just mount the folder inside the VM, saves a lot of hassle.

My setup: a zpool for VMs on SSDs and spinning rust for my Linux ISOs on a different zpool. The VM runs on the SSD pool and mounts the ISO share from the other zpool within the VM host. Pretty easy and works very well.

Mr Shiny Pants fucked around with this message at Jun 25, 2017 around 20:38

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Mr Shiny Pants posted:

With this I take it you mean you have a 9p mount running inside the guest? That is good; I have the same setup for my Plex machine and it works very well.

Edit: NFS? Why? You can just mount the folder inside the VM, saves a lot of hassle.

My setup: a zpool for VMs on SSDs and spinning rust for my Linux ISOs on a different zpool. The VM runs on the SSD pool and mounts the ISO share from the other zpool within the VM host. Pretty easy and works very well.

I tried 9p earlier and couldn't get it working. This was on Ubuntu before I switched back to CentOS.

I'll try it again later. I'm all adminned out for today. It looks more suited to my case, so I'll definitely have another go at 9p.

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

Mr Shiny Pants posted:

With this I take it you mean you have a 9p mount running inside the guest? That is good; I have the same setup for my Plex machine and it works very well.

Edit: NFS? Why? You can just mount the folder inside the VM, saves a lot of hassle.

My setup: a zpool for VMs on SSDs and spinning rust for my Linux ISOs on a different zpool. The VM runs on the SSD pool and mounts the ISO share from the other zpool within the VM host. Pretty easy and works very well.

NFS is probably higher performance than 9p, just based on what I've read - I've not actually used 9p. You can't "just mount the folder inside the VM", you have to use some kind of remote file system protocol because that's what the host file system is to the guest. The advantage of 9p over NFS seems to be that you can configure it all from the CLI when spinning up the VM, so the sharing doesn't need to outlive the VM. QEMU also has an SMB server built in, which is probably the easiest way to just share a folder read/write with the guest.

In my environment I just share everything from my server once with samba because I have a Windows desktop (though MS added NFS support to Windows 10 pro) and it makes it easiest to configure rw/ro permissions and force/squash users and groups. This works well enough for me because nothing I store remotely is performance sensitive enough to be bothered by the differences between SMB and NFS.

If you trust your VMs it's probably most performant to share everything R/W with NFS over the virtual network and just restore from snapshots if something trashes your data.
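
That safety net is just host-side snapshots, e.g. (dataset name is a placeholder):

code:
zfs snapshot tank/videos@before-vm   # cheap, instant, invisible to the guests
zfs rollback tank/videos@before-vm   # throws away everything written since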

Mr Shiny Pants
Nov 12, 2012


Desuwa posted:

NFS is probably higher performance than 9p, just based on what I've read - I've not actually used 9p. You can't "just mount the folder inside the VM", you have to use some kind of remote file system protocol because that's what the host file system is to the guest. The advantage of 9p over NFS seems to be that you can configure it all from the CLI when spinning up the VM, so the sharing doesn't need to outlive the VM. QEMU also has an SMB server built in, which is probably the easiest way to just share a folder read/write with the guest.

In my environment I just share everything from my server once with samba because I have a Windows desktop (though MS added NFS support to Windows 10 pro) and it makes it easiest to configure rw/ro permissions and force/squash users and groups. This works well enough for me because nothing I store remotely is performance sensitive enough to be bothered by the differences between SMB and NFS.

If you trust your VMs it's probably most performant to share everything R/W with NFS over the virtual network and just restore from snapshots if something trashes your data.

I don't know, you configure it in Virt-Manager and you do a mount -t 9p in the guest and it works. It is not that complicated. It might not be the "best" protocol but it works as advertised in my, admittedly, limited experience.
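
For reference, what virt-manager sets up is roughly this stanza in the domain XML, plus one mount in the guest (the paths and the mount tag are examples):

code:
<filesystem type='mount' accessmode='mapped'>
  <source dir='/mnt/zvideos'/>
  <target dir='zvideos'/>
</filesystem>

# guest:
mount -t 9p -o trans=virtio,version=9p2000.L zvideos /home/foo/videos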

Desuwa
Jun 2, 2011

I'm telling my mommy. That pubbie doesn't do video games right!

Mr Shiny Pants posted:

I don't know, you configure it in Virt-Manager and you do a mount -t 9p in the guest and it works. It is not that complicated. It might not be the "best" protocol but it works as advertised in my, admittedly, limited experience.

9p is a remote file system protocol. I'm just warning that 9p is, reportedly, much slower than NFS and SMB. SMB sharing a single directory tree is arguably even easier when using QEMU, though I'd personally just run samba or NFS on the host.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

The problem I currently have is that on my guest OS:

Plex is user 996, group 994
Emby is user 995, group 992

And over on my host (where the videos are, in my zpool and where I'm running a samba share and an nfs share):

user 995 is taken by libstoragemgmt and user 996 is unbound DNS resolver
group 994 is taken by cgred and group 992 is libstoragemgmt

So if I want to give Plex or Emby the same UID/GID on each system, it's gonna disrupt something else. And that stuff looks to be important at first glance.

I went to bed last night thinking I had it all set up, and tried streaming a video to my Chromecast. The titles all showed up but no content would play.
I'll have another look at it tonight.

hifi
Jul 25, 2012


apropos man posted:

The problem I currently have is that on my guest OS:

Plex is user 996, group 994
Emby is user 995, group 992

And over on my host (where the videos are, in my zpool and where I'm running a samba share and an nfs share):

user 995 is taken by libstoragemgmt and user 996 is unbound DNS resolver
group 994 is taken by cgred and group 992 is libstoragemgmt

So if I want to give Plex or Emby the same UID/GID on each system, it's gonna disrupt something else. And that stuff looks to be important at first glance.

I went to bed last night thinking I had it all set up, and tried streaming a video to my Chromecast. The titles all showed up but no content would play.
I'll have another look at it tonight.

there are force user/group directives in samba
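
e.g. a share stanza along these lines (names are placeholders):

code:
[videos]
   path = /tank/videos
   writable = yes
   force user = filez
   force group = filez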

D. Ebdrup
Mar 13, 2009


necrobobsledder posted:

Windows can mount NFS filesystems with the Windows Services for NFS feature that's been available for several Windows versions now. The primary problems with NFS (which still does happen even if you're using Linux or any other POSIX OS for that matter) is around uids and gids, but that can be mapped around in Windows if you think it through a little bit.

Holy poo poo, I've been looking for this forever, but I couldn't find it under its old name. How is the performance if the server isn't bottlenecked on disk I/O and can saturate 1Gbps?

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

hifi posted:

there are force user/group directives in samba

Yep. I've had this stuff working before but didn't keep a copy of the config files. Will work through it later.

E: should say that I only half-remember there being a way to force user. Nevertheless, I'll look into it.

apropos man fucked around with this message at Jun 26, 2017 around 12:51

evol262
Nov 30, 2010
#!/usr/bin/perl

apropos man posted:

Yep. I've had this stuff working before but didn't keep a copy of the config files. Will work through it later.

E: should say that I only half-remember there being a way to force user. Nevertheless, I'll look into it.

It would be better to just have whatever is saving your Linux ISOs do so as another user, and grant read access to that uid/gid (which is in a deterministic >1000 range, hopefully) to your software.

Say your host has:
User 'filez', uid/gid 1100/1100

Map the folder into your system and create the appropriate user with the same uid/gid. Add plex/emby to the group. Grant group read access.
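
In other words, something like this on the guest (uid/gid 1100 and 'filez' being the example values above):

code:
# guest: recreate the owning user/group with the host's ids
groupadd -g 1100 filez
useradd -u 1100 -g 1100 -M -s /sbin/nologin filez
# let the media servers read through the group
usermod -aG filez plex
usermod -aG filez emby
# host: make sure the files are group-readable
chmod -R g+rX /tank/videos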

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell


apropos man posted:

The problem I currently have is that on my guest OS:

Plex is user 996, group 994
Emby is user 995, group 992

And over on my host (where the videos are, in my zpool and where I'm running a samba share and an nfs share):

user 995 is taken by libstoragemgmt and user 996 is unbound DNS resolver
group 994 is taken by cgred and group 992 is libstoragemgmt

So if I want to give Plex or Emby the same UID/GID on each system, it's gonna disrupt something else. And that stuff looks to be important at first glance.

I went to bed last night thinking I had it all set up, and tried streaming a video to my Chromecast. The titles all showed up but no content would play.
I'll have another look at it tonight.

Why are you using plex AND emby? I can't really think of a reason to have both...

ufarn
May 30, 2009


I've disabled some features like FTP and SSH on my old My Book World Live by WD, but what are some of the other steps I should take to make sure it's locked down? I assume it's only available on the local network, but I don't actually know.

My ISP router doesn't have granular settings, and its base firewall setting messes up my DNS for some reason, unfortunately.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

ufarn posted:

My ISP router doesn't have granular settings, and its base firewall setting messes up my DNS for some reason, unfortunately.

Replacing your ISP router with a decent one of your own would be the first and best step, frankly. Unless you're in some odd situation where you actually have to use the ISP router, picking up a router+modem (especially on sale or used) will pay for itself if you're currently paying to rent the router monthly, and it will get you better performance, better security, and better features in almost every case.

ufarn
May 30, 2009


DrDork posted:

Replacing your ISP router with a decent one of your own would be the first and best step, frankly. Unless you're in some odd situation where you actually have to use the ISP router, picking up a router+modem (especially on sale or used) will pay for itself if you're currently paying to rent the router monthly, and it will get you better performance, better security, and better features in almost every case.
I had to stop using it because of a bunch of issues with my Internet, but it sounds like it might be worth dusting off my Archer C7* and seeing what it can do in bridge mode.

Do I plug my ethernet cables into the ISP router or my own?

ufarn fucked around with this message at Jun 26, 2017 around 18:34

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

You'd want to go Wall -> Modem -> Your Router -> Computers

If you can't/don't want to get a modem of your own (you should, they're cheap), you can see if you can place your ISP router into bridge or "modem only" mode or whatever they happen to have decided to call it, and slip that into the modem spot.

ufarn
May 30, 2009


Derp, it's an Archer C7 I have; I put in my wireless adapter by mistake.

I'll give it a shot tomorrow.

e: And here's an interface demo of it.

ufarn fucked around with this message at Jun 26, 2017 around 18:46

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Thermopyle posted:

Why are you using plex AND emby? I can't really think of a reason to have both...

I had difficulty accessing shares from one, so I installed the other while I worked out what was going to be the best option. I usually wouldn't have both of them in an everyday situation. I wish Emby were as slick as Plex, though. It's just not quite there in terms of UX, but it has the advantage of being a bit more specific with watch folders.

EVIL Gibson
Mar 23, 2001

THE CLOUD WILL PROTECT US


apropos man posted:

I had difficulty accessing shares from one, so I installed the other while I worked out what was going to be the best option. I usually wouldn't have both of them in an everyday situation. I wish Emby were as slick as Plex, though. It's just not quite there in terms of UX, but it has the advantage of being a bit more specific with watch folders.

Also the fact that you can replace Emby's version of ffmpeg with one you built yourself to implement hardware decoding and encoding.

Plex used to use ffmpeg before they went closed source, but they've been cock-teasing hardware support for the past year or two.

Emby does it properly, prioritizing the GPU first and then the CPU.
