KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


Linux desktop application question:

I have all of my files on a DIY NAS/server, including a music library of ~16,000 tracks mostly in FLAC and MP3 formats.

What is the best music library type player for managing and playing music that's on a Samba or NFS share?

Out of everything I've tried, Clementine/Strawberry and Quod Libet work mostly okay, but they obviously assume the files are on a local drive, so they take ages to scan the network share and update their libraries every time they start. Hours for a full rescan that would be done in a minute on local files. Currently I'm using a Samba share, but I don't expect NFS would be significantly faster.

Currently the client machine is on WLAN, but it's a solid 802.11ac connection with no interfering other networks.

What's the best solution for network music playback? Find the right application? MPD? DLNA?

Or just go old-school, use folders and a simple player like Audacious instead of a fancy music library type?


NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

I would just install a music manager / streaming server directly on the NAS, if at all possible. Ampache or Airsonic or a bunch of others.
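If the NAS can run containers, the server-side approach can be sketched in one command. The image name, port, and paths below are assumptions to adapt to your own setup:

```shell
# Airsonic via Docker (image name, port, and paths are placeholders):
docker run -d --name airsonic \
  -p 4040:4040 \
  -v /srv/music:/airsonic/music:ro \
  -v /srv/airsonic:/airsonic/data \
  airsonic/airsonic
# Then point a web browser or any Subsonic-API client at http://nas:4040
```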

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


Yeah, I was looking at using Kodi as a DLNA server, since I'll be using it as an HTPC as well. If I can find a good DLNA client for Linux, that is. I really like Quod Libet and I wish I could just keep using it.

KozmoNaut fucked around with this message at 13:11 on Jul 30, 2020

Mr Shiny Pants
Nov 12, 2012

KozmoNaut posted:

Linux desktop application question:

I have all of my files on a DIY NAS/server, including a music library of ~16,000 tracks mostly in FLAC and MP3 formats.

What is the best music library type player for managing and playing music that's on a Samba or NFS share?

Out of everything I've tried, Clementine/Strawberry and Quod Libet work mostly okay, but they obviously assume the files are on a local drive, so they take ages to scan the network share and update their libraries every time they start. Hours for a full rescan that would be done in a minute on local files. Currently I'm using a Samba share, but I don't expect NFS would be significantly faster.

Currently the client machine is on WLAN, but it's a solid 802.11ac connection with no interfering other networks.

What's the best solution for network music playback? Find the right application? MPD? DLNA?

Or just go old-school, use folders and a simple player like Audacious instead of a fancy music library type?

Weird, I have Clementine loading my music from an NFS share, and it starts instantly with 10000 tracks.

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


Mr Shiny Pants posted:

Weird, I have Clementine loading my music from an NFS share, and it starts instantly with 10000 tracks.

Odd. Maybe I need to tweak my Samba settings or play around more with NFS, then.

JLaw
Feb 10, 2008

- harmless -
Shot in the dark here, but would anyone happen to know how to approach debugging this issue:

I'm running elementary OS which is an Ubuntu variant. When I first log in after system start, or if I explicitly log out or lock screen and then log back in, everything is fine.

However if the screen locks because of idleness, and then I log back in, "loginctl session-status" says that my session is stuck in "opening" state. This isn't a showstopper but e.g. it does mean that I have to enter my password to authorize checking for updates if using their package manager GUI. A few minor annoyances at most but it would be super nice just to find out what is causing this and fix it.

Can anyone point me to a good resource about how to approach figuring out what is blocking the session from becoming "active" state?

RFC2324
Jun 7, 2012

http 418

Of note, NFS is significantly faster than SMB, and samba isn't a particularly fast implementation of SMB.

NFS is much faster, especially for enumerating files
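For reference, the client side of an NFS music share can be a single mount; the hostname and paths here are placeholders:

```shell
# One-off mount of a read-only music share:
mount -t nfs -o vers=4.1,ro nas:/export/music /mnt/music
# /etc/fstab equivalent:
#   nas:/export/music  /mnt/music  nfs  vers=4.1,ro  0  0
```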

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
I use Lollypop over NFS with a ~400GB collection of mostly FLACs and it's nearly indistinguishable from local files

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


Trip report: NFS is definitely faster, thanks :)

BlankSystemDaemon
Mar 13, 2009



NFS also has actual file locking, you know, in case you care a bit about data not getting corrupt.

And with NFSv4.2 (which is getting added into FreeBSD already, despite the RFC not even having been ratified), we'll be getting a whole bunch of features that'll make it even faster.

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


D. Ebdrup posted:

NFS also has actual file locking, you know, in case you care a bit about data not getting corrupt.

It also implicitly trusts anyone with access to a share, so if they have root on their local machine, they can do anything on that share :v:

I know there are reasons for that, and Kerberos/ACLs will take care of that aspect. For now I'm just locking the NFS share to only my machine.

RFC2324
Jun 7, 2012

http 418

KozmoNaut posted:

It also implicitly trusts anyone with access to a share, so if they have root on their local machine, they can do anything on that share :v:

I know there are reasons for that, and Kerberos/ACLs will take care of that aspect. For now I'm just locking the NFS share to only my machine.

you just set root_squash and you should be good.

e: you should also restrict access to your local IPs, obviously, just like with any service
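A minimal server-side sketch of both suggestions, written to a local file here for illustration; the export path and client address are placeholders (on Linux knfsd, root_squash is already the default):

```shell
cat > exports.example <<'EOF'
# /etc/exports: read-only music share, one allowed client, root squashed
/export/music  192.168.1.42(ro,root_squash,no_subtree_check)
EOF
cat exports.example
# after editing the real /etc/exports, apply it with: exportfs -ra
```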

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


RFC2324 posted:

you just set root_squash and you should be good.

e: you should also restrict access to your local IPs, obviously, just like with any service

Oh yeah, root_squash is super mandatory, and I've locked the share to my hostname (DHCP :ssh:)

xtal
Jan 9, 2011

by Fluffdaddy

KozmoNaut posted:

Linux desktop application question:

I have all of my files on a DIY NAS/server, including a music library of ~16,000 tracks mostly in FLAC and MP3 formats.

What is the best music library type player for managing and playing music that's on a Samba or NFS share?

Out of everything I've tried, Clementine/Strawberry and Quod Libet work mostly okay, but they obviously assume the files are on a local drive, so they take ages to scan the network share and update their libraries every time they start. Hours for a full rescan that would be done in a minute on local files. Currently I'm using a Samba share, but I don't expect NFS would be significantly faster.

Currently the client machine is on WLAN, but it's a solid 802.11ac connection with no interfering other networks.

What's the best solution for network music playback? Find the right application? MPD? DLNA?

Or just go old-school, use folders and a simple player like Audacious instead of a fancy music library type?

MPD and Beets, for sure.
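A minimal MPD configuration for a library on a network mount might look like this (paths are placeholders, written to a local file for illustration). MPD keeps its own database, so the share is only walked when you explicitly update the library:

```shell
cat > mpd.conf.example <<'EOF'
music_directory    "/mnt/music"
db_file            "~/.local/share/mpd/database"
playlist_directory "~/.local/share/mpd/playlists"
state_file         "~/.local/share/mpd/state"
audio_output {
    type "pulse"
    name "Pulse Output"
}
EOF
cat mpd.conf.example
```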

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
I like MusicBee and I think it should work for you. You can add/scan a folder manually once without having to keep it on the 'monitored folders' list, plus you can disable 'on startup check for updated or missing files'.

Pablo Bluth fucked around with this message at 15:14 on Jul 31, 2020

BlankSystemDaemon
Mar 13, 2009



KozmoNaut posted:

It also implicitly trusts anyone with access to a share, so if they have root on their local machine, they can do anything on that share :v:

I know there are reasons for that, and Kerberos/ACLs will take care of that aspect. For now I'm just locking the NFS share to only my machine.
I don't know what NFS implementation you're using, but that's absolutely not true. For server-side, UID 28 (the 'nobody' user, that often gets mistakenly used for privilege separation) exists solely because of NFS.
The 4.4BSD/Lite2 implementation has the server-side maproot/mapall feature, documented in exports(5), which forces all mounts, even those done by remote root, to be mapped to a given UID on the server - and it still exists in the BSDs nowadays.
On FreeBSD, on the client side, the vfs.usermount sysctl lets you mount stuff without being root (this is also handy for automountd/autofs).

BlankSystemDaemon fucked around with this message at 17:25 on Jul 31, 2020

RFC2324
Jun 7, 2012

http 418

I just wanna say automount/autofs is the poo poo on a fast network, but is painful af if you have any lag at all

BlankSystemDaemon
Mar 13, 2009



RFC2324 posted:

I just wanna say automount/autofs is the poo poo on a fast network, but is painful af if you have any lag at all
That's true for basically any protocol that isn't http, email, or irc, though - as an example, even protocols as simple as telnet or ssh are a pain with any lag, too. Especially if it's lag with a large delta.

RFC2324
Jun 7, 2012

http 418

D. Ebdrup posted:

That's true for basically any protocol that isn't http, email, or irc, though - as an example, even protocols as simple as telnet or ssh are a pain with any lag, too. Especially if it's lag with a large delta.

The pain I dealt with (in a Solaris environment, I'll admit) just felt so much worse than most other lag, and using it in any GUI environment is way worse than, say, X forwarding. It always locked up whatever session was waiting for the mount to respond (like most hard mounts, but with network lag). It can also hit unexpectedly, because an automount got auto-dismounted and then queried, so it remounted.

The worst was when some jackass set autohomes to autodismount

I'll give that it's subjective tho

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

KozmoNaut posted:

Out of everything I've tried, Clementine/Strawberry and Quod Libet work mostly okay, but they obviously assume the files are on a local drive, so they take ages to scan the network share and update their libraries every time they start. Hours for a full rescan that would be done in a minute on local files. Currently I'm using a Samba share, but I don't expect NFS would be significantly faster.

Why would they need to scan the library every time? I would expect there to be an option to disable that.

Computer viking
May 30, 2011
Now with less breakage.

Also, doesn't kerberized NFS4 fix the "trust the client" issue rather definitely, if you can use it?

(Not that I ever want to set that up by hand again, but that's a different matter.)

RFC2324
Jun 7, 2012

http 418

Computer viking posted:

Also, doesn't kerberized NFS4 fix the "trust the client" issue rather definitely, if you can use it?

(Not that I ever want to set that up by hand again, but that's a different matter.)

anyone know if yast will do this? I know it does nfs and kerberos configs, I dunno how well it makes them work together

BlankSystemDaemon
Mar 13, 2009



RFC2324 posted:

anyone know if yast will do this? I know it does nfs and kerberos configs, I dunno how well it makes them work together
If all else fails, nfs-ganesha integrates with Kerberos, for what it's worth. I use it in a FreeBSD jail for the stuff I share on NFS over WAN using krb5i (i.e. Kerberos authentication plus a MAC for integrity validation).
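For the client side, a Kerberized NFS mount is one option away from a plain one. The hostname and export below are placeholders, and a working Kerberos setup (keytab with an nfs/ principal on both ends) is assumed:

```shell
mount -t nfs -o vers=4.1,sec=krb5i nas.example.org:/export/media /mnt/media
# sec=krb5  -> Kerberos authentication only
# sec=krb5i -> adds a per-message integrity MAC
# sec=krb5p -> adds encryption (privacy) on top
```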

KozmoNaut
Apr 23, 2008

Happiness is a warm
Turbo Plasma Rifle


D. Ebdrup posted:

I don't know what NFS implementation you're using, but that's absolutely not true. For server-side, UID 28 (the 'nobody' user, that often gets mistakenly used for privilege separation) exists solely because of NFS.
The 4.4BSD/Lite2 implementation has the server-side maproot/mapall feature, documented in exports(5), which forces all mounts, even those done by remote root, to be mapped to a given UID on the server - and it still exists in the BSDs nowadays.
On FreeBSD, on the client side, the vfs.usermount sysctl lets you mount stuff without being root (this is also handy for automountd/autofs).

Maybe I'm just misunderstanding stuff and need to read up on it, I guess :)

Saukkis posted:

Why would they need to scan the library every time? I would expect there to be an option to disable that.

Yeah, it can be disabled, but then obviously the local database could get out of sync with what's actually on the network share. Either way, I'm using NFS now and at least Strawberry can play perfectly fine while the library is updating in the background.

Right now I'm going through importing my whole library using beets and it's asking a lot of questions. Woe is me for having obscure releases in my collection, I guess.
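For anyone following along, beets has flags that cut down the questioning on a large first import (these are from `beet import --help`; the path is a placeholder):

```shell
beet import -q /mnt/music      # quiet: apply strong matches, skip the rest
beet import -A /mnt/music      # no autotag: add files as-is, ask nothing
beet import -C -W /mnt/music   # leave files in place, don't rewrite tags
```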

cr0y
Mar 24, 2005



Sorry for the cross-posting spam, but I'm not sure where this best fits, so I'm tossing it in a couple of threads; I can't really determine the right keywords to search, otherwise I would just dig into Google results.

I have a docker host and want to migrate to NginxProxyManager to handle my LetsEncrypt SSL cert. I have a couple of web services on my local network that I want to present to the public internet. Let's call these ServiceA, B, and C; most of them are other docker instances, but some are physical hosts, all with a specific port I need to reverse proxy to.

I *somehow* want the following to work

Hit mydomain.com:443 -> Proxy to an internal non-SSL apache instance on port 80
Hit mydomain.com:1234 -> Proxy to an internal docker service on port 8888
Hit mydomain.com:9999 -> Proxy to some other internal non-SSL service

Now NPM has "proxy hosts" and "redirection hosts". I'm not sure which I want to be using, and more to the point, I'm not sure how I properly configure NPM to take connections on the above 3 ports and send them elsewhere. I can get one service working fine, but I can't figure out what I need to do to break out traffic based on inbound port, because out of the box NPM assumes your router/firewall is sending everything inbound to it via a single port. Does that make sense to anyone? Like I said, I've been struggling even to articulate my issue, even though it seems simple on the surface.

hifi
Jul 25, 2012

Is there some kind of kerberized nfs4 that doesn't involve freeipa/some kind of AD implementation? samba being able to use avahi seems modular and... streamlined. It looks like freenas for example says use CIFS if you want any kind of security.

BlankSystemDaemon
Mar 13, 2009



hifi posted:

Is there some kind of kerberized nfs4 that doesn't involve freeipa/some kind of AD implementation? samba being able to use avahi seems modular and... streamlined. It looks like freenas for example says use CIFS if you want any kind of security.
CIFS is an SMB 1.0 thing, which got deprecated as a result of the Shadow Brokers-leaked EternalBlue exploit. Are you thinking of AD?

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

cr0y posted:

Sorry for the crossposting spam but I am not sure where this best fits so I am tossing it in a couple threads, can't really determine the right keywords to search otherwise I would just dig into google results.

I have a docker host and want to migrate to NginxProxyManger to handle my LetsEncrypt SSL cert. I have a couple web services on my local network that I want to present to the public internet. Lets called these ServiceA, B and C, most of them are other docker instances but some are physical hosts, all with a specific port I need to reverse proxy to.

I *somehow* want the following to work

Hit mydomain.com:443 -> Proxy to an internal non-SSL apache instance on port 80
Hit mydomain.com:1234 -> Proxy to an internal docker service on port 8888
Hit mydomain.com:9999 -> Proxy to some other internal non-SSL service

Now NPM has "proxy hosts", and "redirection hosts". I am not sure which I want to be using, moreso I am not sure how I properly configure NPM to take connections from the above 3 ports and send them elsewhere. I can get one service working fine, but can't figure out what I need to be doing to break out traffic based on inbound port, because out of the box NPM assumes your router/firewall is sending everything inbound to it via single port. Does that make sense to anyone? Like I said I have been struggling even trying to articulate my issue even though it seems simple on the surface.

You want NPM to bind to 3 ports.

I don't know NPM, but I do know Caddy, and I can tell you that it's purpose built for this kind of thing. It's basically a super simple LetsEncrypt-enabled proxy server. The whole config file would probably be only a few lines long.
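As a sketch of what that Caddy config could look like for the three mappings above (the domain and upstream addresses are placeholders, written to a local file for illustration):

```shell
cat > Caddyfile.example <<'EOF'
mydomain.com {
    reverse_proxy 192.168.1.10:80
}
mydomain.com:1234 {
    reverse_proxy 192.168.1.11:8888
}
mydomain.com:9999 {
    reverse_proxy 192.168.1.12:3000
}
EOF
cat Caddyfile.example
```

Caddy then obtains and renews the Let's Encrypt certificate automatically for the sites it serves over TLS.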

hifi
Jul 25, 2012

D. Ebdrup posted:

CIFS is a SMB1.0 thing, which was deprecated as a result of the ShadowBroker-sourced EternalBlue exploit. Are you thinking of AD?

My bad they actually say samba.

hifi
Jul 25, 2012

minato posted:

You want NPM to bind to 3 ports.

I don't know NPM, but I do know Caddy, and I can tell you that it's purpose built for this kind of thing. It's basically a super simple LetsEncrypt-enabled proxy server. The whole config file would probably be only a few lines long.

Yeah this seems like the scenario where you want to use just raw nginx or whatever other web server.

https://github.com/jc21/nginx-proxy-manager/issues/252 This is probably the solution to the problem while still using that app and it looks like a pain in the rear end and not really maintainable or within the spirit of a web app with gui elements to manage your nginx config.

My other suggestion would be, most web apps let you set a HTTP root, so any url you click in the app can be prefixed with "http://butt.com/app1" for example, so you can host multiple things on the same domain without using extra ports or subdomains. Just kinda trying to read your mind here though, maybe that's not helpful.

Mr Shiny Pants
Nov 12, 2012

cr0y posted:

Sorry for the crossposting spam but I am not sure where this best fits so I am tossing it in a couple threads, can't really determine the right keywords to search otherwise I would just dig into google results.

I have a docker host and want to migrate to NginxProxyManger to handle my LetsEncrypt SSL cert. I have a couple web services on my local network that I want to present to the public internet. Lets called these ServiceA, B and C, most of them are other docker instances but some are physical hosts, all with a specific port I need to reverse proxy to.

I *somehow* want the following to work

Hit mydomain.com:443 -> Proxy to an internal non-SSL apache instance on port 80
Hit mydomain.com:1234 -> Proxy to an internal docker service on port 8888
Hit mydomain.com:9999 -> Proxy to some other internal non-SSL service

Now NPM has "proxy hosts", and "redirection hosts". I am not sure which I want to be using, moreso I am not sure how I properly configure NPM to take connections from the above 3 ports and send them elsewhere. I can get one service working fine, but can't figure out what I need to be doing to break out traffic based on inbound port, because out of the box NPM assumes your router/firewall is sending everything inbound to it via single port. Does that make sense to anyone? Like I said I have been struggling even trying to articulate my issue even though it seems simple on the surface.

Just use regular Nginx; it will do what you want. Easily, I might add.
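A plain-nginx sketch of the same port fan-out, written to a local file for illustration; the upstream addresses are placeholders and the certificate paths assume an existing Let's Encrypt setup:

```shell
cat > nginx-ports.conf.example <<'EOF'
server {
    listen 443 ssl;
    server_name mydomain.com;
    ssl_certificate     /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    location / { proxy_pass http://192.168.1.10:80; }
}
server {
    listen 1234 ssl;
    server_name mydomain.com;
    ssl_certificate     /etc/letsencrypt/live/mydomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.com/privkey.pem;
    location / { proxy_pass http://192.168.1.11:8888; }
}
EOF
grep -c proxy_pass nginx-ports.conf.example   # prints 2
```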

other people
Jun 27, 2004
Associate Christ
the linuxserver.io letsencrypt container is dead simple to set up, has good docs, and comes with lots of easy to follow nginx proxy conf examples.

https://docs.linuxserver.io/images/docker-letsencrypt

so uh i recommend you use that

BlankSystemDaemon
Mar 13, 2009



./acme.sh
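For anyone who hasn't used it, acme.sh is a shell-only ACME client; a typical issue-and-install flow looks like this (the domain, webroot, and file paths are placeholders):

```shell
acme.sh --issue -d mydomain.com -w /var/www/html
# or DNS validation (needed for wildcards), e.g. with the Cloudflare API:
#   acme.sh --issue -d mydomain.com -d '*.mydomain.com' --dns dns_cf
acme.sh --install-cert -d mydomain.com \
  --fullchain-file /etc/ssl/mydomain.com.pem \
  --key-file       /etc/ssl/mydomain.com.key \
  --reloadcmd      "systemctl reload nginx"
```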

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

hifi posted:

My other suggestion would be, most web apps let you set a HTTP root, so any url you click in the app can be prefixed with "http://butt.com/app1" for example, so you can host multiple things on the same domain without using extra ports or subdomains.

Subdomains are the way to go, as long as you have a halfway decent domain registrar that will let you register a wildcard subdomain.

Then you just publish each service on service.mydomain.com, let your proxy register a Let's Encrypt certificate for each one (or a wildcard certificate), and you never need to even think about whether your services are properly dealing with custom root paths.
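A wildcard certificate requires DNS-01 validation; with certbot that can be sketched as follows (the domain is a placeholder, and the manual flow means adding a TXT record when prompted):

```shell
certbot certonly --manual --preferred-challenges dns \
  -d mydomain.com -d '*.mydomain.com'
# one cert then covers jira.mydomain.com, erp.mydomain.com, etc.
```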

RFC2324
Jun 7, 2012

http 418

NihilCredo posted:

Subdomains are the way to go, as long as you have a halfway decent domain registrar that will let you register a wildcard subdomain.

Then you just publish each service on service.mydomain.com, let your proxy register a Let's Encrypt certificate for each one (or a wildcard certificate), and you never need to even think about whether your services are properly dealing with custom root paths.

Since when does the registrar need to allow subdomains? If you register company.com, do you no longer control *.company.com?

Or do you mean find a DNS provider (a different thing from your registrar) who allows that level of control? Because it's not an issue of being allowed so much as being capable.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

RFC2324 posted:

Since when does the registrar need to allow subdomains? If you register company.com, do you no longer control *.company.com?

Yeah if you have everything on one IP it's not a problem. But it's not uncommon that company.com is your public/marketing landing page and it's on a totally different infrastructure than your internal services like jira.company.com, erp.company.com, etc.

Of course, you can also just put those on company-services.com or whatever.

RFC2324
Jun 7, 2012

http 418

NihilCredo posted:

Yeah if you have everything on one IP it's not a problem. But it's not uncommon that company.com is your public/marketing landing page and it's on a totally different infrastructure than your internal services like jira.company.com, erp.company.com, etc.

Of course, you can also just put those on company-services.com or whatever.

Uhhh, if you have access to your dns server you can put any IP you want on any subdomain you want.

If you own the domain, you own it and can do anything you want with it.

Any other limitations come from your dns provider not giving you full access to their system, since a subdomain is a full on A record in its own right.

If you think those limitations are part of dns, your dns provider has been lying to you all your life

astral
Apr 26, 2004

They meant certificates.

e: No, on second read, they meant wildcard subdomain for DNS. Like the name server would respond to anything.domain.com with a certain IP.

astral fucked around with this message at 00:44 on Aug 2, 2020

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

RFC2324 posted:

Since when does the registrar need to allow subdomains? If you register company.com, do you no longer control *.company.com?

Or do you mean find a dns provider(different thing than your registrar) who allows that level of control, because it's not an issue of allowed so much as capable

Some DNS sections of registrars don't allow a wildcard subdomain. So yeah. It's nice for something like Traefik, where you can define a subdomain per docker container for routing things with minimal setup.


RFC2324
Jun 7, 2012

http 418

astral posted:

They meant certificates.

ok, if this is the case it makes perfect sense and I was wrong.

sorry, missed a vital detail
