Matt Zerella
Oct 7, 2002

Norrises are back, baby. It's good again. Awoouu (fox howl)
netdata is great too because it runs at the lowest priority. For historical collection, though, you're going to want something like CockroachDB or Prometheus to store the info and Grafana as a frontend, which, yeah, is going to be annoying if you do it locally.
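As a rough sketch of what that pairing looks like: Prometheus can scrape netdata directly via netdata's allmetrics endpoint, so the only wiring needed is a scrape job (the target hostname and intervals below are illustrative assumptions, not anything from the post):

```yaml
# prometheus.yml -- minimal sketch. Port 19999 is netdata's default, and
# /api/v1/allmetrics?format=prometheus is the endpoint netdata exposes
# for Prometheus-format metrics.
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "netdata"
    metrics_path: "/api/v1/allmetrics"
    params:
      format: ["prometheus"]
    static_configs:
      - targets: ["localhost:19999"]
```

Grafana then just points at Prometheus as a data source for the long-term graphs.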

Antigravitas
Dec 8, 2019

The salvation for the farmers:

apropos man posted:

Is it crucial to leave the [printers] and [print$] section of a samba config intact? Even if I don't own a printer and would rather insert a finger into Jair Bolsonaro's urethra than own one?


Also, what happens when you run a checksum against a file that's constantly changing, such as the qcow of a VM that's currently running? Does the algo continue to run, forever?

We always remove those, since our samba servers don't act as print servers. It can be useful, but you need a working CUPS setup and other bits. We have exactly one samba print server, and it does expose a printer share, but it's pain and suffering to set up because Windows printing is horrific.
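For reference, disabling printing on a file-only samba box usually looks something like this (these are real smb.conf parameters, but treat the exact combination as a suggestion rather than Antigravitas's actual config):

```ini
[global]
    load printers = no
    printing = bsd
    printcap name = /dev/null
    disable spoolss = yes

; ...and delete or comment out the [printers] and [print$] sections entirely.
```

With `load printers = no` and the sections gone, smbd won't try to talk to CUPS at all.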

Most checksum programs just read through the entire length of the file once. If the contents change while the tool is reading, it won't notice anything; you just get a digest of whatever bytes happened to be there as it passed over them, which likely won't match a checksum taken before or after.
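A quick way to convince yourself that a checksum is just a one-shot pass over whatever bytes are there at read time (the temp file path is arbitrary; `sha256sum` is the GNU coreutils tool):

```shell
# Hash a file, change it, hash again: the tool has no memory of the file
# "changing" -- each run digests the bytes present when it reads them.
printf 'state one' > /tmp/demo.img
first=$(sha256sum /tmp/demo.img | cut -d' ' -f1)
printf 'state two' > /tmp/demo.img
second=$(sha256sum /tmp/demo.img | cut -d' ' -f1)
[ "$first" != "$second" ] && echo "digests differ"
```

The same logic applies to a running VM's qcow2: the hash completes normally, it just describes a snapshot-in-passing that may never correspond to any consistent on-disk state.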

xzzy
Mar 5, 2009

Matt Zerella posted:

netdata is great too because it runs at the lowest priority.

It also sets its oom_score_adj to 1000 so when the system gets starved of memory it's always going to be the first thing that goes.

Lots of users look at dmesg and go "oh no, your stupid netdata process used up all the RAM!!" and I have to patiently explain, yet again, that just because it died first doesn't mean it was using the most memory.
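The mechanism in question is the per-process OOM score adjustment in /proc (Linux-specific). A value of 1000 is the maximum, effectively "kill me first", which is what xzzy describes netdata setting for itself:

```shell
# Read the current adjustment for this shell (0 by default; range -1000..1000).
cat /proc/self/oom_score_adj

# A daemon can volunteer itself as the OOM killer's first victim with:
#   echo 1000 > /proc/self/oom_score_adj
# The kernel's resulting effective score is readable at /proc/<pid>/oom_score.
```

So the first process killed under memory pressure is the one with the highest badness score, not necessarily the biggest consumer.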

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Antigravitas posted:

We always remove those, since our samba servers don't act as print servers. It can be useful, but you need a working CUPS setup and other bits. We have exactly one samba print server, and it does expose a printer share, but it's pain and suffering to set up because Windows printing is horrific.

Most checksum programs just read through the entire length of the file once. If the contents change while the tool is reading, it won't notice anything; you just get a digest of whatever bytes happened to be there as it passed over them, which likely won't match a checksum taken before or after.

Cheers. I was wondering if print jobs spooled into a log file or something, and whether it was necessary to leave those sections in for legacy reasons even if you only want to share files. I'll comment the printing sections out for now and reload the daemons. If it works OK, I'll remove them altogether after a couple of days of testing.

I'm using SELinux and sharing what's essentially an NFS share mounted into my home directory. I prefer to use NFS between Linux boxes but for the couple of Windows machines on my network and my mobile phone I find it easier to set up a single samba share and use it for ephemeral connections.

I'm tempted to symlink the NFS mount to somewhere else, but I'm a bit twitchy about relabelling a swathe of SELinux attributes to "samba_share_t" when the files are actually coming from an NFS mount. I'm worried it would mess with the attributes on the backend NFS share.

I probably didn't explain that terribly well. I had it working before on a CentOS 7 VM, and I'm building a CentOS 8 VM alongside it, gradually copying the services over before the 7 VM can be destroyed. I had the setup working on the CentOS 7 box, but I can't remember how I did it last time. Rough diagram:

code:
{  BIG ZFS ARRAY  } | Dataset A <--> NFS share on this dataset -> CentOS 8 VM <-> samba share for convenience/devices/phones etc.
{   ON CENTOS 8   } | Dataset B <--> NFS share (for movies/TV shows)
{    VIRT HOST    } | Dataset C (not shared; backups and stuff)
                    | Dataset D (not shared; running VMs' storage)

Mr. Crow
May 22, 2008

Snap City mayor for life

xzzy posted:

Netdata might be overkill, as I see it as more of a metrics-exporting tool, but it has a decent web interface and there's no reason it would run badly on a desktop. It makes per-second-resolution graphs of just about everything you could think of.

For the customization you'd end up with Grafana though, which is definitely overkill.

But give netdata a look anyways.

Netdata is bad and has an idiotic implementation and privacy structure.

Mr. Crow
May 22, 2008

Snap City mayor for life

apropos man posted:

Cheers. I was wondering if print jobs spooled into a log file or something, and whether it was necessary to leave those sections in for legacy reasons even if you only want to share files. I'll comment the printing sections out for now and reload the daemons. If it works OK, I'll remove them altogether after a couple of days of testing.

I'm using SELinux and sharing what's essentially an NFS share mounted into my home directory. I prefer to use NFS between Linux boxes but for the couple of Windows machines on my network and my mobile phone I find it easier to set up a single samba share and use it for ephemeral connections.

I'm tempted to symlink the NFS mount to somewhere else, but I'm a bit twitchy about relabelling a swathe of SELinux attributes to "samba_share_t" when the files are actually coming from an NFS mount. I'm worried it would mess with the attributes on the backend NFS share.

I probably didn't explain that terribly well. I had it working before on a CentOS 7 VM, and I'm building a CentOS 8 VM alongside it, gradually copying the services over before the 7 VM can be destroyed. I had the setup working on the CentOS 7 box, but I can't remember how I did it last time. Rough diagram:

code:
{  BIG ZFS ARRAY  } | Dataset A <--> NFS share on this dataset -> CentOS 8 VM <-> samba share for convenience/devices/phones etc.
{   ON CENTOS 8   } | Dataset B <--> NFS share (for movies/TV shows)
{    VIRT HOST    } | Dataset C (not shared; backups and stuff)
                    | Dataset D (not shared; running VMs' storage)

I run NFS at home for ease between all the Linux boxes, and then use Syncthing on a subset of the NFS for the phones and the couple of Windows boxes. It works great.

I was constantly having issues with samba whenever I tried to do anything with any sort of permissions, or with streaming video and other large files.

xzzy
Mar 5, 2009

Well, everything open source is bad and implemented by idiots, so that's not really a useful critique.

The default config is overly permissive but with some changes and a couple iptables rules it locks down well enough.

BlankSystemDaemon
Mar 13, 2009



Matt Zerella posted:

netdata is great too because it runs at the lowest priority. For historical collection, though, you're going to want something like CockroachDB or Prometheus to store the info and Grafana as a frontend, which, yeah, is going to be annoying if you do it locally.

On FreeBSD, top -q runs at a very low priority too, and you can see per-process I/O by simply pressing m :colbert:

Also, prometheus needs to use inetd with TCP wrappers (or something similar, if it exists in Linux) to launch a data collector whenever prometheus pulls data - it's silly to have a pull model require an application that runs all the time.
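A sketch of what that looks like in FreeBSD's /etc/inetd.conf (the service name and exporter binary here are made-up placeholders; the column layout is standard inetd):

```
# service  socktype  proto  wait/nowait  user    server-program           args
prom-node  stream    tcp    nowait       nobody  /usr/local/bin/exporter  exporter
```

The service name has to map to a port in /etc/services; inetd then spawns one short-lived exporter per scrape, so nothing sits resident between pulls.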

The best analytics ever came from Sun Fishworks and could drill down to per-disk IOPS based on latency with dtrace:
https://www.youtube.com/watch?v=tDacjrSCeq4

FreeBSD comes close though:



BlankSystemDaemon fucked around with this message at 21:26 on Jul 15, 2020

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Mr. Crow posted:

I run NFS at home for ease between all the Linux boxes, and then use Syncthing on a subset of the NFS for the phones and the couple of Windows boxes. It works great.

I was constantly having issues with samba whenever I tried to do anything with any sort of permissions, or with streaming video and other large files.

I've just managed to get it working by enabling 3 sebooleans: allow home dirs, export all r/o, export all r/w.

Then I manually mounted the share on my laptop running Linux.

I checked that it was permanent by rebooting my laptop and rebooting the VM the samba server is on, which acts as the conduit between the NFS share and anything I want to connect over samba. It worked without having to relabel any SELinux contexts. :-)

I'm not averse to relabelling everything with restorecon -Rv. I just didn't want to gently caress up the contexts that the NFS server was expecting on the files, but in the end no relabelling was necessary. That must be how I achieved it last time.

The whole setup survived a reboot.

Does Windows play nicely with NFS? I thought that NFS was primarily a *nix thing, so that also swayed me to use samba to share to Windows. The performance of a well-tuned NFS share is much faster than SMB, isn't it? Maybe I should rethink and just use NFS for everything. Hmm.
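For anyone trying to reproduce the seboolean part of this, the three booleans described above most likely map to these standard targeted-policy names (the mapping from apropos man's wording is my assumption):

```shell
# Persistently (-P) allow samba to share home dirs and export everything
# read-only / read-write.
setsebool -P samba_enable_home_dirs on
setsebool -P samba_export_all_ro on
setsebool -P samba_export_all_rw on

# Verify:
getsebool samba_enable_home_dirs samba_export_all_ro samba_export_all_rw
```

With `samba_export_all_rw` on, no relabelling to samba_share_t is needed, which matches the "no restorecon required" outcome above.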

BlankSystemDaemon
Mar 13, 2009



apropos man posted:

I've just managed to get it working by enabling 3 sebooleans: allow home dirs, export all r/o, export all r/w.

Then I manually mounted the share on my laptop running Linux.

I checked that it was permanent by rebooting my laptop and rebooting the VM the samba server is on, which acts as the conduit between the NFS share and anything I want to connect over samba. It worked without having to relabel any SELinux contexts. :-)

I'm not averse to relabelling everything with restorecon -Rv. I just didn't want to gently caress up the contexts that the NFS server was expecting on the files, but in the end no relabelling was necessary. That must be how I achieved it last time.

The whole setup survived a reboot.

Does Windows play nicely with NFS? I thought that NFS was primarily a *nix thing, so that also swayed me to use samba to share to Windows. The performance of a well-tuned NFS share is much faster than SMB, isn't it? Maybe I should rethink and just use NFS for everything. Hmm.

Windows 10 Pro and more expensive editions ship with an NFS client which works quite well, for what it is - except that Windows doesn't really have a concept of NFS file locking.

xzzy
Mar 5, 2009

D. Ebdrup posted:

Also, prometheus needs to use inetd with TCP wrappers (or something similar, if it exists in Linux) to launch a data collector whenever prometheus pulls data - it's silly to have a pull model require an application that runs all the time.

Explain? Because I'm doing no such thing. Not being snarky either, am genuinely curious what you're talking about.

My prometheus server is a single process docker container and it pulls metrics from 2700 servers without breaking a sweat, so I guess it's fine?

BlankSystemDaemon
Mar 13, 2009



xzzy posted:

Explain? Because I'm doing no such thing. Not being snarky either, am genuinely curious what you're talking about.

My prometheus server is a single process docker container and it pulls metrics from 2700 servers without breaking a sweat, so I guess it's fine?

On FreeBSD, inetd is a daemon that can either respond to plaintext queries on any TCP connection or launch other daemons. For example, it can return an ident response (which can even be faked with the -f switch, if a user has a .fakeid or .noident file), or it can launch a Prometheus data exporter when Prometheus pulls data. This can be advantageous, since exporting sysctls (and gstat data, as in the earlier example) only takes a fraction of a second.
The point is to decrease the load on the servers you're pulling data from, not the ingest server.

It used to be called the Internet Super Server in common parlance, but, well, Microsoft decided it was fine to call their software IIS, for reasons which probably made some kind of sense at the time.

BlankSystemDaemon fucked around with this message at 22:24 on Jul 15, 2020

xzzy
Mar 5, 2009

That makes sense; I read it as if Prometheus itself needed to be behind inetd. Linux moved to xinetd eons ago, so the pattern is still available, but I don't use it anymore.

We ran gmond as a daemon for 10+ years and never had load problems, servers have plenty of overhead. So when it came time for the New Hotness I didn't even think twice about running netdata. :v:
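For completeness, the xinetd equivalent of the per-scrape launch pattern looks roughly like this (service name, port, and binary path are illustrative placeholders):

```
# /etc/xinetd.d/exporter -- spawn a one-shot collector per connection
service exporter
{
    type        = UNLISTED
    port        = 9100
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nobody
    server      = /usr/local/bin/exporter
    disable     = no
}
```

`type = UNLISTED` lets you declare the port inline instead of adding the service to /etc/services.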

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

D. Ebdrup posted:

Windows 10 Pro and more expensive editions ship with an NFS client which works quite well, for what it is - except that Windows doesn't really have a concept of NFS file locking.

Well, my movie streaming is poo poo-hot, going from an SSD array to a VM that's also running on SSD, over NFS. I think I'll just leave the hybrid bastard combo of samba running. SMB is handy if you have a file on some random device like a smartphone or tablet and you just wanna chuck it onto some reliable storage, i.e. the Linux server.

I haven't seen that many Android apps that support NFS (although I haven't looked hard, to be fair). I think Solid Explorer supports NFS but samba/SMB is more widespread. NFS does seem more straightforward to set up and control though. I had that configured in an hour, the other night.

Mr. Crow
May 22, 2008

Snap City mayor for life

xzzy posted:

Well, everything open source is bad and implemented by idiots, so that's not really a useful critique.

The default config is overly permissive but with some changes and a couple iptables rules it locks down well enough.

The fact that you have to log in through their API and session management to even get at your server dashboard (with a private registry, mind you) is a non-starter for me; that's a huge dark pattern. Add to that that they collect any and all information about your usage, logins, and geolocation, and I'm out. I'll stick with Cockpit for a dashboard.

xzzy
Mar 5, 2009

I guess it doesn't affect me; I turned off the registry function and never get nagged for a login when visiting the netdata page on one of my servers. Even before the cloudification the registry wasn't that great a feature, so I query puppetdb to build a landing page that does basically the same thing (but also provides a link to Grafana for longer-term metrics).

Computer viking
May 30, 2011
Now with less breakage.

As for samba vs NFS performance, I had no issues pulling a few hundred MB/s into Windows 7 from a not-especially-tuned samba share when I tried a few years ago, and that was limited by the spinning disks in the server. NFS is probably faster, but SMB is unlikely to be a bottleneck in any sort of daily home use.

And, yes, I bought two old 10gbit/fibre cards solely to link my fileserver to my gaming pc. And because fibre optics are cool.

Mr. Crow
May 22, 2008

Snap City mayor for life
I wasn't aware you could turn off the registry? I evaluated it a few months ago, so I didn't go to a crazy amount of effort trying to disable anything, but the docs I could see clearly highlighted that you need a registry for the multi-computer dashboard. Maybe it's just another dark pattern of obfuscating the options to avoid it?


Plex is basically the same, and I'm really happy to get rid of it and use Jellyfin.

Computer viking
May 30, 2011
Now with less breakage.

Plex does still work without a central login, but we mostly use it because we so far haven't found a replacement that we liked - either for UI/polish reasons or because it seemed like everything else was in the middle of splits/merges/not being updated.

I haven't tested Jellyfin, though; I'll take a look when I find some time. :)

edit: Ooh, .NET Core on FreeBSD looks just a bit unfinished. Time to find out just how bad it is.

Computer viking fucked around with this message at 03:45 on Jul 16, 2020

xzzy
Mar 5, 2009

Mr. Crow posted:

I wasn't aware you could turn off the registry? I evaluated it a few months ago, so I didn't go to a crazy amount of effort trying to disable anything, but the docs I could see clearly highlighted that you need a registry for the multi-computer dashboard. Maybe it's just another dark pattern of obfuscating the options to avoid it?

You sort of can. In the config you can set the registry to enabled or disabled, and set a URL to use as the registry. If you set the registry to disabled and don't set a URL, you get the cloud login prompt in the drop-down menu, but the login link takes you to a 404 error (lol). If you do set a URL, the cloud login link disappears. I don't use the registry at all (I tried, once; I have too many servers and it's a mess), so it's set to a nonexistent host name and netdata offers some bullshit test servers. But if the URL is valid (a netdata daemon you chose to be the master), your stuff should show up.

I think there's a feature request in their issues to make declining their cloud service less stupid and they seem receptive to it, so maybe it'll get better.

I'm not defending their decisions at all, like I said previously everything is dumb. But it's the least bad solution I found when migrating off our ganglia infrastructure and it's done the job I need it to do (which is pump Prometheus full of data).
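In config terms, the registry knobs described above live in netdata.conf's [registry] section (the option names are netdata's; the master URL is a placeholder):

```ini
# netdata.conf
[registry]
    enabled = no
    # ...or point every node at a registry daemon of your own instead of
    # netdata's hosted one:
    # enabled = yes
    # registry to announce = http://my-netdata-master:19999
```

With `enabled = no` and no announce URL, each node's dashboard works standalone and nothing phones home to the central registry.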

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Computer viking posted:

Plex does still work without a central login, but we mostly use it because we so far haven't found a replacement that we liked - either for UI/polish reasons or because it seemed like everything else was in the middle of splits/merges/not being updated.

I haven't tested Jellyfin, though; I'll take a look when I find some time. :)

edit: Ooh, .net core on FreeBSD looks just a bit unfinished. Time to find out just how bad it is.

As an ex-Plex user, I converted to Jellyfin quite a few months ago and I've never looked back. I've used it as a Docker container, but now I just set up a 4-core VM and have transcoding turned to low. I mainly stream 1080p, but I do have a 4K TV downstairs. I've tried Jellyfin with the odd ridiculously high bit-rate 4K movie (the Blade Runner sequel) and it coped fine. I think that one was about a 28GB download or something stupid.

It's so much nicer to have a fully open-source project that doesn't care about registering with a central authority. They're doing a brilliant job with the fork from Emby. I've run it in a Debian VM before, and I'm almost certain I had it running in a CentOS VM at one point. I'm currently using it in a Manjaro VM, because I'd had such an easy experience on deb and RPM distros that I thought, "What the heck, I'll give Arch a try too."

I've had an easier experience running it in a VM rather than Docker because it saved on piddling around with network settings.

The UI was already pretty good when they first forked it, but it's gotten gradually better over time. It's probably slightly behind Plex in terms of showing off the UI to your pals, but it's very decent looking, and if you're into adding GPUs or running on bare metal, there are options for using Intel Quick Sync for transcoding that you don't (didn't?) get in Plex. The only snag I've found with Jellyfin is that when I tried to install it on a tiny Linux box at my parents' house, I struggled to find an app on their LG TV that was compatible with Jellyfin, so I used an ageing LG app originally designed to work with Emby. I think the Android app is pretty useless at finding your Jellyfin server (last time I used it), but I don't care about streaming a film to my phone.

I'd honestly never even think of using Plex again if the development of Jellyfin continues as well as it has done the last year or so.

Xik
Mar 10, 2011

Dinosaur Gum

This is real cool, but I hope that guy normally wears hearing protection :ohdear:

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home

apropos man posted:


I haven't seen that many Android apps that support NFS (although I haven't looked hard, to be fair). I think Solid Explorer supports NFS but samba/SMB is more widespread. NFS does seem more straightforward to set up and control though. I had that configured in an hour, the other night.

If you find one, report back because I couldn’t find one for my Shield TV. I have to run a samba share just to serve my ROMs to it.

Edit: and if you’ve used Emby, how much has Jellyfin diverged from it? I bought a lifetime license back in 2015 and don’t have any complaints (except the iOS app is kinda trash) but I could see Jellyfin moving faster with all the community support.

Chilled Milk fucked around with this message at 11:18 on Jul 16, 2020

Computer viking
May 30, 2011
Now with less breakage.

apropos man posted:

As an ex-Plex user, I converted to Jellyfin quite a few months ago and I've never looked back. I've used it as a Docker container, but now I just set up a 4-core VM and have transcoding turned to low. I mainly stream 1080p, but I do have a 4K TV downstairs. I've tried Jellyfin with the odd ridiculously high bit-rate 4K movie (the Blade Runner sequel) and it coped fine. I think that one was about a 28GB download or something stupid.

It's so much nicer to have a fully open-source project that doesn't care about registering with a central authority. They're doing a brilliant job with the fork from Emby. I've run it in a Debian VM before, and I'm almost certain I had it running in a CentOS VM at one point. I'm currently using it in a Manjaro VM, because I'd had such an easy experience on deb and RPM distros that I thought, "What the heck, I'll give Arch a try too."

I've had an easier experience running it in a VM rather than Docker because it saved on piddling around with network settings.

The UI was already pretty good when they first forked it, but it's gotten gradually better over time. It's probably slightly behind Plex in terms of showing off the UI to your pals, but it's very decent looking, and if you're into adding GPUs or running on bare metal, there are options for using Intel Quick Sync for transcoding that you don't (didn't?) get in Plex. The only snag I've found with Jellyfin is that when I tried to install it on a tiny Linux box at my parents' house, I struggled to find an app on their LG TV that was compatible with Jellyfin, so I used an ageing LG app originally designed to work with Emby. I think the Android app is pretty useless at finding your Jellyfin server (last time I used it), but I don't care about streaming a film to my phone.

I'd honestly never even think of using Plex again if the development of Jellyfin continues as well as it has done the last year or so.

Right, that sounds promising. Does it have a web UI that you can use to view content?

Our setup probably only appeals to couples where both are IT people - we have a small desktop PC hidden away booting passwordless into Firefox full screen on Fedora, and then we just have tabs for the Plex web frontend next to YouTube and whatever. Add a wireless keyboard/trackpad as a remote and it's honestly the least annoying solution we've tried. However, it does kind of limit us to "things viewable as web pages".

ToxicFrog
Apr 26, 2008


Computer viking posted:

Right, that sounds promising. Does it have a web UI that you can use to view content?

Our setup probably only appeals to couples where both are IT people - we have a small desktop PC hidden away booting passwordless into Firefox full screen on Fedora, and then we just have tabs for the Plex web frontend next to YouTube and whatever. Add a wireless keyboard/trackpad as a remote and it's honestly the least annoying solution we've tried. However, it does kind of limit us to "things viewable as web pages".

Yep. Web UI, chromecast support, and there's a plugin for it for Kodi. I initially set it up with Kodi but the chromecast and in-browser support work so well that we basically haven't touched that since the first week, and I might decommission that rpi and retask it for something else.

My two biggest beefs with it are:
- there's no "a pile of random crap, just show me the filesystem structure" library type; "Mixed Content" tries to autodetect things as movies or TV shows and gets it wrong a lot of the time. "Photos" generally works decently for this purpose, at least.
- collection grouping doesn't work (e.g. you can't take all the Tremors movies and group them into a collection that shows up as a single item in the movie list) -- this is a known issue fixed in 10.6.x, though.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

The Milkman posted:

If you find one, report back because I couldn’t find one for my Shield TV. I have to run a samba share just to serve my ROMs to it.

Edit: and if you’ve used Emby, how much has Jellyfin diverged from it? I bought a lifetime license back in 2015 and don’t have any complaints (except the iOS app is kinda trash) but I could see Jellyfin moving faster with all the community support.

I thought Solid Explorer had NFS, but I just checked and it has a few options (SMB, Google Drive, etc.) but no NFS :-(

I'm not sure I ever used Emby. I think I started using Jellyfin right around the time of the fork. I think the fork was for political reasons (Emby going closed source, or something), so you may get good mileage out of continuing with it. Development on Jellyfin does seem pretty busy, though, and there's definitely a solid use case for an open alternative to Plex, so I can see why devs are attracted to the project.

Computer viking posted:

Right, that sounds promising. Does it have a web UI that you can use to view content?

Our setup probably only appeals to couples where both are IT people - we have a small desktop PC hidden away booting passwordless into Firefox full screen on Fedora, and then we just have tabs for the Plex web frontend next to YouTube and whatever. Add a wireless keyboard/trackpad as a remote and it's honestly the least annoying solution we've tried. However, it does kind of limit us to "things viewable as web pages".

More or less exactly the same as me. I have a tiny machine built from an ASUS motherboard (a thin Mini-ITX standard), a Kaby Lake i5, a single 8GB SODIMM, and a 64GB M.2 SATA drive. It sits behind my TV and runs Fedora, which I control with a Logitech K400 wireless keyboard (it has a trackpad on the side). I use a browser in Fedora (mainly Firefox, sometimes Chromium) to watch YouTube and Jellyfin. You just point Firefox at your Jellyfin instance on port 8096 and Bob's yer uncle. I can't remember the IP of my Jellyfin VM, but it's just a case of running it somewhere on your LAN and going to 192.168.1.xxx:8096 in Firefox. Then hit F11 when you start watching something to toggle fullscreen.

I've also got Kodi installed on my little TV box and it plays nice with Jellyfin but most of the time it works so well in Firefox I don't bother running Kodi.

Computer viking
May 30, 2011
Now with less breakage.

Excellent, that's uncannily close to how we do it (except that it's a full ATX machine in a small tower).

I'll post here when I've messed around with it for a bit. :)

Alpha Mayo
Jan 15, 2007
hi how are you?
there was this racist piece of shit in your av so I fixed it
you're welcome
pay it forward~
Gave Manjaro Linux a shot. It had a nice out-of-box experience, but then tiny issues began appearing, and after 2 weeks of not using it I did a pacman -Syu, it failed the dist upgrade, and it somehow deleted my kernels (including the recovery one). I don't feel like spending a day trying to fix it, and I'm done with all the small distros that are cutting-edge but unstable like that.
I know Ubuntu the best; is that still a safe bet? Also considering Fedora for something new. I just want a very large, corporate-backed distro that has stable package management (most important), that I can probably use for 2 years running updates and not have it poo poo the bed. I don't care about cutting-edge anymore; Linux Desktop has come a long way. I remember running cutting-edge before because it was the only way to get audio mixing or whatever, but those types of things seem to be fixed now.

RFC2324
Jun 7, 2012

http 418

Alpha Mayo posted:

Gave Manjaro Linux a shot. It had a nice out-of-box experience, but then tiny issues began appearing, and after 2 weeks of not using it I did a pacman -Syu, it failed the dist upgrade, and it somehow deleted my kernels (including the recovery one). I don't feel like spending a day trying to fix it, and I'm done with all the small distros that are cutting-edge but unstable like that.
I know Ubuntu the best; is that still a safe bet? Also considering Fedora for something new. I just want a very large, corporate-backed distro that has stable package management (most important), that I can probably use for 2 years running updates and not have it poo poo the bed. I don't care about cutting-edge anymore; Linux Desktop has come a long way. I remember running cutting-edge before because it was the only way to get audio mixing or whatever, but those types of things seem to be fixed now.

Ubuntu desktop isn't corporate backed anymore, just use Fedora.

Volguus
Mar 3, 2009

RFC2324 posted:

just use Fedora.

Seconding the Fedora recommendation. It combines the best of both worlds: relatively new packages and quite stable.

RFC2324 posted:

Ubuntu desktop isn't corporate backed anymore

Canonical is not in charge anymore? When did this happen?

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:
I like Fedora, though I have to admit I was considering giving Manjaro a try. Considering it slightly less now.

CaptainSarcastic
Jul 6, 2013



OpenSUSE is generally solid and well-supported.

I'm running Tumbleweed and it's been great aside from some recent issues with my stupid rtl8812au wireless adapter. I think that's more a problem with Realtek than OpenSUSE, though.

Mr Shiny Pants
Nov 12, 2012

Alpha Mayo posted:

Gave Manjaro Linux a shot. It had a nice out-of-box experience, but then tiny issues began appearing, and after 2 weeks of not using it I did a pacman -Syu, it failed the dist upgrade, and it somehow deleted my kernels (including the recovery one). I don't feel like spending a day trying to fix it, and I'm done with all the small distros that are cutting-edge but unstable like that.
I know Ubuntu the best; is that still a safe bet? Also considering Fedora for something new. I just want a very large, corporate-backed distro that has stable package management (most important), that I can probably use for 2 years running updates and not have it poo poo the bed. I don't care about cutting-edge anymore; Linux Desktop has come a long way. I remember running cutting-edge before because it was the only way to get audio mixing or whatever, but those types of things seem to be fixed now.

I'll probably sound like a broken record, but I really like KUbuntu.

RFC2324
Jun 7, 2012

http 418

Volguus posted:

Seconding the Fedora recommendation. It combines the best of both worlds: relatively new packages and quite stable.


Canonical is not in charge anymore? When did this happen?

I wanna say it was 2018 when they announced they weren't supporting the desktop product anymore, that it would be community-only. Checking their website, they are totally supporting it at this point, so I am full of poo poo.

Ubuntu is still pretty bad; use Fedora (or openSUSE).

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home

Alpha Mayo posted:

I just want a very large, corporate-backed distro that has stable package management (most important), that I can probably use for 2 years running updates and not have it poo poo the bed.

That's exactly Fedora.

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Fedora is released every 6 months and EOLs after 13 months. For something server-oriented that lasts significantly longer, use CentOS (free) or RHEL (not free).

nem
Jan 4, 2003

panel.dev
apnscp: cPanel evolved
AppStreams in CentOS 8 allow you to dabble with newer technology ahead of the typical release pace of RHEL.
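As a sketch of how AppStream module streams work in practice (nodejs is just an example module; substitute whatever stream you're after):

```shell
# List the streams available for a module, opt in to a newer one, install it.
dnf module list nodejs
sudo dnf module enable nodejs:14
sudo dnf install nodejs
```

The base OS keeps RHEL's slow cadence while individual modules track a newer stream.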

Varkk
Apr 17, 2004

minato posted:

Fedora is released every 6 months and EOLs after 13 months. For something server-oriented that lasts significantly longer, use CentOS (free) or RHEL (not free).

The built-in upgrade process has been solid and hasn't caused me any issues through several release cycles.

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home

minato posted:

Fedora is released every 6 months and EOLs after 13 months. For something server-oriented that lasts significantly longer, use CentOS (free) or RHEL (not free).

That is true, but he said he was coming from Manjaro, so I assumed desktop, not server. The system upgrade path is pretty solid in my experience, even from current -> next (beta) -> next. The latest two releases are supported as well, so you could theoretically skip every other upgrade if you really wanted to, or give new releases a month to settle down. Flatpak makes a lot of upgrade headaches go away as well, at least for desktop stuff.

CaptainSarcastic
Jul 6, 2013



The Milkman posted:

That is true, but he said he was coming from Manjaro, so I assumed desktop, not server. The system upgrade path is pretty solid in my experience, even from current -> next (beta) -> next. The latest two releases are supported as well, so you could theoretically skip every other upgrade if you really wanted to, or give new releases a month to settle down. Flatpak makes a lot of upgrade headaches go away as well, at least for desktop stuff.

Yeah, in my experience in-place upgrades have been good for a while. I have a desktop running openSUSE Leap that I think has been through 4 or 5 point releases, most recently to 15.2, and it's been super smooth. Not really familiar with Flatpak; I think I've looked at it on my Chromebook, but not on a normal Linux install.
