Volguus
Mar 3, 2009

mystes posted:

DKMS is a million times better than having to manually compile modules for new kernel versions, though the nvidia drivers are still annoying.

DKMS used to choke on my system every now and then. It would run fine when a new kernel was installed, compiling and doing its job, but when a new version of the driver came along it would choke and not compile anything. The only solution was to remove the older driver version (from /var/lib/dkms, I think) and re-run it.
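
In dkms terms it was roughly this, if memory serves; the version number is made up:
code:
sudo dkms remove nvidia/440.82 --all   # clear out the stale driver version
sudo dkms autoinstall                  # rebuild modules for the running kernel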
With akmod I never had any problems.

BaseballPCHiker
Jan 16, 2006

Well, my home Linux experience has hit another roadblock, though overall I'm more than a year into using Linux at home and will never look back. It's a wonderful experience.

My issue is my ignorance, specifically my ignorance of dealing with partitions and file systems through the CLI. It just seems to be something I do better with a GUI.

I bought a 3 TB Western Digital My Book to use as external storage for my Plex server. I have it plugged in via USB, and if I run lsblk I see:
code:
 
lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0  93.9M  1 loop /snap/core/9066
loop2    7:2    0    97M  1 loop /snap/core/9289
sda      8:0    0 931.5G  0 disk 
├─sda1   8:1    0     1M  0 part 
└─sda2   8:2    0 931.5G  0 part /
sdb      8:16   0   2.7T  0 disk 
I used parted and ran:
code:
mkpart
mklabel gpt
File system type - ext4
Start 1
End 2.5TB
Then it tells me to edit fstab.

I've edited fstab to include this line:
UUID=2650acfd-1a40-4404-abea-a0d8908f1d5b /home/username/docker/media-external/ ext4 uid=1000,gid=999

Then when trying to mount the new drive using:
sudo mount /home/username/docker/media-external I get this error message:

mount: /home/username/docker/media-external: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.

I'm not sure where I'm running into issues.

My second thought is to just plug it in and manually move video files over for bulk storage, which isn't ideal but manageable.

RFC2324
Jun 7, 2012

http 418

BaseballPCHiker posted:

Well, my home Linux experience has hit another roadblock, though overall I'm more than a year into using Linux at home and will never look back. It's a wonderful experience.

My issue is my ignorance, specifically my ignorance of dealing with partitions and file systems through the CLI. It just seems to be something I do better with a GUI.

I bought a 3 TB Western Digital My Book to use as external storage for my Plex server. I have it plugged in via USB, and if I run lsblk I see:
code:
 
lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0  93.9M  1 loop /snap/core/9066
loop2    7:2    0    97M  1 loop /snap/core/9289
sda      8:0    0 931.5G  0 disk 
├─sda1   8:1    0     1M  0 part 
└─sda2   8:2    0 931.5G  0 part /
sdb      8:16   0   2.7T  0 disk 
I used parted and ran:
code:
mkpart
mklabel gpt
File system type - ext4
Start 1
End 2.5TB
Then it tells me to edit fstab.

I've edited fstab to include this line:
UUID=2650acfd-1a40-4404-abea-a0d8908f1d5b /home/username/docker/media-external/ ext4 uid=1000,gid=999

Then when trying to mount the new drive using:
sudo mount /home/username/docker/media-external I get this error message:

mount: /home/username/docker/media-external: wrong fs type, bad option, bad superblock on /dev/sdb, missing codepage or helper program, or other error.

I'm not sure where I'm running into issues.

My second thought is to just plug it in and manually move video files over for bulk storage, which isn't ideal but manageable.

You still need to run mkfs on it to actually format it.
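
E.g., assuming the new partition shows up as /dev/sdb1 (double-check with lsblk first):
code:
sudo mkfs.ext4 /dev/sdb1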

BaseballPCHiker
Jan 16, 2006

RFC2324 posted:

You still need to run mkfs on it to actually format it.

Thanks! I think I am getting somewhere now.

So here's what I did:
Made the partition using parted.
Used mkfs.ext4 to make the file system.

Now if I try to mount it I still get:
mount: /home/username/docker/media-external: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error.

BUT! It's at least showing a partition now!
code:
lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0    7:0    0  93.9M  1 loop /snap/core/9066
loop2    7:2    0    97M  1 loop /snap/core/9289
sda      8:0    0 931.5G  0 disk 
├─sda1   8:1    0     1M  0 part 
└─sda2   8:2    0 931.5G  0 part /
sdb      8:16   0   2.7T  0 disk 
└─sdb1   8:17   0   2.7T  0 part 

RFC2324
Jun 7, 2012

http 418

And now that I actually bother to read it: your fstab entry is bad. I'm not sure where to point you for a good example, though; I just use the man page because I've done it enough.

it should look something like:

code:
UUID=2650acfd-1a40-4404-abea-a0d8908f1d5b /home/username/docker/media-external/     ext4    defaults   0  2
Your entry omits the last two fields (dump and fsck pass), and uid=/gid= aren't valid mount options for ext4 anyway.
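
Once it's fixed you can test it without rebooting; something like:
code:
sudo findmnt --verify   # sanity-check fstab syntax (util-linux)
sudo mount -a           # try to mount everything listed in fstab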

BaseballPCHiker
Jan 16, 2006

RFC2324 posted:

And now that I actually bother to read it: your fstab entry is bad. I'm not sure where to point you for a good example, though; I just use the man page because I've done it enough.

it should look something like:

code:
UUID=2650acfd-1a40-4404-abea-a0d8908f1d5b /home/username/docker/media-external/     ext4    defaults   0  2
Your entry omits the last two fields (dump and fsck pass), and uid=/gid= aren't valid mount options for ext4 anyway.

Thank you. Really appreciate the help.

I was coming to the same conclusion, that it's something off with my fstab file, and was pulling up the manual for it.

RFC2324
Jun 7, 2012

http 418

BaseballPCHiker posted:

Thank you. Really appreciate the help.

I was coming to the same conclusion, that it's something off with my fstab file, and was pulling up the manual for it.

Not sure which manual you mean (there are many), but here is the man page. It gives you what you need to make a working entry without having to dig through all the options:
https://www.man7.org/linux/man-pages/man5/fstab.5.html

Antigravitas
Dec 8, 2019

The rescue for the farmers:
Post the output of blkid.
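
I.e. something like this, to confirm that the UUID in your fstab actually matches the partition:
code:
sudo blkid /dev/sdb1
# e.g.: /dev/sdb1: UUID="2650acfd-1a40-4404-abea-a0d8908f1d5b" TYPE="ext4"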

Also, I find partitioning from the CLI to be tedious. I just use gparted when I have to, or ZFS, where I don't have to deal with partitioning at all.

Mr Shiny Pants
Nov 12, 2012
So I got my KVM VM running again on kernel 5.4. Turns out I had a typo in the GRUB command line: it's pci.ids, not "pci ids". After that it all started working again.

As for the USB controller:

code:
45:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:145a]
        Subsystem: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:145a]
        Kernel driver in use: vfio-pci
45:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
        Subsystem: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
        Kernel driver in use: vfio-pci
        Kernel modules: ccp
45:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
        Subsystem: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
        Kernel driver in use: vfio-pci
This is on my X399 board, to get the USB controller passed through into my VM.
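
For reference, the relevant bit of /etc/default/grub ends up looking something like this. This is a sketch: the ID comes from the lspci output above, and I'm assuming the usual vfio-pci.ids= form of the parameter, so double-check against your distro docs:
code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt vfio-pci.ids=1022:145c"
# regenerate the grub config afterwards, e.g. on Ubuntu:
sudo update-grub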

mystes
May 31, 2006

I was using RDP for the last week, but I finally got a new video card, so I can run the Windows VM at full resolution using Looking Glass.

The actual video works perfectly with Looking Glass, and it's cool. Unfortunately the SPICE mouse control is a bit flaky (Barrier seems to be more reliable). I'm using the stable version, so maybe the newest beta release will be better.

I guess I might keep using RDP when I need the VM in a window, and Looking Glass for games or other stuff where I want Windows to be fullscreen.

It's sort of unfortunate that there isn't a single solution that works perfectly in all cases right now, but hopefully that will change as Looking Glass matures.

BaseballPCHiker
Jan 16, 2006

Antigravitas posted:

Post the output of blkid.

Also, I find partitioning from CLI to be tedious. I just use gparted when I have to, or ZFS where I don't have to deal with partitioning at all.

Yeah, I ended up just throwing the drive on another Linux machine and using gparted to get it set up how I wanted. After that it was pretty straightforward to mount. Got it up and working, and now I can rip my Simpsons DVDs in higher quality!

Hexigrammus
May 22, 2006

Cheech Wizard stories are clean, wholesome, reflective truths that go great with the marijuana munchies and a blow job.
Hello thread. I'd appreciate some advice on which direction to go with my new laptop.

For the last five years I've been running dual boot Windows/Ubuntu on a cheap Costco Acer netbook. A couple of weeks ago an apparent hardware issue killed it. I ended up replacing it with a refurbished Thinkpad T540p running Windows 10.

It's been quite a while since I spent time poking the guts of computers - long enough for this UEFI thing to gain prominence. The old netbook had UEFI but it was transparent to Ubuntu and I had no issues installing and dual booting on it.

Not so much the Thinkpad. GRUB refused to install, the Windows system is invisible as a bootable partition to the Ubuntu installer, and likewise Linux is invisible to the UEFI boot loader. While poking around with BCDedit and EasyBCD I accidentally deleted the single entry for the Windows partition and had to re-attach it. Apparently the save didn't take, and I've lost access to the new copy of Windows. On the bright side, Linux is now accessible, so I guess UEFI is rolling over to Legacy when it doesn't find its bootable partition. I have the old hard drive in a USB carrier and can also boot from it to either of the old Windows and Linux partitions through the old GRUB. The only thing fouled up is the new Windows system.

I'm thinking that I might need to re-format the new drive. I could clone the old one to it, but not sure what would happen if the new UEFI partition is replaced by one from a different machine, or if there is some magic protection to keep the UEFI from being formatted. Maybe there's a way to nuke UEFI and replace it with GRUB?

Formatting and fresh Windows installs are problematic. I made a recovery disk set for Windows, but unfortunately it looks like you need a key for a fresh install, and while I'm pretty sure I have a set of media somewhere, it isn't with the other backup material and will probably take until the heat death of the universe to find.

CaptainSarcastic
Jul 6, 2013



Did you boot the Thinkpad into Windows and did it check for updates? Did you go through the activation/setup process with it?

If so then the hardware should be registered and you should be able to install the same spin of Windows 10 on it and it will activate automatically. Microsoft has made Windows 10 validation based on hardware fingerprinting, so reinstalls are usually less of a hassle.

Hexigrammus
May 22, 2006

Cheech Wizard stories are clean, wholesome, reflective truths that go great with the marijuana munchies and a blow job.
I booted up Windows, checked it out and made a recovery disk & disk image as soon as I opened the package. I assume it came fully patched from the refurbishing shop - it didn't seem to download anything when I was checking it out.

I might try downloading installation media from Microsoft and see what it does. I understood from the Knowledge Base articles I read that a key was required but it would be good if that was obsolete now.

That might explain why the Thinkpad booted Windows from the old hard drive when I attached it through a USB carrier. I expected it to see a foreign CPU ID and refuse to run. I assumed from its behaviour that it was updating drivers, but maybe it was re-registering the OS as well. Amazingly enough, the old Windows runs better on the Thinkpad through a USB 2.0 port than it did when the hard drive was plugged directly into the old netbook.

Mr Shiny Pants
Nov 12, 2012
I don't know if this is a Linux question or a DevOps one, but here goes.

I have a Docker host that runs nginx as a reverse proxy. The problem with this setup is that nginx runs in a Docker bridge network on a specific IP address. Run this way, all network requests appear to come from the internal Docker gateway IP, which NATs the incoming connections. That's pretty useless for logging, because the proxy only ever sees one IP address.

To get around this you can run the container in host networking and that does the trick. Now you can see the actual IP addresses that connect to the proxy. The only problem with this setup is that you can't connect the container to a second internal network. So a proxy -> internal network -> other container does not work.

Anyone running something similar? I get that Kubernetes with an Ingress network would probably work but that is a bit overkill at the moment.

NihilCredo
Jun 6, 2011

Suppress anger in every possible way:
that one thing will defame you more than many virtues will commend you

A properly configured reverse HTTP proxy should send the X-Forwarded-For header containing the original client IP. You should then configure your application logs to record that:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For
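
For nginx that's something like this (a sketch; the upstream name and port are placeholders):
code:
location / {
    proxy_pass http://app:8080;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
}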

Mr Shiny Pants
Nov 12, 2012

NihilCredo posted:

A properly configured reverse HTTP proxy should send the X-Forwarded-For header containing the original client IP. You should then configure your application logs to register that:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For

It does, but the proxy never sees the original IP address, because iptables rewrites the source address when it forwards the packets.

So X-Forwarded-For always contains the Docker internal network gateway.

Mr. Crow
May 22, 2008

Snap City mayor for life

Mr Shiny Pants posted:

I don't know if this is a Linux question or a DevOps one, but here goes.

I have a Docker host that runs nginx as a reverse proxy. The problem with this setup is that nginx runs in a Docker bridge network on a specific IP address. Run this way, all network requests appear to come from the internal Docker gateway IP, which NATs the incoming connections. That's pretty useless for logging, because the proxy only ever sees one IP address.

To get around this you can run the container in host networking and that does the trick. Now you can see the actual IP addresses that connect to the proxy. The only problem with this setup is that you can't connect the container to a second internal network. So a proxy -> internal network -> other container does not work.

Anyone running something similar? I get that Kubernetes with an Ingress network would probably work but that is a bit overkill at the moment.

I have a dozen setups like this and they work fine. You need to add the forwarded headers like he says above, or you're leaving information out.

Are you manually configuring the iptables or letting docker handle it?

Mr Shiny Pants
Nov 12, 2012

Mr. Crow posted:

I have a dozen setups like this and they work fine. You need to add the forwarded headers like he says above, or you're leaving information out.

Are you manually configuring the iptables or letting docker handle it?

I don't know what else to tell you, to be honest; from searching around it seems like a common problem.

This describes it better I guess: https://github.com/moby/moby/issues/15086

And another: https://github.com/nginx-proxy/nginx-proxy/issues/133

I stay as far away as possible from IPTables. :)
I am wondering how you got it running.

Edit: I am running the exact setup at home, and that does work. I'll dig some more after the weekend.

Mr Shiny Pants fucked around with this message at 08:05 on Jun 19, 2020

Volguus
Mar 3, 2009

Mr Shiny Pants posted:

I don't know what else to tell you, to be honest; from searching around it seems like a common problem.

This describes it better I guess: https://github.com/moby/moby/issues/15086

And another: https://github.com/nginx-proxy/nginx-proxy/issues/133

I stay as far away as possible from IPTables. :)
I am wondering how you got it running.

Edit: I am running the exact setup at home, and that does work. I'll dig some more after the weekend.

From my experience with iptables many years ago, I remember that while it is possible for iptables to change the source IP to the gateway IP, it is not mandatory. Again, this is from memory from a long time ago and I could be wrong; loving around with iptables rules was never my favourite pastime. If the rules look something like the following, you should be good.
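
A sketch of the kind of rule I mean: plain DNAT with no MASQUERADE, so the source address survives (addresses are made up):
code:
# forward port 443 to the container without rewriting the source address
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
iptables -A FORWARD -p tcp -d 172.17.0.2 --dport 443 -j ACCEPT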

Mr Shiny Pants
Nov 12, 2012
I think there is probably something funky with the configuration because I also thought it should just work out of the box.

The rules do say MASQUERADE, so that led me to believe that iptables does something funky with the packets.

MrPablo
Mar 21, 2003

Mr Shiny Pants posted:

I think there is probably something funky with the configuration because I also thought it should just work out of the box.

The rules do say MASQUERADE, so that led me to believe that iptables does something funky with the packets.

Masquerade does source NAT. In other words, it rewrites the source address of forwarded packets to the gateway's address, and rewrites the destination address of the replies back to the original address so the routing is handled correctly.

See this page for more information.

If you want to preserve the source address then you probably want to use a DNAT rule instead.

Here's an example from one of my firewalls:

code:
iptables=/sbin/iptables

# external IP
EXT_IP=123.123.123.123 

# destination IP and ports
dst=192.168.99.3
ports='80,443'

$iptables -t nat -I PREROUTING \
  -m multiport -p tcp -d $EXT_IP --dports $ports \
  -j DNAT --to $dst
(There's a bunch of stuff excluded from this example, but you get the general idea).

Mr. Crow
May 22, 2008

Snap City mayor for life
My experience is that you shouldn't be touching iptables at all with Docker, as it clobbers rules and does its own thing, and it's generally a headache if you try to configure anything for Docker outside of it (e.g. running firewalld and Docker together is a nightmare, even if firewalld is using iptables).

For a generic web service I'll usually create a container network (all of this through the Docker API) and add my web application, database, and proxy to that network, with the proxy being the only one binding to a real host interface. The other containers are then NATed behind the container network and can communicate with each other, but are blocked from the outside world (and from any other containers/services I may be running on the system).
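
Something like this, roughly (names and images are placeholders):
code:
docker network create webnet
docker run -d --name db    --network webnet -e POSTGRES_PASSWORD=secret postgres
docker run -d --name app   --network webnet my-web-app
docker run -d --name proxy --network webnet -p 80:80 -p 443:443 nginx
# only "proxy" binds host ports; "db" and "app" are reachable just inside webnet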

I was checking the documentation, and they mention that the default bridge network is legacy and recommend avoiding it, so perhaps there is some weird behavior in the implementation depending on the OS you're running?

https://docs.docker.com/network/bridge/

xzzy
Mar 5, 2009

I run Docker both ways, as we have some groups of machines that can't have users randomly opening up ports willy-nilly, and the easiest way for us to prevent that was to stop Docker from messing with iptables. It is frustrating to get started, though, because that type of setup is poorly documented; you're gonna have to flail around before you get it.
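
The switch for that is in the daemon config, if anyone wants to try it (restart dockerd afterwards):
code:
# /etc/docker/daemon.json
{
  "iptables": false
}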

Spoiler alert: you need to add rules so containers can get to the internet, eg:

code:
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -m comment --comment "docker0: NAT allow outgoing" -j MASQUERADE
Then you can open or block access to ports with standard rules in the INPUT chain, such as:

code:
iptables -I INPUT -p tcp --dport 443 -j DROP
There's a third case: we have a small cluster running a bunch of haproxy containers where we use the macvlan network driver. When you do this, the containers can have their own routable IP on their own interface inside the container, and they can run their own iptables rules. You'll have to elevate the container's privileges to do any of that, though (--cap-add NET_ADMIN --cap-add SYS_ADMIN --security-opt seccomp=unconfined), so only do it in spots where a limited set of people can fuss around with stuff.

Mr Shiny Pants
Nov 12, 2012
Thanks for the pointers, guys; it's really helpful.

I'll create a new network on Monday and see how it goes.

For the record, I haven't touched iptables or anything; this is just (what I think is) a regular Docker install.

Mr Shiny Pants
Nov 12, 2012
Well, after getting nowhere with rules and other configuration settings, I decided to create a new VM, install Ubuntu 20.04 and Docker on it, and use the compose files from the other server, just to have a known good setup to compare against.

This worked the first time. It seems there is something weird going on with that particular Linux machine.

Edit: after upgrading the original machine to 20.04 it still did not work as it should. I then deleted all the networks, did an iptables reset, and restarted Docker. After creating a new network it finally works; probably something funky deep down in the system. Thanks for the help.

Mr Shiny Pants fucked around with this message at 10:09 on Jun 23, 2020

BlankSystemDaemon
Mar 13, 2009



Here is an archive of every post in this thread up until Mr Shiny Pants' last post; it's not great for browsing, but at least everything's archived.
It's a compressed tar file, so you'll probably need 7zip to open it.

Alpha Mayo
Jan 15, 2007
hi how are you?
there was this racist piece of shit in your av so I fixed it
you're welcome
pay it forward~
So I have a Linux VPS (Ubuntu 20.04), and I'd like to self-host a VPN server on it for when I'm connecting to open WiFi hotspots. I know OpenVPN is the big one, but is there anything else I should look into that is performance-efficient?

Sheep
Jul 24, 2003
By pretty much every metric WireGuard is a better option.

Personally I wouldn't use OpenVPN for anything anymore. It's harder to configure, easier to misconfigure, at best equally secure but at worst less secure, slower, and way more resource-intensive. The only real advantage it has at this point is being older and more widely adopted.
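
The whole WireGuard server config fits on one screen, which says a lot. A minimal sketch; the keys and addresses are placeholders:
code:
# /etc/wireguard/wg0.conf
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
PublicKey = <client public key>
AllowedIPs = 10.8.0.2/32
Bring it up with "wg-quick up wg0".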

Sheep fucked around with this message at 20:23 on Jun 25, 2020

Alpha Mayo
Jan 15, 2007
hi how are you?
there was this racist piece of shit in your av so I fixed it
you're welcome
pay it forward~

Sheep posted:

By pretty much every metric Wireguard is a better option.

Personally I wouldn't use OpenVPN for anything anymore. It's harder to configure, easier to misconfigure, at best equally secure but at worst less secure (if you mess up the configuration), slower, and way more resource-intensive. The only advantage it has is being older.

Thanks, I remembered there was a newer VPN server but couldn't remember the name. I'll go with that.

Discussion Quorum
Dec 5, 2002
Armchair Philistine
So I started using Linux as my main desktop again (Manjaro, which is new to me; I used Arch long ago but am much more recently familiar with Debian/Ubuntu). Overall it's fine I guess, about like I remembered it but graphically a little snazzier.

The big difference is that instead of a piddly 1080p monitor I now have a 27" 4K and 25" 1440p (portrait orientation) and hoo boy is the support for high res/HiDPI/whatever janky. As far as I can tell, Gnome 3 more or less required some sort of workaround by mixing integer scaling in the Gnome settings with command-line xrandr fuckery. Cinnamon is proving to handle it better, but the display settings manager is a little buggy. Also, enabling HiDPI somehow halves the size of the Steam window (even after a restart), making it difficult to read even with my nose 2 inches from the screen.

So I guess my question is, assuming I've used and am reasonably happy with just about any desktop environment (KDE, Gnome, Cinnamon, MATE, XFCE - I could even switch to Ubuntu and use Unity for all I care) - which one will I hate the least with a 4K main display? Any DE-independent multi-monitor/multi-DPI tools I should try out?

e: currently I'm mostly making it work by just changing the UI fonts where I can and dealing with the apps that are too special to follow along, like Firefox or Steam

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
On FreeBSD - so far I've never been able to make partial ports updates work well. For example, if I build some packages on the initial ports repo, then update the ports repo to the latest, some port somewhere changes its dependency from python3.6-setuptools to python3.7-setuptools, and then you get a conflict because there is already a conflicting file at that destination. That's just one example, but I've run into the situation a couple of times.

Searching around, the solution I've gotten is basically to dump a list of installed ports, delete everything, set up an initial install (pkg and portmaster), and then pipe portmaster your list of installed ports to rebuild. Is there a better way to do that?
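
In other words, roughly this, as far as I can tell (a sketch, flags from memory):
code:
pkg info -qoa > ~/port-list.txt    # dump origins of everything installed
pkg delete -af                     # delete all packages
# reinstall pkg and portmaster, then rebuild from the list:
portmaster `cat ~/port-list.txt`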

One downside is that it deletes everything while it rebuilds. Could you clone your /usr/bin zfs dataset, have portmaster rebuild everything on the clone, snapshot, and promote the snapshot? Or would that break dynamically linked binaries as the libraries change underneath them? Might need to reboot after...

Out of curiosity, is there any built-in support for jails using a secondary userland that is mounted from a clone/snapshot? I could see that being an interesting option for backwards compatibility.

Paul MaudDib fucked around with this message at 05:35 on Jul 1, 2020

Antigravitas
Dec 8, 2019

The rescue for the farmers:

Discussion Quorum posted:

So I guess my question is, assuming I've used and am reasonably happy with just about any desktop environment (KDE, Gnome, Cinnamon, MATE, XFCE - I could even switch to Ubuntu and use Unity for all I care) - which one will I hate the least with a 4K main display? Any DE-independent multi-monitor/multi-DPI tools I should try out?

Ubuntu uses Gnome nowadays.

I don't have a 4k monitor, but afaik due to the fragmented ecosystem everything is a crapshoot anyway. Try how upstream KDE Plasma behaves via KDE Neon. That should be fairly painless to test since you don't need to actually install it to see if it works okay, and it's the "showcase" distro for Plasma. Unless you use NVIDIA, in which case you're hosed anyway.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

On FreeBSD - so far I've never been able to make partial ports updates work well. For example, if I build some packages on the initial ports repo, then update the ports repo to the latest, some port somewhere changes its dependency from python3.6-setuptools to python3.7-setuptools, and then you get a conflict because there is already a conflicting file at that destination. That's just one example, but I've run into the situation a couple of times.

Searching around, the solution I've gotten is basically to dump a list of installed ports, delete everything, set up an initial install (pkg and portmaster), and then pipe portmaster your list of installed ports to rebuild. Is there a better way to do that?

One downside is that it deletes everything while it rebuilds. Could you clone your /usr/bin zfs dataset, have portmaster rebuild everything on the clone, snapshot, and promote the snapshot? Or would that break dynamically linked binaries as the libraries change underneath them? Might need to reboot after...

Out of curiosity, is there any built-in support for jails using a secondary userland that is mounted from a clone/snapshot? I could see that being an interesting option for backwards compatibility.
FreeBSD ports have the ports/UPDATING and ports/MOVED files - the first keeps track of changes which could be breaking, and the second is meant as a machine-parsable way of letting utilities know about major version updates or deprecated ports.
Both poudriere and pkg support tracking ports/MOVED, so if you use them in combination to build and deploy your own packages (even across multiple machines), you get the same experience as you would using just pkg with the official packages. Incidentally, it can also be faster than portmaster.
Unfortunately, portmaster, which has always been community-maintained, was for a long time sort-of unmaintained (i.e. only receiving updates from porters who would sometimes take the time, rather than from a dedicated maintainer) - and while someone from the community has stepped up to the plate to be the portmaster maintainer, I don't believe it has yet grown support for ports/MOVED, which is why you're running into these problems.

BlankSystemDaemon fucked around with this message at 09:31 on Jul 1, 2020

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
what would have been the approved mechanism for doing a mass update before?

ToxicFrog
Apr 26, 2008


Discussion Quorum posted:

Also, enabling HiDPI somehow halves the size of the Steam window (even after a restart), making it difficult to read even with my nose 2 inches from the screen.

This, at least, I know how to fix: export GDK_SCALE=2 in your login scripts, or run steam as env GDK_SCALE=2 steam. I would have expected that GNOME would automatically handle this for you, but I've never used GNOME on hiDPI.

Note that this pretty much only affects Steam itself; it's an absolute crapshoot how individual games handle it.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

what would have been the approved mechanism for doing a mass update before?
I never used portmaster, and I don't remember - partly because of chemo brain, but also because it's been like 7 years. Best recommendation I have is asking in the IRC channel #freebsd on freenode.

kujeger
Feb 19, 2004

OH YES HA HA

Discussion Quorum posted:

So I started using Linux as my main desktop again (Manjaro, which is new to me; I used Arch long ago but am much more recently familiar with Debian/Ubuntu). Overall it's fine I guess, about like I remembered it but graphically a little snazzier.

The big difference is that instead of a piddly 1080p monitor I now have a 27" 4K and 25" 1440p (portrait orientation) and hoo boy is the support for high res/HiDPI/whatever janky. As far as I can tell, Gnome 3 more or less required some sort of workaround by mixing integer scaling in the Gnome settings with command-line xrandr fuckery. Cinnamon is proving to handle it better, but the display settings manager is a little buggy. Also, enabling HiDPI somehow halves the size of the Steam window (even after a restart), making it difficult to read even with my nose 2 inches from the screen.

So I guess my question is, assuming I've used and am reasonably happy with just about any desktop environment (KDE, Gnome, Cinnamon, MATE, XFCE - I could even switch to Ubuntu and use Unity for all I care) - which one will I hate the least with a 4K main display? Any DE-independent multi-monitor/multi-DPI tools I should try out?

e: currently I'm mostly making it work by just changing the UI fonts where I can and dealing with the apps that are too special to follow along, like Firefox or Steam

Just setting scaling to 200% in GNOME 3 (Settings -> Displays, iirc) did all of this for me; even Steam looks good and proper.

For separate scaling on multiple monitors you need to run GNOME in a Wayland session, if I remember right.

Computer viking
May 30, 2011
Now with less breakage.

On the FreeBSD upgrading note, I've had some luck with using synth to clean things up. You'll need ports or pkg to work well enough to install ports-mgmt/synth, of course. Then do "synth configure" and flip the switch to use packages when available, do "synth upgrade-system", and wait a long while.

It still sometimes breaks if things get too confusing or if you have incompatible port configurations saved - but the error messages are mostly readable.
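
I.e. roughly this, assuming pkg works well enough to bootstrap it:
code:
pkg install ports-mgmt/synth   # or build it from ports
synth configure                # flip the "use prebuilt packages" option on
synth upgrade-system           # then wait a long while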

acetcx
Jul 21, 2011
I'm running Fedora 32 on my Dell XPS 13 7390 laptop (you guys have already helped me out with it earlier in the thread, thanks again!) and I've run into something strange.

A week or so ago I realized it was running a bit hot at idle and discovered that one of the CPU cores was maxed out running kworker/kacpid and kworker/kacpi_notify. I don't know how long it's been doing this. I did some searching and found out that this is due to a hardware flaw (i.e. it would be doing the same thing regardless of OS) and that I should isolate the offending ACPI interrupt by running:

pre:
grep . -r /sys/firmware/acpi/interrupts/ | sort
which led me to:

pre:
/sys/firmware/acpi/interrupts/gpe6F:11344695     STS enabled      unmasked
which I disabled by running:

pre:
echo disable | sudo tee /sys/firmware/acpi/interrupts/gpe6F
which fixed it! I also checked if my BIOS needed to be updated but it was already up to date.

My question is what the heck is going on and what are the consequences of disabling an ACPI interrupt like that?
