Rocko Bonaparte
Mar 12, 2002

Every day is Friday!

SlapActionJackson posted:

For AMD GPUs, users need to be in the 'video' group to access the hardware.

The card is from Nvidia, but I saw at least one place claiming the group is needed for them too. Either way, adding it didn't change the situation. Some progress towards understanding: I found that if I logged in normally with the account in question, glxgears worked fine with or without the group. I couldn't get glxgears to run if I su'd to that account from another one; in particular, it couldn't open :0.0.

This is mutating into a DevOps kind of question specifically, but I'll run with it here a little longer. I am trying to get a Gitlab runner to kick off tests that involve spinning up a VirtualBox VM. Vagrant's in the middle, but it looks like it's passing back an error from VirtualBox about not having 3D enabled; I don't really think Vagrant is the problem (this time). So I have the configuration set up not to delete the staged setup, and I can poke it with a stick. If I'm logged in as the Gitlab runner account in a graphical session, it can launch just fine and dandy. However, I still get errors from VirtualBox about lacking 3D from the gitlab-runner daemon. To launch that daemon, I have to kick it off as root; it looks like it then uses the Gitlab runner account to actually do things. All the files it is generating have that account owning them, for example. Root can also run these just fine without a complaint about 3D. So it has something to do with a daemon doing all this, and I have no real idea what exactly.
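A plausible culprit, though nothing in the thread confirms it: the daemon has no X display, while a graphical session does, and VirtualBox's 3D support wants one. If the runner is launched as a systemd service, a drop-in along these lines (the display number and paths are examples, not taken from the thread) would hand the daemon the session's display and X authority:

```ini
# Hypothetical drop-in: /etc/systemd/system/gitlab-runner.service.d/display.conf
# The DISPLAY and XAUTHORITY values must match the actual logged-in session.
[Service]
Environment=DISPLAY=:0.0
Environment=XAUTHORITY=/home/gitlab-runner/.Xauthority
```

After a `systemctl daemon-reload` and a restart of the service, the runner's child processes inherit the display the same way a shell in the graphical session does.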


Not Wolverine
Jul 1, 2007

VictualSquid posted:

The times I had similar things happening it was some kind of hard-drive access problem. Maybe you should try a system monitor that displays disk access, also what filesystem are you using?
I might start a thread in tech support, but I think you may have pinpointed the problem. This board has had some oddities with the disks in the past. I used to run Windows 10 only on it (it's now dual boot but hasn't seen 10 in months), and back in the days of 10 I would come back every few days to a completely non-responsive, comatose PC, like it went to sleep but never woke up. I tried the power button and the reset button; nothing would work but pulling the plug. Then the BIOS wouldn't list the OS drive until I shut the machine down, unplugged the OS disk, rebooted, and turned it back off to plug the disk back in, after which life continued as if nothing ever happened. Either Intel made some crappy SATA chips (it's an Asus P5QPL-AM; I think I read this chipset is a little buggy) or my board is just going bad. I guess it's time to dig out another case and transplant everything from mATX to ATX. Thanks!

Not Wolverine fucked around with this message at 21:14 on Sep 19, 2019

taqueso
Mar 8, 2004


:911:
:wookie: :thermidor: :wookie:
:dehumanize:

:pirate::hf::tinfoil:


That was my plan, just hoping there might be a nice tool with error handling etc before I do it myself.

Total Meatlove
Jan 28, 2007

:japan:
Rangers died, shoujo Hitler cried ;_;

Mr Shiny Pants posted:

I am doing the exact same thing right now ( TV headend actually ), but my tuner does not show up, any tips?

Edit: for me it was the permissions on the /dev/dvb directory. I just did a chmod 777 on it. It is just the capture card so the security was not that much of a concern. Kodi works like a charm now.

Thank you for updating with this.


To pay that forward, for people like me who are lazy and have traefik:latest set as their docker image: the move from 1.7 to 2.0 broke a lot of people's poo poo and was the cause of a lot of annoyance.

Mr Shiny Pants
Nov 12, 2012
I have a new one: ever since updating from Kubuntu 18 to 19, my VM has been wonky with its networking. Turns out iptables is in the way. I am not clear how this works in Ubuntu, but I am hesitant to just change things willy-nilly.

Are AppArmor and virsh supposed to handle this transparently, i.e. update iptables? I kept my older virsh AppArmor config when updating, and it might have changed in the new version.

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Anyone know how I can tell in which version a specific kernel patch was included?

When I run "perf record -b ..." on a Ryzen 2700X it spits out

perf posted:

error:
cycles: PMU Hardware doesn't support sampling/overflow-interrupts. Try 'perf stat'

I was searching for a better explanation of this error and came across this article, which links to a patch: https://www.phoronix.com/scan.php?page=news_item&px=AMD-Zen-PMU-Events-Linux
The patch was submitted *somewhat* recently (just over a year ago). Is there a reason to suspect it would not be included in kernel 5.0.0-27-generic from Ubuntu bionic?

Or am I misunderstanding and that patch is not actually related to the message I'm seeing?

Mr Shiny Pants
Nov 12, 2012

peepsalot posted:

Anyone know how I can tell in which version a specific kernel patch was included?

When I run "perf record -b ..." on a Ryzen 2700X it spits out


I was searching for a better explanation of this error and came across this article, which links to a patch: https://www.phoronix.com/scan.php?page=news_item&px=AMD-Zen-PMU-Events-Linux
The patch was submitted *somewhat* recently (just over a year ago). Is there a reason to suspect it would not be included in kernel 5.0.0-27-generic from Ubuntu bionic?

Or am I misunderstanding and that patch is not actually related to the message I'm seeing?

One ghetto way would be to grep the source for a specific text.

other people
Jun 27, 2004
Associate Christ
Find the git ID of the commit. Find it in Linus's GitHub web view or with "git log --oneline | grep SUBJECT"

Then with a clone of the repo you can do "git tag --contains ID" to see all the versions which include that commit.

Or a lazier way is to grep the kernel package changelog for your distro. For an RPM-based distro, something like "rpm -q --changelog kernel-$(uname -r) | grep SUBJECT".

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

other people posted:

Find the git ID of the commit. Find it in linus's github web or "git log --oneline | grep SUBJECT"

Then with a clone of the repo you can do "git tag --contains ID" to see all the versions which include that commit.
Thanks this was a great help.

Unfortunately looks like it didn't make it into 5.0 :/
code:
$ git log --oneline | grep -i PMU | grep -i AMD
...
98c07a8f74f8 perf vendor events amd: perf PMU events for AMD Family 17h
...

$ git tag --contains 98c07a8f74f8
v5.1
v5.1-rc2
v5.1-rc3
v5.1-rc4
v5.1-rc5
v5.1-rc6
v5.1-rc7
v5.2
v5.2-rc1
v5.2-rc2
v5.2-rc3
v5.2-rc4
v5.2-rc5
v5.2-rc6
v5.2-rc7
v5.3
v5.3-rc1
v5.3-rc2
v5.3-rc3
v5.3-rc4
v5.3-rc5
v5.3-rc6
v5.3-rc7
v5.3-rc8
Another separate question: When I look at a particular *single threaded* program in htop, I see %CPU typically slightly above 100%, up to around 104%. Where would the extra 4% come from on a single threaded app? Is it purely measurement error?
I thought maybe it was scheduler overhead or something, so I also enabled STIME and UTIME which read about 40s for STIME, while the UTIME is at 14h, so not even close to a percent there. Any other ideas?

Edit: I guess it is just measurement error that averages out over time, it does seem to fluctuate slightly below 100% also. I increased the "delay" setting in ~/.config/htop/htoprc from 15 to 50 (tenths of a second) and after that it rarely goes up to 101% now.

peepsalot fucked around with this message at 20:13 on Sep 21, 2019

feedmegin
Jul 30, 2008

I mean I guess the program could be using some library that spins up threads behind the scenes, too.

BlankSystemDaemon
Mar 13, 2009



Paul MaudDib posted:

:words: about problems, although there aren't 99 of them
Are you hitting I/O operation limits for your spinning rust?
As far as system monitoring tools that others suggested, ZFS has 'zpool iostat' and FreeBSD has 'gstat' which may reveal interesting data to support the above hypothesis.

Otherwise, it's at the point where I think asking in #FreeBSD on FreeNode would be a better idea - because while I do like to help where I can, I've had a couple of bad weeks of chronic pain, and painkillers basically mean fugue-state, which isn't conducive to troubleshooting.
If you do join, feel free to highlight me and I'll try to poke some people who might be of more help.

Not Wolverine
Jul 1, 2007
My NAS is an Xubuntu-based PC; for storage I have an mdadm RAID 5 array of four 2TB hard drives. Locally, this uses ext4 and reports 6TB as expected. If it helps any, the mount line is /dev/md0 on /mnt/vault type ext4 (rw,relatime,discard,stripe=384). This is shared using Samba 4.7.6-Ubuntu. Today when I was trying to copy some files over from a Windows 10 PC, Windows reported the drive was full, but Windows also reports the drive is only at 2TB capacity. Is this likely an issue with Windows, or a setting on the server?

Similarly, if I want to switch this server from Xubuntu to FreeNAS, would it be possible for FreeNAS to import the existing ext4 array? I am skeptical since it was created on Linux using ext4, and to my knowledge FreeNAS is BSD with an obsession for ZFS. On my old hardware with only 4GB of RAM I think I did not have enough RAM for a recommended ZFS setup, but now I have 8GB and can get more if necessary. If anything, the main reason I don't want to switch to FreeNAS is simply because the CPU is one of my better ones (a 10 year old Phenom...) and I want to keep it available and easy to use for video encoding if necessary, which is why I chose to go with Xubuntu.

Computer viking
May 30, 2011
Now with less breakage.

The combination of "ext4" and "can't create files but there is space left" is sort of suggestive. How does df --inodes look? (See https://www.ctrl.blog/entry/how-to-all-out-of-inodes.html )
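To put a number on that suggestion, here's a quick sketch that extracts just the IUse% column (using / as a stand-in for the /mnt/vault mount from the post; near 100% means file creation fails even with free blocks):

```shell
# Pull the IUse% figure out of df --inodes; the fifth column of the
# second output line is inode usage in percent. Near 100% = inode exhaustion.
iuse=$(df --inodes / | awk 'NR==2 {gsub("%", "", $5); print $5}')
echo "inode use: ${iuse}%"
```

On an ext4 volume full of large media files, inode usage is usually tiny; a value near 100 would point straight at the cause Computer viking describes.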

As for FreeBSD/FreeNAS, there are drivers to read (maybe even write) ext 2/3/4, but I wouldn't rely on them as a permanent solution; you'd kind of have to backup/restore.

BlankSystemDaemon
Mar 13, 2009



FreeBSD's ext implementation has been updated and supports reading and writing ext4 now, although journaling and encryption support is still missing.
Also, fusefs has been updated (and can now be dtraced properly).

Not Wolverine
Jul 1, 2007
It doesn't appear to be an inode issue, df --inodes says only 1% of the inodes are used, which makes sense considering Kubuntu only reports 119 files on my NAS. I even tried copying over another file from Kubuntu and I did not get a drive full error message, Kubuntu even reports 3TB are free. So far, everything I can see is suggesting this issue is only between my Windows 10 PC and the Xubuntu server, I'm simply not sure where to start looking.

Computer viking
May 30, 2011
Now with less breakage.

It could also be a permission issue - I wouldn't trust the error messages that make it through to be entirely correct all the time. Make a 0777 folder on the server and see if you can write to that, maybe?

(Or an selinux thing - but is that enabled and restrictive by default on Ubuntu & derivatives?)

Truga
May 4, 2014
Lipstick Apathy
ubuntus don't even work with selinux iirc

Not Wolverine
Jul 1, 2007
Good (and bad) news - I am no longer encountering the issue on Windows. The only thing I can think of is that last night my server had an update; I didn't look at the details except that it required a restart, so it must have been something worthwhile. The downside is this just makes me really want to switch over to FreeNAS, except I used all of my large hard drives to create the NAS, so backing up is going to be a pain in the rear end. I have just over 2TB of files on the NAS right now, and I have 2x 1TB hard drives and 3 or 4 500GB hard drives, but I am finding none of my hard drives are empty. I guess the next step is to collect up all the data on the NAS, remove the duplicates, then see how much crap I have hoarded and try to determine whether or not I have enough hard drive space to back up everything. If anything, the biggest issue in this ordeal is, ironically, hard drive size. I am considering buying a 4TB hard drive just to have a backup, except I only have one PC with a UEFI firmware which can read a 4TB drive, and it is not my NAS. Similarly, if I want to upgrade my NAS size in the future, I assume that because it is BIOS based I won't be able to use drives larger than 2TB - or does FreeNAS have a way to work around the 2TB limit on BIOS?

VictualSquid
Feb 29, 2012

Gently enveloping the target with indiscriminate love.
The 2TB limit on BIOS booted computers is only for the boot drive. I never had problems with using 3 or 4 TB GPT drives together with a small SSD on BIOS computers.

BlankSystemDaemon
Mar 13, 2009



VictualSquid posted:

The 2TB limit on BIOS booted computers is only for the boot drive. I never had problems with using 3 or 4 TB GPT drives together with a small SSD on BIOS computers.
Unfortunately it depends on implementation details; I have at least one motherboard on my shelf right now that has 1) a weird dual-BIOS/UEFI implementation with two separate chips and a jumper to select between them, 2) only half a UEFI implementation since it doesn't have an UEFI shell, nor a UEFI boot manager, nor UEFI secure boot, 3) can't boot from +2TB drives irrespective of what settings are changed and/or what partition table is used.
Isn't firmware great?

mystes
May 31, 2006

D. Ebdrup posted:

Unfortunately it depends on implementation details; I have at least one motherboard on my shelf right now that has 1) a weird dual-BIOS/UEFI implementation with two separate chips and a jumper to select between them, 2) only half a UEFI implementation since it doesn't have an UEFI shell, nor a UEFI boot manager, nor UEFI secure boot, 3) can't boot from +2TB drives irrespective of what settings are changed and/or what partition table is used.
Isn't firmware great?
In the worst case can't you just boot off of an old usb drive you have lying around or something?

Computer viking
May 30, 2011
Now with less breakage.

Our old file server at work was a retired Dell optiplex tower from 2008 or so, with every internal port and two eSATA boxes filled with 4TB drives. It booted from a usb stick, since there was no practical way to boot FreeBSD from a ZFS raid-z pool, and no space for a separate boot drive. Apart from the eSATA card dying once, it worked fine for years.

More relevant to this: the disks - and especially the eSATA enclosures - looked wrong in the BIOS, but showed up correctly when I managed to boot into FreeBSD, since the relevant drivers talk directly to the SATA controller.

Mr Shiny Pants
Nov 12, 2012
So I finally got my KVM Win10 VM updated to 1903. There were some additions to KVM to make it more Windows-friendly, it seems, which my old config (1709) didn't need but the newer versions do; otherwise I'd get System_Thread_Exception.

Even the GPU starts; I was really dreading Error 43, but a 12-character alphanumeric string in the BIOS settings seems to have cleared that up.

I found out by copying the original qcow2 and hanging it under a newly created Win10 VM. I did the update, made sure Windows booted, and set it back under my old config, which broke it. After that, some trial and error on which settings I really needed, and it works.
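The "12 character alpha numeric string" lines up with the widely documented libvirt workaround for Error 43: the Nvidia guest driver bails when it detects a hypervisor, and hiding the KVM signature plus setting a fake Hyper-V vendor ID gets around it. A sketch of the relevant domain XML (the value below is an arbitrary example, not the string from the post):

```xml
<!-- Inside the libvirt domain definition: hide the KVM signature and
     present an arbitrary 12-character Hyper-V vendor ID so the Nvidia
     driver doesn't refuse to start with Error 43. -->
<features>
  <hyperv>
    <vendor_id state='on' value='whatever1234'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```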

Mr Shiny Pants fucked around with this message at 21:25 on Sep 25, 2019

Volguus
Mar 3, 2009
I have a question about LVM:
A volume group, as well as a logical volume, can span multiple physical disk drives and partitions. How does it act when I have one logical volume over multiple partitions? Is it like a RAID 0, where I would get the space and the speed but if one drive dies everything goes with it? Or is LVM smarter than that, so that if one drive dies it only loses whatever was on it?

Toalpaz
Mar 20, 2012

Peace through overwhelming determination
I just wanted to mention because I'm in a poo poo posting mood that I hate apt and miss arch repos and pacman. I think next time my computer has a fatal error or something I'll go with manjaro for the quick installation and pacman + arch repos + yay cause I never learned how to make packages properly.

Hollow Talk
Feb 2, 2014

Toalpaz posted:

I just wanted to mention because I'm in a poo poo posting mood that I hate apt and miss arch repos and pacman. I think next time my computer has a fatal error or something I'll go with manjaro for the quick installation and pacman + arch repos + yay cause I never learned how to make packages properly.

Or just choose a real distro that builds things properly and maybe doesn't tend to produce fatal errors. :v:

Yaoi Gagarin
Feb 20, 2014

I want to write a program/script to look at some process trees, what interfaces can I use to get information like "ps -t sometty --forest"? I assume I have to muck around in /proc to figure out the parent/child relationships, but how do I constrain to just processes under a specific tty?
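For reference, a minimal sketch of the /proc side, using field numbers from proc(5): PPID is field 4 of /proc/PID/stat, and the controlling tty's device number is field 7 (tty_nr), which is what you'd compare against the target tty to reproduce the -t filter:

```shell
# /proc/PID/stat is "pid (comm) state ppid pgrp session tty_nr ...";
# comm can contain spaces, so parse from the last ')' onward.
stat_line=$(cat /proc/self/stat)
rest=${stat_line##*) }
set -- $rest           # $1=state $2=ppid $3=pgrp $4=session $5=tty_nr
ppid=$2
tty_nr=$5
echo "ppid=$ppid tty_nr=$tty_nr"
```

Walking every /proc/[0-9]* directory this way, keeping entries whose tty_nr matches, and linking pid to ppid gives the same tree that ps --forest draws.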

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Volguus posted:

I have a question about LVM:
A volume group, as well as a logical volume, can span multiple physical disk drives and partitions. How does it act when I have one logical volume over multiple partitions? Is it like a RAID 0, where I would get the space and the speed but if one drive dies everything goes with it? Or is LVM smarter than that, so that if one drive dies it only loses whatever was on it?

By default they are concatenated: extents 1 to X are on the first drive, extents X+1 to Y are on the second drive, and so on. But it is possible to create a logical volume with built-in RAID; see 'man lvmraid'. You can use 'lvdisplay --maps' to show how the extents are laid out across the different drives. In theory only the data on the failed drive is lost, but that requires that an intact filesystem table or superblock can be found on the remaining drives, and that the files aren't fragmented across several drives.
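A hedged sketch of the commands involved (vg0, the volume names, and the sizes are made up for illustration; this needs an existing volume group and root, so treat it as a transcript, not something to paste blindly):

```shell
# Default allocation is linear (concatenated) across the VG's drives:
lvcreate -L 100G -n lv_linear vg0
# Explicit RAID1 instead, so a single-drive failure loses nothing:
lvcreate --type raid1 -m 1 -L 100G -n lv_mirror vg0
# Show which physical drives each extent range of a volume sits on:
lvdisplay --maps /dev/vg0/lv_linear
```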

Volguus
Mar 3, 2009

Saukkis posted:

By default they are concatenated: extents 1 to X are on the first drive, extents X+1 to Y are on the second drive, and so on. But it is possible to create a logical volume with built-in RAID; see 'man lvmraid'. You can use 'lvdisplay --maps' to show how the extents are laid out across the different drives. In theory only the data on the failed drive is lost, but that requires that an intact filesystem table or superblock can be found on the remaining drives, and that the files aren't fragmented across several drives.

Oh, thanks. So, it's not like RAID0 but you could still lose everything. Sigh. I just want to eat the cake and have it too, is it that much to ask?

Toalpaz
Mar 20, 2012

Peace through overwhelming determination

Hollow Talk posted:

Or just choose a real distro that builds things properly and maybe doesn't tend to produce fatal errors. :v:

I'm using Ubuntu but I'm still certain it's only a matter of time until I do something that breaks it lol. It's mostly a post complaining about having to relearn a package manager. Actually I miss the arch wiki too because it is really helpful.

Computer viking
May 30, 2011
Now with less breakage.

This is fine
Mixed media. Photograph of digital work.
Computer viking (2019)

pseudorandom name
May 6, 2007

Volguus posted:

Oh, thanks. So, it's not like RAID0 but you could still lose everything. Sigh. I just want to eat the cake and have it too, is it that much to ask?

TBF what you're asking for is weird.

Janitor Prime
Jan 22, 2004

PC LOAD LETTER

What da fuck does that mean

Fun Shoe

Saukkis posted:

By default they are concatenated: extents 1 to X are on the first drive, extents X+1 to Y are on the second drive, and so on. But it is possible to create a logical volume with built-in RAID; see 'man lvmraid'. You can use 'lvdisplay --maps' to show how the extents are laid out across the different drives. In theory only the data on the failed drive is lost, but that requires that an intact filesystem table or superblock can be found on the remaining drives, and that the files aren't fragmented across several drives.

I don't recommend lvmraid for anything more than RAID 0 or RAID 1. It's supposed to support RAID 5 and 6, but the tooling around it sucks, and trying to get it to grow or expand the array was frustrating. I ended up just using mdadm.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
OK... I'm loving baffled by this one. The following is a bash script that is intended to create an output directory if it doesn't exist, clean it of any existing files, then walk every file in the current directory and pass it to ffmpeg to remux a couple of tracks into an output file and force subs on. For when you want to watch subs on a series but it's built as a dub file, or you're slicing a multi-language release into a straight English release, etc.

code:
mkdir -p out; rm out/*; find . -maxdepth 1 -type f | sort | 
while read -r line; do 
ffmpeg -i "$line" -c:v copy -map 0:0 -c:a copy -map 0:2 -c:s copy -map 0:4 -disposition:s:0 default out/"$line"; done;
this works on every other line, I get episode 1, episode 2 errors out. I get episode 3, episode 4 errors out, etc. The error from ffmpeg is that it can't find the input file, and the input file that it thinks it's trying to load is missing a few characters from the front.

The filenames are '[GRP] Series - 01 [anidb-id].mkv' etc. The variable that bash is substituting in when it errors out is apparently 'Series - 02 [anidb-id].mkv', 'eries - 04 [anidb-id].mkv', that kind of thing. It looks like maybe it's choking on the brackets in the [GRP] part... but only on every other line? And it's dropping a variable number of characters from the front?

if I swap the payload line for 'echo "$line"' or 'echo "$line out/line"' the output looks golden so I'm not sure why the actual payload line is behaving differently

Environment is freebsd 12.0-RELEASE with a user running bash.

or maybe the reason it's every other line is because it's the one at the end that's the problem somehow?

Paul MaudDib fucked around with this message at 08:07 on Sep 28, 2019

xtal
Jan 9, 2011

by Fluffdaddy

Paul MaudDib posted:

OK... I'm loving baffled by this one. The following is a bash script that is intended to create an output directory if it doesn't exist, clean it of any existing files, then walk every file in the current directory and pass it to ffmpeg to remux a couple of tracks into an output file and force subs on. For when you want to watch subs on a series but it's built as a dub file, or you're slicing a multi-language release into a straight English release, etc.

code:
mkdir -p out; rm out/*; find . -maxdepth 1 -type f | sort | 
while read -r line; do 
ffmpeg -i "$line" -c:v copy -map 0:0 -c:a copy -map 0:2 -c:s copy -map 0:4 -disposition:s:0 default out/"$line"; done;
this works on every other line, I get episode 1, episode 2 errors out. I get episode 3, episode 4 errors out, etc. The error from ffmpeg is that it can't find the input file, and the input file that it thinks it's trying to load is missing a few characters from the front.

The filenames are '[GRP] Series - 01 [anidb-id].mkv' etc. The variable that bash is substituting in when it errors out is apparently 'Series - 02 [anidb-id].mkv', 'eries - 04 [anidb-id].mkv', that kind of thing. It looks like maybe it's choking on the brackets in the [GRP] part... but only on every other line? And it's dropping a variable number of characters from the front?

if I swap the payload line for 'echo "$line"' or 'echo "$line out/line"' the output looks golden so I'm not sure why the actual payload line is behaving differently

Environment is freebsd 12.0-RELEASE with a user running bash.

or maybe the reason it's every other line is because it's the one at the end that's the problem somehow?

There's probably some funkiness going on with the space/newline/etc, shell quotation or something. Does it work if you change the find command to:
code:
find . -maxdepth 1 -type f -exec ffmpeg -i {}  -c:v copy -map 0:0 -c:a copy -map 0:2 -c:s copy -map 0:4 -disposition:s:0 default out/{} \;

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

xtal posted:

There's probably some funkiness going on with the space/newline/etc, shell quotation or something. Does it work if you change the find command to:
code:
find . -maxdepth 1 -type f -exec ffmpeg -i {}  -c:v copy -map 0:0 -c:a copy -map 0:2 -c:s copy -map 0:4 -disposition:s:0 default out/{} \;

Yeah, that works, thanks. Still wonder what's going on there though. I've bumped into this mystery substitution issue once before on a more complex command where I needed to run gymnastics to extract parts of the filename.

I know, I know, just write a real script.
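For the record, the every-other-line pattern is the classic symptom of a command inside a `while read` loop reading from the same pipe as the loop: ffmpeg reads its stdin (for the interactive q/+/- keys) and swallows the next filename. A sketch with `head` standing in for ffmpeg:

```shell
# Each head -c 5 eats five bytes (one filename plus its newline) from the
# shared pipe, so the loop only sees every other line - the observed failure.
printf 'ep01\nep02\nep03\nep04\n' |
while read -r line; do
    head -c 5 >/dev/null    # stand-in for ffmpeg reading its stdin
    echo "got: $line"
done
# Fix without restructuring the loop: give the inner command its own stdin,
# e.g. ffmpeg -nostdin ... or ffmpeg ... </dev/null
```

This prints "got: ep01" and "got: ep03" only, and partial reads by the inner command explain the filenames with a variable number of characters chopped off the front.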

Paul MaudDib fucked around with this message at 09:03 on Sep 28, 2019

tak
Jan 31, 2003

lol demowned
Grimey Drawer
Use find with -print0:

find <args> -print0 | while IFS= read -r -d '' file; do some/cmd "$file"; done

'/' and the null byte are the only chars that can't be a part of a filename, null delimiting is the way to go
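A self-contained version of that pattern (bash, since read -d and process substitution aren't POSIX), exercised against bracket-and-space names like the ones that broke above:

```shell
# Null-delimited find loop: survives spaces, brackets, even newlines in
# filenames. Process substitution keeps count in the parent shell.
tmp=$(mktemp -d)
touch "$tmp/[GRP] Series - 01.mkv" "$tmp/[GRP] Series - 02.mkv"
count=0
while IFS= read -r -d '' f; do
    count=$((count + 1))
done < <(find "$tmp" -maxdepth 1 -type f -print0)
echo "files seen: $count"
rm -rf "$tmp"
```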

redeyes
Sep 14, 2002

by Fluffdaddy
So I was doing my yearly try-a-distro to see if Linux can replace Windows. I ran into the issue that apparently Linux has no hardware-accelerated web browsers? Playing something like 4K video brings my quad core to its knees. Is this really how it is on Linux?

Volguus
Mar 3, 2009

redeyes posted:

So I was doing my yearly try-a-distro to see if Linux can replace Windows. I ran into the issue that apparently Linux has no hardware-accelerated web browsers? Playing something like 4K video brings my quad core to its knees. Is this really how it is on Linux?

Works fine here.


other people
Jun 27, 2004
Associate Christ
lol how can someone be so bad at computer
