Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

F_Shit_Fitzgerald posted:

So setting shopt -s nullglob will prevent a SNAFU where it starts deleting all jpg files in my system, if I'm understanding correctly? Will do; that's an easy fix. I should have already done that.

Another safeguard is to use full paths: 'mv /home/me/Pictures/temp/*.jpg /home/me/Media/'.
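
For the nullglob route itself, a minimal sketch (using the same paths) might look like:
code:
shopt -s nullglob                      # an unmatched glob expands to nothing, not itself
files=(/home/me/Pictures/temp/*.jpg)
if (( ${#files[@]} )); then            # only call mv if the glob matched something
    mv -- "${files[@]}" /home/me/Media/
fi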


Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
Alternatively, use find
code:
cd /path/to/somewhere
find . -maxdepth 1 -iname "*.jpg" -execdir mv {} Media/ \;

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

Pablo Bluth posted:

Alternatively, use find
code:
cd /path/to/somewhere
find . -maxdepth 1 -iname "*.jpg" -execdir mv {} Media/ \;

Of course that has the danger that if the 'cd' fails, the find will run in whatever directory you happen to be in. The easy option is to run it as 'cd /path/... && find . -iname ...'. The other option is to include 'set -o errexit' in your script.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
Hardcoding the paths in find is probably the 'correct style'.
code:
find /path/to/somewhere -maxdepth 1 -iname "*.jpg" -execdir mv {} /path/to/somewhere/Media/ \;
But cd failing is a good point. You can never have too many checks in a script. (well, you probably can, but I suspect nearly all scripts don't have enough, and hence many have dangerous failure modes lurking in them)
code:
cd /path/to/somewhere
ret=$?
if [ $ret -ne 0 ]; then
    echo "error exit message" >&2
    exit 1
fi
find . -maxdepth 1 -iname "*.jpg" -execdir mv {} Media/ \;

Pablo Bluth fucked around with this message at 13:46 on Mar 23, 2024

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
Yeah, scripts should probably try to use bash strict mode.
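
For reference, a minimal sketch of the usual "unofficial strict mode" preamble:
code:
#!/usr/bin/env bash
set -euo pipefail    # die on errors, on unset variables, and on failures inside pipelines
IFS=$'\n\t'          # optional: stop word-splitting on plain spaces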

xzzy
Mar 5, 2009

Use them ampersands.

cd /tmp/asldjkfalsdf && find -name foo

(but yes, it's better to use the full path as an argument to find, it'll do the same error checking)

I'm not one for making stupidly complex one-liners, but if statements in bash are so ugly and cumbersome that I'll use && unless I need to do extra logic inside the conditional.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
My name is Pablo and I am a Find addict.

Klyith
Aug 3, 2007

GBS Pledge Week

Pablo Bluth posted:

But cd failing is a good point. You can never have too many checks in a script. (well, you probably can, but I suspect nearly all scripts don't have enough, and hence many have dangerous failure modes lurking in them)

See also: someone made a theme for KDE 6 that had some internal scripting and did a whoopsie with rm -rf, which deleted someone's filesystem.

And now there is big drama on the KDE reddit as people suddenly discover that the DE made for anything-goes tweakers might not be vetting the giant pile of user-submitted themes and geegaws. And that global themes which can change your UI into a fully-functional LCARS replica might be, shock and horror, running code!

Toalpaz
Mar 20, 2012

Peace through overwhelming determination
I've loaded the arch/manjaro hibernate/suspend systemd service page 5 times in the past couple of months and been unable to parse it at all. Hibernation is such a useful feature for laptops. Why do I need to check the kernel and like five different places (exaggeration) to make it work?

Phosphine
May 30, 2011

WHY, JUDY?! WHY?!
🤰🐰🆚🥪🦊
I think there's basically no legit use case for a bash script that doesn't set at least -e, -u and probably pipefail. If you do actually need to run a command that might fail and still continue, there are ways to write that which clearly show intent and don't ruin everything else. Same for potentially unset variables. Always do set -eu; it will definitely save your rear end some day if you write more than ten lines of bash in your life.
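
To sketch what those intent-showing forms look like (file and variable names here are just placeholders):
code:
set -euo pipefail

rm -f maybe-missing.log || true       # explicitly allowed to fail under -e
count="${COUNT:-0}"                   # explicit default for a possibly-unset variable under -u
if ! grep -q pattern some-file; then  # status checks inside a conditional don't trip -e
    echo "no match"
fi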

Zorak of Michigan
Jun 10, 2006

xzzy posted:

Use them ampersands.

cd /tmp/asldjkfalsdf && find -name foo

(but yes, it's better to use the full path as an argument to find, it'll do the same error checking)

I'm not one for making stupidly complex one-liners, but if statements in bash are so ugly and cumbersome that I'll use && unless I need to do extra logic inside the conditional.

I generally prefer cd /whatever || { echo "Unable to cd to /whatever"; exit 1; }. It seems more readable, though of course that's idiosyncratic.

Voodoo Cafe
Jul 19, 2004
"You got, uhh, Holden Caulfield in there, man?"

Pablo Bluth posted:

code:
cd /path/to/somewhere
ret=$?
if [ $ret -ne 0 ]; then
    echo "error exit message" >&2
    exit 1
fi



Testing on $? is error-prone; you can branch on the exit status directly:

code:
if ! cd /path/to/somewhere; then
  echo "some error message" >&2
  exit 1
fi
or something like this:

code:
die() {
  echo "$1" >&2
  exit 1
}

cd /path/to/somewhere || die "couldn't change directory"

F_Shit_Fitzgerald
Feb 2, 2017



Here's another one I'm almost embarrassed to ask: I just got an external hard drive for backups that uses NTFS. Online searching yields somewhat conflicting advice about whether I should format this drive (possibly to ext3 or 4) before I use it on Mint, or whether my data* should be fine. I have been interchangeably using USB sticks between Mac, Linux and Windows with no apparent issues, so my instinct is not to worry about it. I thought I'd ask this thread before I did anything (the drive is hooked in but not mounted).


* Stuff like my Music directory and various videos I don't necessarily want cluttering up my Linux machine.

F_Shit_Fitzgerald fucked around with this message at 17:10 on Mar 23, 2024

ziasquinn
Jan 1, 2006

Fallen Rib
it doesn't really matter for backups afaik. it's just read slower by Linux, so you don't wanna be trying to run games off an NTFS drive

that's the overall view I got out of my own searches last time I looked at least. happy to be corrected

cruft
Oct 25, 2007

F_Shit_Fitzgerald posted:

I'm trying to set up a shell script so that if it detects any files of a certain type, those are moved to a separate directory. For example:

code:
if [ -z *.jpg ];
then
  mv *.jpg Media/
fi
When this script is run with 'dummy' jpg files, it throws the error "Line 3: [: a.jpg: binary operator expected". a.jpg is the name of the 'dummy' jpg file I created.

What am I doing wrong? It for sure has to be something really dumb I'm overlooking.

The way I usually deal with this sort of thing is:

code:
ls *.jpg | while read fn; do
  mv "$fn" Media/
done

Klyith
Aug 3, 2007

GBS Pledge Week

F_Shit_Fitzgerald posted:

Here's another one I'm almost embarrassed to ask: I just got an external hard drive for backups that uses NTFS. Online searching yields somewhat conflicting advice about whether I should format this drive (possibly to ext3 or 4) before I use it on Mint, or whether my data* should be fine. I have been interchangeably using USB sticks between Mac, Linux and Windows with no apparent issues, so my instinct is not to worry about it. I thought I'd ask this thread before I did anything (the drive is hooked in but not mounted).


* Stuff like my Music directory and various videos I don't necessarily want cluttering up my Linux machine.

NTFS support is very good these days. AFAIK it's not as "fast" as Linux-native filesystems, but for an external HDD that's a pointless comparison.

If you want to keep using it as a backup drive, and also want it to be usable in other PCs with Windows, leaving it as NTFS is fine. There are a few potential problems (e.g. Linux allows characters in filenames that Windows doesn't) but not many. And Linux just totally ignores the NTFS permissions.


If this drive will be 100% dedicated to Linux, a Linux-native FS has some advantages like ownership & permissions, or maybe btrfs with checksums for integrity. (I kept using NTFS on my backup drive for several months during my switch from Windows to Linux, as the emergency abort parachute. Then I switched to btrfs when I figured out send|receive for backups.)
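
In case anyone wants the shape of that send|receive pattern, a rough sketch (paths are placeholders, run as root):
code:
# read-only snapshot of the subvolume to back up
btrfs subvolume snapshot -r /home /home/.snapshots/home-snap

# stream it to the backup drive (which must itself be btrfs)
btrfs send /home/.snapshots/home-snap | btrfs receive /mnt/backup/

# later runs can be incremental against a previous snapshot via send -p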

F_Shit_Fitzgerald
Feb 2, 2017



Klyith posted:

NTFS support is very good these days. AFAIK it's not as "fast" as Linux-native filesystems, but for an external HDD that's a pointless comparison.

If you want to keep using it as a backup drive, and also want it to be usable in other PCs with Windows, leaving it as NTFS is fine. There are a few potential problems (e.g. Linux allows characters in filenames that Windows doesn't) but not many. And Linux just totally ignores the NTFS permissions.


If this drive will be 100% dedicated to Linux, a Linux-native FS has some advantages like ownership & permissions, or maybe btrfs with checksums for integrity. (I kept using NTFS on my backup drive for several months during my switch from Windows to Linux, as the emergency abort parachute. Then I switched to btrfs when I figured out send|receive for backups.)

ziasquinn posted:

it doesn't really matter for backups afaik. it's just read slower by Linux, so you don't wanna be trying to run games off an NTFS drive

that's the overall view I got out of my own searches last time I looked at least. happy to be corrected

Cool; thanks! I decided to just go ahead and reformat to ext4 and save myself the trouble of chown'ing files from root->my_username. Using Disks on Mint, it was extremely easy, thank god.

cruft posted:

The way I usually deal with this sort of thing is:

code:
ls *.jpg | while read fn; do
  mv "$fn" Media/
done

Huh. OK, I'll try that. Thanks!

Phosphine
May 30, 2011

WHY, JUDY?! WHY?!
🤰🐰🆚🥪🦊
Also also, maybe overkill for home use but if you ever write a bash script at work (hello half my career), do yourself a favour and run shellcheck.

It will catch, and tell you how to fix, basically every common error known to man.
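
Running it is as simple as pointing it at the script (name here is just a placeholder):
code:
shellcheck backup.sh
Every finding comes with an SC-numbered code you can look up, the classic being SC2086 for unquoted variable expansions.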

mawarannahr
May 21, 2019

Phosphine posted:

I think there's basically no legit use case for a bash script that doesn't set at least -e, -u and probably pipefail. If you do actually need to run a command that might fail and still continue, there are ways to write that which clearly show intent and don't ruin everything else. Same for potentially unset variables. Always do set -eu; it will definitely save your rear end some day if you write more than ten lines of bash in your life.

-u sucks, it breaks $_, and -e sucks for dying silently unless you set a trap. just catch all the errors with ||.
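
For what it's worth, a minimal sketch of the trap route, so -e at least says where it died:
code:
set -e
trap 'echo "died at line $LINENO" >&2' ERR   # single quotes so $LINENO expands when the trap fires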

cruft
Oct 25, 2007

Phosphine posted:

bash script

Am I the only one still alive who cares about scripts that run with Bourne Shell (dash for Linux people)?

Paging BlankSystemDaemon

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
We should all be using Perl for our scripting needs....

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

cruft posted:

The way I usually deal with this sort of thing is:

code:
ls *.jpg | while read fn; do
  mv "$fn" Media/
done

the safest way to do this is

code:
find . -name '*.jpg' -print0 | xargs -0 mv -t Media/
this handles files with whitespace in the names (a classic shell script failure mode), and doesn’t run into trouble with maximum shell command or argument count limits

E: and if you’re moving it to network storage or something else I/O bound, running the commands in parallel with -P can speed things up without having to manage background jobs and waiting for everything and all those headaches
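
Something like this, say (the batch size and job count are just illustrative):
code:
find . -name '*.jpg' -print0 | xargs -0 -n 16 -P 4 mv -t Media/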

Subjunctive fucked around with this message at 19:10 on Mar 23, 2024

cruft
Oct 25, 2007

Subjunctive posted:

the safest way to do this is

code:
find . -name '*.jpg' -print0 | xargs -0 mv -t Media/
this handles files with whitespace in the names (a classic shell script failure mode), and doesn’t run into trouble with maximum shell command or argument count limits

why not

code:
find . -name '*.jpg' -exec mv '{}' Media/ \;
Actually, OP was using *.jpg, so to recreate that functionality you will need

code:
find . -maxdepth 1 -name '*.jpg' -exec mv '{}' Media/ \;
e:

Subjunctive posted:

E: and if you’re moving it to network storage or something else I/O bound, running the commands in parallel with -P can speed things up without having to manage background jobs and waiting for everything and all those headaches

But if you're moving across filesystems onto a FAT, running in parallel will increase fragmentation and slow down read times!

cruft fucked around with this message at 19:13 on Mar 23, 2024

cruft
Oct 25, 2007

In closing, OP, you should just write it in Python.

cruft
Oct 25, 2007

cruft posted:

In closing, OP, you should just write it in Python.

code:
import glob
import os
import shutil

for fn in glob.glob("*.jpg"):
    shutil.move(fn, os.path.join("Media", fn))

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

cruft posted:

why not

code:
find . -name '*.jpg' -exec mv '{}' Media/ \;

because you’ll run a mv for each file instead of batching them; probably doesn’t matter here, but in general batching helps when the program does more work per invocation (like a new connection for scp)

it’s also easier to adapt my form to use an existing file of filenames, which I have often found useful when I want to filter out some part of the list using grep (I can never remember the proper way to use -a/-o/-!)
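
e.g. a sketch of that filtering step, keeping everything NUL-delimited ('draft' is just an example pattern):
code:
find . -name '*.jpg' -print0 | grep -zv 'draft' | xargs -0 mv -t Media/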

cruft
Oct 25, 2007

Subjunctive posted:

because you’ll run a mv for each file instead of batching them; probably doesn’t matter here, but in general batching helps when the program does more work per invocation (like a new connection for scp)

it’s also easier to adapt my form to use an existing file of filenames, which I have often found useful when I want to filter out some part of the list using grep (I can never remember the proper way to use -a/-o/-!)

LOL, okay, you're not wrong. I was going to call you out for splitting hairs until I re-read my posts on this page.

The Linux Questions Thread: Ultimate Bikeshedding

Subjunctive
Sep 12, 2006

✨sparkle and shine✨

cruft posted:

LOL, okay, you're not wrong. I was going to call you out for splitting hairs until I re-read my posts on this page.

The Linux Questions Thread: Ultimate Bikeshedding

we learned these lessons the hard way, one mutilated directory tree or flooded /var/spool/mail/root at a time! we should pass them on!

hifi
Jul 25, 2012

I use parallel for anything more complicated than picking my nose
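
For the running example, a rough GNU parallel equivalent (assuming it's installed) would be:
code:
find . -maxdepth 1 -name '*.jpg' -print0 | parallel -0 mv {} Media/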

mawarannahr
May 21, 2019

cruft posted:

Am I the only one still alive who cares about scripts that run with Bourne Shell (dash for Linux people)?

Paging BlankSystemDaemon

No, there are tons of projects on GitHub that proudly advertise 100% POSIX compatibility. I have to use POSIX sh for some container stuff. Shellcheck is the way to go.
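
It'll even police the POSIX-ness for you:
code:
shellcheck --shell=sh script.sh    # flags bashisms like [[ ]] or arrays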

BrainDance
May 8, 2007

Disco all night long!

I feel ashamed of this. I know my network setup is stupid, and there are better ways to do it.

But I don't wanna redo it, and if it works it works, I guess? Sure, curl didn't work, but I found another stupid solution that didn't require redoing the whole thing. I just installed tinyproxy, made it only accessible from the local machine, and then curl --proxy "http://127.0.0.1:8888" "https://api.ipify.org" suddenly works, for mysterious reasons. I don't know why tinyproxy works on the same interface when curl doesn't, but oh well, everything works now even if it is stuck together with duct tape and chewed gum.

The script works, my DNS stuff gets updated every time my ISP changes my IP, the jellyfin server works, and I have actual reliable uptime now that isn't dependent on me manually updating my DNS entries. I dunno, it's not stupid if it works, right?

The one thing I need to figure out, though (it's minor): there's a rule that port 8096 always goes through the no-VPN interface, so that everyone can access my jellyfin server. But I've also got a friend in another country running his own jellyfin server that I want to connect to, and that needs to go through the VPN interface. It doesn't, though, because port 8096. I could change the port mine is on, but people already have it bookmarked. I'm not sure what to do here to be able to use the VPN interface to connect to his jellyfin server.

cruft
Oct 25, 2007

https://arstechnica.com/security/2024/03/backdoor-found-in-widely-used-linux-utility-breaks-encrypted-ssh-connections/

backdoor added by an xz contributor with 2+ years of contributions

that sucks

xzzy
Mar 5, 2009

Red Hat gave the CVE a 10.0 score too, so yeah, patch.

The fact that it was discovered before the latest release trickled far and wide will really limit the damage though.

jkq
Nov 26, 2022

xzzy posted:

Red Hat gave the CVE a 10.0 score too, so yeah, patch.

The fact that it was discovered before the latest release trickled far and wide will really limit the damage though.

Yeah, Debian stable is on *checks system* ... 5.4.1. Yay?

AlexDeGruven
Jun 29, 2007

Watch me pull my dongle out of this tiny box


"There are no known reports of those versions being incorporated into any production releases for major Linux distributions, but both Red Hat and Debian reported that recently published beta releases used at least one of the backdoored versions—specifically, in Fedora 40 and Fedora Rawhide and Debian testing, unstable and experimental distributions. A stable release of Arch Linux is also affected. That distribution, however, isn't used in production systems."

At least everything isn't on fire this time. Also, lol@the Arch statement. I know there's at least one person flogging it to management in every org.

cruft
Oct 25, 2007

AlexDeGruven posted:

At least everything isn't on fire this time.

Amen.

I work in incident response. This scenario keeps me awake at night. I'm really worried about the one nobody's found yet.

Klyith
Aug 3, 2007

GBS Pledge Week
For fellow users of arch-likes, if you have a default setup for the pacman cache, you can downgrade like so:
code:
sudo pacman -U file:///var/cache/pacman/pkg/xz-5.4.6-1-x86_64.pkg.tar.zst
sudo pacman -U file:///var/cache/pacman/pkg/lib32-xz-5.4.6-1-x86_64.pkg.tar.zst
https://wiki.archlinux.org/title/downgrading_packages
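
If you're not sure which version you're on, query the installed packages first:
code:
pacman -Q xz lib32-xz    # 5.6.0 and 5.6.1 are the backdoored releases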


OTOH if you don't have sshd running, I don't think there is anything to worry about? So you have to be in the small intersection of people who:
1) use an unstable / rolling release distro
2) have enabled sshd
3) are passing ssh through your home firewall oh god why
to be critically vulnerable.


AlexDeGruven posted:

Also, lol@the Arch statement. I know there's at least one person flogging it to management in every org.

oh god why

Truga
May 4, 2014
Lipstick Apathy

jkq posted:

Yeah, Debian stable is on *checks system* ... 5.4.1. Yay?

sid is now on liblzma5:amd64 5.6.1+really5.4.5-1 lmao

Volguus
Mar 3, 2009
Fedora 40 just downgraded xz, though from the original report it probably wasn't vulnerable. But you never know.

pre:
 xz                    x86_64   1:5.4.6-3.fc40   updates-testing    2.0 MiB
   replacing xz        x86_64   5.6.0-3.fc40     updates-testing    2.1 MiB
 xz-devel              x86_64   1:5.4.6-3.fc40   updates-testing  255.8 KiB
   replacing xz-devel  x86_64   5.6.0-3.fc40     updates-testing  255.7 KiB
 xz-libs               i686     1:5.4.6-3.fc40   updates-testing  229.2 KiB
   replacing xz-libs   i686     5.6.0-3.fc40     updates-testing  230.5 KiB
 xz-libs               x86_64   1:5.4.6-3.fc40   updates-testing  209.8 KiB
   replacing xz-libs   x86_64   5.6.0-3.fc40     updates-testing  211.1 KiB


Subjunctive
Sep 12, 2006

✨sparkle and shine✨

cruft posted:

Amen.

I work in incident response. This scenario keeps me awake at night. I'm really worried about the one nobody's found yet.

Keep looking!
