Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

evol262 posted:

Can you ping out with static configuration? I'd guess that ip forwarding is off or the -m physdev --physdev-is-bridged rule got unset

I haven't had a chance to mess with tcpdump yet, but I can ping out when I set a static ip. I cannot ping the vm from my LAN. I'm not sure what either of those things you mentioned is referring to. I'll go Google around and see what I can find.
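
(For reference, the two things evol262 mentioned can be checked with something like this; a sketch, and the exact rule will vary:)

code:
sysctl net.ipv4.ip_forward              # 1 = IP forwarding is on
iptables -S FORWARD | grep -i physdev   # is there a --physdev-is-bridged ACCEPT rule?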


RFC2324
Jun 7, 2012

http 418

Thermopyle posted:

I haven't had a chance to mess with tcpdump yet, but I can ping out when I set a static ip. I cannot ping the vm from my LAN. I'm not sure what either of those things you mentioned is referring to. I'll go Google around and see what I can find.

Did your VMs get set to a NAT network with no DHCP instead of the actual bridge?
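
(Assuming libvirt, which network a guest NIC actually sits on can be checked roughly like this; the domain name is a placeholder:)

code:
virsh net-list --all     # which libvirt networks exist and are active
virsh domiflist myguest  # is the NIC on the bridge (br0) or the NAT "default" network?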

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Thermopyle posted:

I haven't had a chance to mess with tcpdump yet, but I can ping out when I set a static ip. I cannot ping the vm from my LAN. I'm not sure what either of those things you mentioned is referring to. I'll go Google around and see what I can find.
What do your iptables rules look like?

evol262
Nov 30, 2010
#!/usr/bin/perl
I'd just try this for a start:

https://docs.fedoraproject.org/en-U...th_libvirt.html

other people
Jun 27, 2004
Associate Christ
Just do the tcpdumps. Until you identify where the flow of traffic is disrupted you are just guessing.

>:(

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

other people posted:

Just do the tcpdumps. Until you identify where the flow of traffic is disrupted you are just guessing.

>:(

Yessir! :D

Unfortunately for everyone who is waiting with bated breath, I won't get time to work on it until tomorrow. Well, I've got time now but I'm tired and I'd rather post on SA and look at kitten videos.

Odette
Mar 19, 2011

So I'm looking at exporting all my mails from Google to my self-hosted mail server. The Dovecot Gmail migration mentions that Gmail uses virtual dirs, which tends to duplicate emails.

What's the best method to migrate everything without duplicating emails? Untag everything, delete all tags/folders, then migrate?
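
One hedged option, if imapsync is available, is to skip Gmail's virtual folders outright. A sketch (hostnames, accounts, and the exact exclude list are assumptions; double-check the options against the imapsync docs):

code:
imapsync \
  --host1 imap.gmail.com --ssl1 --user1 you@gmail.com --password1 'app-password' \
  --host2 mail.example.com --ssl2 --user2 you --password2 'secret' \
  --exclude '\[Gmail\]/(All Mail|Important|Starred)'
Since "[Gmail]/All Mail" is just a view of every message, excluding it (and the other label views) avoids most of the duplication without touching anything on the Gmail side.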

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
Could you use POP instead of IMAP?

GobiasIndustries
Dec 14, 2007

Lipstick Apathy
So this is all probably very basic but I'm new to this (and I hope this is the right thread): I've got a linux website with sftp access and just set up a local ubuntu vm running server 16.04.1 (installed lamp from the prompt). What I'd like is to use my local vm as a testing ground (running the same versions of php, ruby, etc.) and push updates to my server. I'd be doing development on my mac laptop (Sublime Text) and have a github account, is there an easy workflow for this?

Robo Reagan
Feb 12, 2012

by Fluffdaddy
I just wasted an entire evening pulling out my hair because I thought UEFI keeping me from installing Arch was too simple/dumb to be the reason. :shepface:

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

other people posted:

code:
# tcpdump -nn -e -i <interface> port 67 or port 68

(replace <interface> as needed)

OK, so I did this.

eth0 does not see any traffic. br0 and vnet0 both do see traffic.

So, for some reason my bridge isn't...bridging?

ExcessBLarg!
Sep 1, 2001
UEFI is awful. Parts of it are OK, perhaps even necessary, but the whole NVRAM boot-order shenanigans and the inconsistency across vendors are horrible.

The one thing BIOS got right is that, if you attach a bootable disk to a machine, the machine will boot. With UEFI you often have to open the boot menu and select the correct one of the three "Disk 0" entries, which is never the first one. Or futz with the UEFI shell until you can get the machine booted and run grub-install again.

evol262
Nov 30, 2010
#!/usr/bin/perl

Thermopyle posted:

OK, so I did this.

eth0 does not see any traffic. br0 and vnet0 both do see traffic.

So, for some reason my bridge isn't...bridging?

Did you set the sysctls from the link above (Fedora docs)?

This is probably iptables. The sysctls will stop it from mucking with bridge traffic.
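
(The sysctls being referred to are presumably the bridge-netfilter ones; a sketch, with the file path and the exact set as assumptions:)

code:
# e.g. /etc/sysctl.d/99-bridge.conf -- 0 means "don't pass bridged traffic to iptables"
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-arptables = 0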

ExcessBLarg! posted:

UEFI is awful. Parts of it are OK, perhaps even necessary, but the whole NVRAM boot-order shenanigans and the inconsistency across vendors are horrible.
As someone who works with it, it's much less awful than legacy boot. Trying to wedge a bunch of add-on cards in is hell on legacy. The mess of BIOS boot partitions on large disks is hell. Troubleshooting booting is hell (especially vs an EFI shell).

Yes, it's something new to learn. That's ok. And UEFI 2 is remarkably consistent unless you have a Lenovo consumer-grade laptop. Having a consistent boot setup across architectures and being able to write binaries in a native format is just icing on the cake.

ExcessBLarg! posted:

The one thing BIOS got right is that, if you attach a bootable disk to a machine, the machine will boot. With UEFI you often have to open the boot menu and select the correct one of the three "Disk 0" entries, which is never the first one. Or futz with the UEFI shell until you can get the machine booted and run grub-install again.

You actually don't. UEFI will attempt to find an EFI system partition on the first disk. This is no different from legacy boot. Except you can actually boot from other disks by setting nvram instead of hitting the BIOS menu and swapping disks around so Windows doesn't clobber grub/etc.

feedmegin
Jul 30, 2008

Speaking as someone who owns one, what's wrong with Lenovo UEFI?

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

evol262 posted:

Did you set the sysctls from the link above (Fedora docs)?

This is probably iptables. The sysctls will stop it from mucking with bridge traffic.


Yeah I did. Doesn't seem to have made any difference.

For shits and giggles I just tried adding the iptables rule as mentioned on that page. And it fixed the problem!

I'm a little nervous and/or bewildered as to why the sysctl stuff didn't actually disable iptables for bridged traffic...
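
(For anyone following along, the rule on that page is presumably something along these lines; treat it as a sketch rather than gospel:)

code:
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
This just inserts an explicit ACCEPT for traffic crossing the bridge at the top of the FORWARD chain, so the default FORWARD policy or other rules no longer eat it.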

ExcessBLarg!
Sep 1, 2001

evol262 posted:

As someone who works with it, it's much less awful than legacy boot.
As someone who "casually" works with UEFI machines and GRUB it's still awful.

evol262 posted:

The mess of BIOS boot partitions on large disks is hell.
No worse than EFI partitions. In the old days all you needed to boot a disk was a valid MBR signature and the bootloader took care of the rest. Things got messy when BIOSes started peering into the MBR partition table and looked for a bootable partition, but this wasn't an issue before 2008.

Sure you might need a separate boot partition because of LBA offset restrictions, or because your bootloader doesn't understand the fancy root volume, but at least you were relatively free to partition things however you wanted subject to the offset restrictions. Now you need a GPT, a FAT32 (but not really FAT32) EFI partition, and hope you got your boot paths right.

evol262 posted:

Troubleshooting booting is hell (especially vs an EFI shell).
As far as fancy firmwares go troubleshooting was way easier on Open Firmware or SRM, which have had native serial console support since forever--don't even need a framebuffer card. PC BIOSes are obviously lacking here, but they'll generally still boot an attached disk in the default configuration, and since GRUB has always worked with VGA text modes and has serial console support itself, troubleshooting hasn't really been a problem.

On half the UEFI machines I use GRUB won't display anything unless you add EFI graphics support to grub.cfg, although maybe that's a GRUB issue. I also regularly encounter UEFI display problems on 4K monitors, making any kind of troubleshooting impossible without a monitor "downgrade". Serial is much simpler, but console redirection on PCs has always been a vendor-specific option and rarely works well.

evol262 posted:

And UEFI 2 is remarkably consistent unless you have a Lenovo consumer-grade laptop.
It's not. On Intel UEFI I can install a bootloader to "EFI/BOOT/BOOTX64.EFI" and it will automatically boot with the default NVRAM configuration, which is great. But Dell UEFI (not sure the actual vendor) won't boot unless I select it from the boot menu. That means I can't just clone an image on a disk and have it automatically boot on arbitrary UEFI hardware--it either requires boot-menu intervention or setting NVRAM variables (which itself requires boot intervention).

evol262 posted:

Having a consistent boot setup across architectures and being able to write binaries in a native format is just icing on the cake.
Open Firmware?

evol262 posted:

UEFI will attempt to find an EFI system partition on the first disk. This is no different from legacy boot.
Sure it will find an EFI system partition, but it won't (automatically) boot "EFI/ubuntu/grubx64.efi", which is the default Ubuntu bootloader location, until I launch it from a shell and set it in NVRAM.

ExcessBLarg! fucked around with this message at 20:08 on Feb 16, 2017

evol262
Nov 30, 2010
#!/usr/bin/perl

ExcessBLarg! posted:

As someone who "casually" works with UEFI machines and GRUB it's still awful.
It's ok to have different opinions.

ExcessBLarg! posted:

No worse than EFI partitions. In the old days all you needed to boot a disk was a valid MBR signature and the bootloader took care of the rest. Things got messy when BIOSes started peering into the MBR partition table and looked for a bootable partition, but this wasn't an issue before 2008.
Those days are long gone, though. I'd argue that things got messy when we carried over a design that relied on a fixed space with a bunch of crap shoved into "free" space in order to chainload a larger bootloader, with finicky ordering of option ROM cards that couldn't always present real disks to BIOS that you could boot from.

ExcessBLarg! posted:

Sure you might need a separate boot partition because of LBA offset restrictions, or because your bootloader doesn't understand the fancy root volume, but at least you were relatively free to partition things however you wanted subject to the offset restrictions. Now you need a GPT, a FAT32 (but not really FAT32) EFI partition, and hope you got your boot paths right.
It's extremely likely that every system you've used in the last 5 years (EFI or not) has had a GPT partition table. EFI doesn't actually require it, though it's present in 99% of cases.

The EFI SP also doesn't need to be FAT32, but it almost always is. Some vendors don't do it, but don't blame EFI because people don't follow the specs.

ExcessBLarg! posted:

As far as fancy firmwares go troubleshooting was way easier on Open Firmware or SRM, which have had native serial console support since forever--don't even need a framebuffer card. PC BIOSes are obviously lacking here, but they'll generally still boot an attached disk in the default configuration, and since GRUB has always worked with VGA text modes and has serial console support itself, troubleshooting hasn't really been a problem.
I always liked ARCS. But SRM and openfirmware aren't coming to PC hardware. It's hard to argue that EFI isn't better than the alternatives you do have. And it's not like setting the boot order was nice in openfirmware/openboot either. I'd rather have sane, configurable defaults than a completely inflexible system.

ExcessBLarg! posted:

On half the UEFI machines I use GRUB won't display anything unless you add EFI graphics support to grub.cfg, although maybe that's a GRUB issue. I also regularly encounter UEFI display problems on 4K monitors, making any kind of troubleshooting impossible without a monitor "downgrade". Serial is much simpler, but console redirection on PCs has always been a vendor-specific option and rarely works well.
EFI framebuffers are different from legacy. grub should handle this seamlessly when the config is generated. I don't know if you're deploying a single image over and over again (with a fixed grub configuration), but this definitely sounds like either a bug in the version of grub Ubuntu ships or a problem with your deployment environment. Not that I'm casting aspersions, but I've literally never seen this (or seen it reported) when using Anaconda. We also maintained our own installer for a while in the old days of oVirt Node, and I didn't see it (or see it reported) there, either. Then again, we detected whether or not your system was EFI and wrote the configuration appropriately.

grub actually can do console redirection, too. You'd just be missing it from the firmware menu.
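
(For what it's worth, pointing grub itself at a serial console is usually just a couple of lines in /etc/default/grub; a sketch, with the unit and speed as assumptions:)

code:
GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"
# then regenerate the config (grub-mkconfig or grub2-mkconfig; output path varies by distro)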

ExcessBLarg! posted:

It's not. On Intel UEFI I can install a bootloader to "EFI/BOOT/BOOTX64.EFI" and it will automatically boot with the default NVRAM configuration, which is great. But Dell UEFI (not sure the actual vendor) won't boot unless I select it from the boot menu. That means I can't just clone an image on a disk and have it automatically boot on arbitrary UEFI hardware--it either requires boot-menu intervention or setting NVRAM variables (which itself requires boot intervention).
EFI/BOOT/BOOTX64.EFI is a failsafe default. If there's something else higher in the boot order which EFI can find (and boot from), it won't get picked. Yes, this is annoying. No, this isn't really different from the way legacy booting worked.

I'd bet a lot of money that Dell would tell you "upgrade your firmware to fix this", but that's a common problem with vendor firmwares anyway. Usually not something so common as booting, but still.

ExcessBLarg! posted:

Open Firmware?
Granted, but I don't think it was on any x86 systems other than the OLPC, and vendors changed the hell out of it anyway. EFI is pretty much EFI, which really matters to me compared to the complete clusterfuck that is uboot on ARM. EFI on ARM is nice.

ExcessBLarg! posted:

Sure it will find an EFI system partition, but it won't (automatically) boot "EFI/ubuntu/grubx64.efi", which is the default Ubuntu bootloader location, until I launch it from a shell and set it in NVRAM.
No. It will boot BOOTX64.EFI (unless there's a bug in the firmware).

Kind of a dumb argument to have in general. Some people like it, some don't. But, like systemd, it has a number of technical advantages even if some implementations are flawed, and it is the future. You can miss sysvinit and legacy boot until the end of the world, but they're dead, baby. They're dead.

ToxicFrog
Apr 26, 2008


ExcessBLarg! posted:

The one thing BIOS got right is that, if you attach a bootable disk to a machine, the machine will boot.

What kind of charmed life have you been living? There's a shitload of BIOS machines out there (I have two in my basement!) that have ports you can attach mass storage devices to that nonetheless won't boot from them; laptops that have SD card slots but won't boot from SD, in particular, are common, although as late as 2012 I was still seeing machines with USB ports that wouldn't boot from USB.

On top of that, you have machines that will boot but have a really narrow definition of "bootable disk". The most prevalent of these are probably systems that support booting from USB, but only if the USB disk "looks like" a Zip disk, meaning it needs to conform to fairly specific restrictions on partition layout. Doesn't matter that the disk would boot fine if it tried executing the MBR; if you can't convince the BIOS to try that in the first place, it's not bootable.

My experiences with UEFI haven't been good, but they have at least been bad in ways that are more consistent and easier to debug than BIOS.

gently caress bootloaders, in hell, forever.

theperminator
Sep 16, 2009

by Smythe
Fun Shoe

Thermopyle posted:

Yeah I did. Doesn't seem to have made any difference.

For shits and giggles I just tried adding the iptables rule as mentioned on that page. And it fixed the problem!

I'm a little nervous and/or bewildered as to why the sysctl stuff didn't actually disable iptables for bridged traffic...

How did you apply the sysctl changes?

If you run "sysctl net.bridge.bridge-nf-call-iptables", what is the result?
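
(A sketch of checking and applying them, assuming they were dropped into a file under /etc/sysctl.d/; the filename is a placeholder:)

code:
sysctl net.bridge.bridge-nf-call-iptables   # what the kernel is using right now
sysctl -p /etc/sysctl.d/99-bridge.conf      # load that one file without a reboot
sysctl --system                             # or reload everything under /etc/sysctl.d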

ExcessBLarg!
Sep 1, 2001

evol262 posted:

EFI framebuffers are different from legacy. grub should handle this seamlessly when the config is generated.
Even without a grub.cfg, GRUB is supposed to drop into its command line where you can manually prod stuff. It's hard to use when the video doesn't also come up though. That's never been a problem with BIOS, but it's a problem in some UEFI environments. The inconsistency is killer.

evol262 posted:

I don't know if you're deploying a single image over and over again (with a fixed grub configuration),
Yes, it's common to use disk duplicators to blat an image onto an SSD build, and have a script that runs on first-boot to do all the hardware-specific configuration. Doing so without intervention requires that UEFI have reasonable boot order defaults, which, true, is the same as BIOS, but it seems that non-sane default configurations are more prevalent in UEFI implementations.

evol262 posted:

Kind of a dumb argument to have in general. Some people like it, some don't. But, like systemd, it has a number of technical advantages even if some implementations are flawed, and it is the future. You can miss sysvinit and legacy boot until the end of the world, but they're dead, baby.
I actually like systemd. It solves a lot of long-standing problems, some of which were previously addressed by daemontools and upstart, but the difference is that everyone has adopted the one implementation of it. So while systemd operates differently from sysvinit or upstart, once you learn it, it pretty much works everywhere.

I really want to like UEFI. I knew it would be different, but it's a more powerful platform than legacy BIOS with a lot of advantages. However, since UEFI is just a spec to which the actual implementations have varying degrees of non-compliance or variation in behavior, it's resulted in practical headaches that I didn't have to deal with previously. This always happens with complicated specs with insufficient compliance checking. See ACPI, USB-C, etc.

ExcessBLarg! fucked around with this message at 14:48 on Feb 17, 2017

venutolo
Jun 4, 2003

Dinosaur Gum
I need to set up something to get remote access to my work box (Ubuntu MATE 16.04). I'm no Linux guru, so bear with me if I use the wrong terms or whatever.

I would like to be able to connect to my existing session and be able to handle the differences between my work display (3 monitors with different resolutions and orientations) and whatever computer I'm connecting from (typically a single 1920x1080 display). This would probably be something like being able to choose to only display one of the displays, and resize it so I don't have to scroll on my lower res client monitor. A Windows client would be nice for when I'm away from home at my Mom's place, but not entirely necessary.

I had tried NoMachine, which was a mixed bag. I cannot get any client to connect when the server computer is using an Nvidia driver. When I use the nouveau driver, I can connect, but that's not really a practical solution as my non-remote experience is much better with the Nvidia driver. My best guess based on some searching is that to get it to work with the Nvidia driver, I have to do something related to H.264. There's some stuff I've read that says I can either install the NoMachine AVC Pack ($5) or build/install libx264. I've done (or at least think I've done) the latter, to no positive effect. I've also tried setting it to use VP8 or MJPEG to no effect as well.

NoMachine allows me to connect to an existing session, and I can choose to just display my main remote display and resize it to fit my client screen. Aside from the above problems, I find the visual quality and responsiveness (including after messing with various quality options) to be sub-par relative to my experience with TigerVNC. I will admit I may have no idea what I'm doing. So given the combination of the price for commercial use ($125 for a license), the difficulty in getting it working (so far I've been completely unable to get it to work with the Nvidia driver), and the not-great quality, I don't think it's a viable option.

After trying NoMachine, I have since tried TigerVNC. When running vncserver and connecting and starting a new session, everything is great. The visual quality is great and very responsive. I haven't yet had luck connecting to an existing session with x0vncserver, and even if I did, I'm not sure there is anything that would allow me to display it nicely in a 1920x1080 client.
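
(For reference, the existing-session setup being attempted looks roughly like this; a sketch, with the display number, port, and password file path as assumptions:)

code:
vncpasswd ~/.vnc/passwd                                             # create a VNC password file
x0vncserver -display :0 -rfbport 5900 -passwordfile ~/.vnc/passwd   # share the live :0 session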

There are so many remote access options, it is hard to sort through and know what to try. So is there something out there that will allow me to connect to my existing session, handle differences in resolution between server computer and client, and is either free for commercial use or has a license for a single instance that my boss won't balk at?

venutolo fucked around with this message at 15:11 on Feb 17, 2017

evol262
Nov 30, 2010
#!/usr/bin/perl

ExcessBLarg! posted:

Even without a grub.cfg, GRUB is supposed to drop into its command line where you can manually prod stuff. It's hard to use when the video doesn't also come up though. That's never been a problem with BIOS, but it's a problem in some UEFI environments. The inconsistency is killer.
It's likely that it does, but you can't see it. I'd suggest always using gop, but that doesn't help if your vendor doesn't default to BOOTX64, since grub presumably isn't loading.

ExcessBLarg! posted:

Yes, it's common to use disk duplicators to blat an image onto an SSD build, and have a script that runs on first-boot to do all the hardware-specific configuration. Doing so without intervention requires that UEFI have reasonable boot order defaults, which, true, is the same as BIOS, but it seems that non-sane default configurations are more prevalent in UEFI implementations.
I haven't seen this done on physical hardware in years, but I haven't been an admin for years, and I always provisioned new hardware over PXE when I was.

Again, though, UEFI does have reasonable defaults. Don't blame the spec for vendors.

ExcessBLarg! posted:

I actually like systemd. It solves a lot of long-standing problems, some of which were previously addressed by daemontools and upstart, but the difference is that everyone has adopted the one implementation of it. So while systemd operates differently from sysvinit or upstart, once you learn it, it pretty much works everywhere.
Not exactly. systemd is an umbrella project, and lots of distros only ship the core. Sometimes that core is a year or more out of date, so you end up writing services against systemd-218 or something for compatibility. EL7 didn't rebase systemd in 7.3 without cause...

ExcessBLarg! posted:

I really want to like UEFI. I knew it would be different, but it's a more powerful platform than legacy BIOS with a lot of advantages. However, since UEFI is just a spec to which the actual implementations have varying degrees of non-compliance or variation in behavior, it's resulted in practical headaches that I didn't have to deal with previously. This always happens with complicated specs with insufficient compliance checking. See ACPI, USB-C, etc.
It honestly just sounds like you've had terrible experiences with some vendors and ACPI/EFI/etc, but no bad experiences with legacy booting/etc. Some people have had the opposite experience.

Docjowles
Apr 9, 2009

GobiasIndustries posted:

So this is all probably very basic but I'm new to this (and I hope this is the right thread): I've got a linux website with sftp access and just set up a local ubuntu vm running server 16.04.1 (installed lamp from the prompt). What I'd like is to use my local vm as a testing ground (running the same versions of php, ruby, etc.) and push updates to my server. I'd be doing development on my mac laptop (Sublime Text) and have a github account, is there an easy workflow for this?

This post got kinda buried. But imo you want to use Vagrant and a "box" file that matches the OS you're running in production. And a simple shell provisioner that installs all the same configs and packages you are running on your site. This will give you a dev VM on your local machine that matches prod.

Pass your git repo through to the VM as a shared folder. When your tests pass locally, push to prod.
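
A sketch of that workflow from the command line, assuming the official Ubuntu 16.04 box name:

code:
vagrant init ubuntu/xenial64   # writes a Vagrantfile into the project directory
vagrant up                     # boots the VM (VirtualBox by default) and runs provisioners
vagrant ssh                    # shell in; the project folder is shared at /vagrant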

GobiasIndustries
Dec 14, 2007

Lipstick Apathy

Docjowles posted:

This post got kinda buried. But imo you want to use Vagrant and a "box" file that matches the OS you're running in production. And a simple shell provisioner that installs all the same configs and packages you are running on your site. This will give you a dev VM on your local machine that matches prod.

Pass your git repo through to the VM as a shared folder. When your tests pass locally, push to prod.

Thank you! Like I said, I wasn't sure if this was the right thread, but my main server is on Linux and it involves so much stuff I figured I'd start here. I'm working this weekend but Monday I'm off, so I'll give it a shot. If anyone else has any recs I'd love to hear them; it seems so simple in my mind (use my Mac to dev stuff on a local VM, push to prod/save to GitHub) but nothing is ever simple.

evol262
Nov 30, 2010
#!/usr/bin/perl
Honestly, I'd use a container for this.

It's pretty much the ideal case for containers.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Why is this bash script ignoring the 'for' loop when I add #!/bin/bash to the start?

code:
#!/bin/bash
#history -w
for i in $( history | grep magnet )
do
   linenumber=$(echo $i | cut -c 1-2)
   if [[ $linenumber == "fl" || $linenumber == '"m' ]]
   then
      echo $linenumber " is not a line number, ignoring"
   else
      echo $linenumber " will be deleted from history..now"
      history -d $linenumber
      history -w
      read $whatevs
   fi
sleep 1
done
If I remove line 1 it runs the 'for' loop. Even if I use #!/bin/sh instead it still doesn't run.

I'm doing something stupid, aren't I?

Also, I haven't managed to get the 'history -d $linenumber' part to work yet.

I'm doing something stupid there also, aren't I?

Horse Clocks
Dec 14, 2004


Is the shell you use day to day bash?

If not, I suspect you won't have any history to remove, so it loops through nothing.

RFC2324
Jun 7, 2012

http 418

apropos man posted:

Why is this bash script ignoring the 'for' loop when I add #!/bin/bash to the start?

code:
#!/bin/bash
#history -w
for i in $( history | grep magnet )
do
   linenumber=$(echo $i | cut -c 1-2)
   if [[ $linenumber == "fl" || $linenumber == '"m' ]]
   then
      echo $linenumber " is not a line number, ignoring"
   else
      echo $linenumber " will be deleted from history..now"
      history -d $linenumber
      history -w
      read $whatevs
   fi
sleep 1
done
If I remove line 1 it runs the 'for' loop. Even if I use #!/bin/sh instead it still doesn't run.

I'm doing something stupid, aren't I?

Also, I haven't managed to get the 'history -d $linenumber' part to work yet.

I'm doing something stupid there also, aren't I?

When you call /bin/bash it's initializing a new shell, so if what you are trying to grep out is in the current shell's history and hasn't been written to disk, it won't be found by the script.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
I use byobu for my terminal from day to day. I don't think that it's a shell as such, rather a jazzy interface for bash.

If I disable byobu and use 'plain bash' instead then all my history is still there. They both seem to use the same .bash_history file, therefore the history command in that script works the same whether I use byobu or not.

I also tried changing it to use /usr/bin/sh as my terminal, and when I close all terminals and open a /usr/bin/sh terminal I still have the same history, so the shell 'sh' is also using the same .bash_history file. I still don't see why specifying a shell in the first line of my script causes it not to find any instances of the string 'magnet'.

ToxicFrog
Apr 26, 2008


What happens if you open the script with history -cr?

What's the value of $HISTFILE in your interactive shell? In the script?

What happens if you define it as a function, source the function definition, and call it rather than making it a stand-alone script? Writing a function is generally a more reliable way to write stuff that's meant to interact with your current shell session in any case.
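
(A sketch of the function approach, with the name and the 'magnet:' pattern as placeholders; because it runs in your interactive shell, `history` sees the live session:)

code:
clean_history() {
    local n
    # delete the first matching entry, re-listing each time so the numbers stay current
    while n=$(history | grep 'magnet:' | head -n1 | awk '{print $1}'); [[ -n "$n" ]]; do
        history -d "$n"
    done
    history -w    # write the cleaned list back to $HISTFILE
}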

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
It seems that I didn't realise that running history commands in a script (or other non-interactive shell) results in no history file being enabled.

I have modified it and it's currently in this messy state:

code:
#!/bin/bash
#need to enable history for non interactive shell first
export HISTFILE=/home/foo/.bash_history
echo $HISTFILE
set -o history  #enable history
#history -r
for i in $( cat -n /home/foo/.bash_history | grep magnet )
do
   linenumber=$(echo $i | cut -c 1-2)
   if [[ $linenumber == "fl" || $linenumber == '"m' ]]
   then
      echo $linenumber " is not a line number, ignoring"
   else
      echo $linenumber " will be deleted from history..now"
      history -d $linenumber
      history -w
      read $whatevs
      sleep 1
   fi
done
The trouble now is that the script is saving entries into .bash_history; in particular, the long 'for' command in my script has started appearing in my .bash_history. I don't want this because it's messing up the grepping of the word 'magnet'.

ToxicFrog
Apr 26, 2008


Do you need the `set -o history`? I think the `history` command itself will still work without it.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
To be honest, my idea of grepping for the word 'magnet' is not working either. Enabling history within the script causes the 'for xxxxxxx magnet xxxxxxx' line to be added to history, and the next time I run the script it picks that up and messes with the grep command because of the extra carriage returns in that line.

I just wanted a simple script to clean up the instances where I've been downloading torrents from the terminal, but I give up.

xzzy
Mar 5, 2009

Try this:

rm -f .bash_history

Problem solved, you don't really need to preserve all the other poo poo in your history do you?

Mao Zedong Thot
Oct 16, 2008


grep -v magnet .bash_history > temphistory; mv temphistory .bash_history

I'm sure that doesn't work for some snowflake bash history reason, but whatever.

jaegerx
Sep 10, 2012

Maybe this post will get me on your ignore list!


sed '/magnet/d' .bash_history

GobiasIndustries
Dec 14, 2007

Lipstick Apathy

evol262 posted:

Honestly, I'd use a container for this.

It's pretty much the ideal case for containers.

Docker?

evol262
Nov 30, 2010
#!/usr/bin/perl

jaegerx posted:

sed '/magnet/d' .bash_history

Ding. Sed is perfect. Wondered why it was a script to start with.

Sure. Set up a docker container with whatever base image you want, add on dependencies (ruby/Python/whatever). Check out the git repo. Install nginx. Start the webserver and export the port.

Wanna host it on a server? Dockerfile. New laptop? Dockerfile.

Vagrant is sweet if you need a "real" VM for some reason. Webdev really doesn't.
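
A sketch of the day-to-day loop once there's a Dockerfile in the repo (image name and port mapping are placeholders):

code:
docker build -t mysite-dev .            # build the image from the Dockerfile in this directory
docker run --rm -p 8080:80 mysite-dev   # run it, publishing the container's port 80 on localhost:8080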

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

xzzy posted:

Try this:

rm -f .bash_history

Problem solved, you don't really need to preserve all the other poo poo in your history do you?

Heh. I usually delete bash_history about once a month when it gets cluttered, and that's with erasedups enabled.

VOTE YES ON 69 posted:

grep -v magnet .bash_history > temphistory; mv temphistory .bash_history

I'm sure that doesn't work for some snowflake bash history reason, but whatever.

Why didn't I think of some kind of inverse grepping? Because I can be incredibly forgetful/stupid. It's not like I haven't used inverse grep before, either.

jaegerx posted:

sed '/magnet/d' .bash_history

Bingo! Although I haven't got a clue how the syntax works here. When I see a regex I want to use I usually copy and paste and think "I dunno how it works, but it works". Cheers!
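
(Roughly how it reads: /magnet/ is an address matching every line containing "magnet", and d is the delete command applied to those lines. Note that as posted it only prints the result; with GNU sed you'd add -i to actually edit the file:)

code:
sed '/magnet/d' .bash_history        # print .bash_history with the magnet lines removed
sed -i '/magnet/d' ~/.bash_history   # GNU sed: delete them in the file itself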


apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Yet more annoying bash stuff. Bash is getting on my nerves this weekend.

I keep my scripts in /opt.

I refined the script to trim my history and popped it onto my server in the /opt directory and then logged out. I then ran scp to put a copy on my laptop. I ran scp without sudo, thus:

code:
scp j3710:/opt/trim_bash_history.sh /opt/
And it copied trim_bash_history.sh from /opt on my server to /opt on my laptop without the use of sudo.

If I try and copy a scrap file into /opt on my laptop it won't allow me without sudo, so why was scp allowed to do it?
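
(Before assuming scp gets special treatment, a quick diagnostic sketch is to compare what each operation was actually allowed to do:)

code:
ls -ld /opt                       # who owns /opt on the laptop, and with what permissions?
ls -l /opt/trim_bash_history.sh   # who ended up owning the copied file?
touch /opt/scrapfile              # does a plain local write really fail without sudo?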
