Powered Descent
Jul 13, 2008

We haven't had that spirit here since 1969.

KoRMaK posted:

Oh gently caress, I figured it out

Lol, wtf. Well I guess that makes sense as to why they weren't behaving the same. Why and/or how could source not be available?

What shell are you actually using? You might be accidentally running sh instead of bash or something. Try: echo $SHELL


KoRMaK
Jul 31, 2012



Powered Descent posted:

What shell are you actually using? You might be accidentally running sh instead of bash or something. Try: echo $SHELL
Oh, I'm deliberately using sh in my shebang statement at the beginning of the script.

So, I guess I just learned an important difference in shells.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

KoRMaK posted:

Oh, I'm deliberately using sh in my shebang statement at the beginning of the script.

So, I guess I just learned an important difference in shells.
What /bin/sh actually is varies between distributions, too. In distros like Red Hat, it's usually bash, run in a special mode that tries to preserve the original Bourne shell's compatibility and semantics. On Debian or Ubuntu, it's dash, which was written specifically to run startup scripts slightly faster than bash.
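A quick way to check what your own /bin/sh is (a small sketch; the exact path printed depends on the distro):

```shell
# Resolve the /bin/sh symlink chain to the real interpreter.
readlink -f /bin/sh
# Typical results: /bin/dash or /usr/bin/dash on Debian/Ubuntu,
# /bin/bash or /usr/bin/bash on the Red Hat family.
```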

ExcessBLarg!
Sep 1, 2001

KoRMaK posted:

So, I guess I just learned an important difference in shells.
Unix shells have a long and interesting (or perhaps boring) history. The "default" Unix shell, /bin/sh (the one most commonly used for shell scripting), dates back to the Bourne shell from the 70s, and has been extended and has gone by different names in various Unix flavors. The behavior of /bin/sh was standardized (along with much of the rest of "Unix" behavior) in POSIX, so it should more accurately be called the POSIX 1003.2/1003.2a standard shell, but nobody actually calls it that.

The traditional shell on Linux systems is GNU Bash, a heavily extended but otherwise mostly compatible implementation of the Bourne/POSIX shells. At one time, /bin/sh was a symlink to /bin/bash on Linux systems, and so shell scripts ran under the Bash interpreter (specifically, in Bash's POSIX compatibility mode). Folks decided, however, that Bash was too big, buggy, and slow, and so about ten years ago some Linux distributions switched to using Dash as /bin/sh. Dash, however, is a much more faithful implementation of POSIX 1003.2/1003.2a, and so the switch exposed many instances of "Bashisms" in shell scripts (thus, the switch actually took a few years to complete).

Basically, on your typical Linux system there are three ways to write Bourne shell scripts:
  • Use /bin/bash, which gives you all of Bash's features and behaviors.
  • Use /bin/sh linked to /bin/bash (or /bin/bash --posix), which is supposed to invoke Bash in its POSIX compatibility mode, but still accepts some extended features (Bashisms).
  • Use /bin/sh linked to /bin/dash (or /bin/dash to be explicit), which is much more strictly compatible with the POSIX shell.
Personally, if I had to write a shell script I'd just use /bin/bash and forget the others. Sure, Bash is big, buggy, slow, and a security nightmare, but if you're writing shell scripts in 2015 you're pretty screwed anyways. Might as well make use of the features added over the past 20 or 40 years.
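To make the "Bashisms" point concrete, here's a tiny sketch: [[ ]] with pattern matching is a bash extension, not POSIX, so the same script behaves differently depending on what interprets it:

```shell
#!/bin/sh
# [[ ]] and == pattern matching are bash extensions. bash runs this fine
# (even when invoked as sh); dash aborts with something like "[[: not found".
if [[ "abc" == a* ]]; then
    echo "matched"
fi
```

Run it with `bash script.sh` and then `dash script.sh` to watch the same file succeed and fail.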

ExcessBLarg! fucked around with this message at 02:41 on Oct 4, 2015

reading
Jul 27, 2013

reading posted:

I want to install OpenBSD on a single-board computer (a 64-bit Intel chip). I need to create a USB install disk thing that I can plug in, boot from, and install onto the machine. However it looks like OpenBSD does not provide any USB install images and I'm not sure how to create one (my host system is Xubuntu). All the blog posts I find online are contradictory. Can anyone help out? How can I put OpenBSD on this system that doesn't have a CD drive?

There's this list of .img, .fs, .iso files: http://ftp.spline.de/pub/OpenBSD/5.7/amd64/
And I think this blog post points in an interesting direction but it leaves too much out and I can't get it to work: http://www.tumfatig.net/20110423/install-openbsd-from-usb-stick/

Update: I'm using an ECS Liva mini-computer, and I still cannot get any OS (FreeBSD, OpenBSD, Debian) to install on this dang thing off USB. So far here's what I know:
1) I need to use UEFI. So, I formatted my USB stick with a gpt FAT32 partition. If I set the flag to "boot" using gparted, the Liva's BIOS will give me the chance to select the USB stick to boot from.
2) However, it just keeps sending me into the BIOS menus, rather than actually installing or booting from the USB. The BIOS menus recognize the USB stick, so I don't get what the issue is.
3) I've tried using dd to put the .img or .iso files onto the USB, I've tried using unetbootin, and I've tried just clicking and dragging.
4) I have tried two different USB sticks. Same results on both.

At what point do I know if I have a defective device? According to what little info there is on the web it should be booting from the USB stick and installing the OS with no problem.

Docjowles
Apr 9, 2009

Vulture Culture posted:

On Debian or Ubuntu, it's dash, which was written specifically to make you mad.

evol262
Nov 30, 2010
#!/usr/bin/perl

Droo posted:

It's just a mysqldump file that I am importing on a second server. The reason I turn off the binlogs temporarily for the import is just because I don't need the entire database import respammed to the binlogs. There are no shared tablespaces, and maybe I should clarify that everything APPEARS to be perfectly in order and I don't think there is actually a problem. I was just confused by the weird file sizes.


So, in other words all I'm doing is:

primary>>> mysqldump --opt --result-file dump.sql databasename
backup>>> run script that splits dump.sql into 50 individual files, dump.tablename.sql
backup>>> foreach tablename: mysql databasename < dump.tablename.sql

and I notice the weird file size discrepancies. But as far as I can tell, all the data is perfect.

Edit to add: there is a difference between mysql 5.1 builtin InnoDB and the "plugin" innoDB that we are using.

As long as the table checksums match, you're probably good, but you may have to ask the mariadb devs what's happening if that stackoverflow link wasn't enough

evol262
Nov 30, 2010
#!/usr/bin/perl

reading posted:

Update: I'm using an ECS Liva mini-computer, and I still cannot get any OS (FreeBSD, OpenBSD, Debian) to install on this dang thing off USB. So far here's what I know:
1) I need to use UEFI. So, I formatted my USB stick with a gpt FAT32 partition. If I set the flag to "boot" using gparted, the Liva's BIOS will give me the chance to select the USB stick to boot from.
2) However, it just keeps sending me in to the BIOS menus, rather than actually installing or booting from the USB. The BIOS menus recognize the USB stick so I don't get what the issue is.
3) I've tried using dd to put the .img or .iso files onto the USB, I've tried using unetbootin, and I've tried just clicking and dragging.
4) I have tried two different USB sticks. Same results on both.

At what point do I know if I have a defective device? According to what little info there is on the web it should be booting from the USB stick and installing the OS with no problem.

I'll just mentally sub every instance of "BIOS" with "firmware".

Create a UEFI VM. Attach the stick. Does it boot?

Find a UEFI system (like your laptop/desktop). Does it boot? In EFI mode, not legacy?

Is there an EFI system executable named bootx64.EFI on that partition? Is secureboot on or off? Is your image signed? Does dumpet -i show an EFI platform ID?

There's a lot that could be wrong here, and you should start with the basics, like "can I boot this on a known-good EFI system/vm in EFI mode?" and go from there once you're sure it works.

telcoM
Mar 21, 2009
Fallen Rib

KoRMaK posted:

Oh gently caress, I figured it out

code:
. is a special shell builtin
source: not found
Lol, wtf. Well I guess that makes sense as to why they weren't behaving the same. Why and/or how could source not be available?

"a special shell builtin"? I think I've seen this before...

code:
<my_fancy_prompt>$ dash
$ type .
. is a special shell builtin
$ type source
source: not found
...yeah. Exact match. Looks like your /bin/sh shell is probably dash, not bash.

If /bin/sh is a symlink to bash and scripts are run that way, bash will run in a POSIX-compliant mode that still includes some functionality above and beyond POSIX.

dash is a shell that aims to be both as small as possible and POSIX-compliant. The "source" command is not specified in the POSIX shell standard, so dash doesn't have it. Debian/Ubuntu switched over to using dash as the default /bin/sh. Read more here: https://wiki.ubuntu.com/DashAsBinSh
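If a script has to stay on #!/bin/sh, the portable fix is the POSIX dot command, which does the same job as bash's source (the /tmp/lib.sh file here is just a made-up example):

```shell
# A throwaway "library" file for the demo.
cat > /tmp/lib.sh <<'EOF'
greet() { echo "hello from lib"; }
EOF

# POSIX-portable sourcing: works in dash, bash, ksh, ...
. /tmp/lib.sh
greet

# "source /tmp/lib.sh" does the same thing in bash,
# but under dash it dies with "source: not found".
```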

telcoM
Mar 21, 2009
Fallen Rib

reading posted:

Update: I'm using an ECS Liva mini-computer, and I still cannot get any OS (FreeBSD, OpenBSD, Debian) to install on this dang thing off USB. So far here's what I know:
1) I need to use UEFI. So, I formatted my USB stick with a gpt FAT32 partition. If I set the flag to "boot" using gparted, the Liva's BIOS will give me the chance to select the USB stick to boot from.
2) However, it just keeps sending me in to the BIOS menus, rather than actually installing or booting from the USB. The BIOS menus recognize the USB stick so I don't get what the issue is.
3) I've tried using dd to put the .img or .iso files onto the USB, I've tried using unetbootin, and I've tried just clicking and dragging.
4) I have tried two different USB sticks. Same results on both.

At what point do I know if I have a defective device? According to what little info there is on the web it should be booting from the USB stick and installing the OS with no problem.

1.) If you are using GPT partitioning, the important flag would be "esp". But if I'm reading the UEFI standard docs right, then it should not be necessary to use GPT on removable media - any FAT32 with the old MBR partitioning should do.

2.) The rule for GPT partitioned disks to be UEFI bootable is: it should have a FAT32 partition flagged as "esp" and in it, there should be a bootloader exactly at \EFI\BOOT\BOOTX64.efi.

For removable media, the GPT requirement can be waived, so the important thing is to have the bootloader binary with the right name in the right path. FAT32 is supposed to be case-insensitive, but there have been case-sensitive UEFI firmware implementations, so you may have to try all-uppercase and all-lowercase variations, too. And since your removable media now contains a GPT partition table, the firmware might want to see the "esp" flag too.

You might want to try with a USB stick with a plain old MBR partition table and a single FAT32 partition. Don't mark it as bootable or anything, just put the bootloader (or, for testing, perhaps an EFI shell, here) to \EFI\BOOT\BOOTX64.efi.

3.) If the firmware menus mention Secure Boot, you'll want to turn that off, at least for the installation.
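Putting steps 1-3 together, a rough sketch of preparing such a stick from Linux (treat /dev/sdX as a placeholder, the parted/mkfs invocations are my assumptions, and this will destroy whatever is on the stick):

```shell
#!/bin/sh
# DANGER: /dev/sdX is a placeholder -- verify the device with lsblk first.
# Plain MBR partition table with one FAT32 partition.
parted --script /dev/sdX mklabel msdos mkpart primary fat32 1MiB 100%
mkfs.vfat -F 32 /dev/sdX1

# Drop the bootloader at the removable-media default path.
mount /dev/sdX1 /mnt
mkdir -p /mnt/EFI/BOOT
cp bootx64.efi /mnt/EFI/BOOT/BOOTX64.EFI   # try case variants if the firmware is picky
umount /mnt
```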

SpeedoJoe
Feb 23, 2006
I'm trying to test rsync's block level backup functionality, but the time taken doesn't seem to decrease when I transfer a modified version of the large randomly generated file I'm working with. Any help would be appreciated.

These are the test files. rand2.txt is based on rand1.txt with the first character from each line removed.
pre:
-rw-rw-r-- 1 local local 954M Oct  4 17:59 rand1.txt
-rw-rw-r-- 1 local local 942M Oct  4 20:29 rand2.txt
Initial transfer command and console output.
code:
rsync -avhP --log-file=1st.txt /data/rsynctest/src/rand1.txt 192.168.0.5:/home/local/Desktop/dst/rand.txt

sending incremental file list
rand1.txt
          1.00G 100%   73.25MB/s    0:00:13 (xfr#1, to-chk=0/1)
sent 1.00G bytes  received 35 bytes  57.16M bytes/sec
total size is 1.00G  speedup is 1.00
Second transfer command and console output.
code:
rsync -avhP --log-file=2nd.txt /data/rsynctest/src/rand2.txt 192.168.0.5:/home/local/Desktop/dst/rand.txt

sending incremental file list
rand2.txt
        987.01M 100%   26.77MB/s    0:00:35 (xfr#1, to-chk=0/1)
sent 987.25M bytes  received 221.46K bytes  23.23M bytes/sec
total size is 987.01M  speedup is 1.00
Logs.
code:
::::::::::::::
1st.txt
::::::::::::::
2015/10/04 22:10:22 [19691] building file list
2015/10/04 22:10:35 [19691] <f+++++++++ rand1.txt
2015/10/04 22:10:35 [19691] sent 1.00G bytes  received 35 bytes  57.16M bytes/sec
2015/10/04 22:10:35 [19691] total size is 1.00G  speedup is 1.00
::::::::::::::
2nd.txt
::::::::::::::
2015/10/04 22:11:02 [19696] building file list
2015/10/04 22:11:39 [19696] <f.st...... rand2.txt
2015/10/04 22:11:40 [19696] sent 987.25M bytes  received 221.46K bytes  23.23M bytes/sec
2015/10/04 22:11:40 [19696] total size is 987.01M  speedup is 1.00
Just tried with matching source and destination file names, which made no difference.

I'm confused as to why the second transfer takes longer than the first. Shouldn't block level kick in, recognise the second transfer is smaller by 12M and complete almost instantly?
Does this only work when the second transfer is larger than the first? Is the difference between the two files not significant enough, therefore confusingly transferring the whole file? Or is it in fact working and my test isn't good enough?

Powered Descent
Jul 13, 2008

We haven't had that spirit here since 1969.

SpeedoJoe posted:

I'm trying to test rsync's block level backup functionality, but the time taken doesn't seem to decrease when I transfer a modified version of the large randomly generated file I'm working with. Any help would be appreciated.

These are the test files. rand2.txt is based on rand1.txt with the first character from each line removed.
...
I'm confused as to why the second transfer takes longer than the first. Shouldn't block level kick in, recognise the second transfer is smaller by 12M and complete almost instantly?
Does this only work when the second transfer is larger than the first? Is the difference between the two files not significant enough, therefore confusingly transferring the whole file? Or is it in fact working and my test isn't good enough?

My initial gut response is that rsync's default block size that it uses for the delta comparisons is going to be longer than one line of your file. (Of course that'd depend on how long your lines are, but I'm assuming something reasonably short, like in a logfile or source code.) In that case, since every line is different (missing one character), every block is going to be different, so all the time it takes trying to do comparisons ends up wasted, hence the slower operation.

If you were to re-run your experiment with a rand3.txt, which is the same as rand1.txt but missing a few dozen lines from random places in the file, I'd expect that scenario to go a lot faster, since most of the (multi-line) blocks would indeed be identical, and rsync's smarts would kick in and save you a lot of transfer time.

I think you can even set the block size in the rsync option flags, if you have an idea of what size would work best with your particular data set.
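A hedged sketch of that follow-up experiment (rand3.txt and the 10,000-line interval are made up; the transfer command mirrors the post above):

```shell
# Small stand-in for the big random file, so the sketch itself runs.
seq 1 100000 > rand1.txt

# Drop every 10,000th line: most of the file stays byte-identical,
# so whole blocks should still checksum-match on the receiver.
awk 'NR % 10000 != 0' rand1.txt > rand3.txt

# Then re-run the transfer, optionally pinning the checksum block size:
#   rsync -avhP --block-size=4096 rand3.txt 192.168.0.5:/home/local/Desktop/dst/rand.txt
```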


e: vvvv --- Same idea with more detail below.

Powered Descent fucked around with this message at 02:55 on Oct 5, 2015

evol262
Nov 30, 2010
#!/usr/bin/perl
No. rsync is not gzip (though using compression may help). With highly compressible text files (check with gzip, but the savings are probably really significant, since text is where compression excels), just piping tar+gzip over ssh or using rsync -z may be an order of magnitude faster.

It's checksumming the destination, then figuring out which blocks differ and sending those. Since you're not using --block-size to specify (and I don't know how many lines are in that file), I'm assuming that the checksum of every single block is probably changed, and it's resending the entire thing.

Try removing one character from one line and sending it (not every line), or looping through with fseek and removing them from known intervals, then specifying a --block-size=...

Or use rsync -vv --debug=deltasum (or --debug=all if you don't mind slogging through a ton of stuff) to watch what it's doing.

Experto Crede
Aug 19, 2008

Keep on Truckin'
I've just gotten myself a dedicated box to learn some more about virtualisation and such, and I'm just wondering which hypervisor running on a Linux host would be considered best in terms of a good automation system. At work we use xen but I'm curious to learn more about different systems and how to automate them.

Docjowles
Apr 9, 2009

Experto Crede posted:

I've just gotten myself a dedicated box to learn some more about virtualisation and such, and I'm just wondering which hypervisor running on a Linux host would be considered best in terms of a good automation system. At work we use xen but I'm curious to learn more about different systems and how to automate them.

If you want a Linux hypervisor, your options are basically Xen and KVM. And if you don't want Xen... :v:

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
In terms of automation, you're going to be managing either KVM or Xen through libvirt anyway, so it should be pretty trivial to switch between the two.

On the other hand, oVirt supports KVM exclusively, and you'll probably want to give that a shot if you want something sane to manage.

Experto Crede
Aug 19, 2008

Keep on Truckin'
Okay, I've decided to go with kvm and seem to be making good progress, but I'm a bit confused by the networking (to clarify the host is a centos 6 box).

I have a /28 range from the server provider which has been setup on their end, and I can add the IPs to the nic and it works fine. However I want to be able to assign these IPs to guests.

Now I know this would be done via a bridge and setting the IP in the network config files of the guest, but setting up the bridge properly is where I'm a bit confused.

Do I need to create a bridge per IP and then assign them to each guest? Do I create one bridge with every IP in it or one that covers the whole /28 range? Or am I way off here?

other people
Jun 27, 2004
Associate Christ

Experto Crede posted:

Okay, I've decided to go with kvm and seem to be making good progress, but I'm a bit confused by the networking (to clarify the host is a centos 6 box).

I have a /28 range from the server provider which has been setup on their end, and I can add the IPs to the nic and it works fine. However I want to be able to assign these IPs to guests.

Now I know this would be done via a bridge and setting the IP in the network config files of the guest, but setting up the bridge properly is where I'm a bit confused.

Do I need to create a bridge per IP and then assign them to each guest? Do I create one bridge with every IP in it or one that covers the whole /28 range? Or am I way off here?

Create a single bridge on the hypervisor and assign an IP to that bridge. Attach a physical interface or bond or vlan to the bridge. Attach all the VM net devices to the same bridge. The VMs are now effectively "plugged in" to the same network as the hypervisor and you can assign your other IP addresses inside the VMs. Think of the bridge device as a switch as that is how it behaves; there is a MAC address table and it can do STP, etc...
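The steps above can be sketched with the classic tools (a hedged sketch: br0/eth0 and the 203.0.113.0/28 addresses are placeholders, not values from the thread):

```shell
#!/bin/sh
# CentOS 6 era bridge setup, done by hand for illustration.
brctl addbr br0                        # create the "switch"
brctl addif br0 eth0                   # plug the physical NIC into it
ip addr add 203.0.113.2/28 dev br0     # the hypervisor's own IP rides on br0
ip link set eth0 up
ip link set br0 up
# libvirt then attaches each VM's tap device to br0; every guest
# configures one of the remaining /28 addresses inside the VM itself.
```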

Experto Crede
Aug 19, 2008

Keep on Truckin'

Kaluza-Klein posted:

Create a single bridge on the hypervisor and assign an IP to that bridge. Attach a physical interface or bond or vlan to the bridge. Attach all the VM net devices to the same bridge. The VMs are now effectively "plugged in" to the same network as the hypervisor and you can assign your other IP addresses inside the VMs. Think of the bridge device as a switch as that is how it behaves; there is a MAC address table and it can do STP, etc...

That makes sense, but how do I go about handling multiple IPs that way? Adding one IP to the bridge won't help with access to the other ones, will it?

other people
Jun 27, 2004
Associate Christ

Experto Crede posted:

That makes sense, but how do I go about handling multiple IPs that way? Adding one IP to the bridge won't help with access to the other ones, will it?

The bridge is a switch. Your VM interfaces are "connected" to ports on the bridge, as is your hypervisor's physical interface. Everyone is in the same broadcast domain and can communicate at layer 2.

Much like a switch, the bridge itself doesn't know/care about IPs other than the hypervisor's IP (the bridge br0 device is the first "port" of the bridge).

I can make a pretty diagram after I get to work but you'll probably find something nice on the libvirt web site.

evol262
Nov 30, 2010
#!/usr/bin/perl

Experto Crede posted:

That makes sense, but how do I go about handling multiple IPs that way? Adding one IP to the bridge won't help with access to the other ones, will it?

You put some IP (usually the IP that used to be on eth0 or whatever device you want bridged) on the bridge, then guests handle their own.

Just don't forget an iptables rule for physdev-is-bridged
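For the physdev rule evol262 mentions, a sketch (the ACCEPT policy is my assumption; the match itself is the stock iptables physdev module):

```shell
# Bridged frames get shoved through the FORWARD chain when
# net.bridge.bridge-nf-call-iptables=1, so either accept them explicitly...
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
# ...or tell the kernel not to filter bridged traffic at all:
#   sysctl -w net.bridge.bridge-nf-call-iptables=0
```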

Experto Crede
Aug 19, 2008

Keep on Truckin'
It seems that adding the bridge breaks networking.

This is what /etc/sysconfig/network-scripts/ifcfg-eth0 looks like:

code:
DEVICE=eth0
BOOTPROTO=static
IPADDR=$ipv4
NETMASK=255.255.255.0
ONBOOT=yes
GATEWAY=$gateway
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6ADDR=$ipv6
BRIDGE=br0
And /etc/sysconfig/network-scripts/ifcfg-br0 is:

code:
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
IPADDR=$ipv4
NETMASK=255.255.255.240
GATEWAY=$gateway
With the ipv4 address on the bridge being from the separate range. Logs don't seem to indicate a problem with the interfaces coming up, it just doesn't communicate on the network when it's done like this.

I assume I'm missing something very obvious here but I'm not sure what.

evol262
Nov 30, 2010
#!/usr/bin/perl

Experto Crede posted:

It seems that adding the bridge breaks networking.

This is what /etc/sysconfig/network-scripts/ifcfg-eth0 looks like:

code:
DEVICE=eth0
BOOTPROTO=static
IPADDR=$ipv4
NETMASK=255.255.255.0
ONBOOT=yes
GATEWAY=$gateway
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6ADDR=$ipv6
BRIDGE=br0
And /etc/sysconfig/network-scripts/ifcfg-br0 is:

code:
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
IPADDR=$ipv4
NETMASK=255.255.255.240
GATEWAY=$gateway
With the ipv4 address on the bridge being from the separate range. Logs don't seem to indicate a problem with the interfaces coming up, it just doesn't communicate on the network when it's done like this.

I assume I'm missing something very obvious here but I'm not sure what.

ifcfg-eth0 should not have any IP addressing information.

Experto Crede
Aug 19, 2008

Keep on Truckin'

evol262 posted:

ifcfg-eth0 should not have any IP addressing information.

ifcfg-br0 is now:

code:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=$ipv4
NETMASK=255.255.255.0
GATEWAY=$gateway
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6ADDR=$ipv6
ifcfg-eth0:

code:
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
BRIDGE=br0
But I still don't seem to have any access to it via the network, either ipv4 or ipv6. Is there anything obviously wrong there? I can't spot it.

evol262
Nov 30, 2010
#!/usr/bin/perl

Experto Crede posted:

ifcfg-br0 is now:

code:
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=$ipv4
NETMASK=255.255.255.0
GATEWAY=$gateway
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6ADDR=$ipv6
ifcfg-eth0:

code:
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
BRIDGE=br0
But I still don't seem to have any access to it via the network, either ipv4 or ipv6. Is there anything obviously wrong there? I can't spot it.

You should follow this, including the bridge netfilter sysctls and iptables rules.

Experto Crede
Aug 19, 2008

Keep on Truckin'

evol262 posted:

You should follow this, including the nftables sysctls and iptables.

Thanks a lot! Strangely though, I did a reinstall on the server just to be sure I had a clean slate, added the second set of config files in again (IP stuff in br0, barebones eth0), and it worked. Probably a result of some previous tinkering?

other people
Jun 27, 2004
Associate Christ

Experto Crede posted:

Thanks a lot, though strangely though, I did a reinstall on the server just to be sure I had a clean slate and added the second set of config files in again (IP stuff in br0, barebones eth0) and it worked. Probably a result of some previous tinkering perhaps?

The ifcfg files are not the state of the network configuration. Always use tools like ip to see what is actually in place.

The network service scripts are dumb and rely on the ifcfg files to tell them what to do. They have no understanding of the network state. So, if you have configuration A in place and then edit the ifcfg files to some new configuration B, when you do an ifdown or service network restart, the scripts only know to try to undo configuration B. This may or may not leave parts of configuration A in place.

I would guess your old system ended up with the same IP assigned to both the physical device and the bridge after all your adjustments.
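In other words, trust the live state, not the files; for example:

```shell
# What the ifcfg files *intend* vs what is actually configured:
ip addr show      # every address really assigned right now
ip route show     # the routing table actually in effect
brctl show        # which interfaces are truly enslaved to which bridge
```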

Xenomorph
Jun 13, 2001
I have a system having WiFi issues. ThinkPad running Trisquel Linux, GUI boot (Gnome3, I believe).

WiFi drivers are loaded (Atheros / ath9k). lsmod and lspci list the WiFi device.

On boot, WiFi is down, and Network Manager cannot toggle it.

As root, "ifconfig wlan0 up" brings up the WiFi interface, and the WiFi LED lights up. root can then use "iwlist scan" to list nearby SSID/networks.

However, regular users still cannot access the WiFi, and Network Manager still cannot manage it.

What do I need to check to see why this isn't getting enabled on boot, or why Network Manager and non-admins cannot access WiFi?

evol262
Nov 30, 2010
#!/usr/bin/perl
Check rfkill.

Xenomorph
Jun 13, 2001

evol262 posted:

Check rfkill.

rfkill wasn't even installed!

I installed it, but it said nothing was blocked.

I think I figured out the issue (maybe). Older versions of Network Manager had issues with 802.1X security. It would expect a certificate, even if you told it not to (for example: https://bugs.launchpad.net/ubuntu/+source/network-manager-applet/+bug/1104476).

As a work-around, the user had set up wpa_supplicant and manually configured text configs to handle WiFi. The SSID, security, username & password was set up that way. On system start, it would kick in and connect.

There was a big update for their system recently. I'm guessing Network Manager was part of that update. Its installation may have overwritten or conflicted with something that worked fine before. dmesg had NetworkManager giving errors on /etc/network/interfaces.

As a test, I commented out the manual WiFi config in the /etc/network/interfaces file and restarted the system. NetworkManager loaded, and the WiFi LED came on. I tried to connect to the 802.1X/enterprise WiFi through the GUI, and it worked.

So, some hack to fix a bug stopped working when the bug was finally fixed. Maybe.

Not Wolverine
Jul 1, 2007

Xenomorph posted:

So, some hack to fix a bug stopped working when the bug was finally fixed. Maybe.

Arch_Linux.docx

evol262
Nov 30, 2010
#!/usr/bin/perl

Crotch Fruit posted:

Arch_Linux.docx

Somebody maintaining their own lovely patchset on top of upstream that breaks when upstream updates? That's literally debian.txt (or ubuntu).

reading
Jul 27, 2013

telcoM posted:

1.) If you are using GPT partitioning, the important flag would be "esp". But if I'm reading the UEFI standard docs right, then it should not be necessary to use GPT on removable media - any FAT32 with the old MBR partitioning should do.

2.) The rule for GPT partitioned disks to be UEFI bootable is: it should have a FAT32 partition flagged as "esp" and in it, there should be a bootloader exactly at \EFI\BOOT\BOOTX64.efi.

For removable media, the GPT requirement can be waived, so the important thing is to have the bootloader binary with the right name in the right path. FAT32 is supposed to be case-insensitive, but there have been case-sensitive UEFI firmware implementations, so you may have to try all-uppercase and all-lowercase variations, too. And since your removable media now contains a GPT partition table, the firmware might want to see the "esp" flag too.

You might want to try with a USB stick with a plain old MBR partition table and a single FAT32 partition. Don't mark it as bootable or anything, just put the bootloader (or, for testing, perhaps an EFI shell, here) to \EFI\BOOT\BOOTX64.efi.

3.) If the firmware menus mention Secure Boot, you'll want to turn that off, at least for the installation.

Thanks for this. I was able to get a Kubuntu install stick working with GPT partitioning, the "boot" flag set, and Unetbootin putting the /EFI/BOOT/BOOTX64.efi file there. But after booting into the Kubuntu installer, it worked for an hour and then crashed because of trouble writing to the Liva's eMMC.

I couldn't get any other distro to boot, BSD or Linux. I'll keep working on this.

Hypnobeard
Sep 15, 2004

Obey the Beard



I'm responsible for a couple of RHEL6.7 servers here at work. Security has recently asked that we move five directories (/tmp, /var, /var/log, /var/log/audit, and /home) to separate partitions.

I've gotten the space allocated and the logical volumes created for each directory (appropriately sized), but I'm a bit stumped as to the procedure for safely moving the data from the existing directories to the new location and then ensuring the mount points move.

Can anyone point me at the appropriate documentation, or give me a rundown? Thanks.

evol262
Nov 30, 2010
#!/usr/bin/perl

Hypnobeard posted:

I'm responsible for a couple of RHEL6.7 servers here at work. Security has recently asked that we move five directories (/tmp, /var, /var/log, /var/log/audit, and /home) to separate partitions.

I've gotten the space allocated and the logical volumes created for each directory (appropriately sized), but I'm a bit stumped as to the procedure for safely moving the data from the existing directories to the new location and then ensuring the mount points move.

Can anyone point me at the appropriate documentation, or give me a rundown? Thanks.

Change fstab to reflect the new volumes. Bring it up in single user mode, or write a dracut module (I can probably give you basic code for one if you want) to move /var/log and /var/log/audit before any of the normal logging/auditing starts, unless you don't mind losing a little bit. Use tar or rsync's archive option (or cp -p, maybe, depending on what you've got going on there) to move stuff over.
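A sketch for one of the five mount points (/home shown; the /dev/vg0/home LV name is an assumption, adjust to your volume group):

```shell
#!/bin/sh
# Mount the new LV somewhere temporary and copy with attributes intact.
mkdir -p /mnt/newhome
mount /dev/vg0/home /mnt/newhome

# -a keeps perms/owners/timestamps/symlinks; -X and -A keep SELinux
# contexts and ACLs, which matter on RHEL.
rsync -aXA /home/ /mnt/newhome/

umount /mnt/newhome
# Add a line like this to /etc/fstab, then mount it over the old path:
#   /dev/vg0/home  /home  ext4  defaults  1 2
mount /home   # the new LV now shadows the old on-disk copy
```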

ToxicFrog
Apr 26, 2008


evol262 posted:

Change fstab to reflect the new volumes. Bring it up in single user or write a dracut module (I can probably give you basic code for one if you want) to move /var/log and /var/log/audit before any of the normal logging/auditing starts, unless you don't mind losing a little bit. Use tar or rsync's archive option (or cp -p, maybe, depending on what you've got going on there) to move stuff over

Probably want cp -a rather than -p, but rsync -a is better than either, so

RyuHimora
Feb 22, 2009
Shell script can't find file on disk, manually entering the command at the shell works

I'm using this command to find the local IP on a PXE-booted Ubuntu XFCE image and compare it to a text file, then get remmina arguments to run on boot. The machines have a DHCP reservation for that address, so they should always have the right IP.
code:
remmina `ifconfig eth0 | grep 'inet addr:' | cut -d: -f2 | awk '{ print $1}' | fgrep -f - /usr/iptable.txt | cut -d: -f2`
After it's done the searching and cutting, the backtick'd portion will return something like this and then the line executes:
code:
remmina -c /usr/rdp/1992840438394.remmina
This has been working fine in a shell script that runs at boot for months. However, today I made a new Squashfs file and pushed it out, and the shell script I use to execute the command just returns File Not Found. But when I manually enter the arguments into the shell, it executes with no problem. The exact same command works fine on the old Squashfs file as part of the startup shell script, but I can't figure out what's different between the two systems. I've checked the permissions of /usr/rdp, recreated the script and the directories, and tried running the script from the old system on the new one, all to no avail. I don't remember needing to do anything special to get this running, it just stopped working. Does anyone know what I hosed up?

RyuHimora fucked around with this message at 00:41 on Oct 14, 2015

evol262
Nov 30, 2010
#!/usr/bin/perl
First, I'd suggest modifying your script to use "ip addr show eth0" instead. But beyond that, are you sure remmina is in the PATH for that script?

It's in a script. Try breaking it out of the backtick interpolation into a real function that returns the value, then set -x, at least until you figure out what's having issues.
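One hedged way to act on both suggestions: break the backticks into named steps, parse ip instead of ifconfig, and call remmina by full path (the /usr/bin/remmina path is a guess; verify it with command -v remmina):

```shell
#!/bin/sh
set -x   # trace each step to the boot log while debugging

# iproute2 replacement for the ifconfig|grep|cut pipeline.
addr=$(ip -4 addr show dev eth0 | awk '/inet /{sub(/\/.*/, "", $2); print $2}')

# Same lookup as before, but in steps so the failing one is visible.
profile=$(fgrep "$addr" /usr/iptable.txt | cut -d: -f2)
/usr/bin/remmina -c "$profile"
```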

DeaconBlues
Nov 9, 2011
<moved>

DeaconBlues fucked around with this message at 04:18 on Oct 14, 2015


Hypnobeard
Sep 15, 2004

Obey the Beard



evol262 posted:

Change fstab to reflect the new volumes. Bring it up in single user or write a dracut module (I can probably give you basic code for one if you want) to move /var/log and /var/log/audit before any of the normal logging/auditing starts, unless you don't mind losing a little bit. Use tar or rsync's archive option (or cp -p, maybe, depending on what you've got going on there) to move stuff over

How do I access the old locations if they're mounted on the new (currently empty) partitions? Does single-user mode enable this somehow?
