telcoM
Mar 21, 2009
Fallen Rib

Sointenly posted:

Any trick to installing Ubuntu 10.04 on a hardware RAID 0 array?

I configured the array in the BIOS, but when I get into the OS install it still sees the drives as two separate disks. Of course, at first I thought maybe my controller wasn't supported, but after doing a ton of reading that doesn't seem to be all that common of a complaint.

Your "hardware RAID" may actually be a "BIOS RAID" (sometimes also known as "fake RAID"). This became rather common at about the the time when SATA was introduced. Shortly after that, one of the Linux SATA developers wrote this FAQ document (note the trend):

https://ata.wiki.kernel.org/index.php/SATA_RAID_FAQ

As indicated at the bottom of the above-mentioned page, Linux *can* use some types of BIOS RAID, using a small bit of software called "dmraid". The way to write data on a RAID 0, RAID 1 or RAID 5 array is actually pretty standard; only the metadata that describes the array composition is manufacturer-specific. The dmraid software is just a small program that can read various manufacturer-specific RAID metadata formats and feed the necessary information to the tried-and-tested Linux software RAID code that does the actual work.
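If you want a quick way to see whether dmraid recognizes your controller's metadata, the dmraid tool itself can tell you. This is just a sketch, assuming the dmraid package is available in your installer/live environment:
code:
dmraid -r     # list disks carrying RAID metadata and the format detected on them
dmraid -s     # show the discovered RAID sets and their status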

For Ubuntu, this might be the document you need:
https://help.ubuntu.com/community/FakeRaidHowto

But if you aren't planning to dual-boot, it might be easier to set up the system as a traditional Linux software RAID instead: the Linux OS installers usually have better support for straight-up software RAID than for dmraid.

Edit: Oh, the FakeRaidHowto only covers old versions of Ubuntu. Still, it has a good description of what is going on and why.

telcoM fucked around with this message at 19:34 on Sep 1, 2010

telcoM
Mar 21, 2009
Fallen Rib

fletcher posted:

I hadn't checked /var/log/btmp for a long time...

code:
[fletch@x ~]# lastb | wc -l
39743
Is it normal to have so many in a ~1 year timespan? Why doesn't it block an IP after a certain number of failed attempts?

If your system allows connections from the Internet, then yes. These attempts are mostly made by automated programs (intrusion tools or autonomous malware like worms and viruses).

If your system automatically blocked an IP address after a number of failed attempts, that would give an attacker an easy way to lock you out of your own system: find the address you're connecting from, then send fake login attempts that appear to come from that address. You would have no clue this was happening until you found yourself locked out of your own server. :smith:

Or send fake login attempts claiming to be from your ISP's DNS servers, and suddenly it will look like your internet is broken. :supaburn:

Of course, there are tools that can implement auto-blocking, but they are generally not enabled by default, because you must understand the consequences of auto-blocking before activating it. It's a case of "read the documentation first".
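fail2ban is one such tool. A minimal sketch of what a jail for SSH might look like - treat the jail name, log path and option names as illustrative, since they vary by distribution and fail2ban version:
code:
# /etc/fail2ban/jail.local (hypothetical example)
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 600
# Whitelist the addresses you connect from, so you cannot lock yourself out:
ignoreip = 127.0.0.1/8 203.0.113.10
The ignoreip line is exactly the kind of "understand the consequences first" detail mentioned above.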

telcoM
Mar 21, 2009
Fallen Rib

evol262 posted:

You'll probably want to actually configure ldap.conf, but does this look like a problem to you?

<SSL certificate not trusted>

What if you don't bind with TLS/SSL?

Is there a cert out there somewhere from your university that you can grab?

You can get the certificate of just about any TLS/SSL service (i.e. the publicly-accessible part of it, not the private key) using the "openssl s_client" command. This is useful if you need to set a certificate as trusted, but you don't know where to find the certificate.

For example, in this case you might use:
code:
openssl s_client -connect ldap.university.org:636 </dev/null
(The "</dev/null" part is because you probably don't actually want to start typing in LDAP protocol messages manually: we're only interested in the SSL/TLS session set-up here.)

You'll get a long output, which will include something like this in the middle:
code:
Server certificate
-----BEGIN CERTIFICATE-----
MIIGEDCCA/igAwJBAgIBCjANBgkqhkiG9w0BAQUFADCBmzEkMCIGA1UEAxMbTWF0
dGkgS3Vya2VsYSBQcml2YXRlIEMBIEcyMQ4wDAYDVQQIEwVFc3BvbzELMAkGA1UE
<several more lines of alphabet soup here>
-----END CERTIFICATE-----
This is the server's SSL/TLS certificate in PEM format: you can copy/paste it to wherever you need it.
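If you want just the certificate block without the rest of the s_client chatter, a sed one-liner can cut it out for you (the output filename here is just an example):
code:
openssl s_client -connect ldap.university.org:636 </dev/null 2>/dev/null \
  | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > ldap-server.pem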

Here are some instructions for setting up a certificate as trusted for OpenSSL and all the tools that use the OpenSSL library (including OpenLDAP tools and PHP):

http://gagravarr.org/writing/openssl-certs/others.shtml

telcoM
Mar 21, 2009
Fallen Rib

kyuss posted:

Question 2: I have a beefy DB2 (Linux) server at work that performs abysmally, and I may be tasked to fix this some day. It's a setup from last year, with 16 SAS disks configured as RAID6 and ample RAM. However, its response times are considerably slower than the old system it is supposed to replace, with virtually no load on it.

spankmeister posted:

Blame the DBA.
Snarky but pretty much true: the configuration of the database itself can have a huge effect on its performance. For example, if the old database includes indexes that are appropriate for the most common queries and the new database has no indexes at all generated yet, that could easily drag the performance of the new system down to the dirt.

For a serious analysis, more information would be good. What is the type/model of the RAID controller? Is it a real hardware-accelerated RAID controller, or is RAID6 implemented at the driver level, with the hardware being just a "basic" SAS controller?

If it is a real hardware RAID controller, does it include a write cache unit?
A hardware RAID write cache includes some amount of very fast RAM, and typically either a back-up battery or a set of capacitors and Flash memory chips to protect the cached data if the system suddenly loses power. At least on HP Proliant servers, such a cache unit tends to be optional, but leaving it out can dramatically reduce the performance of the RAID controller.

What's the access pattern of your application like? In other words, what is the use of the database like?
  • write-mostly, with only infrequent queries (= a write cache unit would help a lot)
  • read-mostly, with only some writes/updates here and there (= improperly-configured indexes would cause extreme suckitude)
  • reading and writing about equally

How is the disk space allocated? You said you have 16 disks - are they configured as one big RAID6 set, or as two or three sets according to purpose: one set for data, another for archive logs, and maybe a third for indexes?

Optimally, you'll want an independent RAID set for logs, so that the read/write heads can spend most of their time near the area where the last log entry was written (since the next one will usually be written immediately after it), and as many read/write heads as possible for the data and indexes, so that there will be more opportunities to parallelize multiple operations.

telcoM
Mar 21, 2009
Fallen Rib

Kaluza-Klein posted:

code:
Chain INPUT (policy DROP 176 packets, 8786 bytes)
 pkts bytes target     prot opt in     out     source               destination         

[...]

 2920  528K ACCEPT     all  --  eth0   *       0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
  206 10540 tcp_inbound  tcp  --  eth0   *       0.0.0.0/0            0.0.0.0/0           
    0     0 udp_inbound  udp  --  eth0   *       0.0.0.0/0            0.0.0.0/0           
    0     0 icmp_packets  icmp --  eth0   *       0.0.0.0/0            0.0.0.0/0 
[...]          

You would need to duplicate these four rules for your VPN tunnel device (tun0) too, otherwise the decrypted traffic incoming from tun0 gets dropped by the default DROP policy.

When traffic comes in through a VPN tunnel, first an encrypted VPN protocol packet comes in from the regular network device (eth0 here, I guess) and goes to the network socket of the VPN software. The VPN software then unwraps the VPN headers and decrypts the packet, which is handed back to the OS's network code as an incoming packet from the tun0 device.

As far as iptables is concerned, tun0 is a real network device, completely separate from eth0. Only the VPN software knows that the encrypted traffic via eth0 and the decrypted traffic via tun0 are related to each other.
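Duplicating the rules is just a matter of repeating them with -i tun0. A sketch based on the chain names in your listing (adjust to however your ruleset is actually generated, e.g. if a firewall tool writes these rules for you):
code:
iptables -A INPUT -i tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A INPUT -i tun0 -p tcp  -j tcp_inbound
iptables -A INPUT -i tun0 -p udp  -j udp_inbound
iptables -A INPUT -i tun0 -p icmp -j icmp_packets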

Kaluza-Klein posted:

code:
Chain FORWARD (policy DROP 16 packets, 960 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   39  2420            all  --  tun0   eth0    0.0.0.0/0            0.0.0.0/0
[...]           

This FORWARD rule is also suspect: since it has no target specified, it has no effect other than counting the amount of data matched by the rule. This probably is not what you want.

telcoM
Mar 21, 2009
Fallen Rib

GregNorc posted:

So I just got a linux machine at work. It runs redhat 6.1, and Firefox is way out of date in the repos. So I manually downloaded it from Mozilla, but I can't figure out where to put the resulting folder containing the binary, profile, etc.

I think that having it chill in my downloads folder is not the right place

According to the standard Linux/Unix filesystem layout, /usr/local is the place for stuff like that. /opt might also be a valid choice.

Move the folder to /usr/local/firefox, then set up desktop icons/menu entries for it as you prefer. You might also want to create a symbolic link for the main firefox binary in /usr/local/bin:

ln -s /usr/local/firefox/firefox /usr/local/bin/firefox

Most distributions have /usr/local/bin listed in $PATH by default, often before /usr/bin. If that's true for yours, any application that wants to start Firefox by executing a command like "firefox <some_URL>" should now start your shiny new Firefox instead of the old one from the repos.
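You can verify which binary actually wins with a quick check:
code:
type -a firefox     # lists every "firefox" found along $PATH, in the order bash would use them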

telcoM
Mar 21, 2009
Fallen Rib

NOTinuyasha posted:

So I'm dealing with a Debian server which has a mdadm RAID1 with sda/sdb, md0 is boot md1 is swap (deal with it) and md2 is root.

sdb recently popped out for whatever reason so I faulted the disks, removed them, and re-added [...]

This appears to have worked for md0 and md1 but it seems to feel that sdb4 is a 'spare'

I guess it's because md1 is still syncing. Perhaps the RAID1 logic is smart enough to know that syncing multiple partitions of the same disk simultaneously would cause silly amounts of overhead (seeking back and forth), or the number of simultaneous on-going sync operations is being restricted.

Every time a disk is hot-added to a RAID set, it begins its life as a spare disk. It should automatically switch to being a regular RAID1 set member as soon as it begins syncing.
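You can watch the sync queue and progress while you wait; once the earlier arrays finish, the "spare" should start rebuilding on its own:
code:
cat /proc/mdstat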

telcoM
Mar 21, 2009
Fallen Rib

Galler posted:

I had a problem with the usb subsystem not loading and after trying a dozen or so fixes I found one that worked.
code:
sudo umount /proc/bus/usb
sudo mount -n -t usbfs /proc/bus/usb /proc/bus/usb -o devgid=501,devmode=664
The problem is I have to run it every time I reboot, which isn't often, and I know there must be an easy way to automate this.

I guess 501 is the primary GID (group ID) number of your regular user account?

You are solving the problem by making all the USB devices owned by your personal group. While it works, it probably isn't the preferred way of solving this problem.

A user can belong to multiple groups simultaneously; a file can belong only to a single group at a time (without the use of ACLs, which is often considered an "advanced" technique and not usually required for normal use).

I don't have a Fedora system at hand at the moment, but I'd guess the "standard" solution to this problem would be to add your user account to the group that holds the USB devices normally.

After rebooting but before applying your fix, run "ls -l /proc/bus/usb" and make note of the group name. (If the default permissions don't allow group write access to files within /proc/bus/usb, then Fedora doesn't use this scheme and my advice is in error.)

Then run:

code:
sudo gpasswd -a <your_username> <group_name>
(obviously replace <your_username> and <group_name> as appropriate)

This adds <group_name> as a secondary group for your user account.
(You can belong to multiple groups, but only one of them at a time is the "primary" group; all the rest are "secondary" groups. When you create new files, the primary group is what is set as the group owner of the file. Otherwise, all groups are equal.)

Then log out and back in to make the change take effect. You should now be able to use the USB devices without your fix.

The reason why adding the corresponding line to /etc/fstab doesn't work is probably that the USB device filesystem gets mounted very early in the boot process, while the system is still running off a pre-packaged mini-filesystem stored in the initrd/initramfs file (which is stored in /boot).

The startup scripts within the initrd file might even mount /proc/bus/usb using explicit mount options, so even updating the initrd (so that an updated copy of /etc/fstab that includes your changes will be included in it) won't necessarily change the outcome.

Many Linux distributions have some pre-defined groups like this: adding your user account to one of them might allow you to use a CD/DVD burner, another to fully access USB devices, and so on. But don't just go adding yourself to every group your system has: some groups exist to provide isolation to certain system processes, and adding random users to those groups could weaken the security of your system.

telcoM
Mar 21, 2009
Fallen Rib

Galler posted:

Turns out umount isn't necessary as there's nothing there to umount in the first place.

As Longinus00 said, /proc/bus/usb is an old way to do things: you will probably find the newer equivalent at /dev/bus/usb.

If you use some software that still uses /proc/bus/usb, you might want to check if an updated version is available. If you must stick with the current version, feel free - that's why /proc/bus/usb still exists as an optionally-mountable pseudo-filesystem, after all. But remember that /proc/bus/usb will probably be removed as redundant at some future date, so you may have to find a new solution if you must run your current application on some newer Linux version.

Meanwhile, since it turned out Fedora does not use /proc/bus/usb by default, the simplest way to apply your current fix would be to place your mount command (without the sudo, but otherwise as-is) in /etc/rc.local: it's a script that is run automatically at the end of system startup, and the easiest place to add simple things to the startup procedure.

Another way (a bit "more professional" if you will) would be to add it to /etc/fstab... but I think you said you tried it before without success?

For /etc/fstab, you'll need to rearrange the mount parameters a bit. The line to match your current mount command would be:
code:
/proc/bus/usb /proc/bus/usb usbfs devgid=501,devmode=664 0 0

telcoM
Mar 21, 2009
Fallen Rib

Green Puddin posted:

I was going to assign a drive letter to [the big fat32 partition] in Windows but it already did that for me automagically, so now I'm facing this issue;

I'd like to have the fat32 partition mount automatically on Ubuntu, and I'm not exactly sure on how to go about that.

Picture just so you can see what the hell I'm talking about :



Looks like Ubuntu has automatically mounted the partition to /media/E0F9-3408 already.

The "E0F9-3408" looks like the serial number that is automatically assigned to each fat32 filesystem at creation time, so I guess you did not name the fat32 partition in Windows when you created it. If the filesystem had a name assigned to it, I guess Ubuntu would automatically mount it at /media/<filesystem_name> instead.

Disclaimer: I run Debian but the last time I even touched an Ubuntu system was about two years ago.

If you don't want to label the filesystem in Windows but don't like the /media/E0F9-3408 path, there are ways to fix that. The laziest way is just to make a symbolic link using the location and name of your choice, pointing to the /media/<serial number> directory. For example, if you want to reach your storage partition at /S, you could do this:
code:
sudo ln -s /media/E0F9-3408 /S
The "proper"/old-school way to do it would be to add a line in /etc/fstab that specifies exactly where you want the filesystem (and for fat32, the file permissions too: since a fat32 filesystem cannot store Unix-style file owners or permissions, the filesystem driver must sort of make them up).

To specify the file permissions, you'd need to know the user and group numbers (known as UID and GID) associated with your user account. This is simple: just run "id". If your user account is the first one created on the system, the UID and GID will probably both be 1000 for you. Let's assume you want to mount the filesystem at /storage, and don't mind if other users on your system can read the storage partition but want to stop them from writing to it.

So, you would use sudo and your favorite text editor to add a line like this to your /etc/fstab file:

code:
/dev/sda3 /storage vfat defaults,uid=1000,gid=1000,dmask=002,fmask=113 0 2
(Note: no spaces after the commas. This is an important detail.)

This means: "I want /dev/sda3 mounted at /storage. The filesystem type is one of the (V)FAT family. The owner of the files in the filesystem should be user 1000, group 1000. The permissions for directories should be drwxrwxr-x, and for regular files -rw-rw-r--. I'm not going to use the ancient "dump" program to back it up, but by all means check the filesystem at boot time if it looks broken, same as any other non-root filesystems."

Once you've written this line, you'll need to do one more thing: you should create an empty directory at /storage, to act as a mount point for the filesystem. Then you can unmount the filesystem from the old automatic location (and clean up its mount point) and remount it to the new one, or just reboot.

code:
sudo mkdir /storage

sudo umount /media/E0F9-3408
sudo mount /storage
sudo rmdir /media/E0F9-3408

telcoM
Mar 21, 2009
Fallen Rib

FISHMANPET posted:

I've got a .bash_profile at work that simply sources my .bashrc, where all the magic happens. My .bashrc also sources my .bash_aliases file that has aliases, as well as a few functions (because as far as I know bash aliases can't put inputs in the middle of the alias). This all used to work just fine, but I can't login graphically to Ubuntu 11.04 machines. When I check my .xsession_errors file, gnome is complaining about the functions. Once I comment out the functions, I can login graphically.

How should I be reshuffling my bash files so I can have my functions and login to machines?

Wrap everything Gnome complains about in a if...then...fi block like this:

code:
if [[ -t 0 ]]
then
    # here you put the stuff Gnome does not like
fi
The "if [[ -t 0 ]]" test means "if standard input is a terminal...".

Background:
When you login graphically, the first thing that is started is a shell - but this shell is not associated with any terminal device at all.
If described as a command line, it would be something like this:

/bin/bash /etc/X11/Xsession </dev/null >/dev/null 2>>$HOME/.xsession_errors

The /etc/X11/Xsession script then executes everything in /etc/X11/Xsession.d directory. One of these things is the command that starts up the GNOME desktop environment for you.

But before the shell starts running /etc/X11/Xsession, it performs all the regular pre-login actions, including .bash_profile and all that jazz. If the login scripts just assume that there is a terminal (e.g. by assuming that $TERM, $WIDTH or $HEIGHT exist, or that the terminal properties can be queried), you'll get into some trouble. At best, you just get some error messages in .xsession_errors or whatever; at worst, your graphical login will fail.

By the way, this is not the only situation where login scripts will be run without a terminal: for example, if you run a command on a remote host using "ssh remotehost some_command" (or equivalent), no terminal will be assigned for the connection on the remote host unless you explicitly request it (e.g. using "ssh -t").

So, when writing login scripts, it's a good idea to make all the aliases, functions and other "user convenience" stuff conditional, so that they will be skipped if the login shell has no terminal and therefore is not going to interact with the user directly anyway. The "if [[ -t 0 ]]" is one of the most compact ways to do this in bash. If you need to write your login script for a Bourne-type shell that is not bash, "if tty -s" does the same thing but requires one extra process to be started, as "tty" may not be a shell built-in command.
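For the record, the portable variant mentioned above looks like this - same idea, just using the external tty command:
code:
if tty -s
then
    # aliases, functions and other interactive-only conveniences go here
fi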

telcoM
Mar 21, 2009
Fallen Rib

Social Animal posted:

I'm currently using Fedora 16 (gnome3 fallback). I'm just not sure how to go about doing it as I'm pretty new to Linux in general. In the file manager I can see a network category with a browse network option. It shows up with Windows Network but it says "failed to retrieve share list from server." Is that a problem on the Windows box?

Older versions of Windows used to give a list of their shared folders to anyone who asked on the network, but I think the newer versions might be more like "identify yourself first :colbert:".

I personally prefer KDE, so I cannot tell the exact location, but somewhere in the GNOME settings there might/should be a way to configure a Windows username and password which will be presented to the Windows servers when looking for network shares. I expect at least the password field will be empty by default.

(On this Debian+KDE system, it's, slightly illogically, at System Settings -> Sharing, which you might expect to be related to sharing stuff to others, not accessing other systems' shares. Meh.)

Specify a Windows username that can be expected to make sense to the Windows box (i.e. you may need to use the form DOMAIN\username in some cases) and the password associated with it, then try again.
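If you want to test the credentials without involving GNOME at all, the smbclient tool (from Samba's client package) can ask the Windows box for its share list directly; "windowsbox" here is just a placeholder for its name or IP:
code:
smbclient -L windowsbox -U 'DOMAIN\username'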

telcoM
Mar 21, 2009
Fallen Rib

fletcher posted:


Are these values OK? This doesn't leave the box open to be turned into some spam box right? The only ports allowed through the firewall are 80 and a random high # for SSH.

Your firewall already provides one layer of protection, and this setting provides another:

quote:

IP-addresses to listen on for incoming SMTP connections: 127.0.0.1 ; ::1
.

Since you said your port 80 is open, you're probably running a website of some sort. Make sure any form submission handlers or any other web features (e.g. CGI/Perl/PHP scripts) that could be used to send email are properly protected: since these would run on your server, they could be used to bypass your firewall and SMTP listen address protections. With your current configuration, this would be the most likely way a spammer might try to abuse your server.

Some webservers and other web software packages include example scripts for sending emails from a webpage: you should assume all these default scripts are known to the spammers. Even if they are not used on your webpages, a spammer might try to submit carefully-crafted requests to utilize any unprotected scripts a webserver default configuration might have. So if you don't need those example scripts/modules/whatever, make sure they are disabled.

telcoM
Mar 21, 2009
Fallen Rib

Crush posted:

I have been reading about the differences between [..] and [[..]]. The only difference that stood out to me was that [..] is legacy and [[..]] is not (and [..] is more portable). If I am not working in older shells, is there really a reason to use [..] that maybe I missed in my prior research?

Dennis Handly, one of the veterans of the HP support forums, once posted this summary:

Dennis Handly posted:

There are three ways to compare in a real shell:

1) Arithmetic expressions: (( )), only handles numbers, not strings

2) [ ... ] or test builtin
Compound expressions are composed with -a and -o. Use of () must be quoted.
Arithmetic expressions, without (( )), can be used in arithmetic
comparison operators.
String = and != compare strings.
String < and > don't exist. (Treated as redirection.)

3) [[ ... ]]
Compound expressions are composed with && and ||. Use of () doesn't
need to be quoted.
Arithmetic expressions can't be used in arithmetic comparison operators.
String = and != match patterns.
String < and > exist.

So, there seem to be some things you must do differently depending on whether you use [..] or [[..]].

Example:
code:
$ [[ "foobar" == foo* ]] && echo true
true
$ [ "foobar" == foo* ] && echo true
$

telcoM
Mar 21, 2009
Fallen Rib

Appachai posted:

I have a number of network drives that I would like to mount using nfs. I added these lines to my /etc/init.d/nis file:

/etc/init.d/nis?? That should have nothing whatsoever to do with NFS. A strange choice, but as long as the commands and the NFS server configuration are correct, it should still work.

Appachai posted:

mount -t nfs 192.168.230.201:/home /networkhome
chmod u+s /networkhome

Unless you've specifically added the no_root_squash option at the NFS server side, the second command should fail: normally the /home directory should be owned by root, and as an NFS client, any file operations you run as root on the NFS mount point will be treated by the NFS server as if they were done by the user "nobody". And "nobody" obviously has no business flipping setuid bits on directories owned by root.

Think of it as an incentive to set up correct file ownerships and permissions.

Appachai posted:

If I use a console and type "df" I can see the hard drive mounted on /networkhome, however I can't see any files in the directory. I think this is some kind of permissions error that I don't understand.



If you have a permissions error, you should be getting an error message. If you get one, what does it say?

Please show the output of these commands on your NFS client system:
code:
rpcinfo -p
rpcinfo -p 192.168.230.201
showmount -e 192.168.230.201
showmount -a 192.168.230.201
grep networkhome /proc/mounts
(The rpcinfo commands will allow us to verify that all the necessary NFS services are active on both systems, and the showmount commands tell us what the server thinks it's supposed to allow this client to mount, and what it thinks the client currently has mounted. The last command will show the complete set of mount options applied to your NFS mount, if it actually exists.)

I'd also like to see the /etc/exports file of your NFS server system. (If Ubuntu does not have /etc/exports, then I guess the information I'm looking for might be in /etc/dfs/dfstab instead, Solaris-style.)
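For reference, an /etc/exports entry normally looks something like this (hypothetical network and options; yours will differ):
code:
/home   192.168.230.0/24(rw,sync,no_subtree_check)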

And if you type "getent hosts <IP-of-your-NFS-client-system>" on your NFS server system, does it come back with the hostname of your NFS client system?

telcoM
Mar 21, 2009
Fallen Rib

Appachai posted:

I didn't really see any when I went to the folders, but I just took a look in my /var/log/boot.log and I see this:
code:
fsck from util-linux 2.19.1
WARNING: Your /etc/fstab does not contain the fsck passno
	field.  I will kludge around things for you, but you
	should fix your /etc/fstab file as soon as you can.
This indicates your /etc/fstab syntax is currently somehow broken. I guess this might also explain why adding the NFS mounts to /etc/fstab did not work.

Each line in /etc/fstab that is not blank or commented out should follow the standard six-field format. On modern systems, the fifth field (for the ancient "dump" backup program) should practically always be zero. The sixth field (the fsck pass number) should be 1 for the root filesystem, 2 for all the other local filesystems that should be checked at boot time, and 0 for all pseudo-filesystems, NFS mounts and the like.
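In other words, every active line should have all six fields, roughly like this sketch (hypothetical devices, just to show the layout):
code:
# device                 mountpoint     type  options   dump  pass
/dev/sda1                /              ext4  defaults  0     1
/dev/sda2                /home          ext4  defaults  0     2
192.168.230.201:/home    /networkhome   nfs   defaults  0     0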

You probably should fix your /etc/fstab and reboot first, and see if your problem changes or goes away after that.

The right place for configuring NFS mounts is definitely /etc/fstab; if Ubuntu works the same as Debian here, the /etc/init.d/mountnfs.sh is the script that does the actual mounting. If you have the "chkconfig" tool installed, you can check if the various NFS components are configured to run at boot, by running "chkconfig --list |grep nfs".


Your rpcinfo listings indicate you're using NIS too. I guess that means your user accounts are using the same UID/GID numbers on all your systems. That should make setting the file permissions much easier: basically if you need two or more users to access the same files, you create a group, add the users to that group and set the directory and file permissions to allow appropriate access for that group. In a NIS environment, you probably should create the groups on the NIS master server, so that the new groups will be distributed to all the NIS clients.

I still don't see what you were trying to accomplish with that "chmod u+s /networkhome"; unlike the setgid bit, the setuid bit does nothing whatsoever on non-executables or directories on Linux, as far as I know.

Based on what you've told so far, it seems like the NFS mount operation has succeeded, but the actual file operations on the NFS-mounted file system might be failing, perhaps because something is blocking the necessary network connections.

The main NFS service is always in port 2049, but the NFS support services (status, nlockmgr, mountd and rquotad) are all by default on dynamically-determined ports, which may change as you reboot systems or restart daemons. If you are using iptables or have other firewalls in your network, this can be a pain in the rear end.

At least on Debian, you can configure fixed port numbers for NFS services, although it requires editing multiple configuration files. I guess the instructions for Debian should be mostly applicable to Ubuntu too, perhaps with some modifications.

Setting the nlockmgr port number involves adding a kernel module option; for example, you can create /etc/modprobe.d/nfslock.conf and write this line to it:
code:
options lockd nlm_udpport=4045 nlm_tcpport=4045
The status service can be configured by modifying /etc/default/nfs-common:
code:
STATDOPTS="-p 4046"
The mountd service is configured by modifying /etc/default/nfs-kernel-server:
code:
RPCMOUNTDOPTS="-p 4047"
And the rquota service by modifying /etc/default/quota:
code:
RPCRQUOTADOPTS="-p 4049"
After making the modifications, the appropriate server processes should be restarted and the lockd kernel module reloaded. It might be simpler to reboot the system.

Appachai posted:

code:
tcraig@mazama:~$ showmount -e 192.168.230.201
Export list for 192.168.230.201:
/library01    *
/usr/local    *
/mirrorbox2   *
/mnt/iscsi-2  *
/mnt/iscsi-1/ *
/home         (everyone)

Hmm, I wonder why the showmount command gives a different result for /home. The contents of /etc/exports do not quite explain that... If you have edited /etc/exports, have you run "exportfs -r" afterwards? If you haven't, you should do it.

Appachai posted:

code:
tcraig@mazama:~$ grep networkhome /proc/mounts
192.168.230.201:/home/ /networkhome nfs \ rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,\
proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.230.201,\
mountvers=3,mountport=853,mountproto=udp,local_lock=none,\
addr=192.168.230.201 0 0
Holy table breakage, Batman! (line breaks added)

This indicates you're using NFS v3, which is fine. The main NFS protocol is going over TCP, but the mountd protocol is using UDP. If you have firewalls configured, your firewall rules should match that. This also indicates you're using a default "hard" mount: it means if there's a problem accessing the server, the client will keep retrying until it's successful... and meanwhile the program that made the request will just hang. The alternative "soft" mounts are not recommended for most situations, because they may cause silent data corruption in some cases.

Your /etc/fstab line for this NFS mount should look like this:
code:
192.168.230.201:/home /networkhome nfs defaults 0 0
You could also test if you can reach the TCP port 2049 of the NFS server from your client system:
code:
telnet 192.168.230.201 2049
If it says "Connection established" then there should be no firewall issues with this particular port. If it says "Connection refused" or hangs about a minute and then reports "Connection timed out", there is a network connectivity issue, possibly a firewall configuration error.

telcoM
Mar 21, 2009
Fallen Rib

Anjow posted:

Does anyone know how, using screen on a USB serial device, I can send the Cisco break sequence?

The break signal is a generic RS-232 feature, and it's usually referred to as "sending a break".

"man screen" tells me Control-A Control-B would be the default screen key combination to "send a break to window." As each window is a PTY device, and PTY devices were originally developed to be an one-to-one replacement for actual serial port connections, that should handle the break request just fine. The rest is a question of whether the USB serial device can actually produce a break signal, and whether the driver of your USB serial device can successfully make the device do it when requested.

telcoM
Mar 21, 2009
Fallen Rib
I looked at the repository and it seemed like the sun-java6-jdk and other packages were not included in its Packages.bz2 file.

edit: Hey, it just refreshed (at 13:23 of whatever timezone) and now the Java packages seem to be there. I guess somebody noticed that the repository metadata generation was broken and fixed it. Try again now.

telcoM
Mar 21, 2009
Fallen Rib

Corvettefisher posted:

Anyone know a legal way to obtain RHEL 6?

Register to access.redhat.com, then go here to get a free 30-day evaluation subscription:
https://www.redhat.com/products/enterprise-linux/server/download.html

Once your subscription becomes active, you can download the latest ISO images direct from RedHat: they will email the instructions to you.

As far as I've understood, the rules for RHEL are:
Since RHEL is built out of GPL-licenced and other open-source software, RedHat cannot legally stop you from installing it to as many systems as you want. But their package management tools will only give you updates for as many hosts as you have a valid subscription ("license") for, no more.

And if you tried to open a support issue with RedHat while having more active hosts associated with your RedHat account than you have valid subscriptions, your case just might be routed to a RedHat sales agent first, for the purpose of correcting the discrepancy... in other words, if you violate the terms of the subscription, you'll get no support.

As long as you or your organization has even a single valid subscription, you will be able to download the ISO images here:
https://rhn.redhat.com/rhn/software/downloads/SupportedISOs.do

telcoM
Mar 21, 2009
Fallen Rib

Corvettefisher posted:

Sadly it seems I don't have an email with a business domain, so it redirects to Fedora

:wtf:

I don't see RedHat intentionally turning away a potential customer, unless legally required to do so... do you live in Cuba or some other place that is on the USA's official poo poo list, or something like that? Or :tinfoil: maybe your ISP really hates RedHat or has a really strict "no servers on home networks" rule?

This is the actual webpage where you start the registration process to get a RedHat account:
https://www.redhat.com/wapps/ugc/register.html

When you go to that URL, it appends some session IDs and other crap to the URL, but you should see a page that looks like this:


You should select the option I highlighted in red: when you choose it, the "company information" fields will be removed from the form.

telcoM
Mar 21, 2009
Fallen Rib

movax posted:

I'm doing some driver development / messing with PCI memory allocation, checking BARs, etc. Is there a tool that gives me a nice breakdown of what memory regions are assigned to what, i.e. a memory map of system?

Have you already checked /proc/iomem and the "dmesg" output?

As soon as the kernel starts up, it dumps a lot of memory allocation information in the kernel message buffer. If you booted without the "quiet" boot option, you would see it scrolling by very rapidly. These messages are accessible later using the "dmesg" command, whether the "quiet" boot option is in effect or not. I think there are some kernel boot options which can make this output even more verbose, should you need it.

If a driver module makes changes to PCI configuration, the kernel usually outputs similar messages whenever changes are made.

The /sys/bus/pci/<PCI domain:bus:slot.function>/ directory might have some useful information too.
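A quick way to get an overview from a shell (assuming the pciutils package is installed for lspci):
code:
cat /proc/iomem                 # physical memory map, including PCI BAR windows
lspci -v                        # per-device view: each BAR with its assigned region
dmesg | grep -i 'pci\|bar'      # the kernel's own allocation messages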

telcoM
Mar 21, 2009
Fallen Rib

Goon Matchmaker posted:

It's doing EXACTLY what it says it does. How can you blame it? Though it is stupid that it kills everything. I mean really what benefit or use does it have?

The Solaris killall command is intended to be used by the shutdown/reboot scripts only. Just about the only thing it doesn't kill is the shell it's started from - but if that shell is dependent on some system facility, the shell will get killed indirectly by SIGHUP as the system facility dies.

telcoM
Mar 21, 2009
Fallen Rib
If GRUB2 is written to the MBR of the disk, it will typically use other blocks in the first track. In a traditional PC-style partition table, the first track is otherwise completely unused.

I've made a small program to inspect installed bootloaders, and on my Debian system, it says the GRUB2 boot code is located in blocks #1-#48 (the MBR is block #0). So while copying the first 2048 blocks was overkill, it probably worked OK. Copying just the first 512 blocks would have been enough too.

In general, the "proper" way would probably be to run the boot code installation command twice - once for the first boot disk and once for the second one.

An added wrinkle is that the boot code on the second disk probably must be told to assume that as far as BIOS is concerned, it will be the primary disk at the time it gets actually used. (This is because it will only really be needed if the first drive is toast or gone.)

This could be achieved by using a tweaked /boot/grub/device.map file when installing the bootloader for the second drive.

For example, if your normal device.map was like this:
code:
(hd0)   /dev/sda
(hd1)   /dev/sdb
... you would use a device.map like this when installing the bootloader to sdb:
code:
(hd0)   /dev/sdb
(hd1)   /dev/sda
This is because the disk identifiers used by GRUB actually refer to the disks as detected by the system BIOS: (hd0) is the first detected disk, etc.
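With the tweaked device.map in place, the second installation itself would then be roughly this (exact command and options depend on your distribution and GRUB2 version, so treat it as a sketch):
code:
grub-install /dev/sdb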

The big "enterprise" Linux distributions don't get this problem very often, since Serious(tm) Server Hardware tends to use hardware RAID controllers, which look like a single disk both to the BIOS and the operating system.

However, this might be changing, since the newer low-end server models from HP (and maybe other manufacturers too) have started to use software SATA RAID. Sooner or later, one of the enterprise distributions will create or adopt a tool for maintaining dual bootloaders on software RAID1 sets, and then the other distributions will do the same in an effort to match features.

Alternatively, the GRUB2 bootloader might include a native support for software RAID1 sets... as GRUB2 is the current "everything but the kitchen sink" bootloader, this might actually happen.

The third option is that all the major hardware manufacturers will move to UEFI, since it seems to be the only way to really support 2+ TB disks... and at that point, we all must learn a bunch of new things with regard to bootloaders.

telcoM
Mar 21, 2009
Fallen Rib

xPanda posted:

As for writing a hook, the /boot partition itself is on a RAID1 device, so that keeps it in sync across kernel updates. If I understand the terminology correctly, that means that Stage 2 is redundant, but Stage 1 is not, and there appears to presently be no good solution for making it redundant.

And yeah, this secure boot thing sounds annoying. Hopefully they'll have a UEFI option to turn it off (wishful thinking).

The boot loader redundancy is something you set up once when installing the OS. It only needs maintenance when you upgrade bootloader versions or replace system disks.

A Linux bootloader does not, strictly speaking, need write access: it only needs to grab the kernel and initrd/initramfs files from wherever they are and get them running. Two separately-installed bootloaders in the MBR/first track of each disk in the RAID1 set does that nicely.

Stage 1 and Stage 2 are legacy GRUB terms (versions 0.9x). Stage 1 was the 440 or so bytes of code embedded into the actual MBR or partition boot record; the optional Stage 1.5 could be embedded in the first track of the disk or in the bootloader area of some filesystem types, allowing Stage 2 to be loaded using the filesystem metadata instead of just blindly picking up disk blocks according to a pre-made list. If Stage 1.5 was not used, Stage 2 was loaded using the blocklist method.

In a software RAID1 situation, the optimal setup for legacy GRUB would have been to put a copy of Stage 1 in the MBR of each disk in the RAID set, and the Stage 1.5 in the first track of each disk in the RAID set. The system would boot from whichever disk was currently selected as "the first disk" by the BIOS, and after reading the MBR and Stage 1.5, it could read the Stage 2 and the bootloader configuration file from the /boot partition using the filesystem's normal file location information.

This is actually how GRUB2 installs by default, although the default installation only covers one disk of the RAID1 set.


On the subject of UEFI... meh. My current home/gaming system has an Asus P8P67 Pro motherboard, and happily dual-boots Windows 7 (64-bit) and Debian 6 (64-bit) using UEFI. I'm using rEFInd as a boot manager (= OS selection menu) and eLILO as the Linux bootloader.

In the future, I admit the secure boot requirement might cause some issues. If the future UEFI BIOSes can be convinced to not be anal in their requirement for security, I think Linux will survive just fine.

I could accept using BIOS's own boot menu to select which OS to boot: since it would mean that only the signed & trusted BIOS code has been running until the control is transferred to the Windows bootloader, it should work.

Making secure boot an absolute requirement that cannot be overridden by the BIOS would make new systems unable to run Windows 7 too. Our company still has to maintain some physical Windows 2000s in specific roles and we only recently managed to replace the last remaining physical NT4s with a virtualized solution: I guess large enterprises will need the ability to stay with Windows 7 for a while yet.

And if only one hardware manufacturer offers "Windows 7 fallback capability" (in other words, an off switch for the secure boot feature), that hardware manufacturer will get the money of all the enterprises that need new hardware but cannot migrate to Windows 8 just yet. So I think money will guide the hardware manufacturers to do the right thing.

Personally, I use Windows at home mostly for games like Skyrim, Legend of Grimrock and (whatever incarnation of) Civilization. I completely skipped Windows Vista; at the moment, I see no compelling reason to move to Windows 8.

telcoM
Mar 21, 2009
Fallen Rib
Goon Matchmaker, you should set up a RHEL or CentOS 5 virtual machine, for the express purpose of providing proof that the software sucks because it's a piece of poo poo, not because you're running it on a non-certified platform.

If you do it in advance, you will be prepared to :commissar: any ideas that the software failures are caused by the Ubuntu platform.

In the unlikely event that it works without segfaulting when run inside the VM... welp, you made it work.

Is it perhaps a poor winelib port of some Windows software?

telcoM
Mar 21, 2009
Fallen Rib
The important question is: who is supposed to control whether to record or not?
If the user is voluntarily recording his/her own session, the already-mentioned "script" and "screen" can do it.
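For the voluntary case, it really is as simple as this (the filename is just an example):
code:
script -a ~/session-$(date +%F).log     # -a appends to the log instead of overwriting it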

But if you need something that records everything the user does and is not easy to work around or switch off, you might want sudosh2 or rootsh. Both are based on the original sudosh project, which seems to be no longer updated.

These are primarily intended to work with sudo (to fulfill requirements like "record everything that is done as root"), but if you need a Big Brother environment, they can be included in the standard login procedure too.

If you need to record the actions done as root, remember that root can always delete or modify any records that are stored locally. So if you need to protect against that, you must configure the system so that the session record is sent over the network to another system, with someone else holding the root access there. At least rootsh can do that, when configured to use the syslog service for session logging. The same feature might by now be included in sudosh2 too.

telcoM
Mar 21, 2009
Fallen Rib

The Third Man posted:

Quick question, I'm setting up an Arch linux install and I've partitioned my hard drive into a small boot partition and 2 normal partitions.
[...]
It tells me that the syslinux install was successful, but that it failed to set the boot flag on /dev/mapper/Arch-lvroot. I'm still lost without the guided install at this point, so what do I need to do to correctly set Arch to boot from this partition? Can I just do
# mkfs.ext4 /dev/sda1 && mv /boot /dev/sda1/boot?

"Successful installation of syslinux bootloader" only means that the boot block and the ldlinux.sys file was written successfully. But it was apparently written to the start of the logical volume, which is a structure your BIOS won't understand, so the BIOS cannot find the boot block and therefore cannot boot.

mv /boot /dev/sda1/boot makes no sense: a device node is not a directory and cannot have subdirectories. Assuming that you meant mounting /dev/sda1 to some temporary location and moving the /boot directory in there, that won't work either: after mounting /dev/sda1 to /boot, you would find that the bootloader files are located in /boot/boot, which is not what you want.

There is not much point in using ext4 with the /boot filesystem, as the few files in it don't change that often, and when they do, they mostly tend to be completely replaced instead of incrementally modified. I would use ext3 on /boot to make it as simple as possible for the bootloader to understand. But you probably can use ext4 if you want.

Instead, you should do it this way:
code:
# mkfs.ext3 /dev/sda1
# mkdir /temporarymountpoint
# mount /dev/sda1 /temporarymountpoint
# mv /boot/* /temporarymountpoint
# umount /temporarymountpoint && rmdir /temporarymountpoint
# mount /dev/sda1 /boot
# echo "/dev/sda1 /boot ext3 defaults 1 2" >>/etc/fstab

After this, your /boot looks just like it normally does... but it is located on /dev/sda1. Then you might try this again:

/usr/sbin/syslinux-install_update -iam


It should write the boot block to /dev/sda1 (not /dev/mapper/Arch-lvroot) and mark it active. I'm not familiar with Arch, but if my guess is correct, it should also make sure that you have some sensible boot code in your Master Boot Record in the very first block of /dev/sda.

Having /boot as a separate filesystem is a very common configuration. There are several reasons to use it: maybe your root filesystem is encrypted, on software RAID5 or on an LVM logical volume, or maybe your BIOS cannot handle the true size of your system disk. In all these situations, you'll need to make sure that the kernel and the initramfs/initrd file are both located somewhere that is accessible to the BIOS, since the bootloader relies on BIOS disk access functions. Once those two files are successfully loaded, all the normal Linux features are available for the purpose of accessing the root filesystem, assuming that the necessary drivers and tools are packaged in the initrd file.

telcoM
Mar 21, 2009
Fallen Rib

Morkai posted:

Yes, I understand my post goes all over, but at least the VPN part had some fairly specific questions. It's also the keystone of the entire exercise and really the only part that matters for now.

[...]

All fun to think about, but I really need some VPN advice.

L2TP-over-IPSec VPN is certainly doable: I've done that and recently helped a friend set it up too, for a purpose similar to yours. It runs nicely with iOS devices and Mac OS X.

The problem is that it's a bitch and a half to configure the server side for it, if you have not done it before. And even if you have, you must get all the details exactly right or else it won't work. The error messages are not always very helpful either.

The link to the Debian Wiki you posted uses freeradius, openswan and l2tpns. My solution was somewhat different: I used openswan and xl2tpd, which does not need freeradius. Also, xl2tpd did not need to be recompiled in order to work around the L2TP bug in the OS X and iOS implementations.

I'll gather my notes about the L2TP VPN setup and try to put together some instructions for setting it up over this weekend.

telcoM
Mar 21, 2009
Fallen Rib

Salt Fish posted:


edit:
I tried just moving the old version and it broke git entirely;
code:

mv /usr/local/bin/git /usr/local/bin/git.bak
git --version
-bash: /usr/local/bin/git: No such file or directory

This is the bash shell being a smartass.

In each session, the first time you run a command without an explicit path, e.g. "git", it scans the directories listed in the $PATH environment variable. When it finds the matching binary, it remembers the location so that the directory scan can be avoided when the command is used again. If $PATH includes NFS-mounted directories or other remote filesystems, this improves the interactive response of the shell.

But when you move the binary after using it once, bash gets confused when the binary is no longer where it was: it won't revert to the full $PATH scan.

Run "hash -r" to make it forget the old location of the git binary and perform the full scan again. Or open a new session.

telcoM
Mar 21, 2009
Fallen Rib

Doctor w-rw-rw- posted:


I know how to configure dnsmasq, but don't have a clue how to configure CentOS' network. I am fuzzy on what a VLAN is and can do for me, and whether it applies in this situation.

You won't need VLANs at all for what you're planning.
VLANs are used when you have more network segments than physical links, and won't be very useful unless your network switch is VLAN-capable.

Doctor w-rw-rw- posted:

I know what a bridge is, generally, but am slightly confused by how it works, i.e. whether packets pass through interfaces as if they were wired together, and what it means for a bridge to have an IP.

A network switch is just a different name for a multi-port bridge.

With a two-port bridge, any traffic seen by one port is passed to the other port unless the bridge already knows it's not needed, i.e. both the sender and the recipient are on the same side of the bridge.

An IP on a bridge is like an extra "internal" port on it.


Doctor w-rw-rw- posted:

How can I configure the interfaces so that eth0 NATs its connection and shares it with eth1 (which will be hooked up to a switch) and wlan0 (for my home wireless connection)? I also plan to run VMs on this internal network, since the "router" is also a beefy-rear end computer with boatloads of memory. Any further tips on top of this would also be appreciated.

This depends on some details you did not tell yet.

I guess your eth0 is connected to your ISP, right? Does it have a public or a private IP address at the moment? If private, what is its IP and netmask?

If you bridge the wlan0 and eth1 together, you can use a single set of IP addresses on both; if you don't, you can still route traffic between them, but the wireless and wired networks must then be separate IP segments.
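Whatever you decide about bridging wlan0 and eth1, the NAT part on eth0 usually boils down to a couple of commands. A rough sketch, assuming eth0 really is the ISP-facing interface:
code:
# allow the kernel to route packets between interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
# rewrite outgoing traffic so it appears to come from eth0's address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE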

telcoM
Mar 21, 2009
Fallen Rib

Morkai posted:


If you could post your notes that would rock.


OK, I transformed my notes into a hopefully-understandable document in English. (I suppose my originals in Finnish might not have been very useful to you.)

This ZIP file includes example configuration files and a README.txt that describes how to put all this together:
http://koti.welho.com/mkurkela/l2tp-vpn.zip

The example configuration files all contain placeholders in all the locations you'll need to change to match your local network environment. See the README.txt for details. (Warning: contains a wall-of-text.)

There is also another text file describing the use of certificates with isakmpd. I found it somewhere on the internet and saved it in case it might be useful... but as it turns out, preshared keys are the easiest with MacOS X and apparently the only choice with iOS clients.

telcoM
Mar 21, 2009
Fallen Rib

angrytech posted:

Also, I ran aa-genprof someprogram, and it put a line in the apparmor profile for someprogram that reads
code:
deny /etc/passwd m,
Obviously it looks like it's denying access to /etc/passwd, but why would that even be included in the first place?

If a program needs to look up usernames (e.g. the name of the owner of a particular file), it needs to be able to read /etc/passwd. Otherwise the program can only display the UID number, which is not very user-friendly. This is why everyone is supposed to be allowed to read /etc/passwd by default.

In your case, the aa-genprof noticed that someprogram was not doing any username lookups, so access to the file could be denied without losing any program functionality.


Some historical background:
The name of the /etc/passwd file ended up being oxymoronic for historical reasons.

When Unix was designed, the file was used to store all user information: the username, the corresponding UID number, the GID number of the user's primary group, the user's shell and home directory, and the user's password in a hashed form. The password hashing algorithm (the classic Unix crypt(3)) was thought to be strong enough to protect the passwords.

This also meant that no special privileges were required to implement authentication in a program: if a user was writing a program and felt that some operation would merit a password check, no particular privileges were required to implement it. This made it simple to implement things like screen savers with a "lock screen" functionality.

As time passed, the processing power of computers increased, and brute-forcing the passwords turned out to be quite feasible. One component of the fix was to move the actual password hashes from /etc/passwd to a different file, usually /etc/shadow. Before PAM was implemented, this also required that any programs that needed to authenticate the user were either running with root privileges, or members of a special group "shadow", which had read-only access to the /etc/shadow file.

But at that point, changing the name of /etc/passwd was not practical: there already were many existing programs and scripts that did their username lookups by directly reading /etc/passwd. Changing the filename would have required changes to all those programs and scripts.

As a result, /etc/passwd now contains anything but the password.
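These days a typical /etc/passwd line therefore looks like this (hypothetical user), with just an "x" where the password hash used to live:
code:
alice:x:1000:1000:Alice Example:/home/alice:/bin/bash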

telcoM
Mar 21, 2009
Fallen Rib

Elissimpark posted:

I'm trying to install Linux Mint 13 (64-bit) on a new system from a live USB, but I keep getting the message that "the 'grub-efi' package failed to install into /target/".

My guess is that you either did not create a EFI system partition for your new installation, or that something went wrong with it.

The installer seems to think that your system should use an EFI-based bootloader instead of a traditional PC BIOS-based one. In a new system, this might even be true. However, most modern systems with EFI actually have UEFI, which is the newer version of EFI plus an optional compatibility layer that allows traditional BIOS-style bootloaders to work too... but the Mint installer might not be able to recognize that.

EFI allows you to use disks of more than 2 TB as boot disks without fuss. It may also achieve faster boot times than the traditional BIOS. On the other hand, it's very different from what the vast majority of Linux users are familiar with. There will be no MBR nor boot blocks of any kind: only bootloader files on an EFI system partition.

Unfortunately, since EFI is a fairly new thing in PCs and the traditional BIOS-style boot has been used since the original IBM PC/AT was introduced in 1984 or so, you should expect a certain number of challenges. By this I mean poor EFI implementations and outright bugs: there are already reports of EFI implementations that just assume the system will only ever run Windows, instead of following the EFI specifications.


First, some basic facts about EFI.

When booting EFI-style from a CD-ROM or DVD, an EFI system looks for a directory named "EFI" (theoretically case insensitive) at the root of the CD/DVD. If it exists, it looks for a sub-directory "BOOT" within the "EFI" directory, and a hardware-type-specific bootloader within it. For a 64-bit x86 system (which is what most modern PCs are), the expected bootloader name is "BOOTX64.EFI".

When booting from a hard disk, the situation is a bit more complex. Instead of a nice, well-known ISO9660 or UDF filesystem, there can be any number of partitions, RAID arrays, LVM and other complex things. Even the shiny newfangled EFI firmware cannot support all of this by itself.

On hard disks, EFI wants to see a GPT-style partition table, and a certain special partition within it. This is called "EFI System Partition": it should normally be about 100-200 MB in size, marked with a special boot identifier GUID, and have a FAT32 filesystem on it. Within it, there should be a directory named "EFI", and a bootloader-specific sub-directory and/or a "BOOT" directory just like on EFI-style bootable CDs/DVDs. If the EFI System Partition does not exist on a hard disk, then the hard disk is not bootable in the native EFI way. BIOS manufacturers may optionally add support for other things too, but this is the way things are supposed to work.

In Linux, this means a few things:

First, it is time to forget the old "fdisk" command and its siblings "cfdisk" and "sfdisk". They only understand the traditional BIOS-style partition table, which has a hard maximum limit of about 2 TB. Instead, the "parted" tool should be used... and the Mint installer apparently already uses it.
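
Just to illustrate what the GPT label and the EFI System Partition look like if created by hand with parted (a sketch only - the device name /dev/sda and the sizes are examples, and the Mint installer's partitioning screen does the equivalent for you):
code:
# WARNING: destroys the existing partition table on /dev/sda
sudo parted -s /dev/sda mklabel gpt
# ~200 MB partition for EFI, flagged as the EFI System Partition
sudo parted -s /dev/sda mkpart ESP fat32 1MiB 201MiB
sudo parted -s /dev/sda set 1 boot on
# the rest of the disk for the Linux root filesystem
sudo parted -s /dev/sda mkpart rootfs ext4 201MiB 100%
parted only writes the partition table entries; the actual filesystems (mkfs.vfat for the ESP, mkfs.ext4 for the root) would still need to be created afterwards - or by the installer.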

Some Linux distributions mount the EFI system partition to /boot/efi. As the EFI system partition will also contain a EFI subdirectory, pathnames will have a silly double EFI component, like /boot/efi/EFI/redhat/grub.efi. But basically, you can think of it as "/boot/efi is the new /boot, with a new directory structure".

Other distributions (like Debian) don't mount the EFI system partition at all, and rely on mtools to access it when needed. I haven't used Mint, but a bit of Googling seems to indicate that Mint belongs to this group.


This image is from the Mint users' forum (not reproduced here - it shows the installer's partitioning screen):

The strange /dev/mapper/isw_alphabet_soup device names indicate that Intel AHCI RAID support is being used. But the important things for you are the "New Partition Table..." button and that a partition with type "efi" is going to be created.

As I understand you're doing a clean install to a new system, you should click the "New Partition Table..." button. If it asks you to select the partition table type, pick GPT. Then make sure you create the EFI System Partition: its size should be about 100 - 500 MB (anything more than that is wasteful) and its type must be set to "efi" - this should make the installer create the appropriate partition IDs for you. Then create the rest of the partitions as you see fit.
(With GPT, there will be no "primary" and "extended" partitions - all partitions will be equal.)

The "Device for boot loader installation" field will apparently be completely ignored when grub-efi is used.

Elissimpark posted:

Topping this off, I can't get it to recognise that its plugged into a router either!

Has it recognized the existence of the NIC at all?
Please run this in a command prompt window:
code:
/sbin/ifconfig -a
Does the output include a block of text for the "eth0" network interface?
If not, the driver module for the NIC is probably not loaded: the output of the "lspci -v" command would be needed to identify your NIC and the correct driver for it.

If eth0 is listed but does not include the keywords UP and RUNNING, the driver has been loaded but the NIC has not been configured. "sudo ethtool eth0" could be used to see if the NIC detects a link at all, but some newer NICs switch completely off if they are not UP. So, before running the ethtool command, run "sudo ifconfig eth0 up" to make sure the NIC is powered up first.

If ethtool output includes "Link detected: yes", the hardware side of things is probably OK. If there is no link detected, the system is probably smart enough to not even bother wasting time with DHCP queries to get an IP address, but it may not generate any error messages for that: the system will just assume the cable is not yet plugged in.
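
Putting those steps together (assuming the interface really does show up as eth0):
code:
/sbin/ifconfig -a                          # is eth0 listed at all?
sudo /sbin/ifconfig eth0 up                # power the NIC up first
sudo ethtool eth0 | grep 'Link detected'   # "yes" = cable and switch port look OK
lspci -nn | grep -i ethernet               # identify the NIC model if eth0 is missing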

telcoM
Mar 21, 2009
Fallen Rib

beuges posted:

I have a fetchmail question. I have mail delivered to an Ubuntu Server box running exim4.
[...]
I need to get fetchmail to fetch the mail and deliver it to a remote MS Exchange server that requires TLS and client authentication.
Any clues as to what I may be missing?

It is not obvious to me why you think you need fetchmail if the incoming mail is already being processed by a real MTA (Exim). If the incoming mail is addressed to a local user, a simple entry in /etc/aliases should be enough to make Exim forward it to the Exchange server instead of storing it locally.

code:
localuser: exchangeuser@xxx.com
(Traditionally you are supposed to run "newaliases" after editing /etc/aliases, but with Exim, that might not be necessary.)

In order to have Exim authenticate when connecting to the Exchange server, you'll need to add the authentication information to /etc/exim4/passwd.client:

code:
xxx.com:username:password
If your Exchange server is a multi-host large enterprise configuration, you might need a separate line in /etc/exim4/passwd.client for each individual Exchange host. See "man exim4-config_files" for details.

Disclaimer: I haven't actually done this with Exim myself, I'm just reading the documentation. And since I don't have an Ubuntu Server handy at the moment, I'm reading the Exim documentation on a Debian system instead.

telcoM
Mar 21, 2009
Fallen Rib

Tolan posted:

We've updated resolv.conf with the new nameserver IPs and commented out the old ones. named and network are restarted, but the old nameservers still show DNS requests from the servers until a reboot.

When each process starts, it loads the glibc library, which includes the DNS resolver routines. The library startup code reads /etc/nsswitch.conf, /etc/resolv.conf and a bunch of other stuff just before the actual program starts running. As a result, unless the glibc does some clever tricks, each process will keep using the name resolver settings that were in effect at the time the process started. I'm not sure if RHEL5's glibc belongs to the "clever" category or not.

Fix: After changing the nameserver settings, restart any long-running processes that use DNS.
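
For example, on a RHEL5-style system, something along these lines would do it (the service names are just examples - restart whichever DNS-using daemons you actually run):
code:
# make long-running daemons re-read /etc/resolv.conf by restarting them
sudo /sbin/service httpd restart
sudo /sbin/service sendmail restart
# then confirm which nameserver actually answered a fresh query:
dig example.com | grep SERVER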

telcoM
Mar 21, 2009
Fallen Rib

Modern Pragmatist posted:

I've recently setup a DAAP server on my server so that I can stream my music to music players through a VPN. I would really like to know the bandwidth that I'm using since our house is on a monthly cap. What is the best solution for me to record usage information to a log file? I know the IP address from which I will be streaming the data (as well as the port number). Any suggestions?

Edit: Basically I want the output of ifstat but just for traffic to a particular IP/host and port.

Every iptables rule includes packet and byte counters for traffic that matches the rule. You can also make iptables rules that match a specific IP address and port but have no target, so they don't actually do anything to the packets: they act as counters only.

Then log the value of the counters at suitable intervals. A simple calculation will give you the average bandwidth used for the packets matching the rule on each interval:

bandwidth_used = (counter_at_period_end - counter_at_period_start) / period_length
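
A minimal sketch of such counter-only rules (the client address 192.168.1.50 and the DAAP port 3689 are placeholders - use whatever your setup actually uses):
code:
# rules with no -j target: they only count matching packets, nothing else
sudo iptables -I OUTPUT -p tcp --sport 3689 -d 192.168.1.50
sudo iptables -I INPUT  -p tcp --dport 3689 -s 192.168.1.50
# read the counters; -x prints exact byte counts instead of rounded ones
sudo iptables -L -v -n -x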

Or if you use something like http://oss.oetiker.ch/rrdtool/ to store the counter values, you can easily have it draw nice graphs for you too. (You will probably find that rrdtool is included in your Linux distribution: just tell your package manager to install it.)

You might set up a cron job or an infinite-loop script to feed the current counter values into a .rrd file, and a CGI script in a web server to update the graph only when you want to look at it (to minimize CPU usage). The rrdtool package documentation includes tools and examples for setups just like this.
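
A very rough sketch of the feeding side, assuming the OUTPUT counter rule sketched above and an RRD file named daap.rrd (the data source and archive parameters are just illustrative):
code:
# one-time setup: a COUNTER data source sampled every 60 s, keeping one day of 1-minute averages
rrdtool create daap.rrd --step 60 DS:bytes:COUNTER:120:0:U RRA:AVERAGE:0.5:1:1440
# run this part from cron (as root) once a minute:
BYTES=$(iptables -L OUTPUT -v -n -x | awk '/spt:3689/ {print $2; exit}')
rrdtool update daap.rrd N:"$BYTES"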

telcoM
Mar 21, 2009
Fallen Rib

picosecond posted:

So I've dl'ed an .iso of it instead. I tried to go into terminal, mount it from the desktop to a /media/iso folder I've created but it keeps saying: "admin/home/Desktop/blablah.iso: No such file or directory"

Unless you have done something quite unusual, the pathname should probably be "/home/admin/Desktop/blahblah.iso":
- when you specify an absolute path, the pathname should always have a slash (/) at the beginning.
- the standard filesystem layout is /home/admin, not /admin/home.

picosecond posted:

I double-click the mount & it just opens up the files on the .iso -- and clicking on what look like the executables(or the Linux version of them, I guess) doesn't do anything. So... what am I supposed to click to run the iso?

Did the name of the ISO file happen to have the word "-live-" in it somewhere?

If it did, you are looking at an ISO of a Live CD: that is, a CD that is designed to be used to boot the system into a minimal self-contained Linux installation that includes the DRBL server. The files on the CD are typically the ISOlinux version of the Syslinux bootloader, maybe some utilities and informative text files, and the kernel, initrd and a SquashFS filesystem image file that holds the entire minimal Linux installation. Nothing here is designed to be executable within a "host" Linux system, so the execute permission bits are probably all disabled.

While it would be possible to mount the squashfs image and pick out the executables and other files you need for your DRBL server, that's the extra-hard way of doing things.

If you want DRBL, go here:
http://drbl.sourceforge.net/download/

Look at the section that says: "DRBL packages for GNU/Linux (to be installed on your GNU/Linux)". It will lead you to a standard Sourceforge download page. It tries to be helpful by saying: "Looking for the latest version? Download drbl-live-xfce-2.0.1-4-amd64.iso (420.6 MB)", but unfortunately that autogenerated tip points to a live ISO image, which is not what you want.

Instead, choose the version you want from the list below (the latest, unless you have reason to do otherwise), and you'll get here:
http://drbl.sourceforge.net/download/stable/pkg-files.php

There is a .noarch.rpm file for distributions with RPM-based package management, and a .deb package for Debian, Ubuntu and other distributions that use the .deb format. Since you have (X)Ubuntu, pick the .deb file.

There is also a .src.rpm and a .tar.gz package. Both of those are source code packages: you might need one of them if you had a really special minority distribution whose library versions don't match the major players' choices.

Once you've downloaded the .deb, it's time to install it using the standard package management tools. I'm not familiar with Xubuntu, but since *Ubuntu is supposed to be a user-friendly desktop distribution, just double-clicking the .deb package will probably pop up some package management tool that will ask you "this software package is from unknown source, do you really want to install it, or just look at the list of contents or something?"

Failing that, installing on the command line is not too difficult either:
code:
sudo dpkg -i /home/admin/Desktop/drbl_1.12.15-1drbl_all.deb
Looks like the package installs some text files to /opt/drbl/doc/, so you might want to take a peek at them.

telcoM
Mar 21, 2009
Fallen Rib

Houston Rockets posted:

I'm trying to alias an ssh tunnel.

Let me explain. I have a LocalForward statement in my ~/.ssh/config, bringing a remote resource over:
code:
Host XYZ
Hostname server.com
LocalForward localhost:9090 foo.server.com:8080
So now that I have access to localhost:9090, I would like to assign a virtual host to it, like foo.remote, so when I access foo.remote from any program, it will forward that request to localhost:9090, and therefore to foo.server.com:8080 over the tunnel.

Is this possible?

As Doctor w-rw-rw- said, "virtual hosts" is not quite the right term for this.

Furthermore, there is no easy and universal way to assign port numbers to hostnames. If you create virtual interfaces, you can use the same port number locally as the remote real service uses, which may allow you to omit the port number. But even that has a restriction: if you want to use "privileged" ports (= port numbers 0-1023), you must run your local SSH client as root.

Creating the virtual localhost interfaces (essentially IP Aliases for localhost) is simple:
code:
ifconfig lo:1 127.0.1.1
ifconfig lo:2 127.0.1.2
...
The ifconfig settings are not persistent, so you must either write them into a script that runs at boot time, or add them to the network configuration files.
Your Linux distribution probably already has some way to specify IP Aliases in network configuration files: check the distribution's documentation and support resources.

Then assign names for the virtual interfaces in /etc/hosts:
code:
127.0.1.1 foo.remote
127.0.1.2 bar.remote
...
Change your ~/.ssh/config to use the virtual interfaces and the same ports as the actual server does:
code:
Host XYZ
Hostname server.com
LocalForward foo.remote:8080 foo.server.com:8080
LocalForward bar.remote:8080 bar.server.com:8080
Since you now have multiple virtual "localhost" IP addresses to bind the local ends of the tunnels to, you can reuse the same port number: the overlapping local ports are bound to different localhost IPs, so they don't conflict.

After this, when you start "ssh XYZ" and then tell any program to connect to port 8080 on foo.remote, the connection should pass through the SSH tunnel to port 8080 on foo.server.com. Likewise, connections to port 8080 on bar.remote should go to bar.server.com.

Now, if the application's default port number can be used in the configuration above, you may not need to specify a port at all. But even if you must still specify it, you can now standardize on a single port number so there is less to remember (i.e. "when using whatever.remote, the port number shall always be 8080").
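
For example (a sketch - curl and port 8080 here are just stand-ins for whatever client and service you actually use):
code:
ssh -N XYZ &                     # bring up the tunnels defined in ~/.ssh/config
curl http://foo.remote:8080/     # goes through the tunnel to foo.server.com:8080
curl http://bar.remote:8080/     # goes through the tunnel to bar.server.com:8080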

telcoM
Mar 21, 2009
Fallen Rib

Xenomorph posted:


Giving a user permission to Read a folder via ACL suddenly changes the POSIX permission files in that folder to "read/write/execute". You can change the POSIX permission back (g-wx), and the ACL stays correct. So it's obvious that the POSIX and ACLs can remain separate. I'd like Samba to just leave the POSIX permissions alone. I want it to only touch extended attributes/ACLs. I swear it didn't work like that on FreeBSD.

It's not Samba, it's the POSIX ACL implementation.

acl(5) posted:

CORRESPONDENCE BETWEEN ACL ENTRIES AND FILE PERMISSION BITS
The permissions defined by ACLs are a superset of the permissions speci‐
fied by the file permission bits.

There is a correspondence between the file owner, group, and other per‐
missions and specific ACL entries: the owner permissions correspond to
the permissions of the ACL_USER_OBJ entry. If the ACL has an ACL_MASK
entry, the group permissions correspond to the permissions of the
ACL_MASK entry. Otherwise, if the ACL has no ACL_MASK entry, the group
permissions correspond to the permissions of the ACL_GROUP_OBJ entry.
The other permissions correspond to the permissions of the ACL_OTHER_OBJ
entry.

The file owner, group, and other permissions always match the permissions
of the corresponding ACL entry. Modification of the file permission bits
results in the modification of the associated ACL entries, and modifica‐
tion of these ACL entries results in the modification of the file permis‐
sion bits.

[...]

RATIONALE
IEEE 1003.1e draft 17 defines Access Control Lists that include entries
of tag type ACL_MASK, and defines a mapping between file permission bits
that is not constant. The standard working group defined this relatively
complex interface in order to ensure that applications that are compliant
with IEEE 1003.1 (“POSIX.1”) will still function as expected on systems
with ACLs. The IEEE 1003.1e draft 17 contains the rationale for choosing
this interface in section B.23.

That seems to say explicitly that the permission bits are not separate from the ACL.

So, the result is something of a mess because of an effort to maintain backward compatibility.

Way back when I sat in on a course about some other Unix, I was told that when ACLs are placed on a file, the behavior of the "ls -l" command changes: instead of displaying the actual state of the permission bits, the displayed bits reflect the overall presence of read/write/execute permissions/ACLs for the user(s) and group(s) involved. So, if a file is displayed as "-rwx------+" in a "ls -l" listing, that would mean the file has read, write and execute permissions for some named users, but they would not necessarily all apply to the same user. For example, user joe might have read and execute permissions but not write, and user mike might have read and write permissions but no permission to execute.

Likewise, the group bits would describe what kind of privileges have been granted to specific groups, but not all displayed permissions would necessarily apply to the same group.

The instructor suggested that the proper course of action was to ignore the permission bits completely whenever you see the '+' sign that indicates an ACL is present; instead you should use the appropriate command to view the actual ACL to get the real deal.

This advice has served me well over the years on Linux, Solaris, HP-UX and occasionally some other Unixes.

I could not quickly find specific documentation on the behavior of the GNU ls command in the presence of ACLs. I guess I might have to RTFS if I want to get to the bottom of it.

But on Linux, the command to view the complete Posix ACL is "getfacl".
For Xenomorph, I think replicating the situation and running a "getfacl" before and after the chmod is probably the only way to really understand what is going on.
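
A hypothetical illustration of how the group bits and the ACL mask interact (the file name, user and exact getfacl formatting are made up; assume somefile starts out as "-rwx------" owned by root):
code:
$ setfacl -m u:joe:rwx somefile      # grant joe full access via the ACL
$ ls -l somefile
-rwxrwx---+ 1 root root 0 Jan  1 12:00 somefile
$ chmod g-wx somefile                # "fix" the group bits back...
$ getfacl somefile
# file: somefile
# owner: root
# group: root
user::rwx
user:joe:rwx            #effective:r--
group::---
mask::r--
other::---
joe still has rwx in his ACL entry, but the chmod changed the ACL mask, so his effective access is now read-only - and the plain "ls -l" permission bits alone would not have told you that.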

telcoM
Mar 21, 2009
Fallen Rib

Crush posted:

\(expression\) Group operator.
\n Backreference - matches nth group


With the group operator, you can identify parts of the data you are matching with a regular expression, so you can refer to them later with a backreference.

For example, if there is a file named "testfile.txt" with contents like this:

testfile.txt posted:

foo=foo
foo=bar
bar=foo
bar=bar


... then this command could be used to pick out only the lines where the three characters *before* the equals sign are the same as *after* it, whatever the characters are:
code:
$ sed -ne '/^\(...\)=\1/p' < testfile.txt
foo=foo
bar=bar
Regexp analysis:
code:
^   = In the beginning of a line...
\(  = ... we will be interested in...
... = ... a sequence of three characters, which can be whatever.
\)  = The interesting part ends here.
=   = Then there must be an equals sign...
\1  = ... and another exact copy of the previously-described interesting part.
You can also mark multiple "interesting parts" (called groups): the first one can be referred to with \1, the second with \2, etc.

In a search-and-replace construct (s/regexp/replacement/), you can use group operators in the regexp section to mark things, and refer to the marked groups in the replacement section.

A silly example:
code:
sed -e 's/^\([^:]*\):[^:]*:\([0-9]*\):.*/\2:\1/' < /etc/passwd
It reads /etc/passwd, picks out the username and UID fields, and prints them out in "<uid>:<username>" format. Essentially, each line is completely replaced with its re-written form.

Analysis of the regexp part:
code:
^     = Start from beginning of a line.
\(    = First group begins.
[^:]* = Any number of characters except a colon.
\)    = First group ends.
:     = Then there must be an actual colon.
[^:]* = Then another string of characters that does not contain a colon.
:     = Another colon.
\(    = Second group begins.
[0-9]*= A string of digits.
\)    = Second group ends.
:.*   = Then a colon and anything at all after that.
The replacement part is just "\2:\1", i.e. "put here group 2, then a colon, then group 1."
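
Run against a real /etc/passwd, the first few lines of output would look something like this (the exact usernames and UIDs of course depend on the system):
code:
$ sed -e 's/^\([^:]*\):[^:]*:\([0-9]*\):.*/\2:\1/' /etc/passwd | head -3
0:root
1:daemon
2:bin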

Clear as mud?
