Peanutmonger
Dec 6, 2002

Anunnaki posted:

code:
/home/skyler/Desktop/Libs/glib-2.15.0/gobject/.libs/lt-glib-genmarshal: error while loading shared libraries: libiconv.so.2: cannot open shared object file: No such file or directory
make[2]: *** [stamp-gmarshal.h] Error 127
make[2]: Leaving directory `/home/skyler/Desktop/Libs/glib-2.15.0/gobject'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/home/skyler/Desktop/Libs/glib-2.15.0'
make: *** [all] Error 2
I found libiconv.so.2 in /usr/local/lib/ and I've tried copying it to /home/skyler/Desktop/Libs/glib-2.15.0/gobject/.libs/, where it said it was missing, but it still gives me the same error.

WHY.. WON'T.. IT.. INSTALL!? :bang:

Err, what distro are you running? I'd expect any decent distro to have a package available that handles everything for you. That's really the best way to do it.

But to answer your question, the error wasn't saying the library was missing from that spot, but that the executable "lt-glib-genmarshal" was linked against "libiconv.so.2" and it can't find it at the path it was originally linked to (probably /usr/lib/libiconv.so.2). You could run "ldd /home/skyler/Desktop/Libs/glib-2.15.0/gobject/.libs/lt-glib-genmarshal" and you'd be able to find what it thinks the path to the shared library is, and you could copy the library there, but that's a dirty way to solve the problem. If this binary was compiled during your installation, I would suggest running configure with an option that includes /usr/local/lib in the library path, something like "LDFLAGS=-L/usr/local/lib ./configure". I would think configure would include that by default, but maybe not. If this doesn't fix it, then it must have been a binary that came along with the package and symlinks/copying the library may be the way to go so long as it's only needed for building and you clean it up after. This could also be solved with LD_LIBRARY_PATH instead of symlinks/copying, but using that is stepping upon a dark and dangerous path to ruin...
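To see the mechanics, here's a rough sketch using /bin/ls as a stand-in binary (the configure line is illustrative — it only makes sense inside the glib source tree):

```shell
# Show every shared library the binary was linked against and where the
# runtime loader resolves each one; a broken dependency shows up as
# "not found" instead of a path.
ldd /bin/ls

# For a binary built during the install, the cleaner fix is to re-run
# configure so the linker also searches /usr/local/lib:
#   LDFLAGS=-L/usr/local/lib ./configure && make
```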


Peanutmonger
Dec 6, 2002

JoeNotCharles posted:

Sounds like it's shipped in a half-configured state - you need to run "automake" or "autoconf" or "aclocal" or something before ./configure.

EDIT: if you think this is way too much work just to build a drat program, you're right! This is why autoconf sucks such incredible amounts of rear end.

Are you serious? Autotools is a godsend when it comes to building a package that compiles on the myriad of Unixes that exist these days. If the package you download wasn't created with `make dist`, that sure as hell isn't autotools' fault. I don't know how much experience you've had with installing packages that don't use autotools, but they aren't pretty, and if you get one that was written a while ago, it's gonna need some elbow grease that the `./configure` stage would have taken care of automatically. Yes, it can be complicated, but that complexity is the cost of being incredibly robust.

On a side note, you're supposed to use `autoreconf` if you want to re-run autotools and don't remember the correct order, but I can't tell by "they didn't work" whether they don't exist or didn't resolve the problem.

Also, to Anunnaki, are you using Ubuntu still? If so, they have packages for libmtp and libglib, all you have to do is use the GUI package manager to grab 'em. If you want to use libmtp to talk to a media device, I suggest you try something besides MTPFS with FUSE, as the archive MTPFS has on their site is not correctly packaged at all. It definitely wasn't made with `make dist` (since it comes with symlinks to install-sh and friends when they should be packed in it), it has a Makefile already, the version on the file is different from the one in the configure scripts, the directory made by the archive doesn't match the name of the archive, and I could go on. Let's just say it's massively broken so it's obvious there were problems.

You might want to check out the other available clients for using MTP. Again, I really think you should use your package manager to handle getting programs, that's what they're there for.

Peanutmonger
Dec 6, 2002

JoeNotCharles posted:

First of all, you can just do "/sbin/mdadm". Secondly, check your PATH environment variable ("echo $PATH") - it probably has /bin but not /sbin in it. That's because /sbin is for system tools that most users don't need to use every day, so it's not in the default PATH.

I hope he's not doing his RAID work as a non-root user. And if he is root, it sounds like he's not su'ing properly, since I would expect any decent /etc/profile these days to set up the correct PATHs for root.

BiohazrD, are you using `su`? And if so, are you adding the dash, as in `su -`? Without the dash, the environment stays the same as your calling shell, and thus your PATH doesn't include sbin. If you have a dash, it acts as though you logged in as root, and you get the correct PATH, etc. I can't think of any instances where you wouldn't want to use the dash.

Peanutmonger
Dec 6, 2002

Boody posted:

Anyone running Gentoo and Paludis?

I've a gentoo box that's been pretty reliable for the last few years. Needed the occasional trip to the gentoo forums to figure out what the hell was going on with masked packages but other than that no real problems. Wondering if there is any real benefit to moving to Paludis or just sticking with portage as it is.

I've been using this setup on a machine for a while now. For the most part, it acts the same. Dependency resolution is definitely faster, but I still use eix to search for packages anyway.

There are a few beneficial differences, though. For example, it operates under the premise of having a group of repositories, rather than a main repository plus overlays, so it's not nearly as tied to portage as emerge is. It also runs the test phase for packages by default, which caused some interesting realizations for me (what? They ship this with a make test that they know fails?). When displaying dependencies, it shows the reasons for each package being included. You can configure its response to nearly every event it encounters (pre/post fetch/compile/test/etc. success/failure/etc.), and you can add scripts to hooks for another large set of events. The last thing I'll mention is the --report feature, which tells you which installed packages have dropped out of their repository, no longer have a reason to be installed, or are mentioned as broken in a GLSA.

It's still a work in progress, though. Until the latest alphas, there wasn't a replacement of revdep-rebuild, so you had to make do with an edited emerge-version. And those latest alphas break some cran/ruby stuff (which I don't use, so I use the alphas). It certainly seems like the next step for package management as far as Gentoo goes, it's too bad it isn't backed by Gentoo proper.

Last I checked, you're able to move back and forth between emerge and paludis freely. There might be some paludis features that will break going back to emerge, but I can't remember which. I assume you'd read up on it first on your own anyway, right?

Peanutmonger
Dec 6, 2002

bitprophet posted:

EDIT: In before anyone says LVM - that's actually a decent response to many of my concerns, I'd wager, at least in some situations. So far I only use it for Xen virtual disks, though. Does anyone use it for the whole disk on productions systems?

I have. It honestly really does solve a lot of the problems you described. Especially with ext3 online resizing, it's pretty amazing. `lvresize -L+5G lvname; resize2fs /dev/vg/lvname` is how you add 5 gigs to a given logical volume and have it appear instantly. It lets you be more reactive to the OS' needs, which is the heart of most of your issues. Instead of digging a hole for yourself, you can give the different directories a modest amount of space and leave the extra as slack in the LVM partition. Then you can just grow your partitions as you need to. It's miles beyond juggling actual partitions, and I don't think I could ever go back to that.

If anyone has anything bad to say about LVM (other than "it's not ZFS!"), I'd love to hear it.

Peanutmonger
Dec 6, 2002

Kaluza-Klein posted:

When I ssh into my host and start mucking about with my site in vi, I can no longer highlight text in the session and then middle click somewhere else (locally) and paste.

I can highlight text in the session and middle click inside the session to paste into vi though, which is nice. But how do I copy things from the session and paste it locally?

This only happens inside vi. If I am just at the bash prompt and I highlight something I can then paste that locally.

Does that make any sense?

That sounds like you have GPM support enabled in vim (or something along those lines). Get rid of that and it should leave your mouse clicks alone.

Peanutmonger fucked around with this message at 03:03 on Apr 20, 2008

Peanutmonger
Dec 6, 2002

fourwood posted:

The only problem is that new windows pop up in the middle of the entire desktop span, leaving them straddling the border between monitors. It gets pretty annoying when every dialog box is split between the two monitors. Is there any way to get things to center themselves on the primary monitor?

Edit: Most programs seem to at least properly remember their last-used window positions, but smaller dialog boxes apparently don't do this and continually center themselves across both screens, regardless of the fact that last time I moved that particular dialog box.

Sounds like you need xinerama support in your applications. It's an Xorg option that programs have to support, but it gives them information on desktop properties, such as borders, so windows will "full screen" properly and spawn at the correct point for the "center". You still get goofy stuff with Java applications and other frameworks that don't give a crap, but for the most part it works well. I'm not sure how to install/enable xinerama in packages in Ubuntu, though. Chances are it will involve tweaking xorg.conf (You will never escape!), unless there's an Ubuntu widget for doing it automatically.

Peanutmonger
Dec 6, 2002

Munkaboo posted:

This is what it looks like.

As I said before I'm trying to forward Xbox Live's ports (UDP 88, TCP/UDP 3074) to all IP's on our network. The 192.168.1.189 is my xbox's IP, which I was trying for shits and giggles but didnt work.

No, you've got to put the entries into the PREROUTING chain. Basically, the difference between simply opening ports and forwarding ports (what you're trying to do) is that forwarding needs to rewrite the destination IP rather than simply allow the traffic in. I manage my firewall through scripts, but this should give you the general idea of what you're looking to do:
$IPTABLES -t nat -A PREROUTING -p udp -m udp -d $i_eth0 --dport 7777 -j DNAT --to-destination 192.168.1.4

When a packet comes in destined for $i_eth0 (my outside interface) on port 7777, the "-j DNAT" option specifies that the destination should be changed to the "--to-destination" field, in this case "192.168.1.4". You rewrite the destination to the host you want to forward the port to. My setup is pretty open on the internal side, so I don't need any other rules for it to just work. For TCP, you should also only match on SYN packets, since connection state should take care of the rewriting for any already-opened connections.
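Translated to your Xbox Live ports, it'd look something like this — I'm guessing eth0 is your outside interface and 192.168.1.189 is the Xbox, so adjust to taste:

```
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 88   -j DNAT --to-destination 192.168.1.189
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 3074 -j DNAT --to-destination 192.168.1.189
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3074 --syn -j DNAT --to-destination 192.168.1.189
```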

I hope that helps.

Peanutmonger
Dec 6, 2002

JoeNotCharles posted:

You've got lots of options for this.

If the program only uses $HOME for its config files, you can probably start it with "HOME=$HOME/.config appname". You might be able to use chroot to make it run completely in its own directory. Or you could write a library that redefines the "open" system call (at least) and load it with LD_PRELOAD. Or if you want to install one daemon which monitors everything system-wide, try libFAM. (That's a higher-level wrapper around inotify - use that instead of using the kernel directly. I don't know too much about it so I'm not sure if it's completely suitable.)

What about the easiest way, symlinking ~/.original to ~/.config/.original? Those other options seem likely to break other things...

edit: However, I guess that still leaves a file in your home directory. But that's what hidden files are for, anyway.
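In case it's not obvious, the dance looks like this — done in a scratch directory here, and ".original" is just a made-up name standing in for whatever dotfile the app actually writes:

```shell
# Demo in a throwaway directory; swap $home for your real home dir and
# .original for the actual dotfile (the name here is hypothetical).
home=$(mktemp -d)
mkdir -p "$home/.config"
touch "$home/.original"                        # stand-in for the app's file
mv "$home/.original" "$home/.config/.original"
ln -s "$home/.config/.original" "$home/.original"
ls -l "$home/.original"                        # symlink -> $home/.config/.original
```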

Peanutmonger
Dec 6, 2002

Alowishus posted:

Do you need local GUI, or just GUI via VNC? Remember that a headless server can run remote VNC sessions... so if possible, save your memory by not running X on the server's display.

Do you mean remote VNC sessions, or X forwarding? X forwarding may be an option if everyone has an X server on their desktop, but if they plan on running long processes it might not work out so great.

Alternatively, if you need X to run on the server but don't need it (and/or can't get it to work) on a real display, try using 'Driver "dummy"' for your video device, just remember to set "VideoRam 32000" or something large enough to hold the frame buffer. When you fire it up, you can use the Xvnc module or x11vnc or whatever to vnc to it, and enjoy.
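If it helps, the xorg.conf section would look roughly like this (the identifier name is arbitrary):

```
Section "Device"
    Identifier "Dummy Card"
    Driver     "dummy"
    VideoRam   32000
EndSection
```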

Peanutmonger
Dec 6, 2002

Combat Pretzel posted:

Password auth is still possible. It's mainly just to stop scp from being annoying with passwords. :)

Check out ControlMaster in ssh_config, you can have new calls to ssh/scp use an existing connection. Just don't use it on a multi-user system if you don't trust the people who have root access to that system...
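A minimal ~/.ssh/config snippet for it — the socket path is just a common convention, put it wherever you like:

```
Host *
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
```

The first ssh to a host becomes the master; later ssh/scp calls to the same host ride its connection without re-authenticating.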

Peanutmonger
Dec 6, 2002

derdewey posted:

Today:
I tried just running fsck on /dev/sdc1 when it was unmounted and it spits out a can't read error. So now it's 100% dead? I shouldn't bother reformatting?

You could also try installing smartmontools and use smartctl to view the error log on the drive and run tests. That usually removes any doubt.

Peanutmonger
Dec 6, 2002

Kaluza-Klein posted:

Can some one explain to me how to change the default menu items in xfce?


First, the menu that's being used is almost certainly in your home directory, not in /etc. Second, I usually use the GUI editor to edit menu items. On the second tab of the desktop menu in the settings menu there's a box that launches the menu editor. That seems like a long way to find it, I thought there was a shorter way, but I can't think of one right now.

Peanutmonger
Dec 6, 2002

Pardot posted:

Also is there a way to see what keycodes buttons are? I'd like to remap the record button to something too, maybe.

You can use xev to verify the key code that it is showing up as. If it really is 115 and 116 I'm not sure what's wrong...

Peanutmonger
Dec 6, 2002

There Will Be Penalty posted:

EDIT: But quick summary: both files must be sorted. Then: join -v 1 file-1.txt file-2.txt

To be a bash ninja, you can use process substitution and do join -v 1 <(sort file-1.txt) <(sort file-2.txt)
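To see it in action with some made-up files:

```shell
# join wants sorted input; <(...) feeds it sorted streams on the fly,
# so no temp files are needed. (Requires bash, not plain sh.)
work=$(mktemp -d) && cd "$work"
printf 'pear\napple\nbanana\n' > file-1.txt
printf 'banana\npear\n'        > file-2.txt
join -v 1 <(sort file-1.txt) <(sort file-2.txt)   # prints: apple
```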

Peanutmonger
Dec 6, 2002

Supersonic posted:

Gonna try re-installing Arch and hope that works. Good thing that this is only a playing around machine and not production. This is why I don't like running Arch on my production machines.

Not to be a jerk, but why rebuild a machine for problems with X? Unless you have significant package manager issues (trampled/broken packages causing missing libraries or something) I can't see a reason for a full reinstall. I always thought that was the beauty of it all, not having to reinstall because one single package was misbehaving.

I haven't had any troubles with xorg-server-1.5, myself. I've been on it since I built this machine (G45 chipset woes). I also run without a xorg.conf, so I think I'm in a minority...

Also, the instructions Zom Aur is referring to appear to be just a link to this:
http://wiki.archlinux.org/index.php/Xorg_input_hotplugging
Which makes me think it's less an xorg-server 1.5 issue than an Arch default-options issue.

GuyGizmo posted:

I could, but I want to avoid ls if at all possible. The size in memory of a list of the full paths of every file in this folder could be over a gigabyte. I would estimate that at least 2 million of the files have a path that's longer than 130 characters.

You do realize piping doesn't fully produce the output of the first program before it starts the second, right? They're run in parallel, so the actual amount of memory used is just a buffer for pumping data between the two. You shouldn't be afraid of trying "find dir | wc -l" just to see how long it will take. It does seem like piping is inefficient, but I think it may be "efficient enough" that a better tool for the job hasn't been created.

If you don't believe me, consider using pipes that never stop producing output. If you do "cat /dev/urandom | od", you will see the octal dump of random data stream by. No waiting, no extreme memory usage.
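Easy to see with a stream that never ends:

```shell
# od starts printing immediately; head stops after two lines, the pipe
# closes, and everything exits -- no buffering of infinite input anywhere.
cat /dev/urandom | od -A n -t x1 | head -n 2
```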

Peanutmonger
Dec 6, 2002

NZAmoeba posted:

I've inherited this server, and found that I already have dsa keys in my /etc/ssh folder (apparently what they do on other servers uses dsa), though I have no idea if they were made with a passphrase or not (yay inherited undocumented servers!) Does this change anything?

I didn't see this made clear and it's rather important. DO NOT hand out the private key in /etc/ssh. It won't have a passphrase because it's the key used for sshd, the server. When you ssh to a box for the first time, you're seeing the fingerprint of the key in that directory. Whoever holds that private key can masquerade as that server. You really don't want to use that for anything else, especially not for a user.

epswing's tips should work, but a basic understanding of public key cryptography, especially when toying with ssh, is incredibly useful.

Peanutmonger
Dec 6, 2002

TekLok posted:

Is there any way at all to get Midnight Commander to match the default terminal colors? I've tried drat near to everything but it seems the only convention provided for changing that loving horrible white on bright blue scheme is in the poorly documented initialization file which gives you the ability to choose between... whoop!... sixteen other flamboyantly bright eye-burning colors. And no it will not read from rgb.txt or take hex values.

I'm pretty sure that since midnight commander is a text mode application you really only get to choose from the 16 or so ANSI colors. If you wanted different colors, you could change the color mappings in your terminal, but that would change them everywhere you see them in your terminal.

Peanutmonger
Dec 6, 2002

Fortuitous Bumble posted:

Is there some difference between the linux/unix traceroute and the Microsoft one? For some reason my apartment's internet connection seems to run through the local university which makes it impossible to use tracert for anything on Windows (it blocks it or something), but it works fine on my linux system. Just thought that was weird.

I'm going to guess that Windows traceroute uses ICMP pings for tracing, whereas your linux traceroute is using UDP packets for tracing. Somewhere along the line your ISP is dropping pings, but they can't reasonably drop UDP packets. Try the -I switch on the linux traceroute, it should then exhibit the same behavior as the Windows version.

Peanutmonger
Dec 6, 2002

Grigori Rasputin posted:

What's the best way to rip CDs in ubuntu? EAC is awesome and after a little looking it seems like there might not be anything quite as perfect on linux? I found a guide to set up EAC in WINE if needed.

Ideally I'd just like to write a script so that I can pop a CD in, it will get FreeDB info, rip to V0 and FLAC and then tag them.

Look into abcde. It makes ripping a CD literally typing "abcde", choosing the correct track info, and letting it go.

Peanutmonger
Dec 6, 2002

NZAmoeba posted:

Any idea on what I can do to try and diagnose the problem? I'm running Fedora 10.

RANCID is...special. I can't guess at what it's getting hung up on, though the bare-bones configuration given in the guide seems smaller than it should be. If you look at the default that comes packaged with RANCID you'll see what I mean. Also, you might want to check permissions on directories, files, etc., although I would expect it to break rather than hang in that case.

Peanutmonger
Dec 6, 2002

TekLok posted:

What's the best way to verify the integrity and completeness of a large transfer with many files (i.e. after backing up to an external usb hdd)? Before I format the old system I want to make sure I didn't lose ANYTHING in the 800gb transfer, including many small files. File system is ext4.

You could also use diff with -r and -q (recursive and brief, to only show different files) on the source and destination directories. If there's no output, that means nothing was different, which means nothing was missing/extra.
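For example, with a pair of throwaway trees:

```shell
# Throwaway trees: identical at first, then one file drifts.
work=$(mktemp -d) && cd "$work"
mkdir src dst
echo same > src/a.txt
cp src/a.txt dst/a.txt
diff -rq src dst && echo "trees match"      # no diff output, exit 0
echo changed > dst/a.txt
diff -rq src dst || true                    # -> Files src/a.txt and dst/a.txt differ
```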

Peanutmonger
Dec 6, 2002

Interlude posted:


So, which of these is preferable? Or does it not matter? Would the command in option #1 have to be reissued on reboot? The cron option seems more permanent.

I think most distros provide an init script for spawning mdadm to watch for failures when you install mdadm, so it would probably be easiest to use that. That's what I use, anyway, and it's worked so far (I've gotten emails for RAID issues in the past from it).

Peanutmonger
Dec 6, 2002

HatfulOfHollow posted:

Keep in mind that when you do this in scripts you may need to supply the full path to the binary since the command is being run without your environment variables, specifically your path.

I've found it better to just set PATH at the beginning of your script to a known good value, say, "PATH=/bin:/sbin:/usr/bin:/usr/sbin". It makes the script a lot more readable if you don't have to slap long paths in front of every command, and the script will fail the same way when you test it no matter what outlandish PATH you might normally use. Plus, you'll end up using built-ins if they're available. bash has built-ins for the simple commands, like echo, printf, test, time, pwd, and so on. If you type "help" it shows you all of them.
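For instance, a script skeleton along those lines:

```shell
#!/bin/bash
# Pin PATH to a known-good value up front so the script behaves the same
# no matter what the caller's environment looks like.
PATH=/bin:/sbin:/usr/bin:/usr/sbin

echo "using PATH=$PATH"
type echo printf test    # all three resolve to shell builtins
```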

Peanutmonger
Dec 6, 2002

Wuhao posted:


Does anyone know what I might be doing wrong here?

I've done some stuff with bridging but the only thing that really caused me problems was STP (as in, nothing worked while spanning tree took its sweet time converging). You can verify that your mac addresses are learned and spanning tree isn't causing problems with brctl's showmacs/showstp commands.

Peanutmonger
Dec 6, 2002

huge sesh posted:

I'll test some more in the morning, but I didn't have any problems with a simpler version of my script in Linux--it might be some mysterious difference between the BSD and Linux implementation, if that's possible.

I don't have a BSD box handy, but I do know that pipes are supposed to send an EOF to the reader when the writer closes the pipe. Since mplayer is used to fixed-length files, I would expect it to assume that EOF means the last of what's coming. Named pipes also tend to expect one reader matched to one writer, so if you have multiple readers you might get unexpected results (so I wouldn't rely on "keeping it open" with cat). You might have better luck with `mplayer <pipe` instead of passing the pipe's path to mplayer directly.

On the other hand, do you need to use pipes? It would be a lot easier to do something like:
code:
(command file; command file; command file) | mplayer -
bash's command grouping is better at aggregating output from multiple commands than named pipes are. If you're also saving the output, you could do:
code:
(command file; command file; command file) | tee savefile | mplayer -

Peanutmonger
Dec 6, 2002

flyboi posted:

<script>

Bash has much easier ways to test writability, among other things. See "help test" to see various tests that can be performed by [ (also known as test, see "man test"). For example:
code:
#!/bin/bash

STATE_OK=0
STATE_CRITICAL=2

if [ -w "${1}" ]
then
        echo "${1} IS WRITABLE"
        exit $STATE_OK
else
        echo "ERROR ${1} IS READ ONLY"
        exit $STATE_CRITICAL
fi

Peanutmonger fucked around with this message at 03:19 on Jan 28, 2010

Peanutmonger
Dec 6, 2002

Carthag posted:

Any suggestions for a program that'll give me here-and-now output of how much a specific process is using in bandwidth? Could also be how much consumed since last check, or something similar.

The only thing I can think of for this is to create iptables rules to match the traffic (it's perfectly normal to have iptables rules that don't accept or deny traffic but simply collect statistics).

For example, if you ran:
iptables -I OUTPUT -p tcp --sport 80

You would add an entry that records statistics at the top of the OUTPUT chain for traffic originating from port 80. Then, later on, you can see this in iptables -vnxL OUTPUT:

code:
Chain OUTPUT (policy ACCEPT 9632524 packets, 13560594421 bytes)
    pkts      bytes target     prot opt in     out     source               destination         
    1146  3505316            tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp spt:80 

Then you just run it multiple times and watch the numbers climb.

There are some drawbacks, though. You can't view this as non-root, and there doesn't seem to be an easy scripting interface (you have to run iptables and parse its output). But it's the only way I know of to gather statistics for specific types of traffic on a host without impacting performance (which a packet-capture-based statistics gatherer will definitely do).

Peanutmonger
Dec 6, 2002

rugbert posted:

ahhhh yes, that term was rolling around in my head as I wrote it but I wasnt sure about it. Thanks!

Watch out, as --update doesn't remove old versions. Technically, you'd want that with backups, but it also means your tarball will grow and grow. For more robust incremental backups you should use something like rsnapshot. Even if you use it for something as simple as only keeping two previous versions it can still be a lifesaver.
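If you want to stay with tar, GNU tar's listed-incremental mode is a middle ground worth knowing about (GNU tar only; all the file names here are made up):

```shell
# The .snar snapshot file records what's already been archived; each
# subsequent run only grabs files that are new or changed since then.
work=$(mktemp -d) && cd "$work"
mkdir data
echo one > data/a
tar -cf full.tar --listed-incremental=state.snar data
echo two > data/b
tar -cf incr.tar --listed-incremental=state.snar data
tar -tf incr.tar    # only the new file (plus the directory entry)
```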

Peanutmonger
Dec 6, 2002

Kilson posted:

I don't care how it's accomplished, but is such a thing even possible? Aliasing or bridging didn't seem to be the answer, but maybe I'm missing something.

If you don't have the different subnets divided onto different virtual LANs, you don't even need to do any interface magic. Testing on my home network, I'm able to use arping to receive ARP replies from a different subnet (though I don't know if Windows is picky about the source IP).

code:
# arping -I eth1 -s 10.1.1.2 192.168.100.2
ARPING 192.168.100.2 from 10.1.1.2 eth1
Unicast reply from 192.168.100.2 [00:57:f1:b3:42:56]  0.664ms
If you actually do need an IP on the subnet, you want aliasing. With ifconfig you have to create extra interfaces, whereas iproute2 represents them more clearly as just another IP address on the interface.

For ifconfig:
ifconfig eth0:0 10.1.1.1 netmask 255.255.255.0

You also need to have unique virtual names with the ifconfig method. eth0:0, eth0:1, eth0:2...

For iproute2:
ip addr add 10.1.1.1/24 dev eth0

Either way, you should then be able to communicate using 10.1.1.1 to anything on that subnet. arping doesn't look like it's smart enough to figure that out on its own, but once you source it from the right IP and tell it to go out the right interface it works. The tool you're using might be intelligent enough to do so on its own.

You can remove the ifconfig interfaces by setting it to the down state "ifconfig eth0:0 down". For the iproute2 method, just change the "add" to a "del".

Hope that gets you on your way.

Peanutmonger
Dec 6, 2002

Kilson posted:

This doesn't actually work (hence my question in the first place), you still get the gateway address only.


Wow, thinking about this has hurt my brain a little. :) This definitely sounds like how something like a private VLAN setup would behave (as in, completely different from normal), especially with proxy ARP:
http://en.wikipedia.org/wiki/Private_VLAN

I mistakenly thought you were the network admin, or at least working with them. They'll (hopefully) be able to instantly tell you whether or not it will work, whether it's due to some restrictive network design or multiple VLANs. In a simple (crappy) setup this should work. I think it's obvious yours is not simple...

Peanutmonger
Dec 6, 2002

FISHMANPET posted:

It's a regular expression.

Actually, as indirectly mentioned by ShoulderDaemon, shell expansion doesn't use regexes, it uses globbing, and there's a big difference between the two. See `man 7 glob` vs `man 7 regex`. There are some very important distinctions, so it's a good idea to distinguish between the two when discussing pattern matching expressions.
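A quick contrast (the pattern and file name are arbitrary):

```shell
name=report.txt

# Glob: * and ? are the wildcards, matched against the whole word.
case $name in
  *.txt) echo "glob: matches" ;;
esac

# Regex: . means "any one char", * is a repeat quantifier, and you
# anchor explicitly with ^ and $.
echo "$name" | grep -q '^.*\.txt$' && echo "regex: matches"

# Same characters, different meanings: the glob 'r*.txt' matches this
# name, while the regex 'r*.txt' means "zero or more r's, any single
# character, then txt".
```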

Peanutmonger
Dec 6, 2002

elite burrito posted:

uh, I'm p sure that stateful inspection has been in netfilter for about 10 years. if I remember correctly the timeout on newly opened udp sessions (before any return data comes across) is 30 seconds.

I think his point is that even though there's stateful inspection within netfilter you still have to have a rule that enforces it. There are no implicit drop/allow decisions based on state. Thus, it's entirely possible to design a set of rules where outgoing connections ARE impacted by fail2ban rules. You just have to make the mistake of evaluating state after you evaluate your fail2ban rules. With that in mind, a blanket guarantee that state will always allow traffic to flow is incorrect.
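In iptables terms, the distinction looks something like this (fail2ban-ssh is a hypothetical chain name):

```
# Safe order: established traffic is matched before the fail2ban chain,
# so replies to connections you initiated can never be caught by a ban.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -j fail2ban-ssh
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Broken order: evaluate fail2ban-ssh before the state rule and a banned
# source kills even the return traffic of your own outgoing connections.
```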

Peanutmonger
Dec 6, 2002

waffle iron posted:

Should I declare bankruptcy and just change the uids/guids on my laptop to match my server and go on with my life?

As far as I've discovered, idmapd will take the names passed on the wire via NFSv4 and map them to uids, but nothing will work if you're using AUTH_SYS and your uids don't match. It's terrible, it's annoying, etc etc. Hey guys, let's make a protocol that makes coordinating uids not required, then make the default security mode require coordinated uids.

Here's a reference on the subject:

http://www.vanemery.com/Linux/NFSv4/NFSv4-no-rpcsec.html#intro posted:

NFSv4 supports several security flavors, including AUTH_SYS and RPCSEC_GSS. AUTH_SYS (also known as AUTH_UNIX) represents the traditional low-security model found in NFSv2/v3. AUTH_SYS provides UNIX-style credentials by using UIDs and GIDs to identify the sender and recipient of RPC messages.

Apparently there are references to an AUTH_NONE mode in docs, but I can't get it to work after 5-10 minutes of poking at it. Rather, I can get AUTH_NONE to give me a read-only mount, but I can't get write access (and it's hard to verify that it's actually working).

To be honest, I just keep all my uids the same and try to focus on less frustrating things.


Peanutmonger
Dec 6, 2002

Thermopyle posted:

I'm doing an online filesystem resize with resize2fs. I have no idea how long this is supposed to take. Anyway to see progress on this? Alternatively, any ideas how long it should take to resize from 3TB to 6TB?

You could do some sloppy approximations by using df to look at the current size of the volume you're resizing. In my experience, it adds the new space in chunks, so the volume slowly grows to its new full size.
