xzzy
Mar 5, 2009

Usually it means setting the account's shell to /sbin/nologin so that no one can get a shell as that user.

The "outside login" part is less common; I'm guessing it implies that local access (i.e., su) is acceptable but coming in through ssh is not. In that case you'd use sshd's weird as poo poo "Match" option to build a list of usernames you forbid from authenticating.

I wouldn't bother and would just lock the account completely.
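A sketch of both approaches, for the record (usernames made up, and exact sshd_config directives vary a bit by OpenSSH version):

```shell
# Lock the account completely (what I'd do):
usermod -s /sbin/nologin alice   # no shell, so no interactive login
passwd -l alice                  # and disable the password for good measure

# Or forbid only ssh while leaving local su alone.
# In /etc/ssh/sshd_config, either a plain top-level
#   DenyUsers alice bob
# or the Match route, turning off every auth method for them:
#   Match User alice,bob
#       PasswordAuthentication no
#       PubkeyAuthentication no
```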


xzzy
Mar 5, 2009

Pretty much. "Environments" are a giant waste of time.

Fortunately it's easy to avoid them if you're the kind of nerd that cares about that sort of thing. Install xorg, install a window manager, and you're done.

(step two is the hard part; perusing lists of window managers for THE ONE has been an ordeal since the mid-90s and never got easier. Well, somewhat easier, because now Wikipedia has a surprisingly good list of available wms.)

xzzy
Mar 5, 2009

That's the job of the oom killer.

It generally keeps the OS from going tits up. But it also doesn't, which keeps life exciting!

xzzy
Mar 5, 2009

Ulimit can't handle child processes (each fork gets its own fresh limit, so the total is unbounded), and nice only throttles if something higher priority is waiting on CPU.

Cgroups are the modern way to do it.

Or just don't run make with -j 1024.
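On a systemd box the easy way into cgroups is systemd-run, which parks the command and every child it forks in its own cgroup, so the whole tree shares the limit. A sketch, assuming cgroup v2 and made-up limits:

```shell
# The entire process tree shares the quota, so make -j8 can't eat the box:
# CPUQuota=200% means "at most two CPUs' worth of time"
systemd-run --scope -p CPUQuota=200% -p MemoryMax=8G make -j8
```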

xzzy
Mar 5, 2009

No, to get the fast charging you're going to need a dedicated charge device. USB spec can't deliver the amps modern phones ask for, which is why none of the high output stuff can handle data.

xzzy
Mar 5, 2009

Almost all of those file systems are virtual: they're provided by the kernel as access points to various features.

Looks like the only real partition on that system is /dev/sda5 so yes that's where all your programs will end up.

xzzy
Mar 5, 2009

I guess conceptually the /sys /run and /dev paths can be thought of as an API to the kernel itself. They give you an easy way to fetch information about the kernel and hardware with standard unix tools. In some cases you can change settings in the kernel in those areas as well. /proc fits this category too, though it doesn't show up in a df.

For example when you type the command "mount" to get a list of all mounted disks, it's actually pulling that information from /proc/mounts.
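You can see it for yourself; a quick sketch:

```shell
# mount(8) is just formatting the kernel's own mount table:
head -3 /proc/mounts

# /proc is in there even though df skips it:
grep '^proc' /proc/mounts
```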

xzzy
Mar 5, 2009

No, if you blank out the root password in shadow you'll never be able to log into that account again.

Never heard of a magic file resetting the root password either, probably some IT hack that someone implemented and it made it to the internet. Seems like a horrible idea to me.

xzzy
Mar 5, 2009

anthonypants posted:

If some automated process changed the root password to a random string, wouldn't that also mean you'd never be able to log into that account again?

Yep!

Which makes me think it's some kind of daemon someone wrote that looks for a special file, and if one is found, prompts on boot for a password reset.

xzzy
Mar 5, 2009

You can't su to accounts (even as root) that have an invalid binary for their shell (/sbin/nologin or /bin/false are common choices for these cases).

xzzy
Mar 5, 2009

RFC2324 posted:

I've always understood this to be the only real way to completely lock shell access, but what if I do a 'su -c /bin/bash -u <user>' ? I just thought of it, and wonder if it would work.

-c won't get you in (it passes the command to the user's login shell, which is still nologin), but -s will.
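To make the difference concrete (username hypothetical):

```shell
# -c feeds the command to the account's login shell, which is nologin:
su -c id alice          # prints "This account is currently not available."

# -s swaps the shell out entirely, so root still gets in:
su -s /bin/bash - alice
```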

xzzy
Mar 5, 2009

I still have a usable 32MB flash drive. It has FreeDOS on it, and we use it to run DOS-only tools on our servers (usually RAID controllers dumping diag info for the vendor, because of course they'd never want to make Linux-friendly tools; no one ever uses Linux in server environments).

xzzy
Mar 5, 2009

evol262 posted:

Docker has a lot of nasty docker caveats unless you're running a very new version.

There's a RHEL channel and a CentOS SIG which tracks this fairly closely, though

I ran face first into the file system namespace issue this past week. Rumored to be fixed for rhel 7.4. :downs:

xzzy
Mar 5, 2009

Nah, Gentoo takes it too far. You want to cause pain, but not murder anyone.

Slackware is what your first distro should be. :colbert:

(which I haven't installed for 10+ years so maybe it's super easy now but I'd be surprised)

xzzy
Mar 5, 2009

The way it's being sold to people where I work is "you can write a config file to create whatever environment you want, then run your analysis/simulation/number crunching code ANYWHERE!!"

Which is a huge draw in a world where root access is strictly controlled and researchers are constantly begging for new privileges.


I personally am using it as a load balancer on the cheap because no one here can afford a proper solution... lots of containers with haproxy daemons.

xzzy
Mar 5, 2009

Vulture Culture posted:

I hate to be the bearer of bad news, but docker run -it -v /:/hostroot alpine

We fixed it with sudo-enabled wrapper scripts to sanitize -v. :downs:

edit - and other stuff we don't want users to do.
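For the curious, the core of such a wrapper is nothing fancy. A sketch, assuming only /data may be bind-mounted (the prefix and names are hypothetical, and the --volume=SRC:DST form would need handling too):

```shell
# Hypothetical sketch: reject any -v/--volume whose host-side path
# is not under the allowed prefix (/data here).
sanitize_volumes() {
    prev=
    for arg in "$@"; do
        if [ "$prev" = "-v" ] || [ "$prev" = "--volume" ]; then
            case "$arg" in
                /data/*) ;;   # allowed host prefix, let it through
                *) echo "forbidden mount: $arg" >&2; return 1 ;;
            esac
        fi
        prev=$arg
    done
    return 0
}

# The real wrapper would then be roughly:
#   sanitize_volumes "$@" && exec sudo docker run "$@"
```

So `docker run -it -v /:/hostroot alpine` gets refused before sudo ever runs.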

xzzy
Mar 5, 2009

Well until Docker pulls their head out of their rear end regarding basic system security, I'm not sure what other options there are.

(I don't consider "users submit compose files to me and I spin up and maintain the container" an option, and it would be rejected by everyone anyways because docker is being sold to users as a do-anything-you-want product)

xzzy
Mar 5, 2009

I've taken to using /srv for "content storage" because /opt to me always sounded like that's where optional software goes, and if you get spergy enough to look it up that's what it was designated as.

Which is also what I think of /usr/local as being. But /usr/local was born to deal with /usr being an NFS mount (back when hard drives were too small to hold a full set of binaries), and /usr/local was for when you actually needed some binaries stored locally.

So basically everything is all poo poo and use whatever makes you happy because there's a billion different interpretations floating around out there.

xzzy
Mar 5, 2009

Yes, that's how it works. Almost all dockerfiles on the hub use inheritance to simplify their configuration. If you were to trace it all the way back to the top (say, the ubuntu or centos dockerfiles) you'd eventually discover they're doing nothing more than unpacking a tgz to create the base image. Then child dockerfiles add whatever they need on top of that.

It's a pretty convenient system once you get cozy with it.

(certainly is a billion times better than old chroot setups which I had to maintain in the bad old days and I still get grumpy thinking about having to touch those environments)
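A hypothetical sketch of that chain (the image name and package are illustrative):

```dockerfile
# The base image's Dockerfile boils down to unpacking the tarball:
#   FROM scratch
#   ADD centos-7-rootfs.tar.xz /

# Every child then just layers on top of it:
FROM centos:7
RUN yum install -y httpd
CMD ["httpd", "-DFOREGROUND"]
```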

xzzy
Mar 5, 2009

I don't know grafana (which is a failure on my part that I keep meaning to fix but :effort:), but when I see problems like that with ganglia it's usually an issue with the rrd creation: it doesn't have an RRA that can provide metrics at the requested time resolution.

xzzy
Mar 5, 2009

Because su doesn't change your context; as far as it's concerned you're still root.

Run the su as a normal user and see what you get out of id -Z.

xzzy
Mar 5, 2009

Contexts and transitions in selinux are clear as mud; it's the sort of thing I know intuitively at this point but can't talk about competently at all.

I usually end up re-reading the gentoo docs, which have a pretty verbose description of the process:

https://wiki.gentoo.org/wiki/SELinux/Tutorials/How_does_a_process_get_into_a_certain_context

IMO, it's made worse by the fact that there's no well-documented way to actually trace what selinux is doing as you start processes. Really puts a crimp on learning through experimenting.

xzzy
Mar 5, 2009

Depends... how much scripting of your own do you want to do?

Because we swear by check_mk where I'm at; you can do anything with notifications you can dream up. But it doesn't do much out of the box.

xzzy
Mar 5, 2009

Paul MaudDib posted:

If there's good tools and strategies to build reliable, non-brittle logging sure, I don't care about one-time write costs for setting up the rules. Is Nagios suitable for use over a network connection for (eg) a rack of machines hosting docker container services dumping their stdout/stderr? i.e. potentially hundreds/thousands of channels open at once? I'd probably be running it against a Postgres DB.

I don't know about the stdout/stderr part, wait a few months and I'll let you know as we're working on that right now. :v:

But our install is monitoring 2700 servers with 140k checks and it handles it well. The only headache is inventorying hosts, a config reload can take 5 minutes.

Intuitively I'd think you'd run into issues with that many open file handles, no matter what monitoring tool you choose. But check_mk lets you do custom health checks asynchronously, you drop a script in a special folder that gets run every so often and dumps a status string to a file, which check_mk will scoop up and return when the central server phones in.
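A sketch of what one of those async local checks might look like (the service name is made up; the one-line output format is check_mk's status, name, perfdata, text):

```shell
#!/bin/sh
# Hypothetical check_mk local check: dropped in the agent's local/
# directory, and the agent scoops up whatever it prints.
# Format: <status> <service_name> <perfdata|-> <status text>
# Status codes: 0=OK, 1=WARN, 2=CRIT; "-" means no perfdata.
if pgrep -x haproxy >/dev/null 2>&1; then
    echo "0 haproxy_process - haproxy is running"
else
    echo "2 haproxy_process - haproxy is not running"
fi
```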

Regardless, nagios scales way better than the other open source options we've tried in the past (big brother, zabbix, ganglia). It is a monster to configure though.

xzzy
Mar 5, 2009

ELK seems pretty slick for visualizing what's going on, but how easy is it to generate alerts or interface with ticketing systems? Is it roll your own territory?

edit - yes I saw the bosun link. it seems like a pretty steep ramp up, is that true?

xzzy
Mar 5, 2009

jre posted:

It is quite a steep ramp, not as easy as using a point and click interface. However basic threshold alerts are awful if you are going to be woken by them. Depends on who the users of the system are going to be.

Yeah, it seems like everything is. We've spent years optimizing our nagios install to get it exactly where we want it.

elasticsearch+grafana is the current hotness and people around me keep talking about it, but gently caress if I want to start over from scratch again.

xzzy
Mar 5, 2009

CentOS is pretty solid, but has the same issue as any RHEL derivative: it's years behind the latest and greatest.

I would never want to use it on a desktop. In a server room though? Nothing out there I'd prefer more.

(though if you're going headfirst into docker, I'm really digging CoreOS)

xzzy
Mar 5, 2009

Where I'm at, version updates are generally limited by compatibility. Physicists write code for an experiment or one-off chunk of hardware that has a 15+ year lifecycle; you try to tell them they need to upgrade their gcc and they're gonna flip out on you. And since they bring in the grant money, they always win.

Virtualization has helped a lot, if they truly are incapable of updating their code to work on a new OS that at least gives us a way to give them a sandbox. But we've done chroot jails in the past as well.

Our final RHEL4 based system was removed from duty just last year. :v:

xzzy
Mar 5, 2009

I was spitting nails last night trying to get an unencrypted adhoc network set up on my raspberry. Doing it with four iw commands? Easy, had it working in two minutes.

Doing it "the right way" with netctl and systemd? gently caress off forever. Options in the man pages don't do what they say, and there's no documentation anywhere for nonstandard configurations.

End result is I gave up and wrote a systemd unit to run a script.

xzzy
Mar 5, 2009

evol262 posted:

So systemd is still doing its job? systemd-network isn't the default anywhere I know of. I'd probably do the same

What job is that? Obfuscating system init so it's impossible to trace what it's actually doing? Because gently caress yeah it's doing a great job of it. :v:

Perhaps my failure comes from using arch documentation. Too bad when you google for this stuff that's always the top link.

xzzy
Mar 5, 2009

What would you recommend?

xzzy
Mar 5, 2009

I'm just mad about systemd's documentation. freedesktop.org has an exhaustive reference and that's good, but in every situation where you want to figure out how to do something, you end up reading someone's blog post where they give you a recipe for one specific configuration, and that's all you're getting on the topic because no one cares about building context of what's going on.

It's great once you know the answer, it's just a shame getting to expert status requires flailing around in quicksand for hours.

xzzy
Mar 5, 2009

I did IT for a software startup a long loving time ago, and they had a single HP-UX system for a single client that needed the company's software to run on HP-UX.

It was by far the most painful unix based system I ever managed, so much stuff was sort of like solaris or linux, but just different enough that you had to retrain your brain for simple tasks.

IRIX and Solaris were trivial by comparison (though back then, Solaris was also pretty much poo poo before you added all the gnu packages).

xzzy
Mar 5, 2009

joewm, because it's easy on dependencies, no eye candy, and pretty good ability to set key bindings exactly how you want them.

I guess the eye candy thing means you're not interested though. :v:

herbstluftwm is my favorite tiler, but I don't really need that anymore because I pretty much run only three windows (terminator with all my xterms, slack, and a browser).

xzzy
Mar 5, 2009

Having X installed on a server is fine (and frequently requested, users love their vnc) but leaving one running full time? That's just crazy talk!

xzzy
Mar 5, 2009

That's why my group still uses a home brew provisioning package I cobbled together from scratch like ten years ago. Every so often we review newer stuff like cobbler or spacewalk or most recently satellite/foreman and all of it dumps you into a burning hell of clicking on bad gui's and praying every time you think about updating the software to fix bugs.

So we continue to use my garbage solution because it Just Works. :v:

xzzy
Mar 5, 2009

~. is ssh's special bailout command.

Sometimes you have to clear the buffer by pressing enter once or twice first; ssh only recognizes tilde commands at the start of a line, so it ignores a tilde it thinks is inside a string.

xzzy
Mar 5, 2009

You first need to decide what you consider ideal for monitoring. Is logging to a file inside the container good enough? Or do you intend to dump output to stdout/stderr and use docker commands to monitor? Or will an external process be running docker exec commands to determine status?

As for starting the processes, the two main options I know of are systemd and supervisord. systemd obviously comes with a lot of baggage. supervisord is kind of cool in that it has a simple config file and works well with docker.
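A hypothetical supervisord.conf for a container that really wants two daemons (the programs are illustrative):

```ini
; supervisord runs as PID 1 in the container and restarts
; children if they die.
[supervisord]
nodaemon=true

[program:haproxy]
command=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -db
autorestart=true

[program:rsyslog]
command=/usr/sbin/rsyslogd -n
autorestart=true
```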

xzzy
Mar 5, 2009

jre posted:

The entry point for the container should start the process, and the container should die if the process stops, best to avoid supervisor.
Don't keep state in the container; log to stdout and store it in your centralised logging.

I understand that's considered best practice, but from what I've experienced the "one process per container" approach falls apart fast when you actually want to provide a service.


xzzy
Mar 5, 2009

Vulture Culture posted:

I get that not every single application is a good fit for the Docker/12-factor model, but suggesting that the approach fails "when you actually want to provide a service" is kind of a ridiculous thing to say when other companies' services running on Docker-style containerization platforms are serving literally billions more users per month than whatever it is that you're hosting

drat, straight for the dick measuring?

I am thoroughly owned!
