RFC2324
Jun 7, 2012

http 418

Satanic snipe

evol262
Nov 30, 2010
#!/usr/bin/perl

apropos man posted:

So I feel like I'm kinda good enough for what the exam is asking of me but I know what you mean: there is something a little bit esoteric about SELinux and I feel like I'd have to be specifically dealing with it on a daily basis to have deep knowledge of it.

If you want to write custom policies, yeah. Otherwise, you only really need to understand transitions with httpd_exec_t -> httpd_t, for example.

/etc/sysconfig/selinux is a plaintext file and will show you what happens where.
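
You can see that transition with the standard tools, e.g. on a stock Apache install:

code:
# the binary carries the httpd_exec_t type...
ls -Z /usr/sbin/httpd
# ...and the running processes have transitioned into the httpd_t domain
ps -eZ | grep httpd
# current mode and loaded policy (the same things /etc/sysconfig/selinux controls)
sestatus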

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

evol262 posted:

If you want to write custom policies, yeah. Otherwise, you only really need to understand transitions with httpd_exec_t -> httpd_t, for example.

/etc/sysconfig/selinux is a plaintext file and will show you what happens where.

The only part I'm worried about is AutoFS. I finished the Jang book a few weeks ago and tonight I finish the Ghori book. I need to do some last minute AutoFS cramming tomorrow night and hope that it slots into place.

LochNessMonster
Feb 3, 2005

I need about three fitty


apropos man posted:

The only part I'm worried about is AutoFS. I finished the Jang book a few weeks ago and tonight I finish the Ghori book. I need to do some last minute AutoFS cramming tomorrow night and hope that it slots into place.

Pro tip, learn autofs inside out. Also with ldap authentication.
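
If you haven't labbed it yet, the basic shape is a master map pointing at an indirect map; a minimal NFS home-directory setup looks roughly like this (the server and export path are made up):

code:
# /etc/auto.master
/home    /etc/auto.home

# /etc/auto.home  ("&" expands to the key, i.e. the username)
*    -rw,soft    nfsserver.example.com:/export/home/&

Restart autofs and cd into /home/<user> to trigger the mount. For the LDAP part, the maps live in the directory instead of flat files and get pulled in via nsswitch (sss/ldap), which is exactly the bit worth practising.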

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
I want to set up a centralized logging/notification system for high-level application/system events across a bunch of applications, and send notifications to services/email/a phone app/etc according to configurable filters. Is there an open-source way to do this with good compatibility across the variety of notification systems?

Paul MaudDib fucked around with this message at 03:07 on Jun 8, 2017

xzzy
Mar 5, 2009

Depends.. how much scripting on your own do you want to do?

Because we swear by check_mk where I'm at; you can do anything with notifications you can dream up. But it doesn't do much out of the box.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

xzzy posted:

Depends.. how much scripting on your own do you want to do?

Because we swear by check_mk where I'm at; you can do anything with notifications you can dream up. But it doesn't do much out of the box.

If there are good tools and strategies to build reliable, non-brittle logging, then sure, I don't care about one-time write costs for setting up the rules. Is Nagios suitable for use over a network connection for (e.g.) a rack of machines hosting docker container services dumping their stdout/stderr, i.e. potentially hundreds/thousands of channels open at once? I'd probably be running it against a Postgres DB.

Paul MaudDib fucked around with this message at 05:17 on Jun 8, 2017

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

LochNessMonster posted:

Pro tip, learn autofs inside out. Also with ldap authentication.

I will do that tonight. Thank you Sir.

jre
Sep 2, 2011

To the cloud ?



Paul MaudDib posted:

I want to set up a centralized logging/notification system for high-level application/system events across a bunch of applications, and send notifications to services/email/a phone app/etc according to configurable filters. Is there an open-source way to do this with good compatibility across the variety of notification systems?

Centralised open source logging = ELK

high-level application/system events:
What do you mean by this?

You can set up collectors on all your servers and store system metrics in a time series db like graphite/opentsdb/influx then use grafana to visualise those, and bosun to alert on them.
For true application-level event monitoring you'd need to rewrite the app to send metrics to your time series database.
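
The wire format is trivial, for what it's worth; Graphite's plaintext protocol is just "name value timestamp" on port 2003 (the hostname and metric below are made up):

echo "myapp.orders.processed 42 $(date +%s)" | nc graphite.example.com 2003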

Alerting:
For simple alerting grafana now has threshold based alerts built in and integration with most of the common paging platforms.

Bosun is an interesting piece of software because you can generate alerts from metrics, but also from Elasticsearch searches, so you can alert on the presence or absence of specific messages in the logs.

All of the above is a non-trivial amount of work, and if you've got a reasonable number of servers and don't have a dedicated ops/SRE team, you'd probably be better off using a hosted, non-open-source product like Datadog or SignalFx for the metrics portion.

jre fucked around with this message at 08:11 on Jun 8, 2017

xzzy
Mar 5, 2009

Paul MaudDib posted:

If there are good tools and strategies to build reliable, non-brittle logging, then sure, I don't care about one-time write costs for setting up the rules. Is Nagios suitable for use over a network connection for (e.g.) a rack of machines hosting docker container services dumping their stdout/stderr, i.e. potentially hundreds/thousands of channels open at once? I'd probably be running it against a Postgres DB.

I don't know about the stdout/stderr part; wait a few months and I'll let you know, as we're working on that right now. :v:

But our install is monitoring 2700 servers with 140k checks and it handles it well. The only headache is inventorying hosts; a config reload can take 5 minutes.

Intuitively I'd think you'd run into issues with that many open file handles, no matter what monitoring tool you choose. But check_mk lets you do custom health checks asynchronously: you drop a script in a special folder that gets run every so often and dumps a status string to a file, which check_mk will scoop up and return when the central server phones in.

Regardless, Nagios scales way better than the other open-source options we've tried in the past (Big Brother, Zabbix, Ganglia). It is a monster to configure, though.
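
For what it's worth, a local check is just a script that prints one status line per service, dropped into the agent's local/ directory (usually /usr/lib/check_mk_agent/local). Hypothetical example:

code:
#!/bin/bash
# hypothetical local check: complain about zombie processes
# output format is "<status> <service_name> <perfdata> <text>"
zombies=$(ps -eo stat= | grep -c '^Z')
if [ "$zombies" -gt 5 ]; then
    echo "2 Zombie_Procs zombies=$zombies CRIT - $zombies zombie processes"
elif [ "$zombies" -gt 0 ]; then
    echo "1 Zombie_Procs zombies=$zombies WARN - $zombies zombie processes"
else
    echo "0 Zombie_Procs zombies=$zombies OK - no zombie processes"
fi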

LochNessMonster
Feb 3, 2005

I need about three fitty


Seconding the ELK recommendation.

xzzy
Mar 5, 2009

ELK seems pretty slick for visualizing what's going on, but how easy is it to generate alerts or interface with ticketing systems? Is it roll your own territory?

edit - yes I saw the bosun link. it seems like a pretty steep ramp up, is that true?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

jre posted:

You can set up collectors on all your servers and store system metrics in a time series db like graphite/opentsdb/influx then use grafana to visualise those, and bosun to alert on them.
Grafana can actually handle the alerting itself now if you don't need Bosun's crazytown expression language

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

xzzy posted:

ELK seems pretty slick for visualizing what's going on, but how easy is it to generate alerts or interface with ticketing systems? Is it roll your own territory?

edit - yes I saw the bosun link. it seems like a pretty steep ramp up, is that true?
Every integration on a worthwhile piece of software falls at least vaguely into roll-your-own territory. If you find something that has every integration you're looking for, it's because no development effort actually went into the core product.

JIRA's actually supported if you happen to use Service Desk, though.

Docjowles
Apr 9, 2009

Logstash has an assload of output plugins you can use to send something to PagerDuty / JIRA / email / whatever, if a log line matches a given pattern. You can google for other third-party plugins, too, like Slack.

https://www.elastic.co/guide/en/logstash/current/output-plugins.html#output-plugins
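
To give an idea, an output section with a conditional looks something like this (the field, the key, and the exact plugin options are placeholders / from memory, so check that page):

code:
output {
  elasticsearch { hosts => ["localhost:9200"] }

  # page someone when a log line matches a pattern
  if [message] =~ /FATAL/ {
    pagerduty {
      service_key => "YOUR_PAGERDUTY_INTEGRATION_KEY"
      description => "%{host}: %{message}"
    }
  }
}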

Cidrick
Jun 10, 2001

Praise the siamese
As an aside, I haven't touched Logstash since I started using fluentd in its place. It's much easier to work with in my opinion, and doesn't require having a JDK/JRE installed on your boxes just to parse logs.

fluentd can't natively ship logs to elasticsearch, but you can install a quick ruby gem to enable that for you. We've been using fluentd for a few months now to collect docker logs and ship to elasticsearch, and it's been flawless.
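
If anyone wants to try it, the gem is fluent-plugin-elasticsearch and the config stays pretty small. Roughly this, assuming the docker daemons use the fluentd log driver (host and tag below are placeholders):

code:
# gem install fluent-plugin-elasticsearch   (or td-agent-gem install ...)

# docker's fluentd log driver ships to the forward input by default
<source>
  @type forward
  port 24224
</source>

# write daily logstash-YYYY.MM.DD indices so Kibana picks them up
<match docker.**>
  @type elasticsearch
  host es.example.com
  port 9200
  logstash_format true
</match>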

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Cidrick posted:

As an aside, I haven't touched Logstash since I started using fluentd in its place. It's much easier to work with in my opinion, and doesn't require having a JDK/JRE installed on your boxes just to parse logs.

fluentd can't natively ship logs to elasticsearch, but you can install a quick ruby gem to enable that for you. We've been using fluentd for a few months now to collect docker logs and ship to elasticsearch, and it's been flawless.
Logstash is just a stream processor that has inputs, outputs, and filters. You can use it to pick up logs (or literally any other timestamped event data) from syslog, Kafka, whatever, do the processing you need centrally, then dump to ES or do whatever else you want. I'm using a single Logstash server per environment, with the bulk of logs being shipped from rsyslog.
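
The rsyslog side of that is about as small as shipping gets; one line per client plus a matching input on the Logstash end (hostname and port below are made up):

code:
# /etc/rsyslog.d/90-forward.conf on each client
# (@@ = TCP, a single @ = UDP)
*.*  @@logstash.example.com:5514

# matching input block on the Logstash server
input {
  syslog { port => 5514 }
}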

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
Does Apache Flume do anything relevant to this task? The whole "route messages between sources and sinks" kinda seems like it fits here but I'm not 100% clear on what Flume does in the first place beyond that cliffnotes-level summary.

Sounds like logstash may be a better start-to-finish framework whereas Flume is more concerned with just routing messages?

We use Rhapsody on another project at work to "route messages" from various systems that give us electronic documents, is that a thing you would use in this context?

jre
Sep 2, 2011

To the cloud ?



Vulture Culture posted:

Grafana can actually handle the alerting itself now if you don't need Bosun's crazytown expression language


jre posted:

Alerting:
For simple alerting grafana now has threshold based alerts built in and integration with most of the common paging platforms.

No kidding


xzzy posted:

edit - yes I saw the bosun link. it seems like a pretty steep ramp up, is that true?
It is quite a steep ramp, not as easy as using a point-and-click interface. However, basic threshold alerts are awful if you are going to be woken by them. Depends on who the users of the system are going to be.

quote:

ELK seems pretty slick for visualizing what's going on, but how easy is it to generate alerts or interface with ticketing systems? Is it roll your own territory?
So, you could still use grafana for this as it has an elastic data source, but the performance hit of querying large elastic indices repeatedly for alerting purposes is not inconsiderable. In most cases you are better off using time series to drive alerts: either stream-process the logs to produce a metric, or emit metrics from the application. Many apps have out-of-the-box graphite/tsdb outputs now. If it's a Java app you can use jmxtrans to get really detailed metrics: https://github.com/jmxtrans/jmxtrans
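
To give a flavour of why the ramp is steep, a Bosun alert definition looks roughly like this (from memory; the metric and template names are invented and it assumes an OpenTSDB backend):

code:
alert web.error_rate {
    template = generic
    $rate = avg(q("sum:rate:web.errors{host=*}", "10m", ""))
    warn = $rate > 5
    crit = $rate > 50
}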

jre fucked around with this message at 19:23 on Jun 8, 2017

xzzy
Mar 5, 2009

jre posted:

It is quite a steep ramp, not as easy as using a point-and-click interface. However, basic threshold alerts are awful if you are going to be woken by them. Depends on who the users of the system are going to be.

Yeah, it seems like everything is. We've spent years optimizing our nagios install to get it exactly where we want it.

elasticsearch+grafana is the current hotness and people around me keep talking about it, but gently caress if I want to start over from scratch again.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

jre posted:

No kidding
This is a sad, but inevitable, consequence of using a browser extension that limits my social media usage to 15 minutes a day. :saddowns:

jre
Sep 2, 2011

To the cloud ?



Vulture Culture posted:

This is a sad, but inevitable, consequence of using a browser extension that limits my social media usage to 15 minutes a day. :saddowns:

Work thing or personal choice?

fuf
Sep 12, 2004

haha
I have a little digitalocean VPS with Plesk installed that I host a few websites on.

Usually it hardly uses any resources but today this happened:


I couldn't connect to the domain that Plesk uses, and I couldn't even log in via SSH using the IP (both just timed out). The weird thing is that the actual websites hosted on there kept working fine.

Restarting the VPS fixed it (that's what's happening at the end of the graph) but I guess I should try and work out what actually happened.

Any tips on which log files I should be checking or what I should be looking for? /var/log/syslog just has a load of normal-looking stuff from postfix and dovecot.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

jre posted:

Work thing or personal choice?
Two small kids, time management.

Docjowles
Apr 9, 2009

fuf posted:

I have a little digitalocean VPS with Plesk installed that I host a few websites on.

Usually it hardly uses any resources but today this happened:


I couldn't connect to the domain that Plesk uses, and I couldn't even log in via SSH using the IP (both just timed out). The weird thing is that the actual websites hosted on there kept working fine.

Restarting the VPS fixed it (that's what's happening at the end of the graph) but I guess I should try and work out what actually happened.

Any tips on which log files I should be checking or what I should be looking for? /var/log/syslog just has a load of normal-looking stuff from postfix and dovecot.

A couple ideas would be your http access logs and mail logs. It could be you were being crawled by a bot like Google and they hit some page that runs an intensive database call over and over. Not that I've ever been owned by that in production or anything :ninja: Or just some rear end in a top hat running vulnerability scanners against random ips that managed to trigger bad behavior.

It's also possible you got hacked and were being used to send out spam. If they were just using the existing MTA instead of installing their own, you'd see it in the logs of whatever mailer Plesk uses. Or at least via huge outbound bandwidth usage if you graph that.
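
A quick way to check both angles is counting requests per IP and counting sent mail; roughly this (paths are guesses, adjust for your box):

code:
# top talkers in the web access log (path varies with distro / Plesk vhost layout)
awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head
# rough idea of outbound mail volume, if postfix is the MTA
grep -c 'status=sent' /var/log/mail.log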

Gozinbulx
Feb 19, 2004
More and more I'm considering making some distro of Linux my main laptop OS, but obviously I need some Windows stuff for work/etc., including some processor-intensive software like Photoshop and After Effects (and no, I don't want to use GIMP).

Should I just dual boot, or is virtualization up to the point where you can function a lot with just that? I have a fairly old i7 (2nd gen), 16GB RAM, SSD, etc. Is virtualization an option for me?

RFC2324
Jun 7, 2012

http 418

Docjowles posted:

A couple ideas would be your http access logs and mail logs. It could be you were being crawled by a bot like Google and they hit some page that runs an intensive database call over and over. Not that I've ever been owned by that in production or anything :ninja: Or just some rear end in a top hat running vulnerability scanners against random ips that managed to trigger bad behavior.

It's also possible you got hacked and were being used to send out spam. If they were just using the existing MTA instead of installing their own, you'd see it in the logs of whatever mailer Plesk uses. Or at least via huge outbound bandwidth usage if you graph that.

I would also look at sar data for that timeframe to see if something stupid like high io wait was causing it.
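
Something like this, assuming sysstat is collecting (the saved files live under /var/log/sa/ on RHEL-ish boxes, /var/log/sysstat/ on Debian-ish ones):

code:
# CPU (including %iowait) for the 8th, between 10:00 and 14:00
sar -u -f /var/log/sa/sa08 -s 10:00:00 -e 14:00:00
# disk activity and run queue / load for the same window
sar -d -f /var/log/sa/sa08 -s 10:00:00 -e 14:00:00
sar -q -f /var/log/sa/sa08 -s 10:00:00 -e 14:00:00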

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

Gozinbulx posted:

Should I just dual boot, or is virtualization up to the point where you can function a lot with just that? I have a fairly old i7 (2nd gen), 16GB RAM, SSD, etc. Is virtualization an option for me?

What tools do you want to use on the Linux side? VM performance has been excellent for me, I hardly notice a difference. I would say give that a shot and see if it works for you since it's so easy to try it without screwing with your boot drive.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

fletcher posted:

What tools do you want to use on the Linux side? VM performance has been excellent for me, I hardly notice a difference. I would say give that a shot and see if it works for you since it's so easy to try it without screwing with your boot drive.

This has been my experience as well.

I've found the whole experience just way nicer and more integrated than dual-booting. I tried getting into Linux for years and years, but rebooting to go from Windows to Linux was always such a roadblock (both mentally and technically... try copy-pasting between OSes when you're dual booting!) that I never was able to make any progress. It wasn't until virtualization was so good that it was mostly indistinguishable from using the OS natively that I really started to use Linux a lot.

(That being said, I find myself using my Linux VMs a lot less now that I can use Bash on Windows.)

Gozinbulx
Feb 19, 2004
I actually meant to run Windows in the VM lol. I want to deep dive into *nix

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Docjowles posted:

A couple ideas would be your http access logs and mail logs. It could be you were being crawled by a bot like Google and they hit some page that runs an intensive database call over and over. Not that I've ever been owned by that in production or anything :ninja: Or just some rear end in a top hat running vulnerability scanners against random ips that managed to trigger bad behavior.

It's also possible you got hacked and were being used to send out spam. If they were just using the existing MTA instead of installing their own, you'd see it in the logs of whatever mailer Plesk uses. Or at least via huge outbound bandwidth usage if you graph that.

Google spidering doesn't look like that. They are very throttled.

Go through your logs and I can bet you it's most likely from China or Russia.

Dudes scan my poo poo like a minute after a server is available.

See if you can get readouts of the web logs. You can tell from the HTTP traffic if they are doing a fuzz or a very direct attack. But for that long, that's odd.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

For the heck of it I got 256 colors working in my bash console in Windows 10.

Now the question is... what are some useful/interesting things I can do to take advantage of all these colors?
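
For a start, printing the whole palette confirms it's really working (and make sure TERM is something like xterm-256color so curses apps actually use it):

for i in {0..255}; do printf '\e[48;5;%dm %3d \e[0m' "$i" "$i"; (( (i + 1) % 16 == 0 )) && echo; done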

LochNessMonster
Feb 3, 2005

I need about three fitty


Thermopyle posted:

For the heck of it I got 256 colors working in my bash console in Windows 10.

Now the question is... what are some useful/interesting things I can do to take advantage of all these colors?

Get an oldschool 'scrolling' BBS motd.

some kinda jackal
Feb 25, 2003

 
 
Can I manage RedHat AND CentOS patches using the latest RH Satellite or do I need to set up two separate patch mgmt systems?

If not, can anyone recommend a distribution-agnostic patch compliance engine?

some kinda jackal fucked around with this message at 13:53 on Jun 12, 2017

Mao Zedong Thot
Oct 16, 2008


I previously used https://sysward.com/ in a mixed environment. It's pretty alright.

evol262
Nov 30, 2010
#!/usr/bin/perl

Martytoof posted:

Can I manage RedHat AND CentOS patches using the latest RH Satellite or do I need to set up two separate patch mgmt systems

Short answer is yes. Add a new repo in Pulp for CentOS, and you're set.
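
From the CLI it's roughly a product plus a repo via hammer; something like this (the org/product names are placeholders and the exact flags can shift between Satellite versions):

code:
hammer product create --organization "MyOrg" --name "CentOS 7"
hammer repository create --organization "MyOrg" --product "CentOS 7" \
  --name "CentOS 7 Base x86_64" --content-type yum \
  --url http://mirror.centos.org/centos/7/os/x86_64/
hammer repository synchronize --organization "MyOrg" --product "CentOS 7" \
  --name "CentOS 7 Base x86_64"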

Xeom
Mar 16, 2007
Thinking about making the switch to Linux now that most games I want to play are supported on it. I'm thinking about using GNOME Ubuntu, but does this mean I will have to make a clean install when Ubuntu 18 comes out?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
How do I do mycmd | tee "mycmd-$(date +'%Y.%m.%d-%H.%M.%S').log" but also have tee gzip/bzip2 the output?

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Xeom posted:

Thinking about making the switch to Linux now that most games I want to play are supported on it. I'm thinking about using GNOME Ubuntu, but does this mean I will have to make a clean install when Ubuntu 18 comes out?

I think it's almost always better to back up and do a clean reinstall than to upgrade an OS. I was first playing with Linux in 2012; I upgraded, I think, 12.04 to 12.10 and it didn't go all that well. Lots of stuff was just slightly broken; it was all fixable and was a helpful way to learn about Linux for me, but eventually I just wiped it and went back to a clean install of 12.04. The LTSes are the best choice by far; the .10s tend to be less stable.

For the most part it's really not necessary to update immediately for each LTS release if you don't want to. My home fileserver is still running 14.04 because I didn't set it up that way at the time and I don't give enough of a gently caress to actually reinstall stuff on a new install. It's pretty common for most stuff to target at least one LTS back from current (eg right now 14.04).

If you are definitely going to want to update then you may want to structure your partitions accordingly to make a switchover easier - for example a separate /home or /backup partition so those can persist through a reinstall.

Also, set up etckeeper; it can make your life so much easier. It will put most of your /etc folder into a source-control repository so you have a complete log of your configuration changes. It's smart enough to automatically hook the package manager to log package installation/removal, and you just need to type 'etckeeper commit "iptables: opened port 80"' or whatever any time you make manual changes. Sometimes packages put config files somewhere other than /etc; my usual approach is to move the folder into /etc and leave a symlink at the old location pointing to it (ln -s).
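
etckeeper itself is only a few commands to get going (Debian/Ubuntu shown; it usually auto-initialises on install anyway):

code:
sudo apt-get install etckeeper
sudo etckeeper init                      # no-op if the package already set it up
sudo etckeeper commit "initial import"
# later, after hand-editing something:
sudo etckeeper commit "iptables: opened port 80"
cd /etc && sudo git log --oneline        # full history of your config changes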

You can similarly put config folders in your homedir under a git repository as well. Also consider starting a dotfile repo if you need one - you can put it on a local PC or something like BitBucket or GitHub (don't put any passwords or private keys or any other secret in a dotfile repo). This can give you easy macros for stuff like "install this list of packages that I am used to working with". Taking typescripts of what you're doing when you configure the server is usually helpful to look back on later, but do be aware they can contain sensitive things like passwords too; perhaps use a screen session with logging enabled instead.

Also, use Nano :kiddo:

(I've been meaning to learn some emacs or vi for a while but it seems overly complicated)

Paul MaudDib fucked around with this message at 02:29 on Jun 17, 2017

Polygynous
Dec 13, 2006
welp

Paul MaudDib posted:

How do I do mycmd | tee "mycmd-$(date +'%Y.%m.%d-%H.%M.%S').log" but also have tee gzip/bzip2 the output?

mycmd | tee TMPFILE | gzip >whatever & tail -f TMPFILE

I think? v:v:v

At least I don't think tee really does anything other than what it says.
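
Another option, if it's bash, is process substitution, so tee writes straight into gzip and you still see live output (same filename pattern as the question):

mycmd | tee >(gzip -c > "mycmd-$(date +'%Y.%m.%d-%H.%M.%S').log.gz")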
