|
Satanic snipe
|
# ? Jun 7, 2017 02:28 |
|
|
# ? Apr 23, 2024 21:37 |
|
apropos man posted:So I feel like I'm kinda good enough for what the exam is asking of me but I know what you mean: there is something a little bit esoteric about SELinux and I feel like I'd have to be specifically dealing with it on a daily basis to have deep knowledge of it.

If you want to write custom policies, yeah. Otherwise, you only really need to understand transitions like httpd_exec_t -> httpd_t, for example. /etc/sysconfig/selinux is a plaintext file and will show you what happens where.
|
# ? Jun 7, 2017 02:40 |
|
evol262 posted:If you want to write custom policies, yeah. Otherwise, you only really need to understand transitions with http_exec_t -> httpd_t, for example.

The only part I'm worried about is AutoFS. I finished the Jang book a few weeks ago and tonight I finish the Ghori book. I need to do some last-minute AutoFS cramming tomorrow night and hope that it slots into place.
|
# ? Jun 7, 2017 06:06 |
|
apropos man posted:The only part I'm worried about is AutoFS. I finished the Jang book a few weeks ago and tonight I finish the Ghori book. I need to do some last minute AutoFS cramming tomorrow night and hope that it slots into place.

Pro tip: learn autofs inside out. Also with LDAP authentication.
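To make that concrete, the core of autofs is just two files: a master map pointing mountpoints at per-mountpoint map files. A sketch with an invented NFS server name and share (the mountpoint and key names are made up too):

```
# /etc/auto.master -- mountpoint, then the map file that describes it
/shares  /etc/auto.shares  --timeout=60

# /etc/auto.shares -- key, mount options, then the remote source
# (nfs1.example.com is a placeholder)
data  -rw,soft  nfs1.example.com:/export/data
```

After restarting autofs, `cd /shares/data` triggers the mount on demand. The wildcard form `*  -rw  nfs1.example.com:/export/&` is the usual trick for per-user home directories with LDAP users.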
|
# ? Jun 7, 2017 22:41 |
|
I want to set up a centralized logging/notification system for high-level application/system events across a bunch of applications, and send notifications to services/email/a phone app/etc according to configurable filters. Is there an open-source way to do this with good compatibility across the variety of notification systems?
Paul MaudDib fucked around with this message at 03:07 on Jun 8, 2017 |
# ? Jun 8, 2017 03:04 |
|
Depends. How much scripting of your own do you want to do? Because we swear by check_mk where I'm at; you can do anything with notifications you can dream up. But it doesn't do much out of the box.
|
# ? Jun 8, 2017 03:26 |
|
xzzy posted:Depends.. how much scripting on your own do you want to do?

If there are good tools and strategies to build reliable, non-brittle logging, sure; I don't care about one-time write costs for setting up the rules. Is Nagios suitable for use over a network connection for (e.g.) a rack of machines hosting docker container services dumping their stdout/stderr? i.e. potentially hundreds/thousands of channels open at once? I'd probably be running it against a Postgres DB.

Paul MaudDib fucked around with this message at 05:17 on Jun 8, 2017 |
# ? Jun 8, 2017 05:12 |
|
LochNessMonster posted:Pro tip, learn autofs inside out. Also with ldap authentication.

I will do that tonight. Thank you, sir.
|
# ? Jun 8, 2017 05:15 |
|
Paul MaudDib posted:I want to set up a centralized logging/notification system for high-level application/system events across a bunch of applications, and send notifications to services/email/a phone app/etc according to configurable filters. Is there an open-source way to do this with good compatibility across the variety of notification systems?

Centralised open source logging = ELK.

high-level application/system events: What do you mean by this? You can set up collectors on all your servers and store system metrics in a time series db like graphite/opentsdb/influx, then use grafana to visualise those and bosun to alert on them. For true application-level event monitoring you'd need to rewrite the app to send metrics to your time series db.

Alerting: For simple alerting, grafana now has threshold-based alerts built in and integration with most of the common paging platforms. Bosun is an interesting piece of software because you can generate alerts from metrics, but also from Elasticsearch searches, so you can alert on the presence or absence of specific messages in the logs.

All of the above is a non-trivial amount of work, and if you've a reasonable number of servers and don't have a dedicated ops/SRE team, you'd probably be better off using a hosted, non-open-source product like Datadog or SignalFx for the metrics portion.

jre fucked around with this message at 08:11 on Jun 8, 2017 |
# ? Jun 8, 2017 08:08 |
|
Paul MaudDib posted:If there's good tools and strategies to build reliable, non-brittle logging sure, I don't care about one-time write costs for setting up the rules. Is Nagios suitable for use over a network connection for (eg) a rack of machines hosting docker container services dumping their stdout/stderr? i.e. potentially hundreds/thousands of channels open at once? I'd probably be running it against a Postgres DB.

I don't know about the stdout/stderr part; wait a few months and I'll let you know, as we're working on that right now. But our install is monitoring 2700 servers with 140k checks and it handles it well. The only headache is inventorying hosts: a config reload can take 5 minutes.

Intuitively I'd think you'd run into issues with that many open file handles, no matter what monitoring tool you choose. But check_mk lets you do custom health checks asynchronously: you drop a script in a special folder that gets run every so often and dumps a status string to a file, which check_mk will scoop up and return when the central server phones in.

Regardless, nagios scales way better than the other open source options we've tried in the past (Big Brother, zabbix, ganglia). It is a monster to configure though.
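To make the custom-check idea concrete, a check_mk "local check" is just a script that prints one status line per service. A minimal sketch; the service name and threshold are invented for illustration:

```shell
# Sketch of a check_mk local check. Each line of output has the form
# "<status> <service_name> <perfdata> <status text>", where 0=OK, 2=CRIT.
# You'd drop a script like this into the agent's local/ (or spool/) directory.
check_tmp_files() {
    count=$(ls /tmp | wc -l)
    if [ "$count" -lt 1000 ]; then
        echo "0 tmp_file_count count=$count OK - $count files in /tmp"
    else
        echo "2 tmp_file_count count=$count CRIT - $count files in /tmp"
    fi
}
check_tmp_files
```

The central server picks the line up on its next poll; no open connection needs to be held while the check runs, which is the point of the async variant.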
|
# ? Jun 8, 2017 15:43 |
|
Seconding the ELK recommendation.
|
# ? Jun 8, 2017 16:43 |
|
ELK seems pretty slick for visualizing what's going on, but how easy is it to generate alerts or interface with ticketing systems? Is it roll-your-own territory? edit - yes, I saw the bosun link. It seems like a pretty steep ramp-up; is that true?
|
# ? Jun 8, 2017 16:50 |
|
jre posted:You can set up collectors on all your servers and store system metrics in a time series db like graphite/opentsdb/influx then use grafana to visualise those, and bosun to alert on them.
|
# ? Jun 8, 2017 17:24 |
|
xzzy posted:ELK seems pretty slick for visualizing what's going on, but how easy is it to generate alerts or interface with ticketing systems? Is it roll your own territory?

JIRA's actually supported if you happen to use Service Desk, though.
|
# ? Jun 8, 2017 17:26 |
|
Logstash has an assload of output plugins you can use to send something to PagerDuty / JIRA / email / whatever, if a log line matches a given pattern. You can google for other third-party plugins, too, like Slack. https://www.elastic.co/guide/en/logstash/current/output-plugins.html#output-plugins
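For example, a sketch of an output section that ships everything to Elasticsearch but pages only on error-looking lines. The host, service key, and regex are placeholders; check each plugin's docs for its required options:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # placeholder
  }
  # Page only when the line looks like an error
  if [message] =~ /(?i)error|fatal/ {
    pagerduty {
      service_key => "YOUR_SERVICE_KEY"        # placeholder
      description => "%{host}: %{message}"
    }
  }
}
```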
|
# ? Jun 8, 2017 17:28 |
|
As an aside, I haven't touched Logstash since I started using fluentd in its place. It's much easier to work with in my opinion, and doesn't require having a JDK/JRE installed on your boxes just to parse logs. fluentd can't natively ship logs to elasticsearch, but you can install a quick ruby gem to enable that for you. We've been using fluentd for a few months now to collect docker logs and ship to elasticsearch, and it's been flawless.
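For reference, a minimal sketch of the fluentd/td-agent config for that setup. The Elasticsearch hostname is a placeholder, and this assumes the fluent-plugin-elasticsearch gem mentioned above is installed:

```
# Accept events forwarded from the docker fluentd log driver
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Ship anything tagged docker.* to Elasticsearch
<match docker.**>
  @type elasticsearch
  host es.example.com      # placeholder
  port 9200
  logstash_format true     # daily logstash-YYYY.MM.DD indices, Kibana-friendly
</match>
```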
|
# ? Jun 8, 2017 17:58 |
|
Cidrick posted:As an aside, I haven't touched Logstash since I started using fluentd in its place. It's much easier to work with in my opinion, and doesn't require having a JDK/JRE installed on your boxes just to parse logs.
|
# ? Jun 8, 2017 18:02 |
|
Does Apache Flume do anything relevant to this task? The whole "route messages between sources and sinks" kinda seems like it fits here but I'm not 100% clear on what Flume does in the first place beyond that cliffnotes-level summary. Sounds like logstash may be a better start-to-finish framework whereas Flume is more concerned with just routing messages? We use Rhapsody on another project at work to "route messages" from various systems that give us electronic documents, is that a thing you would use in this context?
|
# ? Jun 8, 2017 18:43 |
|
Vulture Culture posted:Grafana can actually handle the alerting itself now if you don't need Bosun's crazytown expression language

jre posted:Alerting:

No kidding

xzzy posted:edit - yes I saw the bosun link. it seems like a pretty steep ramp up, is that true?

It is quite a steep ramp, not as easy as using a point and click interface. However basic threshold alerts are awful if you are going to be woken by them. Depends on who the users of the system are going to be.

quote:ELK seems pretty slick for visualizing what's going on, but how easy is it to generate alerts or interface with ticketing systems? Is it roll your own territory?

jre fucked around with this message at 19:23 on Jun 8, 2017 |
# ? Jun 8, 2017 19:14 |
|
jre posted:It is quite a steep ramp, not as easy as using a point and click interface. However basic threshold alerts are awful if you are going to be woken by them. Depends on who the users of the system are going to be.

Yeah, it seems like everything is. We've spent years optimizing our nagios install to get it exactly where we want it. elasticsearch+grafana is the current hotness and people around me keep talking about it, but gently caress if I want to start over from scratch again.
|
# ? Jun 8, 2017 20:13 |
|
jre posted:No kidding
|
# ? Jun 9, 2017 06:36 |
|
Vulture Culture posted:This is a sad, but inevitable, consequence of using a browser extension that limits my social media usage to 15 minutes a day.

Work thing or personal choice?
|
# ? Jun 9, 2017 09:48 |
|
I have a little digitalocean VPS with Plesk installed that I host a few websites on. Usually it hardly uses any resources, but today this happened: I couldn't connect to the domain that Plesk uses, and I couldn't even log in via SSH using the IP (both just timed out). The weird thing is that the actual websites hosted on there kept working fine. Restarting the VPS fixed it (that's what's happening at the end of the graph), but I guess I should try and work out what actually happened. Any tips on which log files I should be checking or what I should be looking for? /var/log/syslog just has a load of normal-looking stuff from postfix and dovecot.
|
# ? Jun 9, 2017 12:09 |
|
jre posted:Work thing or personal choice ?
|
# ? Jun 9, 2017 12:57 |
|
fuf posted:I have a little digitalocean VPS with Plesk installed that I host a few websites on.

A couple of ideas would be your http access logs and mail logs. It could be you were being crawled by a bot like Google that hit some page that runs an intensive database call over and over. Not that I've ever been owned by that in production or anything. Or just some rear end in a top hat running vulnerability scanners against random IPs that managed to trigger bad behavior.

It's also possible you got hacked and were being used to send out spam. If they were just using the existing MTA instead of installing their own, you'd see it in the logs of whatever mailer Plesk uses. Or at least via huge outbound bandwidth usage, if you graph that.
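A couple of one-liners for that kind of access-log triage. The sample log lines below are invented (documentation-range IPs) so the commands have something to chew on; point awk at your real access log instead:

```shell
# Fake entries standing in for /var/log/httpd/access_log or similar
cat > access.log <<'EOF'
203.0.113.9 - - [09/Jun/2017:11:00:01 +0000] "GET /wp-login.php HTTP/1.1" 404 208
203.0.113.9 - - [09/Jun/2017:11:00:02 +0000] "GET /admin.php HTTP/1.1" 404 208
198.51.100.4 - - [09/Jun/2017:11:00:05 +0000] "GET /index.html HTTP/1.1" 200 1043
EOF

# Requests per client IP, busiest first
awk '{print $1}' access.log | sort | uniq -c | sort -rn | head

# Requests per path ($7 in common/combined log format), busiest first
awk '{print $7}' access.log | sort | uniq -c | sort -rn | head
```

A scanner usually shows up immediately as one IP hammering a long list of admin-looking URLs that all 404.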
|
# ? Jun 9, 2017 13:19 |
|
More and more I'm considering making some distro of Linux my main laptop OS, but obviously I need some Windows stuff for work/etc., including some processor-intensive software like Photoshop and After Effects (and no, I don't want to use GIMP). Should I just dual boot, or is virtualization to the point where you can function a lot with just that? I have a fairly old i7 (2nd gen), 16GB RAM, SSD, etc. Is virtualization an option for me?
|
# ? Jun 9, 2017 16:59 |
|
Docjowles posted:A couple ideas would be your http access logs and mail logs. It could be you were being crawled by a bot like Google and they hit some page that runs an intensive database call over and over. Not that I've ever been owned by that in production or anything Or just some rear end in a top hat running vulnerability scanners against random ips that managed to trigger bad behavior.

I would also look at sar data for that timeframe to see if something stupid like high I/O wait was causing it.
|
# ? Jun 9, 2017 18:03 |
Gozinbulx posted:Should I just dual boot, or is virtualization up the the point where you can function alot with just that? I have a fairly old i7 (2nd gen) 16gb ram, SSD, ect. is virtualization an option for me?

What tools do you want to use on the Linux side? VM performance has been excellent for me, I hardly notice a difference. I would say give that a shot and see if it works for you, since it's so easy to try it without screwing with your boot drive.
|
|
# ? Jun 9, 2017 20:19 |
|
fletcher posted:What tools do you want to use on the Linux side? VM performance has been excellent for me, I hardly notice a difference. I would say give that a shot and see if it works for you since it's so easy to try it without screwing with your boot drive.

This has been my experience as well. I've found the whole experience just way nicer and more integrated than dual-booting. I tried getting into Linux for years and years, but rebooting to go from Windows to Linux was always such a roadblock (both mentally and technically... try copy-pasting between OSes when you're dual booting!) that I never was able to make any progress. It wasn't until virtualization was so good that it was mostly indistinguishable from using the OS natively that I really started to use Linux a lot. (That being said, I find myself using my Linux VMs a lot less now that I can use Bash on Windows.)
|
# ? Jun 9, 2017 21:01 |
|
I actually meant to run Windows in the VM lol. I want to deep dive into *nix
|
# ? Jun 10, 2017 16:14 |
|
Docjowles posted:A couple ideas would be your http access logs and mail logs. It could be you were being crawled by a bot like Google and they hit some page that runs an intensive database call over and over. Not that I've ever been owned by that in production or anything Or just some rear end in a top hat running vulnerability scanners against random ips that managed to trigger bad behavior.

Google spidering doesn't look like that; they are very throttled. Go through your logs and I can bet you it's most likely from China or Russia. Dudes scan my poo poo like a minute after a server is available. See if you can get readouts of web logs. You can tell from the http traffic if they are doing a fuzz or a very direct attack. But for that long, that's odd.
|
# ? Jun 10, 2017 16:33 |
|
For the heck of it I got 256 colors working in my bash console in Windows 10. Now the question is... what are some useful/interesting things I can use to take advantage of all these colors?
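One quick sanity check and use for them: dump the whole palette so you can pick colors for your prompt. This loop prints numbered background swatches, 16 per row, using plain xterm-256color escape codes:

```shell
# Print all 256 xterm colors as numbered background swatches, 16 per row.
# \033[48;5;Nm sets background color N; \033[0m resets attributes.
for i in $(seq 0 255); do
    printf '\033[48;5;%dm %3d \033[0m' "$i" "$i"
    if [ $(( (i + 1) % 16 )) -eq 0 ]; then
        printf '\n'
    fi
done
```

Swap 48 for 38 to preview foreground colors instead; the numbers you see are what go into PS1 color escapes.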
|
# ? Jun 11, 2017 19:02 |
|
Thermopyle posted:For the heck of it I got 256 colors working in my bash console in Windows 10.

Get an oldschool 'scrolling' BBS motd.
|
# ? Jun 11, 2017 19:32 |
|
Can I manage RedHat AND CentOS patches using the latest RH Satellite or do I need to set up two separate patch mgmt systems? If not, can anyone recommend a distribution-agnostic patch compliance engine? some kinda jackal fucked around with this message at 13:53 on Jun 12, 2017 |
# ? Jun 12, 2017 13:47 |
|
I previously used https://sysward.com/ in a mixed environment. It's pretty alright.
|
# ? Jun 12, 2017 16:05 |
|
Martytoof posted:Can I manage RedHat AND CentOS patches using the latest RH Satellite or do I need to set up two separate patch mgmt systems

Short answer: yes, one Satellite can manage both. Add a new repo in Pulp for CentOS, and you're set.
|
# ? Jun 12, 2017 19:20 |
|
Thinking about making the switch to Linux now that most games I want to play are supported on it. I'm thinking about using GNOME Ubuntu, but does this mean I will have to do a clean install when Ubuntu 18 comes out?
|
# ? Jun 17, 2017 01:44 |
|
How do I do mycmd | tee "mycmd-$(date +'%Y.%m.%d-%H.%M.%S').log" but also have tee gzip/bzip2 the output?
|
# ? Jun 17, 2017 01:44 |
|
Xeom posted:Thinking about making the switch to Linux now that most games I want to play are supported on it. I'm think about using gnome ubuntu, but does this mean I will have to make a clean install when ubuntu 18 comes out?

I think it's almost always better to back up and do a clean reinstall than to upgrade an OS in place. I was first playing with Linux in 2012. I upgraded, I think, 12.04 to 12.10 and it didn't go all that well: lots of stuff was just slightly broken. It was all fixable and was a helpful way to learn about Linux for me, but eventually I just wiped it and went back to a clean install of 12.04. The LTSes are the best choice by far; the .10s tend to be less stable.

For the most part it's really not necessary to update immediately for each LTS release if you don't want to. My home fileserver is still running 14.04 because I didn't set it up that way at the time and I don't give enough of a gently caress to actually reinstall stuff on a new install. It's pretty common for most stuff to target at least one LTS back from current (e.g. right now 14.04). If you are definitely going to want to update, then you may want to structure your partitions accordingly to make a switchover easier - for example, a separate /home or /backup partition so that data can persist through a reinstall.

Also, set up etckeeper; it can make your life so much easier. It will put most of your /etc folder into a source-control repository so you have a complete log of your configuration changes. It's smart enough to automatically hook the package manager to log package installation/removal, and you just need to type etckeeper commit "iptables: opened port 80" or whatever any time you make manual changes. Sometimes the default configs put config files somewhere other than /etc; my usual approach is to move the folder into /etc and leave a symlink (ln -s) at the old location pointing to the new one. You can similarly put config folders in your homedir under a git repository as well.

Also consider starting a dotfiles repo if you need one - you can put it on a local PC or something like BitBucket or GitHub (don't put any passwords or private keys or any other secret in a dotfiles repo). This can give you easy macros for stuff like "install this list of packages that I am used to working with". Taking typescripts of what you're doing when you configure the server is usually helpful to look back on later, but do be aware they contain screen-hidden things like passwords too; perhaps use a screen session with logging enabled instead. Also, use nano (I've been meaning to learn some emacs or vi for a while but it seems overly complicated).

Paul MaudDib fucked around with this message at 02:29 on Jun 17, 2017 |
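A concrete sketch of the move-plus-symlink trick, with an invented app name and a scratch directory standing in for the real filesystem so it's safe to try anywhere:

```shell
# Invented layout: an app that keeps its config outside /etc.
# "demo" stands in for / so nothing real gets touched.
mkdir -p demo/opt/myapp demo/etc
echo "setting=1" > demo/opt/myapp/myapp.conf

# Move the config under (the stand-in for) /etc so etckeeper can track it...
mv demo/opt/myapp/myapp.conf demo/etc/myapp.conf

# ...and leave a relative symlink at the old path so the app still finds it.
ln -s ../../etc/myapp.conf demo/opt/myapp/myapp.conf

cat demo/opt/myapp/myapp.conf
```

On a real box the same two commands against /opt and /etc do the job, and from then on the file shows up in etckeeper's commits.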
# ? Jun 17, 2017 02:23 |
|
|
|
Paul MaudDib posted:How do I do "mycmd | tee "mycmd-$(date +'%Y.%m.%d-%H.%M.%S').log" but also have tee gzip/bzip2 the output?

mycmd | tee TMPFILE | gzip >whatever & tail -f TMPFILE, I think? vv At least I don't think tee really does anything other than what it says.
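Another way that skips the temp file: bash process substitution lets tee write straight into gzip as if >(…) were a file. A sketch with a stand-in mycmd (requires bash, not plain sh):

```shell
# Stand-in for the real command; replace with your own.
mycmd() { printf 'hello from mycmd\n'; }

log="mycmd-$(date +'%Y.%m.%d-%H.%M.%S').log.gz"

# tee still copies the stream to stdout, while the process substitution
# gzips the other copy on the fly. Swap gzip for bzip2 if preferred.
mycmd | tee >(gzip > "$log")
```

gunzip -c "$log" (or zcat) gets the log back out afterwards.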
|
# ? Jun 17, 2017 02:27 |