|
Over the years, I've been a member of a number of technical forums, and one thing I have consistently seen is problems that could have been prevented by a log management solution. Most of the time, the obvious resolution to a problem is "check your logs," yet many experienced system administrators aren't aware of this tool.

Syslog started in the 1980s as part of the Sendmail project. Initially, the author, Eric Allman, used it solely to determine the status of mail messages, but it didn't take long to become an unofficial standard once other application developers saw its value. By the 1990s it had become the de facto industry standard, behavior that was later documented in RFC 3164. That was revised in RFC 5424, which was intended to make the original RFC obsolete; the new format was never widely adopted, though, and most systems still use the original specification. One of the larger early adopters of this logging format was Cisco Systems' IOS, used initially on their high-end routers. Currently, even low-end routers and switches aimed at home users utilize the protocol.

Syslog messages are text-based and should contain (in RFC 3164) the following fields: Facility, Severity, Timestamp, Hostname, and the message itself. RFC 5424 adds more fields, including a structured-data element that presents information in the message in a more structured way, which was intended to make messages easier to parse on the receiving end.

So, what good is it? Here are a few things that a decent system logging solution can help you with:

- Security and Risk Management
- Fault Detection / Root Cause Analysis
- Performance Management
- Network Optimization
- Configuration Management
- Traffic Bottleneck Analysis
- Compliance Reporting
- Integration w/ other software
- Active WAN Optimization

Tell us about what solution you're using and what you like about it, or feel free to ask questions about how to use syslog management to make your job easier.
|
# ? Oct 20, 2015 22:19 |
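To make the field layout concrete, here's a minimal sketch of how an RFC 3164-style message is assembled. The function name, arguments, and example values are my own invention for illustration, not any particular library's API (in practice you'd use something like Python's `logging.handlers.SysLogHandler`):

```python
import time

# Hypothetical helper showing the RFC 3164 wire format; the field names
# come from the RFC, everything else here is made up for illustration.
def build_syslog_message(facility, severity, hostname, tag, msg, timestamp=None):
    # PRI encodes facility and severity together: facility * 8 + severity,
    # wrapped in angle brackets at the front of the message.
    pri = facility * 8 + severity
    # RFC 3164 timestamps look like "Oct 20 22:19:05" -- no year, no timezone.
    if timestamp is None:
        timestamp = time.strftime("%b %d %H:%M:%S")
    return f"<{pri}>{timestamp} {hostname} {tag}: {msg}"

# facility 4 = auth, severity 6 = informational, so PRI is 4 * 8 + 6 = 38
print(build_syslog_message(4, 6, "fw01", "sshd",
                           "Accepted publickey for admin",
                           timestamp="Oct 20 22:19:05"))
# -> <38>Oct 20 22:19:05 fw01 sshd: Accepted publickey for admin
```

The whole thing is just a short line of text over UDP port 514 in the classic setup, which is a big part of why adoption was so easy.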
|
|
|
Logstash -> elasticsearch -> kibana It's really great at visualizing trends, and you can query the elasticsearch database with some json nosql stuff, which makes it pretty powerful for report generation.
|
# ? Oct 20, 2015 23:22 |
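For anyone curious what the "json nosql stuff" looks like in practice, here's a rough sketch of querying Elasticsearch's search API from Python. The index name, field names, and endpoint are assumptions for illustration, not taken from adorai's setup:

```python
import json
import urllib.request

# A sketch of the Elasticsearch query DSL: match "error" in the message
# field within the last hour, and bucket the hits per host for reporting.
# Index and field names ("logstash-2015.10.20", "@timestamp", "host.keyword")
# are assumed examples.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"message": "error"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-1h"}}}],
        }
    },
    "aggs": {
        "by_host": {"terms": {"field": "host.keyword"}}
    },
}

body = json.dumps(query).encode()
req = urllib.request.Request(
    "http://localhost:9200/logstash-2015.10.20/_search",
    data=body,
    headers={"Content-Type": "application/json"},
)
# Uncomment against a live cluster:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
print(body.decode())
```

Kibana is essentially building queries like this for you behind its dashboards.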
|
adorai posted:Logstash -> elasticsearch -> kibana How big is the dataset you're searching? I've seen some performance issues with nosql when you receive over 10 million events per day, but that might just be the hardware I was using.
|
# ? Oct 21, 2015 01:31 |
|
Lucid Nonsense posted:How big is the dataset you're searching? I've seen some performance issues with nosql when you receive over 10 million events per day, but that might just be the hardware I was using.
|
# ? Oct 21, 2015 03:45 |
|
adorai posted:Logstash -> elasticsearch -> kibana elasticsearch and kibana are great, not so wild about logstash anymore. grok and such left a lot to be desired. i've been working with fluentd and i like it. i appreciate how easy it is to collect various formats (java stack traces etc) which is a benefit that depends on how heterogeneous your environment is. http://www.fluentd.org/
|
# ? Oct 27, 2015 03:01 |
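For the curious, the multiline collection pram mentions looks roughly like this in a fluentd source block: tail a Java app log, treat timestamped lines as the start of an event, and fold indented stack-trace lines into the previous one. The path, tag, and regexes here are my own assumptions, not from his setup:

```
<source>
  @type tail
  path /var/log/app/app.log
  tag app.java
  <parse>
    @type multiline
    # a line starting with a date begins a new event; stack-trace
    # continuation lines get appended to the event before them
    format_firstline /^\d{4}-\d{2}-\d{2}/
    format1 /^(?<time>[^ ]+ [^ ]+) (?<level>[A-Z]+) (?<message>.*)/
  </parse>
</source>
```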
|
pram posted:elasticsearch and kibana are great, not so wild about logstash anymore. grok and such left a lot to be desired. Haven't worked with fluentd. What's the difference between that and just using syslog-ng to collect and filter?
|
# ? Nov 5, 2015 16:57 |
|
We've got requirements around making sure that our log data integrity isn't compromised for PCI 3.1 (ugh). This appears to be the one significant thing that Shield on the ELK stack doesn't do, so I'm curious what people are doing for that?
|
# ? Nov 5, 2015 17:41 |
|
Maneki Neko posted:We've got requirements around making sure that our log data integrity isn't compromised for PCI 3.1 (ugh). This appears to be the one significant thing that Shield on the ELK stack doesn't do, so I'm curious what people are doing for that? Do you mean regarding SSL and early versions of TLS being deprecated, or the security of the system that logs are stored on?
|
# ? Nov 6, 2015 17:41 |
|
Maneki Neko posted:We've got requirements around making sure that our log data integrity isn't compromised for PCI 3.1 (ugh). This appears to be the one significant thing that Shield on the ELK stack doesn't do, so I'm curious what people are doing for that? That section of PCI has been clarified a bit since prior versions. 10.5.2 now states that network segregation is fine. Having said that, we still use Splunk because it's more mature in terms of access control and the immutability of the database.

10.5.2: Current audit trail files are protected from unauthorized modifications via access control mechanisms, physical segregation, and/or network segregation. Adequate protection of the audit logs includes strong access control (limit access to logs based on “need to know” only), and use of physical or network segregation to make the logs harder to find and modify. Promptly backing up the logs to a centralized log server or media that is difficult to alter keeps the logs protected even if the system generating the logs becomes compromised.
|
# ? Nov 6, 2015 18:09 |
|
Nukelear v.2 posted:That section of PCI has been clarified a bit since prior versions. 10.5.2 now states that network segregation is fine. Having said that, we still use Splunk because it's more mature in terms of access control and the immutability of the database 10.5.5 was what I was referring to, which Splunk seems to have a mechanism to take care of, but I haven't seen much discussion on with ELK.

10.5.5: Use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert). File-integrity monitoring or change-detection systems check for changes to critical files, and notify when such changes are noted. For file integrity monitoring purposes, an entity usually monitors files that don’t regularly change, but when changed indicate a possible compromise.
|
# ? Nov 6, 2015 23:47 |
|
Maneki Neko posted:10.5.5 was I was referring to, which Splunk seems to have mechanism to take care of, but I haven't seen much discussion on with ELK. With most logging solutions, once the data has been written to an index it can't be changed unless you have database admin access. Even then, changes should be logged to an audit file. I'd guess that most of them provide adequate protection, especially since I haven't heard of anyone failing an audit over this requirement.
|
# ? Nov 10, 2015 17:52 |
|
|
|
pram posted:elasticsearch and kibana are great, not so wild about logstash anymore. grok and such left a lot to be desired. fluentd owns esp when someone else deals w/ it like the treasuredata people who wrote fluentd
|
# ? Nov 10, 2015 19:28 |