  • Locked thread
Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day
Over the years, I've been a member of a number of technical forums, and one thing I have consistently seen is problems that could have been prevented by a log management solution. Most of the time, the obvious first step in troubleshooting is "check your logs," yet many experienced system administrators are unaware of the tools available for managing them.

Syslog started in the 1980s as part of the Sendmail project. Initially, the author, Eric Allman, used it solely to determine the status of mail messages, but it didn't take long to become an unofficial standard once other application developers saw its value. By the 1990s it had become a de facto industry standard, later documented in RFC 3164 (2001). That was in turn revised by RFC 5424, which was intended to make the original obsolete; the newer format was never widely adopted, though, and most systems still use the original specification. One of the larger early adopters of this logging format was Cisco Systems' IOS, initially on their high-end routers. Currently, even low-end routers and switches aimed at home users utilize the protocol.

Syslog messages are text-based and, under RFC 3164, should contain the following fields: a priority value encoding Facility and Severity, a Timestamp, a Hostname, and the message content itself. RFC 5424 adds more fields, including a Structured Data element, a more rigid approach to providing information in the message text that was intended to make messages easier to parse on the receiving end.
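The RFC 3164 layout can be sketched with a small parser. This is a minimal sketch, not a complete parser; the example message comes from RFC 3164 itself, and the facility/severity split of the `<PRI>` value (facility = PRI / 8, severity = PRI % 8) is per the spec.

```python
import re

# Pull the priority, timestamp, hostname, and message out of a
# classic BSD-syslog (RFC 3164) line. Facility and severity are
# packed into the <PRI> value: facility = PRI // 8, severity = PRI % 8.
SYSLOG_3164 = re.compile(
    r"<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>[A-Z][a-z]{2} [ \d]\d \d{2}:\d{2}:\d{2}) "
    r"(?P<hostname>\S+) "
    r"(?P<message>.*)"
)

def parse(line: str) -> dict:
    m = SYSLOG_3164.match(line)
    if not m:
        raise ValueError("not an RFC 3164-style message")
    fields = m.groupdict()
    pri = int(fields.pop("pri"))
    fields["facility"], fields["severity"] = divmod(pri, 8)
    return fields

# Example message taken from RFC 3164, section 5.4:
print(parse("<34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick"))
```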

So, what good is it? Here are a few things that a decent system logging solution can help you with:

Security and Risk Management
Fault Detection / Root Cause Analysis
Performance Management
Network Optimization
Configuration Management
Traffic Bottleneck Analysis
Compliance Reporting
Integration w/other software
Active WAN Optimization

Tell us about what solution you're using and what you like about it, or feel free to ask questions about how to use syslog management to make your job easier.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Logstash -> elasticsearch -> kibana

It's really great at visualizing trends, and you can query the elasticsearch database with some json nosql stuff, which makes it pretty powerful for report generation.
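A hedged sketch of the sort of query-DSL JSON this refers to: counting events per severity over the last day. The index pattern and the field names ("@timestamp", "severity") are assumptions; logstash's defaults may name things differently in your setup.

```python
import json

# An Elasticsearch query-DSL body: match events from the last 24 hours
# and bucket them by severity, returning only the aggregation counts.
query = {
    "query": {"range": {"@timestamp": {"gte": "now-24h"}}},
    "aggs": {"by_severity": {"terms": {"field": "severity"}}},
    "size": 0,  # we only want the aggregation buckets, not the raw hits
}

# You would POST this body to e.g. http://localhost:9200/logstash-*/_search
print(json.dumps(query, indent=2))
```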

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

adorai posted:

Logstash -> elasticsearch -> kibana

It's really great at visualizing trends, and you can query the elasticsearch database with some json nosql stuff, which makes it pretty powerful for report generation.

How big is the dataset you're searching? I've seen some performance issues with nosql when you receive over 10 million events per day, but that might just be the hardware I was using.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Lucid Nonsense posted:

How big is the dataset you're searching? I've seen some performance issues with nosql when you receive over 10 million events per day, but that might just be the hardware I was using.
Not that big, 5 figures per day. I have 3 elasticsearch nodes separate from my logstash and kibana nodes. I only have issues when I try to search large datasets, like 90+ days at once, and even then it's usually ok. I have a separate instance that indexes CDRs from my cisco callmanager, and that one does have some performance issues with large dashboards in kibana.

pram
Jun 10, 2001

adorai posted:

Logstash -> elasticsearch -> kibana

It's really great at visualizing trends, and you can query the elasticsearch database with some json nosql stuff, which makes it pretty powerful for report generation.

elasticsearch and kibana are great, not so wild about logstash anymore. grok and such left a lot to be desired.

i've been working with fluentd and i like it. i appreciate how easy it is to collect various formats (java stack traces etc) which is a benefit that depends on how heterogeneous your environment is.

http://www.fluentd.org/
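for the curious, a minimal fluentd config sketch: tail a log file, parse it as syslog, and ship the events to elasticsearch. the paths, tag, and elasticsearch host here are made up for illustration, and directive syntax varies a bit between fluentd versions, so check the docs for yours.

```
# read new lines from an application log and parse each as a syslog message
<source>
  @type tail
  path /var/log/app/app.log
  pos_file /var/log/td-agent/app.log.pos
  tag app.log
  <parse>
    @type syslog
  </parse>
</source>

# forward anything tagged app.* to a local elasticsearch node,
# using logstash-style daily indices so kibana picks them up
<match app.**>
  @type elasticsearch
  host localhost
  port 9200
  logstash_format true
</match>
```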

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

pram posted:

elasticsearch and kibana are great, not so wild about logstash anymore. grok and such left a lot to be desired.

i've been working with fluentd and i like it. i appreciate how easy it is to collect various formats (java stack traces etc) which is a benefit that depends on how heterogeneous your environment is.

http://www.fluentd.org/

Haven't worked with fluentd. What's the difference between that and just using syslog-ng to collect and filter?

Maneki Neko
Oct 27, 2000

We've got requirements around making sure that our log data integrity isn't compromised for PCI 3.1 (ugh). This appears to be the one significant thing that Shield on the ELK stack doesn't do, so I'm curious what people are doing for that?

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

Maneki Neko posted:

We've got requirements around making sure that our log data integrity isn't compromised for PCI 3.1 (ugh). This appears to be the one significant thing that Shield on the ELK stack doesn't do, so I'm curious what people are doing for that?

Do you mean regarding SSL and early versions of TLS being deprecated, or the security of the system that the logs are stored on?

Nukelear v.2
Jun 25, 2004
My optional title text

Maneki Neko posted:

We've got requirements around making sure that our log data integrity isn't compromised for PCI 3.1 (ugh). This appears to be the one significant thing that Shield on the ELK stack doesn't do, so I'm curious what people are doing for that?

That section of PCI has been clarified a bit since prior versions. 10.5.2 now states that network segregation is fine. Having said that, we still use Splunk because it's more mature in terms of access control and the immutability of the database.

10.5.2 Current audit trail files are protected from unauthorized modifications via access control mechanisms, physical segregation, and/or network segregation.

Adequate protection of the audit logs includes strong access control (limit access to logs based on “need to know” only), and use of physical or network segregation to make the logs harder to find and modify. Promptly backing up the logs to a centralized log server or media that is difficult to alter keeps the logs protected even if the system generating the logs becomes compromised.

Maneki Neko
Oct 27, 2000

Nukelear v.2 posted:

That section of PCI has been clarified a bit since prior versions. 10.5.2 now states that network segregation is fine. Having said that, we still use Splunk because it's more mature in terms of access control and the immutability of the database.

10.5.2 Current audit trail files are protected from unauthorized modifications via access control mechanisms, physical segregation, and/or network segregation.

Adequate protection of the audit logs includes strong access control (limit access to logs based on “need to know” only), and use of physical or network segregation to make the logs harder to find and modify. Promptly backing up the logs to a centralized log server or media that is difficult to alter keeps the logs protected even if the system generating the logs becomes compromised.

10.5.5 was what I was referring to, which Splunk seems to have a mechanism to take care of, but I haven't seen much discussion on with ELK.

10.5.5 Use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert).

File-integrity monitoring or change-detection systems check for changes to critical files, and notify when such changes are noted. For file integrity monitoring purposes, an entity usually monitors files that don’t regularly change, but when changed indicate a possible compromise.
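To make the 10.5.5 idea concrete, here's a minimal sketch of append-tolerant integrity checking: baseline each log file's size plus a SHA-256 of those bytes, and alert only if the original prefix later hashes differently. Real deployments use dedicated FIM tooling (AIDE, Tripwire, auditd rules, etc.), and log rotation would need handling in practice; this just illustrates the mechanism.

```python
import hashlib
from pathlib import Path

# Detect changes to *existing* log data without alerting on normal
# appends (the exact distinction 10.5.5 draws): snapshot the file's
# current length and the SHA-256 of those bytes, then re-check only
# that prefix later. If the original prefix no longer matches,
# something rewrote history.

def baseline(path: str) -> tuple[int, str]:
    data = Path(path).read_bytes()
    return len(data), hashlib.sha256(data).hexdigest()

def tampered(path: str, snap: tuple[int, str]) -> bool:
    size, digest = snap
    prefix = Path(path).read_bytes()[:size]
    # Shrinking below the baseline size, or a changed prefix, is tampering;
    # bytes appended past the baseline are ignored.
    return len(prefix) < size or hashlib.sha256(prefix).hexdigest() != digest
```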

Lucid Nonsense
Aug 6, 2009

Welcome to the jungle, it gets worse here every day

Maneki Neko posted:

10.5.5 was what I was referring to, which Splunk seems to have a mechanism to take care of, but I haven't seen much discussion on with ELK.

10.5.5 Use file-integrity monitoring or change-detection software on logs to ensure that existing log data cannot be changed without generating alerts (although new data being added should not cause an alert).

File-integrity monitoring or change-detection systems check for changes to critical files, and notify when such changes are noted. For file integrity monitoring purposes, an entity usually monitors files that don’t regularly change, but when changed indicate a possible compromise.

With most logging solutions, once the data has been written to an index it can't be changed unless you have database admin access. Even then, changes should be logged to an audit file. I'd guess that most of them provide adequate protection, especially since I haven't heard of anyone failing an audit over this requirement.

Malcolm XML
Aug 8, 2009

I always knew it would end like this.

pram posted:

elasticsearch and kibana are great, not so wild about logstash anymore. grok and such left a lot to be desired.

i've been working with fluentd and i like it. i appreciate how easy it is to collect various formats (java stack traces etc) which is a benefit that depends on how heterogeneous your environment is.

http://www.fluentd.org/

fluentd owns esp when someone else deals w/ it like the treasuredata people who wrote fluentd
