Friday, November 02, 2007

Ideal Log Management Tool?

The idea came from Jeremiah Grossman (here) when he described "The Best Web Application Vulnerability Scanner in the World" thus: "Within a few moments of pressing the scan button it’ll find every vulnerability, with zero false positives, generate a pretty looking report, and voila you’re compliant with GLBA, HIPAA, and PCI-DSS. Of course, we all know such a web application scanner is simply not possible to create for a variety of reasons."

So, let's imagine the ideal log management application.

  1. Logging configuration: the ideal log app will go and find all possible log sources (systems, devices, applications, etc) and then enable the right kind of logging on them according to a high-level policy given to it (required: God-like powers)
  2. Log collection: it will collect all the above logs securely (and without using any risky super-user access ) and with little to no impact to networks and systems (required: God-like powers)
  3. Log storage: it can securely store the above logs in their original format for as long as needed and in a manner allowing quick access to them - in both raw and summarized/enriched form (required: plenty of hardware)
  4. Log analysis: this ideal application will be able to look at all kinds of logs, known to it and previously unseen, from standard and custom log sources, and tell the user what they need to know about their environment and based on their needs: what is broken? what is hacked? where? what is in violation of regulations/policies? what will break soon? who is doing this stuff? The analysis will power all of the following: automated actions, real-time notifications, long-term historical analysis as well as compliance relevance analysis (required: AI)
  5. Information presentation: this tool will distill the above data, information and conclusions generated by the analytic components and present them in a manner consistent with the user's role: from operator to analyst to engineer to executive. Interactive visual and drillable text-based data presentation across all log sources. The users can also customize the data presentation based on their wishes and job needs, as well as information perception styles (required: nothing more than a bunch of daring UI designers)
  6. Automation: the ideal log management tool will be able to take limited automated actions to resolve discovered and confirmed issues as well as generate guidance to users so that they know what actions to take, when full-auto mode is not appropriate. The responses will range from full-auto actions to assisted actions ('click here to fix it') to issuing detailed remediation guidance. The output will include a TODO-list of discovered items complete with actions suggested, ordered by priority (required: AI + some luck + some user stupidity :-))
  7. Compliance: this tool can also be used directly by auditors to validate or prove compliance with relevant regulations by using regulation-specific content and all the collected data. The tool will also point at gaps in data collection as they apply to specific regulations that the user is interested in complying with (required: God-like powers)
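Item 6 is the one piece of the dream that you can at least sketch today: rank discovered issues by severity weighted by confidence, and pick a response mode (full-auto, assisted, or plain guidance) based on how confirmed the issue is. Here is a minimal, purely illustrative sketch - the issue names, thresholds, and scoring are all made up for the example:

```python
from dataclasses import dataclass
from enum import Enum

class Response(Enum):
    FULL_AUTO = "auto-remediate"
    ASSISTED = "click here to fix it"
    GUIDANCE = "remediation guidance"

@dataclass
class Issue:
    description: str
    severity: int        # 1 (low) .. 10 (critical)
    confidence: float    # 0.0 .. 1.0 - how "confirmed" the analysis thinks it is

def response_mode(issue: Issue) -> Response:
    # Only well-confirmed issues get the full-auto treatment;
    # everything else degrades to assisted action or plain guidance.
    if issue.confidence >= 0.95:
        return Response.FULL_AUTO
    if issue.confidence >= 0.7:
        return Response.ASSISTED
    return Response.GUIDANCE

def todo_list(issues: list[Issue]) -> list[tuple[str, str]]:
    # Order the TODO list by severity weighted by confidence,
    # so likely-and-severe items come first.
    ranked = sorted(issues, key=lambda i: i.severity * i.confidence, reverse=True)
    return [(i.description, response_mode(i).value) for i in ranked]

issues = [
    Issue("Firewall rule allows 0.0.0.0/0 to port 23", severity=9, confidence=0.98),
    Issue("Unusual login pattern on db-server", severity=7, confidence=0.6),
    Issue("Log source went silent 2 hours ago", severity=5, confidence=0.9),
]
for description, action in todo_list(issues):
    print(f"{action:22s} {description}")
```

The hard part, of course, is producing the severity and confidence numbers in the first place - that is where the "AI + some luck" requirement comes in.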

In other words, this magic black box will have crap shoveled from one side and will have answers to questions about the meaning of Life :-) coming out the other side...

What? :-) Am I nuts? Well, can I dream for a second? :-)



Gary said...

Hi Anton.

Dream away! Wouldn't it be good, though, if the programs writing logs were somehow able to filter out the sand, leaving only the gold? Imagine if they were able to communicate with each other in order to set their filtration rules dynamically and identify anomalous trans-system events, perhaps even responding to the number of open issues currently undergoing analysis (i.e. "Anton's overloaded: I'll hold on to this bunch for now but give him this urgent one")?

I used to work for a power company that spent $$$ on smart software for real-time control. It was dynamically adjusting the presentation of alarms and alerts according to the general state of the unit, the number of open alarms/alerts already raised, and the urgency of the events (e.g. safety critical trumps engineering issues). Busy unit ops could put open alarms "on the shelf" but they would figuratively slide along the shelf and eventually fall back into the "do something" open alarms. Would this work for logs??
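The "shelf" idea described above translates quite naturally to log alerting. A minimal sketch, assuming a simple TTL-based shelf (the alarm names, the 300-second TTL, and the injectable clock are all invented for the example):

```python
import time

SHELF_TTL = 300  # seconds an alarm may sit "on the shelf" before sliding off

class AlarmShelf:
    def __init__(self, now=time.time):
        self.now = now      # injectable clock, so the behavior is testable
        self.active = []    # alarms currently demanding operator attention
        self.shelf = {}     # alarm -> time it was put on the shelf

    def raise_alarm(self, alarm):
        self.active.append(alarm)

    def put_on_shelf(self, alarm):
        # Operator defers the alarm instead of dismissing it outright.
        self.active.remove(alarm)
        self.shelf[alarm] = self.now()

    def open_alarms(self):
        # Shelved alarms "slide along the shelf": once past the TTL they
        # fall back into the active list and demand attention again.
        expired = [a for a, t in self.shelf.items()
                   if self.now() - t >= SHELF_TTL]
        for alarm in expired:
            del self.shelf[alarm]
            self.active.append(alarm)
        return list(self.active)
```

The key property is that deferral is never permanent - exactly the "do something" fallback described for the power-plant software.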

If we're really dreaming, wouldn't it be good to have nothing to log i.e. nothing the systems couldn't handle all by themselves?!

Cheers & keep up the good work,

Anton Chuvakin said...

Wow, thanks for the insightful comment!

>programs writing logs were somehow
>able to filter out the sand leaving
>only the gold

Not a chance - the program writing logs cannot and will not be aware of all the uses for logs (does your firewall know about PCI? :-))

>Imagine if they were able to
>communicate with each other in
>order dynamically to set their
>filtration rules

Well, self-organizing programs cooperating on log analysis does sound cool! But such self-organization will likely be hard without the central "authority" of a log analysis/log management tool....

>It was dynamically adjusting the
>presentation of alarms and alerts
> according to the general state
>of the unit, the number of open
>alarms/alerts already raised,

This IS a neat idea indeed - when alerting, keep in mind how overwhelmed the operator is!

>nothing the systems couldn't
>handle all by themselves?

Nah, forget it :-) There are way too many (and there will be more) external requirements to take care of... E.g. compliance, security use for non-security logs, operational use of security logs, debugging based on various logs, etc

Dr Anton Chuvakin