Monday, July 16, 2007

New Paper: "Log management in the age of compliance"

Yeah, I know, not too technical, but still fun - my paper "Log management in the age of compliance" on ComputerWorld: "In my previous article, I described the way in which three regulations (FISMA, HIPAA and PCI-DSS) affect incident response processes. This triumvirate also affects log management, since they [A.C. - these and other regulations] call for enabling logging as well as for log review."

A quote: "The major effect the age of compliance has had on log management is to turn it into a requirement rather than just a recommendation, and this change is certainly to the advantage of any organization subject to these regulations. It is easy to see why log collection and management is important, and the explicit inclusion of log management activities in major regulations like FISMA, HIPAA and PCI-DSS highlights how key it truly is to enterprise security as well as broader risk management needs."

In other words, if you didn't implement log management for the obvious value you can get out of it, you'll do it now because the auditor will get you otherwise :-)

11 comments:

rybolov said...

See, there's that C-word again. =)

It's not really c*mpliance if the rules say "do risk planning and tie it into the budget"--there is a universe full of solutions that match the intent of FISMA. And SP 800-53 is a guideline which means that you can feel free to tailor it as you need to.

Seriously, though, the biggest problem we have with the government is deciding what logs we really need to keep. For instance, if I have 500 Windows servers, which is very realistic for a large agency, they generate a ton of noise. It's way expensive to aggregate logs from them, so you have to pick-and-choose the best and then have the others available on the server to comb through for more detail. That's why NIST uses this wonderful phrase called "organizationally-defined parameters".
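The pick-and-choose approach boils down to a whitelist of event IDs that get forwarded to the aggregator, with everything else left in the local log to comb through later. A hypothetical minimal sketch (the event IDs shown are classic pre-Vista Windows security IDs, used here purely for illustration, not as a recommended list):

```python
# Hypothetical whitelist of "must forward" Windows security event IDs
# (classic pre-Vista IDs; illustrative only, not an official list)
FORWARD_IDS = {517, 528, 529, 612}

def route(events):
    """Split events: whitelisted ones go to the central aggregator,
    the rest stay in the local log for later manual combing."""
    forward, keep_local = [], []
    for event_id, message in events:
        (forward if event_id in FORWARD_IDS else keep_local).append(
            (event_id, message))
    return forward, keep_local

events = [
    (528, "successful logon: alice"),
    (538, "user logoff: alice"),          # noise: stays local
    (529, "logon failure: bob"),
    (560, "object open: C:\\pagefile"),   # noise: stays local
]
forward, keep_local = route(events)
```

The "organizationally-defined parameters" phrase from NIST effectively means each agency fills in its own `FORWARD_IDS` set.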

And for the record, this post did make it into the Eiswein Security Feed. Doesn't your ego feel better? =)

Anton Chuvakin said...

>800-53 is a guideline which means
>that you can feel free to tailor
>it as you need to.

I still tend to pour it into a compliance barrel (to try to go with the wine metaphor), since I think "compliance" is starting to sound like "any reason to 'do stuff' that is forced/strongly suggested externally by government or another authority."

>is deciding what logs we really
>need to keep. For instance, if I

Well, that is true. Indeed, there is no way to know it up front without making a mistake (e.g. see http://www.slideshare.net/anton_chuvakin/csi-netsec-2007-six-mistakes-of-log-management-by-anton-chuvakin)

>It's way expensive to aggregate
>logs from them, so you have to
>pick-and-choose the best and then
>have the others available on the
>server to comb through for more
>detail.

Warning: vendor hat=on. It is entirely possible and not too expensive to aggregate all logs. Tools such as LogLogic were built to work under these exact circumstances of complete collection (unlike other tools that rely on pre-collection filtering): you collect everything and then keep logs for various time periods, depending upon the need (and look only at a subset, of course).
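The collect-everything model is easy to sketch: keep every record at collection time, apply retention per source, and filter only at query time. A hypothetical minimal sketch (the retention numbers and source names are invented for illustration, and this is not how any particular product works internally):

```python
from datetime import datetime, timedelta

# Retention policy per log source, in days (illustrative numbers only)
RETENTION = {"firewall": 365, "windows": 90, "dns": 30}

class LogStore:
    """Collect everything; filter only when querying, never at collection."""
    def __init__(self):
        self.entries = []  # (timestamp, source, message)

    def collect(self, timestamp, source, message):
        # No pre-collection filtering: every record is kept.
        self.entries.append((timestamp, source, message))

    def expire(self, now):
        # Drop only records older than their source's retention window.
        self.entries = [
            (ts, src, msg) for ts, src, msg in self.entries
            if now - ts <= timedelta(days=RETENTION.get(src, 90))
        ]

    def query(self, source=None, contains=None):
        # Analysts look at a subset, but the full set was collected.
        for ts, src, msg in self.entries:
            if source and src != source:
                continue
            if contains and contains not in msg:
                continue
            yield ts, src, msg

store = LogStore()
now = datetime(2007, 7, 16)
store.collect(now - timedelta(days=400), "dns", "query example.com")
store.collect(now - timedelta(days=10), "firewall", "DENY tcp 10.0.0.5")
store.collect(now - timedelta(days=1), "windows", "logon failure user=bob")
store.expire(now)
hits = list(store.query(contains="DENY"))
```

The design choice is that retention, not collection, is where the cost trade-off lives: nothing is lost before it is ever seen.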

>And for the record, this post did
>make it into the Eiswein Security
>Feed. Doesn't your ego feel
>better? =)

Yeah, definitely, it feels a little better, thank you very much :-)

Anonymous said...

Good points, but logging for compliance will certainly be more useful when audit occurs on a per user basis at the data file level.

Do you care to comment on the overall futility of logging based on Scott Berinato's article, since the same anti-forensics tools would ruin logging for compliance also?

The Rise of Antiforensics

http://www.csoonline.com/read/060107/fea_antiforensics.html

Anton Chuvakin said...

>audit occurs on a per user basis at
>the data file level.

True, and I am pretty sure it is coming soon "from an auditor near you" :-)

>Do you care to comment on the
>overall futility of logging

Oh, I already have a "blogfight" about it here (inspired by the same paper):

http://chuvakin.blogspot.com/2007/07/more-on-do-real-hackers-get-logged.html

I have a sneaking suspicion that I said most/all of the things to be said about it ... check it out and then comment again.

Anonymous said...

If there is secondary log information scattered over the network, Berinato's point about criminal investigations getting canned once they drag past 17 hours would certainly apply to compliance audits. Would an auditor declare the network a polluted mess, or forge ahead until the company goes bankrupt over audit fees?

Is the only real defense against these tools a trusted system where logs are not tamperable even by the system admin or security officer?

Anton Chuvakin said...

>mess

While many networks can be qualified as a "polluted mess", I suspect a relative approach will work: namely, a scale of messiness from "logs from an owned system" (low trust) to logs from another system (medium) to logs from network gear (medium-high) to logs from network security gear (highest - though still not guaranteed, for sure!)

>Is the only real defense against
>these tools a trusted system where
>logs are not tamperable even by the
>system admin or security officer?

Well, maybe not as severe as that: trusted ENOUGH not to be tampered with by the SUSPECTED party, I'd say. Firewall logs will probably pass this test.

Anonymous said...

> trusted ENOUGH not to be tampered by the SUSPECTED party

Figuring that out is part of the problem, though: who is the suspected party, and what if it is an insider?

Even firewalls and network security gear have vulnerabilities and therefore can be compromised, so perhaps they themselves need to be trusted in order to guarantee trustworthiness.

Trusted systems would avoid unauthorized tampering in the first place, so much of the after-the-fact investigation would be eliminated; or the audit trail would be facilitated from the get-go, so that investigations would proceed rapidly.

Anton Chuvakin said...

>Trusted systems would avoid
>unauthorized tampering in the first
>place

Very true, but still: how many trusted (in the Rainbow Series sense) systems are out there? Not many.

As a result, you choose between 'no trust whatsoever' and 'somewhat trusted' in most cases... anti-forensic tools just shift more system logs towards less trust...

Anonymous said...

Not too many, as their complexity was a barrier to practical use.

Just for your interest, we have an innovative implementation of the Rainbow principles that has made this level of trust possible and cost-effective.

The problem, as you know, is that as long as there are gaps at the OS level, there will always be potential workarounds higher up the security stack, so app-layer defenses etc. will ultimately be limited in their degree of success.

Anton Chuvakin said...

>Just for you interest we have an
>innovative implementation of the
>Rainbow principles that has made
>this level of trust possbible and
>cost-effective.

Wow, really? Where can I learn more?

>The problem as you know,
Indeed - holey apps over a holey OS is our reality. Is it forever? Who knows...

Anonymous said...

>>>where can I learn more?

Sent you some links.

Dr Anton Chuvakin