Monday, November 15, 2010

Project Honeynet “Log Mysteries” Challenge Lessons

We just finished grading the results of Project Honeynet “Log Mysteries” Challenge #5, and there are some useful lessons for BOTH future challenge respondents AND log analysts and incident investigators everywhere.

If you look at the challenge at a high level, things seem straightforward: a bunch of log data (not that much data, mind you – only 1.14MB compressed) from a Linux system. You can squeak by even with manual analysis and simple scripting; fancier tools would have worked too, of course. The questions lead you to believe that a compromise might have occurred.

Overall, people did get to some of the truth, but there were a few lapses which are worth highlighting.

First, the challenge required you to justify that a login activity pattern was malicious. Yes, a very long string (hundreds) of login failures followed by a success, all from the same IP address in China, smells very fishy, but shorter login sequences can theoretically be legit. Few people chose to justify the conclusion – in all their excitement after finding a compromise. Jumping to conclusions is one of the biggest risks during incident investigations, especially if things can go to court.
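
For illustration, here is a minimal Python sketch of how you could quantify that evidence instead of just asserting it; it assumes standard OpenSSH syslog messages and an example file name (auth.log), so treat the patterns and names as placeholders rather than the challenge’s exact data:

import re
from collections import defaultdict

# Assumes classic OpenSSH syslog lines ("Failed password ... from <ip>",
# "Accepted password ... from <ip>"); the file name is just an example.
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")
ACCEPTED = re.compile(r"Accepted password for \S+ from (\S+)")

failures = defaultdict(int)   # source IP -> count of failed logins so far
evidence = {}                 # source IP -> failures seen before its first success

with open("auth.log") as log:
    for line in log:
        m = FAILED.search(line)
        if m:
            failures[m.group(1)] += 1
            continue
        m = ACCEPTED.search(line)
        if m and failures[m.group(1)] and m.group(1) not in evidence:
            evidence[m.group(1)] = failures[m.group(1)]

# The "justification": sources with hundreds of failures before a success.
for ip, fail_count in sorted(evidence.items(), key=lambda x: -x[1]):
    print(f"{ip}: {fail_count} failed logins before the first successful one")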

Second, just because you found a reliable indication of compromise does not mean there isn’t more – of the same OR a completely different type. You got the login brute forcing via SSH, now check for web app hacking, will ya? More successful attacks in the same log? Anything of the same sort in other logs? Anything completely different but just as ominous? Finding one means little – cast a wide net again and again, even after you find reliable signs of system compromise. A good approach is to pretend that you found nothing and then try harder!
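
As a hedged example of widening the net, here is a tiny Python pass over a web access log; the file name and the patterns are illustrative only, not an exhaustive or authoritative signature set:

import re

# A few illustrative request patterns often seen in web application attacks;
# this is nowhere near exhaustive and the file name is just an example.
SUSPICIOUS = [
    re.compile(p, re.IGNORECASE)
    for p in (
        r"union\s+select",              # SQL injection probing
        r"\.\./\.\./",                  # path traversal
        r"/etc/passwd",                 # file disclosure attempts
        r"<script",                     # reflected XSS probing
        r"\.php\?\S*=(https?|ftp)://",  # remote file inclusion attempts
    )
]

with open("access.log") as log:
    for lineno, line in enumerate(log, 1):
        for pattern in SUSPICIOUS:
            if pattern.search(line):
                print(f"line {lineno}: matched {pattern.pattern}: {line.strip()}")
                break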

Third, investigating post-compromise activity is often as important as detecting the incident. Yeah, we got 0wned by “parties unknown.” And? What did the parties do after they got root? Did they drop an IRC bot to chat with their buddies, or did they clean out your crown jewels? Did they impact other systems and possibly other business processes? Maybe they DoS’d the NSA from your box, and that whirring noise you are hearing is a mean-looking SWAT team heli-dropping onto your data center roof…?
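
If it helps, here is a rough, assumption-laden sketch of a first post-compromise triage pass over the same auth log: pull out the events that tell you what happened after the break-in (sessions opened, accounts added, privilege use). The indicator strings and the file name are placeholders, not a complete list:

# Placeholder indicators of post-compromise activity in a syslog-style auth log;
# in a real case you would also look at shell history, process accounting,
# web logs, and the filesystem, not just these strings.
INDICATORS = ("session opened", "new user", "new group",
              "useradd", "sudo:", "su:")

with open("auth.log") as log:
    for line in log:
        if any(ind in line for ind in INDICATORS):
            print(line.rstrip())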

Finally, everybody chose to miserably FAIL my trick question about timing: “How certain are you of the timing?” Everybody was in the range from “certain” to “fairly sure.” Guys, you are given a bunch of logs by somebody possibly untrusted (me, in this case). How on Earth can you be sure that the timestamps in the logs reflect reality? Did you set up that NTP server? Did you check it before the incident? Did you maintain chain of custody of the logs after they were captured? WTH!? Of course, you cannot be sure at all about the absolute time; you can only make a reasonably good bet that the log timestamps are consistent relative to each other. A good bet, BTW, is not the same as certainty. The opposing side’s lawyer will tear you a new one in a second if you show up with that “certainty” in a court of law…
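
To make that “relative consistency” bet concrete, here is a small Python check – assuming classic syslog timestamps at the start of each line and an example file name – that only verifies timestamps never run backwards; it says nothing about whether the clock on that box was ever right:

from datetime import datetime

previous = None
with open("auth.log") as log:
    for lineno, line in enumerate(log, 1):
        try:
            # Classic syslog format, e.g. "Nov 15 08:23:11" (no year, so a
            # December-to-January rollover would need extra handling).
            current = datetime.strptime(line[:15], "%b %d %H:%M:%S")
        except ValueError:
            continue  # not a syslog-style line
        if previous is not None and current < previous:
            print(f"line {lineno}: timestamp runs backwards: {line[:15]}")
        previous = current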

There was also an open-ended question about attacker motivations. Why did we ask it – think about it! So that you can learn the more social part of the investigative trade. What can you hypothesize and prove? What can you learn by comparing this case with other cases you might have seen or even read about? Is this hot APT shit? Or is this a lone monkey-boy who can barely type?

Regarding commercial SIEM correlation tools: IMHO many should have picked up this brute forcing by using basic correlation rules (if count[login_failure]>100 followed by login_success, where the source IP is the same across all failures and the success, then alert). Check your tool and make sure you have rules like this (or hire somebody to build you a useful correlation ruleset) – even OSSEC can be used for this! Your exposed DMZ servers might be owned already.
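
For readers without a SIEM handy, here is a toy, stream-style rendering of that rule in Python – the event parsing is left out and events are assumed to arrive as (event_type, source_ip) pairs; in a real SIEM or OSSEC deployment this logic lives in the correlation ruleset, not in your own code:

from collections import defaultdict

THRESHOLD = 100  # failures before a success that we consider brute forcing

def correlate(events):
    """Yield an alert when > THRESHOLD failures from one source precede a success."""
    fail_count = defaultdict(int)
    for event_type, src_ip in events:
        if event_type == "login_failure":
            fail_count[src_ip] += 1
        elif event_type == "login_success":
            if fail_count[src_ip] > THRESHOLD:
                yield f"ALERT: {fail_count[src_ip]} failures then a success from {src_ip}"
            fail_count[src_ip] = 0

# Synthetic example stream: 150 failures, then a success, from one source.
stream = [("login_failure", "203.0.113.5")] * 150 + [("login_success", "203.0.113.5")]
for alert in correlate(stream):
    print(alert)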

More challenges are coming!

Dr Anton Chuvakin