Thursday, February 24, 2011

The Honeynet Project Releases New Tool: Cuckoo

Here is another cool tool release from The Honeynet Project: Cuckoo Box by Claudio Guarnieri. Cuckoo is a binary analysis sandbox, designed and developed with the general purpose of automating the analysis of malware. Read more about the tool here and grab it here – but make sure to read the detailed setup guide first! BTW, this tool is really well documented, so make use of the documentation before deploying it.

Cuckoo is a lightweight solution that performs automated dynamic analysis of provided Windows binaries. It is able to return comprehensive reports on key API calls and network activity. Current features are:

  • Retrieve files from remote URLs and analyze them.
  • Trace relevant API calls for behavioral analysis.
  • Recursively monitor newly spawned processes.
  • Dump generated network traffic.
  • Run concurrent analysis on multiple machines.
  • Support custom analysis packages based on AutoIt3 scripting.
  • Intercept downloaded and deleted files.
  • Take screenshots during runtime.
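To give a flavor of how such behavioral output (traced API calls plus network activity) might be consumed downstream, here is a minimal Python sketch. Note that the report format below is a made-up illustration for this post, not Cuckoo's actual output schema:

```python
import json

# Hypothetical behavioral report in the spirit of what a sandbox like
# Cuckoo produces: traced API calls plus generated network traffic.
SAMPLE_REPORT = """
{
  "process": "sample.exe",
  "api_calls": [
    {"api": "CreateFileW", "args": ["C:\\\\evil.dll"]},
    {"api": "RegSetValueExW", "args": ["Run", "evil"]}
  ],
  "network": [{"dst": "203.0.113.7", "port": 80}]
}
"""

def summarize(report_text):
    """Return the traced API names and contacted hosts from a report."""
    report = json.loads(report_text)
    apis = [call["api"] for call in report["api_calls"]]
    hosts = [conn["dst"] for conn in report["network"]]
    return apis, hosts

apis, hosts = summarize(SAMPLE_REPORT)
print(apis)   # ['CreateFileW', 'RegSetValueExW']
print(hosts)  # ['203.0.113.7']
```

Even a toy consumer like this shows why machine-readable sandbox reports matter: the interesting behaviors (file drops, registry persistence, callbacks) can be pulled out without eyeballing raw traces.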

Please try the tool and send feedback to the author – or sign up for the mailing list devoted to this tool here.


Tuesday, February 22, 2011

On Cloud Logging Standards, Unique IDs and Other Exciting Logging Matters

Two of my esteemed colleagues, Misha Govshtein of AlertLogic and Raffael Marty of Loggly had a bit of an argument over something fairly central to logging and log management, especially as it applies to the coming cloud wave. Let’s review what happened.

In 2010, AlertLogic folks submitted an IETF draft of what they called “Syslog Extension for Cloud Using Syslog Structured Data”. The draft is available here, and the AlertLogic team’s explanation of its mission and purpose can be found here and here (unfortunately in MP3 form). The draft reads as if they are proposing a new cloud log standard, since the very first sentence of the document is: “This document provides an open and extensible log format to be used by any cloud entity or cloud application to log and trace activities that occur in the cloud.”

Said draft found its way to the CEE Editorial Board (via an IETF list message) and caused some interest and, dare I say, unrest – and some disagreements. Raffael Marty of Loggly has published his position on the draft here. Further exchanges of opinion can be seen in the comments here, as well as heard in the hallways of the RSA 2011 conference.

What do I think of this? I think both of these renowned log literati are right and wrong (at this point, somebody might say “Anton… you are such a consultant”… and I am :-))

Unquestionably, I believe that the idea of cloud logging having its very own special standard, completely disconnected from all other logs, is misguided. Being disconnected from both the rest of the logging domain and current log standardization efforts (like CEE, XDAS, etc.) only makes this idea more misguided. In essence, if you grab an example of a current bad application log, add “cloudiness” to it (more on this later) and then publish it as a “cloud log standard”, you generate mostly hilarity and no value for the IT community. Logically, it goes like this:

  1. Bad log + cloud ID = really bad cloud log.
  2. Really bad cloud log + public IETF draft = really bad, standard cloud log, exposed in public.
  3. Really bad standard log in the cloud EXPOSED in public = stupidity.
  4. Stupidity –> funny blog posts from Anton, like, for example, this one.

This just reminds me of Chris Hoff saying “Cloud security suffers from the exact same siloed security telemetry problems as legacy operational models…except now it does it at scale.” In fact, here is an example from the draft:

Aug 16 13:34:18 [context aid="149683FC-8DF5-1004-E1A8-00000A000152"
provider="example.com" rid="1:123"][transit client="172.16.1.82"]
User authentication successful for 1:123


Would YOU like to spend your mornings analyzing logs like this? If you expose such examples in a purported standard draft, future generations of log analysts will hate you with a passion….
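To be fair, the bracketed name="value" structure in the draft's example is at least machine-parseable, even if humans will hate it. Here is a minimal, regex-based Python sketch written just for this example (not part of any standard tooling) that pulls the structured-data elements apart:

```python
import re

# The example log line from the draft, verbatim.
LOG = ('Aug 16 13:34:18 [context aid="149683FC-8DF5-1004-E1A8-00000A000152" '
       'provider="example.com" rid="1:123"][transit client="172.16.1.82"] '
       'User authentication successful for 1:123')

def parse_sd(line):
    """Extract [element key="value" ...] blocks into a dict of dicts."""
    elements = {}
    for name, body in re.findall(r'\[(\w+)((?:\s+\w+="[^"]*")*)\]', line):
        elements[name] = dict(re.findall(r'(\w+)="([^"]*)"', body))
    return elements

sd = parse_sd(LOG)
print(sd["context"]["aid"])     # 149683FC-8DF5-1004-E1A8-00000A000152
print(sd["transit"]["client"])  # 172.16.1.82
```

Of course, the fact that every log analyst would have to write (and maintain) this sort of parser is exactly the problem with exposing such examples as a standard.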



However!



I also happen to think that there are significant differences between logging from/at cloud computing platforms (whether SaaS, PaaS or IaaS) and BOTH traditional system logging AND distributed application logging. Cloud computing (as defined by NIST) has inherent multi-tenancy, elasticity, immediate provisioning and other fun properties not found in traditional applications and platforms – whether distributed or not. All of these affect accountability, auditability and transparency – all the goals logs serve – in a number of big ways. Thus, cloud computing must change how logging is done, and it will change it. Specifically, adding a unique ID (an “audit identifier which uniquely identifies an external request for activity”) to logs in order to enable end-to-end request tracing serves a useful purpose.
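To make that last point concrete, here is a minimal, hypothetical Python sketch (mine, not from the draft) of attaching such a unique audit identifier to every log record produced while handling one external request, so that activity can later be correlated across tenants and components:

```python
import logging
import uuid

# Every record carries the audit ID (aid) of the request it belongs to.
logging.basicConfig(format="%(asctime)s aid=%(aid)s %(message)s")
log = logging.getLogger("cloud-app")
log.setLevel(logging.INFO)

def handle_request(tenant, action):
    # One unique audit ID per external request; every component that
    # touches the request logs the same ID, enabling end-to-end tracing
    # even in a multi-tenant, elastic environment.
    aid = str(uuid.uuid4())
    extra = {"aid": aid}
    log.info("tenant=%s action=%s started", tenant, action, extra=extra)
    log.info("tenant=%s action=%s completed", tenant, action, extra=extra)
    return aid

handle_request("acme", "login")
```

The key design point is that the ID is minted once, at the edge where the external request arrives, and then merely propagated – not re-generated – by everything downstream.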



So, we must change logging for the cloud AND we must improve logging everywhere through standards work. This will result in GOOD, USEFUL LOGS that ALSO WORK WELL IN THE CLOUD. The caveat? We need it sooner than CEE will be finished and adopted on a broad scale. The “CloudLog” effort contains useful ideas that need to be implemented in future logs produced by cloud framework components, but the method chosen (an uncooked IETF draft chock-full of bad log examples) deserves mostly ridicule…

Monday, February 14, 2011

LogChat Podcast 5: Anton Chuvakin and Andrew Hay Talk Logs

LogChat Podcast is back again – sorry for the brief delay! Everybody knows that all this world needs is a podcast devoted to logs, logging and log management (as well as SIEM, incident response and other fun related subjects).

And now you have it AGAIN with edition #5 – through the sheer combined genius of Andrew Hay and myself, Anton Chuvakin. Our topic today is scaling and sizing log management and SIEM: scalability, sizing, estimating log volumes, hard EPS limits (evil!), scalability of the entire system vs component scalability, peak vs ongoing log rates, EPS, petabytes of logs, “log math”, capacity planning, as well as how to “slap your vendor” (obviously, that quote is from Andrew, not myself :-)) regarding the scalability of their tools.
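A back-of-the-envelope version of the “log math” we discuss on the show is estimating daily log volume from a sustained event rate and an average event size. A tiny Python sketch (the figures below are illustrative assumptions, not sizing recommendations):

```python
def daily_log_volume_gb(events_per_second, avg_event_bytes):
    """Estimate raw log volume per day in gigabytes (GiB)."""
    seconds_per_day = 86_400
    bytes_per_day = events_per_second * avg_event_bytes * seconds_per_day
    return bytes_per_day / 1024 ** 3

# Example: a sustained 2,000 EPS at ~300 bytes per event.
volume = daily_log_volume_gb(2000, 300)
print(f"{volume:.1f} GB/day")  # 48.3 GB/day
```

Remember that this only covers raw, sustained volume – real capacity planning must also account for peak rates (which can dwarf the sustained average), indexing overhead and retention periods.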

Some administrative items:
  1. We plan for this to happen periodically, such as maybe every three weeks - recorded on Wednesday, posted on Thursday. However, due to our work schedules, irregularities occur all the time. If you have not seen or heard a new LogChat podcast for a few weeks, be aware that we are not dead; just busy taking over the world.
  2. No, we are still not ready with transcribing and, yes, we still want it. I did try Amazon Mechanical Turk, but it didn't turn out to be as inexpensive as people claimed. If you have ideas for a good AND cheap transcription service, we are all ears.
  3. Please suggest topics to cover as well - even though we are not likely to run out of ideas for a few years.
  4. Any other feedback is HUGELY useful. Is it too long? Too loud? Too rant-y? Too technical? Not enough jokes? Too few mentions of the "cloud"? Feedback please!
And now, in all its glory - the podcast: link to #5 MP3 is here [MP3], RSS feed is here - it is also on iTunes now.

Enjoy THE LogChat!



Wednesday, February 09, 2011

The Honeynet Project Releases New Tool: PhoneyC

    As promised, I will be reposting some of the cool new announcements from The Honeynet Project here on my blog, since I now serve as the Project’s Chief PR Officer.

    Here is one more: a release of a new tool called PhoneyC, a virtual client honeypot.

    PhoneyC is a virtual client honeypot, meaning it is not a real application (that can be compromised by attackers and then monitored for analysis of attacker behavior), but rather an emulated client, implemented in Python. The main thing it does is scour web pages looking for those that attack the browser.

    It can be run, for example, as: $ python phoneyc.py -v www.google.com

    By using dynamic analysis, PhoneyC is able to remove the obfuscation from many malicious pages. Furthermore, PhoneyC emulates specific vulnerabilities to pinpoint the attack vector. PhoneyC is a modular framework that enables the study of malicious HTTP pages and understands modern vulnerabilities and attacker techniques.
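As a toy illustration of why dynamic analysis defeats obfuscation, consider the common trick of hiding a payload as a list of character codes (String.fromCharCode in malicious JavaScript). This simplified Python stand-in is not PhoneyC code – the real tool runs actual JavaScript through spidermonkey – but it shows the principle:

```python
# Obfuscated payload as character codes. A static signature scanning the
# page source never sees the decoded string; actually executing the
# attacker's decoding step (as a dynamic analyzer does) reveals it.
OBFUSCATED = [104, 116, 116, 112, 58, 47, 47, 101, 118, 105, 108,
              46, 101, 120, 97, 109, 112, 108, 101]

def decode(codes):
    """Emulate the decoding step the attacker's own script would run."""
    return "".join(chr(c) for c in codes)

print(decode(OBFUSCATED))  # http://evil.example
```

Real-world obfuscation is far nastier (nested evals, environment checks, polymorphic encoders), which is exactly why emulating execution beats pattern-matching on page source.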

    Download version 0.1 (the included readme contains installation instructions) here: phoneyc_v0_1_rev1631.tar_.gz

    v0.1 feature highlights include:

    * Interpretation of useful HTML tags for remote links
    - hrefs, imgs, etc ...
    - iframes, frames, etc
    * Interpretation of scripting languages
    - javascript (through spidermonkey)
    - supports deobfuscation, remote script sources
    * ActiveX vulnerability "modules" for exploit detection
    * Shellcode detection and analysis (through libemu)
    * Heap spray detection

    PhoneyC is hosted at http://code.google.com/p/phoneyc/, from which the newest development version can be obtained via SVN. For any issues, turn to the Google Group dedicated to the project: http://groups.google.com/group/phoneyc.


Monday, February 07, 2011

Test Your Mad Logging and Log Management Skills NOW!

Love those easy unscientific quizzes you see all over the Internet? Here is one such quiz on LOGGING and LOG MANAGEMENT that I created specially for LogManagementCentral.

Go check what you really know about logs and figure out whether you are a mere bunny logger or a log management ninja.

Result scales:

  • Bunny logger (score of 10%)
  • Eager log beaver (score of 20 – 40%)
  • I know my way around logs (score of 50 – 70%)
  • I changed my name to “Log Logger” (score of 80 – 90%)
  • Log management ninja (score of 100.00% and nothing less!)

Don’t be afraid … I did put a couple of tricky questions in there.

Thursday, February 03, 2011

Proactive and Continuous Compliance? For Real?

At one of the first security conferences I ever attended (probably in 2001 or so), there was this vendor dude who would not stop rambling about continuous compliance. I listened to him and it suddenly dawned on me: what an awesome idea! Running a security-focused, ongoing, multi-regulation program that both delivers value to business units and reduces risk – what’s not to love here?

However, over the years I’ve gotten more cynical on this matter; we all know our beloved security industry does this to people :-) As I said in my infamous “Top PCI DSS Security Marketing Annoyances”: “The ‘ongoing compliance’ theme is awesome. Sadly, a majority of your customers [I was addressing security vendors in that post – A.C.] don’t do it like this (to their own loss – that is why it is sad). They still have the assessment-time rush, the please-the-assessor approach and the checklist-oh-we-are-DONE! mentality. If you want to sell continuous compliance, you need to educate them first!”

Despite such sentiment, I still love the idea of a continuous, proactive, cross-regulatory approach to compliance. The mere fact that most organizations don’t do it like this should not discourage the education efforts to make it more common.

In fact, some recent research indicates that maybe – just maybe – the tide is turning and organizations will start revolting against the “annual assessment rush”, “audit mentality” and “audit done? see ya next year, security!” themes. There are other indicators, even if weak, that running an ongoing compliance program with technical control assessment automation is growing more popular and that newer tools may make it more real. The Verizon 2010 breach report and the Verizon PCI report also seem to indicate that compliance programs help security, while annual compliance audits only work to unearth negligence and incompetence. The drive to operationalize PCI DSS controls (example) and to stay compliant (example) also seems to be growing, at least among the larger merchants. One more example comes from the whole FISMA theater – NIST folks are now all about “continuous monitoring” for FISMA compliance (see this FAQ).

In light of this, maybe the times of continuous, [more] automated compliance are upon us? It so happens that I will be doing a SANS webcast to explore this topic on February 11. Join the conversation as well as the fight for useful and continuous compliance in service of security.

Is continuous compliance a reality at your organization? Are you doing something 9, 6, 3 months before the annual PCI DSS assessment? Do you meet the auditor once a year? Or do you make an effort to stay compliant?

Wednesday, February 02, 2011

Monthly Blog Round-Up – January 2011

Blogs are "stateless" and people often pay attention only to what they see today. Thus a lot of useful security reading material gets lost. These monthly round-ups are my way of reminding people about interesting blog content. If you are “too busy to read the blogs,” at least read these.

So, here is my next monthly "Security Warrior" blog round-up of the top 5 popular posts/topics this month.

  1. The hilarious “Top 10 Things Your Log Management Vendor Won't Tell You”, written for LogManagementCentral, reigns supreme this month! Read, laugh, weep… log.
  2. My PCI DSS log review procedures, which I created for a consulting client and started posting on the blog (sanitized, of course!), took the top spot again: the first post “Complete PCI DSS Log Review Procedures, Part 1” and the whole series “PCI_Log_Review” are expected to be useful to most large organizations under PCI DSS as well as other regulations.
  3. To my great excitement, “Today The Industry Is Changed!”, which announced the relaunch of Security Scoreboard, is one of the most popular posts this month. This great project is taking off like a rocket – and will hopefully make our industry better soon!
  4. Another LogManagementCentral special, “Bottom 11 Log Management "Worst Practices"”, is next on the top list. Hate security “best practices”? Check out the bottom worst practices instead!
  5. Oh wow! Yet another LogManagementCentral special, “11 Log Resolutions for 2011” takes the #5 spot this month. Make (and stick to!) these resolutions in your environment as well!
  6. The final, 6th-of-5 :-) position is again held by my free log management tool list (“On Free Log Management Tools”) from my consulting site. The original version was written as a companion to our “Log Review Checklist”, which also sits on the top list this month.

Also, below I am thanking my top 3 referrers this month (those who are people, not organizations). So, thanks a lot to the following people whose blogs sent the most visitors to my blog:

  1. Lenny Zeltser
  2. Kevin Riggins
  3. Dancho Danchev (who, BTW, is back)

Also see my past annual “Top Posts” – 2007, 2008, 2009, 2010. Next, see you in February for the next monthly top list.


Tuesday, February 01, 2011

First-ever Honeynet Project Public Conference–Paris 2011

It is with great pleasure that I announce the first-ever Honeynet Project Public Conference, held alongside the traditional Honeynet Project Annual Workshop. The event will be held on March 21, 2011 in Paris. For those who just want to register now, go here.

Date:
21 March 2011 (Monday), 8:30 – 18:00 (GMT+1)
Location:
ESIEA Paris, 9 rue Vesale, 75005 Paris
(Nearest subway station: Les Gobelins, line #7)
About the event:
The 2011 Honeynet Project  Security Workshop brings together experts in the field of information security from around the world to share the latest advances and threats in information security research. Organized by the not-for-profit The Honeynet Project and co-sponsored by the ESIEA Engineering School, this full day workshop creates opportunities for networking, collaboration and lessons-learned featuring a rare, outstanding line-up of international security professionals who will present on the latest research tools and findings in the field.
This year’s workshop will be held in Paris, France on 21 March 2011 and is the first time that the workshop has opened a day to the public. Starting at 9:00 GMT+1, the workshop program features a format that includes presentations in five sessions and two bonus hands-on activities. The bonus activities include a technically challenging capture-the-flag (CTF) session and a tough forensics challenge (FC) that will allow attendees to apply their expertise and compete for prizes. If you’re looking to attend a high quality and challenging security workshop, then we encourage you to take advantage of this rare opportunity.
Note:
1. Attendance is limited to 180.
2. Participants may bring their computers to play the CTF and the Forensics Challenge (FC).
3. The workshop will be conducted in English.
Full agenda is available here; some highlights are below:
SESSION 2: Combating the Ever-Evolving Malware
10:30~11:00
Efficient Analysis of Malicious Bytecode Linespeed Shellcode Detection and Fast Sandboxing
Georg 'oxff' Wicherski
McAfee
11:00~11:30
High-Performance Packet Sniffing
Tillmann Werner
Kaspersky Lab
11:30~12:00
Reversing Android Malware
Mahmud Ab Rahman
MyCERT, Cybersecurity Malaysia

Enjoy the event!

Dr Anton Chuvakin