Friday, June 25, 2010

SANS Log Management Class in California?

This post is not just an announcement; it contains a BIG question to my readers, particularly those in and around California.

As you know, I have authored a SANS Log Management Class (SEC434) which is almost out of beta and near production stage, after a few years of tuning and trial runs. We are thinking of teaching it in California during the second week of August 2010. Via this blog post, I wanted to get some quick feedback from my readers about how many might want to sign up for it. So, please just leave a comment here if you’d like to attend!

Also, I wanted to check whether anybody’s employer (a log management or SIEM vendor perhaps…) would be willing to provide a venue to teach a class. We just need a room with a projector, nothing fancy. In exchange for that, SANS will give you some free attendance seats for the class. So, drop me an email, DM or something, if you’d like to take this opportunity.

The updated information on the class follows below:

“This first-ever dedicated log management class covers system, network, and security logs, their analysis and management, and the complete lifecycle of dealing with logs: the whys, hows, and whats.

You will learn how to enable logging and then how to deal with the resulting data deluge by managing data retention, analyzing data using search, filtering and correlation as well as how to apply what you learned to key business and security problems. The class also teaches applications of logging to forensics, incident response and regulatory compliance.

In the beginning, you will learn what to do with various log types, with brief configuration guidance for common information systems. Next, you will learn a phased approach to implementing a company-wide log management program, and go into the specific log-related tasks that need to be done on a daily, weekly, and monthly basis for log review and monitoring.

Everyone is looking for a path through the PCI DSS and other regulatory compliance maze, and that is what you will learn in the next section of the course. Logs are essential for resolving compliance challenges; this class will teach you what you need to concentrate on and how to make your log management compliance-friendly. And people who are already using log management for compliance will learn how to expand the benefits of their log management tools beyond compliance.

You will learn to leverage logs for critical tasks related to incident response, forensics, and operational monitoring. Logs provide one of the key information sources while responding to an incident and this class will teach you how to utilize various log types in the frenzy of an incident investigation.

Finally, the class author, Dr. Anton Chuvakin, probably has more experience in the application of logs to IT and IT security than anyone else in the industry. This means he and the other instructors chosen to teach this course have made a lot of mistakes along the way. You can save yourself a lot of pain and your organization a lot of money by learning about the common mistakes people make working with logs.”

P.S. Response to comments might be delayed, as I am away from my computers.

Possibly related posts:

Wednesday, June 23, 2010

SLAML 2010 Log Analysis Workshop

This year, the Workshop on the Analysis of System Logs (WASL) is reborn as SLAML. Please consider submitting a short paper (no need to do a full academic write-up!). The deadline is July 11.

Join us in Vancouver, BC, Canada, October 2–3, 2010, for the Workshop on Managing Systems via Log Analysis and Machine Learning Techniques. Modern large-scale systems are challenging to manage. Fortunately, as these systems generate massive amounts of performance and diagnostic data, there is an opportunity to make system administration and development simpler via automated techniques to extract actionable information from the data. The SLAML '10 workshop addresses this problem in two thrusts: (i) the analysis of raw system data logs and (ii) the application of machine learning to systems problems. The large overlap in these topics should promote a rich interchange of ideas between the areas.

SLAML '10 combines the Workshop on the Analysis of System Logs (WASL) and the Workshop on Tackling Computer Systems Problems with Machine Learning Techniques (SysML).

The part related to logs is:

“Log Analysis: It is well known that raw system logs are an abundant source of information for the analysis and diagnosis of system problems and prediction of future system events. However, a lack of organization and semantic consistency between system data from various software and hardware vendors means that most of this information content is wasted. Current approaches to extracting information from the raw system data capture only a fraction of the information available and do not scale to the large systems common in business and supercomputing environments. It is thus a significant research challenge to determine how to better process and combine information from these data sources.”

The topics sought are:

“Topics include but are not limited to:

  • Reports on publicly available sources of sample system logs
  • Prediction of malfunction or misuse based on system data
  • Statistical analysis of system logs
  • Applications of Natural-Language Processing (NLP) to system data
  • Techniques for system log analysis, comparison, standardization, compression, anonymization, and visualization
  • Applications of log analysis to system administration problems
  • Use of machine learning techniques to address reliability, performance, power management, security, fault diagnosis, scheduling, or manageability issues
  • Challenges of scale in applying machine learning to large systems
  • Integration of machine learning into real-world systems and processes
  • Evaluating the quality of learned models, including assessing the confidence/reliability of models and comparisons between different methods”
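To make the log analysis thrust above concrete, here is a minimal sketch (my own illustration, not from the CFP) of one common technique: extracting message templates from raw log lines by masking variable tokens, so that structurally identical lines can be counted and rare (often anomalous) templates surfaced:

```python
import re
from collections import Counter

# Collapse variable tokens (IPs, hex IDs, numbers) so that
# structurally identical log lines map to one template.
# Mask order matters: IPs before bare numbers.
MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),
    (re.compile(r"\b0x[0-9a-fA-F]+\b"), "<HEX>"),
    (re.compile(r"\b\d+\b"), "<NUM>"),
]

def template(line: str) -> str:
    for pattern, token in MASKS:
        line = pattern.sub(token, line)
    return line

def summarize(lines):
    """Count how often each template occurs; rare templates
    are often the interesting ones for diagnosis."""
    return Counter(template(l) for l in lines)

logs = [
    "Accepted password for root from 10.0.0.5 port 22",
    "Accepted password for root from 10.0.0.9 port 22",
    "FAILED su for admin by attacker",
]
for tmpl, count in summarize(logs).most_common():
    print(count, tmpl)
```

This is a toy version of what the template-mining literature does at scale; real systems must also handle multi-line messages and vendor-specific formats.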

Please submit to advance the state of log analysis research! Past workshop information is here (2008, 2009).

P.S. This is posted by a scheduler; response to comments may be delayed since I might be away from computers.


    Monday, June 21, 2010

    Ultimate Security Survey is ON!

    Securosis folks are starting off a new data security survey “focused on evaluating perceived effectiveness of various controls, as well as some other incident data.” In other words, they are starting The Holy Grail of Security Surveys: how/why what we do works or fails. It only takes about 10-20 minutes to complete – but can provide hugely useful data.

    Please participate!

    The survey is available at http://www.surveymonkey.com/s/datasec2010

    If asked for a code, enter "SecurosisAwesome"

    Enjoy!

    Wednesday, June 16, 2010

    How PCI Leads to DLP?

    By now, it is increasingly obvious that PCI DSS does not (and likely will not) mandate the use of Data Leak Prevention (DLP) technology now or in the near future. This applies to both discovery and monitoring/enforcement aspects of DLP. However, I am hearing that the percentage of DLP deployments driven by PCI DSS compliance is rising. What’s the story with that?
    While a certain percentage of such deployments simply point “in the general direction of PCI” to get budget (huh…nothing wrong with that :-)), I’d like to comment on the fact that DLP often makes a decent compensating control for many PCI DSS requirements.
    PCI Compliance: Understand and Implement Effective PCI Data Security Standard Compliance
    First, unless you have read the PCI book already, read Branden’s chapter on the Art of the Compensating Control (this paper [PDF] has some of the same coverage).
    So, here is where I have seen DLP boxes used as compensating controls (warning: evidence of a QSA actually accepting them was not available in all cases, so use this advice at your own risk):
    • Stored data encryption (Requirement 3.4 “Render PAN, at minimum, unreadable anywhere it is stored”): DLP was used to compensate for the lack of STORED data encryption. The thinking was that if the data cannot leave the storage (…via the network), DLP was satisfying the same intent as encryption in the original requirement.  Would I agree that “it goes above and beyond” the original? Good question :-)
    • Access control (Requirement 7.1 “Limit access to system components and cardholder data to only those individuals whose job requires such access.”): DLP was used to reduce the chance of PANs falling into the wrong hands and thus satisfying the spirit of this requirement.
    • Monitoring access to data (Requirement 10.2 “Implement automated audit trails for all system components to reconstruct the following events: […] All individual accesses to cardholder data”): while logging is a common choice here, DLP was used to make sure that all network access to cardholder data is recorded. The reason for choosing DLP over logging was that the company didn’t know how to configure logging, but knew how to buy a DLP box :-)
    Other examples of auxiliary use of DLP for PCI DSS included verifying that Requirement 4.1 (“Use strong cryptography and security protocols such as SSL/TLS or IPSEC to safeguard sensitive cardholder data during transmission over open, public networks”) is indeed being followed. In this case, DLP served as a glorified PAN sniffer.
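For illustration, a “glorified PAN sniffer” at its simplest is a digit-pattern match plus a Luhn checksum over cleartext payloads. This is a minimal sketch of the general technique, my own and not any vendor’s implementation; real DLP products add context, encoding handling, and far better false-positive control:

```python
import re

# Candidate PANs: 13-16 digits, optionally separated by spaces/dashes.
CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: double every second digit from the right."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_pans(payload: str):
    """Return digit strings in a cleartext payload that look like
    PANs and pass the Luhn check."""
    hits = []
    for match in CANDIDATE.finditer(payload):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_ok(digits):
            hits.append(digits)
    return hits

# A Luhn-valid number seen in unencrypted traffic is evidence
# worth investigating against Requirement 4.1.
print(find_pans("order 4111-1111-1111-1111 shipped"))
```

Run over captured payloads, any hit on an open network segment suggests Requirement 4.1 is not being followed there.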
    On top of this, the discovery components of DLP tools are often used for scoping. See some fierce debate on this issue, referenced from here. To summarize, the use of DLP and standalone data discovery tools for PCI scoping is certainly not mandatory, but very helpful. One can also use a DLP system to make sure that scope does not explode when people pull card data from the payment environment into development, QA, and other environments.
    Finally, I see the growth of PCI-motivated use of DLP tools as something positive. To me it says that people are following the spirit of the DSS and not simply its letter (of course, one could also say they are reaching for a DLP box as an easy way out). Indeed, despite everything said above, deleting cardholder data is still a better way to make sure it does not get stolen or “lost”…
    (also, as a disclosure, I serve on the advisory board of a DLP company, nexTier Networks, which has a product called Compliance Enforcer)

    Saturday, June 12, 2010

    How Do I Get The Best SIEM?

    Given that I spent this entire week getting back into a SIEM-building game [don’t ask :-)], a few thoughts on the state of Security Information and Event Management have dawned on me.

    [Figure: Gartner SIEM Magic Quadrant, 2003]

    Some security technologies – like network firewalls – are getting pretty darn close to being commoditized, and the differences between products are ever-so-close to being wiped out.

    SIEM, let me tell you, is nowhere near this. Maybe this also has something to do with the fact that the Gartner SIEM MQ 2010 (see this fun commentary from Rocky and his view on SIEM history) contains so many players, and has for so many years. To follow up on this, here is a fun quote from a Gartner MQ on SIEM: “There are signs of general convergence on a core set of [SIEM] capabilities.”

    Do you know WHEN the above was written? March 2003!

    2003! In other words, a full 7 (!) years after the first SIEM products were built. And also a full 7 (!) years ago. Look to the right to see how the SIEM realm looked back then [yes, Brian, I just reread all SIEM MQs from 2003 to 2010 – just for fun :-)].

    Today, in 2010, there is still NO “best SIEM for everybody” and there is NO feature parity even across key capabilities.

    Yes, there is a SIEM tool that seems better for large enterprises with unlimited budget. But an overall “best SIEM”? Nope.

    In fact, I bet that …

    If you pick five top SIEM requirements AND 5 “top” SIEM vendors, then at least one of the tools will REALLY SUCK on at least one of the key requirements.

    The reality is that after so many years, all – well, most – SIEM tools actually “run,” but do they always “work”? Let me explain the difference between software that RUNS and software that WORKS. “Runs” means the code compiles and, when executed, does not throw an exception. “Works,” on the other hand, means it delivers value to its buyer. For example, rule-based correlation runs (well, unless it runs out of memory…oops!), but doesn’t work in many environments (see the recent Securosis piece on that). Real-time dashboards run, but aren’t even utilized in many environments. Visualization tools run, but often users cannot get them to work. Risk scoring / statistical correlation runs, but often doesn’t deliver useful results.

    And you know, believe it or not, SIEM vendors are NOT the ones to blame for it. Many are honest in saying that “Yes, to succeed, a SIEM project takes work BY its buyer/user.” So, your SIEM likely will WORK, if you WORK on it.

    Now, let’s turn this into something practical and useful. What’s a poor SIEM buyer – whether enterprise or mid-market – to do? How do you pick the right SIEM?

    The only choice I see is the one that won’t surprise my readers: focus on requirements, define your SIEM use cases – and then test the products. Buy the one that WORKS FOR YOU! Some ideas on the selection process can be found here.

    Enjoy!



    Tuesday, June 01, 2010

    Monthly Blog Round-Up – May 2010

    Blogs are "stateless" and people often only pay attention to what they see today, so a lot of useful security reading material gets lost. These monthly round-ups are my way of reminding people about interesting content. If you are “too busy to read the blogs,” at least read these.

    So, here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics.

    1. By a HUGE margin again, the #1 post this month is “Simple Log Review Checklist Released!” Grab our log review checklist here, if you have not done so already. It is perfect to hand out to junior sysadmins who are just starting up with logs. Another similar resource is in the works…
    2. Next up are my notes from a university PCI DSS workshop where I delivered a keynote: “My Best PCI DSS Presentation EVER!” (the infamous “compliance kitten” quote comes from here)
    3. Everybody loved my whitepaper on SIEM + log management, released via the post called “Two New Logging Resources Published.” Check out the paper here (registration with Novell required). Just as a preview, another big research whitepaper on SIEM is in the works….
    4. A recent post “On Choosing SIEM“ went to the top like lightning last month and stayed there this month. If you are thinking of getting a SIEM or a log management tool, check it out – please also look at the related resources at the end of it.
    5. Proving that all SIEM/LM vendor product managers read this blog, the post “Log Management / SIEM Users: “Minimalist” vs “Analyst”” is in the Top5 too. It is about two vastly different types of people who buy and [try to] use SIEM and log management tools.
    6. Just for good measure, item #6 of my Top 5 :-) is “Compliance Mega-Epiphany!”

    Also, below I am thanking my top 5 referrers this month (those who are people, not organizations). So, thanks a lot to the following people whose blogs sent the most visitors to my blog:

    1. Walt Conway
    2. Michał Wiczyński
    3. Dancho Danchev
    4. Cédric Blancher
    5. Kevin  Riggins

    See you next month; also see my annual “Top Posts” – 2007, 2008, 2009!

    Possibly related posts / past monthly popular blog round-ups:

    Dr Anton Chuvakin