Sunday, July 31, 2011

The Last Blog Post!

This is my last blog post – for the foreseeable future. It is dated 7/31/2011 at 11:59PM. What happens tomorrow? A new life, of course!

As only very few of you know, I have accepted a position as Research Director with Gartner, Inc. Tomorrow I am joining a stellar team led by Phil Schacter, formerly of Burton Group.

I spent two VERY successful years consulting, working with companies like Novell, RSA, LogLogic, NitroSecurity, eGestalt, ObserveIT, Tripwire, AlienVault, “Big MSSP”, “Big Insurance Company”, “SaaS Log Management Company”, “IT Management Software Company”, “SMB Security Company”, “Big Networking Equipment Company” and others. I defined, built, deployed, and marketed security products, mostly in the area of SIEM and log management. I helped organizations with security and PCI DSS strategy. I advised security vendor management on compliance strategy for their products. I have spoken at clients’ events and have written more whitepapers than I care to admit… and did a lot of other fun things!

It was fun and I loved it – and, as my clients can attest, I was good at it. I was also busier than I thought I’d be, and occasionally busier than I wanted to be. However, at some point I started to feel that I needed another step up. And so I am making that step now!

In accordance with my future employer’s policy, I have resigned from the Advisory Boards of Dasient, Securonix, nexTier Networks, Savant Protection, eGestalt, and Rapid.IO. Good luck to all of you!

In all likelihood, I will eventually resurface at the Gartner blogs – please look for me there. And finally, those who love my personal blogging (all 4007 of you as of today), don’t despair – I will still occasionally blog here on non-infosec subjects: think good books, laser weapons, hypnosis, skiing, travel and my other weird hobbies.

Finally, I want to give very special thanks to Lee Kushner for his super-valuable career counseling that helped me make this difficult career choice.


On SIEM Services

Executive summary: you need to procure services when you buy a SIEM tool; if you don’t, you’ll be sorry later.

Even if you are amazingly intelligent and have extensive SIEM experience – see above. Even if you saw a successful SIEM project that didn’t include vendor or 3rd party services with your very eyes – see above. Even if your SIEM vendor tells you “you don’t need services” – see above. See above! See above!! See above!!!


Let’s analyze this “SIEM services paradox.” A lot of organizations – way too many, in fact – balk at the need to procure related services before, during and after their SIEM purchase. The thinking often goes like this: we need a SIEM and this box <points at the appliance still in the box> is a SIEM. That’s all we need. What services? Why services? Huh?

In reality – and this is what I sometimes call the “secret to SIEM magic” – that box is not a SIEM. That box, when racked and connected to your network, is STILL not quite a SIEM. Only when you “operationalize” it (see picture) can you say that you have a Security Information and Event Management (SIEM) capability in your organization and that you do “real-time” security monitoring.

Now, be honest: do you know how to deploy a SIEM tool and then figure out the shortest path to its operational success? Probably not… thus the need for services/consultants who will work WITH you to make it a reality by arriving at the best possible way of benefitting from SIEM in your environment. Which use case gives you the best bang for the SIEM buck? Which one will show a “quick win” to your management? Which one is more likely to detect an attacker in your network?

When a SIEM vendor tries to sell you services, it is NOT vendor greed – but simply common sense. And if you say “no”, it is not “saving money” – but being stupid. SIEM success out of the box (while real, in some cases!) is a pale shadow of what a well-thought-through deployment looks like! My [broken] analogy is: buying a nice shiny Aston Martin and then only using it to commute to a train station 1 mile from home. Will it work? Yes. Is this a good investment and a good experience? Hell no!

So, no, SIEM is NOT useless without services, but it is unlikely to reach its full potential. Pitfalls to SIEM success are many, and navigating them requires help.

And, no, outsiders alone cannot do it. You will need to help them help you.

This also leads to the rise of managed or co-managed SIEM options (which are NOT MSSPs, BTW!) as more people realize that a) they need a SIEM and b) they cannot handle a SIEM. Future cloud SIEM will (when it emerges) try to tackle the same problem by being simpler to operate and thus simpler to operationalize.

Today, most SIEM vendors offer an extensive menu of services to go with a product, and there are also some smart third parties. Many services around SIEM can be organized as follows.

Pre-sale services examples:

  • Product selection help
  • Vendor differentiation analysis and shortlist definition
  • Regulation analysis and business cases review
  • Product strengths/weaknesses analysis
  • Product fit for type of project
  • Product fit for vertical / business type
  • RFP definition assistance
  • Current tools vs requirements gap analysis

Services offered during SIEM acquisition and deployment:

  • SIEM implementation
  • SIEM project planning
  • Proof-of-concept deployment management
  • Product testing in production environment
  • Data source integration and collection architecture
  • Default contents tuning

Post-sale, operational services:

  • SIEM analyst training
  • Performance tuning and capacity planning
  • SIEM project management
  • Custom content creation
  • Custom device integration
  • SOC building

Vendors and consulting firms offer other types of services as well, all the way up to “co-managed SIEM”, where a 3rd party firm manages your SIEM deployment for you. Will future SIEM work better out of the box? Yes, I think so. Will SIEM ever be as simple as a firewall? No, never: it is the inherent complexity of security monitoring that cannot be squeezed out even by creative engineering…

Enjoy ... as this was my final blog post on SIEM.


Saturday, July 30, 2011

Old Content Posted: Presentations, Documents, etc

In preparation for a career change (stand by for an announcement at midnight, July 31, 2011), I am posting A LOT of my old presentations and documents online for the community.

Look for such gems as my HITB 2010 keynote “Security Chasm”, a brief SIEM primer, “Making Log Data Useful”, as well as the most recent “Five Best and Five Worst SIEM Practices”.

Also look for a bunch of older documents on security, logging, SIEM, PCI DSS – including such gems as the Logging Haiku, a firewall logging primer, etc.


Friday, July 29, 2011

On Broken SIEM Deployments

Imagine you own a broken, dilapidated, failing crap of a SIEM deployment. What? Really… that, like, never happens, dude! SIEM is what makes unicorns shine and be happy all the time, right?

Well… mmm… no comment. In this post, I want to address one common #FAIL scenario: a SIEM that is failing because it was deployed with a goal of real-time security monitoring, all the while the company was nowhere near ready (not mature enough) to have any monitoring process and operations (criteria for it). On my log/SIEM maturity scale (presented here; also see this related post from Raffy), they are either in the ignorance phase or maybe the log collection phase.

And herein lies the problem: if you deployed one of the legacy SIEMs born in the 1990s that are not based on a solid log management platform, the tool will actually suck at the most fundamental level: log collection. The specific issue here is that most of these early tools were designed to selectively collect only what was deemed necessary for real-time security monitoring (vs all log data). In essence, you have a tool with monitoring features (that you don’t use) and with weak collection features (that you can use, but they are weak).

What to do? You have these options:

  1. Leave it to rot; you can always keep it just to boast to your friends (and PCI QSAs) that “ye own one of ‘em olde SIEMs”
  2. Blow it away and join the “SIEM doesn’t work” crowd – and maybe buy a simple log management tool later
  3. Deploy a log management tool to “undergird” your crappy SIEM; you have a choice of buying from the same SIEM vendor (if they have it) or a different vendor
  4. Build your own log management layer on syslog and open source tools

I have seen people take any of the above four. Personally, I have seen much more success with option #3 (buy log management) and not infrequently with #4 (build log management) – BTW, this deck might help you choose. You want to move your SIEM setup from the “get some logs – ignore all logs” model to “collect all/more logs – review some logs”, which is typically much better aligned with your level of maturity. And then grow, solve more problems with your SIEM and demonstrate “quick wins.” While you are at it, review some architecture choices discussed here.
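If you do go the option #4 route, the core of a homegrown collection layer is tiny: parse syslog lines and bucket them for archiving. Here is a minimal sketch in Python; the regex is an illustration of the classic RFC3164 shape only (real syslog feeds vary wildly, so treat every pattern and field position here as an assumption to verify against your own logs):

```python
import re
from collections import defaultdict

# Rough RFC3164 shape: "<PRI>Mmm dd hh:mm:ss host program[pid]: message".
# This pattern is an illustration only; real feeds will need tweaking.
SYSLOG_RE = re.compile(
    r'^(?:<\d+>)?'                                   # optional priority tag
    r'(?P<ts>\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2})\s'  # timestamp
    r'(?P<host>\S+)\s'                               # originating host
    r'(?P<prog>[^:\[\s]+)(?:\[\d+\])?:\s?'           # program[pid]:
    r'(?P<msg>.*)$'                                  # free-form message
)

def parse_line(line):
    """Split one syslog line into (timestamp, host, program, message)."""
    m = SYSLOG_RE.match(line.rstrip('\n'))
    if m is None:
        return None  # unparsable lines go to a catch-all bucket
    return m.group('ts'), m.group('host'), m.group('prog'), m.group('msg')

def bucket_by_host(lines):
    """Group raw lines by originating host, one bucket per archive file."""
    buckets = defaultdict(list)
    for line in lines:
        parsed = parse_line(line)
        host = parsed[1] if parsed else '_unparsed'
        buckets[host].append(line)
    return buckets
```

Write each bucket out to a per-host file daily and let logrotate handle retention; that gives you the “collect all logs” half, with review layered on top later.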

Enjoy …while it lasts.


Thursday, July 28, 2011

Got A Pile of Logs from an Incident: What to Do?

As I am going through my backlog of topics I wanted to blog about (but didn’t have time for over the last 4-6 months), this is the one I really wanted to explore. Here is the scenario:


  1. Something blows up, hits the fan, starts to smell bad, <insert your favorite incident metaphor> … either in your IT environment or at one of your clients’
  2. Logs (mostly) and other evidence are taken from all the components of the affected system and packaged for offline analysis
  3. You get a nice 10MB-10GB pile of juicy log data – and they want “answers”
  4. What do you do FIRST? With what tools?

Let’s explore this situation. I know most of you would say “just pile ‘em into splunk” and, of course, I will do that. However, that is not the full story. As I pointed out in this 2007 blog post (“Do You Enjoy Searching?”), to succeed with search you need to know what to search for. At this point in our incident investigation, we actually don’t! Meanwhile, any volume of log data beyond a few megabytes makes the “trial and error” approach of searching for common clues fairly ineffective.

If you received any hints with the log pile (“I think the user ‘jsmith’ did it” or “it seems like IP was involved”), then you can search for this (and then branch out to co-occurring and related issues and drill down as needed), but then your investigation will suffer from the “tunnel vision” of only seeing the initially reported issue – and that is, obviously, a bad idea.

Let’s take a step back and think: what do we want here? What is our problem? We want a way to explore ALL the logs in a pile, across log types, across devices, across all time, AND then also following a timeline of events. In other words, we ain’t in “searchland” here, buddy…

If you have an enterprise SIEM sitting around (and one with well-engineered support for diverse historical log imports – which is NOT a certainty, BTW), you should definitely load the logs there as well. I like this approach since you can then run cross-device summary reports over the entire set, slice the set in various ways (type of log source, log source instance, type of log entry – categorized, time period filter, time trend, etc), and use data visualization tools (treemaps, trend lines, link maps, and other advanced visuals on parsed, normalized and categorized data) to get a big-picture view of the pile.

Looking at the open source log tools, does anything look promising for the task? OSSIM can do the trick (even though their historical log import is not my favorite), but nothing else does. In some cases, I used Sawmill (free trial) for my “big picture first look”, but it is not cross-device and only shows reports for each log type individually. If I were feeling really adventurous (and was on hourly billing), I could actually send all the logs via a syslog streamer into OSSEC (in order to see which log entries the tool will flag as interesting/alertable), but this is not really something I’d enjoy doing. I am almost tempted to say that you can use something like afterglow, but it relies on parsed data that you’d still need to cook somehow (such as, again, by using a SIEM). Log2timeline is useful, but only for one dimension – and the one that splunk actually addresses pretty well already.

To generalize, you need (a) a search tool and (b) an exploration tool. The search tool should help you quickly answer SPECIFIC questions. The exploration tool should use data to generate “hints” on WHAT questions you should start asking…
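As a rough illustration of what such an “exploration tool” could look like, here is a small Python sketch that summarizes a pile by program and by hour – exactly the kind of hint generator described above. It assumes classic syslog-style lines with the program in the fifth whitespace-separated field; those positions are an assumption, not a universal truth, and would need adjusting for other formats:

```python
import re
from collections import Counter

TS_RE = re.compile(r'^(\w{3}\s+\d{1,2})\s(\d{2}):')  # e.g. "Jul 28 14:..."

def first_look(lines, top=5):
    """First-look summary of a log pile: who logs the most, and when."""
    by_prog, by_hour = Counter(), Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 5:  # "Mmm dd hh:mm:ss host prog..." layout assumed
            by_prog[parts[4].split('[')[0].rstrip(':')] += 1
        m = TS_RE.match(line)
        if m:
            by_hour['%s %s:00' % (m.group(1), m.group(2))] += 1
    return by_prog.most_common(top), by_hour.most_common(top)
```

A spike of one program in one hour is precisely the kind of hint that tells you what to type into the search tool next.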

Wednesday, July 27, 2011

Top 10 Criteria for a SIEM?

OK, this WILL be taken the wrong way! I spent years whining about how use cases and your requirements should be THE MAIN thing driving your SIEM purchase. And suddenly Anton shows up with a simple “Top 10 list”, so… blame it on the cognac.

This list is AN EXAMPLE. SAMPLE. ILLUSTRATION. It is here FOR FUN. If you use it to buy a SIEM for your organization, your girlfriend will sleep with your plumber. All sorts of bad things can and likely will happen to you and/or your dog – and even your pet squirrel might go nuts. Please look up the word “EXAMPLE” in the dictionary before proceeding!

On top of this, this list was built with some underlying assumptions which I am not at liberty to disclose. Think large, maybe think SOC, think complex environment, etc. Obviously, an environment with its own peculiarities… just like yours.

With that out of the way, Top 10 Criteria for Choosing a SIEM … EXAMPLE!

1. User interface and analyst experience in general: ease of performing common tasks, streamlined workflows, a small number of clicks to critical functions, and efficient, quick information lookups (including external information) when needed during an investigation

2. Correlation: correlation engine performance, ease of rule creation and modification, canned rule content, cross-device correlation based on normalized/categorized data; additional analytics methods including analysis of stored/historical log data; ability to test rules before production deployment

3. Log source coverage: full integration of most (better: all) needed log sources before operational deployment, detailed parsing and normalization of all fields needed for the analysts’ work; coverage of device, OS and application logs; wide use of real-time log collection methods, even at a cost of using agents

4. Dashboards and analyst views: availability of required analyst views, flexibility and customization, drilldown capability to see additional details, ease of modification and tuning, real-time operation (not periodic polling)

5. Reporting: report performance, visual clarity, ease of modification and default/canned report content, ability to create custom reports on all data in a flexible manner without knowing the SIEM product internal structures and other esoterica

6. Search and query: fast (seconds) performance of searches and queries when investigating an incident; access to raw log data via an efficient search command, tied to the main interface

7. Escalation, shift and analyst collaboration support: a system to manage collaborative investigations of security issues, take notes, add details and review/approve the workflow; likely this requires an advanced case management / ticketing system to be built in

8. Scalability: ability to gradually expand storage on demand as the environment grows; this applies both to parsed/normalized data storage and to raw log storage

9. Normalization: complete log categorization and normalization for cross-device correlation that enables the analysts to “cross-train” and not “device-train” before using the SIEM well

10. New log source integration technology and process: ability to either quickly integrate new log sources or have the vendor do it promptly (days to a few weeks) upon request

Got any comments?

If not, well, enjoy it … while it lasts.



Tuesday, July 26, 2011

NIST EMAP Workshop–Aug 2011

A lot of good work on logging standards as well as standards for the “surrounding areas” (correlation rules, parsing rules, etc) will happen at this first-ever NIST workshop on EMAP.

Please mark your calendars to save the date for an EMAP Developer Workshop to be held August 29-30, 2011 at the NIST Campus in Gaithersburg, Maryland.  We are still formalizing the agenda, but topics to be covered will include:

  • Discussion of target use cases and requirements as identified by the EMAP working group
  • CEE overview and in-depth discussion of current issues
  • Discussion of EMAP component specifications and issues/questions for the community
  • Discussion of the EMAP roadmap and connections with other efforts within security automation

We are in the process of standing up a registration page and creating the agenda. A teleconference line will be provided for those who cannot attend in person. More details to come in the near future; we hope to see you there.

If you are dealing with logs and SIEM (such as building, or even just using, the tools) and care about standards, please consider attending – but only if you will contribute!


Monday, July 25, 2011

Speaking at Catalyst 2011 in San Diego Tomorrow

Just FYI, I am speaking at the Gartner Catalyst 2011 event in San Diego tomorrow. The topic is “Five Best and Five Worst Practices for SIEM.”

“Implementing SIEM sounds straightforward, but reality sometimes begs to differ. In this session, Dr. Anton Chuvakin will share the five best and worst practices for implementing SIEM as part of security monitoring and intelligence. Understanding how to avoid pitfalls and create a successful SIEM implementation will help maximize security and compliance value, and avoid costly obstacles, inefficiencies, and risks.”

Time: Tuesday, 26 July 2011
02:45 PM to 03:20 PM

Location:   Hilton San Diego Bayfront
1 Park Boulevard
San Diego, CA 92101

If you are around, come see me here.

Log Management at $0 and 1hr/week?

As I was drinking cognac on the upper deck of a 747, flying TPE-SFO back from a client meeting, the following idea crossed my mind: CAN one REALLY do a decent job with log management (including log review) if their budget is $0 AND their “time budget” is 1 hour/week? I got asked that when I was teaching my SANS SEC434 class a few months ago and the idea stuck in my head – and now cognac, courtesy of China Airlines, helped stimulate it into a full blog post.

So, a $0 budget points to using open-source, free tools (duh!), but 1hr/week points in exactly the opposite direction: a commercial or even outsourced model.

The only slightly plausible way that I came up with is:

  1. Spend your 1st hour building a syslog server; it can be done, especially if starting from an old Linux box that you found in the basement (at $0); don’t forget logrotate or an equivalent
  2. Spend the next few weeks (i.e. hours) configuring various Unix, Linux and network devices (essentially, all syslog log sources) to log to it
  3. Consider deploying Snare on a few Windows boxes (if needed); it would likely be easier than doing a remote pull – too much tuning might be needed
  4. Next, drop a default OSSEC install on your log server and – gasp! – enable all alerts
  5. Spend the next few hours (in the next few weeks) turning off the ones that are too numerous, irrelevant or don’t trigger any action in your environment
  6. If your log volume fits within the free splunk license size (500MB/day), also spend an hour deploying splunk on your log server and have it index all gathered logs
  7. Now you’d be spending your “one log hour each week” on reviewing alerts and (if installed) digging in splunk for additional details
  8. Congrats! $0 and 1hr/week gave you a semblance of log management and even monitoring…
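The “one log hour” in step 7 can itself be partly scripted. A toy Python sketch along these lines pulls out alert-worthy lines and ranks the noisiest hosts; the keyword list is a made-up starting point (an assumption, not a recommendation) that you would tune per environment, as step 5 suggests:

```python
from collections import Counter

# Crude "interesting" keywords; tune per environment, as in step 5 above.
KEYWORDS = ('fail', 'error', 'denied', 'refused', 'invalid')

def weekly_review(lines, top=10):
    """One-hour review helper: pull alert-worthy lines, rank noisy hosts."""
    hits, hosts = [], Counter()
    for line in lines:
        parts = line.split()
        if len(parts) >= 4:  # host is the 4th field in classic syslog lines
            hosts[parts[3]] += 1
        if any(k in line.lower() for k in KEYWORDS):
            hits.append(line.rstrip('\n'))
    return hits, hosts.most_common(top)
```

Cron it weekly against the syslog server’s files, read the output with coffee, and you have spent well under your hour.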

What do you think? It just might work for organizations with severe time AND money constraints.

Enjoy the post … while it lasts.

BTW, on a completely unrelated note:  do you think EVERY organization above a certain size NEEDS a SIEM? Or WILL NEED a SIEM in the future?

Monday, July 18, 2011

Job: Director of Product Marketing at SIEM Vendor

I am posting this as a small favor to my friends at NitroSecurity.


The Director, Product Marketing is responsible for developing, planning and executing externally-focused product marketing strategies, plans & programs for the industry-leading NitroView SIEM, log management, database monitoring, application monitoring and IDS/IPS solution. They will research & understand security market trends by working with industry analysts and engaging prospects & customers, closely monitor & analyze competitor offerings, and develop value propositions, product positioning and messages for enterprise and government markets worldwide. They will drive and lead all new product launch and introduction activities, and support on-going product and solution campaigns and programs.

Candidates in metro Boston, metro Washington DC or open to virtual, home office arrangements are welcome to apply to


a. Work closely with Product Management, Engineering and Operations to fully understand current and planned technologies, products and solutions

b. Conduct competitive research and provide analysis on competitive advantages & competitor claims relative to customer needs.

c. Determine product positioning & product messaging and create & manage a broad range of product and solution collateral, on-line content, white papers, blogs & sales tools

d. Develop and deliver new product training to field sales, systems engineers and channel partner and technology partner organizations

e. Key company spokesperson, presenting to prospects, customers, partners, press and analysts in person, via webcasts and at industry conferences.

Experience and Qualifications:
a. 10+ years of product marketing experience in security/networking assignments
b. 5+ years security industry experience

c. Excellent speaking, writing and presentation skills
d. Strong analytical skills, including business, markets and competition

e. Team player with proven success in high growth environments

f. Technical undergraduate degree preferred or equivalent, MBA or equivalent advanced degree preferred

Apply via:

Monday, July 04, 2011

PCI in the Cloud Class July 8: Location Finalized

Just a quick announcement about my “PCI in the cloud” class that I am teaching this week. The location has been finalized:
Location (map):
Ariba Silicon Valley Office
Sequoia Conference Room

910 Hermosa Court,
Sunnyvale, CA

(please use the main entrance and tell the receptionist that you are there for the CSA PCI class; lunch and coffee will be provided)
Date: Friday July 8, 2011 at 9AM
There are still, I think, 2-3 seats left at $20/seat (beta price! must provide class feedback!!), so go and register here.

UPDATE: 7/4/2011 5:50PM Sorry, sold out! I will check with the host tomorrow about the room size and there is a slight chance that we can fit more than 25 people, but it is not a certainty.


Friday, July 01, 2011

Monthly Blog Round-Up – June 2011

Blogs are "stateless" and people often pay attention only to what they see today. Thus a lot of useful security reading material gets lost. These monthly round-ups are my way of reminding people about interesting and useful blog content. If you are “too busy to read the blogs,” at least read these.

So, here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

  1. “PCI DSS in the Cloud … By the Council” is my quick review of the recent PCI DSS guidance on virtualization, focusing on the cloud computing guidance.
  2. “On Choosing SIEM” tops the charts again this month. The post is about the least wrong way of choosing a SIEM tool – as well as why the right way is so unpopular. A related read is “SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?”; check it out as well. While reading this, also check this presentation.
  3. “Simple Log Review Checklist Released!” is still one of the most popular posts on my blog. Grab the log review checklist here, if you have not done so already. It is perfect to hand out to junior sysadmins who are just starting out with logs. A related post, “UPDATED Free Log Management Tools”, is also still on top – it is a repost of my free log tools list to the blog.
  4. “Algorithmic SIEM “Correlation” Is Back?” is a post that I never thought would make it to my monthly top as it covers a bit of SIEM esoterica. Surprise!
  5. “NIST EMAP Out” is my quick announcement/summary of the NIST EMAP standards effort, the log/event “brother” of SCAP and an extension of the CEE work.

Also, as a tradition, I am thanking my top 3 referrers this month (those who are people, not organizations). So, thanks a lot to the following people whose blogs sent the most visitors to my blog:

  1. Anonymous “PCI Guru”
  2. Dmitry Orlov
  3. Lenny Zeltser

Also see my past annual “Top Posts” (2007, 2008, 2009, 2010). Next, see you in August for the next monthly top list.


Dr Anton Chuvakin