Monday, March 28, 2011

My “Recent” Security Writing and Speaking

Now that I am flooded with work (with more on the way), I am eternally procrastinating on my “Fun Security Reading” blog posts. So, let me at least try to blog about what I was WRITING if I don’t have time to blog about what I was reading (Google Reader shared item feed). The list is loosely sorted by time:

My writing:

  1. HIPAA Logging HOWTO, Part 1
  2. HIPAA Logging HOWTO, Part 2
  3. PCI Security: Q&A with Anton Chuvakin, PCI Compliance Expert
  4. PCI Security: Q&A with Anton Chuvakin, PCI Compliance Expert, PART 2
  5. ASSESSMENT SUCCESS: PCI DSS STANDARDS AND SECURE DATA STORAGE
  6. How to Do Application Logging Right (with Gunnar Peterson)
  7. FISMA Logging HowTo, Part 1
  8. Logging for FISMA, Part 2: Detailed FISMA logging guidance
  9. Log management software can aid data security, boost IT accountability
  10. Log review for incident response, Part 1
  11. A Pragmatic Approach to SIEM: Buy for Compliance, Use for Security
  12. Log review for incident response, Part 2
  13. PCI DSS 2.0 Fun Facts
  14. Logs vs Bots and Malware Today
  15. PCI DSS Today and Tomorrow: Logging is the Key
  16. Logs for Insider Abuse Investigations

Presentations:

  1. “Log Standards and Future Trends” (BrightTalk)
  2. “What PCI DSS Taught Us About Security” (BrightTalk)
  3. “You Got That SIEM. Now What Do You Do?” (BayThreat 2010)
  4. “Achieve PCI Compliance and Ensure Security in a Data Deluge” (Focus.com webcast)
  5. “Address Network Security & Dramatically Reduce PCI DSS Scope with Gateway Tokenization” (Intel – NRF (!) webcast)
  6. “Proactive Compliance for new PCI-DSS 2.0” (SANS webcast)
  7. “Using Logs for Breach Investigations and Incident Response” (BrightTalk webcast) and presentation
  8. “PCI Compliance: Tips, Tricks & Emerging Technologies” (BankInfoSec webcast)
  9. You can always see more on my Slideshare page.

Audio/podcasts/etc:

  1. Cloudchasers podcast “Cloud security and compliance: it’s all about the logs – May 20, 2010” (mp3)
  2. Cloudchasers podcast “IT Security industry consolidation and the cloud – Sept 16, 2010” (mp3)
  3. Logs, Clouds and Open Source, Oh My!
  4. ETM podcast “Insight into SIEM” (mp3)
  5. McAfee podcast about retail security (mp3)
  6. …and, obviously, our own log podcast LogChat

Miscellaneous:

  1. “Scaling the Security Chasm” is not by me, but it is written based on my HITB keynote last year
  2. “How to handle PCI DSS requirements for log management in the cloud” is also not by me, but has significant input from me

BTW, if you’d like to see what I’ve been reading, subscribe to my Google Reader shared item feed and Like feed/Buzz.

And, no, Twitter didn’t kill blogging, but it sure looks like Twitter is intent on killing Twitter :)

P.S. Posted by a scheduler – please don’t laugh, but I am in Siberia now :) Responses to comments will happen when I am back.


Friday, March 25, 2011

UPDATED Free Log Management Tools

FYI, I have updated my list of free log analysis and log management tools on my consulting site. Here it is, reposted:

Version 1.3 updated 3/8/2011 (original location)
This page lists a few popular free open-source log management and log analysis tools. The page is a supplement to "Critical Log Review Checklist for Security Incidents" that can be found here or as PDF or DOC (feel free to modify it for your own purposes or for internal distribution - but please keep the attribution). The log cheat sheet presents a checklist for reviewing critical system, network and security logs when responding to a security incident. It can also be used for routine periodic log review. It was authored by Dr. Anton Chuvakin and Lenny Zeltser.
The open source log management tools are:
    1. OSSEC (ossec.net) is an open source tool for analysis of real-time log data from Unix systems, Windows servers and network devices. It includes a set of useful default alerting rules as well as a web-based graphical user interface. This is THE tool to use if you are starting up your log review program. It even has a book written about it.
    2. Snare agent (intersectalliance.com/projects/index.html) and ProjectLasso remote collector (sourceforge.net/projects/lassolog) are used to convert Windows Event Logs into syslog, a key component of any log management infrastructure today (at least until Vista/W7 log aggregation tools become mainstream); see the forwarding sketch right after this list.
    3. syslog-ng (balabit.com/network-security/syslog-ng/) is a replacement and improvement of the classic syslog service; it also has a Windows version that can be used the same way as Snare.
    4. rsyslog (rsyslog.com) is another notable replacement and improvement of the syslog service that uses the traditional (rather than ng-style) format for syslog.conf configuration files. No Windows version, but it has an associated front-end called phpLogCon.
    5. Among the somewhat dated tools, Logwatch (logwatch.org), Lire (logreport.org) and LogSurfer (crypt.gen.nz/logsurfer) can all be used to summarize logs into readable reports.
    6. sec (simple-evcorr.sourceforge.net) can be used for correlating logs, even though most people will likely find OSSEC correlation a bit easier to use
    7. LogHound (ristov.users.sourceforge.net/loghound) and slct (ristov.users.sourceforge.net/slct) are more "research-grade" tools that are still very useful for going thru a large pool of barely-structured log data; a toy illustration of this idea appears after the honorable mentions below.
    8. Log2timeline (log2timeline.net/) is a useful tool for investigative review of logs; it can create a timeline view out of raw log data.
    9. LogZilla (aka php-syslog-ng) (code.google.com/p/php-syslog-ng) is a simple PHP-based visual front-end for a syslog server to do searches, reports, etc
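
Since items 2-4 above all revolve around getting logs into syslog, here is a minimal sketch of that same job in Python, using only the standard library. This is an illustration, not a replacement for Snare or the syslog-ng agent; the collector host name and the event content below are made up.

    import logging
    import logging.handlers

    # Forward a (made-up) Windows-style event record to a central syslog
    # collector - the same basic job Snare or the syslog-ng agent performs.
    # "loghost.example.com" is a placeholder for a real collector address.
    logger = logging.getLogger("eventlog-forwarder")
    logger.setLevel(logging.INFO)

    handler = logging.handlers.SysLogHandler(
        address=("loghost.example.com", 514),  # classic syslog over UDP/514
        facility=logging.handlers.SysLogHandler.LOG_AUTH,
    )
    handler.setFormatter(logging.Formatter("winhost1 MSWinEventLog: %(message)s"))
    logger.addHandler(handler)

    # A fake Event ID 4625 (failed logon) record, for illustration only
    logger.warning("EventID=4625 User=jdoe Source=10.1.2.3 Status=0xC000006D")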
      The next list is the "honorable mentions" list, which includes logging tools that don't quite fit the definition above:
      • Splunk is neither free nor open source, but it has a free version usable for searching up to 500MB of log data per day - think of it as a smart search engine for logs. Splunk includes a tool for extracting parameters out of log data
      • Offering both fast index searches and parsed data reports, Novell Sentinel Log Manager 25 is not open source, but can be used for free forever as long as your log data volume does not exceed 25 log messages/second (25 EPS). Unlike Splunk above, it includes log data parsing for select log formats and thus can be used for running reports out of the box, not just searching
      • Q1Labs is also neither free nor open source, but it has a free version usable for managing up to 50 EPS (roughly 2GB/day). It can be downloaded as a virtual appliance
      • OSSIM is not just for logs and also includes OSSEC; it is an open source SIEM tool and can be used much the same way as commercial Security Information and Event Management tools are used (SIEM use cases)
      • Microsoft Log Parser is a handy free tool to cut thru various Windows logs, not just Windows Event Logs. A somewhat similar tool for Windows Event log analysis is Mandiant Highlighter (mandiant.com/products/free_software/highlighter)
      • Sguil is not a log analysis tool, but a network security monitoring (NSM) tool; however, it uses logs in its analysis.
      • Loggly cloud logging service now offers free developer accounts (at loggly.com/signup) for their cloud log management service. The volume limitation is 200MB/day and retention time limitation is 7 days. If you'd like to collect and search your logs without running any software, this is for you.
      For a list of commercial log management tools, go to the Security Scoreboard site. A few of the commercial tools offer free trials for up to 30 days or longer.
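
      To make the "research-grade" log mining idea behind LogHound and slct (item 7 above) concrete, here is a toy Python sketch of the underlying trick: collapse the variable parts of each line so that lines sharing the same constant "skeleton" cluster together. This is a loose illustration of the idea, not those tools' actual algorithms, and the sample lines are made up.

          import re
          from collections import Counter

          def to_template(line):
              """Collapse variable tokens so similar lines share one template."""
              line = re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "<IP>", line)  # IPv4s
              line = re.sub(r"\b\d+\b", "<NUM>", line)                     # numbers
              return line

          # Made-up sample standing in for a large pool of barely-structured logs
          lines = [
              "Failed password for root from 10.0.0.5 port 22312",
              "Failed password for admin from 10.0.0.9 port 40120",
              "Accepted password for anton from 192.168.1.7 port 51000",
          ]

          # The two "Failed password" lines collapse into one template
          for template, count in Counter(map(to_template, lines)).most_common():
              print(count, template)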

      P.S. I’d love to finally test GrayLog in my lab since it looks very promising, but – sorry – I was not able to get it to work :( Too much Ruby and Java for my Linux box… BTW, I got a couple more fun new tools that I plan to test and then possibly add to this list.

      P.P.S. Comment response will be slow – I am away from computers.


      Tuesday, March 22, 2011

      Log Forensics and “Original” Events

      I did this fun presentation on log forensics (here) and the question of “original” (aka “native”, “raw”, “unmodified”) log events came up again. Since the early days of my involvement in SIEM and log management, this question generated a lot of delusions and just sheer idiocy. A lot of people spout stuff like “you need original logs in court” without having any knowledge about either logs or court – or forensics in general. Or, as I sometimes feel, even computers in general. 

      So, WTH is an “original” event? Let’s explore this a bit. 

      For example, let’s take Windows 7 Event Logs. Before you read further, without focusing too much on the real meaning of “original”, think what you’d consider an original event log record …

      Is this original – the EVTX file itself:

      [screenshot]

      Is this – an XML view via Event Viewer on the computer where the log is produced:

      [screenshot]

      Is this – a “friendly” view in the same Event Viewer on the same “original” computer:

      [screenshot]

      As you might know, the above view is actually enriched, i.e. it has new information added compared to the EVTX file. Does that break its originality?
      What if the EVTX file is copied to another computer and then opened in an Event Viewer? It might look a bit different due to various ID dereference operations, and it might enrich the contents with slightly different information.

      How about this – the same log exported to CSV on another computer. Is this still original?

      [screenshot]

      And what about one that is converted to syslog in a similar fashion? What if, oh horror, TABs are replaced with spaces? :)

      So, what’s the lesson here? Obsession with “original”, “native”, “raw” logs is just not a useful pursuit, and it dead-ends pretty quickly.

      Instead, you need a clearly understood and documented path for all event records that unambiguously tracks all changes to them (removals, addition of details, modifications of contents, new headers, etc), not a fake and impossible quest for “originality.” For additional reference on trusting logs, check out what Eric Fitzgerald wrote about log trust back in the days of his ownership of the Event Log.
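
      To make the point concrete, here is a tiny Python sketch (all event content is made up): two byte-wise different renderings of the same logical event hash differently, yet a documented transformation shows the fields survived intact. It is the documented path, not byte identity, that carries the trust.

          import hashlib
          import re

          # Two made-up renderings of the same logical Windows event
          xml_view = "<Event><EventID>4624</EventID><User>jdoe</User></Event>"
          csv_view = "4624,jdoe"

          # Byte-wise, the "same" event is not the same at all...
          print(hashlib.sha256(xml_view.encode()).hexdigest()[:16])
          print(hashlib.sha256(csv_view.encode()).hexdigest()[:16])

          # ...but a documented extraction step proves the content matches
          def fields_from_xml(s):
              return (re.search(r"<EventID>(\d+)</EventID>", s).group(1),
                      re.search(r"<User>(\w+)</User>", s).group(1))

          assert fields_from_xml(xml_view) == tuple(csv_view.split(","))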


      Monday, March 14, 2011

      SecurityBSides San Francisco at RSA 2011 Presentation

      My account of RSA 2011 cannot be complete without – yes! – SecurityBSides San Francisco. I was holding this post hoping to include links to videos, but – despite the power of Google – I was not able to figure out where AND whether the videos are posted. So, you have to enjoy my new fun SIEM presentation (below) without my voice and an image of me pointing at the sky :)

      Enjoy!


      Monday, March 07, 2011

      SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?

      One of the ugliest, painfulest, saddest issues with SIEM is resourcing. Yes, that SIEM appliance might set us back $75,000 in hard earned security budget dollars, but how much more will we have to spend in the next 3 years deploying, integrating, using, tuning, cursing, expanding the thing? How much manpower will the new operational procedures (example) cost us? And if we actually build a SOC or “a virtual SOC”, how much will we have to spend on an ongoing basis to get the value and benefits? In fact, how much will the coffee cost if we have to work 20 hours in a row recovering that crashed SIEM database partition?

      These and other questions are super-important for every SIEM and log management project. And the time has come for me to reveal my secret knowledge of SIEM resourcing. OK, that’s a joke – it is not a secret, just a bunch of things that are often unpleasant for many SIEM buyers, users and sellers to hear.

      So:

      NEWSFLASH! SIEM costs money. “Free” SIEM (example) costs money too, BTW. Let’s try to delve into what those costs are. I will be not-quite-scientific in regards to real “hard money” costs (e.g. software license purchasing) and “soft” costs (e.g. staff time), but I will try to clearly mark each kind of SIEM cost below.

      First, assumptions and limitations:

      • This is NOT “SOC staffing”, but simply “running a SIEM” staffing. SOC implies more processes, more tasks and a broader mission.
      • Assumes an in-sourced, traditional “buy and run” SIEM; an outsourced or co-managed/co-sourced cost model would look different.

      Here is the rough cost model for some of the most common SIEM cost categories:

      1. Initial “hard” costs 
        1. SIEM license costs: base price + per-user, per-node, per-EPS, per-CPU (and per-CPU-core), etc costs – however your favorite vendor charges
        2. For software SIEM, also hardware, OS, database costs for as many servers as you need
        3. Mandatory 3rd party software license costs, if any (occasionally agents, reporting tools, etc)
        4. If chosen, vendor or consultant deployment services costs. If not chosen, staff time for deployment will pop out in soft costs below!
        5. Vendor training or 3rd party training on logs, log management,  SIEM, etc
        6. Additional external storage (in most cases)
      2. SIEM ongoing, operating “hard” costs
        1. Various SIEM vendor services: support (typically mandatory), ongoing professional services costs
        2. Personnel to operate a SIEM: from part of an FTE (very small scale, few use cases for a SIEM) to 1 FTE (small appliance deployments) to many FTEs in various roles (+ much more for SOC staffing if live monitoring is implemented). 0 FTE for SIEM = SIEM project FAIL with 100.00% probability.
      3. Periodic or occasional “hard” costs
        1. Various SIEM vendor services: professional services, custom development work for device integration (some of these may go into soft costs if done internally – for advanced organizations or those experienced with SIEM already)
        2. Periodic staff training on SIEM operation and tuning
        3. Specialty staffing: DBA, sysadmins, node sysadmins, in-house developers for custom connectors, Crystal Reports administrator (yuck!), etc – some of these might go into “soft” costs if “poaching” existing personnel time
        4. Deployment expansion costs: same as initial costs, but for additional systems, software, hardware, etc as you grow; these sneak up really fast if SIEM is licensed using many dimensions such as user+CPU+node+server+something else.
        5. External storage expansion costs – yes, you will buy more storage if your volume grows, and log retention time stays the same (e.g. 1 year for PCI DSS)

      On the other hand, here are some of the “soft” costs, such as time expenditures by existing resources:

      1. Initial “soft” costs 
        1. Deployment time for the SIEM project – allocate more time if deploying purely using internal personnel, not vendor or consultant
        2. Log source configuration and integration – this will likely take way more than simply installing the tool. This is what makes SIEM deployment projects go for months in complex, distributed organizations with many silos.
        3. Initial tuning, content creation  and adapting the tool to your environment  (however lightweight it may be)
        4. Training and other staff time costs to jumpstart the operation (Congratulations! You bought a SIEM. Now you need to operate it…)
      2. SIEM ongoing, operating “soft” costs
        1. Report review and other ongoing monitoring tasks – from 24/7 to daily to weekly
        2. Alert response and escalation; SIEM implies correlation and automated alerting
        3. Other “using SIEM” tasks such as reviewing the dashboards
        4. Uptime maintenance tasks i.e. caring for your SIEM as well as storage – backups, updates, minor troubleshooting, etc
      3. Periodic or occasional “soft” costs
        1. SIEM rule tuning, reports creation, dashboard customization, new log source integration, other ongoing SIEM tasks
        2. Periodic training and related staff time costs
        3. Expansion: same as initial soft costs

      As was suggested by a discussion on SIEMusers.org (shhh…the site is not ready for launch yet), it is useful to separate soft costs  that are mandatory FOR SIEM operation from those which commonly arise FROM SIEM operation. The most obvious example is incident response due to increased awareness of network and system activities, delivered by your SIEM.

      “Soft” costs that commonly result from SIEM:
      1. Added cost of incident response: more incidents are likely to be detected
      2. Resulting incident remediation costs and even cost of new technologies deployed for preventing the discovered issues
      3. Other department personnel time for dealing with issues revealed by SIEM – the soft costs do leak out of the security department to IT and even beyond (legal, HR, etc).

      Anything big I missed that you experienced? BTW, in my experience, I have seen the total cost of a SIEM project (hard + soft) range from 10% of SIEM license costs (for shelfware SIEM “deployments”) to a mind-boggling 20x of license cost.
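
      To get a back-of-envelope feel for how fast the soft side comes to dominate, here is a minimal Python sketch of a 3-year TCO estimate. Every figure below is hypothetical and only illustrates the cost categories above; plug in your own numbers.

          # All figures are hypothetical, for illustration only
          license_cost = 75000        # initial SIEM license ("hard")
          deploy_services = 25000     # vendor/consultant deployment help ("hard")
          annual_support = 0.20 * license_cost   # assumed ~20% of license/year
          fte_cost = 120000           # assumed fully-loaded cost of one FTE/year
          fte_share = 1.0             # FTEs spent running and using the SIEM ("soft")
          years = 3

          hard = license_cost + deploy_services + annual_support * years
          soft = fte_cost * fte_share * years
          total = hard + soft

          print("3-year hard costs: $%s" % format(hard, ",.0f"))
          print("3-year soft costs: $%s" % format(soft, ",.0f"))
          print("Total: $%s (%.1fx the license)" % (format(total, ",.0f"),
                                                    total / license_cost))

      Even with these mild assumptions, the total lands at several multiples of the license cost, which is consistent with the 10% to 20x range above.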

      P.S. Finally, if you want to really annoy Anton, mention “SIEM ROI.” If you do that, I will send you to Gal Shpantzer for a mandatory “why he avoids SIEM!” class :)


      Friday, March 04, 2011

      New Honeynet Project Challenge (#7): Forensic Analysis of a Compromised Server

      The plot? As usual:

      A Linux server was possibly compromised and a forensic analysis is required in order to understand what really happened. Hard disk dumps and memory snapshots of the machine are provided in order to solve the challenge.

      Are you up to the challenge? Here are the questions that need your answers:

      • What service and what account triggered the alert? (1pt)
      • What kind of system runs on targeted server? (OS, CPU, etc) (1pt)
      • What processes were running on targeted server? (2pts)
      • What are attackers IP and target IP addresses? (2pts)
      • What service was attacked? (1pt)
      • What attacks were launched against targeted server? (2pts)
      • What flaws or vulnerabilities did he exploit? (2pts)
      • Were the attacks successful? Did some fail? (2pts)
      • What did the attacker obtain with attacks? (2pts)
      • Did the attacker download files? Which ones? Give a quick analysis of those files. (3pts)
      • What can you say about the attacker? (Motivation, skills, etc) (2pts)
      • Do you think these attacks were automated? Why? (1pt)
      • What could have prevented the attacks? (2pts)

        Bonus question: From the memory image, can you say what network connections were opened and in which state?

      Get the information here (warning: even compressed, disk images are large) and start your investigation. As a reminder to beginners, be careful when handling untrusted files!
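
      Before you start carving, it is good practice to fix the evidence in place: record cryptographic hashes of the images as downloaded and work only on copies. A minimal Python sketch (the file names below are placeholders, not the actual challenge file names):

          import hashlib

          def sha256_of(path, chunk_size=1 << 20):
              """Hash a large evidence file without loading it all into memory."""
              digest = hashlib.sha256()
              with open(path, "rb") as f:
                  while True:
                      block = f.read(chunk_size)
                      if not block:
                          break
                      digest.update(block)
              return digest.hexdigest()

          # Placeholder names - substitute the real disk/memory image files
          for name in ["disk.img", "memory.img"]:
              print(name, sha256_of(name))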


      Thursday, March 03, 2011

      RSA 2011 PCI Council Interview

      Just like last year, I did this great interview with Bob Russo, the GM of the PCI Council. There is no audio recording; what follows below are my notes, reviewed by the Council. Italic emphasis is added by me for additional clarity.

      Q1. PCI DSS 2.0 is out. What do you think its impact is, so far?
      A:
      We are just entering the implementation phase, but it seems like there is no major impact yet; it is definitely too early to say what the impact will be.
      Using data discovery – merchants looking to confirm that PAN data does not exist outside of the defined PCI DSS scope – seems to be becoming more prominent, and this seems to be a direct result of PCI DSS 2.0. Accidental exposure of cardholder data is a known risk. Identifying where the data truly resides first, through a tool or a methodology, should aid organizations in their assessment efforts and ongoing security.
      By the way, despite moving to the longer three-year process, we can still update the standard in between via the errata mechanism [described here – added by A.C.] or using additional guidance produced by the Council and SIGs. For example, if there is a new threat, we can issue additional guidance on how to deal with it within the framework of PCI DSS.
      Q2. QSA assessment quality is said to be improving due to QSA QA. On the other hand, reports of many SAQs being “inaccurate” are fairly widespread. Is anything being done to improve SAQ quality at Level 2 and smaller merchants?
      A:
      Well, some merchants do “answer Yes to every question” – is that what you mean by inaccurate?! We see education as the answer to this. For example, there are plans for making the SAQ easier to fill in – think about a TurboTax-type model for the SAQ – a wizard process for answering the pertinent SAQ questions and for presenting the right questions to the merchant in a logical order.
      Education efforts can help a merchant understand that honest and accurate SAQs are for “their protection.” Everyone needs to include security in their daily process. The Council will seek to help by providing additional guidance on how to become more secure, comply with the Standards and how to validate that compliance. Some of this is being addressed with the new general Awareness Training we have launched, offering a high level overview of what PCI is and the role that every employee plays in keeping cardholder data secure.
      Q3. While we are on the SAQ theme, can anything be done to have more merchants stay compliant, not just get validated every year and then forget about PCI DSS until the next validation?
      A:
      Definitely, more education is needed and we are trying to fill that vacuum, like with the Awareness Trainings we have rolled out. For example, educating merchants that PCI DSS is about data security – not checkbox compliance - is a big focus. Merchants also need to be reminded that they need to get secure and compliant and stay secure and compliant. It requires ongoing vigilance. Unfortunately, some merchants think that “PCI DSS is about a questionnaire and a scan” and this mentality needs to be addressed by educating merchants about data security.
      Q4. Visa’s new EMV rules might make merchants in Europe and Asia care even less about payment data security. What do you think the impact of the new rules will be on PCI?
      A:
      It is too early to tell at this stage, as the rules were announced last week [first week of February 2011 – A.C.]. In essence though, this is a compliance or reporting issue. Nothing has changed for the Council or the standards. PCI DSS still remains the foundation for card security for all payment brands. Ecommerce merchants in those regions still must adhere to the PCI DSS even with the new rules. In essence, the new rules imply that the merchants do not need to continue to validate compliance; however, we understand that the merchants still have to become and stay compliant, and have proof of that, even before considering this program by that brand.
      As far as we know, acquirers still plan to get their merchants compliant and validated, so “nothing has changed” for them in the new VISA program. Also, according to public information on the new program, acquirers can still be fined for non-compliance under the new rules as well. This should continue to lead them to get their merchants PCI compliant to reduce the risk of the acquiring bank.
      It’s too early to tell what merchants think and how they will react to this.
      Q5. Will PCI DSS ever move away from the model where the merchants are either compliant with the entire PCI or they are not? Isn’t it better if 100% of merchants implement 10 critical controls vs 10% of all merchants implement 100% of controls?
      A:
      We are continuing to look at ways for merchants and others in the payment chain to reduce and minimize their card data environment. Some technologies can help, but only if done right. That is why we are putting so much effort into really scrutinizing these technologies to ensure that they are indeed effective, and under what circumstances.
      For those just starting their compliance journey, use of the PCI milestones and the Prioritized Approach [see here – A.C.] will also increase in the future. For example, in the new standards we suggest a risk-based approach to compliance programs. Mitigate the biggest risks first and you are doing yourself a great favor and moving that much closer to compliance. As an example of this, we updated requirement 6.2 to allow vulnerabilities to be ranked and prioritized according to risk. You will hear more from the Council about this in 2011.
      Q6. Some QSAs (and merchants) still complain that “QSAs are subjective.” Will there be more prescriptive assessment procedures?
      A:
      Compliance cannot be absolute and completely objective, since merchant environments differ greatly. For example, look at compensating controls – they are an example of the flexibility in working with the Standards.
      If we get more rigid, and do not include flexibility within the Standard for compensating controls, more people will believe that PCI DSS is forcing them to do things “our way.” We think the current standard is at or close to a balance in this regard, allowing security and flexibility to protect card data within everyone’s own unique environment. People should feel free to ask the PCI Council if there is any doubt about a particular QSA decision.
      The Council also receives details on QSA performance from sources other than merchants. We keep a close watch on this to ensure a consistent level of QSA performance. Also, merchants are not the only ones who can report bad QSAs to the Council. [I suspect, although I am not sure, that they are talking about other QSAs here – A.C.]
      In addition, we hope that more organizations will take advantage of our Internal Security Assessor program to help their internal employees better understand the process of an external assessment and how to maintain a strong security program between assessments.
      Q7. Does the Council plan to “certify” any other security technologies, like you do for ASV vulnerability scanning?
      A:
      We do not currently have plans to do so. More guidance will likely be released on using technologies to help with PCI DSS compliance and data security. There are no plans to certify other security technologies in a manner similar to vulnerability scanning and ASVs.
      Many technologies, such as possibly logging and log review, may get additional guidance in the future. While DSS 2.0 added a sub-requirement for payment applications to support centralized logging [PA-DSS Requirement 4.4 – A.C.], it is a known area where many merchants are struggling, and additional guidance could go a long way.
      Q8. There is definitely a need for more scoping guidance, especially for complex environments, involving virtualization, cloud providers, 3rd party partners, etc. When will scoping SIG guidance be released?
      A:
      PCI DSS 2.0 does recommend using data discovery for better scoping. We’ve reinforced that all locations and flows of cardholder data should be identified and documented to ensure accurate scoping of cardholder data environment. Merchants should not be guessing at what the scope is, but completely and objectively determine that scope. Simple scoping guidance is a challenge. It is difficult to create a single set of parameters that one can undertake to determine the scope of PCI applicability across a complex environment. It is an inherently complicated task.
      However, we hope to provide some additional guidance on this process soon, perhaps, a few steps at a time to begin to help merchants better understand this process.
      Enjoy!

      Wednesday, March 02, 2011

      Honeynet Project Blog Top Posts in February 2011

      FYI, I won’t be posting these here all the time (they are written for The Honeynet Project blog – original location for this post), but I figured I’d post the first one here just to tell people about all the fun stuff from the Honeynet blog that I now take care of as the project PR officer.
      The following are the Top 5 popular blog posts from The Honeynet Project blog this month.
      1. “Observing Botnets” talks about tools to observe bot traffic on the network; it is an excerpt from the “Know Your Enemy: Tracking Botnets” paper (fun quote: ‘"A botnet is comparable to compulsory military service for windows boxes" – Stromberg’)
      2. “The Honeynet Project Releases New Tool: Cuckoo” covers Cuckoo, a binary analysis sandbox, designed and developed with the general purpose of automating the analysis of malware.
      3. “First-ever Honeynet Project Public Conference – Paris 2011” announces the first-ever Honeynet Project Public Conference, held alongside the traditional Honeynet Project Annual Workshop.
      4. “Client Side Attacks” is a brief primer on client-side attacks, where a server attacks a client that connects to it; it is an excerpt from the “Know Your Enemy: Malicious Web Servers” paper
      5. “Use of Botnets” is also an excerpt from the “Know Your Enemy: Tracking Botnets” paper. It talks about the malicious uses of botnets.
      Look for more of such Honeynet blog highlights in the future!

      RSA 2011 Conference Notes

      Here is my account of RSA 2011 conference – with all its awesomeness! I LOVE RSA and I always say that if you can only attend one security event a year – make it RSA. Now, it takes some [admittedly, small] effort to get value out of your RSA experience: the conference is not about the keynotes and not really about [way too many] tracks of presentations. It is about our industry gathering – pretty much the entire security industry as it exists in 2011! For security training you go to SANS, for latest attacks – to BlackHat/DEFCON (or, increasingly, to smaller conferences),  but for getting a sense of the entire security industry … SECURITY BUSINESS, if I may… you MUST go to RSA.

      I spent my first RSA 2011 day – Monday (aka Valentine’s Day) – at Metricon. This year Metricon – and I admit to only attending about 2/3 of the day – just disappointed. This is the second year I am sacrificing all sorts of fun RSA-related events – CSA, AGC, etc – for security metrics, and I promise I won’t do it again. Metricon this year was a shoutfest about metrics, not a conversation. Yes, there was awesomeness there, for sure: the Verizon crew showed their early results from Veris community incident data collection (“Baker, Wade and Alex Hutton - Veris Data/Veris Community”). I loved the presentation on log analysis of DNS server data (“Fruhwirth, Proschinger, Lendl, Savola - Name Server Log Data”), which did show a few new log tricks. Then a guy from the Finnish CERT talked about automated incident reporting. Chris Eng on “Critical Consumption of Infosec Stats” was fun to watch as well, although it did turn loud a few times… A few other presentations turned into a mess, and I won’t go into details – it was painful enough being there.
      RSA proper started for me on Tuesday, since – yes, I know, it is unbelievable – I spent Monday evening celebrating Valentine’s Day instead. But before that, there was one awesomeness-induced day at SecurityBSides San Francisco, where I presented on SIEM (to be covered in a separate blog post).

      So, apart from current and future client meetings (these always “taste better” at RSA somehow :-)), I had a chance to spend some time in the RSA Vendor Exhibition on Tuesday. Usually I allocate 5-6 hours to walk the vendor hall, talk to people (old and new) and figure out what’s up – and who’s down (=HBGary, obviously, this year). What did I see?
      • Since I expected the cloud to be a huge oppressive presence, I was not surprised. In fact, I was surprised that some booths did NOT have cloud written all over them. Cloud, BTW, is not just “a security trend of the day” ! It is part of a massive “trifecta of security evil”- Virtualization + Cloud + Mobile – which will absolutely change the way we do information security in the next 3-5 years and possibly longer.
      • BTW, I learned a new definition of “virtualization security” at RSA: “a belief that your virtual infrastructure is as secure as your physical infrastructure”…. aka "secured by faith" 
      • The third leg of the trifecta – mobile – was not visible at all. I am not talking about the silly “mobile anti-virus” stuff, but about security solutions focused on mobile security problems (no, viruses are not one of them!). After RSA, somebody introduced me to Nukona, which will serve here as an example of mobile security solutions focused on mobile security problems (no, I am not on their advisory board :))
      • I didn’t see enough application security, even counting all incarnations. Obviously, application security plays a leading role in security of the above “trifecta of security evil”, but somehow I have not noticed enough new approaches to appsec. I did notice a bit more whitelisting, I guess, and this approach definitely deserves to finally go into the mainstream.
      • Funnily, I noticed some sad loser vendors with big booths. What’s up, dudes? Have you blown your entire 2011 marketing budget on that RSA booth and now somebody will surely acquire you?
      • Maybe it is just me, but I have never noticed Asian companies at RSA before – this year there were a few. Is this a new trend?
      • It was also interesting to see a theme of “we unify security and compliance” (as if compliance ever existed on its own… well… it kinda did, unfortunately). What’s going on here is that vendors sold a lot of gear for compliance and now need to “sell” the worldview that all that gear is useful for security – what a shocker!
      • I also noticed a lot of network traffic and flow analysis, but absolutely no DLP. Has DLP fallen into that pig trough of disillusionment?
      • Yes, booth babes are mostly gone (except for the NSA booth, but that is totally different). However, it seems like booth monkeys are in: I had an unfortunate experience of talking with people at booths who had a very, very vague idea about security, despite having lofty titles like “VP of Marketing.” If you show up at RSA, please do your homework!
      • And sorry for a mildly idiotic final point, but why don’t we use email encryption in 2011? There was not even one vendor with a new and creative email encryption scheme. Even without the painful HBGary reminder, it seems clear that organizations treat email as sensitive protected data. How dumb is that? Please remember the old saying: unless you encrypt, email is a postcard…
      On Wednesday, apart from more meetings, I did another interview with PCI Council’s Bob Russo (to be published under separate cover).

      The rest of Wednesday was spent in fun meetings with potential clients (and a quick trip to Palo Alto… don’t ask :)). Thursday was spent advancing the CEE log standard and even – surprise! – attending a few RSA sessions.

      Fridays at RSA are always fun – not too many people at the sessions. I spent my morning at the BUS-402 “analyst roundtable” session with KuppingerCole, Gartner and Forrester, moderated by Asheem Chandna from the Greylock VC firm. Most “analyst takeaways” from RSA 2011 were pretty much about cloud and mobility. I’ve heard a fun opinion on IT consumerization: if you deal with the security of employee devices by banning them, you will automatically make your organization unattractive to the best employees – thus increasing, not reducing, your business risk (not sure how true it is, really). Also, I didn’t realize that virtualization platform vendors are abandoning security; this was strangely stated as a fact by the analysts.

      Finally, I went to the President Clinton keynote. After tolerating the ever-so-annoying Hugh Thompson, we got the full “Clinton experience” for more than an hour. The Clinton keynote was great – unexpectedly so. He mentioned the Tea Party 3x as often as he mentioned Obama (in the form of “Obamacare”) and spoke about how he is “socially progressive / fiscally conservative” (which is pretty awesome, IMHO). I am still shocked that I’d appreciate a politician’s speech at a security conference that much. He was more specific and fact-based than a few other keynoters at RSA 2011… If the video of his keynote surfaces (maybe), do listen, just for fun.

      Other fun RSA 2011 accounts are tagged here: http://www.delicious.com/anton18/RSA+2011. A few fun examples are “Change we can believe in?”, “RSA 2011: In Summary” and “RSA 2011: What’s My Theme?”



      Tuesday, March 01, 2011

      Monthly Blog Round-Up – February 2011

      Blogs are "stateless" and people often pay attention only to what they see today; thus a lot of useful security reading material gets lost. These monthly round-ups are my way of reminding people about interesting and useful blog content. If you are “too busy to read the blogs,” at least read these.

      So, here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

      1. “The Honeynet Project Releases New Tool: PhoneyC” leads all posts this month – it is reposted to my blog since I recently began serving as [volunteer] Chief PR Officer for The Honeynet Project. Another recent Project release is “The Honeynet Project Releases New Tool: Cuckoo”
      2. “Simple Log Review Checklist Released!” is still one of the most popular posts on my blog. Grab the log review checklist here, if you have not done so already. It is perfect to hand out to junior sysadmins who are just starting up with logs.
      3. My PCI DSS log review procedures that I created for a consulting client and posted on the blog (sanitized, of course!) took one of the top spots again: the first post “Complete PCI DSS Log Review Procedures, Part 1” and the whole series “PCI_Log_Review” would be useful to most large organizations under PCI DSS as well as other regulations
      4. “Test Your Mad Logging and Log Management Skills NOW!” is a fun test you can take to check your skills related to logs, logging, log analysis and log management. Another LogManagementCentral special, “Bottom 11 Log Management "Worst Practices"”, is next on the top list. Hate security “best practices”? Check out the bottom worst practices instead! Yet another LogManagementCentral special, “11 Log Resolutions for 2011”, is up here as well.
      5. The hilarious “Top 10 Things Your Log Management Vendor Won't Tell You”, written for LogManagementCentral, reigns supreme this month! Read, laugh, weep… log.

      Also, below I am thanking my top 3 referrers this month (those who are people, not organizations). So, thanks a lot to the following people whose blogs sent the most visitors to my blog:

      1. Walt Conway
      2. Lenny Zeltser
      3. Anonymous SIEM Ninja

      Also see my past annual “Top Posts” (2007, 2008, 2009, 2010). Next, see you in March for the next monthly top list.


      Dr Anton Chuvakin