Tuesday, November 30, 2010

Complete PCI DSS Log Review Procedures, Part 2

Once upon a time, I was retained to create comprehensive PCI DSS-focused log review policies, procedures and practices for a large company. As I am preparing to handle more such engagements (including ones not focused on PCI DSS, but covering other compliance or purely security log reviews), I decided to publish a heavily sanitized version of that log review guidance as a long blog post series, tagged “PCI_Log_Review.” It was written to be a complete and self-contained guidance document that can be provided to people NOT yet skilled in the sublime art of logging and log analysis in order to enable them to do the job and then grow their skills. It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by everybody, with any regulation or without any compliance flavor at all.

This is the second post in the long, long series (part 1 is here)... prepare to see lots of process flow charts.

A few tips on how you can use it in your organization:

  • If you need to establish log review practices to satisfy PCI DSS Requirement 10.6 “Review logs for all system components at least daily”, feel free to steal from this document and adapt it to your environment. I can do that for you too.
  • There is a slight bias towards application and OS logging in this document (as per client request) – but you do need to review network and security device logs as well. The same methods and practices apply to them.
  • This was created before the PCI DSS 2.0 release, but has been checked to “comply” with the most recent standard (and Requirement 10 has not changed much in 2.0).
  • A QSA looked at it and liked it – but YMMV. Your QSA is always the ultimate authority in regards to what will “make you compliant.”
  • Don’t forget to buy me a beer if you find it useful. Better – contract me to create something similar for your organization.  Are you doing a good job with log review today?

And so we continue:

Role and Responsibilities

The following roles are mentioned in the document and are involved in the Log Review process.

| Role | Responsibility | Example Involvement in Log Review |
|------|----------------|-----------------------------------|
| Application administrator | Administers the application | Configures application logging settings; may perform daily log review for operational reasons |
| System or network administrator | Administers the underlying operating system or network | Configures logging settings; may perform daily log review for operational reasons |
| Application business owner | Business manager who is responsible for the application | Approves the changes to application configuration required for logging and log review |
| Security administrator | Administers security controls on one or more systems or applications | Configures security and logging settings; performs daily log review (not of his own activities!) |
| Security analyst | Deals with operational security processes | Accesses security systems and analyzes logs and other data; performs daily log review |
| Security director or manager | Oversees security policy, process and operation | Owns the Log Review Procedures; updates the procedures as needed |
| Incident responder | Gets involved in security incident response | Deals with security incidents; reviews logs during the response process |

These roles and responsibilities are covered in depth throughout the document.

Introduction: PCI and Logging Basics

This background section covers the basics of PCI DSS logging and what is required by PCI DSS. It should be noted that logging and monitoring are not constrained to Requirement 10 but, in fact, pervade all 12 of the PCI DSS requirements. The key focus areas for this project are Requirement 10 and sections of Requirement 11.

Key Requirement 10

We will go through it line by line and then go into details, examples, and implementation guidance. [A.C. – this references PCI DSS 1.2.1 – I will mention where PCI DSS 2.0 differs for your convenience]

10.1

Specifically, Requirement 10.1 covers “establish[ing] a process for linking all access to system components (especially access done with administrative privileges such as root) to each individual user.” PCI DSS doesn’t just mandate that logs exist or that a logging process be established; it requires that logs be tied to individual persons (not computers or “devices” where they are produced). It is this requirement that often creates problems for PCI implementers, since many think of logs as “records of people’s actions,” while in reality they will only have “records of computer actions.” By the way, PCI DSS Requirement 8.1, which mandates that an organization “assigns all users a unique ID before allowing them to access system components or cardholder data,” helps to make the logs more useful here.
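To make Requirement 10.1 concrete, here is a minimal sketch (my illustration, not part of the original guidance document): one easy way to start enforcing the “link to an individual” rule is to flag log records produced under shared or generic accounts, since those can never be tied to one person. The account names and the log format below are invented assumptions.

```python
# Hypothetical sketch: flag log records that cannot be linked to an individual
# user (PCI DSS Req 10.1). Account names and log format are assumptions.
SHARED_ACCOUNTS = {"root", "admin", "administrator", "oracle", "service"}

def unlinked_records(log_lines):
    """Yield (line_no, user) for records logged under a shared/generic account."""
    for n, line in enumerate(log_lines, start=1):
        fields = line.split()   # assumed format: "timestamp user action ..."
        if len(fields) < 2:
            continue
        user = fields[1]
        if user.lower() in SHARED_ACCOUNTS:
            yield n, user

sample = [
    "2010-11-30T10:15:01 jsmith read cardholder_db",
    "2010-11-30T10:16:44 root modified audit_config",  # not linked to a person!
]
for line_no, user in unlinked_records(sample):
    print("line %d: shared account '%s' - cannot be tied to an individual" % (line_no, user))
```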

10.2

Next, Section 10.2 defines a minimum list of system events to be logged (or, to allow “the events to be reconstructed”). Such requirements are motivated by the need to assess and monitor user actions as well as other events that can affect credit card data (such as system failures).

Following is the list from the requirements (events that must be logged) from PCI DSS (v. 1.2.1):

“10.2.1 All individual user accesses to cardholder data

10.2.2 All actions taken by any individual with root or administrative privileges

10.2.3 Access to all audit trails

10.2.4 Invalid logical access attempts

10.2.5 Use of identification and authentication mechanisms

10.2.6 Initialization of the audit logs

10.2.7 Creation and deletion of system-level objects”

As can be seen, this covers data access, privileged user actions, log access and initialization, failed and invalid access attempts, authentication and authorization decisions, and system object changes. It is important to note that such a list has its roots in IT governance “best practices,” which prescribe monitoring access, authentication, authorization, change management, system availability, and suspicious activity.
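As an illustration (mine, not from the original document), the 10.2 list above can be turned into a simple coverage check: map the event types your systems actually emit onto the 10.2 categories and report any category with no events at all. The event type names below are invented for the example.

```python
# Hypothetical 10.2 coverage check; event type names are illustrative assumptions.
REQ_10_2 = {
    "10.2.1 cardholder data access":        {"chd_read", "chd_write"},
    "10.2.2 admin/root actions":            {"sudo_command", "admin_action"},
    "10.2.3 audit trail access":            {"audit_log_read"},
    "10.2.4 invalid access attempts":       {"login_failure", "access_denied"},
    "10.2.5 identification/authentication": {"login_success", "login_failure"},
    "10.2.6 audit log initialization":      {"audit_log_init"},
    "10.2.7 system object create/delete":   {"object_create", "object_delete"},
}

def coverage_gaps(observed_event_types):
    """Return the 10.2 categories for which no events were observed."""
    observed = set(observed_event_types)
    return [cat for cat, types in sorted(REQ_10_2.items()) if not types & observed]

# Example: a system that only logs logins fails most categories
for gap in coverage_gaps({"login_success", "login_failure"}):
    print("no events observed for", gap)
```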

10.3

Moreover, PCI DSS Requirement 10 goes into an even deeper level of detail and covers specific data fields or values that need to be logged for each event. These fields provide a healthy minimum requirement, which is commonly exceeded by logging mechanisms in various IT platforms.

Such fields are (quoted from PCI DSS):

“10.3.1 User identification

10.3.2 Type of event

10.3.3 Date and time

10.3.4 Success or failure indication

10.3.5 Origination of event

10.3.6 Identity or name of affected data, system component, or resource”

As can be seen, this minimum list contains all of the basic attributes needed for incident analysis and for answering the questions: when, who, where, what, and where from. For example, when trying to discover who modified a credit card database to copy all of the transactions with all their details into a hidden file (a typical insider privilege abuse), having all of the above details recorded is what makes the investigation possible.
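Here is a minimal parsing sketch (my illustration; the log format is an invented assumption, not a real device format) that extracts exactly these six 10.3 fields from a record and flags records missing any of them:

```python
import re

# Hypothetical parser for the six PCI DSS 10.3 fields; format is an assumption.
LINE_RE = re.compile(
    r"(?P<date_time>\S+)\s+"            # 10.3.3 date and time
    r"(?P<origin>\S+)\s+"               # 10.3.5 origination of event
    r"(?P<user_id>\S+)\s+"              # 10.3.1 user identification
    r"(?P<event_type>\S+)\s+"           # 10.3.2 type of event
    r"(?P<outcome>success|failure)\s+"  # 10.3.4 success or failure indication
    r"(?P<resource>\S+)"                # 10.3.6 affected data/system/resource
)

def parse_record(line):
    m = LINE_RE.match(line)
    return m.groupdict() if m else None

rec = parse_record("2010-11-30T10:16:44 10.1.2.3 jsmith db_update success cardholder_db")
if rec:
    for field, value in sorted(rec.items()):
        print("%-11s %s" % (field, value))
else:
    print("record is missing one or more required 10.3 fields")
```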

10.4

The next requirement, 10.4, addresses a commonly overlooked but critical need: accurate and consistent time in all of the logs. It seems fairly straightforward that time and security event monitoring go hand in hand. System time is frequently found to be arbitrary in a home or small office network: it’s whatever time your server was set to. But if you designed your network for some level of reliability, your systems are configured to obtain time synchronization from a reliable source, such as an NTP service.

[A.C. – this section 10.4 is slightly different in PCI DSS 2.0, but the key point – you must have reliable time – is the same]
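Since reliable time underpins everything else in log review, here is a small sketch (mine, not from the original document) that measures local clock drift with a raw SNTP query; the server name and the one-second alert threshold are assumptions, and the offset estimate ignores network delay:

```python
import socket
import struct
import time

NTP_SERVER = "pool.ntp.org"   # assumption: any reachable NTP server will do
NTP_TO_UNIX = 2208988800      # seconds between the 1900 and 1970 epochs

def clock_offset(server=NTP_SERVER, timeout=5):
    """Rough local-clock offset in seconds vs an NTP server."""
    packet = b"\x1b" + 47 * b"\0"        # minimal SNTP v3 client request
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    # transmit timestamp (seconds field) lives at bytes 40-43 of the reply
    server_secs = struct.unpack("!I", data[40:44])[0] - NTP_TO_UNIX
    return server_secs - time.time()

if __name__ == "__main__":
    offset = clock_offset()
    print("clock offset vs NTP: %.2f seconds" % offset)
    if abs(offset) > 1.0:                # illustrative drift threshold
        print("WARNING: drift this large undermines log timestamp accuracy")
```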

To be continued.

Follow PCI_Log_Review to see all posts.


Monday, November 29, 2010

My SANS Log Management Class Still Has Seats Left – Los Angeles on December 9-10

Just as a reminder, I am teaching my SANS Log Management Class (SEC434) in its 1.5 day version (and still at “beta prices” of 50% off!) in Los Angeles next month and it still has some seats left. Sign up now!
Class title: Log Management In-Depth: Compliance, Security, Forensics, and Troubleshooting
Date: Thursday, December 9, 2010 - Friday, December 10, 2010
Time: Day 1: 9:00am - 5:00pm  and Day 2: 9:00am - 12:00pm
Location:
UCLA Extension Building
10995 Le Conte Avenue
Los Angeles, CA 90024
Official SANS SEC434 description:
This first-ever dedicated log management class teaches system, network, and security logs, their analysis and management and covers the complete lifecycle of dealing with logs: the whys, hows and whats.
You will learn how to enable logging and then how to deal with the resulting data deluge by managing data retention, analyzing data using search, filtering and correlation as well as how to apply what you learned to key business and security problems. The class also teaches applications of logging to forensics, incident response and regulatory compliance.
In the beginning, you will learn what to do with various log types, including brief configuration guidance for common information systems. Next, you will learn a phased approach to implementing a company-wide log management program, and go into specific log-related tasks that need to be done on a daily, weekly, and monthly basis in regards to log review and monitoring.
Everyone is looking for a path through the PCI DSS and other regulatory compliance maze and that is what you will learn in the next section of the course. Logs are essential for resolving compliance challenges; this class will teach you what you need to concentrate on and how to make your log management compliance-friendly. And people who are already using log management for compliance will learn how to expand the benefits of their log management tools beyond compliance.
You will learn to leverage logs for critical tasks related to incident response, forensics, and operational monitoring. Logs provide one of the key information sources while responding to an incident and this class will teach you how to utilize various log types in the frenzy of an incident investigation.
Finally, the class author, Dr. Anton Chuvakin, probably has more experience in the application of logs to IT and IT security than anyone else in the industry. This means he and the other instructors chosen to teach this course have made a lot of mistakes along the way. You can save yourself a lot of pain and your organization a lot of money by learning about the common mistakes people make working with logs.
Sign up here (likely to be full in a couple of days – so please hurry)

Wednesday, November 24, 2010

Complete PCI DSS Log Review Procedures, Part 1

Once upon a time, I was retained to create comprehensive PCI DSS-focused log review policies, procedures and practices for a large company. As I am preparing to handle more such engagements (including ones not focused on PCI DSS, but covering other compliance or purely security log reviews), I decided to publish a heavily sanitized version of that log review guidance as a long blog post series, tagged “PCI_Log_Review.” It was written to be a complete and self-contained guidance document that can be provided to people NOT yet skilled in the sublime art of logging and log analysis (a key requirement for this project – the guidance was to be useful to such people) in order to enable them to do the job and then grow their skills. It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by everybody and with any regulation (or without any compliance flavor – of course!)
This is the first post in the long, long series... prepare to see lots of process flow charts :-)
A few tips on how you can use it in your organization:
  • If you need to establish log review practices to satisfy PCI DSS Requirement 10.6 “Review logs for all system components at least daily”, feel free to steal from this document and adapt it to your environment. I can do that for you too.
  • There is a slight bias towards application and OS logging in this document (as per client request) – but you do need to review network and security device logs as well. The same methods and practices apply to them.
  • This was created before the PCI DSS 2.0 release, but has been checked to “comply” with the most recent standard (and Requirement 10 has not changed much in 2.0).
  • A QSA looked at it and liked it – but YMMV. Your QSA is always the ultimate authority in regards to what will “make you compliant.”
  • Don’t forget to buy me a beer if you find it useful. Better – contract me to create something similar for your organization. Are you doing a good job with log review today? Owning an expensive SIEM product but not using it well does not magically make you compliant or secure (it can make you poor though :-)) – but then again, you already knew it….
And so we begin our journey.

Project Goals

The goal of this project is to create a comprehensive Log Review Procedures document for PCI DSS applications. Such a document needs to cover log review procedures, tasks and practices, incorporate other systems in the review workflow, and document all stages of log review. If implemented in operational practice, this Log Review Procedures document should satisfy PCI DSS requirements in select sections of PCI DSS Requirements 10 and 12 and should be adequate to pass PCI compliance validation[1].

Project Assumptions, Requirements and Precautions

These critical items are essential for the success of a PCI logging, log management and log review project. It is assumed that the following requirements are satisfied before the Log Review Procedures are put into operational practice.

Requirements

A set of requirements needs to be in place before the operational procedures described in this document can be used effectively:
1. Logging policy is created to codify PCI DSS log-related requirements as well as other regulatory and operational logging requirements
2. Logging is enabled on the in-scope systems
3. Interruption or termination of logging is in itself logged and monitored (a minimal log-silence watchdog is sketched after this list)
4. Events mandated in PCI DSS documentation are logged
5. Generated logs satisfy PCI DSS logging requirements (e.g. Req 10.3)
6. Time is synchronized across the in-scope systems and with a reliable time server (NTP or other, as per PCI DSS Req 10.4)
7. Time zones of all logging systems are known and recorded and can be reviewed in conjunction with logs
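A minimal sketch of the log-silence watchdog referenced in requirement 3 above (my illustration; the threshold and source names are assumptions, not part of the original document):

```python
import time

SILENCE_THRESHOLD = 15 * 60   # assumed: seconds without events before alarming
last_seen = {}                # source -> unix timestamp of its last event

def record_event(source, ts=None):
    """Call this for every log event received from an in-scope source."""
    last_seen[source] = ts if ts is not None else time.time()

def silent_sources(now=None):
    """Return sources quiet longer than the threshold - possible logging interruption."""
    now = now if now is not None else time.time()
    return [s for s, ts in last_seen.items() if now - ts > SILENCE_THRESHOLD]

# Example with synthetic timestamps: one source goes quiet
record_event("pos-server-1", ts=1000.0)
record_event("db-server-1", ts=2000.0)
for src in silent_sources(now=2005.0):
    print("ALERT: no events from %s - logging may be interrupted" % src)
```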

Precautions

These additional precautions need to be taken in order to make logs useful for PCI DSS compliance and other regulations, as well as for security, forensics and operational requirements:
  • Key precaution: the person whose actions are logged on a particular system cannot be the sole party responsible for log review on that same system.
  • Key precaution: PCI DSS mandates log security measures (detailed below); all access to logs should be logged and monitored to identify attempts to terminate or otherwise affect the presence and quality of logging.

[1] No assurance or guarantee of PCI compliance or of passing PCI validation with one or more PCI DSS requirements can be given in this document. Only each organization’s QSA can be the judge of compliance status, as per PCI Council guidelines.

Out-of-scope Items

The following items are not covered in the document despite the fact that they might be essential for becoming PCI DSS compliant:
| Out-of-scope Item | Why out of scope? |
|-------------------|-------------------|
| What events to log for each application? | Scope of the project is defined to cover log review only. It is assumed that proper logging is already implemented as per corporate logging policy. |
| What details to log for each logged event for each application? | Scope of the project is defined to cover log review only. It is assumed that proper logging is already implemented as per corporate logging policy. |
| High-level logging and monitoring policy | It is known that such a policy is already in place. |
| Log aggregation, rotation and retention policies and procedures | Even though PCI DSS prescribes log retention, such procedures are not covered in this document. |
| Security incident response process | Scope of the project is defined to cover log review only. Log review procedures sometimes call for initiation of a security incident response process and investigation. |
| Applications that are not in scope for PCI DSS | Scope of the project is defined to cover PCI DSS applications only. |
| Network devices that are OR are not in scope for PCI DSS | Scope of the project is defined to cover PCI DSS applications only. (A.C. note when posting: make sure you do include network devices in your PCI logging project!) |
| Access control to stored logs, protecting the confidentiality and integrity of log data | Even though PCI DSS prescribes access control guidelines for aggregated logs, such procedures are not covered in this document as per project definition. |
| Compensating controls when logging is not possible | Scope of the project is defined to cover log review only. Log review is always possible whenever logging is possible; the situation where logging is not possible is not covered in this document. |
| Real-time monitoring of central logging health, performance, etc. | Scope of the project is defined to cover periodic log review only. |
| Any and all logging requirements in PCI DSS outside of Requirements 10 and 12 | Scope of the project is defined to cover log review procedures in PCI Requirements 10 and 12 only. A brief overview of PCI logging requirements in other sections is provided, but no detailed operational guidance is given. |
| Guarantee of passing PCI DSS assessment | Only each organization’s QSA can provide such assurance or guarantee after the assessment. |
| Correlation rules for PCI monitoring | While correlation rules can be created to automate some of the items discussed in the document, the project is scoped to cover log review and not correlation. |
| Log record preservation for forensic purposes | Log record preservation should be a part of a security incident response workflow. |

Note that some or all of the above items may be mandatory for passing PCI compliance validation.
To be continued.


Go to PCI_Log_Review to see all 18 posts.

P.S. This was posted by a scheduler. I am away from computers and responses to comments will be slow.



Monday, November 22, 2010

CEE Log Standard for Dummies!

We wrote a very clear and concise note about the Common Event Expression (CEE) approach to log standards. Even marketing people can read it :-)

Quoted from here:

“I'd like to make you aware that the CEE editorial board has published a short overview white paper describing the overall CEE effort including the problems and approaches that CEE is taking.

If you want a quick summary of what CEE is and how the different parts of the effort work, we'd encourage you to take a look at this paper.
The document is available for download in PDF form on the CEE web site: http://cee.mitre.org/documents.html.

And as always, we'd encourage your feedback – please feel free to post any comments to the CEE discussion mailing list at cee-discussion-list@lists.mitre.org.”

P.S. Posted by a scheduler. I am away from the computers – responses to comments will be delayed.

Monday, November 15, 2010

How to Write an OK SIEM RFP?

Ok, some people think consultants are supposed to make money off helping enterprises write RFPs, but I am busy enough and so it goes. This is what happens if Anton is stuck in a metal tube for 5 hours in seat 1A :-)

Question: How do I go about writing a SIEM or log management RFP?

Quick answer: don’t. This “purchase method” is probably equally hated by vendors and end-users. As somebody who was “volunteered” to help sales folks with 1600-page (yes, really!) and smaller RFPs more than once during my “vendor years,” I can tell you that – with tongue firmly in cheek:

a) if you ask a vague question in your RFP, you will get either a “Yes” or a nice blurb taken from a random location in a vendor datasheet

b) if you ask a question starting with “How…?”, you will get a nice blurb taken from a vendor datasheet

c) if you ask a silly question (“do you have an Albanian language interface?”), you will get either a “Yes” or a nice blurb taken from a random location in a vendor datasheet

d) if you ask a question that is impossible to answer (“Can your product handle the high load?”), you will likely get a “Yes” – surprise!

e) if you ask an honest question that might cast a product in a negative light (“will you ever lose log data?”), what do you think you will get…?

See a theme emerge here? Note that I am not trying to imply that any particular vendor would lie in their RFP responses – the term here is “defensible creative exaggeration.” BTW, what do you think happens when a standard enterprise RFP template collides with a standard vendor RFP “boilerplate” response? Boom! The explosion of high-grade concentrated idiocy… And if you think that I am a bit cynical about this whole thing, then maybe you are correct… making sausage for a long time does distort one’s personality a wee bit :-)

Despite the above, there are two exceptions to this rule of not doing RFPs:

  1. You are obligated to do an RFP (government, etc.)
  2. You’d like to use your RFP as a chance to distill and focus your SIEM/LM requirements.

Let’s address them both at the same time. If you are case #1 above, you should really turn it into case #2.  As you recall (if not, review these posts here), one of the most important things an organization should do before buying a SIEM is to set its own goals, requirements, use cases, etc. BTW, this recent SIEM presentation stresses the same point – esp. see slide 16 and around.  This older presentation has some things to avoid at the product selection stage – see “worst practices” 1-4.

So, based on my experience on both sides of the RFP “interface”, here are some of my SIEM RFP tips:

  • Keep it short! If you cannot express what you need in under 10 pages, go back and rethink it. “Every time an organization releases a 500+ page RFP, God kills an intern.” Yes, that very intern who is tasked with responding to that monster, of course.
  • Start from your REAL main reason for getting a SIEM, your problem statement – monitor PCI DSS CDE, perform IDS/IPS alert analysis, monitor servers for suspicious logins, protect web applications via log correlation, etc
  • Include your use cases – which simply means to describe how you plan to use the system and what you expect the system to do for you. Some examples are shown here  (more high level) and here (more detailed inside the whitepaper).
  • Based on your goals and use cases (and that is important!), describe SIEM product functionality that is essential for your mission: agentless collection, bandwidth throttling, rule-based correlation, visual dashboards, trend reports, whatever…
  • Include log sources / devices that you absolutely need supported and what you mean by “supported” (e.g. parsed, normalized, categorized, suitable for correlation, covered by default correlation rules, updated promptly when log source changes, etc). This area is notorious for extra-high volume of “creative exaggeration” (“of course we support VidgetMaster 7.2! – via our generic LogMahgic 1.0 (TM) collector … which  dumps log files right into storage without analysis … and then rotates them to oblivion within 7 days”)
  • Avoid or reduce the usage of vague terms: “scalable”, “high”, “flexible”, “effective”, “advanced”, “automatic”, “proper”, etc. Why tempt the other side unnecessarily? :-)
  • Clarify most other terms, even those that look clear to you: “correlation”, “reporting”, “keyword search”, “trend”, “responsive”, etc
  • Size the environment before writing an RFP, as we discuss in LogChat #2. Baseline your log sources for 2-4 weeks to get your average EPS rate (see the sketch after this list), then include both the volume of data and the number of log sources that you absolutely need supported. Also, specify response times for reports and searches while you are at it.
  • Make phases of your SIEM project clear up front – don’t say “400,000 devices and 4,000,000 EPS enterprise-wide.” I got news for you – you probably will never get there… Be very clear about your Phase 1-2 and simply keep later phases in mind for the coming years.
  • Try hard to avoid idiotic statements (sorry!): “Vendor MUST specify their efforts and processes to guarantee that products and services provided will completely satisfy us or exceed our expectations” (quote from a real RFP)

And – hold on to your pants – despite the above effort you should be prepared to take the responses with a HUGE grain of salt. One of my contacts on the enterprise side put it simply: “of course we ignore all the specifics in RFP responses.” :-( With this approach to RFP writing, you WILL still benefit even if you don’t read the responses…

Finally, a more useful question than “how do I write a SIEM RFP?” is “how do I buy the right SIEM for my organization?” Keep this in mind while tuning your RFP. Or just retain me to help – a $20k consulting project is known to sometimes save an organization from a $500k SIEM failure….


Project Honeynet “Log Mysteries” Challenge Lessons

We just finished grading the results of Project Honeynet “Log Mysteries” Challenge #5 and there are some useful lessons for BOTH future challenge respondents and to log analysts and incident investigators everywhere.

If you look at the challenge at a high level, things seem straightforward: a bunch of log data (not that much data, mind you – only 1.14MB compressed) from a Linux system. You can squeak by even if you use manual analysis and simple scripting. Fancier tools would have worked too, of course. The questions lead you to believe that a compromise might have occurred.

Overall, people did get to some of the truth, but there were a few lapses which are worth highlighting.

First, justifying that a login activity pattern is malicious was required. Yes, a very long (hundreds) string of login failures followed by a success, all from the same IP address in China, smells very fishy, but shorter login sequences can theoretically be legit. Few people chose to justify it – in all their excitement after finding a compromise. Jumping to conclusions is one of the biggest risks during incident investigations, especially if things can go to court.

Second, just because you found a reliable compromise indication does not mean there isn’t more – of the same OR a completely different type. You got the login brute forcing via SSH, now check for web app hacking, will ya? More successful attacks in the same log? Anything of the same sort in other logs? Anything completely different but just as ominous? Finding one means little – cast a wide net again and again, even after you find reliable signs of system compromise. A good approach is to pretend that you found nothing and then try harder!

Third, post-compromise activity investigation is often as important as incident detection. Yeah, we got 0wned by “parties unknown.” And? What did the parties do after they got root? Did they drop an IRC bot to chat with their buddies or did they clean out your crown jewels? Did they impact other systems and possibly other business processes? Maybe they DoS’d NSA from your box and that whirring noise you are hearing is a mean-looking SWAT team heli-dropping on your data center roof…? :-)

Finally, everybody chose to miserably FAIL at my trick question about timing: “How certain are you of the timing?” Everybody was in the range from “certain” to “fairly sure.” Guys, you are given a bunch of logs by somebody possibly untrusted (me, in this case). How on Earth can you be sure about timestamps in the logs reflecting reality? Did you set up that NTP server? Did you check it before the incident? Did you maintain chain of custody of logs after they were captured? WTH!? Of course, you cannot be sure at all about the absolute time; you can only make a reasonably good bet that log timestamps are consistent relative to each other. A good bet, BTW, is not the same as being certain. An opposing side lawyer will tear you a new one in a second if you show up with that “certainty” in a court of law…

There was also an open-ended question about attacker motivations. Why did we ask it – think about it! So that you can learn a more social part of investigative trade. What can you hypothesize and prove? What can you learn by comparing this case with other cases you might have seen or even read about? Is this hot APT shit? Or is this a lone monkey-boy who can barely type?

Regarding commercial SIEM correlation tools: IMHO many should have picked up this brute forcing by using basic correlation rules (if count[login_failure]>100 followed by login_success where src IP = same across all failures and the success, then alert). Check your tool and make sure you have rules like this (or hire somebody to build you a useful correlation ruleset :-)) – even OSSEC can be used for this! Your exposed DMZ servers might be owned already :-)
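For the curious, the rule above fits in a few lines of Python. A sketch of the same logic (my illustration; the (src_ip, outcome) event format is an invented assumption, a stand-in for parsed SSH auth log records):

```python
from collections import defaultdict

FAILURE_THRESHOLD = 100   # per the rule above: >100 failures, then a success

def detect_bruteforce(events):
    """events: iterable of (src_ip, outcome), outcome is 'failure' or 'success'."""
    failures = defaultdict(int)
    alerts = []
    for src_ip, outcome in events:
        if outcome == "failure":
            failures[src_ip] += 1
        elif outcome == "success":
            if failures[src_ip] > FAILURE_THRESHOLD:
                alerts.append((src_ip, failures[src_ip]))
            failures[src_ip] = 0   # reset the counter after each success
    return alerts

# Example: 150 failures then a success from the same IP triggers the rule
stream = [("1.2.3.4", "failure")] * 150 + [("1.2.3.4", "success")]
for ip, count in detect_bruteforce(stream):
    print("ALERT: %d failures then success from %s - likely brute force" % (count, ip))
```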

More challenges are coming!


Wednesday, November 10, 2010

SANS Log Management Class SEC434 in LA, December 9-10

Just FYI, I am teaching my SANS Log Management Class (SEC434) in its 1.5 day version in Los Angeles next month. SANS has some juicy discounts since the class is still in beta (this is hopefully the last one). Sign up now!

Class title: Log Management In-Depth: Compliance, Security, Forensics, and Troubleshooting
Date: Thursday, December 9, 2010 - Friday, December 10, 2010
Time:

Day 1: 9:00am - 5:00pm
Day 2: 9:00am - 12:00pm

Location:

UCLA Extension Building
10995 Le Conte Avenue
Los Angeles, CA 90024

Description:

This first-ever dedicated log management class teaches system, network, and security logs, their analysis and management and covers the complete lifecycle of dealing with logs: the whys, hows and whats.

You will learn how to enable logging and then how to deal with the resulting data deluge by managing data retention, analyzing data using search, filtering and correlation as well as how to apply what you learned to key business and security problems. The class also teaches applications of logging to forensics, incident response and regulatory compliance.

In the beginning, you will learn what to do with various log types, including brief configuration guidance for common information systems. Next, you will learn a phased approach to implementing a company-wide log management program, and go into specific log-related tasks that need to be done on a daily, weekly, and monthly basis in regards to log review and monitoring.

Everyone is looking for a path through the PCI DSS and other regulatory compliance maze and that is what you will learn in the next section of the course. Logs are essential for resolving compliance challenges; this class will teach you what you need to concentrate on and how to make your log management compliance-friendly. And people who are already using log management for compliance will learn how to expand the benefits of their log management tools beyond compliance.

You will learn to leverage logs for critical tasks related to incident response, forensics, and operational monitoring. Logs provide one of the key information sources while responding to an incident and this class will teach you how to utilize various log types in the frenzy of an incident investigation.

Finally, the class author, Dr. Anton Chuvakin, probably has more experience in the application of logs to IT and IT security than anyone else in the industry. This means he and the other instructors chosen to teach this course have made a lot of mistakes along the way. You can save yourself a lot of pain and your organization a lot of money by learning about the common mistakes people make working with logs.

Sign up here (likely to be full in a couple of days – so please hurry)

Tuesday, November 09, 2010

Random Fun Highlights from PCI DSS 2.0 …

… for people who’d never read the whole thing (yes, I mean you, marketing people :-))


“Use of a PA-DSS compliant application by itself does not make an entity PCI DSS compliant, since that application must be implemented into a PCI DSS compliant environment and according to the PA-DSS Implementation Guide” – this is useful for …ahem… reminding merchants about it.


“verify that no cardholder data exists outside of the currently defined cardholder data environment” – scoping stuff became much better and this also smells like DLP to me. In any case, I hear DLP vendors are partying over this already :-)

“Where virtualization technologies are in use, implement only one primary function per virtual system component” – this is what got added to 2.2.1 and it is great! Virtualization now officially in.

“Establish a process to identify and assign a risk ranking to newly discovered security vulnerabilities” – my guess is a lot of people read too much into this change of 6.2. It pretty much means the same: “bad vuln? fix it!” I don’t believe it will lead to reduced patching and increased risk acceptance. But I am sure some vendors that mix up firewall rules with vulnerability data will be ecstatic over this one…

“Reviewing public-facing web applications via manual or automated application vulnerability security assessment tools or methods, at least annually and after any changes” – the lack of change here in 6.6, which led a lot of merchants to think that web app scanners need to be run only ANNUALLY, is sad. My guess is that ‘after ANY change’ will be conveniently missed…

“Using time-synchronization technology, synchronize all critical system clocks and times and ensure that the following is implemented for acquiring, distributing, and storing time” – updates to 10.4 are interesting; there are no new requirements (you still need to sync time), but there are more details here now. There is definitely more importance placed on this one in PCI 2.0!

“Methods that may be used in the process [of wireless ‘scanning’] include but are not limited to wireless network scans, physical/logical inspections of system components and infrastructure, network access control (NAC), or wireless IDS/IPS.” – this sure kicked some wireless IDS/IPS vendors in the balls (or so I’ve heard :-)) as this can be interpreted as ‘wireline AP detection is just fine’

“Perform quarterly internal vulnerability scans” – a new 11.2.1, which used to be rolled into 11.2, is a good idea: internal scanning was completely ignored by many merchants, sadly. And now this requirement got its own number. The same happened to scanning after changes (a new 11.2.3), which is good too. Finally, ‘rescan internal until fixed’ is a useful reminder for merchants who sometimes just scan for scanning’s sake.

“Includes an annual process that identifies threats, and vulnerabilities, and results in a formal risk assessment. (Examples of risk assessment methodologies include but are not limited to OCTAVE, ISO 27005 and NIST SP 800-30.)” – adding examples to 12.1.2, as well as a testing procedure, is handy. We don’t need people creating their own idiotic “risk” “assessment” methods…

“Use intrusion-detection systems, and/or intrusion-prevention systems to monitor all traffic at the perimeter of the cardholder data environment as well as at critical points inside of the cardholder data environment, and alert personnel to suspected compromises.” – this made 11.4 more palatable to merchants, I am sure; adding ‘at critical points in the CDE’ is useful.

So, is it perfect now? Come on…  But there are many small but useful changes that will help merchants protect the cardholder data. I can see how this version can survive for 3 years just fine.

Enjoy!

Friday, November 05, 2010

LogChat Podcast 3: Anton Chuvakin and Raffy Marty (!) Talk Logs

LogChat Podcast is back again - and now on iTunes as well! Everybody knows that all this world needs is a podcast devoted to logs, logging and log management (as well as SIEM, incident response and other fun related subjects).

And now you have it AGAIN with edition #3 – through the sheer combined genius of our “guest host” Raffael Marty (sorry, Andrew Hay – please get well soon, the world of logging needs you!) and myself, Anton Chuvakin.

As usual, administrative items first:
  1. So far, we are still not ready with transcribing. I did try Amazon Mechanical Turk, but it didn't turn out to be as inexpensive as people claimed. If you have ideas for a good/inexpensive transcribing service, we are all ears.
  2. We plan for this to happen every three weeks - recorded on Wednesday, posted on Thursday. However, due to our work schedules, irregularities will occur :-)
  3. Please suggest topics to cover as well - even though we are not likely to run out of ideas for a few years. Our topic today is building a business case for log management: justifications for logging, log collection and log review, time/money savings, availability monitoring, logs for incident response AND system troubleshooting, “going beyond compliance”, business case for SIEM vs log management, etc
  4. Any other feedback is HUGELY useful. Is it too long? Too loud? Too rant-y? Too technical? Not enough jokes? Too few mentions of the "cloud"? Feedback please!
And now, in all its glory - the podcast: link to #3 MP3 is here [MP3], RSS feed is here - it is also on iTunes now.

Enjoy THE LogChat!

Wednesday, November 03, 2010

Log Management Tool Selection Checklist Out!

Knowing how much people love checklists, here is one more: a checklist for comparing log management tools.

It is being released at the new log management related site, Log Management Central (subscribe to RSS, follow on Twitter).

  • The announcement and brief description is here.
  • Printable PDF version is here.
  • Spreadsheet XLS version with adjustable criteria scoring is here.

Disclosure: creation of this checklist was funded by a vendor, but it did not affect my choice of criteria or any other content decision. It also does not reduce awesomeness in any way! In other words, it is up to you how to use it (and whether to use it) and what decision to make after evaluating the tools. Just don’t make a decision of letting your logs rot :-)

Please feel free to make suggestions to make the checklist more useful! Is anything missing? Worded in a non-vendor neutral way? Anything else?


Monday, November 01, 2010

CFP for RSA 2011 Metricon 5.5 Event: Be There!

“Mini-MetriCon 5.5 (organized by securitymetrics.org, loosely defined :-)) is intended as a forum for lively discussion in the area of security metrics. It is a forum for quantifiable approaches and results to problems afflicting information security today, with a bias towards specific approaches that demonstrate the value of security metrics with respect to a security-related goal. Topics and presentations will be selected for their potential to stimulate discussion in the workshop.
Mini-MetriCon will be a one-day event, Monday, February 14, 2011, co-located with the RSA Conference; the meeting room is a courtesy of RSA.
Mini-Metricon begins at 8:30am, and lunch is taken in the meeting room.
Attendance will be by invitation and limited in size.
All participants are expected to be willing to address the group in some fashion. Potential Mini-Metricon participants are expected to submit a discussion topic. Abstracts of papers, research projects, or practitioner presentations are encouraged and may result in a session allocation devoted to the submission topic. We also welcome ideas for 5-to-10-minute lightning talks on topics such as security-related data sets or key problems and challenges in security metrics. Collections of these talks are expected to result in group discussion on the submitter's topic of interest.
Submissions should be sent to  metricon5.5@securitymetrics.org  by November 12, 2010.”
Remember, the ONLY way to be there is to propose a discussion topic! There is no non-participating audience, as per the event charter :-)
P.S. Last year I had to pass on both a Cloud Security Alliance meet-up and some VC meetings in order to be at Metricon – and I didn’t regret it one bit. As you can guess, I can recognize deep awesomeness when I see it :-)



Monthly Blog Round-Up – October 2010

Blogs are "stateless" and people often pay attention only to what they see today. Thus a lot of useful security reading material gets lost. These monthly round-ups are my way of reminding people about interesting blog content. If you are “too busy to read the blogs,” at least read these.

So, here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

  1. By far, the top position in October is held by my repost of my free log management tool list (“On Free Log Management Tools”) from my consulting site. The list was reposted and retweeted like crazy. The original version was written as a companion to our “Log Review Checklist” that also sits on the top list this month.
  2. The notes from my reading of Verizon PCI report (“Verizon PCI Report is Out”) are next. The report is really, really good so you should read it along with their data breach reports.
  3. “On Choosing SIEM”, a companion to “How Do I Get The Best SIEM?”, held the next top position. If you are thinking of getting a SIEM or a log management tool, check them out and also look at related resources at the end of these posts. “The Myth of SIEM as “An Analyst-in-the-box” or How NOT to Pick a SIEM-II?” and ““I Want to Buy Correlation” or How NOT to Pick a SIEM?” also stay at the top – it seems like smaller organizations are looking at deploying SIEM and log management and there is a lot of interest in simple guidance. BTW, the newest post in this loose series is ““So, What Should I Want?” or How NOT to Pick a SIEM-III?” And you can always get me to help with the selection, of course.
  4. Career posts are always super-popular somehow: the “Gartner-heads vs Packet-heads” post is no exception. The previous post in my security career series (“Skills for Work vs Skills for Getting Hired”) still shows up in the Top 10, as does their predecessor “Myth of an Expert Generalist.”
  5. “Updated With Community Feedback SANS Top 7 Essential Log Reports DRAFT2”, “SANS Top 5 Essential Log Reports Update!” and their predecessor “Top5 SANS Log Reports Update DRAFT” also show up close to the top. Now that I have a bit more time, I will finally finish the write-up and submit it to SANS for distribution…
  6. Our LogChat podcast release is next on the list – the third issue is coming next week. The podcast is now on iTunes as well – check it out.

Also, below I am thanking my top 3 referrers this month (those who are people, not organizations). So, thanks a lot to the following people whose blogs sent the most visitors to my blog:

  1. Walt Conway
  2. Ben Tomhave
  3. Michał Wiczyński

See you in November; also see my annual “Top Posts” - 2007, 2008, 2009!


Dr Anton Chuvakin