Friday, December 31, 2010

Complete PCI DSS Log Review Procedures, Part 15

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company.  They are focused on PCI DSS compliance, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all. The practices can be implemented with commercial log management or SIEM tools, open source log analysis tools, or manually. As you undoubtedly know, tools alone don’t make anybody compliant!

This is the 15th post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.

And so we continue with our Complete PCI DSS Log Review Procedures (please consider reading from Part 1 – at this stage we are deep in the details and these sections might seem out of context without reading earlier parts):

PCI Compliance Evidence Package

Finally, it is useful to create a “PCI Compliance Evidence Package” based on the established and implemented procedures, and show it to the QSA. It will help establish your compliance with three key PCI DSS logging requirements:

· Presence and adequacy of logging

· Log review

· Exception handling

While it is possible to prepare the evidence package right before the assessment, it is much easier to maintain it on an ongoing basis. For example, keep printed or electronic copies of the following:

1. Logging policy that covers all of the PCI DSS in-scope systems

2. Logging and log review procedures (this document)

3. List of log sources – all systems and their components (applications) from the in-scope environment

4. Sampling of configuration files that indicate that logging is configured according to the policy (e.g. /etc/syslog.conf for Unix, screenshots of audit policy for Windows, etc)

5. Sampling of logs from in-scope systems that indicate that logs are being generated according to the policy and satisfy PCI DSS logging requirements

6. Exported or printed report from a log management tool that shows that log reviews are taking place

7. Up-to-date logbook defined above

This allows you to establish compliant status at any time and to prove ongoing compliance.
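Since the package is much easier to maintain on an ongoing basis than to assemble right before the assessment, the electronic items above can even be snapshotted on a schedule. Below is a minimal sketch of such a script (not part of the original procedures; all paths are hypothetical placeholders for your own in-scope systems). Run it from cron or Task Scheduler and the dated folders become part of the evidence package.

# Minimal sketch (illustrative only): snapshot compliance evidence into a
# dated folder. All paths below are hypothetical -- replace them with the
# real locations used by your in-scope systems.
import shutil
import datetime
from pathlib import Path

EVIDENCE_ROOT = Path("/srv/pci-evidence")          # assumption: evidence store
ITEMS = [
    Path("/etc/syslog.conf"),                      # item 4: logging configuration
    Path("/var/log/secure"),                       # item 5: sample in-scope logs
    Path("/srv/reports/daily_log_review.pdf"),     # item 6: log review report
    Path("/srv/logbook/logbook.txt"),              # item 7: exception logbook
]

def snapshot_evidence() -> Path:
    """Copy the current evidence items into a folder named after today's date."""
    dest = EVIDENCE_ROOT / datetime.date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    for item in ITEMS:
        if item.exists():
            shutil.copy2(item, dest / item.name)
    return dest

if __name__ == "__main__":
    print("Evidence snapshot written to", snapshot_evidence())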

To be continued.

Follow PCI_Log_Review to see all posts.


Wednesday, December 29, 2010

SANS Log Management Survey is OUT!

Just quoted from here:

Christmas in May: Take the SANS 2011 Annual Log Management Survey

Take the 7th Annual Log Management Survey and be entered to win a $250 American Express Gift card.

This comprehensive survey has become a leading indicator of how well log management and automation helps organizations with their security and compliance needs. To take our survey, follow this link: http://www.sans.org/info/68369

The results will be released in early May during a short series of live webcasts with Jerry Shenk and Dave Shackleford.

Do the survey, please. Past years’ results have been very insightful thanks to good participation.


Complete PCI DSS Log Review Procedures, Part 14

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company.  They are focused on PCI DSS compliance, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all. The practices can be implemented with commercial log management or SIEM tools, open source log analysis tools, or manually. As you undoubtedly know, tools alone cannot and do not make anybody compliant!

This is the 14th post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.

And so we continue with our Complete PCI DSS Log Review Procedures (please consider reading from Part 1 – at this stage we are deep in the details and these sections might seem out of context without reading earlier parts):

Example Logbook Entry

Here is an example following the above pattern:

1. Date/time/time zone this logbook entry was started: November 23, 2009, 4:15PM PST

2. Name and role of the person starting the logbook entry: Anton Chuvakin, principal consultant.

3. Reason the logbook entry is started: log exception (copied from the log aggregation tool or from the original log file). Make sure that the entire log entry is copied, especially its time stamp (which is likely to be different from the time of this record) and the system it came from (what/when/where, etc.):

[image omitted: screenshot of the exception log entry]

Time/date of log: 10/21/2009 10:01:23 PM PST

System: OLGA.example.com

4. Details on why the log is not routine and why this analysis is undertaken: this event ID (Windows event ID 11) from this application event source (source: crypt32) was never seen before on any of the systems where logs are reviewed across our organization.

5. Information about the system that produced the exception log record or the one this log exception is about

a. Hostname: OLGA.example.com

b. OS: Windows XP SP 3

c. Application name: N/A

d. IP address(es): 10.1.1.1

e. Location: Home office

f. Ownership (if known): Olga Chuvakin, President and CEO

g. System criticality (if defined and applicable): critical, main laptop of the executive

h. Under patch management, change management, FIM, etc: yes

6. Information about the user whose activity produced the log: N/A, no user activity involved

7. Investigation procedure followed, tools used, screenshots, etc: procedure for “Initial Investigation” described above

8. Investigative actions taken: following the procedure for “Initial Investigation” described above, it was determined that this log entry is followed by a successful completion of the action logged. Specifically, on the same day, 1 second later the following log entry appeared:

[image omitted: screenshot of the subsequent success log entry]

This entry indicates the successful completion of the action referenced in our exception log entry and thus no adverse impact from the error/failure is present.

9. People contacted in the course of the log analysis: none

10. Impact determined during the course of the analysis: impact was determined to be low to non-existent; no functionality was adversely affected and no system was at risk.

11. Recommendations for actions, mitigations (if needed): no mitigation needed, added this log entry to baseline to be ignored in the future as long as the subsequent log entry exists.

A logbook of this sort is used as compliance evidence since it establishes follow-up on log exceptions, required in item 10.6.a of the PCI DSS validation procedures, which states “Obtain and examine security policies and procedures to verify that they include procedures to review security logs at least daily and that follow-up to exceptions is required.”

The logbook (whether in electronic or paper form) can be presented to a QSA or other auditor, if requested. I recommend retaining the logbook for 3 years, or at least 2x the log retention period (1 year for PCI DSS).

To be continued.

Follow PCI_Log_Review to see all posts.


Monday, December 27, 2010

Complete PCI DSS Log Review Procedures, Part 13

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company.  They are focused on PCI DSS compliance, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all. The practices can be implemented with commercial log management or SIEM tools, open source log analysis tools, or manually. As you undoubtedly know, tools alone don’t make anybody compliant!

This is the 13th post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.

And so we continue with our Complete PCI DSS Log Review Procedures (please consider reading from Part 1 – at this stage we are deep in the details and these sections might seem out of context without reading earlier parts):

Logbook – Evidence of Exception Investigations

How do you create a logbook that proves that you are reviewing logs and following up with exception analysis, as prescribed by PCI DSS Requirement 10? The logbook is used to document everything related to analyzing and investigating the exceptions flagged during daily review. While the same logbook approach is used in the incident handling process (such as the SANS Incident Response Workflow), in this document it is utilized as compliance evidence.

The logbook should record all systems involved, all people interviewed, all actions taken as well as their justifications, what outcome resulted, what tools and commands were used (with their results), etc.

Here is one recommendation for a logbook entry:

Recommended Logbook Format

Logbook entry:

1. Date/time/time zone this logbook entry was started

2. Name and role of the person starting the logbook entry

3. Reason it is started: log exception (copied from the log aggregation tool or from the original log file). Make sure that the entire log entry is copied, especially its time stamp (which is likely to be different from the time of this record) and the system it came from (what/when/where, etc.)

4. Details on why the log is not routine and why this analysis is undertaken

5. Information about the system that produced the exception log record or the one this log exception is about

a. Hostname

b. OS

c. Application name

d. IP address(es)

e. Location

f. Ownership (if known)

g. System criticality (if defined and applicable)

h. Under patch management, change management, FIM, etc.

6. Information about the user whose activity produced the log (if applicable)

7. Investigation procedure followed, tools used, screenshots, etc

8. Investigative actions taken

9. People contacted in the course of the log analysis

10. Impact determined during the course of the analysis

11. Recommendations for actions, mitigations (if needed)
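If the logbook is kept electronically, the same fields can be captured as a structured record. Here is a minimal sketch in Python (field names are illustrative only, not prescribed by PCI DSS; adapt them to your own logbook template):

# Minimal sketch of an electronic logbook entry mirroring the format above.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LogbookEntry:
    started_at: str                 # 1. date/time/time zone the entry was started
    analyst: str                    # 2. name and role of the person starting it
    reason: str                     # 3. the exception log entry, copied verbatim
    why_not_routine: str            # 4. why the log is not routine
    hostname: str                   # 5a. system that produced the record
    os: str                         # 5b.
    application: Optional[str]      # 5c.
    ip_addresses: List[str] = field(default_factory=list)        # 5d.
    location: str = ""              # 5e.
    owner: str = ""                 # 5f.
    criticality: str = ""           # 5g.
    managed: str = ""               # 5h. patch/change management, FIM, etc.
    user: Optional[str] = None      # 6. user whose activity produced the log
    procedure: str = ""             # 7. investigation procedure and tools used
    actions_taken: str = ""         # 8. investigative actions taken
    people_contacted: List[str] = field(default_factory=list)    # 9.
    impact: str = ""                # 10. impact determined during analysis
    recommendations: str = ""       # 11. mitigations, if needed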

To be continued.

Follow PCI_Log_Review to see all posts.


Friday, December 24, 2010

Complete PCI DSS Log Review Procedures, Part 12

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company.  It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all. It can be performed manually (at small log volumes), using free open source log analysis tools, or using commercial log management or SIEM tools.
This is the 12th post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.
And so we continue with our Complete PCI DSS Log Review Procedures (please consider reading from Part 1 – at this stage we are deep in the details and these pieces might seem out of context without reading earlier parts):

Validation of Log Review

The final and critical part of compliance-motivated log review is making sure that there is sufficient evidence of the process, its real-world implementation and the diligence in following it. The good news here is that the same data can be used for management reporting about the logging and log review processes, so you are not doing this just for PCI DSS compliance.
Let’s determine what documentation should be produced as proof of log review.
First, a common misconception is that having the actual logs provides that. That is not really true: ”having logs” and “having logs reviewed” are completely different things, and sometimes years of maturing the security and compliance program separate one from the other. Please make sure that your team members keep that in mind.
Just as a reminder, we have several major pieces that we need to prove for PCI DSS compliance validation. Here is the master-list of all compliance proof we will assemble. Unlike other sections, here we will cover proof of logging and not just proof of log review since the latter is so dependent on the former:
· Presence and adequacy of logging
· Presence of log review processes and their implementation
· Exception handling process and its implementation.
Now we can organize the proof around those areas and then build processes to collect such proof.
Proof of Logging
The first category is: proof of presence and adequacy of logging. This section is the easiest to prove out of the three.
The following items serve as proof of logging:
1. Documented logging policy, covering both logged events and details logged for each event
2. System / application configuration files implementing the above policy
3. Logs produced by the above applications while following the policy.
As stated previously, your QSA is the ultimate judge of what proof of compliance will be adequate for your organization. These tips have been found adequate in the past, but see the disclaimers in earlier parts for details.
Proof of Log Review
The second category: proof of log review processes and their implementation. This section is harder to prove than the previous one.
The following items serve as proof of log review:
1. Documented logging policy, covering log review
2. Documented operational procedures, detailing the exact steps taken to review the logs
3. Records of log review tasks being executed by the appropriate personnel (some log management products create an audit log of reviewed reports and events; such audit trail should cover it – the case of manual review is covered below) – think about this item as “log review log”
4. Also, records of exceptions being investigated (next section) indirectly prove that log review is taking place as well.
Proof of Exception Handling
The third category: proof of exception handling process and its implementation. This section is by far the hardest to prove out of these three.
The following items serve as proof of log exception process:
1. Documented logging policy, covering exceptions and their handling
2. Documented operational procedures, detailing the exact steps taken to investigate exceptions found during log review (this document)
3. A log of all exceptions investigated with actions taken (“logbook”)

The above evidence should provide ample proof that the organization follows PCI DSS guidance with diligence. Let’s focus on producing this proof – the table has the details.
PCI Compliance Logging Sub-Domain | Proof of Compliance | How to Obtain Proof?
Proof of presence and adequacy of logging | Documented logging policy | Create policy, if not present
Proof of presence and adequacy of logging | System / application configuration files | After deployment, preserve the configuration files as a master copy
Proof of presence and adequacy of logging | Logs produced by the above applications | Collect sample logs and save as proof of compliance
Proof of log review | Documented logging policy | Create policy, if not present
Proof of log review | Documented operational procedures | <this document!>
Proof of log review | Records of log review tasks being executed | Either use the tool or create a “logbook” (format below)
Proof of log review | Records of exceptions being investigated | Create a “logbook” of investigations
Proof of exception handling | Documented logging policy | Create policy, if not present
Proof of exception handling | Documented operational procedures | <this document!>
Proof of exception handling | A log of all exceptions investigated | Create a “logbook” of investigations or “knowledge base”
These items directly map to PCI DSS Requirement 10 and the PCI DSS validation procedures.
The critical item from the above list is “a logbook” that is used to record exception follow-up and investigation, thus creating powerful evidence of compliance with PCI DSS requirements. In a more advanced form, the logbook can even grow into an investigative “knowledge base” that contains all past exception analysis cases.
To be continued.
Follow PCI_Log_Review to see all posts.

Wednesday, December 22, 2010

Checking My 2010 Security Predictions

People should be banned from making new industry predictions before checking how their past predictions fared – and possibly embarrassing themselves again and again (see “The Year of Mobile Malware” :-))
My 2010 predictions were here: http://chuvakin.blogspot.com/2009/12/security-predictions-2010.html
Proceeding to check them below!
#1
Compliance: as many other observers (Joshua at 451 Group comes to mind) noted, many of the security activities in 2010 will be defined by regulatory mandates such as PCI DSS, HIPAA/HITECH and others.  […]
Sadly, this is as true as ever. As security moves downstream/downmarket, compliance plays a bigger role. WIN – but an easy one. BTW, some people did predict “the death of compliance”, but this sure isn't happening any time soon…
#2
Bad shit: what we have here is an intersection of two opposite trends: rampant, professional cybercrime and low occurrence of card fraud (as a percentage of card transaction volume). I explain this conundrum by predicting a scary picture of huge criminal opportunity, which still exists unchanged.  […]
Shit is indeed pretty bad. WIN – but an easy one; no fame points for getting this right. Things will get worse before they get better and we are in the “climb to REALLY bad shit” phase, IMHO.
#3


Intrusion tolerance is another trend (and its continued existence is in fact my prediction for 2010) which helps the “bad guys”: it is highly likely that most organizations have bots on their networks. What are they doing about it? Nothing much that actually helps. It is too hard; and many businesses just aren’t equipped – both skill-wise and technology-wise – to combat a well-managed criminal operation which also happens to not be very disruptive to the operation of their own business.  […]
Same thing – predicting this was like taking candy from a baby. WIN, but with no extra credit. Organizations will likely stay owned, despite regulations, media attention, big security budgets, etc.
#4
Cloud security: I predict much more noise and a bit more clarity (due to CSA work) in regards to information security requirements as more and more IT migrates to the cloud. The Holy Grail of “cloud security” – a credible cloud provider assessment guide/checklist – will emerge during 2010.
A WIN here too – more clarity on cloud security is here. CSA work (the CSA 2.0 guide, the recent cloud compliance matrix and the CloudAudit releases) is helping.  Still, there is a lot of delusional cloud noise from many vendors…
#5

Platform security: just like Vista didn’t in 2007, Windows 7 won’t “make us secure.” The volume of W7 hacking  will increase as the year progresses.  Also, in 2008, I predicted an increase in Mac hacking. I’d like to repeat it as there is still room there :-) […]
And, only the truly lazy won’t predict more web application attacks. Of course! It is a true no-brainer, if there ever were one. Web application hacking is “a remote network service overflow” of the 2000s….
So, a partial WIN here, but then again – predicting “more attacks” is stupidly easy. BTW, Windows 7 is holding up pretty well and there is no dramatic rise in public W7 vuln releases. Are people hoarding them (possible) or are the vulnerabilities just not there? Or maybe everybody is owning Adobe now (NEWSFLASH: Adobe goes 2 days without a 0-day vulnerability!)


#6

Incidents: just like in 2008, I predict no major utility/SCADA intrusion and thus no true “cyber-terrorism” (not yet). Everybody predicts this one forever (as Rich mentions), but I am guessing we would need to wait at least a few years for this one (see my upcoming predictions for 2020!). Sure, it makes for interesting thinking about why it did not happen; surely there is a massive fun factor in sending some sewage towards your enemies.  I'm happy to be correct here and have no such incidents happen, but I was predicting that something major and world-changing would NOT happen, so the Feynman paradox is on my side. […]
WIN – but a reluctant one. I still won’t predict it for 2011 (predictions out soon), but even thinking about this one freaks me out…

UPDATED: in the comments, Alex has [likely] correctly called me on this one – what about Stuxnet and Iran's nuclear control gear? Won't this qualify as a "major industrial control incident"? OK, maybe – but we don't know what damage they suffered, beyond annoyance. In any case, I am changing this from a WIN to a partial FAIL.

UPDATED2: this prediction is an official FAIL. It was reported that Stuxnet DID in fact significantly impact Iranian nuclear facilities by accelerating an unknown number of centrifuges to beyond safe limits, and likely causing their breakage. We have proof - sort of - that you can blow up sensitive equipment nicely using malware. So...the future begins...NOW?

A massive data theft to dwarf Heartland will probably be on the books. And it will include not some silly credit card number (really, who cares? :-)), but full identity - SSN and all.
FAIL. No such breach materialized – at least not publicly.

UPDATED3: as pointed out in comments, Wikileaks is just such a breach - big, wide-ranging; it matters even though I thought it would be a PII breach and not a confidential document breach. Changing FAIL to partial WIN.


#7

Malware: sorry guys, but this year won’t be the Year of Mobile Malware either. As I discussed here, mobile malware is "a good idea" (for attackers) provided there is something valuable to steal – but it is just not the case yet in the US. There will be more PoC malware for iPhone, BlackBerry, maybe the Droid, but there will be no rampage. On the fun side, maybe we will finally see that Facebook malware/malicious application (that I predicted and consequently missed in 2008). This one will be fun to watch (others agree), and current malware defenses will definitely not stop this "bad boy," at least not before it does damage.
WIN. Read my lips: no..year..of…mobile…malware! Yes, I know AV vendors want it badly (in their ongoing fight for relevance) and keep predicting it, but it ain’t coming. Sorry!

#8
Risk management: more confusion. Enough said. In 2008, I said “Will we know what risk management actually is in the context of IT security? No!” It sounds like we know no more now.
WIN, but maybe not for long. The growing amount of security data might change it in the next few years. Maybe. For now, as Mike said, "Risk scoring is still a load of crap."

Conclusion: I can predict, but mostly easily predictable stuff. I am an extrapolator, not a Nostradamus.

Tuesday, December 21, 2010

Complete PCI DSS Log Review Procedures, Part 11

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company.  It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all.

This is the 11th post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.

And so we continue with our Complete PCI DSS Log Review Procedures (please read in order – at this point we are pretty deep in the details and this piece might look out of context):

External Information Sources Investigation

Here is the procedure to follow in this case:

[image omitted: diagram of the external information sources investigation procedure]

This procedure can be expanded to cover other sources of information available at the organization.

The main idea of this procedure is to identify and then query information sources (such as IdM, change management, integrity checking, network flow analysis, etc.), based on the type of the exception log entry, and then to determine its impact and the required actions (if any).

The procedure works by roughly identifying the type of a log entry and then querying the relevant information sources. In some cases, when the log entry is deemed to be an indication of a serious issue, the incident response process is triggered.
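As a rough illustration of that dispatch step, here is a minimal sketch (illustrative only – the classifier and the lookup functions are placeholders for whatever IdM, change management, FIM or flow analysis integrations your organization actually has):

# Minimal sketch (illustrative only): route an exception log entry to the
# external information sources most likely to explain it.
def classify(entry: dict) -> str:
    """Roughly classify an exception entry by what it seems to describe."""
    msg = entry.get("message", "").lower()
    if "login" in msg or "credential" in msg:
        return "identity"
    if "file" in msg or "integrity" in msg:
        return "integrity"
    if "config" in msg or "change" in msg:
        return "change"
    return "unknown"

# Placeholder lookups -- each would query the real system (IdM, change
# management, file integrity monitoring, network flow analysis, ...).
SOURCES = {
    "identity":  lambda e: f"IdM lookup for user {e.get('user', '?')}",
    "integrity": lambda e: f"FIM lookup for host {e.get('host', '?')}",
    "change":    lambda e: f"Change tickets for host {e.get('host', '?')}",
    "unknown":   lambda e: "No obvious source -- escalate to collaborative workflow",
}

def investigate(entry: dict) -> str:
    return SOURCES[classify(entry)](entry)

print(investigate({"message": "Login failed", "user": "olga", "host": "OLGA"}))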

However, it sometimes happens that neither the preliminary analysis nor the query of external systems yields results, and the “exception” log entry remains exceptional. In this case, the collaborative workflow is triggered. See the next section for details.

Escalation to Others Procedure – Collaborative Workflow

The investigation and escalation process is shown below:

[image omitted: diagram of the investigation and escalation workflow]

This process allows tapping into the knowledge of other people at the organization who might know what this “anomaly” is about.

The main idea of this procedure is to identify and then interview the right people who might have knowledge about the events taking place on the application, and then to determine the impact and the required actions (if any).

The very last resort is to query the application vendor; such an information request is typically time-consuming or even expensive (depending on the support contract available), so it should be used sparingly.

To be continued.

Follow PCI_Log_Review to see all posts.


Monday, December 20, 2010

Security Reflections and Musings on the Year 2010

Here is my new annual post (on top of my annual top post chart and annual predictions):  Security Reflections and Musings on a Passing Year.

Totally informal. Subjective! No science has been harmed while making it!

So, what security events, things, happenings do I remember from 2010 (in no particular order):

  • “86% of breached companies had intrusion evidence in their logs” and other super-juicy bits from the Verizon breach report.
  • Wikileaks. Your data will be stolen  and, if you are lucky, leaked. If you are not lucky, sold and then used against you. Boom! That was your business going down.
  • PCI DSS 2.0 is here – but the fight goes on. Now you merchants finally have to do it (or outsource card processing)
  • APT. Please forget APT (most people – NOT all) – while you are reading in the media about APT, your barely-there-security is being owned by Backwards Non-persistent Whaaa-you-call-that-a-threat? (BNW). Boom!!
  • TSA JunkGrabGate – please don’t laugh, but “S” in TSA actually …OK, stop laughing NOW… stands for …yeah, I know, I know… “security.” So, it counts as a part of security reflections for the year. It is definitely stuck in my head – and probably will be stuck in my head for more than a year.
  • RSA 2010 conference – this was my first show that I attended as an independent consultant (no vendor hat in hand) and I loved it. I am sooo looking forward to this year – and my press pass is already confirmed.

Maybe I can tag others to reflect on the year? Hey, others :) – want to do it?

Stand by for my review of 2010 predictions and – yes!- 2011 predictions.

Friday, December 17, 2010

Complete PCI DSS Log Review Procedures, Part 10

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company.  It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all.

This is the tenth post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.

And so we continue with our Complete PCI DSS Log Review Procedures:

Exception Investigation and Analysis

A message not fitting the profile of normal operation is flagged as “an exception.” It is important to note that an exception is not the same as a security incident, but it might be an early indication that one is taking place.

At this stage we have an individual log message that is outside of routine/normal operation. How do we figure out whether it is significant and determine its impact on security and PCI compliance status?

Initial Investigation

The following high-level investigative process (“Initial Investigation”) is used on each “exception” entry (more details are added further in the document):

[image omitted: diagram of the initial investigation process]

Specifically, the above process makes use of a log investigative checklist, which is explained below in more detail.

 

1. Look at log entries at the same time: this technique involves looking at an increasing range of time periods around the log message that is being investigated. Most log management products allow you to review logs or to search for all logs within a specific time frame. For example:

a. First, look at other log messages triggered 1 minute before and 1 minute after the “suspicious” log

b. Second, look at other log messages triggered 10 minutes before and 10 minutes after the “suspicious” log

c. Third, look at other log messages triggered 1 hour before and 1 hour after the “suspicious” log

 

2. Look at other entries from the same user: this technique includes looking for other log entries produced by the activities of the same user. It often happens that a particular logged event of a user activity can only be interpreted in the context of other activities of the same user. Most log management products allow you to “drill down into” or search for a specific user within a specific time frame.

 

3. Look at the same type of entry on other systems: this method covers looking for other log messages of the same type, but on different systems, in order to determine its impact. Learning when the same message was produced on other systems may hold clues to understanding the impact of this log message.

 

4. Look at entries from same source (if applicable): this method involves reviewing all other log messages from the network source address (where relevant).

 

5. Look at entries from the same app module (if applicable): this method involves reviewing all other log messages from the same application module or component. While other messages in the same time frame (see item 1 above) may be significant, reviewing all recent logs from the same components typically helps to reveal what is going on.
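To make the checklist concrete, here is a minimal sketch of items 1, 2 and 4 scripted against an aggregated log store. It assumes, purely for illustration, that records are available as dictionaries with time, user and source address fields:

# Minimal sketch (illustrative only) of checklist items 1, 2 and 4: pull the
# records around a suspicious entry from an aggregated log store.
from datetime import datetime, timedelta
from typing import Iterable, List

def around(records: Iterable[dict], suspect_time: datetime,
           window: timedelta) -> List[dict]:
    """Item 1: all entries within +/- window of the suspicious entry."""
    return [r for r in records
            if abs(r["time"] - suspect_time) <= window]

def same_user(records: Iterable[dict], user: str) -> List[dict]:
    """Item 2: other entries produced by the same user."""
    return [r for r in records if r.get("user") == user]

def same_source(records: Iterable[dict], source_ip: str) -> List[dict]:
    """Item 4: other entries from the same network source address."""
    return [r for r in records if r.get("source_ip") == source_ip]

# Usage: widen the window step by step, as the checklist suggests.
# for w in (timedelta(minutes=1), timedelta(minutes=10), timedelta(hours=1)):
#     context = around(all_records, suspect["time"], w)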

 

In some cases, the above checklist will not render a result. Namely, the exception log entry will remain of unknown impact to security and PCI DSS compliance. In this case, we need to acquire information from other systems, such as File Integrity Monitoring, Vulnerability Management, Anti-malware, Patch Management, Identity Management, Network Management and others.

To be continued.

Follow PCI_Log_Review to see all posts.


Wednesday, December 15, 2010

Complete PCI DSS Log Review Procedures, Part 9

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company.  It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all.

This is the ninth post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.

Today’s section covers one of the most critical parts of any log review process – the main daily workflow! Pay attention, please :)

And so we continue with our Complete PCI DSS Log Review Procedures:

Main Workflow: Daily Log Review

This is the very central piece of the log review – comparing the logs produced over the last day (in case of a daily review) with an accumulated baseline.

Daily workflow follows this model:

[image omitted: diagram of the daily log review workflow]

This diagram summarizes the actions of the log analyst who performs daily log review. Before we proceed, the issue of frequency of the log review needs to be addressed.

Frequency of Periodic Log Review

PCI DSS Requirement 10.6 explicitly states: “Review logs for all system components at least daily.” It is assumed that daily log review procedures will be followed every day. Only your QSA may approve less frequent log reviews, based on the same principle that QSAs use for compensating controls. What are some of the reasons why less frequent log reviews may be approved? The list below contains some of the reasons why daily log review may be performed less frequently than every day.

· Application or system does not produce logs every day. If log records are not added every day, then daily log review is unlikely to be needed

· Log review is performed using a log management system that collects log in batch mode, and batches of logs arrive less frequently than once a day[1]

· Application does not handle or store credit card data; it is only in scope since it is directly connected to the cardholder data environment

Remember that only your QSA’s opinion on this is binding and nobody else’s!

How does one actually compare today’s batch of logs to a baseline? Two methods are possible; both are widely used for log review – the selection can be made based on the available resources and tools used. Specifically:

[image omitted: diagram of the two baseline comparison methods]

Out of the two methods, the first method only considers log types not observed before and can be done manually as well as with tools. Despite its simplicity, it is extremely effective with many types of logs: simply noticing that a new log message type is produced is typically very insightful for security, compliance and operations.

For example, if log messages with IDs 1, 2, 3, 4, 5, 6 and 7 are produced every day in large numbers, but a log message with ID 8 is never seen, each occurrence of such a log message is reason for an investigation. If it is confirmed that the message is benign and no action is triggered, it can later be added to the baseline.
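In code, the basic method is nothing more than a set difference. A minimal sketch (the event IDs are from the example above; this is an illustration, not any particular tool’s logic):

# Minimal sketch (illustrative only) of the basic comparison method:
# flag any message type seen today that is not in the accumulated baseline.
baseline_types = {1, 2, 3, 4, 5, 6, 7}          # types seen and accepted so far
todays_types   = {1, 2, 3, 5, 8}                # types observed in today's logs

exceptions = todays_types - baseline_types      # {8} -> investigate
for event_id in sorted(exceptions):
    print(f"Exception: event type {event_id} never seen before -- investigate")

# After an exception is confirmed benign, it is added to the baseline:
# baseline_types |= {8}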

So, the summary of comparison methods for daily log review is:

 

· Basic method:

o Log type not seen before (NEW log message type)

 

· Advanced methods:

o Log type not seen before (NEW log message type)

o Log type seen more frequently than in baseline

o Log type seen less frequently than in baseline

o Log type not seen before (for particular user)

o Log type not seen before (for particular application module)

o Log type not seen before (on the weekend)

o Log type not seen before (during work day)

o New user activity noted (any log from a user not seen before on the system)

 

While following the advanced method, other comparison algorithms can be used by the log management tools as well.

After the message is flagged as an exception, we move to a different stage in our daily workflow – from daily review to investigation and analysis.


[1] While such infrequent collection is not recommended, it is not entirely uncommon either.

To be continued.

Follow PCI_Log_Review to see all posts.


Tuesday, December 14, 2010

Some Recent and Upcoming Speaking Ops

Recent:

  • "You Got That SIEM. How What Do You Do?" at BayThreat 2010 in San Jose, CA

Presentation is embedded below and available here:

Upcoming:

Enjoy!

Monday, December 13, 2010

LogChat Podcast 4: Anton Chuvakin and Andrew Hay Talk Logs

LogChat Podcast is back again - and now on iTunes as well! Everybody knows that all this world needs is a podcast devoted to logs, logging and log management (as well as SIEM, incident response and other fun related subjects).

And now you have it AGAIN with edition #4 - through the sheer combined genius of Andrew Hay and myself, Anton Chuvakin.

Our topic today is log management IN the cloud: in the cloud –not for the cloud, NIST cloud definitions and hosted log management, log management AND SIEM in the cloud, real-time correlation in the cloud – is it possible, hybrid solutions, sensitivity of log data, barriers to market entry,  log collection for the cloud, etc.
All that + how not to anger Chris Hoff with your cloud log management tool :)

Some administrative items:
  1. No, we are still not ready with transcribing and, yes, we still want it.  I did try Amazon Mechanical Turk, but it didn't turn out to be as inexpensive as people claimed. If you have ideas for a good AND cheap transcribing service, we are all ears.
  2. We plan for this to happen every three weeks - recorded on Wednesday, posted on Thursday. However, due to our work schedules, irregularities will occur all the time….
  3. Please suggest topics to cover as well - even though we are not likely to run out of ideas for a few years.
  4. Any other feedback is HUGELY useful. Is it too long? Too loud? Too rant-y? Too technical? Not enough jokes? Too few mentions of the "cloud"? Feedback please!
And now, in all its glory - the podcast: link to #4 MP3 is here [MP3], RSS feed is here - it is also on iTunes now.

Enjoy THE LogChat!



Saturday, December 11, 2010

Complete PCI DSS Log Review Procedures, Part 8

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company. As I am preparing to handle more such engagements (including ones not focused on PCI DSS, but covering other compliance or purely security log reviews), I decided to publish a heavily sanitized version of that log review guidance as a long blog post series, tagged “PCI_Log_Review.”  It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all.
This is the eighth post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.
And so we continue with our Complete PCI DSS Log Review Procedures:
Building an Initial Baseline Manually
Building a baseline without a log management tool has to be done when logs are not compatible with an available tool, or when the available tool has a poor understanding of the log data (e.g. a text indexing tool). To do it, perform the following:
1. Make sure that relevant logs from a PCI application are saved in one location
2. Select a time period for an initial baseline: “90 days” or “all time” if logs have been collected for less than 90 days; check the timestamp on the earliest logs to determine that
3. Review log entries starting from the oldest to the newest, attempting to identify their types
4. Manually create a summary of all observed types; if realistic, collect the counts of times each message was seen (not likely in case of high log data volume)
5. Assuming that no breaches of card data have been discovered in that time period, we can accept the above report as a baseline for “routine operation”
6. An additional step should be performed while creating a baseline: even though we assume that no compromise of card data has taken place, there is a chance that some of the log messages recorded over the 90 day period triggered some kind of action or remediation. Such messages are referred to as “known bad” and should be marked as such.
Example: Building an Initial Baseline Manually
Here is an example process of the above, performed on a Windows system in-scope for PCI DSS that also contains PCI DSS application called “SecureFAIL.”
1. Make sure that relevant logs from a PCI application are saved in one location
First, verify Windows event logging is running:
a. Go to “Control Panel”, click on “Administrative Tools”, click on “Event Viewer”
b. Right-click on “Security Log”, select “Properties.” The result should match this:
[image omitted: screenshot of the Windows Security Log properties dialog]
c. Next, review audit policy
Second, verify SecureFAIL dedicated logging:
a. Go to “C:\Program Files\SecureFAIL\Logs”
b. Review the contents of the directory, it should show the following:
[image omitted: screenshot of the SecureFAIL log directory listing]
2. Select a time period for an initial baseline: “90 days” or “all time” if logs have been collected for less than 90 days; check the timestamp on the earliest logs to determine that
a. Windows event logs: available for 30 days on the system, might be available for longer
b. SecureFAIL logs: available for 90 days on the system, might be available for longer
Baselining will be performed over the last 30 days since data is available for 30 days only.

3. Review log entries starting from the oldest to the newest, attempting to identify their types
a. Review all Windows event logs using the MS LogParser tool (it can be obtained from http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en).
Run the tool as follows:
C:\Tools\LogParser.exe -i:EVT "SELECT EventID, SourceName, COUNT(*) AS Total INTO event_log_summary.txt FROM Security GROUP BY EventID, SourceName" -resolveSIDs:ON
and review the resulting summary of event types.
b. Open the file “secureFAIL_log-082009.txt” in Notepad and review the entries. The LogParser tool above may also be used to analyze logs in plain text files (detailed instructions on using the tool fall outside the scope of this document; a minimal scripted alternative is sketched after this walkthrough)
4. Manually create a summary of all observed types; if realistic, collect the counts of times each message was seen (not likely in case of high log data volume)

This step is the same as when using the automated tools – the baseline is a table of all event types as shown below:
Event ID | Event Description | Count | Average Count/day
1517 | Registry failure | 212 | 2.3
562 | Login failed | 200 | 2.2
563 | Login succeeded | 24 | 0.3
550 | User credentials updated | 12 | 0.1
666 | Memory exhausted | 1 | 0.0

Assuming that no breaches of card data have been discovered in that time period, we can accept the above report as a baseline for “routine operation.” However, during the first review of the logs, it might be necessary to investigate some of the logged events before we accept them as normal (such as the last event in the table). The next step explains how this is done.
5. An additional step should be performed while creating a baseline: even though we assume that no compromise of card data has taken place, there is a chance that some of the log messages recorded over the 90 day period triggered some kind of action or remediation. Such messages are referred to as “known bad” and should be marked as such.
Same as when using the automated log management tools, we notice the last line, the log record with an event ID = 666 and event name “Memory exhausted” that only occurred once during the 90 day period. Such rarity of the event is at least interesting; the message description (“Memory exhausted”) might also indicate a potentially serious issue and thus needs to be investigated as described below in the investigative procedures.
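As mentioned in step 3 above, the same summary can be produced for plain-text application logs with a small script instead of LogParser. A minimal sketch, assuming a hypothetical “timestamp LEVEL: message” layout for the SecureFAIL files (the pattern is a placeholder – adjust it to the real log format):

# Minimal sketch (illustrative only): summarize message types in a plain-text
# application log, similar to what the LogParser query does for the event log.
import re
from collections import Counter

# Assumes a hypothetical "<date> <time> LEVEL: message" layout per line.
TYPE_RE = re.compile(r"^\S+ \S+ (?P<type>[A-Z]+): (?P<msg>.*)$")

def summarize(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = TYPE_RE.match(line.strip())
            key = m.group("type") if m else "UNPARSED"
            counts[key] += 1
    return counts

for msg_type, count in summarize("secureFAIL_log-082009.txt").most_common():
    print(f"{msg_type}\t{count}")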
What are some of the messages that will be “known bad” for most applications?
Guidance for Identifying “Known Bad” Messages
The following are some rough guidelines for marking some messages as “known bad” during the process of creating the baseline. If generated, these messages will be looked at first during the daily review process. MANY site-specific messages might need to be added, but this list provides a useful starting point.
1. Login and other “access granted” log messages occurring at unusual hour[1]
2. Credential and access modifications log messages occurring outside of a change window
3. Any log messages produced by expired user accounts
4. Reboot/restart messages outside of maintenance window (if defined)
5. Backup/export of data outside of backup windows (if defined)
6. Log data deletion
7. Logging termination on system or application
8. Any change to logging configuration on the system or application
9. Any log message that has triggered any action in the past: system configuration, investigation, etc
10. Other logs clearly associated with security policy violations.
As we can see, this list is also very useful for creating a “what to monitor in near-real-time?” policy and not just for logging. Over time, this list should be expanded based on the knowledge of local application logs and past investigations.
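To show how a couple of these guidelines translate into practice, here is a minimal sketch (the business hours window, keywords and field names are assumptions to be replaced with your own policy and actual log fields):

# Minimal sketch (illustrative only): flag "known bad" entries using a few of
# the guidelines above. Hours, keywords and field names are placeholders.
from datetime import datetime, time

BUSINESS_HOURS = (time(7, 0), time(19, 0))       # assumption: local site policy
BAD_KEYWORDS = ("audit log cleared", "logging stopped", "audit policy changed")

def is_known_bad(entry: dict) -> bool:
    ts: datetime = entry["time"]
    msg = entry["message"].lower()
    # Guideline 1: access granted outside business hours
    if "logon" in msg and "success" in msg and not (
            BUSINESS_HOURS[0] <= ts.time() <= BUSINESS_HOURS[1]):
        return True
    # Guidelines 6-8: log deletion, logging termination, logging config changes
    if any(k in msg for k in BAD_KEYWORDS):
        return True
    return False

print(is_known_bad({"time": datetime(2009, 10, 21, 23, 5),
                    "message": "Logon success for user olga"}))   # True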
After we have built the initial baselines, we can start the daily log review.

[1] Technically, this also requires a creation of a baseline for better accuracy. However, logins occurring outside of business hours (for the correct time zone!) are typically at least “interesting” to review.


To be continued.

Follow PCI_Log_Review to see all posts.



Friday, December 10, 2010

Complete PCI DSS Log Review Procedures, Part 7

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company. As I am preparing to handle more such engagements (including ones not focused on PCI DSS, but covering other compliance or purely security log reviews), I decided to publish a heavily sanitized version of that log review guidance as a long blog post series, tagged “PCI_Log_Review.”  It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all.
This is the seventh post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.
And so we continue with our Complete PCI DSS Log Review Procedures:
Building an Initial Baseline Using a Log Management Tool
To build a baseline using a log management tool perform the following:
1. Make sure that relevant logs from a PCI application are aggregated by the log management tools
2. Confirm that the tool can “understand” (parse, tokenize, etc) the messages and identify the “event ID” or message type of each log. For pure indexing tools, see the manual procedures presented in the next section.
3. Select a time period for an initial baseline: “90 days” or “all time” if logs have been collected for less than 90 days. In some cases, 7-30 days periods can be used.
4. Run a report that shows counts for each message type. This report indicates all the log types that are encountered over the 90 day period of system operation
5. Assuming that no breaches of card data have been discovered, we can accept the above report as a baseline for “routine operation”
6. An additional step should be performed while creating a baseline: even though we assume that no compromise of card data has taken place, there is a chance that some of the log messages recorded over the 90 day period triggered some kind of action or remediation. Such messages are referred to as “known bad” and should be marked as such.

Let’s go through a complete example of the above strategy.
1. Make sure that relevant logs from a PCI DSS application are aggregated by the available  log management tool
At this step, we look at the log management tool and verify that logs from PCI applications are aggregated. This can be accomplished by looking at a report listing all logging devices:
Timeframe: Jan 1, 2009 - Mar 31, 2009 (90 days)
Device Type | Device Name | Log Messages
Windows 2003 | Winserver1 | 215762
Windows 2003 | Winserver2 | 215756
SANITIZED1 | SANITIZED1 | 53445
SANITIZED2 | SANITIZED2 | 566
SANITIZED3 | SANITIZED3 | 3334444
This would indicate that aggregation is performed as needed.
2. Confirm that the tool can “understand” (parse, tokenize, etc) the messages and identify the “event ID” or message type of each log
This step is accomplished by comparing the counts of messages in the tool (such as the above report that shows log message counts) to the raw message counts in the original logs.
3. Select a time period for an initial baseline: “90 days” or “all time” if logs have been collected for less than 90 days
In this example, we are selecting 90 days since 90 days of logs are available.
4. Run a report that shows counts for each message type. For example, the report might look something like this:
Timeframe: Jan 1, 2009 - Mar 31, 2009 (90 days)
Event ID | Event Description | Count | Average Count/day
1517 | Registry failure | 212 | 2.3
562 | Login failed | 200 | 2.2
563 | Login succeeded | 24 | 0.3
550 | User credentials updated | 12 | 0.1
This report indicates all the log types that are encountered over the 90 day period of system operation.
5. Assuming that no breaches of card data have been discovered, we can accept the above report as a baseline for “routine operation”
During the first review of the logs, it might be necessary to investigate some of the logged events before we accept them as normal. The next step explains how this is done.
6. An additional step should be performed while creating a baseline: even though we assume that no compromise of card data has taken place, there is a chance that some of the log messages recorded over the 90 day period triggered some kind of action or remediation. Such messages are referred to as “known bad” and should be marked as such.
Some of the logs in our 90 day summary are actually indicative of problems and require an investigation:
Event ID | Event Description | Count | Average Count/day | Routine or “bad”
1517 | Registry failure | 212 | 2.3 |
562 | Login failed | 200 | 2.2 |
563 | Login succeeded | 24 | 0.3 |
550 | User credentials updated | 12 | 0.1 |
666 | Memory exhausted | 1 | N/A | Action: restart system
In this report, we notice the last line, the log record with an event ID = 666 and event name “Memory exhausted” that only occurred once during the 90 day period. Such rarity of the event is at least interesting; the message description (“Memory exhausted”) might also indicate a potentially serious issue and thus needs to be investigated as described below in the investigative procedures.

Creating a baseline manually is possible, but more complicated.


To be continued.

Follow PCI_Log_Review to see all posts.



Wednesday, December 08, 2010

Complete PCI DSS Log Review Procedures, Part 6

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company. As I am preparing to handle more such engagements (including ones not focused on PCI DSS, but covering other compliance or purely security log reviews), I decided to publish a heavily sanitized version of that log review guidance as a long blog post series, tagged “PCI_Log_Review.”  It was written to be a complete and self-contained guidance document that can be provided to people NOT yet skilled in the sublime art of logging and log analysis, in order to enable them to do the job and then grow their skills. It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by anybody, with any regulation or without any compliance flavor at all.

This is the sixth post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.

And so we continue with our Complete PCI DSS Log Review Procedures:

Creating Log Message Types

It is important to note that explicit event types might not be available for some log types. For example, some Java application logs and some Unix logs don’t have explicit log or event types recorded in logs. Thus, what is needed is to create an implicit event type. The procedure for this case is as follows:

1. Review the log message and identify what application or device produced it (if multiple logs are collected together)

2. Identify which part of the log message identifies what it is about

3. Determine whether this part of the message is unique

4. Create an event ID from this part of the message

Even though log management tools perform this process automatically, it makes sense to go through an example of doing it manually in case the manual log review procedure is utilized. For example:

Example 1

1. Review the log message

The log message is:

[Mon Jan 26 22:55:37 2010] [notice] Digest: generating secret for digest authentication..

2. Identify which part of the log message identifies what it is about

It is very likely that the key part of the message is “generating secret for digest authentication” or even “generating secret”

3. Determine whether this part of the message is unique

A review of other messages in the log indicates that no other messages contain the same phrase, and thus this phrase can be used to classify a message as a particular type.

4. Create an event ID from this part of the message

We can create a message ID or message type as “generating_secret.” Now we can update our baseline that this type of message was observed today.
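The same manual procedure can be approximated with a small normalization function that strips the variable parts of a message and keeps the stable wording as the implicit event type. A minimal sketch (a rough approximation for illustration, not how any particular log management tool does it):

# Minimal sketch (illustrative only): derive an implicit event type from a raw
# log line by keeping only the stable wording and dropping variable parts.
import re

def event_type(line: str) -> str:
    # Drop a leading bracketed timestamp, e.g. "[Mon Jan 26 22:55:37 2010]"
    line = re.sub(r"^\[[^\]]+\]\s*", "", line)
    # Drop bracketed severities such as "[notice]" and any numbers
    line = re.sub(r"\[[^\]]+\]\s*", "", line)
    line = re.sub(r"\d+(\.\d+)*", "", line)
    # Keep the first few words as the type key
    words = re.findall(r"[a-z]+", line.lower())
    return "_".join(words[:4]) or "unknown"

raw = "[Mon Jan 26 22:55:37 2010] [notice] Digest: generating secret for digest authentication.."
print(event_type(raw))   # prints "digest_generating_secret_for"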

Let’s go through another example using Java-based payment application logs.

[A.C. – sorry, sanitized]

An initial baseline can be quickly built using the following process, presented below for two situations: with automated log management tools and without them.

In addition to this “event type” baseline, it makes sense to perform a quick assessment of the overall log entry volume for the past day (past 24-hour period). Significant differences in log volume should also be investigated using the procedures defined below. In particular, loss of logging (often recognized from a dramatic decrease in log entry volume) needs to be investigated and escalated as a security incident.

To be continued.

Follow PCI_Log_Review to see all posts.


Tuesday, December 07, 2010

Complete PCI DSS Log Review Procedures, Part 5

Once upon a time, I was retained to create a comprehensive set of PCI DSS-focused log review policies, procedures and practices for a large company. As I am preparing to handle more such engagements (including ones not focused on PCI DSS, but covering other compliance or purely security log reviews), I decided to publish a heavily sanitized version of that log review guidance as a long blog post series, tagged “PCI_Log_Review.”

It was written to be a complete and self-contained guidance document that can be provided to people NOT yet skilled in the sublime art of logging and  log analysis  in order to enable them to do the job and then grow their skills. It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by everybody and with any regulation or without any compliance flavor at all.

This is the fifth post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.

And so we continue with our Complete PCI DSS Log Review Procedures:

Periodic Log Review Practices and Patterns

This section covers periodic log review procedures for applications in scope for this project. Such review is performed by either an application administrator or a security administrator (see the section Roles and Responsibilities above).

Such review can be performed using:

  1. automated tools (which is explicitly allowed in PCI DSS), or
  2. manually, if such automated tools are not available or do not support log types from PCI DSS application.

Let’s build the entire end-to-end procedure for both cases and then illustrate it using examples from various log sources commonly in scope for PCI DSS.

The basic principle of PCI DSS periodic log review (further referred to as “daily log review” even if it might not be performed daily for all the applications) is to accomplish the following:

  • Assure that cardholder data has not been compromised by attackers
  • Detect possible risks to cardholder data, as early as possible
  • Satisfy the explicit PCI DSS requirement for log review.

Even given the fact that PCI DSS is the motivation for daily log review, other goals are accomplished by performing it:

  • Assure that systems that process cardholder data are operating securely and efficiently
  • Ensure that other regulations and frameworks prescribing log review are complied with
  • Reconcile all possible anomalies observed in logs with other systems activities (such as application code changes or patch deployments)

In light of the above goals, the daily log review is built around the concept of “baselining,” or learning and documenting the normal set of messages appearing in logs. Baselining is then followed by the process of finding “exceptions” from the normal routine and investigating them to assure that no breach of cardholder data has occurred or is imminent.

The process can be visualized as follows:

[image omitted: diagram of the baselining and exception investigation process]

Let’s start from building a baseline for each application.

Before PCI daily log review is put into practice, it is critical to become familiar with normal activities logged on each of the applications.

The main baseline is to be built around log message types. For example, in the case of Windows OS event logs:

[Screenshot: an example Windows event log message illustrating its event type]

If the above message is seen for the first time and we confirm that the message does not indicate a critical failure of cardholder data security, we can add it to the expected baseline of observed event log types.
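
As an illustration only (this sketch is not from the original document), the step of adding confirmed message types to the baseline could be supported by a small script like the one below; the baseline file name and the sample Windows event IDs are assumptions made for the example.

# Illustrative sketch only: keep a baseline of known-good log message
# types (e.g., Windows event IDs) and surface any type seen for the
# first time so an analyst can confirm it before it is baselined.
import json
from pathlib import Path

BASELINE_FILE = Path("baseline_event_types.json")  # assumed file name

def load_baseline():
    if BASELINE_FILE.exists():
        return set(json.loads(BASELINE_FILE.read_text()))
    return set()

def save_baseline(baseline):
    BASELINE_FILE.write_text(json.dumps(sorted(baseline)))

def new_event_types(observed_types, baseline):
    """Return message types never seen before; these need analyst review."""
    return sorted(set(observed_types) - baseline)

if __name__ == "__main__":
    baseline = load_baseline()
    todays_types = ["4624", "4634", "4719"]  # example Windows event IDs
    for event_type in new_event_types(todays_types, baseline):
        print(f"New event type {event_type}: confirm it is benign before baselining")
        baseline.add(event_type)  # in practice, add only after analyst confirmation
    save_baseline(baseline)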

To be continued.

Follow PCI_Log_Review to see all posts.

P.S. Today is my birthday – gifts and congrats are accepted anytime :)

Possibly related posts:

Monday, December 06, 2010

Novell Bought–What Happens in SIEM?

After I came back from my vacation in Egypt, I started looking through all the noise related to the Novell acquisition by Attachmate. Everybody whines about Microsoft, Linux, VMware, patents, open source, the unknown “IP bundle”, etc – but what about SIEM? Novell has Sentinel SIEM, and NetIQ, the previous Attachmate victim, has its own toy “SIEM” – Security Manager.

Now, we can all joke about how sad that NetIQ SIEM really is, how it doesn’t scale and how nobody uses it – and culminate with quotes from Gartner’s Mark Nicolett about it (see “Magic Quadrant for Security Information and Event Management, 2010”): “not very visible in competitive evaluations” and “not growing with the market.” Seriously, if your product team fails to impress Mark with a few no-you-cannot-call-them-fake happy customer references and the final SIEM MQ report goes out with the above quotes, you should look into what seppuku really means to you :)

So, what can become the future “Attachmate SIEM”?

  1. Is it NetIQ SM, coming back as a lumbering zombie to the SIEM playground to be slaughtered in competitive deals?
  2. Is it Novell Sentinel, which is now improving both its technology and market position by leaps and bounds?
  3. Is it both but with some magic differentiation positioning? [ahem…like Tweedledum and Tweedledee of SIEM: IBM TCIM and IBM TSOM perhaps?]
  4. Is it some future integrated version of both?

While I don’t claim to possess any deep inside information on the deal, I think one can envision the last option actually working out OK over the long term for all involved – as well as for customers. For example, combine NetIQ SM strength on Windows (and servers/desktops in general) with Novell cross-platform correlation, UIs, new log manager, etc. Maybe reuse their FDCC-focused pieces too. Also, integrate NetIQ system management tools with Sentinel.

So, if I were them (and here is my unsolicited product strategy tip), I‘d salvage NetIQ “SIEM” for parts and use them to bulk up Novell Sentinel where such parts can be plugged in with minimum effort. Salvage some useful Windows correlation rules they used to have and port them into Sentinel. At the same time, integrate more functional NetIQ products with Sentinel to improve “IT and security management” story for Novell/Attachmate. In the short term, just make most NetIQ Security Manager customers happy by upgrading them to Novell Sentinel.

Saturday, December 04, 2010

BayThreat!

Just FYI, there is a new security conference in the Bay Area – see you all there next week; I will be doing a hilarious SIEM/log management talk there… It will be fun!!

What:

There's a new information security conference in the South Bay at The Hacker Dojo, December 10th & 11th. Perfect for those of us with exhausted travel budgets. We're an active community with tons of the smartest folks in the biz. It just makes sense that we would get a regional con of our own!

The theme for BayThreat is as simple as black & white: "Building & Breaking Security." Two tracks, each tackling opposite sides of the security fence. As Security Professionals, it is up to us to take that dichotomy and mold it into the shades of gray we use to protect our environment.

Shades of the Gray Area

We've invited speakers from all over the Bay Area and beyond to a two day conference at the Hacker Dojo in Mountain View, CA. The Dojo is a familiar place for the security community, as it hosts the #DC650 meetings every month.

We're excited to host speakers with security expertise from both sides of the fence. Early-acceptance speakers include Anton Chuvakin, Neel Mehta, Ryan Smith, Gal Shpantzer, Jim McLeod, Allen Gittelson, and Dan Kaminsky. The Call For Abstracts is now closed.

When: December 10-11, 2010

Where: Hacker Dojo, 140A South Whisman Rd, Mountain View, CA 94041  (map)

How much: nominal fee of (!) $45

Schedule: TBA here

Complete PCI DSS Log Review Procedures, Part 4

Once upon a time, I was retained to create comprehensive PCI DSS-focused log review policies, procedures and practices for a large company. As I am preparing to handle more such engagements (including ones not focused on PCI DSS, but covering other compliance or purely security log reviews), I decided to publish a heavily sanitized version of that log review guidance as a long blog post series, tagged “PCI_Log_Review.” It was written to be a complete and self-contained guidance document that can be provided to people NOT yet skilled in the sublime art of logging and log analysis in order to enable them to do the job and then grow their skills. It is focused on PCI DSS, but based on generally useful log review practices that can be utilized by everybody, with any regulation or without any compliance flavor at all.

This is the fourth post in the long, long series (part 1, part 2, part 3, all parts). A few tips on how you can use it in your organization can be found in Part 1. You can also retain me to customize or adapt it to your needs.

And so we continue with our Complete PCI DSS Log Review Procedures:

Logging and Log Review Policy

In light of the above, a PCI-derived logging policy must at least contain the following:

· Adequate logging that covers both logged event types and details

· Log aggregation and retention (1 year)

· Log protection

· Log review

Let’s now focus on log review in depth as defined in the project scope. PCI DSS states: “Review logs for all system components at least daily. Log reviews must include those servers that perform security functions like intrusion-detection system (IDS) and authentication, authorization, and accounting protocol (AAA) servers (for example, RADIUS).” It then adds that “Log harvesting, parsing, and alerting tools may be used to meet compliance with Requirement 10.6.”

PCI testing and validation procedures for log review mandate that a QSA should “obtain and examine security policies and procedures to verify that they include procedures to review security logs at least daily and that follow-up to exceptions is required.” The QSA must also, “through observation and interviews, verify that regular log reviews are performed for all system components.”

Below we document application Log Review Procedures and workflows that cover:

1. Log review practices, patterns and tasks

2. Exception investigation and analysis

3. Validation of these procedures and management reporting.

The procedures will be provided both for environments that use automated log management tools and for manual review where such tools are not available or are not compatible with the log formats produced by the application.
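
Purely as an illustration of the manual path (not part of the original document), a tiny helper that summarizes one day’s plain-text log file by program or message type might look like this; the syslog-like line format assumed by the regular expression is an example only, since real formats vary per application.

# Illustrative sketch only: support manual daily log review by counting
# entries per program/message type in a plain-text log file. The regex
# assumes a syslog-like line (timestamp, host, program: message) and is
# just an example; adjust it to the application's actual log format.
import re
import sys
from collections import Counter

LINE_RE = re.compile(r"^\S+\s+\d+\s+\S+\s+(?P<host>\S+)\s+(?P<program>[\w\-/.]+)")

def summarize(path):
    counts = Counter()
    with open(path, errors="replace") as handle:
        for line in handle:
            match = LINE_RE.match(line)
            counts[match.group("program") if match else "UNPARSED"] += 1
    return counts

if __name__ == "__main__":
    for program, count in summarize(sys.argv[1]).most_common():
        print(f"{count:8d}  {program}")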

Review Procedures and Workflows

The overall connection between the three types of PCI-mandated procedures is as follows:

[Diagram: how Periodic Log Review Practices, Exception Investigation and Analysis, and Validation of Log Review connect to each other]

In other words, “Periodic Log Review Practices” are performed every day (or less frequently, if daily review is impossible) and any discovered exceptions are escalated to “Exception Investigation and Analysis.” Both are documented as prescribed in “Validation of Log Review” to create evidence of compliance. We will now provide details on all three types of tasks. [A.C. – and so the fun starts!]
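
As a purely illustrative sketch (the field names, file name, and format are assumptions, not the logbook format prescribed in this document), the “create evidence of compliance” step could be captured in machine-readable form like this:

# Illustrative sketch only: append each investigated exception to a
# JSON-lines "logbook" file so that evidence of review and follow-up
# exists for later validation. Field and file names are assumptions.
import json
from datetime import datetime, timezone

def record_exception(log_source, message_type, analyst, finding,
                     path="log_review_logbook.jsonl"):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "log_source": log_source,
        "message_type": message_type,
        "analyst": analyst,
        "finding": finding,  # e.g., "benign, added to baseline" or "escalated"
    }
    with open(path, "a") as handle:
        handle.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    record_exception("app01", "4719", "jsmith", "escalated to incident response")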

To be continued.

Follow PCI_Log_Review to see all posts.

Possibly related posts:

Dr Anton Chuvakin