Tuesday, June 30, 2009

Vulnerability Scanning and Clouds/SaaS/IaaS/PaaS

Here is a very fun post called “Vulnerability Scanning and Clouds: An Attempt to Move the Dialog On…” I loved it so much, I will just quote my favorite parts here with a few comments. It starts like this:

“Much has been said about public IaaS providers that expressly forbid customers from running network scans against their cloud hosted infrastructure.”

In other words, they host your server, but you cannot check it for vulnerabilities at all.

“As has been noted before, a blanket ban on legitimate scanning activity by customers of their own infrastructure (whether outsourced or not) undermines security assurance processes and can make regulatory compliance impossible; e.g. PCI DSS mandates network vulnerability scanning as a control”

Use PaaS – lose PCI DSS compliance status. Nice! Sadly, the above interpretation is correct as, I suspect, the IaaS/PaaS provider is not allowed to scan your boxes either. So, nobody does. And then you get 0wned.

Next, the post highlights the fact that vulnerability management challenges are magnified by using PaaS/IaaS. For example:

“Scanning may trigger automated or manual actions by the provider. A common automated response from a provider is to apply traffic shaping to slow down the scan, or simply block the client IP address via an ACL update. This can lead to false negatives; i.e. vulnerabilities present are not discovered as the scanner IP was automagically identified as a noisy vulnerability scanner and auto-throttled/blocked.”

He then highlights a somewhat obvious, but still key, point:

“Even if spinning up copies of “known good/secure” virtual machine (VM), you still need to scan them.”

Further:

“So the bad guys get to scan because they don’t care and yet the customer, who wants to do the “right thing”, is not allowed to.  Is that rational?”

The solution is “easy” – if you need scanning (and everybody does!) and your PaaS doesn’t allow it, don’t use that PaaS. But is this a solution? The post then suggests a “ScanAuth API” which would allow controlled scanning:

“Something like an “ScanAuth” (Scan Authorize) API call offered by cloud providers that a customer can call with parameters for conveying source IP address(es) that will perform the scanning, and optionally a subset of their Cloud hosted IP addresses, scan start time and/or duration. This request would be signed by the customers API secret/private key as per other privileged API calls.”

I am not sure about you, but this sounds like an awesome idea! The original post calls for the start of a discussion, and I am happy to continue it. Finally, as of today, I don’t think we can rely on “other security tools” (software assurance, secure coding, etc.) to supplant the need for vulnerability scanning. If you deploy an OS in the cloud, you’d need to scan it.
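To continue that discussion in slightly more concrete terms, here is a rough sketch of what a customer-side “ScanAuth” call could look like. To be clear: no provider offers this today, and the action name, parameters and HMAC-based signing below are purely my assumptions, not anyone’s actual API.

# Hypothetical "ScanAuth" request builder: a sketch, not a real provider API.
import hashlib
import hmac
import json
import time

API_SECRET = b"customer-api-secret"                    # assumed shared secret from the provider
request = {
    "action": "ScanAuth",                              # assumed action name
    "scanner_ips": ["203.0.113.10"],                   # where the scan will originate
    "target_ips": ["198.51.100.20", "198.51.100.21"],  # subset of cloud-hosted hosts
    "start_time": int(time.time()) + 3600,             # scan window opens in an hour
    "duration_sec": 7200,                              # and stays open for two hours
}
payload = json.dumps(request, sort_keys=True).encode()
# Sign the request the same way other privileged API calls would be signed.
signature = hmac.new(API_SECRET, payload, hashlib.sha256).hexdigest()
print(payload.decode())
print("X-Signature:", signature)

The provider side would verify the signature, record the authorization and tell its IDS/IPS and traffic-shaping gear to leave those scanner IPs alone for the requested window.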

BTW, similar to network vulnerability scanning, the situation is actually worse for web application scanning. If you have “doubts” about your blog provider, can you hit it with Qualys WAS?

Monday, June 29, 2009

Free Log Data For Research - Update

This WASL 2009 workshop reminded me that I always used to bitch that academic researchers use some antediluvian data set (Lincoln labs 1998 set used in 2008 “security research”  makes me want to just curse and kick people in the balls, then laugh, then cry, then cry more…).

However, why are they doing it? Are they stupid? Don’t they realize that testing their “innovative intrusion detection” or “neural network-based log analysis” on such prehistoric data will not render it relevant to today’s threats? And will only ensure ensuing hilarity :-)

Well, maybe the explanation is simpler: there is no public, real-world source of logs that allows comparison between different security research efforts.

Correction! There wasn’t. And now there is!

I am hereby acting on my promise to share my collection of real-world logs, mostly collected from systems in the honeynets I ran in 2004-2006. As of now, if you need logs for research, please contact me or get them directly here.

Here is the description of the collection currently shared (more to come!):

 

Size: 100MB compressed; about 1GB uncompressed

Date collected: 2006

Type: Linux logs /var/log/messages, /var/log/secure, process accounting records /var/log/pacct, other Linux logs,  Apache web server logs /var/log/httpd/access_log, /var/log/httpd/error-log, /var/log/httpd/referer-log and /var/log/httpd/audit_log, Sendmail /var/log/mailog, Squid /var/log/squid/access_log, /var/log/squid/store_log, /var/log/squid/cache_log, etc.

License:  public; use for whatever you want. Acknowledging the source is nice; Beerware license is even better.

Sanitization: No sanitization or modification was performed. No additional sanitization is required before use for research.

 

So, for now, if your research requires real-world logs with normal operation data, suspicious data, anomalous data and attack data – grab it here.
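If you want a zero-effort starting point for poking at the collection, here is a tiny sketch (mine, not part of the shared archive) that counts messages per generating program in a standard BSD-syslog file such as an unpacked /var/log/messages; the file name below is a placeholder.

# Count syslog messages per program: a trivial first look at a log file.
import collections
import re

# Standard BSD syslog header: "Mon DD HH:MM:SS host program[pid]: message"
SYSLOG_RE = re.compile(
    r"^\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2}\s+(?P<host>\S+)\s+(?P<prog>[^\[:\s]+)"
)

counts = collections.Counter()
with open("messages", errors="replace") as f:   # placeholder: an unpacked log file
    for line in f:
        m = SYSLOG_RE.match(line)
        if m:
            counts[m.group("prog")] += 1

for prog, n in counts.most_common(10):
    print(f"{n:8d}  {prog}")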

UPDATE: I have created a Google Group, “log-sharing”, to notify those interested about the shared logs. Please sign up here. The purpose of the group is to notify about new logs shared, discuss the shared logs, collect references to research that uses the logs, post requests for more logs, discuss the events observed in logs, etc.

UPDATE2: the logs are now hosted here, courtesy of one of my readers who prefers to remain anonymous. Thanks A LOT for hosting the logs! Despite the fact that the logs are fully public now, I suggest you still sign up for the Google group as I will announce new log sharing there.


Friday, June 26, 2009

CIS Metrics Fun Continues

CIS Security Metrics Guide (v. 1.0.0) has been out for a while; I just forgot to announce it here on my blog. The document is definitely a work in progress and the team (myself included) has a lot to do to make it better; some metrics might even change and new ones be added. BTW, the project goal was to “develop a balanced combination of unambiguous and logically defensible outcome and practice metrics measuring” and also to “utilize data commonly available in most enterprises.” Since I consider security and information risk management metrics to be one of the most important security challenges, I was very excited to help with this guide.

Here is the list of domains and metrics from the CIS site; it contains a mix of technical (automatable) and non-technical metrics:

“Currently, the consensus group has developed metrics covering the following business functions:

  • Application Security
    • Number of Applications
    • Percentage of Critical Applications
    • Risk Assessment Coverage
    • Security Testing Coverage
  • Configuration Change Management
    • Mean-Time to Complete Changes
    • Percent of Changes with Security Review
    • Percent of Changes with Security Exceptions
  • Financial
    • Information Security Budget as % of IT Budget
    • Information Security Budget Allocation
  • Incident Management
    • Mean-Time to Incident Discovery
    • Incident Rate
    • Percentage of Incidents Detected by Internal Controls
    • Mean-Time Between Security Incidents
    • Mean-Time to Recovery
  • Patch Management
    • Patch Policy Compliance
    • Patch Management Coverage
    • Mean-Time to Patch
  • Vulnerability Management
    • Vulnerability Scan Coverage
    • Percent of Systems Without Known Severe Vulnerabilities
    • Mean-Time to Mitigate Vulnerabilities
    • Number of Known Vulnerability Instances”

Download the metrics here or use the direct PDF link. BTW, the Quick Start Guide to launch your metrics program using CIS Security Metrics is coming soon. Also, a global data sharing project based on these metrics may be launched in the future.
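To show how mechanical many of these metrics are once the underlying records exist, here is a tiny sketch of two incident metrics; the record layout and the exact interpretation of “discovery” and “recovery” below are my simplifications, and the definitions in the guide itself obviously govern.

# Sketch: two CIS-style incident metrics from simple (made-up) incident records.
from datetime import datetime

# Assumed record layout: (occurred, discovered, recovered)
incidents = [
    (datetime(2009, 5, 1, 9, 0),  datetime(2009, 5, 1, 17, 0), datetime(2009, 5, 2, 9, 0)),
    (datetime(2009, 5, 10, 3, 0), datetime(2009, 5, 12, 3, 0), datetime(2009, 5, 13, 3, 0)),
]

def hours(delta):
    return delta.total_seconds() / 3600

# Mean-Time to Incident Discovery: average of (discovered - occurred)
mttid = sum(hours(d - o) for o, d, r in incidents) / len(incidents)
# Mean-Time to Recovery: here taken as the average of (recovered - discovered)
mttr = sum(hours(r - d) for o, d, r in incidents) / len(incidents)

print(f"Mean-Time to Incident Discovery: {mttid:.1f} hours")
print(f"Mean-Time to Recovery: {mttr:.1f} hours")

The hard part, of course, is not this arithmetic; it is having trustworthy incident records in the first place.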

Enjoy!

Wednesday, June 24, 2009

MUST READ: Best Chapter From “Beautiful Security” Downloadable!

This is pretty much a repost from Mark’s blog, hopefully he doesn’t mind that I am highlighting his awesomeness ;-)

So, “Tomorrow’s Security Cogs and Levers” by Mark Curphey, by far the best chapter from the “Beautiful Security” book (my book review here), is now downloadable in PDF form.

It is hard to decorate this post with a representative quote, but how about this:

“The security tools and technology available to the masses today can only be described as primitive in comparison to electronic gaming, financial investment, or medical research software. […]  the information security management programs that are supposed to protect trillions of dollars of assets, keep trade secrets safe from corporate espionage, and hide military plans from the mucky paws of global terrorists are often powered by little more than Rube Goldberg machines (Heath Robinson machines if you are British) fabricated from Excel spreadsheets, Word documents, homegrown scripts, Post-It notes, email systems, notes on the backs of Starbucks cups, and hallway conversations. Is it any wonder we continue to see unprecedented security risk management failures and that most security officers feel they are operating in the dark?”

or

“I was once accused of trivializing the importance of security when I put up a slide at a conference with the text “Security is less important than performance, which is less important than functionality,” followed by a slide with the text “Operational security is a business support function; get over your ego and accept it.””

and

“The areas I’ve pulled together in this chapter—from business process management, number crunching and statistical modeling, visualization, and long-tail technology—provide fertile ground for security management systems in the future that archive today’s best efforts in the annals of history.”

If you are not buying the book, please at least read Mark’s chapter. It exudes pure awesomeness.


Tuesday, June 23, 2009

PCI DSS Prioritized Webcast Slides and Q&A

It took a while, but here are some fun PCI Q&A from that PCI DSS Prioritized webinar we did a few days ago.

Some questions are way too deep to be answered in a blog post; still, I hope the answers are useful to my readers.

Q: I have a Windows 200X Server and “XYZ Charge” [A.C. - names anonymized] credit card processing software on it; the server is also a domain controller and overall the only server in the network. Can my network EVER be compliant? PCI DSS speaks of servers not running multiple services such as DNS, Active Directory, etc., on one server. That would mean a small network would need around four servers if they keep cardholder data. What should I do?

A: First, see Myth #6 from the previous webcast. Namely, PCI DSS is not about “compliant networks”; it is about the entire organization. Second, common interpretations of Requirement 2.2.1 do prohibit the use of your DNS server for payment processing. While you might be able to demonstrate compensating controls for other services, combining the actual payment application with public network services is not a good idea.

Q: We have “XYZ DB” as our database and “XYZScan” anti-virus as our security [A.C. - names anonymized]. As a non-profit org., what would be the best route to take to become compliant? Any huge pitfalls to avoid?

A: Obviously, since we don’t know your exact situation, such as your organization size and the method you use for processing credit cards, it is impossible to give any “authoritative” answer on how to become compliant. However, there is one thing that we can recommend: try not to do ANY card processing “in-house”; instead, outsource as much of it as possible to others who specialize in secure payments. This will limit your exposure to credit card data and thus reduce your risk of BOTH a successful hacker attack and a PCI non-compliance “assessor attack.” The risks of hosting your own card processing infrastructure are just too high nowadays. As one of my QSA friends once said about card data: “don’t touch that toxic sh*t!”

Q: What systems are in–scope?

A: In brief, systems that “store, process or transmit card data” and those directly connected to them. The PCI DSS document outlines an approach for determining scope; here is an excerpt:

[Image: PCI DSS scoping excerpt]

Q: Please help to clarify what “reducing the scope means.” In our scenario we have two main servers that have the databases with credit card information. We are assuming that all computers connected to this network (even through VPN) are required to be PCI DSS compliant. However, would it be acceptable to move these servers to their own network? Does that reduce the scope and what are the most used methods of separating the networks e.g. VLAN, separate firewall segment or zone?

A: “Reducing the scope” means making sure that PCI DSS applies to a smaller part of your organization by limiting where card processing takes place AND by segmenting this part from the rest of the network. The reason for this is that PCI requirements apply to systems that process cards AND the ones connected to them. It is also explained in the PCI DSS document. The scope can be reduced by things like:

1. Limiting the number of systems where card data is stored, processed OR even transmitted through

2. Separating such systems from the rest of the network via a firewall, filtering router or something else described in PCI DSS. Using VLANs for separation seems to be a subject of debate in the QSA community; my impression was that it is more often acceptable than not.

In your case, it most likely means moving the card servers to a separate firewall zone and applying the ruleset described in PCI Requirement 1. By the way, why do you have those “databases with card information”? Is there any way to not store the data at all? This will be a perfect example of scope reduction as well.

Q: If I place the database with cardholder data in a separate VLAN or behind a firewall does it reduce the scope? Are the computers outside of that network required to be compliant?

A: If your “database with cardholder data” is separated by the firewall (configured as per PCI DSS Requirement 1!) from the rest of the network, the scope will likely be limited to systems which are in the same firewall zone as the card data database. Computers that are not directly connected to the database due to firewall separation would not be in scope, provided the firewall is configured as per PCI DSS Requirement 1 and thus does indeed separate them. Again, why is there a database with cardholder data? Is there a way to not have such a “hacker magnet” database?!
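To make the scoping logic concrete, here is a toy sketch of the “card data systems plus whatever can still talk to them” rule; the host names and the allowed-connection list are made up, and a real scope determination is of course done against actual firewall configs and data flows, not a Python set.

# Toy model of PCI DSS scoping: systems that store/process/transmit card data,
# plus systems the firewall still allows to talk to them, are in scope.
card_systems = {"db1", "db2"}            # hold cardholder data (made-up names)
allowed_connections = {                  # what the firewall permits (assumed)
    ("app1", "db1"),
    ("app1", "db2"),
    ("workstation7", "fileserver"),      # no path to the card zone
}

in_scope = set(card_systems)
for src, dst in allowed_connections:
    if dst in card_systems:
        in_scope.add(src)
    if src in card_systems:
        in_scope.add(dst)

print("In scope:", sorted(in_scope))     # ['app1', 'db1', 'db2']

Tighten the firewall so that fewer systems can reach the card zone and the in-scope set shrinks; delete the cardholder data and it shrinks a lot further.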

Q: I have heard that some companies require network firewall users to open things on the firewall in order to scan. Why?

A: This stems from the following requirement of “PCI DSS Security Scanning Procedures” [PDF]: “Arrangements must be made to configure the intrusion detection system/intrusion prevention system (IDS/IPS) [A.C. – as well as firewalls] to accept the originating IP address of the ASV. If this is not possible, the scan should be originated in a location that prevents IDS/IPS interference.” While it sounds risky to some, it is NOT “a security crime;” you are actually doing this to increase security by letting the scanner assess the vulnerabilities on the exposed systems. It goes without saying that you need to ONLY allow access for the scanner IP addresses and ONLY for a limited time (“scan window”) AND to monitor such network access, even if you are sure it is coming from your scanning provider.

Q: Is there any specific priority listing for meeting application security (6.3)?

A: The “Prioritized Approach for DSS 1.2” document suggests that application security requirements 6.1, 6.2 and 6.3 (“Develop software applications in accordance with PCI DSS (for example, secure authentication and logging) and based on industry best practices and incorporate information security throughout the software development life cycle.”), as well as 6.5 and 6.6, are handled in Phase 3.

Q: Are small businesses subject to the same PCI DSS requirements as large businesses?

A: The requirements are the same for everybody. However, the exact manner in which you accept credit cards does dramatically change the scope of required PCI DSS validation. For example, if you are a “card-not-present (e-commerce or mail/telephone-order) merchant” AND have “all cardholder data functions outsourced” and have an overall volume of less than 20,000 transactions/year (Level 4, but depending on the card brand), you only need to answer about 13 questions in the Self-Assessment Questionnaire (SAQ). If you process more than 6 million transactions and have your own payment infrastructure, you’d need to pass an actual on-site assessment covering a broad range of requirements (all 12 requirements with more than 200 sub-requirements).

Q: Where do I get that free PCI eBook you mentioned?

A: “PCI Compliance for Dummies” eBook.

Enjoy! The next webcast will be a lot of fun; watch this blog and your email (if you are on Qualys mailing lists) for announcements.


Monday, June 22, 2009

On “PCI Letter”

So, the infamous PCI letter (PDF), an ultimate FunRead (tm) :-), is revealed by Martin. In my analysis below, I hereby promise to remain civilized, even though it would be a trifle too hard at times ;-)

The letter starts on a jarring note, namely, the claim that all merchants “take data security seriously.” This definitely helped me learn a new synonym for the word “ignore” :-) It then goes on to mention that despite such HUGE attention to data security, it is hard for them to comply with the PCI standard….

[Image: excerpt from the PCI letter]

So, item #1 about “incorporating formal review and comment phase on changes to PCI DSS” shows that the writers never looked at “Lifecycle Process for Changes to PCI DSS” (PDF), prominently featured at the PCI Council website. How can one miss it? Beats me. The council document does say that “Changes to the standard follow a defined 24-month lifecycle (!) with five stages, described below. The lifecycle ensures a gradual, phased use of new versions of the standard without invalidating current implementations of PCI DSS or putting any organization out of compliance the moment changes are published.” Specifically, “The second stage allows for market input into evolving PCI DSS through a formal feedback process.” In light of this, I fail to see what the letter writers really mean here.

They also mention ASC X9. Have you ever heard of ASC X9? Yup, my point exactly… The last RSA show had a couple of panels that mentioned X9.111 “X9 Guide to Pentesting,” which probably proves its wide industry adoption.

Item #2 asks for extensions to PCI 1.1, which otherwise “sunsetted” in Dec 2008. I agree that WEP needs to be in use for another two years… NOT! Other than that, the differences between 1.1 and 1.2 (detailed here [PDF]) are so minor that needing another year sounds…well… a wee bit disingenuous.

In the next item, #3, they ask for a standard to include “end to end” encryption. Awesomeness ensues ;-) No, really!! It is pretty awesome.

Following that, item #4 asks for marking “key controls” and for overall “control rationalization.” The former request seems well-served by the recent “Prioritized Approach for DSS 1.2” [PDF], which explains what to do first, second, etc. However, most people agree that more than 224 sub-requirements can use some rationalization. For instance, I am now trying to comb the PCI DSS doc for all the references to vulnerability management, and I am finding that some rationalization would be handy. For example, why does anti-malware belong in the “vulnerability management” theme, and not in the “building secure network” theme?

[Image: excerpt from the PCI letter, continued]

The final item, #5, firmly treads into the bizarre as it rehashes the idea that the card brands make merchants store card data, while in reality they are begging them to stop doing so (e.g. Visa’s DropTheData site, MasterCard Security Rules, etc.).

It ends on a funky note: “today most of the risk and financial burden for operating in compliance with PCI DSS is borne by the merchants.” Funny they’d mention it, given that most of the financial burden for replacing cards “lost” by the merchants, and for the resulting fraud, is borne by the issuing banks.

Here is something else funny: the letter mentions Sarbanes-Oxley act twice as an example of how regulation should be done. No comment here…

In any case, I don’t think this letter is “mostly smoke and mirrors meant to draw attention away from the fact that many merchants don’t want to spend the time and money to become PCI compliant” (as Martin points out); it does contain interesting insight on how many organizations [that we buy from on a daily basis] view data security and regulatory compliance.

There you have it. Thanks to Martin for posting the letter!

UPDATE: PCI Council responds via a podcast at CSO Magazine site.


Saturday, June 20, 2009

Why No Open Source SIEM, EVER?

Here is a perfect weekend post – on SIEM :-) OK, all this Google web traffic of people searching for “open source SIEM” (sometimes “open source SIM”, almost never “open source SEM” {is SEM… dead? :-)}) continues to fill my web server logs, and it finally prompted me to write this post, rather than simply whine about it, as I have been doing for 3 years :-)

It all started here when Matasano folks (sockpuppet.org at the time), in a rare bout of punditry proclaimed back in 2005 (!):

“A Credible Open-Source SIM

I predicted that, just as SourceFire commoditized and co-opted the IDS market, a nascent open source project would challenge SIM products like ArcSight and Cisco MARS.

Result: No Credit [A.C. – this is a later addition to their post when their scored their 2006 predictions]

What’s taking you guys so long? Getting spooked that all the money seems to be going to log management? That’s exactly the dynamic Snort charged in to! Get with the program!”

or in another version posted on DailyDave:

“A Credible Open-Source SIM

“There's about $100MM spent annually on products that manage and correlate logs. Guess what? None of it is hard to do. The underlying tools are there. Customers know how to do this better than the vendors do. Expect a mainstream open-source combination of Argus <http://www.qosient.com/argus/> and Sguil <http://sguil.sourceforge.net/> to own the security management conversation next year.”

When I saw it, I got upset that people otherwise so amazingly intelligent (example: Thomas Ptacek) could make claims so incorrect :-) A fun discussion of this prediction emerged in multiple places, also back in 2005-2006: the post comments, DailyDave (Dave’s post, my post, David Bianco’s post, my next post, the whole thread), and my blog (On Open Source in SIEM and Log Management).

Among all the discussion, this piece by Dave stood out:

“My prediction: No credible open source SIM (aka, log aggregator).

Boring work gets done by corporations, and that's that. Not to mention the impossibly high barrier to market of having to purchase and maintain all the random devices that generate logs.”

This is basically the essence of my argument which I also made here in my approaches to log management presentation, slide 10 (even though I was arguing against building one’s homegrown log management or SIEM). To summarize:

  1. Building a SIEM is fun (perfect for open source), BUT SIEM is inherently “high-maintenance” via a lot of boring, manual tasks (one example: check Cisco.com weekly for changes to the log messages of their hundreds of devices, THEN pull your hair out anyway when logs change without any documentation). Maintenance is NOT an open-source forte, and for SIEM, “no meticulous maintenance –> no value.” The open source community is not so great with eternal commitments.
  2. To analyze logs, you need to have logs. Either you get the logging devices (expensive –> not for open source) or you get the logs. Many people said “oh, open source community will collaborate on that.” Guess what? It didn’t (attempts here, here, here (now redirects)). When log standards (CEE) emerge, it will change; today it is impossible.
  3. Can the task of log analysis be pushed to end users of the open source tool (after all, they are getting it for free, they can do some work…)? Yes, it can, provided there are tools to drastically simplify the logs->intelligence path (at one point, I hoped splunk’s “Event type discovery” would do it, but it didn’t); such tools do not exist. And, sadly, normal people don’t write regexes (good joke about it). To top it off, writing parsing rules (see the small sketch after this list) is nowhere near as much fun as writing IDS sigs or vulnerability checks – and then packet headers don’t change on you, while log headers do.
  4. A log analysis or SIEM system needs to be able to handle volume, not only the live flow, but also storage. A lot of tools work well on 10MB of logs, but then again, so does the human brain. When you move to TB volumes, a lot of simple things start to require engineering marvels. Is it as hard as getting the Linux kernel (the pinnacle of open source engineering) to perform? Probably not as hard, but the OSS SIEM project creator needs to be BOTH a log expert and a performance expert.
  5. SIEM is also a lot about integration and not just hard-core coding. I believe in an open-source correlation engine (SEC, OSSEC, general-purpose Esper), maybe in an open source parser generator, possibly in an open source data presentation UI, but definitely not in all the pieces working together and pulling log data and context data from all the required sources and then making sense of it. There are way too many moving pieces – as we all know, many SIEM deployments fail not because of crappy technology, but because of politics.
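Here, as promised in item 3, is the kind of per-source parsing rule I mean; this is a deliberately simplified sketch with a single regex for one sshd message. Multiply it by hundreds of device and application message types, then keep all of them current as vendors silently change their formats, and the maintenance problem becomes obvious.

# One parsing rule out of the hundreds a SIEM needs, and the part that rots
# the moment a vendor changes its log format without telling anyone.
import re

PARSERS = {
    "sshd_failed_password": re.compile(
        r"Failed password for (?:invalid user )?(?P<user>\S+) "
        r"from (?P<src_ip>\S+) port (?P<port>\d+)"
    ),
    # ...hundreds more rules, one per device/app/message type, would go here
}

line = ("Jun 20 11:22:33 host1 sshd[4242]: Failed password for invalid user "
        "admin from 192.0.2.5 port 51511 ssh2")
for name, rx in PARSERS.items():
    m = rx.search(line)
    if m:
        print(name, m.groupdict())   # {'user': 'admin', 'src_ip': '192.0.2.5', 'port': '51511'}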

There are other related grand problems too, but I digress.

Some people (in the same DD thread) even suggested that the reason the open source community didn’t get to tackle the above problems is simple: SIEM products aren’t really needed (Richard doesn’t have much love for them, for example) and the community will find some other way of solving it (“a small, useful, standalone tool will almost always be more functional and more reliable than a merit badge feature equivalent in a commercial product”). I agree with that in principle, but if part of SIEM’s value-add is "tying stuff together", then having analysts watch 10 "small, useful, standalone tools" is actually a way back, not forward.

Maybe an open source SIEM project can only support a few “right” log messages? This was a very popular view in the 90s: just filter the logs and see the important ones. But do you know why Marcus created “artificial ignorance”? ‘Cause the “filter the logs” approach doesn’t really work: you never know which ones are the right ones until you look at them all.

What about the existing products? Here is where they stand:

  • Prelude is not a SIEM and hardly anyone uses it.
  • Sguil is not a SIEM. It is based on a different model, assumptions (=intelligent user) and use cases.
  • OSSEC is awesome, but also not a SIEM. It has correlation now and “wide-ish” log source support, but doesn’t measure up to SIEM in many dimensions.
  • OSSIM is indeed an open-source SIEM. Now that it has a full-blown corporate parent, it has potential. In fact, when I first saw it in 2005 (maybe before, not sure), it had potential too. It is just that now it has more of it!

Now, more on OSSIM: Dominic and the crew are awesome, but I think that the above considerations will prevent OSSIM from becoming widely adopted. Here is why: how many open source NIDS do you know? 94% [source: srand() :-)] of folks in security will say: one (Snort), another 3% will say two (Snort, Bro), another 2% will say 3 (Snort, Bro, Prelude), another 1% will say something else. Now, try that with open source SIEM: there is no “Snort of SIEM” and the result will be different. IMHO this is inherent (= not a question of time) due to the incompatibility of SIEM and the open source model, shown in items 1-5 above.

BTW, somehow the recent Twitter SIEM madness (eh… #SIEM madness) caused other people to think about this too.

Conclusions:

  • So, at the risk of eating major crow later, I insist: no credible open source SIEM will emerge until 2020 (niche projects will continue, just as open source NIDS existed before Snort)
  • Taking it to an extreme, I think a commercial SIEM may die first before the open source one is born…
  • Topics of industry discussions on SIEM from 2005 are still relevant => SIEM equals stagnation.

BTW, did I mention cloud/SaaS SIEM? Oops, I did now :-)

Have fun with it!


Friday, June 19, 2009

How to Harness the Power of PCI DSS? Tip #2

Inspired by the panels we did on PCI (here, here), I decided to start a series of posts with tips on harnessing the amazing motivating power of PCI DSS for meaningful security improvements. These tips are most useful for those in the trenches who are required to comply with PCI DSS while keeping the systems running and secure, but maybe do not know how, and not to those who whine, bitch, blog and now Twitter their way to infamy…

So, got a nice heavy PCI hammer? Where do you hit for security?


Tip #2 will again focus on something very basic, non-controversial and – we are in luck! – spelled out very clearly in PCI DSS: namely, NOT ever storing certain data. This requirement is also one of the key components of Phase 1 of the PCI DSS Prioritized Approach (detailed here).

By the way, did you know that “data deletion” represents one of the simplest-yet-effective information risk reduction methods ever invented by humankind? :-)

This is exactly why this requirement is so important: it is much easier to delete the data and organize your business process around not having it than to protect and secure such data (and, yes, some will point at this fact and say “Security FAIL!”).

So, what data can never, ever, ever, ever, ever, ever be persistently stored if you are to have any hope of PCI DSS compliance for your organization [to the best of my knowledge, “storage” in RAM is not considered storage]? The answer is easy:

  1. Full track data from the magnetic stripe, magnetic stripe image on the chip, or elsewhere
  2. CAV2/CVC2/CVV2/CID code, a 3- or 4-digit value printed on the card (explained for laymen here)
  3. Personal identification number (PIN) or the encrypted PIN block.

Here is a reference from the PCI DSS document:

[Image: PCI DSS table of account data elements that must never be stored]

Remember, if you are persistently storing ANY of the above (full track, CVV2, PIN), you are NOT PCI DSS compliant and CANNOT BE PCI DSS validated [not legitimately, at least!]. Also see Visa’s famous DropTheData site.

Finally, this tip results in a simple action item:

  • Find out if you have such data stored (a crude detection sketch follows this list).
  • If there happens to be an active business process that results in such data or that relies on having such data, adjust it.
  • Delete the data.
  • Make sure that no accidental/undocumented storage is taking place.
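For the first action item above, here is a crude sketch of what “find out” can look like in practice: a search for track-2-looking data in a dump file. The pattern and the Luhn check will still produce false positives and will miss anything encoded or encrypted, so treat it as a starting point, not a discovery tool; the file name is a placeholder.

# Crude search for track-2-looking data (";PAN=YYMM...") in a text dump.
# A real data discovery effort needs far more than this; it is only a sketch.
import re

TRACK2_RE = re.compile(r";(\d{13,19})=(\d{4})\d+")

def luhn_ok(pan):
    """Luhn check to weed out random digit runs that are not card numbers."""
    total = 0
    for i, ch in enumerate(reversed(pan)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

with open("suspect_dump.txt", errors="replace") as f:   # placeholder file name
    for lineno, line in enumerate(f, 1):
        for pan, _expiry in TRACK2_RE.findall(line):
            if luhn_ok(pan):
                print(f"line {lineno}: possible track data, PAN ending in {pan[-4:]}")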

Enjoy decreased data loss risk, courtesy of PCI DSS :-) Also, please remember that storage of prohibited data killed CardSystems back in 2004 (well, that was one of the things…)


Wednesday, June 17, 2009

PCI DSS Marches On: Level 2 Merchants to Require An On-Site Assessment


Branden alerted "the whole wide PCI DSS realm" today with this: "NEWS FLASH: MasterCard Requires On-Site QSA for Level 2 Merchants."

This has been rumored for a while, and turned out to be true. Here is a relevant updated table from the Mastercard site:


Obviously, awesome news for security! Now folks who are hell-bent on not having any concerns for customer data will need to deceive an actual live QSA rather than simply lie on their SAQ...

Tuesday, June 16, 2009

Workshop on the Analysis of System Logs (WASL) 2009 CFP

All Loggies everywhere:

WASL 2009 workshop is your chance to shine. This is also a chance to prove that something actually - gasp! - USEFUL and USABLE can come out of academic security research. Moreover, this workshop is designed to be a mix of academia and industry.

Full announcement follows below:

Workshop on the Analysis of System Logs (WASL) 2009
http://www.systemloganalysis.com Call for Papers

===============================
October 14, 2009
Big Sky, MT
(at SOSP)
===============================

FULL PAPER SUBMISSION: Monday, June 29th, 2009
AUTHOR NOTIFICATION: Monday, July 27, 2009
FINAL PAPERS DUE: Monday, September 14, 2009

--------------------------------------------------------------------------

System logs contain a wide variety of information about system status and health,
including events from various applications, daemons and drivers, as well as sampled
information such as resource utilization statistics. As such, these logs represent a
rich source of information for the analysis and diagnosis of system problems and
prediction of future system events. However, their lack of organization and the general
lack of semantic consistency between information from various software and hardware
vendors means that most of this information content is wasted. Indeed, today's
most popular log analysis technique is to use regular expressions to either detect
events of interest or to filter the log so that a human operator can examine it manually.
Clearly, this captures only a fraction of the information available in these logs and
does not scale to the large systems common in business and supercomputing environments.

This workshop will focus on novel techniques for extracting operationally useful
information from existing logs and methods to improve the information content of future
logs. Topics include but are not limited to:

o Reports on publicly available sources of sample log data.
o Log anonymization
o Log feature detection and extraction
o Prediction of malfunction or misuse based on log data
o Statistical techniques to characterize log data [A.C. - a very fun one!]
o Applications of Natural-Language Processing (NLP) to logs [A.C. - or that other NLP :-)]
o Scalable log compression
o Log comparison techniques
o Methods to enhance and standardize log semantics
o System diagnostic techniques
o Log visualization
o Analysis of services (problem ticket) logs
o Applications of log analysis to system administration

Finally, some advice to those looking for a log-related problem to solve (as if those are not lying on the surface :-)) – look no further than "Anton's 'Grand Challenges' of Log Management": still fun, still unsolved, still horrible to look at :-)

Saturday, June 13, 2009

Fun Job Open at Qualys: Director of Vulnerability Research

Here is a fun job open at Qualys: Director of Vulnerability Research.

Description
The Director of Vulnerability Research will be responsible for ensuring that our vulnerability and compliance signatures and detections are kept up to date on the latest technologies. The candidate will also be responsible for advanced research and detection techniques and will interface externally with the security community.

Apply and read more.

Friday, June 12, 2009

Fun Reading on Security and Compliance #16

Instead of my usual "blogging frenzy" machine gun blast of short posts, I will just combine them into my new blog series "Fun Reading on Security AND Compliance." Here is an issue #16, dated June 11, 2009 (read past ones here).

This edition is dedicated to PCI DSS: stop whining – start securing.

Today’s security reading actually has one topic only: the “QSA” lawsuit. It is covered and debated in the following pieces:

  1. “Security Assessor Sued in CardSystems Breach: Merrick Bank v. Savvis” (David, suit copy linked)
  2. Don't Sue Me, Sue the Auditor
  3. Audits Show Things At a Moment in Time; Silly To Sue For Breaches That Happen 1 Year After Audit Conclusion?
  4. Ex-"QSA" Sued over CardSystems” (from Branden)
  5. Merrick Bank vs. Savvis: What can I say?
  6. Data Breaches, Lawsuits, and Auditors - Oh My!
  7. Security auditor gets sued
  8. Why suing auditors won't solve the data breach epidemic
  9. Dangerous Times for PCI Regulations, Auditors
  10. QSA Liability – CardSystems and court precedence
  11. "AUDITOR(S) TO BE HELD TO ACCOUNT? - CARDSYSTEMS AND SAVVIS"
  12. Finally the juiciest bit: David’s analysis of the suit “Merrick Bank v. Savvis: Analysis of the Merrick Bank Complaint”

Enjoy!


Thursday, June 11, 2009

Fun Upcoming SIEM Roundtable

Just reblogging the announcement since I think it would be useful for my readers. These are fun panelists, BTW, not brainless drones (well, I don’t know Brendan personally, but I guess he is not one either :-)) so I suspect the discussion will be worthwhile.  And the questions are VERY good too: how does “Is  purchasing a SIEM solution a fiscally responsible act given the current state of the economy?” sound? :-)

============================================================

“SIEM Thought Leadership Roundtable" June 17, 2009 2:00PM ET

  • Mike Rothman - eIQ Networks

  • Mark Seward - LogLogic

  • Brendan Hannigan - Q1 Labs

  • Paul Stamp - RSA

Register Here

https://whitehatworldevents.webex.com/whitehatworldevents/onstage/g.php?t=a&d=667930266

Most IT security departments are swamped. On one hand they’re contending with a highly dynamic threat landscape and an ever-expanding technology portfolio that requires protection. At the same time they’re doing what they can to help fulfill a burgeoning list of audit and regulatory compliance requirements. And on their third hand … if only that were possible! Fortunately, Security Information and Event Management (SIEM) – which has long offered IT Security the prospect of re-gaining control – is possible. In this Thought Leadership Roundtable, we’ll get to the bottom of what makes SIEM different from other security management solutions as well as what level of investment is required to make it work. Questions our panel will address include:


• Is purchasing/implementing a SIEM solution a fiscally responsible act given the current state of the economy and IT security budgets?
• How much of the “compliance problem” does SIEM actually address?
• To what extent do SIEM solutions provide meaningful correlation and enable detection of threats that would otherwise go unnoticed?
• What does the future hold for SIEM, both in terms of functionality and its relationship to other security management solutions?

Wednesday, June 10, 2009

My Security Information Trust Pyramid

Not log trust, mind you; this is just a structured dump of how I look at security-related information coming from various public sources.

  1. Anything written by someone I actually know personally (and can vouch for)
  2. Blog of a security engineer (typically minimum bias)
  3. Analyst blogger (their bias is typically spread around)
  4. Security vendor blogger  (their bias is clear and can be corrected for)
  5. Security consultant blogger (their bias is opaque, so less trust)
  6. Security journalist blogger
  7. IT journalist blogger
  8. IT journalist
  9. Clown in a neighborhood circus :-)

What are the conclusions one might draw from this?

a. Open bias makes for easier information interpretation than a hidden bias

b. I’d take “biased + knowledgeable” over “fair + ignorant” any day of the week :-)

Enjoy!

Tuesday, June 09, 2009

Monday, June 08, 2009

On Switching Away from Firefox

So today I did the unthinkable – I stopped using Mozilla Firefox for good on all systems. While I’ve rightfully considered the use of Internet Explorer to be “criminal negligence” for a long time, the popular perception of “IE – bad, Firefox – good” seems to have quietly collapsed with nobody noticing … thus, this blog post.

Over the last few months, Firefox on several of my Windows systems (please don’t remind me that I should use Linux or that fruit thing) started:

  • Crashing a few times a week
  • Experiencing very long startup times
  • Going into occasional freeze-ups for up to 30 seconds
  • Not closing down; when you try to launch a new instance, the previous one remains in memory and needs to be killed.

At this point, I am done with it. I am not sure what made Firefox the “IE of 2009” – slow, unstable and crash-prone. Was it a modular architecture? Module quality? General codebase bloat? Intense focus on security? :-) I don’t know and at this point, I don’t care. Google Chrome is stable, super-super-fast and renders all the websites I go to well. Bye-bye, Firefox.

Now the question that many would ask: “But is it secure?” The only honest answer is the same as with Firefox: we don’t know. If Firefox was “secure because it had niche use,” then Chrome is secure for that reason too :-) It definitely has more frequent updates, thus shrinking the half-life of its vulnerabilities. Early on, Chrome had some embarrassing holes, but these seem gone now, with – hopefully! – lessons learned.

However, as I was rereading Mark’s ever-awesome “Naked Security” preso, where he reminds folks that “Functionality > Performance > Security”, it struck me that I switched from IE to Firefox and now to Chrome due to functionality (and to some extent performance), not security. I switched to Linux (back then…) because I needed to run a bunch of tools, not because “Linux is more secure.” Then I switched back to Windows for the same reason. Just how desperate does a software user need to be to switch due to insecurity? [BTW, I will explore this subject in the future on KilledBySoftware.info]

Finally, I wanted to finish with a complete quote from my own post on the previous browser switch, back in 2006:

“So, is security always secondary to functionality? No, that is the wrong question to ask. The truth is that secure functionality is clearly preferred to insecure functionality. However, all the security in the world will NOT make someone switch to something that does not have the needed functionality. Which is, IMHO, an important lesson that we purveyors of security gear should always keep in mind!”

In other words, if something doesn’t work, it might be “secure”, but nobody will use it!


Thursday, June 04, 2009

Webinar “How to Prioritize PCI DSS Compliance” on June 11th

The time has come to do one more of our world-famous Qualys Technology Briefings, and the topic is “How to Prioritize PCI DSS Compliance.”

So:

Join Dr. Anton Chuvakin (co-author of PCI Compliance: Understand and Implement Effective PCI DSS Compliance – second edition is coming soon!) and Terry Ramos (co-author of PCI Compliance for Dummies) as they discuss one of the key challenges of PCI DSS: prioritizing an organization’s PCI compliance efforts.

Date: Thursday, June 11, 2009
Time: 2pm ET / 11am PT
Length: 20-30 min plus Q&A

Using the recently released PCI Council “Prioritized Approach” guidance, this 20-min briefing will discuss how organizations can effectively focus their PCI DSS implementation efforts in order to ensure the security of cardholder data, reduce information risk and protect the organization --- all while on the shortest path towards PCI DSS validation. The session will also cover how to use the new guidance to save time and money on compliance projects, as well as how to decide where to start with PCI DSS.

There will be a Q&A session at the end of the briefing. And just like last time, I will post the Q&A here on the blog.

Finally, a warning for PCI cognoscenti: this covers basic material; PCI 102, not PCI 600!

Register here NOW!


Monday, June 01, 2009

Monthly Blog Round-Up – May 2009

As we all know, blogs are a bit "stateless" and a lot of good content gets lost since many people, sadly, only pay attention to what they see today. These monthly round-ups are my attempt to remind people of useful content from the past month! If you are “too busy to read the blogs” (eh… ‘cause you spent all your time on Twitter? :-)), at least read these.

So, here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics.

  1. Surprisingly, my PCI DSS Q&A from the Qualys webcast “PCI DSS Myths” (select slides here) took the #1 spot: “PCI Myths Webcast Recording and Q&A.” We will be doing another fun PCI webcast on June 11th – please watch the blog for details (and, of course, the Q&A will also be posted!)
  2. OK, I should stop whining about this, but it keeps puzzling me: A LOT of people are Googling for “open source SIEM” all the time! The old mini-post “On Open Source in SIEM and Log Management” is getting a huge amount of traffic from those queries, despite the fact that an open source SIEM does not exist (yes, OSSIM is kinda that, but not many people know about it)
  3. It looks like my pointer to Ben’s paper (“MUST Read PCI DSS Paper “PCI: Requirements to Action”) just a few days ago caused a traffic boost. As I mentioned, I don’t agree with everything there, but it is a worthwhile read.
  4. Just like last month, my coverage of the PCI DSS hearing in the US Congress takes one of the top spots. Also see (if you can stand it…) the live Twitter coverage here under the #pcihearing hashtag.
  5. Finally, my latest security reading piece (“Fun Reading on Security and Compliance #14”) took the last of the “Top 5” spots this month. It has a lot of fun links on security, PCI DSS compliance and other goodness :-)

See you in June. Also see my annual “Top Posts” (2007, 2008)

Note: this is posted by a scheduler; I am away from computers for a few days.


Dr Anton Chuvakin