Since I am press (got my ridiculous pink badge tag already), I will write :-) After catching part of the keynote (the Google guy was pretty interesting with his security thinking – however, I suspect that when he says “users”, he means much better people than a typical organization has), I am now at the Qualys-led (Wolfgang) panel with GE (Richard), the Orbitz CSO (Ed), the Heartland Payment Systems CSO (Kris), Goldman Sachs (Paul) and the State of CA (Mark).
- Half-lives of vulnerabilities (= the time in which half of the vulnerable boxes get patched) haven’t changed much since 2004 (the same as the research revealed at RSA 2009 showed)
- No matter how old, many vulnerabilities stay forever on some systems (or new systems with old vulns keep being connected). 8-10% of machines that were vulnerable stay vulnerable for as long as the research covers – and likely forever. Think about it! Even old, critical – “insta-0wn”-type – vulnerabilities stay forever on some – likely compromised – systems.
- If you limit the scope of half-life analysis to core OS vulnerabilities, the half-life drops to 15 days (which means that people patch those quickly!). On the other hand, if you limit it to Adobe and MS Office flaws, the half-life rises sharply to 60 days (which means people just don’t care – and the current dramatic “0wnage” will continue)
- The “Speed up patching!” call is still needed, despite being made for years and years. It looks like people have come to pay attention to OS flaws, but not to client-side issues.
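To make the half-life metric above concrete, here is a minimal sketch (my own illustration, not Qualys’s actual methodology) of how one could estimate it from scan data: for each initially vulnerable host, record the number of days until a follow-up scan showed it patched, with `None` meaning it stayed vulnerable through the whole observation window (the 8-10% “forever vulnerable” crowd).

```python
def half_life(observations):
    """Day by which at least half of the initially vulnerable hosts are patched.

    observations: days-to-patch per host; None = never patched in the window.
    Never-patched hosts still count toward the total population, so enough of
    them can make the half-life undefined (returned as None).
    """
    total = len(observations)
    patched = sorted(d for d in observations if d is not None)
    needed = (total + 1) // 2  # hosts that must be patched to reach 50%
    if len(patched) < needed:
        return None  # half the population never got patched in the window
    return patched[needed - 1]

# Hypothetical scan results for 11 hosts (days until patched; None = never)
days_to_patch = [3, 7, 12, 15, 15, 20, 30, 45, 60, None, None]
print(half_life(days_to_patch))  # prints 20
```

Note how the unpatched `None` hosts drag the number up even though they never produce a data point themselves – which is exactly why a stubborn unpatched tail keeps half-lives flat year after year.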
The panel then discussed that “single day” patching (the holy grail for many organizations) is doable even in large companies, but that is not the end of it – by far. For example, comments from Ed of Orbitz reminded folks that “deploy patch” becomes “write patches” for custom apps. The problem thus gets worse and worse if you happen to have a “build, not buy” culture: the percentage of systems that you can quickly patch becomes lower and lower.
A lot of interesting comments were made by the Heartland CISO (who, BTW, joined 2 weeks before the now-infamous breach disclosure) about how the breach motivated a change in their patching process. He said that they used to focus most resources on the payment processing environment, but then their non-CDE corporate network was breached first and analyzed for 7 months (!) by the attacker, who then broke through via developer access to the CDE. Patching client flaws on the corporate (non-CDE) side is now a priority as well.
Richard’s comments came from the IR/IH side. For example, in the case of a particular Adobe flaw, they saw exploitation activity on the 15th, were informed about the issue on the 21st, and the patch then came on the 28th (timeline approximate). Thus, even if you patch really well, finding 0-days becomes key, since 0-day 0wnage is rampant if you are the “right target.” In-house research is the only choice in this case, I suspect.
Afterwards, Kris from Heartland made a few comments on DLP: they use it for discovery and data auditing, not for data leak prevention (which is definitely very reasonable). Another interesting theme (brought up by Ed) was awareness not just of what is going on on your own network (which is hard), but also on all the supplier networks that connect to it. This is a curious mix of technical security and legal, contractual stuff.
Finally, an interesting insight came to me from listening to this panel: the differing focuses of security management were clearly on display – some organizations focus on patching the right segment, some on faster patching, some on limiting access, some on network visibility. To me this spells the end of the quest for “security best practices,” since your “best” might be doomed to be forever different from others’ “best” …
Overall, this was a very fun panel to attend!
Now, on to the “Weaponizing the Web” talk.