Thursday, August 27, 2009
Tuesday, August 25, 2009
"Now look at them blackhats, that’s the way you do it
Finding holes in your security
That ain’t workin’, that’s the way you do it
Mutating malware very frequently
Now that ain’t workin’, that’s the way you do it
Lemme tell ya, them guys ain’t dumb
Maybe get a virus on your little palmtop
Maybe get a virus on your phone..."
Wednesday, August 19, 2009
Thought Leadership Roundtable Webcast on Security Policy Configuration [A.C. – that is the “small p” policy, not organizational Policy]
Date and time:
Wednesday, August 19, 2009, 2:00 PM EDT / 11:00 AM PDT (TODAY!)
Join Rich Mogull as he gathers industry visionaries from nCircle, Qualys and Tenable around a virtual table for a lively discussion of Policy Compliance and IT Security trends. Attendees gain a fresh, unedited perspective on emerging security technology trends directly from those responsible for their development. Our vendor partners use this platform to communicate their organization’s vision and direction in a marketplace otherwise overwhelmed with an abundance of hyper-edited press releases and other bland corporate chatter.
It will be fun, so register here. And if you don’t like it, you can always beat them up on Twitter :-)
Friday, August 14, 2009
Quietly, sometimes with tiny steps and sometimes with long delays for deep meditation :-), the case for log standardization moves forward. Recently, at the RSA 2009 and BlackHat 2009 meetings, the CEE team managed to achieve a breakthrough and actually resolve many of the hotly debated issues of taxonomy, definitions, formats and even of the future CEE compliance program. Of course, not all of the trouble spots are resolved – log standardization remains as devilishly hard as it always has been!
For example, here is how far the Common Event Expression has progressed:
“MITRE is continuing work on the Common Event Expression (CEE) standard in conjunction with the Editorial Board and various organizations. The past months have been spent on the drafting and validation of a proposal for the initial CEE Specification. This specification was submitted to the Editorial Board last month. MITRE is currently working on rolling in the comments received from the Board, and expects to have a new draft for their review in the next couple of weeks.
Once the Board has approved the specification, the specification will be posted to the CEE Community for feedback. We expect this to occur within the next month. Our goal is to have a final proposal that the community can agree to by the end of 2009.”
“MITRE, in collaboration with industry and government, offers the Common Event Expression (CEE) Architectural Proposal for the Core Components as the basis to standardize event logs from electronic systems. This paper builds on the CEE proposal summarized in the Common Event Expression Whitepaper by defining the core components' architecture needed to enable collaborative efforts in the creation of an open, practical, and industry-accepted event interoperability standard for electronic systems.
This specification summarizes CEE and provides details on the architecture of the core components, including the data dictionary, syntax specifications, and event taxonomies. This proposal is the first in a collection of documents and specifications. The combination of the documents and specifications provides the necessary pieces to create a complete event log standard, which can be mapped against the four components of CEE: Transport, Syntax, Taxonomy, and Log Recommendations.”
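To make those four components a bit more concrete, here is a tiny hypothetical sketch of what a structured, CEE-style event record might look like. To be clear: the field names below are my illustrative guesses, NOT the actual CEE data dictionary or syntax (which were still in draft at the time of writing).

```python
import json

# Hypothetical sketch of one structured log record, organized along
# the four CEE components named above. Field names are illustrative
# guesses, not the real (still-draft) CEE data dictionary.
event = {
    # Taxonomy: classify the event in a vendor-neutral way
    "action": "login",
    "status": "failure",
    # Data dictionary: shared field names with agreed-upon meanings
    "time": "2009-08-14T09:30:00Z",
    "src_ip": "192.0.2.10",
    "user": "jdoe",
    "product": "example-vpn",
}

# Syntax: one agreed-upon encoding; JSON is used here purely as an example
record = json.dumps(event, sort_keys=True)

# Transport (e.g., syslog) and Log Recommendations (what to log, and when)
# would be covered by the other CEE documents, not by the record itself.
print(record)
```

The whole point of the standard is that any consumer could parse such a record and know that, say, `src_ip` always means the same thing, regardless of which vendor emitted it.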
You thought the government would mandate which health insurance you’d have? Ha-ha-ha, how about what logs you’d have? ;-) [but unlike health insurance, that would be a good thing!]
Possibly related posts:
- All posts about CEE
Thursday, August 13, 2009
No time to comment on this, but aggregating this sudden revival of “the Heartland saga” is A MUST at this stage.
- Heartland CEO Carr interview with CSO Magazine titled “Heartland CEO on Data Breach: QSAs Let Us Down.” Notable quotes are: "The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem”, “The false reports we got for 6 years, we have no recourse. No grounds for litigation,” “up until this point, we certainly didn't understand the limitations of PCI and the entire assessment process” and “PCI compliance doesn't mean secure.” Niiiiice. Also, why-oh-why did Bill ask that last question “What should companies be asking in terms of the insider threat?” Why did ya, Bill? :-)
- Mike Rothman freaks out and goes on a rampage in “One Man's View: Heartland CEO Must Accept Responsibility.” Notable quotes are: “my blood is boiling”, “It's about time organizations suffering from a data breach owned up to the fact that they made a mistake”, “inevitably it will happen again”, “you cannot outsource thinking” (my personal fave!), “they [QSA] are not there to tell an organization whether they are secure or not – that […] is the responsibility of the internal security team”, etc.
- Rich Mogull freaks out in unison and goes on a rampage in “An Open Letter to Robert Carr, CEO of Heartland Payment Systems.” Notable quotes are: “Your attempts to shift responsibility to your QSA are the accounting equivalent of blaming your external auditor for failing to prevent the hijacking of an armored car”, “Their [QSA] role isn't even to assess your security defenses overall, but to make sure you meet the minimum standards of PCI”, “Unless your QSAs were also responsible for your operational security, the only ones responsible for your breach are the criminals, and Heartland itself.” BTW, this is the post where you have to also read the comments!
- Andy reads all this and freaks out as a result in “Will the real leader please step forward.” Notable quotes: “If we can’t trust them to be responsible in this then how can we trust them to be responsible in any other way.” The rest of his comments are too strong even though this is my personal blog.
- Branden from VeriSign coolly adds in “Bob Carr: "QSAs let us down." And Things Never Heard by a QSA.” He also thoughtfully reminds everybody that it was their PREVIOUS QSA, not their CURRENT one. Notable quotes are: “The article is a fantastic read, but also slightly humorous in nature”, “Some QSAs WILL let you down. You get what you pay for, and some QSAs may not do a good job.”
Enjoy! Will add more as they come.
Possibly related posts:
Tuesday, August 11, 2009
In the future, it will become clear why I am writing this... For now, please treat this as some random analysis of our profession as well as of the dreaded definition of “a security expert.” Some might say it is a rant, but I prefer to tag it as “musings.”
Lately I’ve run into too many people who [claim to] “know security” or [claim to be] “security experts.” Now, as some of you recall, I used to do theoretical particle physics before I came to information security. In my physics days, I’d have been pretty shocked to meet a colleague in the hallways of the C.N. Yang Institute for Theoretical Physics who would self-identify as “a scientist” or, for that matter, even as “a physicist.” It is overwhelmingly more likely that he would say “quantum chromodynamics” or “lepton number violation in electroweak gauge theories” or “self-ionization of the vacuum” or some such fun thing :-) However, as we all know, some folks in our industry have no shame introducing themselves to a colleague as “security experts.”
So, you are “a security expert.” Awesome, happy to hear it! Please let me know whether you are Case A or Case B.
Case A: you know more than an average person on the street about every single area (or many, many areas) of information security: from ISO27001 to secure coding in Ruby?
Case B: you know more than your peers in security about one particular area (or a few areas) of information security: log management, Java security code review, penetration testing, NIDS/NIPS rule creation, firewall management, wireless scanning, etc?
Let’s see which one is consistent with how people in other professions define “expertise.” The obvious start is Wikipedia. As of today, http://en.wikipedia.org/wiki/Expert entry says:
“An expert is someone widely recognized as a reliable source of technique or skill whose faculty for judging or deciding rightly, justly, or wisely is accorded authority and status by their peers or the public in a specific well distinguished domain. An expert, more generally, is a person with extensive knowledge or ability in a particular area of study.”
Other sources (such as Google “define:expert”) present similar results: an expert can only be an expert in a specific, narrow area.
Now, notice that the farther you are from a certain area, the more it seems like a narrow one (example: “science” to an average janitor is a narrow area). Conversely, the deeper you are inside a particular area, the more it seems like a wide one (example: “brain tumor surgery” to a neurosurgeon is a broad area, as is “quantum gravity” to a physicist).
Despite such relativism, other professions have somehow managed to converge on their definitions of “an expert.” After all, you don’t get to “enjoy” neurosurgery from somebody who “knows more about medicine than an average layperson.” However, as we all know, many organizations “enjoy” having their NIDS tuned by a just-hired CISSP (aka proof of being “a light-year wide and a nanometer deep” in security :-)). What’s up with that?
I think this has a lot to do with the fact that the area of security is too new and too fuzzy. However, my point here is that a little common sense goes a long way even at this stage of our industry's development. In light of this, next time you meet “a security expert,” ask him what his area of expertise is. If the answer is “security”, run! :-)
Finally, career advice for those new to information security: don’t be a generalist. If you have to be a security generalist, be a “generalist specialist;” namely, know a bit about everything PLUS know a lot about something OR know a lot about “several somethings.” If you ONLY know “a bit about everything,” you’d probably die hungry...
Possibly related posts:
Monday, August 10, 2009
Just wanted to catalogue the whole “Oblomov-gate” for posterity.
- “Showing The Oblomovs The Door” by Nick Selby, of 451 Group fame; read the comments too.
- “Personal Responsibility in Information Security” by Mike Dahn, of QSA training fame; read the comments too.
- “Two must read posts on PCI” by Martin McKeay, of security podcasting fame.
The great “audit/compliance” vs “security/risk” battle is made very explicit in this discussion. As the “audit/compliance” side has been “winning” more lately, with these posts the “security/risk” side hits back (with a pillow?) and makes the weaknesses in the other side's armor more apparent. BTW, I am sure that all the participants of this discussion have read the original Donn Parker piece “Making the Case for Replacing Risk-Based Security” [PDF] (if inaccessible, comments here and here) – haven't they?
BTW, if somebody dares to say “we need both”, they'd win the Captain Obvious award. But given that this discussion is about the driving or primary approach, such attempts at pacifying the participants will definitely result in F.A.I.L.
I will update as more people comment on it, as they undoubtedly will.
Possibly related posts:
Saturday, August 08, 2009
As you might guess, I often read security books for fun, not to solve a particular technical problem. So I approached “Chained Exploits” by Andrew Whitaker, et al. with that filter in mind. The book worked just fine for that purpose – it is well-written and has a story line, while covering enough technical detail to be educational (for those who are reading it to learn about security and not just for fun). It covers the exploits of a malicious hacker, “Phoenix,” who fulfills the assignments of some underground criminal mastermind and sometimes just goes and 0wns somebody on his own. Obviously, the book does not cut it as “fiction” since it contains actual commands, configurations, etc.
The book is not about a new cutting-edge technique or an “oh-day”; its main goal is to actually tie “that security stuff” together for folks who are not skilled with it yet. IMHO, IT folks getting into security will benefit from it the most. If you 0wn boxes for fun and profit, you will not learn anything fundamentally new about security, but you will likely have fun in the process. Think about it as “Life-like Security Horror Stories” or realistic scenarios. Still, these are a bunch of good stories of how mundane, “uncool” attacks tie together to achieve some rampant 0wnage, like having people at a hospital almost die as a result of one particular scenario…
Each story covers the motivation and goals of the attack, the planning stage, sometimes failed attempts (and why they fail), tool selection and some guidance on tool use. Then it explains what happens and finally covers the countermeasures that could have stopped it.
The book bears unfortunate, but noticeable signs of being written by multiple people who didn’t talk to each other much.
Finally, the name (“Chained Exploits”) first turned me away from the book, I thought it was kinda silly; now I suspect that it will attract some folks to the book.
Recommendation: definitely worth a read if you are new to security, especially if moving from IT. Useful for students in computer science classes to get motivated about security. Also useful for technical management to learn what is not just possible, but very real. Finally, useful for security folks – as a fun read – and also as a reminder about things in their own (still their own, not 0wned…) environments.
Possibly related posts:
Friday, August 07, 2009
"The CEO who lets the Security organization become the compliance department has abdicated to the government and Payment Card Industry his responsibility to understand and manage organizational risk. "
"Thus have they managed not only to not raise the bar but in fact to substantially lower the ceiling - PCI is not the minimum standard, it's the maximum effort that many organizations make."
"'Best Practices' is a term for which toilet-dunks should be applied rigorously - the term is, to borrow a phrase from Marcus Ranum, weapons-grade marketing bullshit"
"It's more about the fact that all this compliance stuff is preventing us from addressing risk and performing, you know, security."
"You want your compliance department to manage risk for you? You'd better hope your firm is considered, “Too big to fail,” so the next round of government bailouts can save your sorry butt. "
Thursday, August 06, 2009
There is “security theater” and then there is BlackHat/DEFCON. If it were a vendor glossy, I’d have called BH/DC “the latest version of an ultimate, next-generation, paradigm-shifting, integrated theatrical experience.” :-)
As a result, seeing a few of the speakers [and being in, you know, Las Vegas, Nevada :-)] made me think about whoring; media whoring, to be exact. Obviously, the security industry is unthinkable or maybe even provably impossible without some media slutting, but at this year’s show I realized “I ain’t seen nut’n yet.” Some folks are just sooooooooooooooooooooooooooooo good at it.
In any case, my thinking converged into the following over-simplified model (or “muddle”?) which analyzes the intersection of media whoring and subject matter (in this case, security) knowledge:
| Security knowledge vs. seeking media attention | Media attention not sought | Media attention sought |
| --- | --- | --- |
| Knowledge of subject matter present | Knows his shit + nobody knows about it (Case I) | Knows his shit + makes everybody know it (Case III) |
| Knowledge of subject matter lacking | Knows nothing + nobody knows about it (Case II) | Knows nothing + makes everybody think that he knows everything (Case IV) |
What does this teach us?
- Case I gets respect, but not enough of it. There is nothing we can do about it, however. People, if you are cool, speak up, the world needs you! Get a blog or something…
- Case II gets nothing, which is coincidentally what it deserves. Stay there :-)
- Case III gets a non-trivial amount of disdain and some respect (especially when it involves MD2 crypto :-)). However, is such disdain truly justified? While there is no metric to compare the value of one’s contribution with the effort needed to get the media to spread his message, common sense criteria definitely apply (“Internet is DEAD! Press conference at 5PM. Live Twitter coverage!” :-))
- Case IV gets a non-trivial amount of hatred and disdain, but IMHO – and this is my MAIN POINT! – nowhere near enough compared to what they actually deserve!
So, action item: get all your disdain, antipathy, hatred and annoyance that you now spread between cases III and IV, “double it! double it!! – then double it again!!!” and focus it in the direction of case IV people.
Possibly related posts:
Wednesday, August 05, 2009
At the very end of BlackHat 2009, Day 2, I went to Bruce Schneier’s talk called “Reconceptualizing Security.” And, let me tell you, I was surprised that his talk was actually really fun, especially the Q&A at the end.
It seems like on his ‘security journey,’ Bruce is moving from security economics (which is still pretty “hot”, BTW, as most of its problems are unresolved) to security psychology. That was the main theme of his talk: being secure vs feeling secure. BTW, this post is an inseparable mix of what I heard at his talk and what I thought as a result :-)
He started by saying that ancient humans in the African savannah used to have a complete match between “being” and “feeling” secure (scary = risky), but today the two are out of sync AND, when it comes to computers, heavily out of sync (“a tiger at the other end of the wire is not that scary”). So, while “evolution favors good security tradeoffs”, we still evolved to today with an ever-decreasing correlation between being and feeling secure. To top it off, humans now make decisions based on feeling, not being, secure. Thus the whole mess :-)
This, BTW, drove the final nail (for me, at least) into the coffin of “the market will drive infosecurity.” No it won’t!! Think about it:
- People make bad risk decisions, since they are based on feeling secure, not being secure
- The market drives security
- The market is a bunch of people making purchase decisions
- The overall result is folks feeling more secure and no advance in actual security, aka “the whole mess.”
He also quoted some paper (this?) which analyzed the perception of risk and feeling secure (I think I’ve seen it before, but the summary was useful):
- unknown risk > (=is perceived as higher than) known risk (example: new disease vs flu variant)
- rare > common (example: swine flu vs regular flu)
- personal > anonymous (example: Osama vs terrorism)
- involuntary > voluntary (example: smoking vs other medical problem)
One of the things I loved the most was Bruce’s final acknowledgement that “security theater” is actually beneficial: specifically, if PERCEIVED risk is higher than the REAL risk, what one needs is to be reassured and feel good. Guess what? Security theater provides exactly that! Air travel is pretty darn safe, but a lot of folks are afraid: thus, we have the TSA, the ultimate in “security theater.” This argument actually makes sense, as long as the false boost does not overshoot the actual state of being secure – you need to get people to feel as secure as they actually are, but no more. Get it? :-) The same logic applies to such “key” technologies as “anti-baby-kidnapping RFID” or drug safety seals, which add a perception of safety to something already pretty safe.
His answer: metrics, of course. We need to observe the reality of security, not the perception. He had this fun warning about metrics though: “my elephant-trample protection device has been perfect for 10 years” (= “nothing bad happened due to security” vs “nothing would have happened anyway”)
Next he went into models and at times sounded positively “Bandlerian” (actually, I think he quoted Bandler once when he said that ‘sometimes a “model” becomes a “muddle”’). My fave quote: neocortex is “kinda still in beta” :-)
Another very fun point was that he walked a fine line about infosecurity becoming more scientific or at least more rational. He said that “experiment, theory, science lead to good, useful models”, while “religion, faith, myth [or the voodoo cult of infosec :-)] lead to bad models.” At the same time, when I asked whether security will become predominantly scientific, he countered with “not in our lifetime, maybe someday.”
So, his idea of a “short-term fix” for the whole mess is to sync “feeling” and “being” secure, by reassuring (moves feeling up – hopefully not to a “false sense of security” – and leaves security in place), FUD (moves feeling down – hopefully not all the way to paranoia – and leaves security in place) and securing (leaves feeling secure in place, increases security as needed). His idea of a “long-term fix” is to “change the model” (which, IMHO, was not entirely clear to me or probably to anybody else in the audience for that matter :-)). BTW, he also reminded us that maybe reality now changes faster than we can adjust our models and so, as a result, maybe our models will never catch up (and we will be forever doing incident response on 0wned boxes :-), whether on-site or in the cloud…)
At one point, he also kicked infosec risk management in the balls, by reminding us that you never really “manage” risk; sometimes it just hits you :-) This somehow reminded me of my sad experience at a Russian security conference a few years ago, when I realized that a proper translation of the words “risk management” into Russian literally means “control of risk”…
Q&A was good. After the mandatory AES question, which proved that Bruce is still a cryptographer :-), there were a lot of interesting questions.
I loved these the most:
Q: Checkbox auditing vs value-based auditing, which is better? A: “Use AND” – both are useful.
Q: Is compliance beneficial? A: Security improves two ways: fear (negative) and greed (positive). The first is harder! “ROI nonsense; security is NOT a greed sell.” Thus, fear – but we need the right one :-) Compliance (= audit-failure fear) sells security: Bruce noted that it is an “expensive way to sell security; a lot of stuff sold does not add to security at all – documentation, etc.” Still, his summary was that it is “INEFFICIENT BUT THE BEST we have!” and “has improved security at the cost of some extra spending.”
Conclusion: there is only one Bruce! :-) Despite all the jokes (and here), I still think that his security-thinking contributions by far overshadow his contributions to media whoring (this will, BTW, be the subject of a dedicated BlackHat-inspired post soon…)
Now onto DEFCON 17!
Possibly related posts:
Tuesday, August 04, 2009
As we all know, blogs are a bit "stateless" and a lot of good content gets lost, since many people, sadly, only pay attention to what they see today. These monthly round-ups are my attempt to remind people of useful content from the past month! If you are “too busy to read the blogs,” at least read these.
So, here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics.
- Now every blogger has had that experience: his most loved, deep, insightful post gets little traffic, while something fun and stupid gets loads of it. This month's example is my “Nobody Is That Dumb ... Oh, Wait XII” about the evils of honeypots in Norway…
- I am no longer surprised that “Why No Open Source SIEM, EVER?” “rules the seas”, taking #2 spot this month, just as last month. The older inspiration for this post is “On Open Source in SIEM and Log Management.”
- My review and coverage of the book “Beautiful Security” (“Best Chapter From “Beautiful Security” Downloadable!” and “Book Review “Beautiful Security””) is popular due to a lot of linking to it.
- The “Vulnerability Scanning and Clouds/SaaS/IaaS/PaaS” post, which is basically a “link and quote” post, also made the Top 5. It helps to continue the discussion about vulnerability assessment of cloud infrastructure (a topic which will be featured in a few posts soon…)
- “BlackHat 2009 Day 1 – Laws of Vulnerabilities Panel” is next. I have three more very fun BlackHat/DEFCON posts in the queue; I never had a chance to write them since I was finishing that PCI DSS book last week.
Possibly related posts / past monthly popular blog round-ups:
- Monthly Blog Round-Up – June 2009
- Monthly Blog Round-Up – May 2009
- Monthly Blog Round-Up – April 2009
- Monthly Blog Round-Up – March 2009
- Monthly Blog Round-Up – February 2009
- Monthly Blog Round-Up - January 2009
- Monthly Blog Round-Up - December 2008
- Monthly Blog Round-Up - November 2008
- Monthly Blog Round-Up - October 2008
- Monthly Blog Round-Up - September 2008
- Monthly Blog Round-Up - August 2008
- Monthly Blog Round-Up - July 2008
- Monthly Blog Round-Up - June 2008
- Monthly Blog Round-Up - May 2008
- Monthly Blog Round-Up - April 2008
- Monthly Blog Round-Up - March 2008
- Monthly Blog Round-Up - February 2008
- Monthly Blog Round-Up - January 2008
- Monthly Blog Round-Up - December 2007
- Monthly Blog Round-Up - November 2007
- Monthly Blog Round-Up - October 2007
- Monthly Blog Round-Up - September 2007
- Monthly Blog Round-Up - August 2007
Monday, August 03, 2009
BlackHat 2009 is over, but sharing impressions from it is certainly not.
So, for the remainder of Day 1, I went to “Weaponizing the Web” (which had good ideas on CSRF; see their tool here) and “Psychotronica” (which was great content totally killed by a sleep-inducing speaker – I left mid-talk. In fact, I am yawning even as I write about it…). And then I had a chance to see Linus get his pwnie…
Then I started Day 2 at Jeremiah and Trey's talk, which was a lot of fun. Moreover, it was so much fun as to reach 100% entertainment, which is another way of saying that it was not useful for any practical purpose (apart from the entertainment purpose mentioned above, of course :-)). In brief, it covered a whole bunch of fun “non-hacking hacking” cases (such as the compromise of a system that issues licenses to do logging in the Brazilian jungle, which supposedly netted somebody a cool $800m). They touched on (but, sadly, didn’t analyze) a few things, such as which is the better focus: a “super hacker strategy” (defending key systems against the advanced targeted attacker) vs a basic baseline (defending all systems against opportunists) [“both” is what they hinted at, of course]. BTW, their deck is posted here, check it out!
Next was my cloud talk #1, “Clobbering the Cloud.” (UPDATE: full slide deck here) A lot of fun and useful things were discussed – and some impressive cloud “0wnage” was shown too. It started with a useful reminder that the whole issue of permission for “testing the cloud” (whether via scanning or manual pentesting) is not resolved. Moreover, PaaS/IaaS made it that much worse, since you might have permission from the cloud application vendor, but not from Amazon, and then end up blacklisted (“Never allowed to buy from Amazon again” :-)). In addition, even questions like “Which version of the application/OS/environment are you testing?” come up frequently, since a SaaS provider might update their application at any time.
They briefly touched on “cloud compliance”, focusing on the transparency of the cloud. Somehow they had the impression that nobody is putting regulated data in the cloud… mmmm… right :-) They also mentioned the subpoena risk of having your data obtained by this or that government without you even knowing. Their point was that trust matters A LOT in the cloud, but at the same time the “verify” part of ‘trust but verify’ often fails.
Here is a set of fun things discussed:
- A cool method for password brute-forcing with password reset links; after all, most if not all cloud apps use some password recovery (email- or secret-questions-based)
- A very interesting sifto tool (a SaaS nikto) written as a Salesforce.com app, which then runs off a high-bandwidth link for free (the story also features a CAPTCHA with its text left in the same web page…)
- Also, a bunch of good ways to steal cloud resources: Windows license stealing on Amazon cloud instances, paid application theft (via DevPay), etc.
- Fun “cloud DoS” with exponential, virus-like growth of VM instances and users.
- Impressive use of trojaned images combined with a tool to make them popular and have them show up at the top of the list. Instant mass cloud 0wnage!
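The “cloud DoS” bullet above is easy to put numbers on: if each rogue VM instance spawns even two copies per replication cycle, the instance count – and the victim's hourly bill – grows geometrically. Here is a back-of-the-envelope sketch; all prices and spawn factors are hypothetical, chosen only to show the shape of the curve:

```python
def instances_after(cycles, spawn_factor=2, start=1):
    """Instances alive after `cycles` replication rounds, assuming every
    instance spawns `spawn_factor` copies each round and nothing is
    ever cleaned up."""
    count = start
    for _ in range(cycles):
        count *= spawn_factor
    return count

def hourly_bill(cycles, price_per_instance_hour=0.10):
    """Hypothetical hourly cost to the victim's cloud account."""
    return instances_after(cycles) * price_per_instance_hour

if __name__ == "__main__":
    # After 20 doubling cycles: over a million instances,
    # and a six-figure hourly bill at even $0.10/instance-hour
    for c in (0, 10, 20):
        print(c, instances_after(c), round(hourly_bill(c), 2))
```

That is why this attack is as much an economic DoS (billing exhaustion) as a technical one: the provider happily scales, and the victim pays.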
Overall, amazing Amazon IaaS rampage! Also, they showed some fun Apple MobileMe 0wnage as well.
What are my thoughts on this?
First, I’d bet that offensive cloud use (either using stolen benign cloud resources or native “built for evil by evil” clouds :-)) will beat defensive cloud use (like Mark Curphey’s security data analysis ideas) by a long shot. Before we harness cloud resources for security (such as for analytics, etc. – we already harness them for scanning), somebody will turn them against us in a big way. But then again, botnet use for password cracking (which is more “distributed computing” than “cloud computing”) is already here, so the “evil cloud” stuff is starting to be a reality…
Second, something made me think that, personally, I’d always keep an offline backup (for BOTH data and processing capability!) for anything I’d put in the cloud. Notice how it compares to the past paranoid mantra “don’t store anything truly private on an Internet-connected PC” – nowadays it is “don’t store it ON the Internet” :-( What’s next, don’t announce it on Twitter? :-)
Third, people talk a lot about software liability and how hard/controversial it is. I had this thought that maybe cloud computing is where it will start?
Finally, how’s that for a paradox?
a) Many folks say that: “cloud security" (loosely defined here) can be and needs to be awesome.
b) Everybody agrees that: web app security is horrible and will be horrible for a long time.
c) Obviously: cloud computing today is mostly web apps.
Huh? Isn’t the whole cloud security fun (now I know why some folks are so excited about it)?
Next, I went to Kostya’s “Cloudburst” talk; I haven't followed VMware security closely enough, but seeing another reliable Guest->Host escape is pretty cool. Sadly, too many people chose this room to catch up on some much-needed sleep after a rough night, it seems.
Finally, Bruce Schneier did a very fun talk (yes, really!), which deserves its own post tomorrow.
Possibly related posts: