At the very end of BlackHat 2009, Day 2, I went to Bruce Schneier’s talk called “Reconceptualizing Security.” And, let me tell you, I was surprised that his talk was actually really fun, especially the Q&A at the end.
It seems like on his ‘security journey,’ Bruce is moving from security economics (which is still pretty “hot”, BTW, as most problems remain unresolved) to security psychology. That was the main theme of his talk: being secure vs feeling secure. BTW, this post is an inseparable mix of what I heard at his talk and what I thought as a result :-)
He started by saying that ancient humans in the African savannah used to have a complete match between “being” and “feeling” secure (scary=risky), but today these are out of sync AND, when it comes to computers, heavily out of sync (“a tiger at the other end of the wire is not that scary”). So, while “evolution favors good security tradeoffs”, we still arrived at today with an ever-decreasing correlation between being and feeling secure. To top it off, humans now make decisions based on feeling, not being, secure. Thus the whole mess :-)
This, BTW, drove the final nail (for me, at least) into the coffin of “the market will drive infosecurity.” No it won’t!! Think about it:
People make bad risk decisions, since they are based on feeling secure, not being secure
+
Market drives security
+
Market is a bunch of people making purchase decisions
=
The overall result is folks feeling more secure with no advance in actual security, aka “the whole mess.”
He also quoted some paper (this?) which analyzed the perception of risks and feeling secure (I think I’ve seen it before, but summary was useful):
- unknown risk > (=is perceived as higher than) known risk (example: new disease vs flu variant)
- rare > common (example: swine flu vs regular flu)
- personal > anonymous (example: Osama vs terrorism)
- involuntary > voluntary (example: smoking vs other medical problem)
One of the things I loved the most was Bruce’s final acknowledgement that “security theater” is actually beneficial: specifically, if PERCEIVED risk is higher than the REAL risk, what one needs is to be reassured and feel good. Guess what? Security theater provides it! Air travel is pretty darn safe, but a lot of folks are afraid: thus, we have the TSA, the ultimate in “security theater.” This argument actually makes sense, as long as the false boost to feeling secure does not overshoot the actual state of being secure – you need to get them to feel as secure as they are, but not more. Get it? :-) The same logic applies to such “key” technologies as “anti-baby-kidnapping RFID” or drug safety seals, which add a perception of safety to something already pretty safe.
His answer to closing the gap: metrics, of course. We need to observe the reality of security, not the perception. He had this fun warning about metrics, though: “my elephant-trample protection device has been perfect for 10 years” (=nothing bad happened due to security vs nothing bad would have happened anyway)
Next he went into models and at times sounded positively “Bandlerian” (actually, I think he quoted Bandler once when he said that ‘sometimes a “model” becomes a “muddle”’). My fave quote: neocortex is “kinda still in beta” :-)
Another very fun point was that he drew a fine line about infosecurity becoming more scientific, or at least more rational. He said that “experiment, theory, science leads to good, useful models”, while “religion, faith, myth [or the voodoo cult of infosec :-)] leads to bad models.” At the same time, when I asked whether security will become predominantly scientific, he countered with “not in our lifetime, maybe someday.”
So, his idea of a “short term fix” for the whole mess is to sync the “feeling” and “being” secure, by reassuring (moves feeling up – hopefully not all the way to a “false sense of security” – and leaves security in place), FUD (moves feeling down – hopefully not all the way to paranoia – and leaves security in place) and securing (leaves feeling secure in place, increases security as needed). His idea of a “long term fix” is to “change the model” (which, IMHO, was not entirely clear to me, or probably to anybody else in the audience for that matter :-)). BTW, he also reminded us that maybe reality now changes faster than we can adjust our models and so, as a result, maybe our models will never catch up (and we will be forever doing incident response on 0wned boxes :-), whether on-site or in the cloud…)
At one point, he also kicked infosec risk management in the balls by reminding us that you never really “manage” risk – sometimes it just hits you :-) This somehow reminded me of my sad experience at a Russian security conference a few years ago, when I realized that a literal translation of the words “risk management” into Russian means “control of risk”…
Q&A was good. After the mandatory AES question, which proved that Bruce is still a cryptographer :-), there were a lot of interesting questions.
I loved these the most:
Q: Checkbox auditing vs value-based auditing, which is better? A: “Use AND” – both are useful.
Q: Is compliance beneficial? A: Security improves in two ways: fear (negative) and greed (positive). The second is harder! “ROI nonsense; security is NOT a greed sell.” Thus, fear – but we need the right one :-) Compliance (=fear of failing an audit) sells security: Bruce noted that it is an “expensive way to sell security; a lot of stuff sold does not add to security at all – documentation, etc.” Still, his summary was that it is “INEFFICIENT BUT THE BEST we have!” and “has improved security at the cost of some extra spending.”
Conclusion: there is only one Bruce! :-) Despite all the jokes (and here), I still think that his security thinking contributions by far overshadow his contributions to media whoring (this will, BTW, be the subject of a dedicated BlackHat-inspired post soon…)
Now on to DEFCON 17!
UPDATE: very timely link from Bruce's blog called "Risk Intuition."