Here is a fun and disturbing trend; a real one this time :-) since we all saw a bunch of fake "trendware" being brought up during the RSA marketing crapstorm ...
So, imagine the world without legitimate vulnerability research. Scary, like it's 1992, right [unless your name is Pete] ...
It starts from this: "Current law (in the U.S., the U.K. and several other Western nations) allows flaw hunters like H.D. Moore (Month of Browser Bugs) and Kevin Finisterre (Month of Apple Bugs) to publicly disclose critical vulnerabilities to their hearts' content. Conversely, searching for flaws in a public Web application [public web site] is illegal, in no uncertain terms."
More on this: "The Chilling Effect" that says: "But then, right when security researchers were getting good at the disclosure game, the game changed. The most critical code moved to the Internet, where it was highly customized and constantly interacting with other highly customized code. And all this Web code changed often, too, sometimes daily. Vulnerabilities multiplied quickly. Exploits followed. But researchers had no counterpart methodology for disclosing Web vulnerabilities that mirrored the [some say crappy, but still somewhat workable - A.C.] system for vulnerability disclosure in off-the-shelf software. It's not even clear what constitutes a vulnerability on the Web. Finally, and most serious, legal experts can't yet say whether it's even legal to discover and disclose vulnerabilities on Web applications like the one that Meunier's student found [but most say its not - A.C.]. [see article for the story]"
And here too: 'Grossman's take is that Web security significantly suffers from the legal climate that prohibits so many trained eyes from inspecting Web applications, which are developing new--insecure--functionalities every day. I asked Grossman if he had a prediction for the future. "Yeah," he said. "The bad guys are gonna win."'
So, what do we have here? One can look for vulnerabilities in COTS or OSS software and then disclose them in whatever fashion (even "irresponsible" disclosure is still legal, IMHO but IANAL). But if you write a custom web app, as many do and many-many-many :-) more will do in the coming years, and deploy it on the web, nobody but you can legally discover vulnerabilities in it. See the point? If vulnerability disclosure does indeed improve software security, no similar force will be active in the realm of web applications. And as more applications move to the web, we are looking at the 1992 pre-Bugtraq world all over again, which can be summarized as "those who know and dare, 0wn" :-)
But you know what? There is an opposite but equally disturbing trend related to liability. Few have picked up on this one yet. So, many folks have been advocating that software vendors be held liable for vulnerabilities and whatever consequences result from them, such as data loss. For example, here Bruce Schneier (one of the most vocal proponents of this) says that "Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature, without any regard to security. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they are entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem."
And here is the fun thing: many agree that it is very hard to sue a software vendor if you lose the data due to their vulnerability, but you know what? You can sue a web application operator or a web site owner if they lose your data! Specifically,
Can't sue SAP, can sue Salesforce.
Can't sue MS for Office, can sue Google for Docs.
Can't sue Mozilla for Thunderbird, can sue Yahoo for Yahoo Mail.
Isn't it fun?! In other words, sue the software vendor for data loss caused by a vuln - get trouble; sue the SaaS vendor - get cash!
Am I wildly off base? Comment away!!
4 comments:
Good point, but I've got one addition to it. You say that no one can legally test the security of a company's service if it's provided as a website (if I'm reading correctly). Well, there's one exception to that... Customers can, and should, either test the security of the site or insist on it being tested and the report being passed to them.
Definitely in the organisation I work for (Financial Services), a lot of the penetration testing work we do is on 3rd party sites that our organisation is planning on using.
Obviously there is one difference, in that we can't publicly disclose any findings, but we can ensure that any serious findings are resolved before starting to use the service...
So if enough customers take this approach, you'd hope that at least the low-hanging fruit, and hopefully more than that, will get resolved in many ASP-type sites...
>Well there's one exception to that...
>Customers can, and should, either test the
>security of the site
That'd be nice ... just try to scan a website of your bank next time :-)
>That'd be nice ... just try to scan a website of your bank next time :-)
Strangely enough I get to do that on a reasonably regular basis :o)
That said, I see your point; I was thinking more about B2B websites where you're entering a contract with the provider.
That is indeed a good point: B2C seems very different from B2B in this regard. B2B does seem to often mandate allowing partner testing, but B2C never will (I suspect).
Now, we just need to assess where it would be worse: I suspect most web apps are B2C, but most IMPORTANT web apps are B2B...