So, this piece from the AndyIT blog, with its quote of SANS's Dr. Ullrich, touches on something deceptively obvious: just WHERE do we draw the line between user and IT/IS responsibility for security? In fact, the situation is even more complex: it is user vs IT vs the infosec team! (And there is also software vendor responsibility somewhere in here....)
Let's go through some scenarios:
- User and IT = 0%, infosec = 100%. Result: security failure, due to technology limitations, lack of control over the environment, and social engineering.
- User = 100%, IT and infosec = 0%. Result: trivial case, obvious failure.
- Then it gets real complex real fast for the cases of shared responsibility ...
UPDATE: AndyIT answers it: "Probably something like Security=85%, IT=10% and Users=5%." See his follow-up post here.
5 comments:
I may be regurgitating what I learned when I did my CISSP, but it makes sense to me.
The Boss is ultimately responsible. If a small company closes because of a cyber attack, the owner can rant as much as he wants to - his staff will get new jobs, and he will be the one who has failed. In a large corporation, it will be the CEO who has to tell the shareholders what happened - they will insist on it.
Of course, CEOs and owners can't draw up policies, patch servers, and run a business all on their own; they must make sure they get the best people to work for them.
Problems are going to occur, but a good security expert can help mitigate the risks. If a risk is generally well known and the security expert has not informed management about it, then what is he doing? The blame falls squarely on his shoulders. If he did inform management but they chose not to react, then the business is to blame. If they reacted and procedures were drawn up but not followed by IT, then IT is to blame (see the sketch at the end of this comment).
It is best for all concerned if a non-performing department is found out before a loss occurs, and the best way to do this is through documentation. And audits.
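That chain of "ifs" reads like a simple decision procedure, so here is a minimal sketch of it in Python. The function and the boolean flag names are made up purely to make the order of the checks explicit; they are not from the original post.

```python
# Minimal sketch of the accountability chain described above.
# The flags are hypothetical inputs; only the order of the checks matters.

def who_is_to_blame(expert_informed_management: bool,
                    management_acted: bool,
                    it_followed_procedures: bool) -> str:
    """Assign blame for a generally well-known risk that was not mitigated."""
    if not expert_informed_management:
        # The security expert never raised a well-known risk.
        return "security expert"
    if not management_acted:
        # Management was told and chose not to react.
        return "business/management"
    if not it_followed_procedures:
        # Procedures were drawn up but not followed.
        return "IT"
    # Everyone did their part; what remains is residual risk, not a failure.
    return "no single party - residual risk"


# Example: the expert warned management, management approved procedures,
# but IT never applied them.
print(who_is_to_blame(True, True, False))  # -> "IT"
```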
The closer you get to the user, the more often inconsistent user decisions yield insecure practices, no matter how much security you pound into them.
I think it depends on the organization, particularly its size and culture. Small shops will likely need more help from the users, while large companies simply cannot rely on users at all. Still, I think IT/security should strive for 99% security with them and 1% security on the heads of users, with the assumption that their jobs and innovation are not impacted. I would prefer technological controls to be as high as possible to better ensure consistent measures...
Alas, you're right. This won't be solved in our lifetime, nor anyone else's.
A fun response to this is here: http://andyitguy.blogspot.com/2007/08/where-does-buck-stop.html
> Still, I think IT/security should strive for 99% security with them and 1% security on the heads of users,
Achieving that 99/1 split requires A LOT of improvement over the current level of security technology....
True, and by using such an extreme, I am really just making a point. :)
I will maintain that if user education really worked, teen pregnancy and drug abuse would be non-existent. It helps some, but promises nothing.
Of course, I may be arguing something different. Where does responsibility lie? Yes, with the bosses, but on another level, everyone has some responsibility for security. I mean, even with all the protections in place, someone could still helpfully hold a door open for a stranger or give a password away over the phone... Responsibility starts top down, and if it breaks down anywhere in that chain, that sets a bad precedent for everyone under that break.
Employees take cues, spoken and unspoken, from the organizational leaders...