This is Anton Chuvakin's original blog (pre-Gartner) that I will now use to back up my Medium blog content (2023+).
Friday, September 29, 2006
NIST Log Management Guide 800-92 is Final!
And, last but not least, thanks to the NIST folks for that special mention :-)
Wednesday, September 27, 2006
GFP Rating of Militaries
Do You Compete with Dumbasses? A Simple Test for Security Vendors! :-)
So, here comes "The Test":
- Launch your web browser while sitting in the office at your headquarters
- Type your competitor's URL in the address field and press 'Enter!'
- Observe the result. It will be one of the following:
1. An appropriate website (duh!)
2. An old version of their website announcing version 1.1 of their product back in 1998
3. An unrelated website (possibly p0rn :-))
4. Nothing, the connection fails or times out
If, while doing it from the office, you see anything from the range of #2-#4, congratulations: you compete with dumbasses!
He-he :-)
Tuesday, September 26, 2006
Logging AS a Privacy Risk? Discussion Ensued :-)
He seems to agree when we talk about corporate, work-related resource access. However, in his view, logging "everything" is not so kosher when dealing with "personal life." Upon thinking about it, I tend to agree that there is some truth in that, but, at the same time, it is often very hard to separate the two: your personal life might be somebody else's business - e.g. your ISP's, Google's, or that of whatever website you visit ... after all, you do access their corporate resources (even if public).
The discussion also diverged (also here) towards "do we have a right to privacy?", in the Constitution or elsewhere.
And, come on, Mike, what's up with that :-) (quote): "Anton works for a log management vendor, so it's no surprise that he thinks logging is cool everywhere"? I happen to genuinely think that, even when my marketing hat is off ...
Friday, September 22, 2006
Truly Amazing Shortsightedness...
"Have you ever taken a moment to realize that the primary reason the information security industry even exists is because a noted lack of pedantic people both in the RFC world of the 1980s and the software engineering world up until the mid 1990s?"
OMG, this is so not true! Security industry exists because people exist :-) People will break ANY technology, including the one built by “pedantic people”…
Now, I know that Arbor (and especially Arbor ASERT team :-)) is full of truly sharp people, but the above quote is still pretty stupid :-)
Anton Security Tip of the Day #4: Code 200 = Code Red?
Not the wormy "CodeRed", but Red Alert, mind you.
Following the new "tradition" of posting a security tip of the week (mentioned here, here ; SANS jumped in as well), I decided to follow along and join the initiative. One of the bloggers called it "pay it forward" to the community.
So, Anton Security Tip of the Day #4: Code 200: Good or Bad?
Now, this is somewhat related to my previous tip (#3), but applied to web server logs, such as IIS W3C Extended (or other) logs or Apache access_log logs. Unlike, say, databases, all web servers come with logging enabled, so web server log review is a common task for web server admins as well as for security teams. But what do you look for in normal web access logs, apart from web site access statistics and visitor trends?
Obviously, observing various access failures is important: all the pesky 404s and 401s, as well as the 50X response codes.
Here are a couple of examples from Apache web server access_log logs:
61.52.47.251 - - [26/Sep/2004:21:10:22 -0400] "GET /MSADC/root.exe?/c+dir HTTP/1.0" 404 286 "-" "-"
or
67.170.226.184 - - [10/May/2004:13:18:09 -0400] "GET /scripts/..%%35c../winnt/system32/cmd.exe?/c+dir HTTP/1.0" 400 293 "-" "-"
Is this ominous or what? :) Nah, it is just my Linux + Apache honeynet responding to some good ole IIS 4.X attack. Truly nothing to worry about.
How about this one though?
68.49.152.86 - - [22/Apr/2004:11:56:49 -0400] "GET /cgi-bin/awstats.pl HTTP/1.0" 200 2190 "-" "Mozilla/4.75 (Nikto/1.32 )"
Somebody using the web statistics CGI script? Benign, since the response code is 200, which means that the server served the page (or ran the script!) as expected? No, you are about to get 0wned thru an AWSTATS hole!
So, the gist of this tip is similar to the previous one: when monitoring web server logs, do not just focus on 404s, 401s, 50Xs and other "known bad" response codes; take a long hard look at the vanilla 200 response codes. "But how do I separate the 'good' 200s from the 'bad' 200s?" - you might ask! Well, that would be the subject of another tip in the future!
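To make this a bit more concrete, here is a minimal sketch (my own illustration, not part of the original tip) of pulling suspicious 200s out of an Apache combined-format access_log with a short Python script. The watch lists of paths and scanner user agents, the log location and the function name are just assumptions for the example - tune them to your own environment:

import re

# Apache combined log format: IP, identity, user, timestamp, request,
# status, size, referrer, user agent.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

# Illustrative watch lists - adjust for your own environment.
SUSPICIOUS_PATHS = ("awstats.pl", "cmd.exe", "root.exe", "/cgi-bin/")
SCANNER_AGENTS = ("nikto", "nessus", "whisker")

def flag_suspicious_200s(path):
    """Yield log lines where the server answered 200 to a dubious request."""
    with open(path) as log:
        for line in log:
            m = LOG_RE.match(line)
            if not m or m.group("status") != "200":
                continue
            request = m.group("request").lower()
            agent = m.group("agent").lower()
            if any(p in request for p in SUSPICIOUS_PATHS) or \
               any(a in agent for a in SCANNER_AGENTS):
                yield line.rstrip()

if __name__ == "__main__":
    for hit in flag_suspicious_200s("/var/log/httpd/access_log"):
        print(hit)

Run against the AWSTATS example above, both the Nikto user agent and the awstats.pl request trip the check, even though the status is a "healthy" 200.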
Just in case, here are the links to my previous tips: #3, #2, #1.
Also, I am tagging all the tips on my del.icio.us feed. Here is the link: All Security Tips of the Day.
tags: security, tips, log analysis, log, web logs, web servers, log management
Access or Access+Audit?
Now, this is one of'em philosophical posts. After all, I do have to justify the "Ph" in my Ph.D., right? :-) At the same time, this post will have an unmistakable stench of a rant :-) for some of my readers.
Recently, I was involved in some fun discussions on storage security. And, in most cases, you store "stuff" to let others access it, not just for archival or - gasp! - compliance purposes. One of the storage vendors I talked to recently mentioned that every year they've been in business (since the early 90s), they have had to add one or more audit features to their information access solution - to increase the level of detail, to improve the performance of their audit logging, or to cover some other audit-related need.
My response was: "What? You didn't build them in from the very beginning?" And then I thought: why provide access without audit logging?
No, really, why?! Disks are cheap, bandwidth is affordable, CPUs are powerful: why provide access to any information without having the ability (at least) to log each and every successful and failed access?
Before some of you label me "a privacy Nazi", I have to disclose that I am somewhat of a fan of Scott McNealy's saying "You have no privacy. Get over it." Having access audit info is useful in so many cases that not collecting it becomes inexcusable and, frankly, stupid. Some of the many uses for such information are:
- Operational troubleshooting: knowing who failed to access the info and why
- Policy audit: who accessed what, with or without authorization?
- Regulatory compliance: legal requirement to have audit data is there to stay
- Incident response: what info got stolen and by whom?
- Information access trending and performance optimization: are we providing quick and reliable access to information?
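To illustrate the "no access without logging" idea, here is a minimal sketch of my own (not something any particular storage vendor ships): a Python wrapper that records every read attempt, granted or denied, along with who asked for what. The function name, log format and example path are made up for the illustration:

import logging

# A hypothetical audit trail: every read attempt gets a record,
# successful or not, with the requesting user and the resource.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
audit = logging.getLogger("access-audit")

def read_document(user, path):
    """Return a file's contents, logging both granted and denied access."""
    try:
        with open(path) as f:
            data = f.read()
    except OSError as err:
        audit.warning("DENIED user=%s resource=%s reason=%s", user, path, err)
        raise
    audit.info("GRANTED user=%s resource=%s bytes=%d", user, path, len(data))
    return data

if __name__ == "__main__":
    try:
        read_document("anton", "/tmp/example.txt")
    except OSError:
        pass  # the denial is already in the audit log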
So, what about privacy? Privacy is defined (in Wikipedia, where else) as the "ability of an individual or group to keep their lives and personal affairs out of public view, or to control the flow of information about themselves." We see two completely different things here: keeping the info out of public view and controlling the info about you. The former is clearly reasonable and possible. How about the second? To be honest, it sounds like sheer idiocy to me, because you do not control it and never did. You've got to a) become invisible and b) stay home all the time :-) for a fair shot - albeit not a certainty! - at controlling the info about yourself. I can still talk about you - and thus control the information flow about you - by saying "ah, that invisible guy that stays home all the time!" :-)
So, what is the connection between the above definition and my call for "no access without logging"? Logging is NOT a privacy risk; inappropriate use of the collected data is. Before you object by invoking the infamous "guns don't kill people; gaping holes in vital organs do" :-), I have to say that the above privacy definition is about access to information about people, not about the existence of said information. And, yes, Virginia, there IS a difference!
Similarly, nowadays many folks are appalled when they see stuff like this ("Fresh calls for ISP data retention laws. US attorney general cranks up the volume."), but it actually - gasp! - seems reasonable to me, in light of the above. Admittedly, if your bandwidth is so huge that you cannot log and retain, you might be able to avoid logging or at least avoid long-term log retention, but that is a different story altogether.
Another thing that is tied to this is the whole "privacy vs security" debate, which never quite made sense to me - until now. This is indeed the area where those who want to have logs for security and other uses will clash with those who don't trust controls on the collected log data and would prefer that such data never get created in the first place. But that will be the subject of a follow-up post later...
So, has doing security, and especially log analysis, for however many years gone to my head? Or am I onto a critical trend here? Comment away!!!
Working for a Leader in a Space
Alien vs Predator aka XSS vs Overflow :-)
The main issue at hand is that XSS vulns (or cross-site scripting vulnerabilities) overtook buffer overflows as the most common type of reported (important to note!) vulnerability: "XSS has become the number 1 vulnerability of all time [...] in CVE" and further "Buffer overflows were number 1 year after year, but that changed in 2005 with the rise of vulnerabilities that are found in web applications, including XSS and SQL injection."
So the objections are centered around reported vs exploited vulnerabilities as well as their relative risk.
I think some correlation is in order. For instance, look at this piece that was published at about the same time as the above research. It covers the Top 5 Causes Of Credit Card Data Loss and - surprise, surprise! (not :-)) - SQL injection and other web app vulnerabilities hold a respectable #4 on the list. Now you see that this baby is not about reporting, ease of discovery, triviality and other bla-bla-bla related to web vulnerabilities; this is about losing real cash (well, credit cards, really) and getting their behinds whooped due to PCI ...
It Is NOT Just Requirement 10!
Friday, September 15, 2006
Speaking at SANS (October 4th, 2006)
Last time I gave this preso I was told that I managed my inherent "vendor bias" pretty well, so the talk was truly useful for many folks in attendance.
Come on in: October 4th, 12:30PM-1:15PM, Caesars Palace, Las Vegas, NV
OMG, Not Another Security Analogy!
In his post, Chris @ RationalSecurity slams a post by Richard Stiennon on that very subject ("The human body is a good metaphor for the way security should be.")
It is kind of hard to argue with parts of it (like "You hardly ever notice when your body is attacked because the majority of attacks are warded off." and, ideally, security should fit that too), but admittedly security should not fail the same way the immune system fails, since it is not pretty...
More Musings on the Future
I read this fun document the other day (if you want a copy, just google on Google :-) for "eiuForesight2020.pdf"). It is a 15-year forecast related to various economic - global and industry-specific - trends. As one might guess, "data security" is prominently present in the report, alongside energy security, geopolitical security and other "security brethren."
Moreover, I am getting the impression that the glorious march of IT predicted in the document (he-he, you didn't buy this "IT doesn't matter" crap, did you? :-)) will have to be accompanied by a corresponding march of security (and you thought security is hot now! Just wait! :-))
But here is another interesting bit: according to this forecast, "knowledge management" will increasingly become one of the "boardroom priorities" in the next 15 years. Knowledge management is not just "data management," but a part of the much broader information lifecycle management (ILM). Even though knowledge management is not directly connected to security, it has obvious security implications. For one, you don't want that "knowledge" falling into the wrong hands or becoming corrupted or subverted. So, in reality, security is going to become even more prominent as a result of this knowledge management trend...
It IS a Joke, Isn't It?
No comment!
Thursday, September 14, 2006
Infosec to TSA [humor, well, kind of sad]
Example from the above: "And let’s not forget the valuable lessons hidden in classic security technology. Like antivirus. What can we learn from that? Well, we could ban pocketknives from planes. And shoes. And toothpaste. And every couple of months we can do a “retrospective”, looking back on all the stuff we foolishly used to allow through security checkpoints and marvelling at how far we’ve come."
Also read the comments after...more fun stuff there.
The above is inspired by Marty Roesch's blog post, which I already commented on here.
Is Dataloss A New Web Defacement?
Examples from their "Ten Most Recent"
"1. American Family Insurance - [2006-09-13](Stolen laptop contains over 2,000 customer Social Security and drivers license numbers) [archive]
2. Telesource - [2006-09-11](Social Security numbers and other personal information found in dumpster) [archive]
3. Cleveland Clinic (Florida) - [2006-09-08](Social Security numbers, dates of birth, addresses and other details of 1,100 patients stolen) [archive]"
etc, etc, etc...
Vendors, Lies and Users :-)
I hate it when vendors lie; I really do! But my 0xBEEF :-) with it is somewhat different from Mike Rothman's (and here too). Or Alan Shimel's, for that matter. I specifically hate it when competitors lie. I had this experience where a certain competitor was presenting [absurd] lies, pretty much in my face. What pond scum! But what do you do? Confronting them felt beneath me, letting it stand wasn't too good either... The only easy "solution", of sorts, is to rely on the principle that "liars always get caught… eventually", highlighted here.
However, as luck would have it, their customer stood up and confronted them: "What? Easy to use and manage? It took us 9 months to deploy and we are not done yet!"
It sure felt good! :-)
Wednesday, September 13, 2006
Am I On? Security Bloggers A-List!
And what list might that be, you ask? Security bloggers A-list, no less :-). A.k.a. Mike Rothman's reading list, which he published recently. Mike says that "I track close to 100 security blogs now, but only about 30 rate 3 or more stars."
Ah, yes, to know the answer to the question that bothers me here, do check the list. :-)
Monday, September 11, 2006
On "Top SIX Reasons Why I [Fred Avoio] Hate Network- and Computer-Security"
And, not surprisingly, reason #1 is: "We state the obvious" (which is the subject of his separate, but no less enlightening, blog post...)
Are You Into PCI...?
On Incidents and Military vs Business
This fun post highlights another misguided attempt to apply an analogy to the question "what is security?" This time it is "security is more like a military [strike] than like a business [process]." Are you laughing yet? And it comes from Richard Stiennon, mostly known for his "IDS is dead" and, more recently, "NAC sucks" pronouncements.
At the same time, further in the post we find a very useful definition of an "incident", which is not based on hacking, crashing, stealing or other specifics. It goes like this: 'I define incident management as “what you do when you’ve exceeded regular process.”' And this is indeed correct: a security incident is what happens when your regular process is disrupted ...
Friday, September 08, 2006
So Is The Worm Dead?
They further say "it seems like worm detection systems are no longer as high pressure as they were in the past." Well, suspiciously many of the folks (like this or that) who used to sell the "anti-worm gear" have switched to NAC...
So, is Bot a new Worm, aka the '"Scourge of the Century" of the Day' (note the embedded quotes :-))? :-)
Thursday, September 07, 2006
"Hacking Still Can’t Outdo Stupidity for Data Leaks"
Wednesday, September 06, 2006
I Like it When They Say it Like This :-)
On Top 11 Reasons [Some Think] Security Products Don't [Always] Work...
So, first DarkReading posted their "Top 10 Reasons Security Products Don't Work" and the enlightened Mike Rothman added his "The 11th (and most important) reason security products don't work."
For starters, here is the combined list:
"1. Too many false alarms
2. Products are riddled with holes
3. No protection against zero-day attacks
4. Products don't work well together
5. Security tools are too complex
6. Users don't understand the product's capabilities
7. Users fail to install/deploy the product correctly
8. Users do too much product "tuning"
9. Users fail to update the product
10. The Blame Game"
and
"11. The REAL reason most security products don't work is because both vendors that sell them and the users that buy them FAIL TO MANAGE EXPECTATIONS."
Well, what can I say that was not already said by others? There is more truth in this puppy than many care to admit ...
Tuesday, September 05, 2006
The fun "Security is Like..." Item
Why? Read the post, but here are some highlights:
"* It’s more painful than the providers ever tell you.
* If you make decisions based only on financial Return On Investment you’ll really screw things up
* No matter how many times you strap someone to a chair, shine a light in their face, and poke them with sharp objects until they bleed you can’t make them any smarter."
:-)
On Insider/Outside Mini-Closure :-)
However, I think Chris Walsh has nailed this one via his "Outsiders! Insiders! Let's call the whole thing off" post where he calls it a "marketing-generated distraction."
So check it out...
On Achieving Closure in Security
So, will security ever "get done"? Not likely, since "the biggest problems in security are rooted in human behavior and not technology." And this one is indeed a tough one to crack...
Dave further says "I am confident that programmers will write code securely and that operating systems and applications will be hardened on initial boot *long* before consumers of technology pay sufficient attention to security to make an event like "computing and the Internet are finally secure" meaningful."
So, read his post.
"The Biggest Military Hacks of All Time" List
(IN)SECURE Issue 1.8 with My PCI Article is Out
Anton Security Tip of the Day #3: Watch for Failures and Successes
So, Anton Security Tip of the Day #3: Watch For Access Failures AND Successes in Logs
Now, many a winter ago :-), people used to think that checking for access failures in logs was all they needed to do to "stay secure." Indeed, picking out failed access attempts seemed like a reasonable way of doing things. Similarly, people even considered firewall "connection denied" messages more important than "connection allowed" log records (although this will be the subject of a separate tip - a whole paper, in fact - later)
So, what is more important? This:
Sep 26 12:36:40 bridge sshd(pam_unix)[2128]: session opened for user anton by (uid=0)
or this:
Sep 25 14:31:24 ns1 sshd[19308]: Failed password for anton from 10.10.154.44 port 53452 ssh2
Let's think about it: one log entry says that Unix security is doing its job and blocking a bad user (assuming no fat-fingering and no forgotten passwords, for simplicity's sake) from accessing the system, while the other says that some kind of user now has access to your system.
To quote Doc from "Back to the Future" :-) : "Exactly!!!"
You do need to be aware of both; you can't just focus on access failures while monitoring your logs. Make sure you review successes as well. And, yes, certain patterns, like a long string of failures followed by a success, are even more fun to watch ...
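For the curious, here is a minimal sketch (mine, not a product feature) of watching for exactly that pattern in an sshd log with Python. It assumes syslog-style messages like the two quoted above; the /var/log/secure path and the threshold of five failures are just illustrative:

import re
from collections import defaultdict

# Patterns for the two kinds of sshd records quoted above
# (formats vary by distro, so treat these as illustrative).
FAIL_RE = re.compile(r"Failed password for (?P<user>\S+) from (?P<ip>\S+)")
OK_RE = re.compile(r"session opened for user (?P<user>\S+)")

def scan_auth_log(path, threshold=5):
    """Print successes, and warn when one follows a run of failures."""
    failures = defaultdict(int)  # user -> consecutive failure count
    with open(path) as log:
        for line in log:
            fail = FAIL_RE.search(line)
            if fail:
                failures[fail.group("user")] += 1
                continue
            ok = OK_RE.search(line)
            if ok:
                user = ok.group("user")
                if failures[user] >= threshold:
                    print(f"ALERT: {failures[user]} failures, then a success "
                          f"for {user}: {line.rstrip()}")
                else:
                    print(f"Successful login for {user}: {line.rstrip()}")
                failures[user] = 0

if __name__ == "__main__":
    scan_auth_log("/var/log/secure")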
Also, here is a link to my previous tips of the appropriate-time-interval (#1) :-)
Also, I am tagging all the tips on my del.icio.us feed. Here is the link: All Security Tips of the Day.
tags: log analysis, logs, security, tips, chuvakin