Not having a policy
Not updating the security policy
Not tracking compliance with the security policy
Having a "tech only" policy
Having a policy that is large and unwieldy
Friday, February 29, 2008
"Log Management Thought Leadership Roundtable Webcast" will features such log management / SIEM personalities as Hugh Njemanze, Anton Chuvakin, Chris Petersen and Mehlam Shakir, discussing what is and will be the coolest things in log management.
Date: March 5th
Time: 11AM PST / 2PM EST
UPDATE: recording is posted; it was fun!!!
UPDATE2: fun comments from one of the attendees (David from Devil's Advocate Security blog) are here.
UPDATE3: I got a list of questions from the organizers; many of them are fun and I will answer them on my blog later this week... stand by.
Thursday, February 28, 2008
BTW, it is indeed a silly question: the common wisdom goes "if you are not at RSA, you don't exist" :-) (BTW, the latter phrase says nothing about the quality of RSA presentations or lack thereof)
Back in 2004, a Forrester paper called "The Natural Order Of Security Yields The Greatest Benefits" proclaimed that "the adoption of security has a natural order: 1) authentication; 2) authorization; 3) administration and 4) audit." Note that audit, which in this case broadly includes audit, monitoring and detection, comes last. It seems to be fairly in line with common sense: you audit the controls after you put them in place; you monitor after you have authentication and authorization taken care of; and you detect the violations after you have organized your administration.
The paper even had the following picture, which is presented here to illustrate the point:
(source: Forrester paper named above)
The paper clarifies: "With people and contexts defined, protective controls in place, and policies outlined, the
fourth set of questions [i.e. regarding the audit] includes: “What happened?”, “What is happening?” or, especially, “Is
However, is this really so? Or, is it always so? First, when reality collided with plans, many of the organizations that followed that wisdom got mired in one phase (e.g. in trying to get authentication under control) and ended up having no audit whatsoever: in other words, they were flying blind while implementing controls! Second, in some cases controls (authentication, authorization, administration) will actually be impossible to implement, while audit will be possible. Imagine retrofitting a legacy application for granular authorization! Third, in some cases implementing prevention/control will be much more complicated than implementing audit: thus, people will face a choice between a half-baked control and a full-blown audit capability. An example would be managing which files each user can access vs monitoring/auditing which files each user has accessed. The latter is doable, while the former is next to impossible. Another way to phrase it is "reactive but possible" vs "proactive but impossible" (hint: pick the former :-))
I think the idea of putting audit first in some cases is the way we'd need to progress. "Wow, what a blasphemy!", some might say ... After all, if you have not defined controls, what are you going to audit? But remember that audit is meant broadly in this context and thus the opposite question is very relevant: what are you going to control if you have no idea what is going on? People sometimes define a security policy based on how things should be (and then WSJ happens :-) - refresh your memory of the "WSJ saga" here), but then spend years trying to bring policy and reality together (and end up with an environment which is "half-controlled"). Wouldn't it be better to audit first, then control?
Obviously, "do IDS before IPS" falls under the same principle: monitor first, [try to] control second. Here is another example: implement log management before identity management. Looking at logs will tell you what privileges the users actually use for doing their daily jobs. Then you can mix it up with what the idea access policy will be.
So, think about it! Questioning the common wisdom does often bring interesting insights.
Wednesday, February 27, 2008
As of now, I am reposting some of the most useful blog content there (such as the tips), but it will be used for other fun stuff in the future. Check it out.
I plan to outline just such a scheme: a poor man's DLP using logs. Yes, it will suck :-), but it will be free, not "$500,000". What can I say: 'Welcome to the world of "good enough" technology!'
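To give a taste of what such a free, log-based approach might look like, here is a tiny sketch that flags users with suspiciously large outbound transfers in proxy logs; the log format and the threshold are pure assumptions - tune everything to your own environment:

```python
from collections import Counter

# Made-up "user bytes_out destination" log format; the threshold is a
# guess -- baseline your own traffic before picking one.
THRESHOLD = 100_000_000  # ~100 MB outbound per user per period

def flag_exfil(lines, threshold=THRESHOLD):
    """Sum outbound bytes per user and flag those over the threshold."""
    totals = Counter()
    for line in lines:
        user, bytes_out, _dest = line.split()
        totals[user] += int(bytes_out)
    return [u for u, b in totals.items() if b > threshold]

lines = [
    "alice 5000 mail.example.com",
    "mallory 200000000 paste.example.net",  # suspiciously large upload
]
print(flag_exfil(lines))
```

Crude? Absolutely. But it costs nothing and catches the loud cases, which is the whole "good enough" argument.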
Tuesday, February 26, 2008
Ehhhh... yeah! :-)
P.S. I hate Event ID 567 with a passion.
Monday, February 25, 2008
Have you ever seen a syslog entry that had "enterpriseId", "software", "swVersion" or something of that sort?
Why not? :-)
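For the curious: those field names come from the IETF syslog protocol work (then a draft, later RFC 5424), where the standard "origin" structured-data element carries exactly enterpriseId, software and swVersion. A quick sketch of what such a message could look like (values are illustrative):

```python
def syslog_5424(pri, ts, host, app, msg, sd_id, **params):
    """Build a minimal RFC 5424-style message with one SD element."""
    sd = "[{} {}]".format(sd_id, " ".join(f'{k}="{v}"' for k, v in params.items()))
    return f"<{pri}>1 {ts} {host} {app} - - {sd} {msg}"

line = syslog_5424(
    165, "2008-02-25T10:15:00Z", "host1", "myapp",
    "service started",
    "origin", enterpriseId="32473", software="myapp", swVersion="1.2",
)
print(line)
```

So the machinery to carry such fields exists in the standard; the point stands that almost nobody emits them.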
Here is the key piece, but do read the whole piece:
- "First they came for bandwidth...
- Next they came for secrets...
- Now they are coming to make a difference... "
It starts like this: "Welcome to this page on targeted malware attacks. This subject has become a very common point of discussion and is subject to a lot of intrigue. By presenting some of the basic results of analysis of actual targeted trojan attacks, this analysis hopes to contribute to the discussion, detection and response to this new type of industrial (and often even political) espionage"
Mike reminds all about the "inevitability of compromise" here: "This year I want to focus on the inevitability of compromise. [...] I mean the fact that your users will do something stupid and thus they will get 0wned and that means your environment will be compromised. Nowadays, it’s just too easy to get nailed. The users don’t have to do anything. The bad guys are now installing drive-by downloads on LEGITIMATE sites."
Do you know what it means? This means logs! Yes, logs!
Mike further clarifies: "... monitoring logs, netflow, and other stuff (like database logs, applications, transactions) is critical to figure out what you should be focusing on."
Wednesday, February 20, 2008
As promised, here is another "Top 11 Reasons" which is about log analysis. Don't just read your logs; analyze them. Why? Here are the reasons:
- Seen an obscure log message lately? Me too - in fact, everybody has. How do you know what it means (and logs usually do mean something) without analysis? At the very least, you need to bring in additional context to know what some logs mean.
- Logs often measure in gigabytes and soon will measure in terabytes; log volume grows all the time - it passed the limit of what a human can read a long time ago, and it then made even simple filtering of 'what logs to read' impossible as well: automated log analysis is the only choice.
- Do you peruse your logs in real time? This is simply absurd! However, automated real-time analysis is entirely possible (and some logs do crave your attention ASAP!)
- Can you read multiple logs at the same time? Yes, kind of, if you print them out on multiple pages to correlate (yes, I've seen this done :-)). Is this efficient? God, no! Correlation across logs of different types is one of the most useful approaches to log analysis.
- A lot of insight hides in "sparse" logs, logs where one record rarely matters, but a large aggregate does (e.g. from one "connection allowed" firewall log to a scan pattern). Thus, the only way to extract that insight from a pool of data is through algorithms (or, as some say, visualization)
- Ever done manual log baselining? This is where you read the logs and learn which ones are normal for your environment. Wanna do it again? :-) Log baseline learning is a useful and simple log analysis technique, but humans can only do so much of it.
- OK, let's pick the important logs to review. Which ones are those? The right answer is "we don't know until we see them." Thus, to even figure out which logs to read, you need automated analysis.
- Log analysis for compliance? Why, yes! Compliance is NOT only about log storage (e.g. see PCI DSS). How to highlight compliance-relevant messages? How to see which messages will lead to a violation? How do you satisfy those "daily log review" requirements? Thru automated analysis, of course!
- Logs allow you to profile your users, your data and your resources/assets. Really? Yes, really: such profiling can then tell you if those users behave in an unusual manner (in fact, the oldest log analysis systems worked like that). Such techniques may help reach the holy grail of log analysis: automatically tell you what matters for you!
- Ever tried to hire a log analysis expert? Those are few and far between. What if your junior analysts could suddenly analyze logs just as well? One log analysis system creator told me that his log data mining system enabled exactly that, thus saving his organization a lot of money.
- Finally, can you predict the future with your logs? I hope so! Research on predictive analytics is ongoing, but you can only do it with automated analysis tools, not with just your head alone (no matter how big :-)) ...
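To show that some of the above (e.g. baseline learning) really is simple to automate, here is a toy sketch: learn which message types are "normal" from a training window, then flag never-before-seen types. The message format and the "type = prefix before the colon" convention are assumptions for illustration:

```python
from collections import Counter

def baseline(lines):
    """Learn normal message-type frequencies from a training window."""
    return Counter(line.split(":", 1)[0] for line in lines)

def anomalies(base, new_lines):
    """Flag messages whose type was never seen during baselining."""
    return [l for l in new_lines if l.split(":", 1)[0] not in base]

history = ["sshd: accepted password", "cron: job started", "sshd: accepted password"]
today = ["cron: job started", "kernel: oom-killer invoked"]
print(anomalies(baseline(history), today))
```

That is the whole idea behind "the machine does the boring re-reading for you" - the human only sees what's new.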
Past top 11 reasons:
Tuesday, February 19, 2008
- "2008 DOI: Day 2 - It's time for an audit revolution" explains why respecting auditors is a good idea and admits that "compliance remains a major buying catalyst"
- "The forensics mindset: Making life easier for investigators" discusses why "logs are the investigator's best friend" and reminds to enable logging on databases
- "Understanding where SIM ends and log management begins" talks about SIM being one application on top of a log management platform.
Friday, February 15, 2008
So, what sparked this was a post by my esteemed colleague about platforms. No, not the platform shoes :-) Application platforms. In his post, Mr Baum climbs onto a platform :-) and proclaims that "the thoughtfulness by which we’re going about this [i.e. trying to become a platform] will yield much more than a bunch of hype." Despite that highly appropriate reference to "hype" :-), it is interesting that he chooses to point at such well-known application platforms as Facebook, Ning or Salesforce.com, but ignores an example much closer to home, in the domain of log management: the LogLogic log management platform. To be honest, I am happy to welcome him to the platform club, where LogLogic has resided since 12/2006. A platform is indeed the right way to go about log management, since the utility of logs is so broad: from mundane server troubleshooting to forensics to attesting to compliance mandates (and everything in between and around!)
To add more substance to this, let's review some of the key requirements for a log management platform:
- Overall platform requirements (good intro here): having an access API is central to this.
- Data access: in the case of a log management platform, the API should let users receive their log data in either raw or processed (i.e. "parsed" or tokenized) form.
- API for control: log analysis is not just searching, but also includes alerts and other things that sometimes need to be tuned. The API should allow that.
- Also, a platform should enable a broad, non-siloed approach to log management (silos are evil!) and thus allow any type of analysis and data access: not security-specific, not troubleshooting-specific, but a broad, cross-domain approach, suitable for many types of users, from a system admin to a CIO.
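To illustrate the data-access point, here is what a query against such a platform API might look like; the endpoint and parameter names are entirely hypothetical - no specific vendor's API is implied:

```python
from urllib.parse import urlencode

def log_query_url(base, query, fmt="raw", start=None, end=None):
    """Build a query URL for a hypothetical log management REST API.

    fmt: "raw" returns log lines as collected; "parsed" returns
    tokenized fields -- the raw-vs-processed choice discussed above.
    """
    params = {"q": query, "format": fmt}
    if start:
        params["start"] = start
    if end:
        params["end"] = end
    return base + "/api/v1/logs?" + urlencode(params)

url = log_query_url("https://logmgmt.example.com", "user=alice", fmt="parsed")
print(url)
```

The exact shape matters less than the principle: any consumer, security or not, gets the data in the form it needs.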
Finally, you know what? "Developer-centric ethos" sucks - I would much prefer a "user-centric ethos," since ultimately a platform is not built for people to play with it (like his? :-)), but for the end-users to do something useful with it and to solve problems that they have ... Development based on the platform is indeed critical - but not as critical as solving a problem at hand!
Get it? SIEM is about "S" - security, while log management is about "L" - i.e. logs; logs for all uses inside and outside of security.
Thursday, February 14, 2008
So, enjoy "Detecting Attacks on Web Applications from Log Files" in SANS Reading Room: logs vs OWASP Top 10 web attacks - the battle of the century - who will win (bet on logs! :-))?
One thing I miss in the paper is that all suggested approaches are rule-based, not anomaly- or profiling-based. Regexes suck! :-)
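For flavor, here is roughly what the rule-based approach boils down to: a few regexes over web server log lines. The patterns below are illustrative and trivially evaded (e.g. by URL encoding), which is exactly the limitation of rules vs profiling:

```python
import re

# Illustrative rule set -- real ones are much larger and still miss
# things, hence the anomaly/profiling argument above.
RULES = {
    "sqli": re.compile(r"(union\s+select|'\s*or\s+1=1)", re.I),
    "xss": re.compile(r"<script", re.I),
    "traversal": re.compile(r"\.\./"),
}

def match_rules(request_line):
    """Return the names of all rules matching a (decoded) request line."""
    return [name for name, rx in RULES.items() if rx.search(request_line)]

print(match_rules("GET /item?id=1' or 1=1 HTTP/1.1"))
```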
Wednesday, February 13, 2008
This is my 6th logging poll (vote here now!)- links to the previous five polls below.
This one is deceptively similar to #1 below, but it is not. This poll is What logs do you actually LOOK at? and not Which Logs Do You Collect? In other words, are you a log packrat? Are you collecting the log data and never using it? If so, you are making a mistake.
I responded to a question about using agents for log collection on a mailing list (semi-public); I think this content also begs to be blogged. So:
Agents for log collection pros and cons:
- agents are unavoidable in some cases (nowadays such cases are few and far between...)
- deployed agent can secure the log data in transit from its source to a log management tool
- agent typically can bandwidth-throttle / -manage the log data from source to a log management tool
- agents use up CPU/RAM on each system (sometimes A LOT, sometimes - not so much)
- when such agent crashes, it can take the system down with it (or use up ALL CPU/RAM/disk resources)
- added risk: you run new - possibly vulnerable! - software on all systems (some agents will also allow people to control them remotely for management, thus opening another hole)
- added hassle: you need to install and manage them on a large number of your systems (this is THE biggest reason IT people hate agents with a passion!)
Agentless / remote collection pros and cons:
- such agentless, centralized log collection usually incurs less impact on the logging systems
- contrary to popular belief, one can collect logs securely without agents (e.g. via SCP, FTPS or SFTP)
- just as with agents, one can schedule log collection for off-hours
- one can choose to pull or push data (e.g. HTTP upload)
- added risk: new open ports (in case of log pull) or running services (in case of upload or log push) on all systems
- added risk: log management system might store credentials for remote access (sometimes admin) thus exposing them for compromise (especially if you don't use appliance)
- added hassle: you need to manage credentials for all the servers on the log management system
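A tiny sketch of the agentless approach above - a scheduled, off-hours pull over SCP. The account name, paths and collection window are assumptions; the credential comment echoes the risk noted in the list:

```python
import datetime

def is_off_hours(dt, start=1, end=5):
    """Collection window: only pull logs between 01:00 and 05:00 local."""
    return start <= dt.hour < end

def scp_pull_cmd(host, remote_path, local_dir, user="logpull"):
    """Build the scp command for one remote pull.

    The credentials live on the collector (the risk noted above) --
    prefer a dedicated read-only account with key-based auth.
    """
    return ["scp", "-p", f"{user}@{host}:{remote_path}", local_dir]

if is_off_hours(datetime.datetime(2008, 2, 13, 3, 0)):
    print(scp_pull_cmd("web01", "/var/log/messages", "/archive/web01/"))
```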
Finally, please don't use the combination "remote agent" as it is deeply confusing. When people say "remote agent", they really mean "agentless." So, remote agent = no agents. It is MUCH less confusing to say "remote (or centralized) log collector." For example, Project LASSO is a remote Windows log collector, while Snare is an agent.
Possible related posts:
Tuesday, February 12, 2008
- "Antivirus Inventor: Security Departments Are Wasting Their Time" (Peter Tippett on how everybody is wrong about security)
- "Father of anti-virus says to invest in security awareness training" (Mr Stiennon says 'forget it!')
- "Security Today == Shooting Arrows Through Sunroofs of Cars?" (Mr Hoff agrees and disagrees)
- "tippett on security approaches today" (michael agrees to agree and disagree as well :-))
- "Are security departments wasting their time?" (David on 'Devil's Advocate Security', a new blog for me, says it is all about risk)
- "My security department is not wasting its time" (Stuart King opines as well)
- I'd add more as more people opine - thanks to Kurt for the last 2 entries :-) This has a chance to blow up as big as the WSJ saga.
"What's the fastest-growing data source at large organizations? Video? Maybe at YouTube, but not at Citibank. The answer is log files."
"Indeed, the log management snowball is rolling down a very steep and very snowy hill."
"... think of log management as the foundation of a Network Information System (NIS). Analysis of log data [...] is quickly becoming the difference between effective IT security/operations management and flying blind."
"Logs seem trivial, and log management appears like a tactical task at the bottom of the IT stack. Maybe in the past this was true, but in today's world, information is power and logs are device-specific information."
More discussion is here and on - wow! - TSA blog.
In this presentation I explained the CEE logging standard to the OpenGroup folks and also shared where the standard effort currently is.
Another CEE standard update: public mailing list archives are public and online (thanks, MITRE folks!) - now you can watch all the pillow fights we have on the list :-)
Monday, February 11, 2008
Somebody asked me a few days ago: EXACTLY what logging we absolutely MUST do for PCI DSS compliance? Since this is a common question, I am broadcasting it here.
The honest answer to the above question is that there is no list of what EXACTLY you MUST be logging due to PCI or, pretty much, any other recent "compliance thingy" (as we all know, PCI DSS rules are more specific than most others). However, the above does NOT mean that you CAN log nothing.
Is this bizarre or what? Yes, it is :-)
But that is exactly why vendors and consultants tell you what you SHOULD be logging. There is no easy "MUST-log-this" list; it is pretty much up to an individual auditor, consultant, vendor, engineer, etc to interpret (again, not simply 'read', but interpret!) the PCI DSS guidance (e.g. Requirement 10, which is dedicated to logging) in your own environment.
PCI DSS guidance -> consultant, vendor, engineer, etc -> your very own logging recommendations.
A few folks wondered: why not ask the auditor? Well, these critters :-) will tell you whether "yours is OK" or "OMG, no!", but will not write your logging policy for you. With them, the best approach is: define your logging policy, then show it to the auditor; if they are happy - now you know what you MUST do.
As a final word: still, I dislike the above compliance-induced daze as much as the next guy. I much prefer that people think about what they want from their logs and how they need to use them - and then log that!
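For those who still want a starting point, here is a sketch of checking a logging policy against the event categories of PCI DSS Requirement 10.2. The category list below is my paraphrase of the requirement, not its normative text - read the standard itself:

```python
# Paraphrased reading of PCI DSS Requirement 10.2 event categories --
# an interpretation, exactly as discussed above, not the normative list.
PCI_REQ10_EVENTS = {
    "access_to_cardholder_data",
    "admin_actions",
    "access_to_audit_trails",
    "invalid_access_attempts",
    "authentication_events",
    "audit_log_initialization",
    "system_object_create_delete",
}

def coverage_gaps(policy_events):
    """Which Req 10.2 event categories does our logging policy miss?"""
    return sorted(PCI_REQ10_EVENTS - set(policy_events))

ours = {"admin_actions", "authentication_events", "invalid_access_attempts"}
print(coverage_gaps(ours))
```

Then take the gap list to your auditor, per the approach above.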
UPDATE2: really insightful follow-up from Martin here.
Friday, February 08, 2008
OK, this poll WAS fun! The raw results are here and below:
What do we learn from this? Sadly, this poll was less popular than I hoped, so the results are not as statistically significant. Still, we can draw some fun conclusions from the data.
First, what are the top challenges? It is with great regret :-) that I report that the #1 challenge is exactly the one I thought it would be: We collect logs but don't have time/resources to look at them. Yes, the automated "analysis challenge" has only become more of a challenge as we deploy more tools that enable log collection on a massive scale (e.g. 75,000 logs/second). I dare to predict that we will finally have to tackle this one in the next year or two. In fact, this challenge rears its ugly head via another popular response, Lack of log analysis tools, which also made the Top 5 responses.
Second, I didn't have any predictions about the #2 entry, but I was surprised: No way to effectively search all logs is a very close #2 (obviously, 1 vote is not statistically significant here). Indeed, log searching is an elusive little problem, especially when we want to do it fast and on a large pool of logs. Even though I think we need to search less and discover more, the need to search logs will be with us forever (and, no, I don't think you need a special product just to search logs, Raffy :-))
Third, I am happy to report that this poll shows that we finally broke the back of "the beast" of not having logs. Responses that point at not having logs (e.g. Logging is not enabled, We don't know what logging we must enable, etc) are not terribly popular (then again, maybe it is due to self-selection of my enlightened blog readers ...)
Fourth, infrastructure! Specifically, No infrastructure to manage the log volume we have is very popular as well (#4). This proves a point that I used to not take very seriously in the past (by mistake): when megabytes become gigabytes and those flow into terabytes, many things that used to be trivial (e.g. moving logs from A to B, saving logs to disk, etc) become grand engineering challenges... Indeed, to manage a high volume of logs you need a scalable log management solution (example :-))
Fifth, as I lamented before, few care about log security (this counts as a lament too, I guess). Secure storage of logs bothers only a few people. One word: yet! ;-) As of today, stored log hashing + (sometimes!) log transport encryption + (rarely!) encrypted archives are the state of the art.
Next poll is coming up!
"To be perfectly blunt, the Clinton and Obama campaigns both suffer from eminently exploitable flaws."
"In the weeks and months ahead, this distinction will allow strategists far beyond the United States to deal with a far simpler matrix of U.S. presidential possibilities, and they increasingly will be forced to consider the possible implications of a “President McCain.”"
Tuesday, February 05, 2008
Whether it is a home-grown log server or a vendor's log management tool, a security audit will help establish that your logs will remain useful for investigations, forensics, possibly litigation (offensive and defensive) as well as other purposes, all the way to operational troubleshooting. Some of the regulations, such as PCI DSS, do call for log protections (see Req 10 or, while we are at it, go read my PCI book chapter on logs [PDF] :-))
Also, keep in mind all the reasons to protect log C-I-A that I highlighted in the "Top 11 Reasons to Secure and Protect Your Logs" post. Auditing the server helps establish that you do in fact protect your logs!
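As an aside, the "stored log hashing" mentioned earlier can be as simple as a hash chain: each record's digest covers the previous digest, so editing any stored line breaks every later hash. A sketch (not a substitute for proper write-once storage):

```python
import hashlib

def chain_hashes(lines, seed=b"log-chain-seed"):
    """Hash-chain log lines: each digest covers the line AND the
    previous digest, so any in-place edit invalidates the rest."""
    prev = seed
    chain = []
    for line in lines:
        prev = hashlib.sha256(prev + line.encode()).digest()
        chain.append(prev.hex())
    return chain

def verify(lines, chain, seed=b"log-chain-seed"):
    """Recompute the chain and compare against the stored digests."""
    return chain_hashes(lines, seed) == chain

logs = ["Feb 5 10:00 sshd: accepted", "Feb 5 10:01 su: root session"]
c = chain_hashes(logs)
print(verify(logs, c))       # -> True: unmodified
tampered = [logs[0], "Feb 5 10:01 su: (deleted)"]
print(verify(tampered, c))   # -> False: tampering detected
```

Store the chain (or at least its final digest) separately from the logs themselves, or the attacker just recomputes it.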
Possibly related posts:
Ummm, no! I think people rightly don't care and will continue to shop at TJX. In the event of card abuse, one 10-minute call to your CC issuer solves the problem; a new card arrives in a few days. Magic! :-)
I bet the opposite will happen: this will prove that you can operate while 0wned and leaking data like a sieve ...
UPDATE: more details on why I think this prediction is truly "dumbistic": "... And yet Wall Street analysts didn't seem to care. [...] In fact, the lack of financial fury by the analyst community was entirely predictable. [...] ... the reason why TJX was able to escape unscathed is simple: TJX's customers didn't care, so why should Wall Street. "
Follow the debate and decide ...
Sunday, February 03, 2008
As I am sitting in a hotel in Cheltenham with sheep wandering around the place, I am noticing that this discussion has grown - and become even more interesting.
So please read:
- Initial discussion with a link to a paper here
- Longer follow-up from Mike Farnum is here (scalpel bit is explained too :-))
- "Mr Hoff Strikes Back" discussion is here
- The security team usually does not own the 'A' - IT does. If you think "IT availability" equals "DoS defense", your view is painfully narrow
- It sucks that some folks chose 'A' over 'C-I-A', but they do!
- It often takes some effort to explain why people need to care about 'C' and 'I' (no matter how painfully obvious it is for security folks!), but you never have to explain the 'A' part
- Yes, if your data is corrupted ('I' violation), then obviously the right data is not available (leads to an 'A' violation), but this is actually far less common today compared to the following: a crappy solution is deployed to guard 'C' and 'I'; it flakes out and takes 'A' with it ... oops!
- I also disagree with taking it too far into "security against availability" (even though it happens sometimes, especially in the form of "security vs performance" or "security vs usability"); security is definitely not the opposite of availability.
- Information / IT risk management definitely covers all of the C-I-A risks; thus, a security team might not be responsible if lightning strikes a server, but such a scenario must be considered in IT risk assessments.
- Maybe one of the reasons we lost the current malware battle (more on that later...) is that today's predominant malware - bots - and their creators/herders deeply care about preserving your systems' 'A' - availability ...
Friday, February 01, 2008
I saw this idea of a monthly blog round-up and I liked it. In general, blogs are a bit "stateless" and a lot of good content gets lost since many people, sadly, only pay attention to what they see today.
So, here is my next monthly "Security Warrior" blog round-up of top 5 popular posts and topics.
- Darn it, but same as during the last many, many months, the "fallout" from being featured on a high-profile programming site continues to drive loads of traffic. The topic that got such a huge boost was anti-virus efficiency. Thus, these posts with the same theme of anti-virus efficiency were the most popular: Answer to My Antivirus Mystery Question and a "Fun" Story, More on Anti-virus and Anti-malware, Let's Play a Fun Game Here ... A Scary Game, The Original Anti-Virus Test Paper is Here!, Protected but Owned: My Little Investigation, A Bit More on AV and Closure (Kind of) to the Anti-Virus Efficiency/Effectiveness Saga
- Again, my materials on database logging top the charts. Specifically, How to Do Database Logging/Monitoring "Right"? as well as its "prequels" :-) Full Paper on Database Log Management Posted and On Database Logging and Auditing (Teaser + NOW Full Paper).
- Next is my 2008 Predictions post - of course they are popular! :-) It's not 2009 yet! :-(
- Next is again my Top 11 logging lists: Top 11 Reasons to Collect and Preserve Computer Logs and Top 11 Reasons to Look at Your Logs (the third list, Top 11 Reasons to Secure and Protect Your Logs, was not quite that popular - I have long argued that, sadly, few people care about log security yet).
- Last in my top 5 is a post called Scary World Ahead?! about Internet becoming "0wned" at huge speed!
See you in February :-)
Possibly related posts / past monthly popular blog round-ups:
The topic: how to choose build vs buy vs outsource for log management, what are the critical issues to consider, how you can both build and buy, etc.
Fun! Especially useful for those who are about to start building their own log management system.