Thursday, December 01, 2011

Monthly Blog Round-Up – November 2011

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

Disclaimer: all this content was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

  1. “On Free Log Management Tools” is a companion to the checklist below (updated version)
  2. “Simple Log Review Checklist Released!” is often at the top; that is the case this month – the checklist is still a very useful tool for many people
  3. “On Choosing SIEM” is about the least wrong way of choosing a SIEM tool – as well as why the right way is so unpopular.
  4. “Top 10 Criteria for a SIEM?” is an EXAMPLE criteria list for choosing a SIEM.
  5. “Log Management at $0 and 1hr/week?” is pretty much what it says: how to do log management under extreme budget AND time constraints.

In addition, I’d like to draw your attention to a few fun posts from my new Gartner blog:

  1. On Vulnerability Prioritization and Scoring
  2. On LARGE Scale Vulnerability Management
  3. On Scanning “New” Environments

Also see my past monthly and annual “Top Posts” – 2007, 2008, 2009, 2010.

Tuesday, November 01, 2011

Monthly Blog Round-Up – October 2011

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

Disclaimer: all this content was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

  1. “Simple Log Review Checklist Released!” is often at the top; that is the case this month – the checklist is still a very useful tool for many people
  2. “On Free Log Management Tools” is a companion to the above checklist (updated version)
  3. “Top 10 Criteria for a SIEM?” is an EXAMPLE criteria list for choosing a SIEM.
  4. “On Choosing SIEM” is about the least wrong way of choosing a SIEM tool – as well as why the right way is so unpopular.
  5. “Log Management at $0 and 1hr/week?” is pretty much what it says: how to do log management under extreme budget AND time constraints.

In addition, I’d like to draw your attention to a few fun posts from my new Gartner blog:

  1. On Vulnerability Prioritization and Scoring
  2. On LARGE Scale Vulnerability Management
  3. On Scanning “New” Environments

Also see my past monthly and annual “Top Posts” – 2007, 2008, 2009, 2010.

Enjoy!

Monday, October 03, 2011

Monthly Blog Round-Up – September 2011

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

Disclaimer: all this content was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing. For my current security blogging, go here.

  1. “Simple Log Review Checklist Released!” is often at the top; that is the case this month
  2. “Log Management at $0 and 1hr/week?” is pretty much what it says: how to do log management under extreme budget AND time constraints.
  3. “Top 10 Criteria for a SIEM?” is an EXAMPLE criteria list for choosing a SIEM.
  4. “On Choosing SIEM” is about the least wrong way of choosing a SIEM tool – as well as why the right way is so unpopular.
  5. “On Free Log Management Tools” is a companion to the above checklist (updated version)
  6. “The Last Blog Post!” is still popular. It announces my departure from the consulting business in order to join Gartner as a Research Director with the IT1 SRMS team.

Also see my past monthly and annual “Top Posts” – 2006, 2007, 2008, 2009, 2010.

Enjoy!

Friday, September 23, 2011

Cloud HELP NEEDED: Cloud PCI Class Trainer(s)!

Are you proficient in BOTH PCI DSS compliance and cloud computing security? If yes, you can help the Cloud Security Alliance, build your security reputation, AND make some money in the process!

Here is how: a few months ago, when I was still consulting, I created a comprehensive full-day class on PCI DSS and cloud computing. More information is here, and a brief description is pasted below:

“The first ever class dedicated to assessing and implementing PCI DSS controls in cloud computing environments covers how to think of and how to do PCI DSS in various cloud computing environments. Focused primarily on people familiar with PCI DSS, it starts from the “hype-free” cloud computing facts and then delves into key scenarios where PCI DSS and clouds overlap in the real world. You will learn where to look while assessing such environments and what pitfalls and mistakes to avoid. It will also cover the shared responsibility between service providers and merchants in implementing PCI DSS controls. Specifically, we will discuss how PCI DSS Requirement 12.8 applies to various cloud scenarios.

The class would be most useful to PCI DSS QSAs, organizations offering PCI DSS consulting, as well as merchants planning or implementing PCI compliance.”

At this point, I am unable to teach the class due to my employment. CSA is looking for instructors to teach this class in various locations.

Please contact me offline and I will then share the current class materials privately, as well as explain what this work entails (and connect you to the right people at CSA).

Finally, if you are only CURIOUS about PCI and/or cloud, please save the time you'd otherwise spend typing an e-mail to me….

Saturday, September 03, 2011

Monthly Blog Round-Up – August 2011

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

Disclaimer: all this content was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing.

  1. “The Last Blog Post!” is obviously BY FAR the most popular post in August. It announces my departure from the consulting business in order to join Gartner as a Research Director with the SRMS team.
  2. “Top 10 Criteria for a SIEM?” is an EXAMPLE criteria list for choosing a SIEM. Also see “On Choosing SIEM”, which is about the least wrong way of choosing a SIEM tool – as well as why the right way is so unpopular.
  3. “On SIEM Services” is a quick overview of services that you really should be getting with that SIEM purchase
  4. “Log Management at $0 and 1hr/week?” is pretty much what it says: how to do log management under extreme budget AND time constraints.
  5. A very old post (2009), “Log Management + SIEM = ?”, is about architecting SIEM together with log management.

Also see my past annual “Top Posts” – 2007, 2008, 2009, 2010.

Enjoy!

Wednesday, August 31, 2011

Quick Blogging Update

As I mentioned, due to my joining Gartner, I am not blogging on security here anymore. However, a quick announcement is in order:

Enjoy!

Tuesday, August 02, 2011

Monthly Blog Round-Up – July 2011

Here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

Disclaimer: all this content was written before I joined Gartner on Aug 1, 2011 and is solely my personal view at the time of writing.

  1. “Log Management at $0 and 1hr/week?” is pretty much what it says: how to do log management under extreme budget AND time constraints
  2. “PCI DSS in the Cloud … By the Council” is my quick review of the recent PCI DSS guidance on virtualization, focusing on its cloud computing guidance.
  3. “Top 10 Criteria for a SIEM?” is an EXAMPLE criteria list for choosing a SIEM.
  4. “On Choosing SIEM” is about the least wrong way of choosing a SIEM tool – as well as why the right way is so unpopular. A related read is “SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?”; check it out as well. While reading this, also check this presentation.
  5. “Simple Log Review Checklist Released!” is still one of the most popular posts on my blog. Grab the log review checklist here, if you have not done so already. It is perfect to hand out to junior sysadmins who are just starting out with logs. A related “UPDATED Free Log Management Tools” is also still on top – it is a repost of my free log tools list to the blog.

Also see my past annual “Top Posts” – 2007, 2008, 2009, 2010.

Sunday, July 31, 2011

The Last Blog Post!

This is my last blog post –for the foreseeable future. It is dated 7/31/2011 at 11:59PM. What happens tomorrow? A new life, of course!

As only very few of you know, I have accepted the position of Research Director with Gartner, Inc. Tomorrow I am joining a stellar team led by Phil Schacter, formerly of Burton Group.

I spent two VERY successful years consulting, working with companies like Novell, RSA, LogLogic, NitroSecurity, eGestalt, ObserveIT, Tripwire, AlienVault, “Big MSSP”, “Big Insurance Company”, “SaaS Log Management Company”, “IT Management Software Company”, “SMB Security Company”, “Big Networking Equipment Company” and others. I defined, built, deployed, and marketed security products, mostly in the area of SIEM and log management. I helped organizations with security and PCI DSS strategy. I advised security vendor management on compliance strategy for their products. I have spoken at clients’ events and have written more whitepapers than I care to admit… and did a lot of other fun things!

It was fun and I loved it – and as my clients can attest, I was good at it. Also, I was busier than I thought I’d be, and occasionally busier than I wanted to be. However, at some point I started to feel that I needed another step up. And so I am making that step now!

In accordance with my future employer policy, I have resigned from the Advisory Boards of Dasient, Securonix, nexTier Networks, Savant Protection, eGestalt, and Rapid.IO. Good luck to all of you!

In all likelihood, I will eventually resurface at Gartner blogs – please look for me there. And finally, those who love my personal blogging (all 4007 of you as of today), don’t despair – I will still occasionally blog here on non-infosec subjects: think good books, laser weapons, hypnosis, skiing, travel and my other weird hobbies.

Finally, I want to give very special thanks to Lee Kushner for his super-valuable career counseling that helped me make this difficult career choice.


On SIEM Services

Executive summary: you need to procure services when you buy a SIEM tool; if you don’t, you’ll be sorry later.

Even if you are amazingly intelligent and have extensive SIEM experience – see above. Even if you saw a successful SIEM project that didn’t include vendor or 3rd party services with your very own eyes – see above. Even if your SIEM vendor tells you “you don’t need services” – see above. See above! See above!! See above!!!


Let’s analyze this “SIEM services paradox.” A lot of organizations – way too many, in fact – balk at the need to procure related services before, during and after their SIEM purchase. The thinking often goes like this: we need a SIEM and this box <points at the appliance still in the box> is a SIEM. That’s all we need. What services? Why services? Huh?

In reality – and this is what I sometimes call the “secret to SIEM magic” – that box is not a SIEM. That box, when racked and connected to your network, is STILL not quite a SIEM. Only when you “operationalize” it can you say that you have a Security Information and Event Management (SIEM) capability in your organization and that you do “real-time” security monitoring.

Now, be honest: do you know how to deploy a SIEM tool and then figure out the shortest path to its operational success? Probably not… thus the need for services/consultants who will work WITH you to make it a reality by arriving at the best possible way of benefiting from SIEM in your environment. Which use case gives you the best bang for the SIEM buck? Which one will show a “quick win” to your management? Which one is more likely to detect an attacker in your network?

When a SIEM vendor tries to sell you services, it is NOT vendor greed – but simply common sense. And if you say “no”, it is not “saving money” – but being stupid. SIEM success out-of-the-box (while real, in some cases!) is a pale shadow of what a well-thought-through deployment looks like! My [broken] analogy: it is like buying a nice shiny Aston Martin and then only using it to commute to a train station 1 mile from home. Will it work? Yes. Is this a good investment and a good experience? Hell no!

So, no, SIEM is NOT useless without services, but it is unlikely to reach its full potential. Pitfalls to SIEM success are many, and navigating them requires help.

And, no, outsiders alone cannot do it. You will need to help them help you.

This also leads to the rise of managed or co-managed SIEM options (which are NOT MSSPs, BTW!) as more people realize that a) they need a SIEM and b) they cannot handle a SIEM. Future cloud SIEM will (when it emerges) try to tackle the same problem of being simpler to operate and thus simpler to operationalize.

Today, most SIEM vendors offer an extensive menu of services to go with a product, and there are also some smart third parties. Many services around SIEM can be organized as follows.

Pre-sale services examples:

  • Product selection help
  • Vendor differentiation analysis and shortlist definition
  • Regulation analysis and business cases review
  • Product strengths/weaknesses analysis
  • Product fit for type of project
  • Product fit for vertical / business type
  • RFP definition assistance
  • Current tools vs requirements gap analysis

Services offered during SIEM acquisition and deployment:

  • SIEM implementation
  • SIEM project planning
  • Proof-of-concept deployment management
  • Product testing in production environment
  • Data source integration and collection architecture
  • Default contents tuning

Post-sale, operational services:

  • SIEM analyst training
  • Performance tuning and capacity planning
  • SIEM project management
  • Custom content creation
  • Custom device integration
  • SOC building

Vendors and consulting firms offer other types of services as well, all the way up to “co-managed SIEM”, where a 3rd party firm manages your SIEM deployment for you. Will future SIEM work better out of the box? Yes, I think so. Will SIEM ever be as simple as a firewall? No, never: it is the inherent complexity of security monitoring that cannot be squeezed out even by creative engineering…

Enjoy ... as this was my final blog post on SIEM.


Saturday, July 30, 2011

Old Content Posted: Presentations, Documents, etc

In preparation for a career change (stand by for an announcement on midnight July 31, 2011), I am posting A LOT of my old presentations and documents online for the community.

See http://www.slideshare.net/anton_chuvakin/presentations for such gems as my HITB 2010 keynote “Security Chasm”, “Brief SIEM Primer”, “Making Log Data Useful”, as well as the most recent “Five Best and Five Worst SIEM Practices”.

See http://www.docstoc.com/profile/anton1chuvakin for a bunch of older documents on security, logging, SIEM, PCI DSS – including such gems as “Logging Haiku”, a firewall logging primer, etc.

Enjoy!

Friday, July 29, 2011

On Broken SIEM Deployments

Imagine you own a broken, dilapidated, failing SIEM crap deployment. What? Really… that, like, never happens, dude! SIEM is what makes unicorns shine and be happy all the time, right?

Well…mmm… no comment. In this post, I want to address one common #FAIL scenario: a SIEM that is failing because it was deployed with the goal of real-time security monitoring, while the company was nowhere near ready (not mature enough) to have any monitoring process and operations (see the criteria for it). On my log/SIEM maturity scale (presented here; also see this related post from Raffy), such organizations are either in the ignorance phase or maybe the log collection phase.

And herein lies the problem: if you deployed one of the legacy, born-in-the-1990s SIEMs that are not based on a solid log management platform, the tool will actually suck at the most fundamental level: log collection. The specific issue here is that most of these early tools were designed to selectively collect only what was deemed necessary for real-time security monitoring (vs all log data). In essence, you have a tool with monitoring features (that you don’t use) and with weak collection features (that you can use, but they are weak).

What to do? You have these options:

  1. Leave it to rot; you can always keep it just to boast to your friends (and PCI QSAs) that “ye own one of ‘em olde SIEMs”
  2. Blow it away and join the “SIEM doesn’t work” crowd – and maybe buy a simple log management tool later
  3. Deploy a log management tool to “undergird” your crappy SIEM; you have a choice of buying from the same SIEM vendor (if they have it) or a different vendor
  4. Build your own log management layer on syslog and open source tools

I have seen people take each of the above four paths. Personally, I have seen much more success with option #3 (buy log management) and not infrequently with #4 (build log management) – BTW, this deck might help you choose. You want to move your SIEM setup from the “get some logs – ignore all logs” model to “collect all/more logs – review some logs”, which is typically much better aligned with your level of maturity. And then grow and solve more problems with your SIEM and demonstrate “quick wins.” While you are at it, review some architecture choices discussed here.

Enjoy …while it lasts.


Thursday, July 28, 2011

Got A Pile of Logs from an Incident: What to Do?

As I am going through my backlog of topics I wanted to blog about (but didn’t have time for over the last 4-6 months), this is the one I really wanted to explore. Here is the scenario:


  1. Something blows up, hits the fan, starts to smell bad, <insert your favorite incident metaphor> … either in your IT environment or at one of your clients’
  2. Logs (mostly) and other evidence are taken from all the components of the affected system and packaged for offline analysis
  3. You get a nice 10MB-10GB pile of juicy log data – and they want “answers”
  4. What do you do FIRST? With what tools?

Let’s explore this situation. I know most of you would say “just pile ‘em into splunk” and, of course, I will do that. However, that is not the full story. As I pointed out in this 2007 blog post (“Do You Enjoy Searching?”), to succeed with search you need to know what to search for. At this point of our incident investigation, we actually don’t! Meanwhile, any volume of log data beyond a few megabytes makes the “trial and error” approach of searching for common clues fairly ineffective.

If you received any hints with the log pile (“I think the user ‘jsmith’ did it” or “it seems like the 10.1.3.2 IP was involved”), then you can search for this (and then branch out to co-occurring and related issues and drill down as needed), but then your investigation will suffer from the “tunnel vision” of only seeing this initially reported issue – and that is, obviously, a bad idea.

Let’s take a step back and think: what do we want here? What is our problem? We want a way to explore ALL the logs in a pile, across log types, across devices, across all time, AND then also follow a timeline of events. In other words, we ain’t in “searchland” here, buddy…

If you have an enterprise SIEM sitting around (and one with well-engineered support for diverse historical log imports – which is NOT a certainty, BTW), you should definitely load the logs there as well. I like this approach since you can then run cross-device summary reports over the entire set, slice the set in various ways (type of log source, log source instance, type of log entry – categorized, time period filter, time trend, etc.), and use data visualization tools (treemaps, trend lines, link maps, and other advanced visuals on parsed, normalized and categorized data) to get a big-picture view of the pile.

Looking at the open source log tools, does anything look promising for the task? OSSIM can do the trick (even though their historical log import is not my favorite), but nothing else does. In some cases, I used sawmill (free trial) for my “big picture first look”, but it is not cross-device and only shows reports for each log type individually. If I were feeling really adventurous (and was on hourly billing), I could actually send all the logs via a syslog streamer into OSSEC (in order to see the log entries the tool will flag as interesting/alertable), but this is not really something I’d enjoy doing. I am almost tempted to say that you can use something like afterglow, but it relies on parsed data that you’d still need to cook somehow (such as, again, by using a SIEM). Log2timeline is useful, but only for one dimension – and the one that splunk actually addresses pretty well already.

To generalize, you need (a) a search tool and (b) an exploration tool. The search tool should help you quickly answer SPECIFIC questions. The exploration tool should use data to generate “hints” on WHAT questions you should start asking…
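For that first-look exploration step, even a crude script can generate “hints” from raw syslog before any SIEM import. Below is a minimal sketch, assuming classic BSD-syslog-style lines; the regex and the sample entries are illustrative assumptions, not any product’s parser:

```python
# A minimal "exploration tool" sketch: tally events per (host, program) and
# per hour of day across a pile of syslog-style lines, to hint at WHERE to
# start asking questions. Lines the regex cannot parse are skipped (they
# deserve their own look later).
import re
from collections import Counter

SYSLOG_RE = re.compile(
    r"^(?P<month>\w{3}) +(?P<day>\d+) +(?P<time>[\d:]+) +"
    r"(?P<host>\S+) +(?P<prog>[\w./-]+)(?:\[\d+\])?:"
)

def summarize(lines):
    """Return (Counter by (host, program), Counter by hour-of-day)."""
    by_source, by_hour = Counter(), Counter()
    for line in lines:
        m = SYSLOG_RE.match(line)
        if not m:
            continue
        by_source[(m.group("host"), m.group("prog"))] += 1
        by_hour[m.group("time")[:2]] += 1
    return by_source, by_hour
```

Feeding the whole pile through this and eyeballing `by_source.most_common()` and the hourly distribution gives a rough “loudest sources and activity spikes” picture – exactly the kind of hint generator that search alone does not provide.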

Wednesday, July 27, 2011

Top 10 Criteria for a SIEM?

OK, this WILL be taken the wrong way! I spent years whining about how use cases and your requirements should be THE MAIN thing driving your SIEM purchase. And suddenly Anton shows up with a simple ‘Top 10 list’, so…. blame it on that cognac.

This list is AN EXAMPLE. SAMPLE. ILLUSTRATION. It is here FOR FUN. If you use it to buy a SIEM for your organization, your girlfriend will sleep with your plumber.  All sorts of bad things can and likely will happen to you and/or your dog – and even your pet squirrel might go nuts. Please look up the word “EXAMPLE” in the dictionary before proceeding!

On top of that, this list was built with some underlying assumptions which I am not at liberty to disclose. Think large, maybe think SOC, think complex environment, etc. Obviously, an environment with its own peculiarities… just like yours.

With that out of the way, Top 10 Criteria for Choosing a SIEM … EXAMPLE!

1. User interface and analyst experience in general: ease of performing common tasks, streamlined workflows, small number of clicks to critical functions and efficient and quick information lookups (including external information) when needed during the investigation

2. Correlation: correlation engine performance, ease of rule creation and modification, canned rule content, cross-device correlation based on normalized/categorized data; additional analytics methods including analysis of stored/historical log data; ability to test rules before production deployment

3. Log source coverage: full integration of most (better: all) needed log sources before operational deployment, detailed parsing and normalization of all fields needed for the analysts’ work; coverage of device, OS and application logs; wide use of real-time log collection methods, even at a cost of using agents

4. Dashboards and analyst views: availability of required analyst views, flexibility and customization, drilldown capability to see additional details, ease of modification and tuning, real-time operation (not periodic polling)

5. Reporting: report performance, visual clarity, ease of modification and default/canned report content, ability to create custom reports on all data in a flexible manner without knowing the SIEM product internal structures and other esoterica

6. Search and query: high (seconds) performance of searches and queries when investigating an incident, access to raw log data via an efficient search command, tied to the main interface

7. Escalation, shift and analyst collaboration support: a system to manage collaborative investigations of security issues, take notes, add details and review/approve the workflow; likely this requires an advanced case management / ticketing system to be built in

8. Ability to gradually expand storage on demand when the environment is growing; this applies to both parsed/normalized data storage as well as raw log storage

9. Complete log categorization and normalization for cross-device correlation that enables the analysts to “cross-train” and not “device-train” before using the SIEM well.

10. New log source integration technology and process: ability to either quickly integrate new log sources or have vendor do it promptly (days to few weeks) upon request
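To illustrate criterion 9: cross-device normalization boils down to mapping device-specific events onto one shared taxonomy, so a correlation rule is written once rather than per device. A toy sketch follows; the mapping structure and category names are made-up examples (the Windows 4625 and Cisco ASA 106023 event IDs are real, everything else is hypothetical):

```python
# Toy normalization table: (device type, raw event identifier) -> category.
# A real SIEM ships thousands of such mappings; this sketch only shows the idea.
TAXONOMY = {
    ("cisco_asa", "106023"): "firewall.deny",
    ("iptables", "DROP"): "firewall.deny",
    ("windows", "4625"): "auth.failure",
    ("sshd", "Failed password"): "auth.failure",
}

def normalize(device, raw_event):
    """Map a (device, raw event) pair to a normalized category."""
    return TAXONOMY.get((device, raw_event), "unknown")

def count_auth_failures(events):
    """One rule, many devices: count auth failures regardless of log source."""
    return sum(1 for dev, evt in events if normalize(dev, evt) == "auth.failure")
```

The payoff is exactly the “cross-train, not device-train” point: the analyst writing or reading `count_auth_failures` never needs to know what event 4625 means on Windows.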

Got any comments?

If not, well, enjoy it … while it lasts.


Tuesday, July 26, 2011

NIST EMAP Workshop–Aug 2011

A lot of good work on logging standards as well as standards for the “surrounding areas” (correlation rules, parsing rules, etc) will happen at this first-ever NIST workshop on EMAP.

Please mark your calendars to save the date for an EMAP Developer Workshop to be held August 29-30, 2011 at the NIST Campus in Gaithersburg, Maryland.  We are still formalizing the agenda, but topics to be covered will include:

  • Discussion of target use cases and requirements as identified by the EMAP working group
  • CEE overview and in-depth discussion of current issues
  • Discussion of EMAP component specifications and issues/questions for the community
  • Discussion of the EMAP roadmap and connections with other efforts within security automation

We are in the process of standing up a registration page and creating the agenda. A teleconference line will be provided for those who cannot attend in person. More details to come in the near future; we hope to see you there.

If you are dealing with logs and SIEM (such as building, or even using the tools) and care about standards, please consider attending – but only if you will contribute!


Monday, July 25, 2011

Speaking at Catalyst 2011 in San Diego Tomorrow

Just FYI, I am speaking at the Gartner Catalyst 2011 event in San Diego tomorrow. The topic is “Five Best and Five Worst Practices for SIEM.”

“Implementing SIEM sounds straightforward, but reality sometimes begs to differ. In this session, Dr. Anton Chuvakin will share the five best and worst practices for implementing SIEM as part of security monitoring and intelligence. Understanding how to avoid pitfalls and create a successful SIEM implementation will help maximize security and compliance value, and avoid costly obstacles, inefficiencies, and risks.”

Time: Tuesday, 26 July 2011
02:45 PM to 03:20 PM

Location:   Hilton San Diego Bayfront
1 Park Boulevard
San Diego, CA 92101

If you are around, come see me here.

Log Management at $0 and 1hr/week?

As I was drinking cognac on the upper deck of a 747, flying TPE-SFO back from a client meeting, the following idea crossed my mind: CAN one REALLY do a decent job with log management (including log review) if their budget is $0 AND their “time budget” is 1 hour/week? I got asked that when I was teaching my SANS SEC434 class a few months ago and the idea stuck in my head – and now cognac, courtesy of China Airlines, helped stimulate it into a full blog post.

So, a $0 budget points to using open-source, free tools (duh!), but 1hr/week points in exactly the opposite direction: a commercial or even outsourced model.

The only slightly plausible approach that I came up with is:

  1. Spend your 1st hour building a syslog server; it can be done, especially if starting from an old Linux box that you found in the basement (at $0); don’t forget logrotate or equivalent
  2. Spend the next few weeks (i.e. hours) configuring various Unix, Linux and network devices (essentially, all syslog log sources) to log to it
  3. Consider deploying Snare on a few Windows boxes (if needed); it would likely be easier than doing remote pull, which might need too much tuning
  4. Next, drop a default OSSEC install on your log server and – gasp! – enable all alerts
  5. Spend the next few hours (in the next few weeks) turning off the alerts that are too numerous, irrelevant or don’t trigger any action in your environment
  6. If your log volume fits within the free splunk license size (500MB/day), also spend an hour deploying splunk on your log server and have it index all gathered logs
  7. Now you’d be spending your “one log hour each week” on reviewing alerts and (if installed) digging in splunk for additional details
  8. Congrats! $0 and 1hr/week gave you a semblance of log management and even monitoring…
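For illustration only, the syslog server in step 1 can be faked in a few lines of Python; a real $0 deployment would of course use rsyslog or syslog-ng (plus logrotate), and the port and file path below are arbitrary assumptions:

```python
# Bare-bones UDP syslog sink: append whatever devices send to a flat file.
# This is a sketch of the idea only; it does no parsing, rotation or auth.
import socket

def collect_syslog(out_path, port=5514, count=None):
    """Receive UDP syslog messages and append them, one per line, to out_path.

    `count` limits how many messages to accept (None = run forever).
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    taken = 0
    with open(out_path, "a") as out:
        while count is None or taken < count:
            data, _addr = sock.recvfrom(8192)
            out.write(data.decode("utf-8", "replace").rstrip("\n") + "\n")
            out.flush()
            taken += 1
    sock.close()
```

Pointing a device at this sink (or at real rsyslog on UDP/514) is all step 2 amounts to; everything after that is the tuning work described above.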

What do you think? It just might work for organizations with severe time AND money constraints.

Enjoy the post … while it lasts.

BTW, on a completely unrelated note:  do you think EVERY organization above a certain size NEEDS a SIEM? Or WILL NEED a SIEM in the future?

Monday, July 18, 2011

Job: Director of Product Marketing at SIEM Vendor

I am posting this as a small favor to my friends at NitroSecurity.

Description:

The Director, Product Marketing is responsible for developing, planning and executing externally-focused product marketing strategies, plans & programs for the industry-leading NitroView SIEM, log management, database monitoring, application monitoring and IDS/IPS solution. They will research & understand security market trends by working with industry analysts and engaging prospects & customers, closely monitor & analyze competitor offerings, and develop value propositions, product positioning and messages for enterprise and government markets worldwide. They will drive and lead all new product launch and introduction activities, and support on-going product and solution campaigns and programs.

Candidates in metro Boston, metro Washington DC or open to virtual, home office arrangements are welcomed to apply to jobs@nitrosecurity.com.

Responsibilities:

a. Work closely with Product Management, Engineering and Operations to fully understand current and planned technologies, products and solutions

b. Conduct competitive research and provide analysis on competitive advantages & competitor claims relative to customer needs.

c. Determine product positioning & product messaging and create & manage a broad range of product and solution collateral, on-line content, white papers, blogs & sales tools

d. Develop and deliver new product training to field sales, systems engineers and channel partner and technology partner organizations

e. Key company spokesperson, presenting to prospects, customers, partners, press and analysts in person, via webcasts and at industry conferences.

Experience and Qualifications:
a. 10+ years of product marketing experience in security/networking assignments
b. 5+ years security industry experience

c. Excellent speaking, writing and presentation skills
d. Strong analytical skills, including business, markets and competition

e. Team player with proven success in high growth environments

f. Technical undergraduate degree preferred or equivalent, MBA or equivalent advanced degree preferred

Apply via: jobs@nitrosecurity.com.

Monday, July 04, 2011

PCI in the Cloud Class July 8: Location Finalized

Just a quick announcement about my “PCI in the cloud” class that I am teaching this week. The location has been finalized:
Location (map):
Ariba Silicon Valley Office
Sequoia Conference Room

910 Hermosa Court,
Sunnyvale, CA

(please use the main entrance and tell the receptionist that you are there for the CSA PCI class; lunch and coffee will be provided)
Date: Friday July 8, 2011 at 9AM
There are still, I think, 2-3 seats left at $20/seat (beta price! must provide class feedback!!), so go and register here.

UPDATE: 7/4/2011 5:50PM Sorry, sold out! I will check with the host tomorrow about the room size and there is a slight chance that we can fit more than 25 people, but it is not a certainty.


Friday, July 01, 2011

Monthly Blog Round-Up – June 2011

Blogs are “stateless” and people often pay attention only to what they see today. Thus a lot of useful security reading material gets lost. These monthly round-ups are my way of reminding people about interesting and useful blog content. If you are “too busy to read the blogs,” at least read these.

So, here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

  1. “PCI DSS in the Cloud … By the Council” post is my quick review of the recent PCI DSS guidance on virtualization, focusing on its cloud computing guidance.
  2. “On Choosing SIEM” tops the charts again this month. The post is about the least wrong way of choosing a SIEM tool – as well as why the right way is so unpopular. A related read is “SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?”, check it out as well. While reading this, also check this presentation.
  3. “Simple Log Review Checklist Released!” is still one of the most popular posts on my blog. Grab the log review checklist here, if you have not done so already. It is perfect to hand out to junior sysadmins who are just starting up with logs. A related “UPDATED Free Log Management Tools” is also still on top – it is a repost of my free log tools list to the blog.
  4. “Algorithmic SIEM “Correlation” Is Back?” is a post that I never thought would make it to my monthly top, as it covers a bit of SIEM esoterica. Surprise!
  5. “NIST EMAP Out” is my quick announcement/summary of the NIST EMAP standards effort, the log/event “brother” of SCAP and an extension of the CEE work.

Also, as a tradition, I am thanking my top 3 referrers this month (those who are people, not organizations). So, thanks a lot to the following people whose blogs sent the most visitors to my blog:

  1. Anonymous “PCI Guru”
  2. Dmitry Orlov
  3. Lenny Zeltzer

Also see my past annual “Top Posts” – 2007, 2008, 2009, 2010. Next, see you next month for the next monthly top list.


Thursday, June 16, 2011

PCI DSS in the Cloud … By the Council

The long-awaited PCI Council guidance on virtualization has been released [PDF]. Congrats to the Virtualization SIG for the mammoth effort! I rather liked the document, but let the virtualization crowd (and press!) analyze it ad infinitum – I’d concentrate elsewhere: on the cloud! This guidance does not focus on cloud computing, but contains more than a few mentions, all of them pretty generic.

Here are some of the highlights and my thoughts on them.

Section 2.2.6 “Cloud Computing” does contain some potentially usable (if obvious) scope guidance:

“Entities planning to use cloud computing for their PCI DSS environments should first ensure that they thoroughly understand the details of the services being offered, and perform a detailed assessment of the unique risks associated with each service. Additionally, as with any managed service, it is crucial that the hosted entity and provider clearly define and document the responsibilities assigned to each party for maintaining PCI DSS requirements and any other controls that could impact the security of cardholder data.” [emphasis by A.C.]

Now, after spending the last few months working on a training class on PCI DSS in the cloud for the Cloud Security Alliance (in fact, I am still finishing the exercises for our July 8 beta run), the above sounds like a total no-brainer. However, I know A LOT of merchants “plan” to make the mistake of “we use a PCI-OK cloud provider –> then we are compliant”, which is obviously completely insane (just as using a PA-DSS payment app does not make you PCI DSS compliant… and never did).

Further, the council guidance clarifies the above with:

“The cloud provider should clearly identify which PCI DSS requirements, system components, and services are covered by the cloud provider’s PCI DSS compliance program. Any aspects of the service not covered by the cloud provider should be identified, and it should be clearly documented in the service agreement that these aspects, system components, and PCI DSS requirements are the responsibility of the hosted entity [aka “merchant” – A.C.] to manage and assess. The cloud provider should provide sufficient evidence and assurance that all processes and components under their control are PCI DSS compliant.” [emphasis in bold by A.C.]

The above is actually a gem: a nicely condensed version of a pile of challenges and hard problems. Indeed, “PCI in the cloud” is largely about the above paragraph, but … there is A LOT OF DEVIL in the details. I’d like to draw your attention to the fact that providers have to “provide sufficient evidence and assurance” as opposed to just saying “we got PCI Level 1.” There is a big lesson for cloud providers in it…

In further sections (section 4.3, mostly), there is some additional useful guidance, such as:

“In a public cloud, some part of the underlying systems and infrastructure is always controlled by the cloud service provider. The specific components remaining under the control of the cloud provider will vary according to the type of service—for example, Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). […] Physical separation between tenants is not practical in a public cloud environment because, by its very nature, all resources are shared by everyone.” [emphasis by A.C. again; this reminds us that PCI does NOT in fact require such ‘physical’ separation of assets]

On top of this, the Council folks also highlight some of the additional cloud security challenges, affecting PCI DSS, such as (page 24, section 4.3):

  • “The hosted entity has limited or no visibility into the underlying infrastructure and related security controls.
  • “The hosted entity has no knowledge of “who” they are sharing resources with, or the potential risks their hosted neighbors may be introducing to the host system, data stores, or other resources shared across a multi-tenant environment.” [the section in bold is kind of a hidden big deal! think about it – your payment environment may blow up since your cloud neighbor just annoyed LulzSec by something they said on Twitter…]

The guidance counters these and other challenges with additional controls:

“In a public cloud environment, additional controls must be implemented to compensate for the inherent risks and lack of visibility into the public cloud architecture. A public cloud environment could, for example, host hostile out-of-scope workloads on the same virtualization infrastructure as a cardholder data environment. More stringent preventive, detective, and corrective controls are required to offset the additional risk that a public cloud, or similar environment, could introduce to an entity’s CDE.” [notice MUST, not “may” or “should”; also notice REQUIRED and not “suggested” or “oh wow, would it be nice if…”]

And if you don’t have such additional controls, then: “These challenges may make it impossible for some cloud-based services to operate in a PCI DSS compliant manner.”

In any case, it was definitely a fun and useful read; hopefully, future detailed guidance on PCI in the cloud is coming (given that the Virtualization SIG took a few years, I am looking forward to 2017 or later here).

BTW, my PCI DSS in the cloud training class will happen on July 8 in the Bay Area and you can still sign up.

Tuesday, June 14, 2011

Algorithmic SIEM “Correlation” Is Back?

Back in 2002 when I was at a SIEM vendor that shall remain nameless (at least until they finally die), I fell in love with algorithmic "correlation." Yes, I can write correlation rules like there is no tomorrow (and have fun doing it!), but that’s just me – I am funny that way. A lot of organizations today will rely on default correlation rules (hoping that SIEM is some kinda “improved host IDS” of yesteryear … remember those?) and then quickly realize that logs are too darn diverse across environments to make such naïve pattern matching useful for many situations. Other organizations will just start hating SIEM in general for all the false default rule alerts and fall back into the rathole of log search, aka the “we can figure out what happened in days, not months” mindset.

That problem becomes even more dramatic when they try to use mostly simple filtering rules (IF username=root AND ToD>10:00PM AND ToD<7:00AM AND Source_Country=China, THEN ALERT “Root Login During Non-Biz Hours from Foreign IP”) and not stateful correlation rules written with their own applications in mind. As a result, you'd be stuck with ill-fitting default rules and no ability to create custom, site-specific rules or even intelligently modify the default rules to fit your use cases better. Not a good situation – well, unless you are a consultant offering correlation rule tuning services ;-)
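To make the contrast concrete, here is a minimal Python sketch of a stateless filter rule versus a stateful correlation rule. The event field names, thresholds and logic are my own illustration, not any particular SIEM's rule language:

```python
from collections import defaultdict

# Hypothetical event fields ("user", "hour", "src_country", "src_ip", "outcome")
# are invented for illustration.

def stateless_filter(event):
    """Fires on any single matching event - like the naive rule in the text."""
    return (event["user"] == "root"
            and not (7 <= event["hour"] < 22)   # outside business hours
            and event["src_country"] == "CN")

class StatefulRule:
    """Fires only when a burst of failed logins is followed by a success
    from the same source - something a single-event filter cannot express."""
    def __init__(self, threshold=5):
        self.threshold = threshold
        self.failures = defaultdict(int)  # src_ip -> consecutive failure count

    def feed(self, event):
        ip = event["src_ip"]
        if event["outcome"] == "failure":
            self.failures[ip] += 1
            return False
        # success: fire if it followed enough failures, then reset the counter
        fired = self.failures[ip] >= self.threshold
        self.failures[ip] = 0
        return fired
```

The stateful rule has to remember something between events; that memory is exactly what makes it both more useful and more expensive to tune than a plain filter.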

One of the ways out, in my opinion, is in wide use of event scoring algorithms and other ruleless methods. These methods, while not without known limitations, can be extremely useful in environments where correlation rule tuning is not likely to happen, no matter how many times we say it should. By the way, algorithmic or "statistical" correlation typically has little to do with either correlation or statistics. A more useful way to think about it is weighted event scoring or weighted object scoring (where the object is an IP address, port, username, asset or a combination of these).

So, in many cases back then, people used a naïve risk scoring where:
risk (for each destination IP inside) = threat (= event severity derivative) x value (= user-entered for each targeted “asset” – obviously a major Achilles heel in real-world implementations) x vulnerability (= derived from vulnerability scan results)

It mostly failed to work when used for real-time visualization (not historical profiling) and was also really noisy for alerting. Even such a simplistic algorithm, however, still presents a very useful starting point from which to develop better methods: post-process and baseline the data, add dimensions, etc. It was also commonly not integrated with rules, extended asset data, user identity, etc.
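For illustration, the naïve formula above can be sketched in a few lines of Python; the 0–10 factor scales and the sample numbers are my assumptions, not any vendor's actual implementation:

```python
def naive_risk(threat, asset_value, vulnerability):
    """risk (per destination IP) = threat x value x vulnerability.

    Each factor is assumed to be on a 0-10 scale here; historically,
    threat came from event severity, value from user-entered asset worth
    (the Achilles heel mentioned above), and vulnerability from scan results.
    """
    return threat * asset_value * vulnerability

# Two hypothetical internal destinations hit by the same severity-8 events:
scores = {
    "10.0.0.5": naive_risk(threat=8, asset_value=9, vulnerability=7),  # critical, unpatched
    "10.0.0.9": naive_risk(threat=8, asset_value=2, vulnerability=1),  # low-value, patched
}
```

The multiplication is the whole trick: the same event stream scores very differently depending on where it lands, which is also why a single bad user-entered asset value poisons every score downstream of it.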

Let’s now fast forward to 2011. People still hate the rules AND rules still remain a mainstay of SIEM technology. However, it seems like the algorithmic ruleless methods are making a comeback, with better analysis, profiling, baselining and better rule integration. For example, this recent whitepaper from NitroSecurity (here, with registration) covers the technology they acquired when LogMatrix/OpenService crashed, now integrated into NitroESM. The paper covers some methods of event scoring that I personally know to work well. For example, a trick I used to call “2D baselining”: not just tracking user actions over time and activities on destination assets over time, but tracking the user<->asset pair over time. So, “jsmith” might be a frequent user on “server1” but only rarely goes to “server2”, and such pair scoring will occasionally show some fun things from the “OMG, he is really doing it!” category.
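A rough sketch of what such user<->asset pair baselining could look like; the class, the history floor and the rarity ratio are my own illustration, not NitroSecurity's actual algorithm:

```python
from collections import Counter

class PairBaseline:
    """Track how often each (user, asset) pair is seen and flag pairs that
    are rare relative to the user's overall activity."""
    def __init__(self):
        self.pair_counts = Counter()   # (user, asset) -> observations
        self.user_counts = Counter()   # user -> total observations

    def observe(self, user, asset):
        self.pair_counts[(user, asset)] += 1
        self.user_counts[user] += 1

    def is_anomalous(self, user, asset, min_history=20, max_ratio=0.05):
        seen = self.user_counts[user]
        if seen < min_history:
            return False  # not enough history to judge this user yet
        ratio = self.pair_counts[(user, asset)] / seen
        return ratio <= max_ratio  # this pair is rare for this user
```

So "jsmith" hammering "server1" daily never fires, while his first-ever touch of "server2" does, without anyone having to write a rule about either server.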

So, when you think SIEM, don’t just think “how many rules?” – think “what other methods for real-time and historical event analysis do they use?”


Wednesday, June 08, 2011

NIST EMAP Out

As those in the know already know, NIST has officially released some EMAP materials the other day (see scap.nist.gov/emap/). EMAP stands for “Event Management Automation Protocol” and has the goal of “standardizing the communication of digital event data.” You can think of it as a future “SCAP for logs/events” (SCAP itself is for configurations and vulnerabilities). Obviously, the two standards will be “Siamese twins” with multiple connection points (such as through CVE, CPE and others).

In reality, SCAP and EMAP are more like “standard umbrellas” and cover multiple constituent security data standards – such as CPE, CVE, CVSS, XCCDF, etc. for SCAP and CEE for EMAP. As the new EMAP site states:

The Event Management Automation Protocol (EMAP) is a suite of interoperable specifications designed to standardize the communication of event management data. EMAP is an emerging protocol within the NIST Security Automation Program, and is a peer to similar automation protocols such as the Security Content Automation Protocol (SCAP). Where SCAP standardizes the data models of configuration and vulnerability management domains, EMAP will focus on standardizing the data models relating to event and audit management. At a high-level, the goal of EMAP is to enable standardized content, representation, exchange, correlation, searching, storing, prioritization, and auditing of event records within an organizational IT environment.

[emphasis by me]

While the CEE team continues its work on the log formats, taxonomy, profiles and other fun details of logging events, the broader EMAP effort creates a framework around it and proposes a set of additional standards related to correlation, parsing rules, event log filtering, event log storage, etc. The released deck [PDF] has these details as well as some use cases for EMAP, such as Audit Management, Regulatory Compliance, Incident Handling, Filtered Event Record Sharing, Forensics, etc.

In the future, I expect EMAP to include event log signing, maybe its own event transport (run under the CEE component standard), as well as a bunch of standardized representations for correlation (via the CERE component standard) and parsing rules (via OEEL) to simplify SIEM interoperability as well as migration.

Everything public to read on EMAP is linked here (2009), here (2010), here, etc. [links are PDFs], if you are into that sort of reading. SIEM/log management vendors, please pay attention – some of you have already started implementing this stuff….

Wednesday, June 01, 2011

Monthly Blog Round-Up – May 2011

Blogs are "stateless" and people often pay attention only to what they see today. Thus a lot of useful security reading material gets lost. These monthly round-ups are my way of reminding people about interesting and useful blog content. If you are “too busy to read the blogs,” at least read these.

So, here is my next monthly "Security Warrior" blog round-up of top 5 popular posts/topics this month.

  1. “On Choosing SIEM” tops the charts this month. The post is about the least wrong way of choosing a SIEM tool – as well as why the right way is so unpopular. A related read is “SIEM Resourcing or How Much the Friggin’ Thing Would REALLY Cost Me?”, check it out as well. While reading this, also check this presentation.
  2. My commentary on the latest SIEM Magic Quadrant 2011 (“On SIEM MQ 2011”) is next – I not only share my insights, but also point out some unintentional hilarity in the report.
  3. “What To Do When Logs Don’t Help: New Whitepaper” announces my new whitepaper (written under contract for Observe-IT) about using other means for activity review and monitoring when logs are either not available or hopelessly broken.
  4. Also, “How to Replace a SIEM?” is on the list – it talks about the messy situation when you have to replace one SIEM/log management tool with another.
  5. “Simple Log Review Checklist Released!” is still one of the most popular posts on my blog. Grab the log review checklist here, if you have not done so already. It is perfect to hand out to junior sysadmins who are just starting up with logs. A related “UPDATED Free Log Management Tools” is also still on top – it is a repost of my free log tools list to the blog.

Also, as a tradition, I am thanking my top 3 referrers this month (those who are people, not organizations). So, thanks a lot to the following people whose blogs sent the most visitors to my blog:

  1. Anonymous “PCI Guru”
  2. Dmitry Orlov
  3. Anonymous “SIEM Ninja”

Also see my past annual “Top Posts” – 2007, 2008, 2009, 2010. Next, see you next month for the next monthly top list.


Tuesday, May 31, 2011

PCI DSS in Cloud Computing Environments–THE Training

It took many long weeks to create and now it is …. OUT!!! Sign up here now if you are in the Bay Area on July 8, 2011. The training is being offered free by the Cloud Security Alliance (well, we ask for $20 to offset the pizza costs) in exchange for your feedback, and participation is very limited. I would not be surprised if future production “runs” cost attendees 30x–50x the above “price”, since this is a full-day class focused solely on PCI DSS and cloud environments (likely 9AM–4PM with a few breaks).

The initial PCI DSS Cloud Training Class will be held in Silicon Valley on July 8, 2011; exact location to be determined.

The first ever class dedicated to assessing and implementing PCI DSS controls in cloud computing environments covers how to think of and how to do PCI DSS in various cloud computing environments. Focused primarily on people familiar with PCI DSS, it starts from the “hype-free” cloud computing facts and then delves into key scenarios where PCI DSS and clouds overlap in the real world. You will learn where to look while assessing such environments and what pitfalls and mistakes to avoid. It will also cover the shared responsibility between service providers and merchants in implementing PCI DSS controls. Specifically, we will discuss how PCI DSS Requirement 12.8 applies to various cloud scenarios.

The class would be most useful to PCI DSS QSA, organizations offering PCI DSS consulting as well as merchants planning or implementing PCI compliance.

BTW, in addition to the class materials, I am preparing some “goodies” such as control spreadsheets and implementation tips that should work for various cloud and payment environments. There will be some fun exercises as well!

See you there! I will post updates and maybe even some materials as time progresses.

Thursday, May 26, 2011

Log Management->SIEM Graduation Criteria: Violate at Your Own Peril!

Somebody asked me the question “Do I need SIEM or do I need log management?” yesterday again, so I figured I’d repost this “bit of Anton’s wisdom” (ego alert!) so that people can just read this instead of repeatedly bugging me with the question.

Q: How do I figure out whether I need SIEM or log management?

A: You need log management – if you have computers, IT, data, etc. Period! This has not really been a discussion item at all since about 1986 or so. But do you also need a SIEM? You might think you need it, but you would only be able to benefit from it and satisfy that need if your organization meets the following “graduation criteria” from log management to SIEM:

  1. Response capability: The organization must be ready to respond to alerts soon after they are produced. Incident response process/procedures are a must
  2. Monitoring capability: The organization must have, or start to build, a security monitoring capability such as a Security Operations Center (SOC), or at least a team/person/resource dedicated to ongoing periodic monitoring.
  3. Tuning and customization capability: The organization must accept responsibility for tuning and customizing the deployed SIEM tool; pure out-of-the-box SIEM deployments rarely succeed.

(originally written for this paper where the above are clarified in more detail)

Possibly related posts:

  • All my posts about SIEM

Thursday, May 19, 2011

On SIEM MQ 2011

As all of you know, Gartner SIEM MQ 2011 is out – you can see it here (or here) without registration. The quadrant mostly matches my recent SIEM project experience.

My observations follow below:

  • CA “SIEM” and “Log Manager” are finally wiped off the face of the Earth (=removed from SIEM MQ), NetIQ is dumped down to the Niche. As they should be.
  • Honestly, Symantec SSIM in Leaders is a mystery to me; must be those invisible non-competitive deals or EU/APAC deals. I’ve not seen them on an enterprise SIEM shortlist in the US for a loooooooong time. The rest of the leaders match my expectations fully (and four of them have been at some point my consulting clients)
  • Splunk is now officially a [sub-par] SIEM, even though it is really not. Is that good or bad? Well, they got their “honorable mention” for the last few years and now they are in the quadrant. BTW, this example shows that you can make A LOT of money by being free and not in any Magic Quadrant!
  • The Visionaries sector of the MQ galaxy is extremely crowded – but with very different tools, ranging from Prism to Trustwave. Many organizations will choose a tool from this sector, but they need to be careful – read the related posts below for some selection ideas and pitfalls.

BTW, congrats to all the vendors who got added this year: AlienVault, Tripwire, splunk and the regional SIEM guys.

As always, apart from insight, the MQ document has a good share of unintentional hilarity, for example:

  • “This company declined to provide any information to Gartner for this research” (Darwin Awards anybody?)
  • “Customer feedback on product function and support is mixed.” (Anton translation: product usually doesn’t work?)
  • “Non-English-language versions of XYZ are not available.” (Anton’s comment: is everything else about the product perfectly perfect?)

Finally, if anybody is wondering, I think the concept of the Magic Quadrant (whoever at Gartner came up with it) is brilliant. However, many wrong SIEM purchase decisions I’ve seen usually stem from the decision maker’s own ignorance and not from whatever document or market visualization he has in his possession. Keep this in mind…

Rocky, your turn!

Possibly related posts:

Wednesday, May 18, 2011

What To Do When Logs Don’t Help: New Whitepaper

Here is a hard problem: you MUST log, but there are no logs to enable. Or, what is no less common, logs are so abysmal that they don’t help – and don’t fit the regulatory mold (example: PCI DSS Requirements 10.2 and 10.3). Or, logs are “out there in the cloud” and you cannot get them, but compliance is here and requires them.

What to do?

The answer to this eternal question is in my new whitepaper that I have written for Observe-IT (observeit-sys.com)

Executive summary:

This paper covers the critical challenges of implementing PCI DSS controls and suggests creative solutions for related compliance and security issues. Specifically, the hard problem of security monitoring and log review in cloud, legacy, and custom application environments is discussed in depth. Additionally, clarification of key PCI DSS compensating controls is provided. This paper will help you satisfy the regulatory requirements and improve the security of your sensitive and regulated data.

Short version [PDF] (5 pages)

Extended version [PDF] (13 pages)

As usual, the vendor paid the bill, but the thinking and research are all mine (SecurityWarrior Consulting).

Enjoy!


Tuesday, May 17, 2011

PCI Webcast Q&A

From the webcast I’ve done a while back, here are some fun Q&A that I volunteered to answer. PCI DSS literati reading this blog, don’t freak out – this is BASIC, since the webinar was for Level 4 e-commerce merchants.

Q: I have a hosted Card Service Provider, are the SSL tunnel with certificates good enough security?  What PCI say about this?
A: Well, “SSL tunnel with certificates” is good security (at least compared to no SSL!), but is it enough? Not really. PCI DSS has a long list of other security controls which need to be implemented – for example, if you are an e-commerce merchant, web application security is extremely important, likely more so than SSL.

Q: Another crystal ball question. Do you think the day will come when merchants are not permitted to store credit card information in order to be PCI compliant?
A: Well, merchants are not permitted to store CVV data today, merchants are not permitted to store PAN in cleartext, and they are strongly discouraged from storing PANs at all today (example) – all as per PCI DSS. I do not foresee a complete ban on PAN storage, but these rules might well become stronger.

Q: If we are not processing cards at all, but instead are protecting client lists, how much security is needed?
A: The beauty of this question is that it is up to you to determine that risk. There are no regulations to compel you, so you have to make your own decisions based on your own research. The answer might vary from “none” (if these are essentially public) to “a lot” if loss of those lists will destroy your business.

Q: What about ACHDirect processing?
A: Not under PCI – all risks are yours, same as above. In recent years, a lot of smaller companies have been attacked by ACH credential stealing malicious software.

Q: The question about 2 or 3 things to secure their system.  Could they not just go to dial up credit terminals?
A: They sure can, and it will help protect the card data.

Q: How can a criminal use stolen card data for themselves?
A: Charge cards themselves, resell them in bulk, manufacture cards for resale and use (if Track 2 data is available), buy and resell goods, buy software and then pirate it, etc., etc. Think what you’d do if you were given a “free credit card.”

Q: Retailer that use MPLS networks have historically not had to encrypt data over a "private" network connection like MPLS.  Do you expect MPLS to require data  encryption and firewalling like you find with networks served by public internet connections?
A: No, this is not a “public” network as defined in PCI DSS, at least to the best of my knowledge. So, while encryption and firewalls are “a good idea”, they are not “the law.” Requirement 4.1 states: “Use strong cryptography and security protocols (for example, SSL/TLS, IPSEC, SSH, etc.) to safeguard sensitive cardholder data during transmission over open, public networks.”

Q: When we went to our website provider to close ports as we said it was not PCI compliant. we were told that because there was no CC data being taken through the  site (it's informational only), it doesn't have to be PCI compliant. Is that true?
A: Not exactly true. Public servers are in scope and must be scanned for vulnerabilities; having fewer open ports will help you expose fewer vulnerabilities to the Internet. Now, if you don’t accept credit cards at all in your business, then obviously your website is not under PCI DSS.

Q: We have a third party vendor that handles our payments; what tools can we use to audit our vendor?
A: Likely, you're not talking about technical tools, but “legal tools” like SLAs, agreements, etc.

Q: To be totally honest, we save the CVV number. This is because is it a huge annoyance to have to call the customer every time we need to charge the card. Is there another solution so we don't have to contact our customers for their CVV number?
A: It is OK to save the CVV if you accept the fact that you can never be PCI DSS compliant and will always be in violation of your agreement with your acquiring bank. If I were you, I’d ask your acquiring bank about how to do recurring payments without saving the CVV – it IS possible.

Q: Besides a firewall and web application firewall what other layer of security can be used?
A: Yes, there are many (if you are under SAQ D) – please read PCI DSS. Examples include log management, configuration management, IDS/IPS, FIM, etc.

Q: What about credit card data stored in QuickBooks?
A: QuickBooks does have encryption – do you use it? PANs stored in this application are just like any other stored complete PANs: they need to be encrypted.
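As an aside on display (distinct from storage, where PCI DSS Requirement 3.4 requires rendering stored PANs unreadable): Requirement 3.3 allows showing at most the first six and last four digits of a PAN. A minimal illustrative Python sketch of such display masking – the length sanity check is my own assumption:

```python
def mask_pan(pan: str) -> str:
    """Mask a PAN for display, keeping at most the first six and last
    four digits as PCI DSS Requirement 3.3 permits."""
    digits = "".join(ch for ch in pan if ch.isdigit())
    if not 13 <= len(digits) <= 19:  # plausible PAN lengths (assumption)
        raise ValueError("not a plausible PAN")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]
```

Note that masking what you display does nothing for what you store; stored PANs still need truncation, hashing, or strong encryption.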

Q: What IDS/IPS system would you recommend?
A: Snort is free and is hard NOT to recommend for that reason.

Q: I use PayPal website Pro integrated into my site to process payments. Do I still need a firewall to be PCI DSS compliant?
A: It depends how it is used, but most likely yes (and not just a firewall). Read this for details.

Q: If we use a swipe machine, are we storing data, or is it just transmitted?
A: Depends on the machine – likely just transmitted, but older machines are known to store data and should be replaced whenever possible.

Q: How about some websites/books for learning web security
A: Key web security resources: OWASP and WASC.

Q: What products/solutions do you recommend for managing logs from different types of applications (e.g., web applications) and systems (e.g., /var/log/*) ?
A: Many tools exist, with prices from $0 to (literally) millions; here are some of my favorite free log tools.

Q: How do I know if a website is PCI compliant before I accept credit cards? Should the web host give me a certificate?
A: Ah, good question and you are not the only one to wonder about that. But there's no good answer! Many security seals exist (and some mention PCI DSS scanning on them), but their credibility is frequently called into question.

Q: Why hasn't the term 'passphrase' taken off?  I tell all my users, use a pass phrase, with proper punctuation and spacing.
A: Hard to say – this is a really good way to create long yet memorable passwords.

Q: We still transmit our payment card data over telephone lines. Is that less risky?
A: Yes, much less risky. A dial-up terminal makes PCI DSS easier and genuinely reduces the risks to cardholder data.

Q: On the Who/What do Hackers Target question, what are the constraints for including the company data?  Are all companies included or only ones that require PCI compliance?
A: All data is potentially at risk – but payment card data (and now ACH credentials) is easier to profit from, if you are a criminal. Many companies use PCI DSS to learn about security and then expand their knowledge to protect other kinds of data, beyond the card numbers.

Enjoy the basics!


Dr Anton Chuvakin