A long, long, long time ago (eh, like, in 1997 :-)) Marcus Ranum said in his post "artificial ignorance: how-to guide": "... you build a file of common strings that aren't interesting, and, as new uninteresting things happen, you add them to the file." You then watch the log records that remain, which are presumably interesting. This introduced the concept of "negative" filtering, i.e. matching what you know to be "good" and looking at everything else, rather than trying to enumerate the "bad."
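The idea can be sketched in a few lines. This is a minimal, hypothetical illustration (the pattern list and log lines are made up); in practice the "ignore" patterns live in a file that grows every time you triage a new boring record type:

```python
import re

# Hypothetical seed list of "uninteresting" patterns; in real use this
# is a file you keep appending to as new boring record types show up.
IGNORE_PATTERNS = [
    r"sshd\[\d+\]: Accepted publickey",
    r"CRON\[\d+\]: pam_unix\(cron:session\)",
]
compiled = [re.compile(p) for p in IGNORE_PATTERNS]

def interesting(line):
    """A line is interesting only if NO known-boring pattern matches it."""
    return not any(p.search(line) for p in compiled)

logs = [
    "Jan  1 sshd[123]: Accepted publickey for alice",
    "Jan  1 kernel: Out of memory: Kill process 999",
]
flagged = [line for line in logs if interesting(line)]
# Only the unfamiliar kernel OOM line survives the filter.
```

Everything you have already declared boring drops out; anything you have never bothered to classify bubbles up for a human to look at.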
This technique is indeed extremely effective for finding "interesting" (i.e. unusual + actionable; see more, for example, here [PDF]) events in logs. I even wrote a few prototype tools myself - including filters such as "new log record type for this port", "new for this network segment", "new for this sensor", "new for this IP range", etc., or even two-dimensional ones such as "new for this port and destination IP address", which worked really well. All approaches had an adjustable timeout after which the tool would again consider an event to be new (e.g. "new for this week", "new for this month", etc.). However, few vendors chose to incorporate such an analysis approach into their products (bizarre, isn't it?).
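One of those "new for X" filters might look something like this. This is a sketch of the approach, not any of my actual prototypes: the key function picks the dimension (one-dimensional like "new for this port", or two-dimensional like "new for this port and destination IP"), and the timeout makes an event count as new again after it hasn't been seen for a while (all names here are my own invention):

```python
class NeverBeforeSeen:
    """Flag events whose key has not been seen within `timeout` seconds.

    `key_fn` defines the dimension, e.g. lambda e: e["port"] for
    "new for this port", or a tuple of fields for multi-dimensional checks.
    """

    def __init__(self, key_fn, timeout):
        self.key_fn = key_fn
        self.timeout = timeout
        self.last_seen = {}  # key -> timestamp of most recent occurrence

    def is_new(self, event, now):
        key = self.key_fn(event)
        prev = self.last_seen.get(key)
        self.last_seen[key] = now
        # New if never seen before, or last seen longer than `timeout` ago.
        return prev is None or (now - prev) > self.timeout

WEEK = 7 * 24 * 3600  # "new for this week"
detector = NeverBeforeSeen(lambda e: (e["port"], e["dst_ip"]), timeout=WEEK)

events = [
    {"t": 0,    "port": 22,   "dst_ip": "10.0.0.5"},  # first time seen
    {"t": 3600, "port": 22,   "dst_ip": "10.0.0.5"},  # repeat, not new
    {"t": 4000, "port": 3389, "dst_ip": "10.0.0.5"},  # new key, flagged
]
new_events = [e for e in events if detector.is_new(e, now=e["t"])]
```

Swapping the key function gives you the whole family of filters ("new for this sensor", "new for this network segment", ...) from one piece of code.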
And now, 10 years later (!) somebody finally did. In his post on 'Finding Events that have "Never Been Seen" Before' Ron Gula (with Marcus Ranum standing over his shoulder, no doubt :-) ) talks about implementing this in his product. What took you so long? :-)
They say: "Regardless if your event logs are from UNIX systems, router access control violations, wireless access DHCP logs, intrusion detection systems or so on, after a certain period of time, the same events tend to repeat themselves. This is because most of our networks run controlled and automated processes. With this in mind, finding out when something "new" occurs could indicate a security or administration problem." Right on! There is more fun stuff in the follow-up post 'More on "Never Before Seen" Log Events'.