Friday, November 03, 2023

Google Cybersecurity Action Team Threat Horizons Report #8 Is Out! [Medium Backup]

 This is my completely informal, uncertified, unreviewed and otherwise completely unofficial blog inspired by my reading of our eighth Threat Horizons Report (full version) that we just released (the official blog for report #1, my unofficial blogs for #2, #3, #4, #5, #6 and #7).

My favorite quotes from the report follow below:

  • “The cloud compromise factors and outcomes observed in Q2 2023 were largely similar to previous quarters and consistent with the last 12 months of reporting. […] weak credentials continue to represent the largest compromise factor where many observed instances were a result of attackers brute forcing default accounts, Secure Shell (SSH), and the Remote Desktop Protocol (RDP)” [A.C. — as usual, shocking but not surprising. Perhaps the surprise is that it is NOT changing over 2–3 years of ‘clouding’…]
[chart: TH8 report]
  • Here is the data averaged over a few reports — the data is actually fairly stable over time, sad though it may be.
[chart: TH8 report]
  • “In the Q2 2022 Threat Horizons Report, we highlight that a disproportionate percentage of attackers opportunistically use coin mining across Cloud products and alter their tactics to evade discovery. This is consistent with this quarter’s findings, as this is the most observed outcome from compromises.” [A.C. — another ‘resilient’ finding: most cloud attackers just cryptomine]
[chart: TH8 report]
  • … and the data averaged over time:
[chart: TH8 report]
  • “This quarter our teams observed an 8.5% increase in vulnerable software compromises, led primarily by PostgreSQL being the most exploited.” [A.C. — an interesting choice, perhaps some of the instances got ransomed too? Also, I sense this is related to credentials above…]
  • “SaaS providers were also targeted earlier in the year by suspected financially-motivated DPRK actors in order to gain access to downstream victims.” [A.C. — this is interesting, SaaS as a stepping stone! Is SaaS — likely SaaS credentials, frankly, your weakest link?]
  • Finally, here is some interesting data focused on healthcare cloud compromises
[chart: TH8 report]

Now, go and read the report!


ORIGINAL LOCATION: Anton on Security

Frameworks for DE-Friendly CTI (Part 5) [Medium Backup]

 This blog series was written jointly with Amine Besson, Principal Cyber Engineer, Behemoth CyberDefence and one more anonymous collaborator.

In this blog (#5 in the series), we will build a quick “framework-lite” for making CTI to DE flows better.

Let’s review three organizational models of integrating an existing threat intelligence (CTI) team with the detection engineering function for optimum detection work.

Operating Model 1: CTI Feeds SOC / Detection Engineering

Some organizations have a clearly defined and separate CTI team, which supplies information to different teams, functions and recipients. Detection engineering (DE), whether inside or outside the SOC (here, specifically, this distinction may not matter), is just one of the recipients.

When there are organizational reasons why this setup cannot be radically changed, you should define a concrete interface: requirements, expectations, procedures for threat sharing (including rush or priority items), artifact sharing, cadence, and other shared processes.

CTI Feeds SOC / DE

Ultimately, this is the setup that is most affected by all the classic “silo” problems, “us vs them”, finger pointing, cross-blaming, etc. It can work, and it has worked for people, but it sure isn’t the best.

Operating Model 2: CTI Feeds Mini-CTI Inside SOC / DE

A more effective approach is to embed one or more CTI analysts into the Detection Engineering team. Working shoulder-to-shoulder allows shorter turnaround times and a better understanding of processes on both ends, while keeping the CTI team separate and able to serve other stakeholders.

CTI Feeds Mini-CTI Inside DE

In other regards, this is similar to Model 1, but it will likely work better for most organizations (well, for most organizations mature enough to understand all this and that actually have SOC, TI, and similar teams).

Operating Model 3: CTI — DE Peer Team Model

An even better way is what some call a Cyber Fusion Center (because, clearly, we don’t have enough “cyber” in our lives and we miss the cold fusion idea…), a term recently popularized by MSSPs/MDRs and some threat intelligence vendors. To us, this is a model of peer teams that work together for a common mission.

CTI — DE-IR Peer Team Model

This model restructures the CTI, SOC, Hunting (sometimes also a separate team) and CSIRT / IR functions into a single cross-functional organization, with defined but highly collaborative functions serving shared goals and targets. It allows teams to build complex workflows directly, with a single voice, without the trouble of interfacing with separate teams owning different backlogs and agendas. Further, ideally these teams need to be fluid and rotate people, thus obliterating silos (a TI analyst may do a bit of DE, an IR analyst could do threat hunting, etc.).

Naturally, if you are building a shiny new SOC, we’d recommend looking into integrating this fusion center collaborative concept into your architecture: breaking down (or better, not creating) silos is always easier during planning than in an existing organization.

Better CTI for Fun and Profit … and Detection

As we discussed, threat intelligence is key to Detection Engineering; however, as we have seen, its input is sometimes vague, hard to parse, or just insufficient. Frankly, we’ve seen cases where the CTI team is genuinely at a loss when dealing with DE teams, as it doesn’t always understand what is expected.

DE teams expect from CTI not reports (especially not overstuffed 27-page PDF reports), presentations or news articles, but a concrete, well-described knowledge item (like a wiki page) that they can easily understand and that makes it easier to kick off R&D. Ideally, it should be tagged to make the overall knowledge base easy to search, and tracked as an issue in a project management system to build a relevant backlog of threats to detect. These last requirements also make metrics and reporting on detection quality much easier.
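To make this concrete, here is a minimal sketch of what such a tagged, tracked knowledge item could look like as a data structure. All field names and values here are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ThreatKnowledgeItem:
    """One CTI deliverable for the DE backlog (field names are
    illustrative, not a standard)."""
    title: str                 # short name of the adversary behavior
    summary: str               # concrete, well-described behavior
    tags: list = field(default_factory=list)   # makes the knowledge base searchable
    priority: int = 3          # 1 = detect first; drives backlog ordering
    tracking_id: str = ""      # issue key in the project management system
    detection_ideas: list = field(default_factory=list)

# Hypothetical example item, loosely based on the TH8 findings above
item = ThreatKnowledgeItem(
    title="Brute forcing of default SSH accounts",
    summary="Attackers brute force default credentials on exposed SSH, "
            "then deploy coin miners on the compromised instance.",
    tags=["initial-access", "ssh", "credentials"],
    priority=1,
    tracking_id="DE-101",
    detection_ideas=["alert on bursts of auth failures per source IP"],
)
print(asdict(item)["tracking_id"])  # → DE-101
```

Because the item is structured, it can be filed directly into a tracker and searched by tag, which is exactly what enables the backlog and metrics mentioned above.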

Delivering those prioritized, technical threat knowledge items should be a core service of CTI teams focused on helping the DE team. Other functions like IOC compilations, high-level reports and threat landscape analysis are still helpful, but will not deliver what detection engineers need to succeed in their mission.

From the DE perspective, the best intel input is highly cohesive (focuses on describing a particular adversary behavior) and loosely coupled (isn’t so high level that it combines multiple classes of attack in one behavior). This can be seen as the best raw materials to cook tasty finished detections.

Some key markers of helpful intel that will lead to better detection engineering:

  • Focused on describing a single threat, in a particular domain
  • Specific to technologies, protocols, OS, and device types
  • Threat impact is evaluated, so the threat can be prioritized into a rolling backlog
  • Technical, procedure-level information that doesn’t lose track of overarching TTPs and commonalities across variants
  • Displays unique characteristics that help isolate invariable behaviors
  • Threat descriptions that at least hint at detection opportunities
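One way to operationalize these markers is a simple checklist score a CTI analyst fills in before handing an item to DE. This is purely a sketch; the marker names below are paraphrases of the list above, and the scoring scheme is an assumption, not an established metric:

```python
# Yes/no checklist derived from the intel-quality markers above
# (names paraphrased; this scoring approach is illustrative only).
MARKERS = [
    "single_threat_in_domain",
    "specific_to_technology",
    "impact_evaluated",
    "procedure_level_detail",
    "unique_characteristics",
    "hints_at_detection",
]

def intel_quality(item_flags: dict) -> float:
    """Return the fraction of quality markers an intel item satisfies (0.0-1.0)."""
    return sum(bool(item_flags.get(m)) for m in MARKERS) / len(MARKERS)

# Example: an item satisfying 4 of the 6 markers
flags = {
    "single_threat_in_domain": True,
    "specific_to_technology": True,
    "impact_evaluated": False,
    "procedure_level_detail": True,
    "unique_characteristics": False,
    "hints_at_detection": True,
}
print(round(intel_quality(flags), 2))  # → 0.67
```

Even a crude score like this gives CTI and DE a shared, reviewable definition of “good enough to hand over.”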

To make it even better, such information should:

  • Show relevance to the organization’s IT estate and crown jewels
  • Not assume deep threat actor knowledge in the DE team, and explain concepts in clear English
  • Arrive as a knowledge base item, not a PDF or email, so it doesn’t need a second parsing pass by a DE to become usable
  • Be delivered at a set frequency (a frequent but not overwhelming intel flow means the team has reached the right granularity and cadence)

In this way, when DE and CTI are able to work together in a common flow, it improves development and delivery of new detections (both in time-to-ship and quality), improves evaluation of actual detection coverage, and boosts the teams’ competence in understanding cyber threats. It also builds team morale, as there is now a single direction and a single prioritized backlog instead of divergent perspectives on which detections to build first and how to develop them best (this is kind of a big deal!).

You should now be very familiar with the challenges that detection engineers face with the threat intel they receive, and be able to take away some key points for improving cohesion in your particular organizational structure. Or, if you don’t have a TI/CTI function at all, some pointers for building one!

In our next blog post we’ll explore the other side of the coin: assuming you get good intelligence, how do you break it into detections?


Dr Anton Chuvakin