Pursuing Meaningful Metrics for Cyber Threat Intelligence Programs

hello there, decision-maker!

Over the last 12 months, my assumptions about metrics were challenged, and the outcome was me rethinking my approach to leveraging metrics in the context of Cyber Threat Intelligence (and threat-informed defense to a wider extent).

Here’s the key shift: focus on metrics that go beyond surface-level reporting.

Tracking throughput or performance continues to be a good start, but the real value lies in showing how your insights drive smarter decisions, reduce risks, and deliver measurable benefits to your organization.

Turning metrics into a compelling story about your team’s value.

Let's dive in.


The Paradox of Metrics

Here's my most important adjusted assumption.

I’ve often heard people say ‘if something is important, it should be measured’.

After all, numbers feel concrete, objective, and easy to present.

This has long been the cybersecurity industry's thinking as well, but I consistently found that the things that matter most (things like trust, confidence, and influence) are often the hardest to measure.

We humans don’t make decisions purely based on logic.

Our choices are shaped by emotions and context, intangible factors that don’t fit neatly into a spreadsheet.

Think about the last time you made an important decision.

Was it purely based on measurable factors? Probably not.

The same paradox exists in cyber threat intelligence (CTI).

Metrics play an important role because they give us a framework for evaluation and a way to communicate our value to leadership.

But if we only focus on what’s easy to measure, we miss the bigger picture.

Let me give you an example:

“…We have 135 IOC feeds, 9 collection sources, and our new MSSP; last year they provided us 4,315 items that we were able to take action on…”

Although these numbers provide insight into what you’re looking at and certain performance items, they really don’t tell you anything about which items result in an impactful adjustment to the organization.

A good metric needs to contribute to:

Building confidence in decision-makers.

Fostering trust between teams and stakeholders.

Enabling preventive actions that often go unnoticed because, well, they prevented something from happening.

These are the things that truly define the value of CTI, even if they’re difficult to quantify.

Although most of us know this, it is rarely reflected in, or accepted on, modern dashboards.

When we focus too much on the numbers, like the volume of reports or the speed of alerts, we risk undervaluing the impact CTI has on smarter decisions, reduced uncertainty, or aligned business goals.

A lot of the CTI teams I talked to over the past 24 months have struggled with this exact thing.

I believe a lot of it has to do with traditional metrics falling short, and with the critical need to pair them more effectively with a story that captures the real, human impact of our work.

Demonstrating Value through Metrics

Metrics are one way to measure how effective and efficient cybersecurity functions are, but they don’t always capture the full picture of value.

In CTI, it’s particularly hard to quantify outcomes like improved decision-making, cost savings from avoided incidents, or agility in responding to evolving threats.

These benefits often take the form of less-visible indicators, like shorter dwell times or increased service uptime, which we compare to industry benchmarks such as the statistics coming out of the SANS annual surveys or Mandiant’s M-Trends reports.

The challenge grows when we consider CTI teams, whose job I believe is to inform and guide decision-making for defenders, risk managers, and leadership.

How do you measure the value of decisions that are better informed, risks that were avoided, or threats that never materialized?

To effectively demonstrate CTI’s value, we need to start adopting, or at minimum moving towards, a collaborative systems-thinking approach.

This means going beyond traditional cybersecurity metrics like confidentiality, integrity, and availability, and instead factoring in things like brand reputation, customer trust, legal impact, and even employee morale.

By considering a well-defined subset of CTI-specific metrics, you begin breaking down an overwhelming problem (e.g. when we can measure everything, where do we start!?) into simple and actionable steps that make it easier to demonstrate the impact of our work.

Skip the Queue and Start with Purposeful Metrics

The key to meaningful metrics starts with clarity of purpose.

Often this word alone can spark a discussion.

Metrics require processes and technical means to gather, store, and report information in ways that support clear goals.

It’s also important to recognize that metrics are not the only way to demonstrate success.

Qualitative achievements, like improved team collaboration or enhanced decision-making processes, also deserve attention.

Metrics should serve as tools to drive business decisions, not as an end in themselves.

Capturing performance data of the team or an MSSP just for the sake of it simply wastes valuable resources and shifts focus away from what truly matters.

Instead, metrics must support specific business outcomes and use cases.

By aligning metrics with the organization’s goals, CTI leaders can ensure that every data point collected has a clear purpose and measurable impact.

After this discussion, I often continue into conversations about Intelligence Requirements (but that is a different system for another time).


Building Your CTI Metrics Taxonomy

In May 2019 I started curating a cyber threat intelligence metrics overview.

Initially it was just a collection, but after some adjustments it turned into something very practical, giving teams a one-page sense of direction.

Earlier this year, I discussed the usage of metrics with friends John Doyle and Steven Savoldelli to determine a practical path forward for every team.

Based on many of the teams we supported over the years, we all identified that CTI teams, much like other stakeholder-driven functions, struggle with creating metrics that work universally across organizations.

Every organization is different, with unique scales, goals, and levels of stakeholder involvement, which makes it difficult to establish a one-size-fits-all approach.

To address this, we developed a taxonomy that provides a structured way to evaluate and construct meaningful metrics within CTI programs.

This should be used as a guide, giving you different lenses (in the concept of systems thinking) to build metrics from the ground up:


Variation 1: By Role

To begin, identify the specific problem each metric addresses.

CTI metrics generally fall into three categories: Administrative, Performative, and Operational.

While some overlap exists, especially between performative and operational metrics, each category serves a unique purpose:

Administrative Metrics:

  • Gauge throughput and support capacity baselining.
  • Over time, these metrics enable comparative analyses, such as evaluating the impact of adding a team member.

Performative Metrics:

  • Track the effort required to complete tasks.
  • Assist with resource planning, establish job role expectations, and evaluate performance.

Operational Metrics:

  • Focus on business impact.
  • Help demonstrate how CTI services reduce risk, inform strategy, and enhance cybersecurity defenses.
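To make the three categories concrete, here is a minimal sketch of how a team might catalog metrics by role in code. The metric names and descriptions below are hypothetical examples, not items from any official taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    ADMINISTRATIVE = "administrative"  # throughput, capacity baselining
    PERFORMATIVE = "performative"      # effort per task, resource planning
    OPERATIONAL = "operational"        # business impact, risk reduction

@dataclass
class Metric:
    name: str
    role: Role
    description: str

# Hypothetical examples, one per category
catalog = [
    Metric("reports_published_per_month", Role.ADMINISTRATIVE,
           "Gauges throughput and supports capacity baselining."),
    Metric("avg_hours_per_rfi", Role.PERFORMATIVE,
           "Tracks effort required to answer a request for information."),
    Metric("detections_added_from_cti", Role.OPERATIONAL,
           "Shows how CTI output hardens defenses."),
]

# Filtering by role lets you report each category separately
operational = [m.name for m in catalog if m.role is Role.OPERATIONAL]
```

Tagging each metric with its role up front makes it easier later to check whether your portfolio leans too heavily on administrative counts and too little on operational impact.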


Variation 2: By Audience or Stakeholder

When designing metrics, consider the intended audience and the outcomes these metrics aim to support.

Metrics can serve multiple purposes, from justifying resource needs to identifying areas of excellence or concern.

While senior leadership often serves as the primary audience, other stakeholders, such as external governance functions, finance, and audit, can also benefit.

Examples by audience include:

  • Senior cybersecurity manager: Evaluates the percentage of reports using data from specific sources to assess utility, licensing costs, and processing requirements.
  • Red team manager: Reviews the frequency of CTI support in simulations, ensuring realistic threat scenarios.
  • Risk management team: Tracks CTI’s contributions to risk reduction over a given period.

Some metrics may apply to multiple stakeholders, so understanding audience interpretation and application is essential.


Variation 3: By Stakeholder Integration

Some teams talk about stakeholders, some about consumers, and others about customers: it is often the same thing.

Effective CTI programs demonstrate integration with stakeholder groups, while at the same time raising awareness about CTI’s value.

Understanding the workflows of stakeholder teams is crucial, as metrics like RFIs or feedback loops can illustrate CTI’s impact.

Examples of metrics include:

  • Count of documented consumer workflows and PIRs.
  • Volume of tickets created, team involvement, and support types.
  • Rate of proactive vs. reactive delivery of threat actor information.
  • More example metrics are available via my GitHub.
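As a sketch of the proactive-vs-reactive metric above, a simple delivery log is enough to compute the rate. The log entries and field names here are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical delivery log: each entry records whether threat actor
# information reached a stakeholder proactively or only after a request.
deliveries = [
    {"actor": "FIN7", "mode": "proactive"},
    {"actor": "APT29", "mode": "reactive"},
    {"actor": "FIN7", "mode": "proactive"},
    {"actor": "LockBit", "mode": "proactive"},
]

counts = Counter(d["mode"] for d in deliveries)
proactive_rate = counts["proactive"] / len(deliveries)
print(f"Proactive delivery rate: {proactive_rate:.0%}")  # prints 75%
```

A rising proactive rate over time is one way to show stakeholders that CTI is ahead of requests instead of chasing them.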

An important consideration here is that early-stage CTI teams often ‘bear the burden’ of educating consumers about CTI’s role: if this is your program, you can often have a lot of success articulating impact to those stakeholders.

As integration improves, again, the goal is seamless alignment with stakeholder workflows.


Variation 4: By Complexity

Metrics range in complexity, from low-complexity (self-contained) to high-complexity (requiring cross-team collaboration).

Examples include:

  • Low complexity: Source count based solely on CTI data.
  • High complexity: Financial impact metrics requiring finance team input on the value of protected assets.

While complexity itself isn’t inherently good or bad, I believe high-complexity metrics demand careful attention to assumptions and collaboration effort (i.e. taking time to report properly).

Balancing these different types ensures meaningful results without overburdening resources.


Variation 5: By Time

Point-in-Time or Period of Time: metrics provide value as either point-in-time snapshots or longitudinal trends.

Each type serves distinct purposes:

  • Period metrics: Capture production impacts monthly or yearly, or enable year-over-year comparisons.
  • Trend metrics: Track time shifts in CTI support or source reliance.
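The distinction can be sketched with a few lines of code over monthly production counts. The numbers below are made up, and the 'off baseline' check is just one simple way to flag a trend shift:

```python
# Hypothetical monthly counts of CTI products delivered
monthly = {"Jan": 12, "Feb": 14, "Mar": 11, "Apr": 22}

# Point-in-time snapshot: April's production, with no context
snapshot = monthly["Apr"]

# Trend view: compare the latest month against the baseline
# (the mean of the preceding months) to flag 'off baseline' shifts.
values = list(monthly.values())
baseline = sum(values[:-1]) / len(values[:-1])
deviation = (values[-1] - baseline) / baseline
print(f"April is {deviation:+.0%} vs. baseline")  # prints +78%
```

The snapshot alone says "22 products"; the trend view says April is well above baseline, which is the kind of shift stakeholders actually react to.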

I’ve found stakeholders often rely more heavily on changes in trends (i.e. what is new or ‘off baseline’); this is certainly a fallacy, because it leads to forgetting what the baseline is, but it is something to keep in mind.

Extended time frames also help identify patterns but can complicate attribution of outcomes to specific causes.


Variation 6: By Causality, Assumptions, and Gaps

Every metric carries potential implications or gaps that may lead to misinterpretation.

For example, stakeholders may misread a drop in intel production volume as declining productivity, even if report depth has increased or when the team supported an ongoing incident or crisis.

Examples to consider include:

  • Productivity: Lower volume may not indicate reduced output if report quality or depth improved.
  • Risk reduction: Metrics tied to brand reputation or consumer trust require assumptions about value and causality.

Anticipating gaps and assumptions helps CTI leads and practitioners prevent misunderstandings when presenting their metrics.


Final Considerations

For mature CTI programs, a well-rounded metrics portfolio balances strategic, complex, and contextually relevant options with basic, accessible metrics.

This taxonomy enumerates a lot of the different perspectives I've seen over the years; cataloging them has, in a sense, made me aware of some of the gaps and things I can do differently when it comes to CTI metrics.

While this taxonomy looks like it’s for teams with advanced analytics, I strongly believe ALL teams (not just mature ones) should be able to do this.




How to Get Started with Metrics, Regardless of Your Maturity

During the conversations John, Steven and I had on the topic, we agreed that most often folks just need some good examples.

We can give you (yeah, I'm looking at you) all the thinking, but we understand that you don't always have the priority to go over this whole thing.

This is why you’ll find the most practical, getting-started actions below.


Step 1: Understand Purpose Behind the Metric(s)

The first step in building effective metrics is having a clear understanding of their purpose.

Metrics are a means to an end, not the goal itself.

As Rob Lee explains in the SANS FOR578 course, the core metric for any CTI team is whether it meets stakeholder needs and demonstrates business impact.

CTI teams should aim to answer common program questions clearly while establishing a tangible program baseline.

Early metrics in the (Pre-)Foundational stages of your CTI program should focus on:

  • Data that is easy to obtain.
  • Minimal complexity to calculate.
  • Simplicity for stakeholders to interpret.

Starting small allows teams to build confidence in their data collection processes and gradually evolve through the taxonomy toward more nuanced and sophisticated metrics.

Example:

  • Start with year-over-year trend analysis to establish baselines.

This gives stakeholders a clear view of how the organization’s security posture evolves over time and helps inform strategic decisions.
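A year-over-year baseline is straightforward to compute once you have yearly counts. The figures below are hypothetical, just to show the shape of the calculation:

```python
# Hypothetical yearly counts of intelligence products delivered
yearly = {2022: 90, 2023: 108, 2024: 135}

# Year-over-year change: each year compared to the one before it
years = sorted(yearly)
yoy = {
    year: (yearly[year] - yearly[prev]) / yearly[prev]
    for prev, year in zip(years, years[1:])
}

for year, change in yoy.items():
    print(f"{year}: {change:+.0%} vs. previous year")
```

Even this simple table gives leadership a baseline to anchor on, which is exactly what early-stage metrics should do: easy to obtain, minimal to calculate, simple to interpret.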


Step 2: Build Your First Set and Run With it

Creating your initial set of metrics is about balancing simplicity with purpose.

Using my CTI Program Metrics example (see below), just start small by focusing on metrics that align with your program’s maturity level:

  • At the Starter stage, track the number of ad-hoc PIRs requested or intelligence products created.
  • As your program evolves, you can incorporate Intermediate metrics like the integration of consumer feedback or the volume of non-security projects supported by CTI.
  • At the Advanced level, aim for more impactful measures, such as calculating revenue saved or reductions in downtime.

Now, based on the lessons learned mentioned above, there are two significant changes I’m pushing out:

  1. Examples for ad hoc teams: Adding columns with examples for teams or programs that are at the Ad Hoc or Initial levels. These are basically some easy examples, learned along the way, that enumerate your impact. They are a starting point, before you even start documenting your process.
  2. Align maturity level naming to CTI-CMM: As a personal contributor to CTI-CMM, I want to align as much of my research to this initiative as possible. I’ve only adjusted the names so far; next year I plan on working together with the wider CTI-CMM community to make some deeper changes, WITHOUT losing the simplicity of a one-pager. Stay tuned.

The revised Excel overview will be available today here:

https://github.com/gertjanbruggink/Metrics/tree/master/CTI

Use this example + the structured taxonomy to get to a first set and start running.

Remember, the key is to start measuring, learn from the results, and refine your approach continuously.


Step 3: Build Stakeholder Engagement using Metrics

Metrics do more than measure performance; they are powerful tools for securing stakeholder buy-in.

When done effectively, metrics:

  • Demonstrate responsible investment in security resources.
  • Communicate the rationale behind metric selection.
  • Provide actionable insights for data-driven decisions.

Engaging stakeholders in the development of metrics increases program support and builds trust in the data that informs cybersecurity strategies.

To align CTI programs with organizational goals, focus on actionable insights that directly influence processes, deliverables, and integrations.

This ensures that measurements are not only consistent but also practical for driving decisions.

Example:

If your organization uses JIRA, CTI teams can leverage JIRA’s built-in solutions to track deliverables.

This creates a cost-effective dashboard for measuring CTI engagement and program effectiveness.
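As a sketch of that idea, tickets exported from a tracker can be aggregated into a simple engagement view. The records and field names below are hypothetical, not JIRA's actual schema:

```python
from collections import defaultdict

# Hypothetical ticket export; real tracker fields will differ.
tickets = [
    {"team": "SOC", "type": "RFI", "status": "done"},
    {"team": "SOC", "type": "report", "status": "done"},
    {"team": "Risk", "type": "RFI", "status": "open"},
    {"team": "Red Team", "type": "support", "status": "done"},
]

# Aggregate per stakeholder team: total requests and completed ones
engagement = defaultdict(lambda: {"total": 0, "done": 0})
for t in tickets:
    engagement[t["team"]]["total"] += 1
    engagement[t["team"]]["done"] += t["status"] == "done"

for team, stats in sorted(engagement.items()):
    print(f"{team}: {stats['done']}/{stats['total']} completed")
```

Because the data already lives in the ticketing system, this kind of rollup costs almost nothing to produce and directly shows which stakeholder teams CTI is serving.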


Step 4: Rinse & Repeat

These steps are really simple:

  1. Use the previous steps to get to a baseline
  2. Put in the work
  3. Ask for feedback
  4. Improve
  5. Start with step 1 again

Focus in step 2 on building confidence with decision-makers.

Focus in steps 3 & 4 on fostering trust between teams and stakeholders (e.g. sometimes openly stating that you don't know everything opens up a lot of goodwill for improvement).

Build a story that captures the real, human impact of your work.

Thrive.


Make this week count folks!

GJ

P.S. Whenever you're ready, here are two ways I can help you:


#1. I'll give you a HUUUGE Lego set for each new referral to Scenario Intelligence:

Just in time for the holidays! Help us grow our Venation community. So far we've built 30+ scenarios and 15+ systems. Just copy this link, send it to 5 friends, and have them successfully register before the end of the year while referencing this referral code to me: XnsTC5TTbnrb7


#2. Join the community!

Obtain access to our weekly content, every week a new system. No BS, no fluff, just systems.

You will also receive access to our productivity booster pack as a free gift from me to you!

Subscribe here: https://venation.digital/newsletter


#cybersecurity #cyberthreatintelligence #metrics

