Effective Use of Analytical Assessments in Threat Intelligence Deliverables

This article is for all you threat intelligence nerds out there. It shows how private-sector threat intelligence teams can effectively demonstrate value when supporting decision-making.

In short: leverage the concept of assessments to substantiate uncertainty. Daily use of assessments supports proactive security, minimizes long-term costs, and creates a trackable paper trail of value.

How do you implement assessments in your organization?

If you’d like to get some ideas, check out how we use assessments to help create our own threat scenarios. Schedule your demo today via https://content.venation.digital.



Recap: what is an assessment?

“Why is that briefing you send out worded so weird? Why can’t you just give us 100% certainty?”

If you leverage any form of analytical rigor in your intelligence deliverables, you’ve probably heard this question before. The answer, unfortunately, is not so simple. The reason? There are no simple answers. Security, especially cyber, is not black and white. Fifty shades of gray is much more accurate.

Why is accepting this so hard? We found that it’s mostly because people prefer simple answers over complicated ones. We have no time for long analysis. Be brief, be gone. If it looks like there’s a simple solution to a problem, why bother looking deeper? The thing is, this approach has no long-term value.

With threat intelligence you play the long game. You are almost always dealing in uncertainty. It’s like having numerous pieces on a chessboard moving simultaneously in multiple dimensions. It is your job to connect the dots on a technical, process, and people level.

How can you create a proper paper trail of value? Start with analytical assessments.

What is an assessment? An assessment establishes a level of uncertainty (e.g. in a hypothesis).

For example: We assess with high confidence that we are not able to drink milk at this time, based on the fact that there is no milk in the fridge or the cellar.

Assessments include confidence and/or likelihood (e.g. in a threat landscape or any other deliverable). This helps quantify an analytical claim.

In the private sector, I often see teams adding confidence and likelihood in the same sentence, usually for practical reasons: the extra words get QA’d out, or people simply find them annoying. However, as my friend Freddy Murre pointed out to me, the official guidance is that using likelihood (assessment) words and a confidence level in the same sentence, when talking about an event or its development, should be avoided.

See ICD 203: "To avoid confusion, products that express an analyst's confidence in an assessment or judgment using a "confidence level" (e.g., "high confidence") must not combine a confidence level and a degree of likelihood, which refers to an event or development, in the same sentence."

For example (note that this common template combines both in one sentence, which is exactly what the ICD 203 guidance above says to avoid): We assess with <insert confidence level, e.g. high, medium, low> confidence that there is a <insert probability level, e.g. likely, possible, unlikely> probability that <insert event> will happen, based on <insert evidence> obtained from <insert source>.
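To make the split form concrete, here is a minimal sketch in Python of how a team could template assessments so the degree of likelihood and the confidence level always land in separate sentences, per ICD 203. All names here (Assessment, render, the vocabulary tuples) are illustrative, not from any real library.

```python
from dataclasses import dataclass

# Agreed vocabulary; adapt to your own team's standard terms.
CONFIDENCE = ("low", "moderate", "high")
LIKELIHOOD = ("unlikely", "possible", "likely", "very likely")

@dataclass
class Assessment:
    event: str        # the event or development being judged
    likelihood: str   # degree of likelihood, e.g. "likely"
    confidence: str   # analyst confidence, e.g. "high"
    evidence: str     # what the judgment rests on
    source: str       # where the evidence came from

    def render(self) -> str:
        """Render two sentences: likelihood first, confidence second,
        so the two are never combined in one sentence (ICD 203)."""
        if self.likelihood not in LIKELIHOOD:
            raise ValueError(f"unknown likelihood term: {self.likelihood}")
        if self.confidence not in CONFIDENCE:
            raise ValueError(f"unknown confidence level: {self.confidence}")
        return (
            f"We assess that it is {self.likelihood} that {self.event}, "
            f"based on {self.evidence} obtained from {self.source}. "
            f"We have {self.confidence} confidence in this assessment."
        )

# The milk example from earlier, restated in the split form:
print(Assessment(
    event="we will not be able to drink milk today",
    likelihood="very likely",
    confidence="high",
    evidence="the absence of milk",
    source="a check of the fridge and the cellar",
).render())
```

Keeping the vocabulary in one place also makes it easy to QA deliverables against the agreed terms.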

Why do you need this? People who make decisions need to understand the level of uncertainty when evaluating the assessment. What can the organization control, and what can it not? What is within our scope, and what is not? Making assessments allows you to establish a comprehensive understanding of something uncertain.

Master thinkers

How do you make sure you think about the right things?

Assessments are made by thinkers. No, not people who have too much time on their hands: people who research, investigate, assess, and advise.

The late Randy Pherson devised the ‘Five Habits of the Master Thinker’, identifying the mindset fundamental to quick judgements and fast decisions.

These are areas you can train yourself and your team on. Leverage them in QA cycles. Include them in training programs.

How do you compare against them?

On a personal note, the most important struggle I often observe is the one related to (key) assumptions. Generally, we are immature at articulating these.

Most of the time, this is caused by simply not knowing. Identify what you don’t know! Our industry’s intelligence deliverables should inherently take assumptions into account or identify them explicitly.


Moving assessments upwards

So, we’re back in sync on what an assessment is and how you leverage it as a ‘Master Thinker’. Before or while using this seemingly ‘complicated’ turn of phrase, you will need to make clear why it is actually pretty useful.

One thing I use often in my daily work to educate non-intelligence stakeholders: the concept of moving assessments upwards.

Take the following example. You start with a report coming in on a given technology vendor being down.

  1. Facebook is offline. For most of us, this is not really relevant. Who cares, right? This is just current intelligence (Sherman Chu).
  2. Once you actually dive into the matter, develop your assessments, and identify key takeaways, you might spot more of these events happening.
  3. When you perform this for a month, you will be able to see some fundamental issues occurring, which may, in turn, lead you to request a risk assessment.
  4. Going a step further, on a quarterly basis you see even bigger trends that warrant a strategic decision. For example, they may lead you to advise preparing a business case for additional (security) controls.
  5. The assessment acts as the line in the sand, demonstrating your understanding and acting as supporting evidence.
  6. For those taking it a step further, this is where you also use the evidence to forecast ahead (which is usually also needed for strategic decisions, but not often leveraged explicitly).

This creates a ‘paper trail’ for teams to keep track of how a given assessment contributed to a decision or subsequent change in the organization.
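As an illustration, here is a minimal sketch of what one record in that paper trail could look like. The structure and field names are my own invention, not a standard.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class TrailEntry:
    """One assessment in the paper trail, linked to what it influenced."""
    assessment_id: str
    made_on: date
    summary: str
    decisions: list = field(default_factory=list)  # decisions it informed
    outcome: Optional[str] = None  # "correct" / "incorrect" once resolved

# The vendor-outage example from the list above, as a trail entry:
entry = TrailEntry(
    assessment_id="A-2021-042",
    made_on=date(2021, 10, 4),
    summary="Repeated vendor outages likely indicate a structural dependency risk.",
    decisions=["Risk assessment requested",
               "Business case for additional controls"],
)
```

Filling in the outcome field later is what turns the trail into something you can measure, which brings us to the next section.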

Measuring value

Tracking the assessment over time results in tracking the value over time.

Here are a few metrics you can leverage for this (a sketch of metric 1 follows the list):

  1. Rolling percentage of assessments made in intelligence products that were incorrect
  2. Number of ad-hoc requests made to the intelligence team (e.g. RFIs)
  3. Number of ad-hoc requests that fall outside the standing list of PIRs
  4. Number of intelligence products created that include forecasting, filtered per PIR
  5. Number of non-security projects where Intelligence contributed actionable insights (e.g. M&A)
  6. Number of security projects where Intelligence contributed actionable insights (e.g. development or implementation of security controls)
  7. Number of intelligence products created
  8. Amount of revenue saved (currency)
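For metric 1, here is a minimal sketch of the calculation, assuming each resolved assessment has been recorded as "correct" or "incorrect" (unresolved ones are ignored). The function name and window size are illustrative.

```python
def rolling_incorrect_pct(outcomes, window=12):
    """Metric 1: percentage of the last `window` resolved assessments
    that turned out to be incorrect. Returns None if nothing resolved yet."""
    resolved = [o for o in outcomes if o in ("correct", "incorrect")]
    recent = resolved[-window:]
    if not recent:
        return None
    return 100.0 * sum(1 for o in recent if o == "incorrect") / len(recent)

# Example: 2 of the last 8 resolved assessments were wrong -> 25.0
print(rolling_incorrect_pct(
    ["correct", "incorrect", "correct", "correct",
     "correct", "incorrect", "correct", "correct"]))
```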



Integrating Assessments into your daily process

Before you start automating the heck out of this, first think about the process. Integrating these efforts into your daily procedures or processes can be quite hard. Not to mention: where are you going to pull the data from?

You will become a librarian: tagging, annotating, enriching, and so on.
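As a sketch of that librarian work (all keys here are illustrative, not a schema), each deliverable could carry tags that later feed the metrics above:

```python
# One intelligence product, tagged so assessments and PIR coverage
# can be pulled out later for the metrics discussed above.
product = {
    "id": "RPT-2021-117",
    "type": "current intelligence",
    "pirs": ["PIR-03"],             # standing requirement(s) it answers
    "ad_hoc": False,                # True if it answered an RFI outside the PIRs
    "assessments": ["A-2021-042"],  # assessment IDs contained in the product
    "forecast": True,               # includes forward-looking judgement
}
```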

Here are a few considerations to help you and the team clarify where you can leverage assessments and other analytical methods. For convenience, I mapped them to the phases of your CTI process, so you can quickly have that conversation with the applicable team member.

Direction sub-process:

  • Does everyone in the team know where to find the ‘requirements to stakeholder’ mapping?
  • Do stakeholders (including the team) understand the need/use of assessments?
  • Did we ever do a walk-through with the group?

Analysis sub-process:

  • How do we do our analysis?
  • Where do we store our analysis, including assessments?
  • Do we store this in our internal repository?
  • How does our team 'make' assessments?
  • Did we document this in an SOP?

Dissemination sub-process:

  • Has everyone been trained in writing succinctly?
  • Has anyone ever asked for feedback?
  • Do we use two-row summaries?
  • Did we consider formatting?

Feedback sub-process:

  • Did we do a yearly recap or review of assessments made?



Automation lessons learned

Yes, my fellow nerds. At this stage, you will probably be ready to start considering automation.

There is no one-size-fits-all solution, just tools that have multiple overlapping applications. Here are some lessons learned from our end:

  • None of the commercial or open-source threat intelligence platforms provides the option to comprehensively track specific analytical findings and conclusions over time.
  • In Windows environments, SharePoint remains a useful way to store reporting.
  • In Google environments, tracking intelligence deliverables through Google Docs works remarkably well, especially when searching through numerous documents.
  • The Atlassian suite, especially Confluence, is one of the few promising applications I have seen that can be repurposed into an effective tracking environment. It does require serious (internal) development and tuning effort. Reach out to Patrick Grau or Andreas Sfakianakis for applied examples.
  • Brian Mohr over at Reqfast has built an (intelligence) requirement tracking system that lets you map analytical products to requirements. The current version looks very promising, and I am curious to see where he takes it.
  • Finally, there is no shame in choosing not to automate this process. Make it a team exercise: at the end of the year, go over your deliverables, note the assessments, and review each one qualitatively.

If you have a different approach, I’d love to explore it with you. I assess with high confidence that there are things I do not know, based on past experience, as pointed out to me by several sources in my family. Some of what is written here might also be incorrect; do let me know, so I can learn or amend.


Wrapping up

If you want to further explore some of the fallacies often seen in our domain, I recommend having a look at Andy Piazza’s recent talk from the SANS CTI Summit, ‘Threat Intelligence is a Fallacy, but I May be Biased’: https://youtu.be/0gbLJJIAdiY?si=N6k5WgbwiRV4uUrS

I also recommend following Ole Donner and Freddy M., who regularly share their takes on analytical techniques and judgement. These are just two talented voices in the scene; there are obviously many more folks exploring this topic who simply do not distribute as much through social media. The fact that they look similar is just self-attribution or attentional bias.

Finally, I recommend following the Venation team, where we support threat intelligence capabilities through advisory, coaching, or simply sharing experiences (like this article). In addition, we curate a content library that contains most of our knowledge, approaches, and templates over at www.venation.digital.

Hope this helps. Curious to learn if the items discussed here are new to you or if you are already regularly using them.

Cheers!

#cybersecurity #decisionmaking #cyberthreatintelligence #assessments #analyticalthinking #masterthinkers #mentalmodels #intelligence

