Security Metrics - One CISO's Opinion

Update: 1/25/2020 - I originally threw a mishmash of metrics into this list and did not bother differentiating by audience. I've since added that detail...

Metrics continue, job-over-job, to be the one area where I really feel I can improve my game. One of the things I always promise my leadership is to bring them measured risk - to speak to security in measured business terms that can be weighed against other measured business factors from other parts of the business.

I know one very gifted CISO whose program is very mature, and who reports one and only one metric to his board of directors: dwell time.

I'm afraid that most of us are not positioned that strongly. So what can we do for meaningful metrics? I'll start with some of my favorites; please chime in with yours. While you're at it, please help me with one of mine...

1. What we spent and where it got us (Board level, C-suite level). This is perhaps the most important metric of all. Show risk and maturity targets per #2 below, but demonstrate how the last ask became a spend, the spend became a deployment, and the deployment moved the bottom line of maturity and business risk. Do this successfully and continuously, and the next ask becomes so much easier.
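To make that concrete, here is a minimal sketch of the arithmetic. The initiatives, spends, and scores are entirely hypothetical; in practice the maturity numbers come from your assessments (per #2 below):

```python
# A minimal sketch of metric #1: tie each funded initiative to the maturity
# movement it produced. Initiative names, spends, and scores are hypothetical.

initiatives = [
    # (name, spend_usd, maturity_before, maturity_after) on a 1-5 scale
    ("EDR rollout",         250_000, 2.1, 3.4),
    ("IAM consolidation",   400_000, 1.8, 2.9),
    ("SOC tier-1 staffing", 300_000, 2.5, 2.8),
]

for name, spend, before, after in initiatives:
    delta = after - before
    # "Maturity points per $100k" is one way to make the next ask concrete.
    per_100k = delta / (spend / 100_000)
    print(f"{name}: ${spend:,} -> +{delta:.1f} maturity ({per_100k:.2f} pts/$100k)")
```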

2. CSC maturity audits (Board level, peer level, C-suite level), if you can afford them, provide an outstanding starting place for showing in a very meaningful way where the business stands with regards to security. I've seen another clever executive I know use the Gartner cybersecurity maturity curve as a poor man's alternative to this one. It's vaguer and much simpler, but still includes key concepts like 'CISO Appointed' and 'Governance Body Established' to show where one lies on a five-level maturity scale with regards to one's security program.

Maturity models, as contrasted with a risk register, are the best. These days I use TrustMAPP (disclaimer: I'm on their advisory board) and their CSF-based maturity assessment, fed by the specific critical risks identified to the Board (based on a prior business impact analysis). The risks and their status are reported in one chart, and overall progress (based on those same risks) for Identify, Protect, Detect, Respond, and Recover is also presented. As the Board has bought into the critical risks, they should also be bought into goals for each CSF category.


KRIs (key risk indicators) become optional adjuncts with this approach.
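If you want to see the shape of that rollup, here is a minimal sketch: each control maps to a CSF function, and current versus target maturity rolls up per function. The controls and scores are hypothetical; a tool like TrustMAPP does this for real.

```python
# A minimal sketch of a CSF-based maturity rollup. Controls, scores, and
# targets are hypothetical illustrations, not real program data.

from statistics import mean

# control -> (CSF function, current maturity, target maturity) on a 1-5 scale
controls = {
    "Asset inventory":       ("Identify", 2.0, 4.0),
    "MFA everywhere":        ("Protect",  3.5, 4.5),
    "EDR coverage":          ("Detect",   2.5, 4.0),
    "IR playbooks":          ("Respond",  3.0, 4.0),
    "Backup restore drills": ("Recover",  1.5, 3.5),
}

for fn in ["Identify", "Protect", "Detect", "Respond", "Recover"]:
    scores = [(cur, tgt) for f, cur, tgt in controls.values() if f == fn]
    current = mean(cur for cur, _ in scores)
    target = mean(tgt for _, tgt in scores)
    print(f"{fn:<8} current {current:.1f} / target {target:.1f}")
```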

3. Raw count of incidents/breaches (various levels of the org, but should be quick and simple) - this one is self-explanatory. All good metrics should have a target associated with them, and the target here is zero. Also report the following detail whenever there is an incident: time incident occurred, time to detect incident, time to resolve incident, dwell time, etc. Aggressive targets for each of these can be set, steadily improving them over time.
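A minimal sketch of those per-incident timings, with hypothetical records; in practice the timestamps come from your SIEM or ticketing system:

```python
# A minimal sketch of metric #3's per-incident detail. Incident records here
# are hypothetical.

from datetime import datetime
from statistics import mean

incidents = [
    # (occurred, detected, resolved)
    (datetime(2020, 1, 3, 8, 0),   datetime(2020, 1, 5, 14, 0), datetime(2020, 1, 6, 9, 0)),
    (datetime(2020, 1, 10, 22, 0), datetime(2020, 1, 11, 1, 0), datetime(2020, 1, 11, 6, 0)),
]

def hours(delta):
    return delta.total_seconds() / 3600

detect_times  = [hours(det - occ) for occ, det, _ in incidents]
resolve_times = [hours(res - det) for _, det, res in incidents]
# One common definition of dwell time: occurrence through full resolution.
dwell_times   = [hours(res - occ) for occ, _, res in incidents]

print(f"Incident count: {len(incidents)} (target: 0)")
print(f"Mean time to detect:  {mean(detect_times):.1f} h")
print(f"Mean time to resolve: {mean(resolve_times):.1f} h")
print(f"Mean dwell time:      {mean(dwell_times):.1f} h")
```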

4. Successes in the form of bad emails blocked, bad websites blocked, viruses blocked, data breaches blocked, etc. (team only? self? useful at all?) This one is always a sticking point for me, as what are the industry standards supposed to be? It's easy to show the raw numbers, and even the percentages, but what do they mean? Maybe your DLP solution blocked 5,743 attempts to offload personal data this week. Is that a lot? A little? If we can measure that number versus the total number of all DLP blocks, does that percentage mean anything? Can we measure that figure versus all the data we let go through? Does that percentage mean anything? The same questions haunt me whenever a raw count of "my tool did the following" comes up. I report this information dutifully everywhere I go, but all the while I'm frustrated with the lack of meaning behind such information. I freely admit that I need help with this entire category. Vague industry stats such as “90% of all emails are spam” exist, but we need more data in this category. We need guard rails. I love tools like this, but I’ll be darned if I can figure out a way to report meaningfully on their efficacy.
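One way to at least give the raw counts some context (a sketch, not a claim that it settles the question) is to normalize them against total observed volume and report the trend rather than the absolute number:

```python
# A minimal sketch: express DLP blocks as a rate against total outbound
# transfers and watch drift from a baseline. Weekly figures are hypothetical.

weeks = [
    # (week, dlp_blocks, total_outbound_transfers)
    ("2020-W01", 5_743, 1_200_000),
    ("2020-W02", 6_102, 1_250_000),
    ("2020-W03", 4_890, 1_180_000),
]

baseline = weeks[0][1] / weeks[0][2]
for week, blocks, total in weeks:
    rate = blocks / total
    drift = (rate - baseline) / baseline * 100
    print(f"{week}: {blocks:,} blocks / {total:,} transfers "
          f"= {rate:.4%} ({drift:+.1f}% vs baseline)")
```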

5. Security awareness training completion (various) - I aim for 70% of the organization completing any training I throw out there. (Update Jan 2020: I now work in an organization that ties bonuses and such to the requirement for training completion. The completion rate is literally 100%!) Lately I'm favoring short, funny, lighter-weight videos to draw the crowds in and encourage engagement. It requires more videos overall during a given year, but gets a lot more involvement on the users' parts than the once-a-year, long and boring approach.

6. Anti-phishing training click rates (peers, C-suite) - I aim for single-digit rates of people clicking on the "bad" link. If you're launching a program for the first time, expect click rates closer to 30%.
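A minimal sketch of the click-rate calculation against that single-digit target, with hypothetical campaign numbers:

```python
# A minimal sketch of metric #6: click rate per simulated-phishing campaign
# versus a single-digit target. Campaign figures are hypothetical.

TARGET = 0.09  # single-digit click rate

campaigns = [
    # (campaign, emails_sent, clicks) - ~30% is typical for a first launch
    ("Q1 launch", 1_000, 310),
    ("Q2",        1_000, 180),
    ("Q3",        1_000, 95),
]

for name, sent, clicks in campaigns:
    rate = clicks / sent
    status = "OK" if rate <= TARGET else "above target"
    print(f"{name}: {rate:.1%} click rate ({status})")
```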

7. Pen test remediation and/or source code audit remediation (peers, C-suite) – Sort all findings by Critical/High/Medium/Low and use CVSSv3 or some other similar system to quantify and defend those rankings where needed. Then set aggressive goals to remediate and improve those goals over time.
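Here is a minimal sketch of that triage: bucket each finding by its CVSS v3 score and check it against an aggressive remediation SLA. The findings, scores, and SLA windows are hypothetical.

```python
# A minimal sketch of metric #7: CVSS v3 severity banding plus a remediation
# SLA check. Findings and SLA windows are hypothetical.

SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 90, "Low": 180}

def severity(cvss: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity band."""
    if cvss >= 9.0:
        return "Critical"
    if cvss >= 7.0:
        return "High"
    if cvss >= 4.0:
        return "Medium"
    return "Low"

findings = [
    # (finding, cvss_base_score, days_open)
    ("SQL injection in portal", 9.8, 3),
    ("Outdated TLS config",     5.3, 95),
    ("Verbose error messages",  3.1, 40),
]

for name, score, days in findings:
    sev = severity(score)
    limit = SLA_DAYS[sev]
    status = "within SLA" if days <= limit else f"OVERDUE by {days - limit} days"
    print(f"{name}: {sev} (CVSS {score}) - {status}")
```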

Create a variance chart that shows vulnerability counts over time: an increasing number of findings is bad, but findings increasing at a greater rate than they are being resolved is really bad. This is similar to a bug burndown chart used by development QA teams to demonstrate progress towards release.
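A minimal sketch of the burndown math, with hypothetical per-period data; in practice this feeds the chart itself:

```python
# A minimal sketch of the variance/burndown chart: findings opened vs.
# resolved per period, plus the cumulative open backlog. Data is hypothetical.

periods = [
    # (period, opened, resolved)
    ("2020-01", 40, 25),
    ("2020-02", 35, 30),
    ("2020-03", 50, 28),
]

open_count = 0
for period, opened, resolved in periods:
    open_count += opened - resolved
    if opened > resolved:
        trend = "backlog growing faster than resolution: really bad"
    else:
        trend = "burning down"
    print(f"{period}: +{opened}/-{resolved}, open = {open_count} ({trend})")
```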

8. Business continuity and disaster recovery come with their own built-in metrics - recovery time objective and recovery point objective (peers, C-suite). Use these to shape BC/DR policy and also to report on real recovery scenarios when they arise. Run through tabletop exercises at a minimum, live drills if possible, and report the results.
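A minimal sketch of reporting drill (or real-event) results against each system's RTO and RPO; the systems and figures are hypothetical:

```python
# A minimal sketch of metric #8: measured recovery vs. RTO/RPO per system.
# Systems and figures are hypothetical.

systems = [
    # (system, rto_hours, rpo_hours, actual_recovery_hours, actual_data_loss_hours)
    ("ERP",        4,  1,  3.5, 0.5),
    ("Email",      8,  4, 10.0, 2.0),
    ("File store", 24, 12, 20.0, 6.0),
]

for name, rto, rpo, recovery, loss in systems:
    rto_status = "met" if recovery <= rto else f"MISSED by {recovery - rto:.1f} h"
    rpo_status = "met" if loss <= rpo else f"MISSED by {loss - rpo:.1f} h"
    print(f"{name}: RTO {rto_status}, RPO {rpo_status}")
```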

9. Lastly, as per my first paragraph, measured risk management (rollup to Board for key risks, the rest in detail as appropriate for C-suite and peers) is key. Have a risk management tool and use it. Every time you dialogue with the business about remediating vs. mitigating vs. accepting, track that in a tool. Separate critical and strategic risks from minutiae and be able to generate both strategic and tactical reports on a per-audience basis.

I don't often endorse products, but I love SimpleRisk as it's exactly what you need and no more.
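Even before you pick a tool, the underlying bookkeeping is simple. Here is a minimal sketch with hypothetical risk entries; SimpleRisk or any GRC tool holds these for real:

```python
# A minimal sketch of metric #9: log every remediate/mitigate/accept/transfer
# decision, then tally them. Risk entries here are hypothetical.

from collections import Counter

risk_log = [
    # (risk, tier, decision)
    ("Unpatched ERP host",    "critical",  "remediate"),
    ("Legacy FTP in use",     "strategic", "mitigate"),
    ("Vendor lacks SOC 2",    "strategic", "accept"),
    ("Laptop theft exposure", "minutiae",  "transfer"),
]

decisions = Counter(decision for _, _, decision in risk_log)
critical_accepted = sum(
    1 for _, tier, d in risk_log if tier == "critical" and d == "accept"
)

print("Decisions:", dict(decisions))
print(f"Critical risks accepted (goal: drive to 0): {critical_accepted}")
```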

There are plenty of other metrics I have used successfully over the years, but these are the major highlights. If you have not formed an executive security council where you are, do so. That council should have representation from every nook and cranny of the business, should collectively own risk decisions, and its decisions, conclusions, etc. should all be tracked as metrics as well: How many issues did they tackle? How many were accepted? Remediated? Mitigated? Transferred? This is similar to #7 above, but speaks to the activities of the business leadership with regards to security. Set goals to make the ESC effective and useful, and set further goals to reduce acceptance of risk and increase remediation.

While we’re on the subject of governance, an ISMS is never a bad thing even in the absence of a formal certification like ISO 27001. Shape your policies according to a framework, and track metrics based on policy production rates as well as adoption rates – how many enforcement events occurred, and against which policies? It’s always interesting to find out which policies are actually being leveraged to better your security posture, and which ones lie uselessly in some repository somewhere.
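A minimal sketch of that last metric: count enforcement events per policy and flag the dormant ones. The policy names and events are hypothetical.

```python
# A minimal sketch: enforcement events grouped by policy, exposing policies
# that never get exercised. Names and events are hypothetical.

from collections import Counter

enforcement_events = [
    "Acceptable Use", "Acceptable Use", "Data Classification",
    "Acceptable Use", "Remote Access",
]
all_policies = [
    "Acceptable Use", "Data Classification", "Remote Access",
    "Clean Desk", "Encryption Standard",
]

counts = Counter(enforcement_events)
for policy in all_policies:
    n = counts[policy]
    note = "dormant - review or retire?" if n == 0 else f"{n} enforcement events"
    print(f"{policy}: {note}")
```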

I hope this list helps you, and I look forward to your LinkedIn feedback – especially with regards to #4 – hopefully you’ve seen a way to make those figures more meaningful…

This article is part of my ongoing "One CISO's Opinion" series, and does not reflect the views of my employers, past or present, or anyone else with whom I do business. I receive no endorsement compensation of any kind, and any products or vendors I recommend are recommended by me solely based on experience and personal preference.

#metrics #cybersecurity #informationsecurity #informationsecuritygovernance

Allan Alford, thanks for sharing objective metrics instead of pervasive fear mongering. Once we have some metrics as a baseline, we can systematically train and work to improve. Jeremy Case Copeland, a must read.

Nicky Lawlor MCIPD

Chief of Staff at Global Frontier Group

5y

I just posted an article on LinkedIn about measurement and cybersecurity - more around ROI on the investment in cybersecurity. Also a very interesting subject. https://www.dhirubhai.net/feed/update/urn:li:activity:6534770533148487681

Brian C.

vCISO - Advanced Cyber …

5y

Allan Alford, here are some alternatives anyone in the role can use to accomplish such wizardry. https://cset.inl.gov/SitePages/Home.aspx <- CSET is a tool your tax dollars pay for (an ICS-CERT and DHS product) that provides a means for a cyber security evaluation and reports. There you will find a link to a simple self-evaluation sheet, or you can download CSET, the application that collects the inputs and produces reports that include maturity. The tool allows a full range from the most basic to a rather intense level of measurement against multiple standards, and provides comments on the directions needed for improvement and where there are gaps or massive holes. Here is a video that covers version 7, but the latest v8.1 has newer standards mapping. Unlike "free" tools, you and I and all paid for this one with our taxes... https://youtu.be/nvVeeWvw97E

Michael Henry

CEO at Accelerynt

5y

Nice article, Allan. One of my favorite topics. Recognizing that these improvements are always a journey, my view is that we need to address this in stages, based on the level of maturity in the organization. I use four levels: ad-hoc, documented, performing and optimizing. Reporting on dwell time for an organization with an ad-hoc level of maturity is meaningless, since it's likely that the team can't see everything in the first place. Dwell time shows up for me in the performing and optimizing phases. If the organization is in a lower-level state of maturity, it's easier to have conversations with the board about investing to develop capability. In ad-hoc, teams tend to measure activity, since they can't see their process flows. Once these are documented, you can start to measure things like % of devices under management, % of controls enforced, or mean time to respond and remediate. When the team is performing, the assumption is they have control over their environment, so they can see everything. At this point, I love dwell time and # of repeated system failures. These translate well into optimizing, where cost and cycle time become bigger measures of effectiveness.

Alex Baird

Advanced Security Engineer at Kroger Technology

5y

Outstanding article! I will be referring to this and passing it on. Thanks for sharing!

