Boost your programme: Turbos, black-box thinking and risk management

There is a great book called Black Box Thinking by Matthew Syed, which looks at the aviation industry's approach to risk management. Reading it made me think about how the same principles could be applied to Information Security and Privacy programmes.

The basic idea is to identify and establish feedback mechanisms that help your programmes mature and ‘evolve’, driven by key performance and risk metrics.

To illustrate the concept, there is a great story about turbochargers, which captures the difference between a scientific/theoretical approach and what I would call a 'practical', feedback-driven approach:

A large turbo manufacturer was trying to improve the airflow output of their top-of-the-range ball-bearing turbochargers. They enhanced the blades, smoothed off angles and decreased bearing resistance, making marginal gains, until the only thing left was to fine-tune the exhaust outlet to ensure maximum adiabatic efficiency and power output.

They brought in some of the best mathematicians in the world, who drew up impressive designs and complex mathematical equations, and eventually came to an agreement on the ‘perfect’ exhaust outlet.

The company tested performance output and were happy to see an improvement, but it did not meet the expectations they had set.

The organisation then decided to bring in a group of engineers (let’s say they were German engineers!). The engineers got together and quickly came up with a simple strategy:

Produce ten different models, each with very slight variations in design, and test each one.

Once they identified the best performer, they simply created ten more variations of that model, each with very slight changes, and re-tested.

They continued to do this until they surpassed the performance objective set by the company!

The engineers earned the credit for taking the product to a level previously thought to be impossible, through basic trial and error.
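To make the engineers' approach concrete, here is a minimal sketch in Python of that 'produce slight variations, keep the best performer, repeat' loop. The performance function, step size and target figure are invented stand-ins for a real test bench, purely for illustration:

```python
import random

def performance(design: float) -> float:
    """Hypothetical test bench: scores a single design parameter (higher is better)."""
    return 10.0 - (design - 4.2) ** 2  # invented curve with a peak at design = 4.2

def evolve(start: float, target: float, variants: int = 10,
           step: float = 0.5, max_rounds: int = 1000) -> float:
    """Repeatedly produce slight variations of the current best design and re-test."""
    best = start
    for _ in range(max_rounds):
        if performance(best) >= target:
            break  # performance objective reached
        candidates = [best + random.uniform(-step, step) for _ in range(variants)]
        best = max(candidates + [best], key=performance)  # keep the best performer
    return best

winner = evolve(start=1.0, target=9.9)
print(f"Best design: {winner:.3f}, score: {performance(winner):.2f}")
```

The point is not the code itself but the shape of the loop: each round starts from the best-known design and lets measured feedback, rather than theory, decide the next step.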

The critical difference between these two strategies is that one assumes it is correct from the start. The other assumes nothing, and relies solely on feedback to improve.

The engineers built a system that was consciously designed to identify, evaluate and implement feedback…

The mathematicians can design a decent product based on theoretical models, but the fine-tuning and real-world gains only come through experience and ‘evolution’ of the product.

Critical thinking is a skill often overlooked in cyber security and privacy professionals, in favour of degrees, certificates and fancy letters after the name. The letters are great and they provide some level of assurance, but they cannot replace practical experience and a critical mindset.

Overall, the company’s strategy was to use both. Theory first, followed by 'evolution' of that idea.

So what does this mean for a successful security or privacy programme?

We still need to start with policies based on our existing knowledge and experience, and manage from the top down, but it is equally important to build in, and look out for, all forms of valuable feedback from the bottom up.

A successful risk management programme (cyber, privacy, business continuity, quality or other) must be designed with feedback mechanisms in mind, from the start.

Feedback can come in the form of:

· Audit report findings
· Risk assessments
· Vulnerability scans
· Penetration test reports
· System logs
· Project KPIs (milestones reached, planned/earned value, actual costs, etc.)
· ROI calculations for solutions implemented
· Number of audits conducted
· Number of threats detected
· Number of threats prevented
· Number of threats responded to (incidents)
· Amount of downtime/trends in availability of systems
· Number of privacy requests from data subjects
· Number of legal cases from data subjects
· Number of incidents that result in reputational damage
· SecOps KPIs: patching reports, AV reports, internet proxy reports, emails blocked, etc.
· Number of staff trained through the awareness programme
· Number of staff who have signed off/accepted policies
· …and the list goes on…


All of these metrics can help you focus on the highest-risk areas within your organisation’s unique risk profile. One size certainly does not fit all, so your policies, standards and control framework should ‘evolve’ to match the feedback from the various indicators you are now monitoring. For example, if your organisation is experiencing high levels of financial fraud, perhaps your email systems are not detecting spear-phishing attacks (consider a phishing awareness campaign), or there may be a lack of segregation of duties in the finance department (in the case of internal fraud).
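As a rough illustration of how a handful of these indicators might be tracked and prioritised period over period, here is a small Python sketch. The metric names, figures and the 'higher is worse' flag are all assumptions invented for the example, not a prescribed set:

```python
from dataclasses import dataclass

@dataclass
class FeedbackMetric:
    """One key risk/performance indicator, tracked period over period."""
    name: str
    previous: float
    current: float
    higher_is_worse: bool = True  # e.g. incidents; set False for e.g. staff trained

    @property
    def trend(self) -> float:
        """Relative change since the last period."""
        if self.previous == 0:
            return float("inf") if self.current else 0.0
        return (self.current - self.previous) / self.previous

# Entirely invented example figures, purely for illustration.
metrics = [
    FeedbackMetric("Incidents responded to", previous=12, current=18),
    FeedbackMetric("Critical vulnerabilities open", previous=40, current=25),
    FeedbackMetric("Staff trained (awareness)", previous=300, current=280, higher_is_worse=False),
]

# Surface the indicators moving in the wrong direction, worst first.
worsening = [m for m in metrics if m.trend != 0 and (m.trend > 0) == m.higher_is_worse]
for m in sorted(worsening, key=lambda m: abs(m.trend), reverse=True):
    print(f"ATTENTION: {m.name} moved {m.trend:+.0%} since last period")
```

Even something this simple makes the 'evolution' loop visible: the indicators drifting the wrong way are the ones your policies, standards and controls should adapt to first.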

Some areas may have too few key risk metrics available, which could indicate a blind spot. Consider implementing more detective controls, such as system monitoring or audits.


Another key benefit of implementing a feedback system is Assurance Reporting.

Interested parties will want to know how their investments are progressing. Having a range of metrics helps provide the assurance they need.

The benefits are obvious, from building trust to securing further budget for the next solution, because you can clearly demonstrate the returns on investment, or avoid repeating solutions that proved less effective.

I have often heard of security professionals struggling to secure budget from the brass, but I think this is often simply down to not having the right numbers to present. The board are looking at many investment opportunities across the organisation with clear, tangible benefits, such as a new sales system, so the risk manager must realise that they are competing for organisational budget. The solutions must therefore have a realistic and demonstrable ROI, and of course they must align with business objectives.
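One common way to put a number on this is to compare Annual Loss Expectancy (ALE) before and after a control, expressed as a Return on Security Investment (ROSI). A minimal sketch, with entirely invented figures:

```python
def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annual Loss Expectancy = SLE x ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before: float, ale_after: float, annual_control_cost: float) -> float:
    """Return on Security Investment = (risk reduction - cost) / cost."""
    return (ale_before - ale_after - annual_control_cost) / annual_control_cost

# Invented figures: phishing-driven fraud at 20k per incident, six incidents a year,
# expected to drop to two a year after a 30k awareness campaign.
before = ale(20_000, 6)   # 120,000 expected annual loss
after = ale(20_000, 2)    #  40,000 expected annual loss
print(f"ROSI: {rosi(before, after, 30_000):.0%}")  # roughly 167%
```

Numbers framed this way put a security proposal on the same footing as that new sales system when the board weighs up where to spend.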


Is there more to it than just the metrics?

This depends a bit on your personality. Some people say ‘the numbers never lie’, but I am a strong ‘N’ (Myers-Briggs), which means I generally prefer to make decisions based on intuition.

For me, intuition means experience combined with an understanding of context. Numbers certainly help outline the picture, but context and experience are what create a work of art.

I prefer to let the numbers guide my thinking, but experience should help balance the feedback out.

I am also cautious of relying solely on feedback, because this could mean I end up chasing my tail! If we constantly react to metrics, we will never be proactive enough to prevent incidents.

A key difference when comparing cyber security risk management to something like quality is that cyber threats constantly evolve.

Cyber criminals constantly evolve their game plan to find new ways in, so information security professionals need to think in the same way and match them, like a game of chess, otherwise we will always be three steps behind.

Combining metrics with experience and intuition is, in my view, the best strategy for managing risk. I call this diversity of thought: being able to think creatively by looking at how other industries (not just the military) tackle similar challenges.

Relying on intuition alone will get you nowhere, so just like the turbo production company, we should take a combined approach based on feedback and experience as well as theory and research.

Feedback welcome.
