Is your Data Governance Rear-View and Reactive or Forward-Looking and Proactive?
First published on my website here
Following a short post I recently published on the need for Proactive Data Governance (Post | Feed | LinkedIn), I’ve had a few people DM me to ask for more information on how to analyse their data risks. I’ve therefore taken that post and expanded on (1) my definition of Proactive Data Governance and (2) the thought process behind assessing your data risks, with a couple of examples thrown in.
Reactive Data Governance
When you drive your car, is your focus on the rearview mirror or the windscreen?
I’d hope it’s the windscreen – unless you’re reversing.
You should be assessing the risks and planning ahead as you move forward, ready to take remedial action if one of those risks materialises, for example, a car pulling in front of you.
So why don’t we apply the same degree of diligence to Data Governance?
Instead, the typical focus tends to be on monitoring a set of Data Quality Indicators: measures of completeness, accuracy, timeliness and the like.
Nothing wrong with this.
It has a valid place – but it’s largely reactive. It’s measuring the quality of data that has already been created.
It’s looking through the rearview mirror!
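To make this concrete, here’s a minimal sketch of what such reactive monitoring often looks like in practice. The indicator set, field names and sample records are my own illustrative assumptions, not a prescribed standard:

```python
# A minimal sketch of reactive DQ monitoring: scoring data that has
# already been created. Field names and thresholds are illustrative.
import pandas as pd

def dq_indicators(df: pd.DataFrame) -> dict:
    """Score a batch of existing records against typical DQ dimensions."""
    return {
        # Completeness: share of records with no missing critical fields
        "completeness": df[["policy_id", "claim_amount"]].notna().all(axis=1).mean(),
        # Validity: share of claim amounts that are non-negative
        "validity": (df["claim_amount"] >= 0).mean(),
        # Uniqueness: share of records that are not duplicate bookings
        "uniqueness": 1 - df.duplicated(subset="claim_id").mean(),
    }

batch = pd.DataFrame({
    "claim_id": [1, 2, 2, 3],
    "policy_id": ["P1", "P2", "P2", None],
    "claim_amount": [1000.0, -50.0, -50.0, 200.0],
})
print(dq_indicators(batch))  # flags issues only after the data exists
```

Useful, but notice that every check runs after the fact: by the time an indicator dips, the flawed data is already in the system.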
Of course, in addition to their suite of DQ Indicators, many organisations will also be able to reference a further set of controls – but, in my experience, they don’t appreciate how these all contribute to the quality of the overall data flow.
Why might this be the case?
Probably because they haven't thought about the end-to-end flow.
It’s still a very siloed and disjointed approach. They may not have all the controls they need, or the controls may not be in the right place within the data flow to be truly effective.
Either way, they haven’t been through a systematic process to validate that the right set of controls is in place for each data flow.
If this is your organisation, your data risks will be unidentified and likely unmitigated, with issues arising out of the blue.
What does this result in? Costly re-work – or errors that nobody notices until it’s too late.
Proactive Data Governance
The alternative is what I call Proactive Data Governance.
What does this look like?
Proactive Data Governance recognises that a typical data flow, from upstream at its point of origination to its downstream usage, contains a host of risks to the integrity of the data.
It aims to ensure that the key risks across the data flow are identified upfront and that appropriate controls are implemented.
It does not wait for a risk to materialise with potentially costly re-work.
It’s forward-facing.
Here, I’ll give you my approach to Proactive Data Governance.
First step?
Identify your Data Risks.
How to Identify Your Data Risks
This requires you to follow three steps:
1. Identify the use case and the data it consumes.
2. Understand how that data is used.
3. Analyse the risk.
You need to understand all three elements to be able to undertake a meaningful identification and quantification of the risk.
Let’s take a couple of examples to illustrate this.
Use Cases and Data
Firstly, let’s consider the reserving process within an insurance carrier, which ensures that sufficient capital is set aside to pay claims. This process relies heavily on the individual case reserves set by the claims adjuster on each claim.
Secondly, sticking with the insurance industry as an example, let’s look at the facultative reinsurance recoveries process. For those not familiar with it, facultative reinsurance is used to provide coverage to an insurer against having to pay out on a single insured risk.
Data Usage
We now need to dig deeper to understand how that data is used in each case, to determine the level of accuracy and completeness required and, therefore, the type and extent of control needed.
Let’s look at the reserving process first. Calculation of the Actuarial Best Estimate involves applying statistical methods to the base data and then selecting an Actuarial “pick”.
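As a deliberately over-simplified sketch of the shape of that calculation (all figures and the single development factor below are invented for illustration):

```python
# Illustrative only: case reserves are aggregated by accident year before
# statistical methods and the actuarial "pick" are applied. The figures
# and the selected development factor are invented assumptions.
case_reserves = {
    2021: [120_000, 80_000, 45_000],   # individual case reserves per year
    2022: [200_000, 95_000],
    2023: [150_000, 60_000, 30_000],
}

selected_ldf = 1.15  # the actuarial "pick": a judgement over summarised data

for year, reserves in case_reserves.items():
    aggregate = sum(reserves)
    best_estimate = aggregate * selected_ldf
    print(f"{year}: aggregate {aggregate:,} -> best estimate {best_estimate:,.0f}")
```

Note that an error in any single case reserve is diluted in the aggregate, whereas an error in the selected factor scales every year’s estimate – a point we’ll return to in the risk analysis below.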
The second scenario simply uses the in-force periods of both the direct and reinsurance policies, together with the excess point of the facultative cover, to enable an automated process to trigger on payment of a claim under the direct policy.
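A hedged sketch of what that trigger logic might look like (the field names are my assumptions, and for brevity I check only the facultative policy’s in-force period – the matching check on the direct policy is analogous):

```python
# Illustrative trigger for a facultative recovery on payment of a direct
# claim. Field names and structure are assumptions, not a real system.
from dataclasses import dataclass
from datetime import date

@dataclass
class FacPolicy:
    inception: date      # in-force period of the facultative cover
    expiry: date
    excess_point: float  # direct losses below this are retained

def recovery_due(fac: FacPolicy, loss_date: date, claim_paid: float) -> float:
    """Return the recoverable amount triggered by a direct claim payment."""
    in_force = fac.inception <= loss_date <= fac.expiry
    if in_force and claim_paid > fac.excess_point:
        return claim_paid - fac.excess_point
    return 0.0

fac = FacPolicy(date(2024, 1, 1), date(2024, 12, 31), excess_point=250_000)
print(recovery_due(fac, date(2024, 6, 1), 400_000))  # 150000.0 recoverable
```

Notice that a mis-keyed expiry date or excess point makes the function silently return 0.0 – the recovery is simply never triggered. That is exactly the risk analysed below.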
Analysing the Risk
We can now analyse the extent of the risks in each case.
To do this, we need to think about the Inherent and Residual Risks.
We need to consider both types of risk through two lenses: Impact and Likelihood.
A risk may have a high impact but a very low likelihood of occurring. Equally, it could have a low impact but a very high likelihood.
Taken together, these elements enable us to perform a robust analysis of the risks posed as data is created and flows toward the point of consumption.
Your assessment of the combination of these factors determines the level of control required and the types of controls deemed appropriate.
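One way to make that combination explicit is a simple scoring matrix. The sketch below is illustrative only: the three-point scale and the “one level per effective control” step-down rule are my own assumptions, not a prescribed methodology:

```python
# A minimal impact x likelihood matrix with a step-down rule for controls.
# The scale and the adjustment rule are illustrative assumptions.
LEVELS = ["low", "medium", "high"]

def inherent_rating(impact: str, likelihood: str) -> str:
    """Combine impact and likelihood into a single inherent risk rating."""
    score = LEVELS.index(impact) + LEVELS.index(likelihood)  # 0..4
    return LEVELS[(score + 1) // 2]  # high impact + low likelihood -> medium

def residual_rating(inherent: str, effective_controls: int) -> str:
    """Each effective control steps the rating down one level, floored at low."""
    return LEVELS[max(LEVELS.index(inherent) - effective_controls, 0)]

# e.g. a high-impact, medium-likelihood risk mitigated by two controls
inherent = inherent_rating("high", "medium")   # -> "high"
print(inherent, residual_rating(inherent, 2))  # high low
```

The exact rules matter less than writing them down: once the matrix is explicit, anyone can challenge a rating and see how it was derived.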
Application
Let’s now apply this to the two scenarios above.
Example 1: Partial Data Flow for Reserving Best Estimate Risk & Control Analysis
The flow starts with the setting of the case reserves by the adjuster before export into the reserving system.
Despite the centrality of this data, I’ve judged the risk of case reserves being incorrect here to warrant only a “medium” rating. This is because the data would need to contain sufficiently material deficiencies to skew the process of arriving at the Actuarial Best Estimate. This is not to minimise the importance of this data, but merely to acknowledge that each and every case reserve does not need to be completely accurate. It is, after all, a judgement in and of itself.
Accordingly, I might choose to conclude that the existing QA control, whilst not comprehensive, is nevertheless sufficient. It performs a valid check on the adjusters’ ability to exercise their judgement over a period of time.
Instead, the degree of risk is judged to be higher at the point at which the picks are selected, due to the degree of expert judgement involved over highly summarised data; it is therefore this operation that requires additional, robust scrutiny and control.
To mitigate this, there are two levels of Actuarial review, with the residual risk score still being judged as “medium”.
Let’s now look at the second example.
Example 2: Partial Data Flow and Risk & Control Analysis for the Facultative Reinsurance Recoveries Process
The process flow commences with the booking of the reinsurance policy. This critical step enables automation of the recoveries process.
It is important that all reinsurance recoveries are effected. Accordingly, in this case the Critical Data Elements identified above do need to be booked completely accurately for each and every policy at the point of set-up.
Hence, the analysis shows that the inherent impact of a booking error is deemed to be “high” in this case.
The Quality Assurance check, which covers 100% of facultative bookings, is therefore more important in this scenario than in the previous one. To augment it, an exception report is used as a catch-all, further reducing the overall residual risk to “low”.
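As an illustration of such a catch-all (the record structures are assumptions, not a real system’s schema), the exception logic can be as simple as flagging any direct claim paid above the excess point with no recovery booked against it:

```python
# Illustrative exception report: claims paid above the facultative excess
# point with no recovery booked. Record structures are assumptions.
claims = [  # (claim_id, policy_id, paid_amount)
    ("C1", "P1", 400_000),
    ("C2", "P1", 100_000),
    ("C3", "P2", 900_000),
]
fac_excess = {"P1": 250_000, "P2": 500_000}   # excess point per policy
recoveries_booked = {"C1"}                     # claims with a recovery booked

exceptions = [
    (claim_id, paid)
    for claim_id, policy_id, paid in claims
    if paid > fac_excess.get(policy_id, float("inf"))
    and claim_id not in recoveries_booked
]
print(exceptions)  # [('C3', 900000)] -- a missed recovery to investigate
```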
Feel free to disagree with me on the ratings and rationale I’ve applied. That’s the beauty of this approach: it makes the whole process transparent.
Either way, the two examples should now provide a clear view of why you need to understand not just what data is being used in each use case, but how it is used, so that you can accurately identify the risks and design your control environment accordingly.
Reactive vs Proactive Governance
The benefits of a proactive approach to your Data Governance can easily be seen. By adopting this approach, you will reduce the likelihood of costly re-work – or of an error going unnoticed until it’s too late.
The downside?
A significant amount of effort is required. Producing the artefacts necessary to perform a robust Risk & Control Analysis is time-consuming.
So where do you draw the line between reactive and proactive Data Governance?
That’s a question you need to answer. It’s essentially a policy-level decision.
Both approaches have their pros and cons.
You may not adopt a proactive approach for every data feed. There comes a point where you might conclude that the risk posed does not warrant the effort involved.
That’s fine.
Equally, you shouldn’t adopt a purely reactive approach merely because it’s easier and simpler to implement. Doing so may well leave your organisation open to some unacceptably large risks.
Is that a risk you're willing to take?
In the next article, we’ll focus on some practical steps you can take to build a detailed data lineage.
Subscribe here to get future articles in this series.
--
Need Data Governance help?
Book a call here to discover how we can support you.