Beyond Design: Who’s Ensuring Your Data Controls Do What They Say On The Tin?
First published on my website (AHUJA CONSULTING LIMITED)
Your Data Controls Framework: Securing Your Organisation Against Data Risk And Driving Value
Data risk features high on most financial services organisations’ risk registers – and so it should. The litany of data disasters over the years makes sobering reading for any CRO, CDO or Head of Data Governance.
The risk of large fines and reputational damage is ever-present. But that’s just one side of the coin.
Perhaps the biggest risk is the opportunities missed because of poor data.
At one client, delays in getting trading partner data entered into a core system meant that staff were using a dummy code to force trading details through.
The result?
The firm could not track whether its trading partners were delivering on their targets. A key performance metric, essential for growing the business, was flawed.
A well-designed Data Control Framework is your antidote to poor-quality, unusable data, ensuring it remains the high-quality asset that propels your organisation forward.
I’m talking here about the full spectrum of controls over each critical data flow.
These controls are the bedrock of your Control Framework and the guardians of data quality.
But there’s an elephant in the room: how do you know your controls are working?
Design Effectiveness vs Operating Effectiveness
Here’s the reality. You can go to great lengths to understand the risks buried in your data flows, and put great effort and ingenuity into designing a suite of controls to mitigate them. But on its own, this will not protect your organisation against bad data.
As any auditor can tell you, great control design does not equal great operating effectiveness. Your Framework still depends on people to operate it – and people are fallible.
Ah, but wait – what about the Data Quality Indicators? Won’t they tell you whether the control environment is working as intended?
Yes and no. They’re indicators, not guarantors, and they have their own blind spots.
Remember the example above? Guess what the Data Quality Dashboard showed.
You guessed it: it was green. All fields were complete.
But that didn’t mean the data was accurate.
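To make the blind spot concrete, here’s a minimal sketch in Python of a completeness check passing while an accuracy check fails. The field name, dummy code and partner list are invented for illustration, not details from the client example.

```python
# A minimal sketch of the "green dashboard" blind spot. The field name
# ("partner_code"), the dummy code ("ZZZ999") and the partner list are
# illustrative assumptions.
import pandas as pd

trades = pd.DataFrame({
    "trade_id": [1, 2, 3, 4],
    "partner_code": ["P1001", "ZZZ999", "P1002", "ZZZ999"],  # dummy code forced through
})

# Completeness check: the kind of rule a Data Quality Dashboard typically runs.
completeness = trades["partner_code"].notna().mean()
print(f"Completeness: {completeness:.0%}")  # 100% -- the dashboard shows green

# Accuracy check: does each value refer to a real trading partner?
valid_partners = {"P1001", "P1002", "P1003"}
accuracy = trades["partner_code"].isin(valid_partners).mean()
print(f"Accuracy: {accuracy:.0%}")  # 50% -- dummy codes slip past completeness
```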
The bottom line: someone needs to ensure your Framework is doing what it says on the tin!
Why Wait for Audit?
Now, some may say this is the job of Internal Audit. Others might say that, as the business owns the controls, the business should test them.
They would be right – on both counts.
However, the third line often has limited resources and time pressures, and can only test a subset of the organisation’s overall control environment (not just the data controls) in any given year.
What about the business?
In some organisations, the Operations team might undertake the testing. If that’s your organisation – great! Just make sure you know what they’ve tested and what the results were. But in many smaller organisations, the business doesn’t have the skill set or maturity to check control operation.
In that case, it may fall to the Data Governance Team, or a Data Risk Team in the second line, to test the Data Controls environment.
An added benefit? If you’re responsible for one of these teams and undertake reviews of operating effectiveness, the experience will give you a much better idea of where you need to tighten up the existing control design.
In this article, I’ll show you the principles you can use to conduct your own Operating Effectiveness testing.
Operating Effectiveness Testing
If you want to test how well your controls are actually working, you need to consider three things: scope, sampling and evidence.
Finally, you’ll need to make sure that you report on what you’ve found.
Let’s dive in.
Scope
If the Framework has been designed well, you’ll know which controls act as pillars and which are secondary supports only.
Your pillar controls will be those that mitigate risks present in multiple critical flows. I’m talking about things like QA checks that operate far upstream of most of your critical data use cases but underpin so much of the core data feeding them.
For maximum bang for your buck, the pillar controls will be your first port of call. Target them. That’s not to say you shouldn’t test others; you’re just not going to get the same value.
The key is to be clear on why you want to test certain controls and not others in any given review.
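As a rough illustration of how you might separate pillar controls from secondary ones, the sketch below ranks each control by the number of critical flows it covers. The inventory shape, control names and flow names are assumptions for the example, not a prescribed format.

```python
# A hedged sketch: rank controls by how many critical data flows they
# mitigate. Control names and flow names are illustrative assumptions.
from collections import Counter

# control -> the critical flows it mitigates
control_inventory = {
    "Upstream QA check":   ["pricing", "risk", "regulatory reporting"],
    "Partner code review": ["trading", "performance metrics"],
    "Month-end sign-off":  ["finance"],
}

coverage = Counter({ctl: len(flows) for ctl, flows in control_inventory.items()})
for ctl, n_flows in coverage.most_common():
    tier = "pillar" if n_flows > 1 else "secondary"
    print(f"{ctl}: covers {n_flows} critical flow(s) -> {tier}")
```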
Sampling
You may have a QA check that operates every day. You can’t possibly look at every instance of its operation.
You’ll need to be selective. Decide on a sample in advance. It could be as simple as 10%, or something more nuanced, such as 10% split by processing team, division or department, to give better representation.
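Here’s a minimal sketch of the more nuanced option – a 10% sample stratified by team – assuming your control operations are logged in a table with a team column (the data and column names are invented):

```python
# A minimal stratified-sampling sketch; the operations log and its
# "team" column are illustrative assumptions.
import pandas as pd

operations = pd.DataFrame({
    "operation_id": range(1, 101),
    "team": ["Ops A"] * 50 + ["Ops B"] * 30 + ["Ops C"] * 20,
})

# Draw 10% from each team so smaller teams are still represented.
sample = operations.groupby("team").sample(frac=0.10, random_state=42)
print(sample["team"].value_counts())  # Ops A: 5, Ops B: 3, Ops C: 2
```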
Evidence
If your organisation has designed the suite of controls well, you’ll know what counts as good evidence of operation and where it’s to be stored.
You’ll need to check the evidence for your sample and ensure that it corresponds to what’s expected. Look for anomalies – the things the original control operators may have missed. If in doubt, raise queries with them and be tenacious.
You need to maintain professional scepticism - remember the adage about assumptions!
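Where evidence is stored as files, part of this check can even be scripted. A hedged sketch – the paths, operation IDs and the idea that evidence lives on a shared drive are all assumptions:

```python
# A hedged sketch of an evidence check over the sample: flag sampled
# operations whose expected evidence file is missing, so queries can be
# raised with the control operators. Paths and IDs are hypothetical.
from pathlib import Path

expected_evidence = {
    "OP-0001": Path("evidence/2024-06-03_qa_signoff.pdf"),
    "OP-0002": Path("evidence/2024-06-04_qa_signoff.pdf"),
}

queries = []  # anomalies to follow up on
for op_id, evidence_path in expected_evidence.items():
    if not evidence_path.exists():
        queries.append(f"{op_id}: no evidence found at {evidence_path}")

for q in queries:
    print(q)
print(f"{len(queries)} of {len(expected_evidence)} sampled operations need follow-up")
```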
Reporting
Your testing may give you the comfort you need that your controls are working. But you also need to pull it together into a report for your Data Steering Group. They need to know what you’ve found – good or bad.
Don’t forget: your report – and the evidence of your testing – is a valuable tool in conversations with auditors and regulators.
Use it to demonstrate due diligence and proactiveness.
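A simple pass-rate summary per control often forms the backbone of that report. A minimal sketch, with the results invented for illustration:

```python
# A minimal sketch of a testing summary for the Data Steering Group:
# pass rates per control across the sampled operations. Data is invented.
results = [
    {"control": "Upstream QA check",   "passed": True},
    {"control": "Upstream QA check",   "passed": True},
    {"control": "Partner code review", "passed": False},
    {"control": "Partner code review", "passed": True},
]

summary: dict[str, list[bool]] = {}
for r in results:
    summary.setdefault(r["control"], []).append(r["passed"])

for control, outcomes in summary.items():
    rate = sum(outcomes) / len(outcomes)
    print(f"{control}: {rate:.0%} pass rate over {len(outcomes)} sampled operations")
```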
Industrialising Your Testing
Testing the operating effectiveness of your controls is not a once-and-done exercise. You need to build a programme of testing so that all of your most important controls – the make-or-break ones that your critical data use cases rely on – are tested over a continuous cycle.
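One way to industrialise this is to spread the pillar controls across quarters so each comes round again on a fixed cadence. A minimal sketch, assuming a quarterly cycle and illustrative control names:

```python
# A minimal sketch of a rolling testing programme: assign each pillar
# control to a quarter so every control is re-tested over a fixed cycle.
# The quarterly cadence and control names are illustrative assumptions.
from itertools import cycle

pillar_controls = [
    "Upstream QA check",
    "Partner code review",
    "Reconciliation sign-off",
    "Access recertification",
]

schedule = dict(zip(pillar_controls, cycle(["Q1", "Q2", "Q3", "Q4"])))
for ctl, quarter in schedule.items():
    print(f"{quarter}: test '{ctl}' (repeats the same quarter next year)")
```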
The Bottom Line
By undertaking operating effectiveness testing, you gain an added degree of assurance over the controls your organisation relies on to protect and defend one of its most critical assets: its data.
Whether it’s the Data Function that does this, or the Operations, Finance or Risk teams – or a mix of them – there needs to be transparency over who is testing, what is being tested and what the results are. Otherwise, this important element of the Control Framework will fall through the cracks and only come to light when it’s too late.
Question: If you’re not exercising due diligence, who is?
Coming next: The Hidden Dangers of Misaligned Risk Management and Data Governance
Subscribe here to get future articles in this series.
--
Need Data Governance help?
Book a call here to discover how we can support you.