Too many cooks: why the public sector only needs one data team.
Somewhat unrelated introduction
W. B. Yeats best summed up the mood of the team at today's agile stand-up. When asked to give an update on progress of the data pipeline, the Irishman, who died in 1939, provided his assembled colleagues with the following update:
"Turning and turning in the widening gyre
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world."
We politely asked him to remember to raise these concerns at the next retrospective, and then we all got back to work.
Towards a common public sector data capability...
What makes the public sector different?
The private and public sectors play by different rules. We hold them to different standards. We're used to thinking of sprawling corporations (think Alphabet, Hyundai, or GE) as single entities. But we rarely, if ever, think of the public sector as a single entity.
Instead we are used to a public sector composed of semi-autonomous factions. The UK government numbers 23 ministerial departments, 20 non-ministerial departments, 13 public corporations, 107 'high profile groups', 412 'agencies and other public bodies', and 333 local authorities.
Now, obviously, there are many more private companies in the UK than there are public bodies. But is it best to think of public bodies as if they are individual private companies, or as subsidiaries of a single large conglomerate? I would argue that if the dynamism of the private sector comes from the animal spirits of the market, then the secret weapon of the public sector should be collaboration and common purpose.
Many private sector firms are larger than the entire public sector of a developed country. And yet we expect firms to be homogeneous, and to act with some semblance of a consistent will. Our law and vernacular treat corporations as singular persons (e.g., "What's Apple planning next?").
In private sector firms, the presence of a wide array of competing IT silos and platforms is (rightly) seen as wasteful. We admire the CTO who seeks scale, consistency, and user experience in everything they do. Waste does inevitably occur in corporations. Plenty of non-digital native companies have crippling technical debt, or have acquired competing systems through mergers and acquisitions. Everything I hear about finance is frankly terrifying! But crucially, firms are rarely actively working to perpetuate these problems.
Whilst it's great that public sector organisations are seeing the value in recruiting data teams, CDOs, and the like, we risk having "too many cooks". Every big new data hire must, after all, justify their existence (and pay cheque).
So what would a uniform and holistic data platform for the public sector look like? How would we get there? And would it be worth the effort?
What if?
Here's a thought experiment: legislation is passed which explicitly forbids any department from recruiting a data function. There will be no more departmental data teams, CDOs, or other data investment, only a single 'centre of excellence' centralised data team. That centralised team will take responsibility for all data investment and staff across the public sector. Budgets are apportioned across departments according to some assessment of need and/or risk and/or societal value. This is a deliberately extreme scenario, because deliberately extreme scenarios are the most fun to think about.
So what is the fallout?
One might argue that nothing would meaningfully change. We're still committing the same amount of money and resources to the same problem, so why should the outcomes be any different?
Others might argue that there would be an explosion of productivity thanks to economies of scale, common ways of working, and a reduction in non-value-adding middle managers and their ilk. Instead of focusing on the whims of small organisations, the centralised data team would be able to focus resources on answering the big questions better.
Yet others might argue that delivery would plummet. Creating the centralised data team could prove a chaotic nightmare, and would inevitably face pushback from entrenched teams. But even when the dust settled, is there really enough commonality of need across the public sector to warrant centralisation? The bigger and more removed from user needs that investment in data becomes, the less likely it is to succeed. Centralisation would put all of the power in the hands of people too distant from the customers to understand their problems.
The role of the public sector data team / CDO
The thing that I love the most about working in data is the license to abstract problems to their core components. What makes data matter to organisations isn't the what or the how, but the why. There are a bajillion different data use cases out there, and a zillion different ways to solve each use case. And 1 bajillion use cases * 1 zillion solutions = 1 gazillion possible combinations of use cases and solutions (#maths). But do we really need a gazillion different architectures? Or do we actually just need a few standard ways of specifying, collecting, transforming, and analysing data, maybe with some allowance for scale or urgency?
So, what would a centralised public sector data team / CDO do? Perhaps the most useful thing would be to arrive at standard designs for common fundamental problems. This would begin to address the inefficiency, redundancy, and frankly human time wasted in designing bespoke strategies and architectures for every different organisation. Using the power of abstraction, this new centralised data team would be able to focus on creating standard user-centric services rather than simply writing business cases and deploying technology.
Sure, it would take a long time to unwind the incredible complexity of technical debt across our public sector organisations. However, think of the reward at the end! Standard services for creating data pipelines or deploying self-service analytics, for defining data dictionaries or assessing data quality, for administering access controls and monitoring budgets. Not just standardised, but tangibly better, because of the sheer volume of resources and incremental development a centralised team could devote to these common problems. We might, just, stand a chance of rivalling large private firms in terms of capability.
Commonality of approach has the potential to massively reduce both the overrunning costs and the variability of outcome that we see in public sector data projects. Data projects often go wrong due to the same old issues: lack of funding, ineffective procurement, poorly defined or changing requirements, lack of senior sponsorship, wishful thinking masquerading as planning, neglected dependency management, and suchlike. We often speak about knowledge sharing between public sector clients, and it does happen (largely thanks to Melissa Zanocco). But you know what's better than problem sharing? Outcome sharing! What if we all started pulling in the same direction? What if my data warehousing project were also yours because, ultimately, we were both using the same platform?
Now perhaps you are thinking of the old red herring: public sector procurement rules! Does the need to competitively procure our data investments stop us from mandating a particular approach? Of course not! There are essentially no technical barriers to having multiple different suppliers compete to realise outcomes using a common platform. Sure, it requires an informed client that can define some standard ways of working, set up a central code repository, and so on. But again, we are talking about a huge cross-government data function here; if National Highways can do it, then it must be possible at scale.
The allure of the small and local
There is, of course, a counter argument to this modest proposal. Scale does not always mean efficiency or success. If you work in data then the words “NHS patient records system” should bring a chill to your spine. The public sector is littered with massive IT projects that failed even to deploy, or realise any return on investment, and left organisations with more rather than less technical debt. Often, in my experience, the bigger a programme the less it can adapt to the needs of end users. So-called ‘shadow IT’ can be dangerous, expensive, inefficient, and many other problematic things, but what shadow IT projects usually have in abundance is user engagement and connection to what staff value.
The thing is, we shouldn’t confuse centralisation with scale. It could be true that big IT projects are more likely to fail in the public sector. But that might actually be another argument against letting individual departments embark upon such large-scale projects without any meaningful supervision. Believe me, data projects attract their fair share of dilettantes. Often the root of failure lies not in the scale of ambition, but rather in expecting non-specialist organisations and staff to deliver data projects at all.
A centralised public sector data team could command a budget worth hundreds of millions of pounds a year, but that wouldn't force it to run unreasonably large projects. You can spend large budgets more effectively through lots of small, incremental, agile, user-centric projects. Private sector companies that operate platforms succeed by continuously improving their core offering, rarely by starting from scratch every 10 years. The code that runs Google Search, or Facebook, or Netflix, or Uber (etc.) is usually an evolution rather than a revolution of the code that ran back when you started using these platforms. This allows these firms to cost-effectively utilise massive workforces spread across the globe, to do clever things like A/B test different changes, and fundamentally to tirelessly improve their services without any meaningful downtime (oh, and also to drain our wallets, undermine our mental wellbeing, and threaten our democracy and the very concept of shared reality, but that's a separate article). By focusing on common platforms and ways of working - rather than projects - the centralised data team could radically simplify - and de-risk - investment in data. Just as importantly, it would radically reduce the 'barriers to entry' for public sector organisations with smaller budgets. Your local authority would be able to access and use exactly the same platform as a big national arm's-length body, but at a price point that reflects its scale and need.
To use a tortured analogy, at the moment most public sector organisations are like works Formula 1 teams: they have to build their car every year from the ground up, with very little sharing of components. Not only is this approach eye-wateringly expensive, it also has very unpredictable results: for every Mercedes 2020 season or Red Bull 2021 season, there's a McLaren 2015 season or indeed a Mercedes 2022 season. Formula 1 has itself realised that this is a problem, bringing in budget caps, more standardised designs, and greater use of standard components; and we are seeing more success from shared engine suppliers, and from 'customer teams' that share large parts of their platform with a works team. A centralised data team would allow the public sector to operate more like Formula E: still fast, still fun, but so much cheaper (and more environmentally friendly)!
Don’t always be different.
Working on a public sector data project should feel challenging, purposeful, meaningful. But it shouldn’t feel unique. And yet, speaking from personal experience, public sector data projects often feel eerily unique. They're like sailing into uncharted waters (the 'here be dragons' parts of the map) and looking for signs that the weather is about to turn against you. Sure, we all claim to be adhering to roughly the same methodologies. Sure, we share many of the same suppliers, and frequently exchange employees. Still, the idiosyncrasies of each client organisation (governance, technology, org structure, procurement, delivery method) mean that the start of every project feels like heading off into terra incognita. And there is perilously little circumspection about this variability. It’s clear that lip service to GDS, knowledge sharing and ‘best practice’ isn’t going to cut it.
Office politics rarely reward humility, and there is a lingering sense that borrowing from others is a sign of a lack of faith in one’s own ideas. But it's not. The ability to learn from others, to share with others, to find common cause: that is a superpower, both for each of us individually and for the public sector as a whole. We work in the public sector because we believe in something bigger than ourselves, a common cause. Let's f***ing act like it!
Nobody sets out to run a poor project loosely adhering to a discredited methodology, based upon a misinterpretation of user needs and unrealistic estimates of time and cost, reliant on an incompetent and mercurial supplier. But hey! It still happens! And it raises the question: maybe, just maybe, the organisations that run our transport networks, deliver our utilities, protect our health, construct our buildings, collect our taxes, educate our kids, and all the rest... maybe they shouldn’t run their own data projects.