Why isn't Banking more Boring? Part Two
In the first instalment we looked at how a cost advantage in banking could be applied to another bank's customers and transactions to generate significant additional value. However, we avoided the question of how to generate that advantage in the first place. So in this instalment we are going to take the first step Musk took with his "boring" strategy: understanding what stops us digging cheaper and faster.
Whether you are an existing bank looking for (or building) your go-to target, or a challenger bank building something new to take on existing banks (and eventually acquire them or their customers), to adopt this strategy you will need something with the following characteristics:
- It will need to be a working bank of reasonable size (say 1 million+ customers) in order to be “viable”. To move a major bank in full onto something else, that something else needs to be proven and running in some form and at some scale.
- It will need a responsive operating model (organisation, locations, people, process and technology) that enables frequent change and personalised variation & interactions at near-zero marginal cost.
- It will require an approach to onboarding the customers and transactions of another bank at scale. Unless you can get the customers and volumes of another bank onto it, the strategic cost advantage cannot be leveraged.
- It will have a cost income ratio at major bank scale low enough to be strategically compelling, say 25% or lower.
If there is something out there today that looks like this, it is not in plain sight. And while there are some new brands and offshoots of existing banks that may aspire to all of these characteristics, there are probably only one or two approaching it in a way that gives them a good chance of success.
Why?
To avoid the complexity and cost of today’s banks you have to understand where the complexity and cost comes from in the first place, i.e. you need your equivalent of Musk’s insights on cross-sectional area and machine productivity. Equally, moving the existing customers and transactions of another bank at scale requires explicit design consideration in the architecture. So you need a parallel to Musk’s overground parking slots, which are actually electric lifts for moving vehicles between today’s slow road network and the high-speed underground network of tomorrow.
However, even if many of the new banks aren't visibly designing to solve both of these, there are signs that such designs may be just around the corner.
You don’t need to break the laws of physics
The cost base of a bank is the result of the choices it makes in its business model and the operating model it implements to support it - its organisation design, locations, people, processes and technology. And whilst some of this may be independent of technology (such as locations), much of how the rest evolves is in large part determined by the underlying technology architecture.
Stripped back to its essentials, banking is not inherently complicated. Yet the operating model and technology landscape of most banks have turned out to be very complicated indeed. Ask:
- Why are conduct risk teams concerned about whether or not the cash flow forecast from an online loan calculator is aligned with the actual interest calculations and payments a customer will make through the lifetime of the loan?
- Why do finance and risk teams have to reconcile the inputs and outputs of different reports that are based on the same underlying customer transactions and interactions?
- Why are there different applications, processes and teams for identifying and managing fraudulent transactions on debit cards, credit cards and mobile payments?
- Why do teams have to regression test that existing products still work when a new one is launched or check that all channels are correctly displaying interest rates when they are changed?
The answer is simple. A large part of the operating model in today’s banks is compensating for the accidental complexity introduced through the myriad of design, technology and engineering choices made through the years. And it isn’t hard to spot. It can be seen wherever functions and data are duplicated between applications or where systems are coupled together in ways that they don’t need to be. But what is accidental complexity and why does it get introduced in the first place?
It was Fred Brooks who introduced the term in his paper No Silver Bullet – Essence and Accidents of Software Engineering. In it he distinguished between “essence” and “accident”. By "essence", he meant the crucial issues and complexities that are inherent in the problem being solved, and in the conceptual structure of the solution. By "accident", he meant the incidental complexities that are imposed by the ways that technology is built in the world: physical limitations of size and speed, and low levels of abstraction in our technology choices.
Although not about software, the same distinction can be seen in Musk’s diagnosis of today’s problems in underground transportation. When people drive cars you need wide tunnels, as drivers tend not to stick to straight lines. They also have accidents, and occasionally they just stop unexpectedly despite what the road signs say. You also have to leave enough space for ventilation to deal with vehicle emissions. However, take those physical limitations away (or reduce them by putting a zero-emission autonomous car on an electric skate) and you can make different design choices. You can have narrower tunnels that are much cheaper and faster to dig.
In the world of banking there are similarly lots of examples where today’s (or yesterday’s) technology choices make the world more complex than it conceptually needs to be. For example, take an ATM. Why does it have the ability to perform an offline PIN check? It was originally there to avoid making a costly network call back to the issuer bank to check the PIN on every transaction. However, an online PIN check is often required anyway, so there are two mechanisms in an ATM for doing the same thing. The simpler and less confusing design, with no duplication of function, would be to always perform an online PIN check, but historically this was physically more expensive to operate, potentially less reliable and often not quick enough. The physical limitations of the time left us with a more complicated and costly design.
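To make the duplication concrete, here is a minimal sketch of what carrying both mechanisms implies for the software: two verification paths to build, keep correct and test, versus a single online path once the original physical constraints fall away. This is illustrative only, with hypothetical names, and is not any real ATM, EMV or payment-network implementation.

```python
# Illustrative sketch only: hypothetical names, not a real ATM / EMV implementation.
from dataclasses import dataclass


@dataclass
class Card:
    pan: str          # card number
    offline_pin: str  # PIN reference data held on the chip (the legacy path)


class IssuerHost:
    """Stand-in for the issuing bank's authorisation host."""

    def __init__(self, pins: dict):
        self.pins = pins

    def verify_pin(self, pan: str, pin: str) -> bool:
        return self.pins.get(pan) == pin


# --- Today: two mechanisms for the same decision ---------------------------

def pin_ok_offline(card: Card, entered: str) -> bool:
    # Check locally against the chip to avoid a network call (the original cost-saving reason).
    return entered == card.offline_pin


def pin_ok_online(host: IssuerHost, card: Card, entered: str) -> bool:
    # Ask the issuer over the network.
    return host.verify_pin(card.pan, entered)


def authorise(host: IssuerHost, card: Card, entered: str, force_online: bool) -> bool:
    # The ATM must choose a path, keep both correct, and regression test both.
    if force_online:
        return pin_ok_online(host, card, entered)
    return pin_ok_offline(card, entered)


# --- With cheap, fast, reliable connectivity: one mechanism ----------------

def authorise_simple(host: IssuerHost, card: Card, entered: str) -> bool:
    # Always defer to the issuer: one code path, one behaviour to keep in sync.
    return pin_ok_online(host, card, entered)


if __name__ == "__main__":
    host = IssuerHost({"4000123412341234": "1234"})
    card = Card(pan="4000123412341234", offline_pin="1234")
    print(authorise(host, card, "1234", force_online=False))  # True via the offline path
    print(authorise_simple(host, card, "1234"))               # True via the single online path
```

Everything the second version removes (the offline path, the routing decision, the chip data that has to be kept in sync with the issuer) is accidental rather than essential, and each removed element takes a slice of testing, monitoring and reconciliation effort with it.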
Of course, accidental complexity is not just in ATMs. It is in all of the channel systems and the product platforms too. It is also in the risk and finance systems. In fact it is everywhere in the architecture, and it is not the same as technical debt. It can’t easily be paid down or removed, because it is an inherent consequence of the physical limitations and low levels of abstraction possible with the technology choices made at the time, rather than a result of the quality of the implementation.
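A similar sketch, again with entirely hypothetical names, shows the kind of duplication the first two questions above point at: the same repayment calculation implemented once in a channel (the online loan calculator) and again in the product platform, with a control whose only job is to check that the two copies stay aligned.

```python
# Illustrative only: hypothetical names, not any bank's actual systems.

# Channel system: the online loan calculator carries its own copy of the maths.
def channel_monthly_repayment(principal: float, annual_rate: float, months: int) -> float:
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)


# Product platform: the system of record implements the "same" calculation again,
# with its own rounding convention, release cycle and test suite.
def platform_monthly_repayment(principal: float, annual_rate: float, months: int) -> float:
    r = round(annual_rate / 12, 6)
    return round(principal * r / (1 - (1 + r) ** -months), 2)


# The operating-model cost of the duplication: a conduct-risk control that exists
# only to confirm the quoted figure matches what the customer will actually pay.
def quotes_aligned(principal: float, annual_rate: float, months: int, tolerance: float = 0.01) -> bool:
    quoted = channel_monthly_repayment(principal, annual_rate, months)
    charged = platform_monthly_repayment(principal, annual_rate, months)
    return abs(quoted - charged) <= tolerance


if __name__ == "__main__":
    print(channel_monthly_repayment(10_000, 0.06, 60))   # what the customer is shown
    print(platform_monthly_repayment(10_000, 0.06, 60))  # what the customer is charged
    print(quotes_aligned(10_000, 0.06, 60))               # the check that duplication makes necessary
```

If the calculation lived in one place and every channel called it, neither the second implementation nor the control comparing the two would need to exist.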
But what if there was a silver bullet in banking that meant accidental complexity could be avoided altogether (or at least minimised)? Is it possible to end up with just the “essence” of what a bank should be and have parts of a bank’s operating model - and together with it a significant part of the complexity and cost of the operation - just drop away?
Very possibly.
Brooks’ original thesis was that the big gains of the past had come from stripping away accidental complexity, and that unless accidental work still accounts for more than nine-tenths of the total effort, removing it entirely cannot deliver another tenfold improvement. Hence his conclusion:
"There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement within a decade in productivity, in reliability, in simplicity", Frederick P. Brooks Jr.
Put more simply, Brooks said that inherently hard problems are still hard no matter what tools you have available to solve them (although he was less certain on AI in this context).
However, banking isn't inherently hard. In fact the essence of it is very simple. What has made it more difficult and more costly than it needs to be is the accidental complexity that has resulted, unavoidably, from the technology and organisational design choices made so far. To turn Brooks’ argument around: a problem that is today 10% essence and 90% accident is a perfect candidate for a 10x improvement from the application of new technology. In the context of microservices, cloud and machine intelligence, banking has likely reached this point.
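A quick back-of-the-envelope calculation (my own illustration of the arithmetic, not something taken from Brooks’ paper) shows why the split between essence and accident matters so much: the best possible improvement from eliminating accidental work entirely is bounded by the share of total effort it accounts for, in much the same way as Amdahl’s law bounds speedup.

```python
# Illustrative only: an Amdahl-style bound on Brooks' essence/accident argument.

def max_improvement(accidental_fraction: float) -> float:
    """Upper bound on overall improvement if all accidental effort is removed
    and the essential effort is left untouched."""
    essential_fraction = 1.0 - accidental_fraction
    return 1.0 / essential_fraction

print(max_improvement(0.5))   # ~2x  - accident is half the effort
print(max_improvement(0.9))   # ~10x - accident is nine-tenths of the effort
print(max_improvement(0.95))  # ~20x
```

On these numbers, a domain whose problems are mostly essential cannot see an order-of-magnitude gain from any single tool, which was Brooks’ point, while one that is 90% accident can.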
In Part Three, we will examine the accidental complexity in banking in more depth and use this to define what the emerging essential architectures are likely to look like.