AI and the Evolution of Technology Risk
Areiel Wolanow
LinkedIn Top Voice in AI, Quantum Computing, and Emerging Technologies. Advisor to governments, central banks, regulators, and global enterprises on AI, Fintech, DLT. Managing Director of Finserv Experts.
Last year was a watershed year both for me and for Finserv Experts, the boutique consultancy I have the privilege to lead. We were asked by some of the world’s top AI experts to help them solidify their approach to managing AI risk. Google asked us to help develop and deliver AI risk training for their own go-to-market team; global law firm and tech experts CMS engaged us to help them define their own AI risk assessment offering. Coming out of this experience, I was given the honour of being invited to speak this year at LEAP, the upcoming gathering in Riyadh that has been shattering technology conference records since it started a few years back.
In preparation for LEAP, I have been going through the latest academic research on AI risk, as well as the public policy and regulatory positions of leading countries around the world and patterns of investment at both new startups and established enterprises. Coupling these with my own experience over the past year, I have begun to see some patterns that I’d like to share with you.
As with most emerging technologies, the key to assessing potential impact is less about being able to predict the future and more about being able to understand the past. Consider any disruptive technology, not just the ones in our lifetimes like the internet or mobile phones, but earlier ones like automobiles, electricity grids, or birth control pills. In each case, the perception of the risks associated with these new technologies can be tracked through a series of stages, which I call Blob, Framework, Quantified, and Algorithmic.
Blob Stage
Blobs are amorphous: impossible to measure or classify. In the Blob stage, the technology is so new that people don’t understand it yet, and have limited experience putting it to use in the real world. As a result, the risks associated with using that technology are expressed as primal fears, and because there is so little understanding or real-world experience, these fears are largely irreducible. These irreducible fears are the chief hallmark of the Blob stage, and the fascinating thing is that, for the most part, they don’t change from one technology to the next. Consider the questions that people are asking now about AI:
People may be asking these questions now about AI, but in the late 1800s, when the world’s logistics networks were converting from sail to steam, people were asking exactly the same questions. Getting beyond the Blob stage requires looking at the actual use cases for the relevant technology in a systematic way, and understanding the details of how people intend to use it.
Framework Stage
In the Framework stage, people have begun to think systematically about the new technology and its use cases, and it becomes possible to use well-established risk management techniques to articulate risks in a structured way that allows for comparison and quantification.
With AI, we begin to see evidence that some enterprises have already moved beyond the Blob stage, and are starting to address AI risk more systematically. There are also innovators hard at work enabling this transition; as one example, UK-based startup Holistic AI offers an excellent platform for identifying, quantifying, and efficiently managing an enterprise-wide portfolio of AI risks.
Quantified Stage
Ultimately, the way to make any risk palatable, whether to investors, consumers, or societies at large, is to share that risk widely enough that no one party faces insurmountable losses if the risk occurs, and the primary vehicle for accomplishing this is insurance. But to insure a risk, you have to be able to quantify it:
- How large is the loss?
- How likely is the loss?
- If a loss happens, who will have to pay for it?
For AI risk to be insurable, it needs to be possible to answer these questions well enough to price an insurance policy appropriately. This in turn requires a body of evidence on which to base estimates of the size and probability of loss, as well as enough case law to have a good idea of how liability is likely to be assigned. Once again, we are seeing early signs that some enterprises are beginning to think about AI risk in the Quantified stage. Munich Re Specialty - North America, for example, already offers an AI insurance product. Their white paper on AI risk is also worth reading; you can grab a copy here: https://www.munichre.com/en/solutions/for-industry-clients/insure-ai/ai-whitepaper.html#whitepaper
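The three questions above map directly onto standard expected-loss pricing. As a minimal sketch, with every figure (severity, probability, liability share, and loading factor) being an illustrative assumption rather than any insurer's actual numbers:

```python
# Expected-loss pricing sketch for a hypothetical AI incident policy.
# All figures below are invented for illustration only.

def annual_premium(loss_severity: float,
                   loss_probability: float,
                   insurer_share: float,
                   loading_factor: float = 1.4) -> float:
    """Price a one-year policy from the three questions in the text:
    how large is the loss, how likely is it, and what share of it the
    insurer would pay. The loading factor covers the insurer's
    expenses, cost of capital, and profit margin."""
    expected_loss = loss_severity * loss_probability * insurer_share
    return expected_loss * loading_factor

# Example: a model-failure loss of $2m, 1% annual probability,
# with the insurer assumed liable for 80% of the loss.
premium = annual_premium(2_000_000, 0.01, 0.80)
print(f"Indicative annual premium: ${premium:,.0f}")
```

The point of the sketch is that every input is an empirical question: without loss data and settled case law, none of the three numbers can be estimated with confidence, which is exactly why so little AI risk is insurable today.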
Algorithmic Stage
In some cases, we will eventually collect enough data about a particular kind of risk that we can very accurately model the amount and probability of loss based on a known set of input variables. When this happens, it becomes possible to offer protection against that risk at a society-wide level. It also enables societies to make those protections available to all, even to those who would not otherwise be able to access insurance; assigned risk pools for auto insurance and government-provided health insurance are but two examples of this.
Note that classifying a risk as algorithmic does not presuppose the use of AI, or even the use of computers at all; many insurers had created actuarial tables that allowed them to accurately price life insurance by the late 1800s and early 1900s.
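An actuarial table of the kind mentioned above is, at pricing time, nothing more than a lookup followed by arithmetic. A toy sketch, with a hypothetical mortality table whose rates are invented for illustration and do not come from any real actuarial source:

```python
# Toy actuarial table: age -> assumed annual mortality rate.
# Rates are invented for illustration, not from any real table.
MORTALITY_TABLE = {
    30: 0.0015,
    40: 0.0025,
    50: 0.0055,
    60: 0.0120,
}

def term_life_premium(age: int, payout: float, loading: float = 1.3) -> float:
    """One-year term-life premium: expected payout (payout times the
    tabulated mortality rate) times a loading factor for expenses."""
    rate = MORTALITY_TABLE[age]
    return payout * rate * loading

# Example: $100,000 of one-year cover for a 40-year-old.
print(f"Age 40, $100k cover: ${term_life_premium(40, 100_000):,.2f}")
```

This is the sense in which a risk can be "algorithmic" without computers: once the table exists, pricing is mechanical and requires no judgment from the underwriter.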
Some forms of risk will never reach the Algorithmic stage. Maritime insurance is the oldest form of insurance known to man, and a significant portion of the Code of Hammurabi was devoted to insurance regulations; yet a good portion of the world’s maritime insurance is still negotiated each year between buyer and seller on the trading floor of Lloyd’s of London rather than plugged into a pricing program.
So where are we with AI risk?
The evolution of technology risk through the Blob, Framework, Quantified, and Algorithmic stages does not happen all at once. Different people, enterprises, and countries evolve at different speeds. As I have noted above, some enterprises are already managing their AI risk with a clearly defined framework, and some have even begun to quantify that risk and make it insurable. But it is fair to say that, as of early 2024, much of the world is still in the Blob stage when it comes to thinking about AI risk. And that is a problem that I and my colleagues at Finserv Experts are trying to address.
For those of you who will be attending LEAP in March, please get in touch; I would love to meet up with you and tell you more about the work we’re doing.