Why Decision Intelligence is the Gravity that is bringing Planet Data and Planet Process together
Credit: Midjourney v5

People in the Artificial Intelligence (AI) and Intelligent Automation (IA) space live on very different worlds.

Planet Data

The residents of Planet Data have a data obsession, perhaps even a data fetish. They are utterly focused on the science of Machine Learning (ML) - that is, building predictive algorithms from historical data. They are motivated by insights, not automation.

Data drives the modern world and has opened up numerous products and services, yet business leaders constantly complain that they are not getting enough value from their data investment. I have been chatting to these folks for the last 15 years and there remains a big gap between data and the ability to take action.

When people ask what I do, I explain that I run an AI company that closes that gap by leveraging next-generation Symbolic AI. People on Planet Data generally give me a sceptical look, so I’ll add, "Yeah, we power decision services for some of the largest companies in the world, like EY and Deloitte, who have turned their knowledge into machine intelligence that replicates human reasoning in a way ML cannot."

Where the conversation goes from there depends on the age, experience and open-mindedness of the data scientist!

I was recently speaking at a summit on the importance of explainable AI in healthcare. I got chatting to the founder of a predictive algorithm business who had a booth next to ours. After we shook hands and made polite introductions, his first question was, "So, what data do you use to train your AI?"

I repeated my well-rehearsed answer: that we are less dependent on data to train models because our models are based, at least in part, on human knowledge.

Before I could expand, he started getting quite animated. "But, where's the data, yer gotta have data, WHERE'S THE DATA?" - as if repeating the request louder and louder would somehow make me submit and tell him where I was hiding it.

I invited him to take a look at our platform but instead, he just walked off, pulling at his own hair in bewilderment, continuing to mutter, "You can't build an AI without data!". He reminded me of that kid who puts his fingers in his ears and screams, “La la la, I’m not listening!” to block out something he doesn’t want to hear.

The next morning, the same guy came over to our team for a demo of one of our knowledge models - and watched it reason. It was making complex, contextual decisions on the authorisation of health treatments and explaining how each was reached.

Rainbird is an advanced neuro-symbolic AI platform that automates decisions and provides evidence for each outcome.

He looked over at me and smiled. He was blown away. I’m not exaggerating, he was adamant that we must collaborate.

This got me thinking. Why is it so hard for some people to entertain the possibility that you can build AI solutions without ML and then, on looking more closely, have their minds completely blown when they see what we do?

Because they live on Planet Data - destined to a life of data-cleansing, bias-battling and worrying about explainability, ethics and regulation.

So, what's the problem?

ML models make predictions. Those predictions can be wrong but look right, or right but look wrong. Because the models are statistical, there is no accompanying explanation, so a subject matter expert must add a layer of human judgement to turn each prediction into a decision - that is, to turn data into action.

Think about it: every time someone tries to delegate responsibility for a decision to ML, some sort of human oversight is required. Nowhere is this clearer than with Generative AI tools like ChatGPT, powered by GPT-4.

ChatGPT has been a veritable nuclear bomb that has gone off in the middle of the business world. It has created what feels like a blisteringly hot “AI summer”. GPT-4 and other Large Language Models (LLMs) have once again put AI front and centre in the public consciousness, and everyone is trying to figure out how to adopt more AI safely and responsibly.

But adopters of LLMs quickly see the challenges. They are slowly realising that they need to be the expert to get value from Generative AI, which is prone to mistakes. Like all ML, it is useful for making predictions, but that only helps if you are a subject matter expert capable of smelling the bullshit (and I mean that in the technical sense).

“Bullshit is convincing-sounding nonsense, devoid of truth.”

…and ML is very good at creating it.

ML is a powerful predictor but cannot replicate human reasoning or complex decision-making.

We, therefore, cannot delegate responsibility to LLMs or other forms of ML to make a decision, because they lack transparency. Without transparency we have no trust, and without trust, we have little adoption.

In fact, I spoke to a leading industry analyst who told me that 40% of all inbound enquiries from enterprises are asking the same question, “How can we govern LLMs?”.

The history of AI has featured many summers and winters as AI has gone in and out of fashion since the 1950s. Like many, my friend at the conference was unaware of his AI history. He didn't know that the last time AI was this interesting, during the 1990s, ML wasn't driving it at all. It was a summer of Symbolic AI.

Planet Process

But there is another world I’ll call Planet Process.

The population of Planet Process is focused on efficiency. They have made their living leveraging Business Process Management (BPM) tools, Robotic Process Automation (RPA) and now Intelligent Automation (IA) to take away the repetitive heavy lifting of manual, white-collar office tasks by automating them.

RPA has proven adept at getting modern systems to talk to legacy systems. It has acted as an effective Band-Aid over the cracks of technical debt - not curing the wounds but covering them up, making them tomorrow's problem.

BPM and workflow have become the default way of structuring repetitive processes and have acted as frameworks to tackle more complex functions like interpreting unstructured documents. This field, called Intelligent Document Processing (IDP), uses Natural Language Processing (NLP) and Natural Language Understanding (NLU) to interpret unstructured data - though it is now largely being displaced by the power of LLMs like GPT-4.

Ironically, these NLP/NLU technologies were invented on Planet Data and turned into simple-to-use tools that can be leveraged by people on Planet Process.

Being ML-based, these tools are also predictive - subject to error and in need of human checking, as those who use them know only too well.

So, the people on Planet Process are generally not ML people. They are entirely focused on reducing human effort and error, mainly to reduce costs.

So, what's the problem?

The processes on the surface of Planet Process, the low-hanging fruit, have mostly been automated, which has opened up two new frontiers:

  • Process mining: the science of using discovery tools to find the next processes to go and automate.
  • Artificial Intelligence: moving beyond the automation of simple tasks towards more complex ones powered by AI.

Unlike the people on Planet Data, who build probabilistic models that make opaque predictions, the toolkits used by those on Planet Process have been mostly linear in nature. They are designed around the principle that an automated process will do one thing after another, branching at various stages and eventually, should something more sophisticated be required, deferring to a human.

They are declarative and explainable but also very limited in their capabilities.

If you look at any process diagram, sitting at the fringes will be people waiting to take on decision-intensive, human-only work.

If you want to do something complex on Planet Process, there is no choice but to defer to humans because linear rules cannot replicate human reasoning.
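
To make that limitation concrete, here is a toy sketch of the linear, branch-and-defer pattern described above. It is not any vendor's product; the claim domain, thresholds and function names are all invented for illustration:

```python
# Toy illustration of a linear, declarative workflow: each step runs in
# order, branches on simple hard-coded conditions, and defers to a human
# the moment a case needs real judgement. All rules here are invented.

def process_claim(claim: dict) -> str:
    # Step 1: simple validation
    if claim.get("amount") is None:
        return "reject: missing amount"
    # Step 2: a single hard-coded branch
    if claim["amount"] <= 500:
        return "auto-approve"
    # Step 3: anything requiring contextual reasoning falls off the edge
    return "defer to human reviewer"

print(process_claim({"amount": 120}))   # auto-approve
print(process_claim({"amount": 9000}))  # defer to human reviewer
```

The point is structural: however many branches you add, the final branch is always a human, because the rules themselves cannot weigh context.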

The coming together of two worlds

For the population of Planet Process and Planet Data to advance their agenda further, these worlds need to come together.

People on Planet Process would never try to automate human reasoning or decision-making because they would assume it was an ML problem and, therefore, in the wheelhouse of those who live on Planet Data. They know deep down that you cannot automate human reasoning using a decision tree, not least because humans do not think linearly.

People on Planet Data know that decisions require explainability - the key to removing the human needed to close the gap between an uninterpretable, data-derived prediction and the ability to take safe action.

Of course, despite the data fetish people on Planet Data suffer from, data is not created in a vacuum. All data is generated as the result of business processes and human actions. The process that led to data may not always be obvious, but it is always there.

How are these worlds coming together?

So, we have process people looking for more sophisticated automation and data people looking to close the gap between data and action. There is a missing piece in the middle, something that bridges these two worlds.

This all comes together around the automation of complex, contextual human decision-making - the domain of Decision Intelligence (DI). DI is the gravitational force that has been pulling these two planets together. It represents the ability to achieve the best of both worlds by combining the power of data-driven statistical prediction with symbolic human judgement.

I sometimes get challenged on the semantics here. Is there really a difference between prediction and judgement?

Predictions are data-driven, machine-learned forecasts of future outcomes, built from historical data, typically using neural networks. They are the “neuro” part of an incomplete equation.

Judgements are the evaluation of a prediction against beliefs, values and experiences, which can include regulations and standards. These are expressed symbolically in software, and we might call “symbolic” the other half of the equation.

A genuinely autonomous decision must be informed by the past and measured, moderated or judged against the standards set by humans. Both data and knowledge must be reasoned over to form a decision. And that process must be open to scrutiny and inspection, so the whole chain of reasoning can be understood.
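
As a rough sketch of that pattern - not Rainbird's implementation, just a toy in Python whose model, thresholds and rules are all invented - a statistical prediction is judged by explicit, human-authored rules, and the chain of reasoning is returned alongside the decision:

```python
# The "neuro" half: a stand-in for an ML model that returns a risk
# score in [0, 1]. In reality this would be a trained predictor.
def predict_risk(features: dict) -> float:
    return min(1.0, 0.1 * features.get("prior_claims", 0)
                    + 0.3 * features.get("flags", 0))

# The "symbolic" half: explicit rules judge the prediction, and every
# step appends to an inspectable chain of reasoning.
def decide(features: dict) -> tuple[str, list[str]]:
    reasons = []
    score = predict_risk(features)
    reasons.append(f"model predicted risk score {score:.2f}")
    if features.get("regulated_jurisdiction"):
        reasons.append("rule: regulated jurisdiction requires manual sign-off")
        return "refer", reasons
    if score < 0.4:
        reasons.append("rule: risk below 0.4 threshold, safe to approve")
        return "approve", reasons
    reasons.append("rule: risk at or above 0.4 threshold")
    return "refer", reasons

decision, evidence = decide({"prior_claims": 1, "flags": 0})
print(decision)   # approve
print(evidence)   # the full chain of reasoning behind that outcome
```

The design point is that the decision and its evidence are produced together: the rules can be read, challenged and changed by a human expert, while the prediction remains free to be as statistical as it likes.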

Symbolic AI gives us transparency at the expense of needing to explicitly encode the entirety of a domain.

ML avoids the need for explicit encoding at the expense of an unquenchable thirst for data, a lack of transparency, and the risk of biases.

This has heralded the birth of Neuro-symbolic AI, and it’s the solution to squaring this seemingly intractable circle.

Neuro-symbolic AI describes a set of architectures that combine symbolic approaches with neural networks and other techniques from ML. By combining these methods, we benefit from their complementary strengths and avoid their respective weaknesses.

Neuro-symbolic AI is a little like a hybrid car, presenting a single driving experience, but under the hood, two engines purr away in harmony. It has an architecture that allows us to manipulate and reason over symbols while at the same time breaking free of the need to build explicit models.

It effectively bridges structured, symbolic models and fuzzy real-world, data-driven reasoning.

Not only does it enable us to automatically build structured models from natural language, but we can also apply those models to real-world situations that were previously unseen. We can empower human experts to see, understand, and modify models and to describe with complete transparency the reason behind any decision made.

In the post-LLM world, decision intelligence has revealed that our two worlds were always one. It is the answer to leveraging the combined power of all that has come before, delivered through the power of neuro-symbolic AI.

And that is a superpower for anyone who adopts it.
