"AI" Should Mean "Accountable Intelligence".
I recently sat in a restaurant in Kuala Lumpur discussing AI developments with a business associate of mine. We had both played in the field for some years: he was much deeper in the weeds as the owner of an AI development company, while I was working on the fringes, grappling with the potential and philosophical benefits and trying to figure out how to invest ahead of a very fast-moving curve. We met in the middle on one problem: AI was losing its explainability.
It has taken me a good few months to frame that lunch chat into what is really needed, all the while the industry debate raged on: "Is it Artificial Intelligence or Alien Intelligence?" I felt that was missing the point. It is really about Accountable Intelligence, and here is why.
Why the future of AI depends on transparency, trust, and accountability.
The rapid advancement of AI has brought extraordinary potential: tools that can draft legal documents, diagnose illnesses, generate art, and more. But beneath this promise lies a critical issue: the lack of Accountable Intelligence. Many AI systems today operate in a grey area, shielded by disclaimers and by the absence of clear responsibility for their actions or outputs.
Take, for example, the AI systems developed by major players like Google, Meta, or OpenAI. These companies have created increasingly capable tools, but the explainability of their systems, the ability to understand and trust their decisions, is often lacking. The models are trained on vast and complex datasets that require constant retraining, reframing, and auditing, yet accountability remains elusive. The result? A system in which developers dodge liability, bake in biases and unpredictability, and hide behind disclaimers that the "AI made the decision, not us."
Technology Without Boundaries Is Anarchy
When development outpaces regulation, we risk descending into technological anarchy. Social media offers a parallel: it promised to connect us all, yet it inadvertently introduced a mental health crisis with consequences comparable in scale to a public health epidemic.
Without clear boundaries, AI systems can produce outputs that are reckless or even harmful. I have written about many of these before, but it's easy for any of us to imagine an AI lawyer recommending a legal position that later proves disastrous. Who is responsible? The bot? The company that built it? Or no one at all?
In our human systems, accountability is foundational. Doctors take oaths. Lawyers operate within strict legal frameworks. Both are held to professional standards, and breaches result in penalties. This accountability builds trust, ensuring that services are provided ethically and reliably. AI, no matter how intelligent, should not be exempt from these principles.
The Case for Accountable Intelligence
To ensure AI systems serve humanity responsibly, we must embed Accountable Intelligence into their development processes. This involves holding developers and companies to the same standards as other professionals. If an AI system's recommendation leads to harm, the company behind it should bear liability, just as a human professional would.
The insurance industry provides a useful analogy. Directors of companies are often covered by liability insurance, protecting them against unforeseen risks as long as they act in good faith. If they behave recklessly, insurance coverage can be denied, and penalties are applied. Similarly, for AI systems, we need mechanisms to assess risk, explain decisions, and ensure accountability. Only then can we build trust in these tools.
An example from the US press in April: New York City's government-run AI chatbot gave businesses incorrect advice on employment law, and it all got rather messy and public. https://apnews.com/article/new-york-city-chatbot-misinformation-6ebc71db5b770b9969c906a7ee4fae21
Transparency and Morals at the Core
Accountability begins with transparency. Consumers need clear guardrails to understand how AI systems operate and assess their reliability. Developers must prioritize authentic, ethical values at the heart of their designs, rather than hiding behind disclaimers that shift blame to "the algorithm."
Instead of creating entirely new laws for AI, we should adapt and extend existing legal frameworks. These laws, built over decades, already govern human intelligence and behavior effectively. Applying them to AI ensures continuity and leverages established norms, avoiding the need for an entirely new regulatory structure.
Trust as the Ultimate Commodity
The future of AI is not just about innovation, it’s about trust. Whether intelligence is synthetic or biological, trust is built on accountability. Without it, AI will remain a risky proposition, limiting its potential to truly serve humanity.
By embedding accountability into AI systems, we can ensure these tools are not just powerful but also safe, reliable, and aligned with human values. The commodity of the future isn’t just intelligence, it’s trust.
About the Author
Malcolm Wild is a technologist with over 25 years of experience in retail and e-commerce, combined with consulting and delivery expertise across APAC, EMEA, and the USA. He brings this wealth of experience to clients navigating an ever-evolving landscape. Any views represented here are those of the author and not necessarily those of any organization or employer he may represent.
www.malcolmwild.com | © 2024