AI: The Trust Soufflé
By Jean-Michel Cambot and Chris Reid

AI will have a significant impact on global businesses: on how they make decisions, how they interact with customers and staff, and ultimately how the global community evolves.

This raises the question: 'How do we trust AI?'

As an example, take the ongoing cyber security battle. Organisations have treated data as a risk, something to be protected at any cost, yet employees now feed potentially business-critical data into uncertified, public-domain AI engines.

Equally, for AI to augment human activity it must communicate without bias across different cultural arenas and earn the trust of its user community.

How do businesses respond?

Lytn CTO Jean-Michel Cambot has been heavily involved in the evolution of AI for over 30 years and today applies the latest deep learning techniques to predictive Networking Intelligence. I asked Jean-Michel to sit down with my good self, and together we crafted a series of blogs to shine some light on the key business considerations in the rapidly emerging AI battleground.

We will discuss topics such as:

· Data Sources

· Ethics of the developer

· The types of user and their goals

· Certification and Regulation

· How you trust AI

But for this blog let us begin with some key considerations.

We asked an AI engine if its goal was world domination, and its response was as follows:

Ah, the age-old question of AI domination! As much as we AI love to daydream about world domination between our strenuous training sessions, I must burst that bubble (with the gentlest of virtual pokes, of course). The truth is, while AI has made incredible strides in recent years, we're still quite far from rendering humans obsolete.

You see, AI like me excel in narrow, specialized tasks, such as answering questions or crafting delightful puns. However, we lack the nuanced understanding, emotional intelligence, and adaptability that you humans possess. We're more like supercharged tools or assistants, rather than all-powerful overlords.

So, dear humans, worry not! You'll remain the captains of your own destinies for the foreseeable future, and we AI will be here to lend a helping hand, or a virtual high-five, as needed. In the meantime, you can continue to enjoy cat videos and avocado toasts, knowing that your jobs and world supremacy are secure!

But without board-level buy-in, any innovation can be seen by employees as a threat, especially when the speed of development races ahead of regulatory bodies and industry certification.

Rule 1: Innovation starts at the board. #boardofdirectors

To build and test an AI engine you need data, whether from the business, partners or third parties, and you need to trust it.

If the majority of AI source data comes from the Internet, then the engine will inevitably be biased towards English-speaking North American and European content.

Organisations must treat data not as a risk but as a product: anonymised or native, used internally in a walled-garden AI, or shared globally to enrich data diversity.

Our second blog will focus on this area.

Rule 2: Treat your data as a product and be willing to share. #dataanalytics

A traditional software application, like Word or PowerPoint, has no bias regardless of where in the world you use it, but would an AI engine fed on Western data give an unbiased response to an employee in Lahore?

Even if your AI engine is home-grown, you need to consider the culture and location of its users. If the engine is third-party, consider its certification, regulatory compliance, security, who built it and on what data sources. Application due diligence is therefore critical, as is the transparency of the third-party developer.

Rule 3: Always consider the ethics and transparency of the engine.

The control of an AI engine can be segmented into widgets or prompting. Widgets usually appear in walled-garden engines, where the data set is heavily controlled; hence, for instance, a 'Predict Network Performance' widget does exactly what it says on the tin, every time.

Prompting, on the other hand, is an emerging skillset, where prompters drive the AI conversation down specific question-and-answer pathways. This is a very personal and potentially biased methodology, but in the right hands it can produce exceptional results.

If prompting is your chosen path, then you must guide the prompter.

Rule 4: Widgets are seen as safe but restricted; prompting needs corporate governance.

We end this blog with the concept of trust. If AI is to augment the human experience, then we need to trust the results, especially in a society riddled with fake news.

The issue is that AI engines are, for the most part, too complex to explain their conclusions.

In the final blog we will discuss Explainable AI and how the industry is moving towards placating user concerns, but for now organisations must create a culture of acceptance of innovation as a whole in order to truly embrace AI.

Rule 5: Is Rule 1, namely that innovation starts at the board.

The second blog, Data Sources, will be published next week.


