Don't leave AI to the engineers
AI needs creatives and philosophers

Just as I was starting my career, in the heady days of late-90s Britpop and Cool Britannia, I was flown by my future employer to their eCommerce Innovation Centre in the South of France. The 90s were a time of huge technological advancement: the increasing availability of the internet and personal computers transformed communication, shopping and entertainment. I was shown a prototype of a smart fridge and was drawn into a future of work where we were greeted and given access to the workplace through facial recognition. While the technology was available over 20 years ago, the use cases have yet to really take off.

Not long after, I worked on concepts for online dating and share trading over Wireless Application Protocol. The less said about the research I had to do for online dating, the better! These kinds of applications are now ubiquitous, but at the time they were severely restricted by slow speeds, inconsistent user experience across devices and a lack of support for richer media.

In all these periods, the solutions were technically available, but they hadn't yet been designed in a way that made them meaningful enough for people to use. The engineering was ahead of the culture and the ethics. Thought hadn't yet been put into privacy, trust and security. Engineering thrives on innovation and problem-solving, pushing the boundaries of what is possible. While the technology is disruptive, there is often a lag in cultural adaptation and understanding.

With generative AI - that is, AI that is able to generate original content - we are experiencing an extreme example of technology driving ahead of culture. People don't yet know what to think or feel about a song written by an AI, or whether an AI-generated article is simply cheating.

ChatGPT has been in the headlines every day since it launched. Its ability to engage in interactive conversations is remarkable and is already transforming the way we research. However, it has severe limitations and is often wrong, suffering occasionally from AI "hallucinations". ChatGPT was unable to define for me accurately what an AI hallucination is but, in short, it is where an AI provides a confident response that is in fact entirely fabricated.

A friend of mine's father recently had a letter published in The Times in London explaining how he had asked ChatGPT about himself and been the subject of an AI hallucination. ChatGPT confidently and repeatedly asserted that he was an IRA terrorist guilty of the IRA bombing of the Baltic Exchange. In fact, he was CEO of the Baltic Exchange and a victim of the attack.

In another test, ChatGPT insisted that former F1 driver Heikki Kovalainen had participated in Le Mans and was a simulator driver for Mercedes. I sent Heikki the write-up and he assured me that both claims were incorrect.

AI is still in its infancy and brings with it the promise of huge advancements. As with my early experiences, the technology is far ahead of our ethical thinking. We don't yet know what we don't know. There are implications for culture and society that we haven't yet thought through. I swing between extremes of catastrophic alarm and triumphant optimism as I navigate the different thinking on these topics.

As we all catch up with the engineering, it is clear that a multi-disciplinary approach is required. Engineers need to be a more responsible part of the solution, helping us navigate and understand the technology. We need scientists, ethicists, artists, politicians, sociologists and many others to come together with diverse perspectives and expertise to tackle the evolving challenges of AI. Mistakes have already been made: there is no option to press pause, but there is an opportunity to improve. AI is here to stay; we need to develop an integrative approach to allow humanity to catch up.

I don't think any mistake has been made; it's just the start of a long journey. Engineers are jacks of all trades and will do justice to the work. However, different perspectives are always required for such a broad, application-based tool. I want to add an interesting point here about philosophy, though. AI is a kid which will also have to follow some philosophy to be ethically right. How will it process ethics? You may wonder whether we will have to go back to our old philosophy books and educate it… amazing, isn't it?

Lee Mallon

Building world class innovation | Author of DKR

1 yr

Holly Joint, interesting read. Hallucinations are problematic, but they are reducible with fine-tuning; or it is just an unintended consequence of people using the technology in ways it wasn't intended, as the model is incentivised/rewarded to return something. Though I do agree that this technology is here to stay and will dramatically change our lives, and therefore of course those in technology should bring in people from other areas. But I think those in technology also have a responsibility to educate those outside of engineering as to what this technology is and its different use cases, so people get a broader understanding of it.

Tom Barnardiston - Regs and Business Change

Asset & Wealth Mgmt. | Regulatory PM/BA | Front Office Process Redesign | ChPP | Agile PM

1 yr

Great article - it's certainly made me think about the future impact on wider society, but also on how AI is used for project management and analysis.

Biswajit Dasgupta

Strategic Advisory/Investment Management/Wealth Management/ Treasury

1 yr

Couldn't agree more. AI is simply a technology. How it is used, what ethical and moral guardrails are put in place, and so on, is a matter for the entire society to decide. Unfortunately, we don't even have a framework for how to go about it today.

Christopher Hafner, PgDip Oxford, FIoD, FRSA, FStratPS

Catalyst | Consigliere | Contrarian | Provocateur | Strategy | Transformation | Operations | AI & Digital

1 yr

Great provocation, Holly. Intelligence is about acquiring knowledge. Knowledge is about acquiring information through learning. Learning is based on data. So… if we believe we have a 'truth & fact' problem with data on the internet, and if these AIs are fed volumes of available data, how can we be surprised at the bias and disparity in generative AI? If a firm states they do their best to fact-check and review training data sets, are we comfortable with that? Who determines the firm's heuristics? It's an ethics discussion as much as an existential discussion…
