The Delicate Balance Between Society And AI

I was asked an interesting question during a discussion the other day: "How do you see humans and AI interacting?"

As somebody who has been working in and around AI for the past 12 years, it's something that I've written about a fair amount. However, with the almost instantaneous changes that ChatGPT has brought about, it feels like something has shifted.

So how does this work now? What is the future of AI and human interaction and who will be responsible for the impact?

When I used to write about this more, the answer seemed apparent: governments would need to put worker protections in place to stop AI being exploited to the detriment of society. I was writing this kind of thing 10 years ago; since then, the political world has not exactly been forward-looking, focussing instead on short-term issues.

So where we had a community crying out for legislation, the legislative bodies were too busy dealing with COVID, dealing with inflation, trying to hold political leaders to account, or using wedge issues to gain power. Governments have been forced into a short-termism that is slow to escape, because legislation takes a long time to pass, whilst the speed of technological change has been the fastest it has ever been.

In 2023 we find ourselves in a situation where, within six months of ChatGPT being released, we are seeing the biggest disruption since the invention of the internet. No government is in a position to legislate, and this has the potential to create an economic nuclear bomb.

Imagine a world where a multi-million-pound company can be run by a couple of people and a GPT-4 login. Sounds great, right? Well, no.

This is an entirely unsustainable model. If 50% of companies reduce headcount because an LLM can do the jobs of those people, each company's profit temporarily goes up, but the long-term prospects of all companies go down. Even B2B companies ultimately rely on some form of B2C transaction to support themselves, whether that is two, three, or four steps removed from their direct clients or whether they sell directly to B2C companies. By replacing humans with AI, the number of consumers shrinks: a few company stakeholders will become richer, but the majority of the population will become poorer, buy less, and leave those B2C companies facing declining profits and reduced spending power.

There is also the prospect of mass civil unrest, as we have seen any time there have been significant changes to working practices or conditions. We don't even need to look far back in history to find examples; contemporary society is full of them. Right now there are public sector workers striking in the UK, Paris was on fire last month thanks to pension reforms, and film sets sit empty in the US thanks to the writers' strikes. Imagine if this unrest was suddenly not isolated to specific industries and practically every job was under threat. There wouldn't just be protests; there would be riots.

At the same time, governments move so slowly that by the time any laws against these changes come into effect, they will be completely outdated and society will have been forever changed.

So what does this mean? Is society doomed? Is AI too dangerous and must be stopped?

No.

AI has the potential to have a significant positive impact on society, and destroying the progress that has been made would be possibly the biggest act of self-sabotage that any government could take. This is not the nuclear arms race; there are significant positives to AI. We have seen only recently how it is better at identifying cancers in patients, and its potential for everything from cleaning up the oceans to improving sustainable business practices is massive.

The scary part is that, in the absence of any effective legislation from governments, society will be shaped by the CEOs of companies. In the recent past, we have seen how damaging this can be: modern political discourse has been upended and polarisation threatens democracy, thanks to the policies of just one or two companies monopolising the social media space.

However, we have already seen prominent business people, academics, and scientists putting their names to letters decrying the threat of an unregulated AI development environment. Only this month, papers around the world rang the alarm when Geoffrey Hinton left Google and publicised his beliefs about the dangers of AI, describing it as a "bigger threat to humanity than climate change".

Whilst some could see these as scary, I don't. These are signs that things are working. AI can only do the kind of damage that is prophesied if it is developed in the dark and society at large is unaware of it. Instead, we are seeing debates across the world about AI ethics, and any company that decided to publicly outsource its work to AI would be almost instantly shunned by society.

So human interaction with AI in 2023 is not a one-way street. It is not just a case of AI taking our jobs; it is in fact a symbiotic relationship. AI can only exist in a society that allows it, and with so many prominent voices holding the companies developing it to account, these developments can no longer happen in the dark. The time of "wow, isn't ChatGPT amazing" is coming to an end, and the age of "remember who you work for" has begun.

And for anybody wondering, no, I did not use ChatGPT to write this.
