Is AI already in control?

The Tonight programme on the UK’s ITV last week explored, in very simple terms, the benefits and dangers of AI and the potential for a great AI takeover.

The documentary referred to the famous open letter signed by technology leaders such as Elon Musk, which calls for a moratorium on the development of AI until we can understand its implications and decide how to control it.

Having taken the lid off AI there’s certainly no going back, but maybe there’s equally little chance that we’ll be able to control the pace of its development. The people most likely to abuse AI will ignore any attempts to inhibit them. We simply won’t be able to do anything about it.

Maybe our focus should be on raising awareness of the issues surrounding AI, both good and bad. The trouble with that thought is that we all know how much fake news there is around and how gullible great masses of the population can be. Brexit, in the UK, is a prime example. People choose to believe information that fits their own agenda, and for a lot of people the agenda seems to be the abdication of responsibility for their own lives that AI facilitates.

For instance, there are people in Russia right now who support Putin’s move to reinstate the Russian commonwealth, among them some who are aware of how he is setting about this. These people are not questioning his actions because, to them, the end justifies the means. I suspect a similar pattern may emerge if AI promises a “better life”.

The benefits of AI are unquestionable. I have frequently spoken of technology enabling us to do things we previously haven’t dreamt of. We can achieve great things in the shortest of time spans, with the most meagre investment and minimal effort. The trend is massively accelerating and we have become used to it. While the introduction of digital tech provided greater acceleration than a lot of people could keep up with, AI gives us that on steroids. Businesses that don’t embrace AI will simply fail, fast and dramatically, just as thousands have failed because they were too slow to adopt digital technology.

AI amplifies what I have called the “more for less economy”, and our laziness will allow it to thrive. The very vocal section of society who, however privileged they become, believe they are deprived, blame governments and events abroad and feel they have an inalienable right to the kind of help AI could provide.

AI’s promise to allow us to do literally nothing while reaping rewards our parents could only have marvelled at panders to that. The gullible will eagerly buy into the idea, like they bought into Boris Johnson’s Brexit battle bus, even if it’s an obvious lie. (They’ll also, incidentally, probably be the first to cry “foul” when the reality becomes inescapable.)

So, given that human nature is fuelling our demise, is self-destruction inevitable?

We can’t even establish a consensus on tackling climate change, so, even if we manage to slow AI development, it’s pretty certain that by the time any rules are drawn up we’ll be facing oblivion.

To change course we need consensus. I have worked with organisations around the world, helping them build brand communities whose people share values and beliefs and align them behind a single-minded objective. This is not only how you bring about business transformation, but what enables the operation of any organisation in the digital age.

Would the approach I have taken with my clients provide a solution to the AI challenge?

In theory the answer might be “yes”, but King Canute-like attempts to stem technological progress are probably unrealistic. For one thing, the number of people who would need to be convinced is far too great given the time we have available.

The probability is that AI will advance at its own pace whatever we do.

While we are familiar with older and less educated members of society being unable to keep up with digital tech, the AI competence cut-off will extend far further up the social scale. On one hand, AI may offer a leg up to those who are less capable, but, conversely, they won’t really be controlling it. Life might be easier for them, but only if they conform to pre-defined behaviour.

Only the most highly educated and informed will know how to influence AI and even fewer will actually be able to do so. This will create an elite upon whom we all rely to keep things under control. However, the control they have will diminish as AI becomes more self-sufficient.

Who decides who is in the driving seat? Initially it will be coders, yet whoever they take their instructions from will have no way of knowing for certain what is being written. If we can’t live with that reality, the alternative might be for AI to do the coding, but how long will it take the technology to realise it is being held back and start to be picky about the code it writes? There’s a ring of inevitability to the outcome of this.

I’m sure AI will be used for good and bad. It may be possible to create some kind of fundamental code that will prevent AI being used maliciously, but who decides what that will be? Anyway, someone will eventually find a way to overwrite it.

The whole AI thing raises more questions than we have answers. Will there be a single reality? Will AI factions be created, maybe inadvertently, by the original programmers? Will AI factions conflict? Will the original programmers be construed as gods by the ruling AI? Will this prompt wars between people who believe their programme is the only one that’s valid, or even between the platforms’ bots themselves?

This is why the more responsible of those pioneering AI right now want to put the brakes on, but how realistic is that? Maybe AI is already effectively in the driving seat and it’s only a matter of time before it takes full control.
