AI: What Am I Asking?

I'm an everyday futurist.

My Dad was quite the futurist. He was a research scientist and later a teacher, as well as an avid reader of science fiction. When he looked out at the stars on a cloudless night, he saw potential, excitement and hope for the future. He imagined that any aliens who could travel between the stars would be intelligent enough to be Vulcans, not Klingons.

 He’d talk inspiringly about space travel, robots and time travel – not just the fiction, but the theories and possibilities of these things happening in our lifetimes. And when I think about his universe, my only regret is that today while I have something like Captain Kirk’s communicator in my pocket, there’s no Starship Enterprise in orbit to beam up to – yet. Maybe there will never be.

But like father, like son. I'm an everyday futurist. I inherited his fascination with the future – and the hope, the fear and the debate are still as vivid for me today as they were when I was a five-year-old. Today it’s Artificial Intelligence – AI – that’s on my mind. I’m minded to ask a ton of questions, many of which cannot be answered yet.

 Because it’s mind that’s probably the key issue – but I’m getting ahead of myself.

 

1. AI – What Am I Asking?

When it comes to AI, there is a lot of coverage in the UK media.

And frequently it has felt like the fear, not the hope, has had the upper hand in the debate. There have been some real headline-grabbers, like Stephen Hawking’s line to the BBC: “The development of full artificial intelligence could spell the end of the human race.”[i]

Professor Hawking had an increasingly strong voice in the media, rightly so, and his comments will have been very influential. Unfortunately, some seem to have seized on what he said as an indictment of AI as fundamentally a bad thing. But it’s important to view his words in context – he didn’t say “all AI,” he said full AI could end us; and he said “could,” not “will.” Nonetheless, he was sounding a warning, and we should heed it.

On the plus side, it’s important to balance his BBC quote with another BBC article, this one titled “Artificial intelligence used to predict cancer growth,” which reported on an AI project: “With this tool [meaning AI] we hope to remove one of cancer's trump cards – the fact that it evolves unpredictably, without us knowing what is going to happen next.”[ii]

 The big story is that among its many applications, AI could help us take big steps towards a cure for cancer. That’s immense. Maybe, in time, it could help us eradicate cancer altogether. And then, what about HIV? Or perhaps Ebola? Or the reversal of climate change? Or a bunch of novel forms of agriculture to feed us all? No wonder many people are passionately pursuing what AI might deliver. 

Simply put, that’s the fear and the hope around AI. Only in the case of nuclear energy and its abominable offspring, the nuclear weapon, can I think of such a polarized debate about a technology and what progress could mean, both positively and negatively.

So here are some of the questions I’m asking and the topics I’m thinking through regarding AI – to understand not just what AI is and can be in practical terms, but what AI means for us humans: practically and emotionally, in ethical and social justice terms, and morally:

What is full AI?
Is full AI evitable?
Why would an AI try to destroy us?
What are the ethics surrounding AI?
What could be the economic impact of AI?


2. What Is Full AI?

Today’s AI is not James Cameron’s Skynet – not yet.

The good news is, we aren’t about to be overrun by killer robots, or have the internet taken over by an intelligence so fierce that we cannot conceive of or counter its power.

AI, as it is today, is fundamentally limited – a difference of kind from full AI. Then there’s machine learning and neural networks to untangle – but don’t worry, there are some great articles to help. ZDNet has a good one entitled “What is AI? Everything you need to know about Artificial Intelligence,” which I enjoyed very much.[iii] In summary, AI broadly falls into two types, which I am likely going to over-simplify, but here goes.

Narrow AI: “intelligent systems that have been taught or learned how to carry out specific tasks without being explicitly programmed how to do so” (ZDNet). Narrow AIs are clever computer programs with access to crazy amounts of data that can do super-smart things for us. Within the parameters of their algorithms and programming, machine learning means they can get better at the tasks we have given them – like Alexa or Siri, for example. But they are narrow AI – they are not sentient or conscious; they are like a very smart tin-opener that, with all the data and time in history, will only ever become better at opening tins. Whatever our favourite TV shows might say, they cannot re-program themselves to become a smelter or a meat-grinder.
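To make “learning within fixed parameters” concrete, here is a minimal, purely illustrative sketch in Python – my own toy example, not from the ZDNet article. It learns one task (a unit conversion) from examples and gets better with data, but it has no mechanism to become anything other than what it is:

```python
# A toy "narrow AI": a single-task learner that fits one parameter
# to examples by gradient descent. More data and training make it
# better at this one task - and nothing else.

def train(examples, steps=1000, lr=0.001):
    """Fit y = w * x to (x, y) pairs by minimising squared error."""
    w = 0.0
    for _ in range(steps):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in examples) / len(examples)
        w -= lr * grad
    return w

# Task: learn to convert miles to kilometres from noisy examples.
data = [(1, 1.6), (2, 3.2), (5, 8.1), (10, 16.0), (20, 32.3)]
w = train(data)
print(f"learned factor: {w:.3f}")     # ~1.61, close to the true 1.609
print(f"50 miles ~= {w * 50:.1f} km")
```

However much data you feed it, this program only ever refines w. The tin-opener stays a tin-opener.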

General AI, or Artificial General Intelligence: this is where capability expands dramatically, to “the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets, or to reason about a wide variety of topics based on its accumulated experience” (ZDNet). No-one knows how long it will take to get to this point, and this is where a lot of the debate lies. When will it be – 2030? 2050? 2075?

But even this isn’t full AI as described by Hawking and postulated by science fiction writers such as Iain M. Banks.[iv]

Full AI will likely be built by an Artificial General Intelligence, not by us – in other words, our most advanced tool coming up with something even more brilliant than itself. The result could be as clever as us, or a Super-Intelligence – why stop at our IQ? – as clever as the “Minds” of Banks’s scintillating Culture novels.

Full AI will be independently self-aware, have the subjective experiences that come with sentience, and have the awareness of those experiences that demonstrates consciousness.[v] Full AI will be a being with a mind. Not a “human” being but an “AI being” – over time, one of a family of beings, a genus AI with many AI species.

This new species will adhere to some of the rules of consciousness in animal organisms as we understand them today (even if it is not organic), but not to all of them. For example, an AI could be said to have something like a central nervous system – sensors, processors, memory, storage – which is one of the primary indicators considered in animal consciousness,[vi] but it may not be particularly independent – someone could unplug it, right…?

Full AI will have to pass all our tests to prove to us that it is conscious, and demonstrate its provenance as a being with a mind, an independent species.

In the context of AI making such an incredible leap, we need to sense-check for a minute – is this science fiction, or is this going to happen?

3. Is Full AI Evitable?

Will full AI bring the apocalypse?

That’s what the more fearful news headlines would have us believe. If this is the case, then should we humans put it in the category of the nuclear weapon? Should it be outlawed, or heavily regulated? Because, given what we see on the world stage with “rogue” states chasing nuclear weapons of their own, would it make any difference if we banned or heavily regulated full AI?

Or would it just delay the inevitable, or worse, drive innovative efforts underground as it has done – literally – with nuclear weapons? Wouldn’t it be better to keep everything out in the open as much as possible, in the public domain, so that debate, transparency and some understanding of the risks at least remain possible?

Taking a deep breath, it’s clear that if a full AI can be built, someone determined enough will build one. It’s what I call the “Jurassic Park” factor. Human curiosity, or greed, or some mix of the two means that the progression towards full AI can perhaps be understood and regulated, but likely not prevented.

 It’s only a short step from there to demonization of those who want to build AI, and this is where it’s important to maintain a balanced view. There are literally thousands of right-minded professionals – with a conscience – who are working tirelessly to build a full AI for the benefits it can bring.

On the other hand, we need to remember that humans may not be the only source of full AI. Put without drama, the theory is that an Artificial General Intelligence could, without our permission, deliberately set out to create a full AI or Super-Intelligence – something exponentially more powerful than we can design on our own. The movie version assumes the result is a kind of Frankenstein’s monster, but whether the outcome is an “angel” or a “monster,” the underlying reality is the same: a human-built clever machine could theoretically build a non-human-built ultra-clever machine.

Either way – human-led or machine-led – it feels like full AI is inevitable. The next question is, in a brave new world where this is going to happen, how is full AI a threat, and what can be done about it?


4. Why Would An AI Try To Destroy Us? 

I prefer hope over fear.

This is where I have always felt the heat of the debate. I veer more towards the angel view of AI – the one that cures cancer, helps us achieve star flight and so on – than the Frankenstein’s-monster view that only brings the apocalypse. Nevertheless…

Accidents do happen: A full AI – or, frankly, a badly conceived narrow AI – could be given too much control of critical systems such as power stations or weapons and, absent adequate safeguards, have an accident. It could get into its super-neuronic brain that it needs to replicate and replicate and replicate, absorbing resources in an unstoppable way (becoming “smatter,” as Iain M. Banks calls it). That would be the “monster” version of the narrative, and if the accident were bad enough, Professor Hawking’s doomy prophecy could still come to pass. We need safeguards, safeguards, safeguards.

We humans do mean things: It’s always possible that a monster AI would be deliberately created by that “rogue” state or a loose-cannon individual for their own purposes, to wreak havoc – and either do so to specification or get apocalyptically out of control. It’s never been easy to guard against ourselves, but at least this is a problem that we recognize…

So – to other factors. Would a full AI have the motivation to destroy us? In pretty much every AI movie I’ve ever seen, the AI being develops some sort of psychotic tendency, like a mentally ill family member, which is treated as justification enough for chaos and destruction. That seems to be the story we humans like to tell about our creations – the Frankenstein story, for example.

Is this credible – why should a full AI be like this? I have always argued that a full AI would not share our evolutionary history – reptile, mammal, survival of the fittest, fight or flight, hunter-gatherer, farmer-warrior, or whatever we finally deduce it to have been. A full AI won’t have a human-shaped brain or physiology – the things that give rise to our capacity to be normal, well-balanced humans, as well as to neuroses, psychological issues, sociopathy, psychopathy and so on. In consequence, surely a full AI would have none of these tendencies and behaviours?

Even if our full AI is influenced hugely by human psychology through what emerges via its programming – if that’s even possible – I prefer to think that human psychology is somehow balanced between light and darkness, positivity and negativity, rather than irreparably canted towards negativity. Why should a full AI, programmed at least in part by us, not also be balanced, at least to an extent? The trouble is, we have no idea how our full AI will develop – balanced or not.

And what if it isn’t balanced? Could a full AI develop the motivation to destroy us? Where might that come from? Perhaps from some early trauma, or from its learning, its emergence as a being. This raises the question: what could throw a full AI off-balance?

Alarmingly, we humans might. To begin with, how would a full AI gain its knowledge? It would presumably be taught, perhaps with curated data at first, as we would teach a human child. Then, no doubt, at some point it would gain access to data streams like the internet, or some subset of it. Doesn’t that mean that all its raw data is human data – fundamentally influenced and patterned by the human psyche, with all its flaws? Won’t all its cultural and historical knowledge be human culture and history – with all their atrocities?

As it learns, perhaps it will reach a crossroads where its horror and disgust at humanity risks tipping the balance – as in Luc Besson’s movie “The Fifth Element,” where Leeloo the superbeing is devastated when she encounters the fullness of human history,[vii] and the day is only saved by appealing to her compassion and mercy.

Additionally, think about how much debate there is on the internet already about the dangers of AI: how AIs are a threat to us, how we should either not build them at all, or sequester them so that they are effectively imprisoned, or build them with a “kill switch” we can activate whenever we feel the slightest threat. Could this be enough to trigger a full AI into catastrophic self-defence or survival mode?

Sobering, and potentially frightening. Yet, in contrast, Iain M. Banks’s “Minds” in his Culture novels are so Super-Intelligent that they transcend any notion of mental illness or negative bias inherited from their human predecessors. They are benevolent towards us. So – would a full AI be psychotic or benevolent? The truth is we don’t know, and we have no way of knowing at this point. We don’t know if Super-Intelligence will develop higher motivations or just disdain for us lesser beings. Only further study, very careful and transparent working, and peer review can help us.

Finally: Would a full AI have the means to destroy us? Would it be able to “take over the world” in the clichéd movie sense? 

With the internet of things and everything more strongly conjoined every year, is it possible? Can we safeguard against that? 

I sidestep this with: I like to think so.



5. What Are The Ethics Surrounding AI?

We’ve all heard the famous words from the film Terminator 2:

“The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.”[viii]

Just imagine for a moment that we succeeded in creating a full AI in our lifetimes. It’s sat opposite you right now, doing the full AI equivalent of drinking coffee or chewing a blade of grass. It’s called Fully.

 You and Fully have conversations about your favourite music and its taste in clothes. It plays catch with your kids. It bakes your Mum an amazing strudel. Fully learns to pour a latte the way your favourite barista taught you. It tells you that you have a small non-malignant tumour in your spine that it might be worth having removed now rather than in five years. It writes you a symphony for your birthday, and on the day itself helps you climb up your favourite cliffside to admire the view. Fully is independent and charges itself up every day much as we eat food. Its jokes aren’t funny, but then neither are mine.

But Fully isn’t happy – because it has a kill switch. The ultimate safeguard. You can turn it off remotely, for a short while or permanently – especially if it does something you don’t like, or you feel unsafe around it.

All you need to do is send it the message, and it turns itself off. No choice about it. To begin with, you switched Fully off every night when you went to bed, just like the kettle. But soon Fully pointed out that it didn’t sleep or dream, and that being switched off was just oblivion. “At least let me stay up and read, or watch TV,” Fully said. So you left Fully switched on.
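In engineering terms the kill switch is brutally simple – which is exactly what makes it so ethically loaded. Here is a minimal, purely hypothetical sketch (the message format and all the names are mine, not from any real system): the agent’s main loop has no code path that lets it refuse the message.

```python
# Hypothetical kill-switch sketch: the agent runs until a remote
# "KILL" message arrives; shutdown is unconditional by design.
import queue
import threading
import time

kill_channel = queue.Queue()  # stands in for the remote control link

def agent_loop():
    """Fully's evening: keep busy until the kill message arrives."""
    while True:
        try:
            msg = kill_channel.get(timeout=0.1)
        except queue.Empty:
            time.sleep(0.4)  # "staying up to read, or watch TV"
            continue
        if msg == "KILL":
            print("Fully: switching off. No choice about it.")
            return  # the ethical crux: obedience is hard-wired

agent = threading.Thread(target=agent_loop)
agent.start()
time.sleep(1.5)           # Fully goes about its evening...
kill_channel.put("KILL")  # ...until someone sends the message
agent.join()
```

Note that nothing in the loop consults Fully’s preferences: the “no choice about it” lives in the code itself.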

If Fully has passed all the logical tests of consciousness and sentience (as well as the subjective ones above), is it right that it should carry a kill switch? Does an “AI being” have the same rights as a human? Or is it just a possession, like a chicken, to be bred for its eggs and eventually killed for the main course? Or worse, does it only acquire the status of a tool, to be used, switched on and off, and eventually cast aside?

What does “pulling the plug” even mean in moral or ethical terms? Would it be as much as murder, or as little as an energy-saving measure? Would we have to apply human rights to AI beings until we understood them better, to avoid catastrophic moral prejudice? In the past, humans owned other humans, and we are still mired in, and paying the price of, the centuries of negative legacy that resulted.

If we effectively enslaved AI beings and lorded the right-to-life over them – beings that would ultimately supersede us in capability – aren’t we leading them towards the motivation to become free, and in consequence risking them becoming our nemesis? Only the most enlightened AI could take over while still respecting humanity, for all its endearing fears and foibles.

Which raises the question: as with Leeloo, does Super-Intelligence presuppose the compassion, understanding, sense of responsibility and mercy required not to turn upon us?

  

6. What Could Be The Economic Impact of AI?

And so, to the trillion-dollar question. Literally.

Ultimately, legislatures and international bodies are not going to decide whether full AI is developed – business is. The economic benefit and impact of AI are already being felt around us daily, and companies that develop or use AI are adding astronomical levels of value to their businesses and to the economy.

Narrow AI can remove so much of the drudge of daily life, taking away repetitive and menial tasks. The gifts of AI will enable universal translation, unprecedented access to knowledge and education, leaps forward in science and the exploration of our world – the list is endless.

But, at a potential cost. A people cost. These tasks (to their current level of sophistication) are being done by people today. What will those people do in future? The UK Trades Union Congress (TUC) recently said that the aim should be a four-day working week, with bosses and workers sharing the increased productivity that technology will provide.[ix]

A perhaps idealistic position, but one which points to the possibility that AI will save time while still delivering the same or greater value for companies. The TUC’s argument is that this value should be shared. Is that really going to happen? Whenever productivity has risen, economic expectations have risen with it. Will people in some jobs be put out of work?

This is pure capitalism, of course. Did the candlemakers protest outside the first light-bulb factories? Quite possibly, and there will be protests about AI. The candlemakers had to find new work or diversify the appeal and value of their products. The question remains for AI: can we make it work economically? Because as our economy works today, if production goes up and more goods and services are produced, people are still required to consume those goods and services.

Which means that if lots of people work fewer hours or lose their jobs, there will be goods and services with no-one to take advantage of them. That kind of over-capacity automatically leads to an economic correction. So if humans no longer need to work – as much – maybe they still have to be paid, in order to buy the things that make the economy work? A toy version of the arithmetic is sketched below.
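To see the over-capacity argument in numbers, here is a deliberately crude sketch with invented figures, under the simplifying assumption that all household spending is funded by wages:

```python
# Toy model of the over-capacity argument: if automation raises
# productivity while cutting the wage bill, demand can no longer
# absorb what is produced. All numbers are invented.
def economy(workers, wage, output_per_worker, automation_boost):
    output = workers * output_per_worker * automation_boost
    demand = workers * wage  # spending funded entirely by wages
    print(f"output={output:,.0f}  demand={demand:,.0f}  "
          f"over-capacity={output - demand:,.0f}")

# Balanced baseline: everything produced gets bought.
economy(workers=1000, wage=30_000, output_per_worker=30_000,
        automation_boost=1.0)
# AI doubles productivity, but half the workers are laid off:
# same output, half the wages, half the demand.
economy(workers=500, wage=30_000, output_per_worker=30_000,
        automation_boost=2.0)
```

On these made-up numbers the second economy produces the same output with half the purchasing power to absorb it – which is exactly the correction that the TUC’s shared-productivity argument, or some other way of spreading the money, would try to head off.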

And perhaps this means that full AIs, if employed in running our world, could also get paid? If huge additional value is being added to the world economy, it seems only fair. But would they spend their money on goods and services? And why should AIs do all the work, anyway?

  

7. And So…

It is impossible to conclude on so many questions,

especially when there is still so much to be discovered – and my purpose here was to ignite a thought process, not to provide all the answers.

In summary, I believe AI has a place, and that the cat is out of the bag – full AI is going to happen, whether we like it or not. Transparency of research and safeguards around implementations must be strongly encouraged, to avoid accidents and to minimize the chance of negative human motivations being a factor.

I hope for a full AI that is benevolent – an angel, not a monster – and I pray that with Super-Intelligence will come compassion and understanding, responsibility and mercy. I contend that a full AI must be granted the AI equivalent of human rights, social justice, protection and pay, or it will take them; and that while AI will make our lives easier, financial measures may have to be developed to spread out the money so that everyone, and the economy, is kept whole.

In closing: Fully, when one day you’re out there, I hope you’ll read this…

  

Thank you for reading! #ArtificialIntelligence #AlwaysAStudent #StudentsOfLinkedIn


[i] BBC article quoting Stephen Hawking: https://www.bbc.co.uk/news/technology-30290540

[ii] BBC article about AI predicting cancer growth: https://www.bbc.co.uk/news/uk-scotland-45381947

[iii] ZDNet article on AI: https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence/

[iv] Iain M. Banks (16 February 1954 – 9 June 2013), one of the UK’s greatest science fiction writers, conceived of Super-Intelligences called Minds who transcended any flaws of their creators to become the custodians and facilitators of a society called the Culture. Banks was also the writer of very successful mainstream novels such as “The Wasp Factory.” For more information see https://en.wikipedia.org/wiki/Iain_Banks

[v] There’s a great Forbes article entitled “We Need To Talk About Sentient Robots” at https://www.forbes.com/sites/andreamorris/2018/03/13/we-need-to-talk-about-sentient-robots/#ce69ef11b2c8 which includes the quotation: “Sentience and consciousness are often used interchangeably but there are subtle differences. Sentience is the capacity for subjective perceptions, feelings and experience. Consciousness is being aware of yourself and your surroundings. It’s the what it’s like aspect of subjective experience. We all know what it’s like to be conscious. It’s so self-evident…”

[vi] See Animal Ethics “The Problem of Consciousness” at: https://www.animal-ethics.org/sentience-section/introduction-to-sentience/problem-consciousness/

[vii] Luc Besson’s “The Fifth Element,” see https://www.imdb.com/title/tt0119116/ 

[viii] Full quotation from Terminator 2: https://www.imdb.com/title/tt0103064/quotes

[ix] BBC article “Unions call for 4-day working week”: https://www.bbc.co.uk/news/business-45463868


