Article of fAIth (a perspective)
Distorted reality - the late Queen Elizabeth in conversation with the author

AI (Artificial Intelligence) increasingly permeates our lives today, sometimes in exciting ways like medicine, sometimes in less obvious ways: hidden in an electric toothbrush, disguised in a suggested word, or arranging emotive family photo sequences to music using facial recognition. AI has been a long time coming, but it was brought into our consciousness in the last year by ChatGPT, the most publicly used of a number of AI systems now available, which are reviving historical fears about cyber awareness. Hardly a day seems to go by without some new AI miracle popping out of the air, sometimes giving the impression of “an other” consciousness. But ChatGPT is far from conscious - it is a Large Language Model (LLM), which uses layered neural networks to mimic the way a human brain works. A deep learning algorithm converts words into numbers, compiles a range of likely outcomes, then spits out an artificial best guess which, when converted back into language, gives the impression of intelligence. To most of us, that is artificial intelligence right there - but does an intelligible outcome equate to intelligence?
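For readers who want to see the shape of that loop, here is a minimal, purely illustrative sketch: a toy word-pair frequency model with an invented mini-corpus, nothing like the scale or architecture of a real LLM, but it follows the same steps described above - words become numbers, likely outcomes are compiled, and a best guess is converted back into language.

```python
# Toy illustration only (not how ChatGPT is built): a bigram frequency model.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()  # invented mini-corpus

# 1. Convert words into numbers (a simple vocabulary index).
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
inverse_vocab = {i: word for word, i in vocab.items()}

# 2. Compile a range of likely outcomes: count which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[vocab[current]][vocab[nxt]] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, converted back into language."""
    counts = follows[vocab[word]]
    total = sum(counts.values())
    probabilities = {idx: n / total for idx, n in counts.items()}
    best = max(probabilities, key=probabilities.get)  # 3. Spit out the best guess.
    return inverse_vocab[best]

print(predict_next("the"))  # -> "cat": an intelligible outcome, but is it intelligence?
```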

Context

In an era when existential threats seem to compete for our attention (whether through climate change, Mutually Assured Destruction (v2.0) or the next virus), AI has barely registered as an existential contender, because fear competes with genuine excitement about its potential benefits. In other respects, AI is like every other threat in polarising opinion between terror and evangelism. COVID showed us that humanity can get over itself and collaborate effectively to re-determine the outcome. But unlike COVID, AI means different things to different people. The CEOs of the big tech companies tell us that, far from being a threat, AI can resolve our planetary issues for us. They even anthropomorphise AI, suggesting it sometimes “hallucinates” - AI can no more “hallucinate” than an abacus can (see below). The threat is much simpler and has been understood for centuries: a superior intelligence with a different set of priorities decides that the human race is a threat to its existence - BOOM. We have no idea how to prevent that from happening, partly because the threat has yet to be framed in a language we can all understand.

“…a superior intelligence with a different set of priorities decides that the human race is a threat to its existence - BOOM…”

ChatGPT and its AI rivals are impressive, and not just because of their speed and efficiency. They impersonate intelligence and present us with a mirror, but they lack the capacity to experience sensations and have feelings about them in a sentient way. What we see in the mirror looks like a distortion of ourselves, which is deeply troubling to us. In a post-truth world where appearances are valued more highly than reality, what we see is deeply impressive as well as troubling. But apart from some suggestions from Isaac Asimov and Alan Turing, how do we know when simulated intelligence becomes real, independent, sentient intelligence (or AGI - Artificial General Intelligence)? When many who currently work in AI say they don't understand how their own creations work, how will we ever know?

Human perceptions (a human perspective)

Sensory receptors feed information to all living organisms, providing often narrow glimpses of the world around us. Sensory perception evolved to provoke a response (through sensation), allowing any organism to react reflexively to its ever-changing physical environment. A reflex is an automatic, involuntary response to a stimulus, offering little optionality. If an animal has the capacity to be reflective (to make a qualitative or quantitative judgment about the information received), it must possess a mental model, or perception, of the environment it inhabits, in order to scope optionality (free will). In effect, animals create their own internal reality, a mental model, which is a recreation of the physical environment they inhabit. The robustness of that mental model to the individual is strengthened recursively by repeated mundane personal experiences, punctuated by heightened sensations that become memorable events, perpetually updating the model. Belief in our model is endorsed and verified by the observation that all other animals inhabiting the same reality respond to the same information in recognisable ways and can respond to each other on the same terms. Even though collected individual experiences are unique to each animal, they very quickly converge on a similar perception of reality, particularly if they have shared experiences, a powerful endorsement or validation.

“…conditions like autism or dyslexia are in themselves gifts to humanity because they enrich the shared experience for all of us…”

Humans are socially connected because we share very similar views of reality, mostly because our sensory organs operate in almost exactly the same way and because we can socially share different perspectives. Sometimes we process that information differently, through conditions like autism, dyslexia or bipolar disorder, which are in themselves gifts to humanity because they enrich the shared experience for all of us. Humans have developed the means to intervene in the physical world and shape that reality, which may be perceived as a personal gain, rewarding us with positive sensations or pleasure. Conversely, if the same intervention degrades that reality, we may experience negative sensations such as pain or the perception of loss. In extreme cases, loss or gain may be life-enhancing, life-degrading or even life-threatening, which makes our judgments accountable. Shaping reality has real-world consequences, of which all sentient organisms are aware, albeit to varying degrees.

“For the love of all that is human…”

At the moment, AI is able to integrate almost all versions of human reality by constructing a statistical model that covers the full range of recorded experience, real and imagined. Cloud-based data storage has facilitated historically unprecedented sharing of human experience, which AI compiles efficiently, sometimes indiscriminately but always derivatively. AI uses recursive deep learning to present novel combinations of reality, some of which may have the appearance of originality. The "G" in ChatGPT stands for "Generative", but it is not generative in a human way, because ChatGPT is not creating its own reality based on its own sensations or feelings (in response to sensory perception). Without that perception of reality, AI cannot be said to “hallucinate”, which is itself a distortion of perception. AI can only recycle the experiences of sentient humans in novel combinations. Without consciousness, AI is just copying; neither creative nor visionary.

“…recursive self-improvement is perceptual, because who (apart from the Buddha) defines the path to self-improvement?”

And yet the AI of the future will have access to many more sensory receptors than we have, so it should be capable of creating a much richer reality with independent observations of its own. At what point does such a wealth of mutually reinforcing sensory perceptions become awareness? Even the computer-tech expression “recursive self-improvement” is perceptual, because who (apart from the Buddha) defines the path to “self-improvement”? We agonise about whose purpose that may serve, but perhaps it is the absence of direction that scares people the most.

AI is, for now, our creation, with perhaps the potential to tell us more about ourselves. In the last few decades, we have learned that our memories are only as good as the last time we recalled them, so even they aren’t real. Are we then just flawed algorithms, with a bit of bio-randomness thrown in (individual free will)? AI is already teaching us how to unlock our full potential, for instance by accessing analogous case histories from vaults filled with verified data. Humans, however, learn and grow by making mistakes, which is a fundamental difference that separates us from AI.

Humanity - a footnote in the geological record?

Humanity aspires to love, and to display kindness with humility and gratitude, because we suspect that we, as individuals, are not who we aspire to be as a species. Or do we suspect that we, as a species, are not who we aspire to be as individuals? It turns out that what we are most afraid of is ourselves. That is the problem when feelings manifest themselves at the heart of a discussion, a consequence of the human condition outlined above.

AI has huge potential for good, but we have also seen recently how LLMs can fabricate knowledge just as effectively as humans can. The difference is that humans possess a set of values for survival, a codified compass for our environment, which AI lacks. The key to successful AI and human endeavour is partnership, which must guide our future, with each keeping the other honest through shared goals. To lose that connection and release control would be to abrogate responsibility for our own future to our creation. Doing so would not be the first time for humanity, but it could be the last unless we can agree on a common purpose.

We have already witnessed what happens to future generations when big tech controls content on the internet, for instance. There is currently no oversight of AI, as we rush headlong towards a "finish line" with no understanding of what lies beyond. AI needs to be controlled, both in the way it is used and in the way it is allowed to develop. It is time now for humanity to take control of its own future, with one voice. Is that voice yours?

Acknowledgments

My thanks to Nigel Stanbury, Steven Bloemendaal, Richard Barrett and Dione Venables for keeping me human.

Comments

Allan Scardina (1y):

In my opinion, unless AI-generated material is 'watermarked' as such, we are in for real trouble - especially concerning politics. We have already seen, from Brexit to Trump to anti-vaxxers and now in the Russian invasion of Ukraine, massive dis/mis-information campaigns that will now become harder to distinguish from reality with AI. And that is just among the people who care to look. 80% (?) of people don't bother to look past the headlines. Fewer still know how to spot fakes, even when they do dig a bit deeper. And some of them will see some self-serving aspect that will cause them to champion/repost positions that they know are incorrect or at least misleading. Remember, the best lies include some element of truth. AI is ideally suited for merging fake and fact. Strap in. It is going to be a bumpy ride.

Joshua Turner (1y):

Did you get ChatGPT to help in writing that?
