Trust me, I’m an AI

How well do you trust your own memory?

How sure are you that the childhood events that defined your sense of self and shaped your adult issues happened just as you recall them?

Science says that most of us invent, or at least "alter", almost as many memories as we store correctly. Nor is this phenomenon exclusive to individual memory. Collective memory hallucinations, while rare, are far from unknown. One of the most famous examples is the so-called "Mandela Effect", which refers to the many thousands of people who have publicly stated that they remember Nelson Mandela dying in prison in the 1980s, and who are therefore convinced that the man who became president of the new democratic South Africa was an imposter.

As plainly ridiculous as conspiracy theories such as the Mandela Effect are, these collective hallucinations sometimes have absolutely devastating results. Consider the documented court cases involving allegations of childhood abuse, where children were found to have had memories implanted, or to use the more correct legal term, "tainted by suggestion", by well-meaning social workers (and less well-meaning lawyers), of abuse that never in fact occurred. More disturbing, of course, is to realise that even false memories of abuse can cause both the supposed victim and the accused lifelong, real psychological harm; just because it didn't really happen doesn't mean it didn't really hurt you. Believing you have been abused may cause real trauma and require real counselling to overcome. Likewise, being convicted in the court of public opinion, or even in a court of law, on the basis of witness testimony recalling false memories can still land you in jail, out of a job, or on a sex offenders list.

And that brings us to ChatGPT, and to how, when it comes to making up the past (and compiling the "evidence" to prove our versions of it), humans and robots have much in common.

You see, ChatGPT has been having some (very human) hallucinations. Several academic colleagues of mine have been asked about papers cited by ChatGPT, bearing their names and written on subjects in which they have expertise - papers that simply do not exist. Global news companies have discovered that conversational chatbots have cited headlines, and even written full articles, using their brand names and authored "by" real journalists working at their publications - articles that never existed until ChatGPT hallucinated them and presented them to its users as fact.

One professor found that ChatGPT had not only invented a sexual misconduct allegation against him but had also helpfully provided the "primary evidence" to support its spurious claims - in the form of a seemingly credible Washington Post article (by a real staff columnist, no less) that the artificial "intelligence" also made up all by itself. The trouble with such allegations, of course, in the age of the #MeToo movement and our social responsibility to "believe all victims", is that once the idea of your guilt has entered the public consciousness, it is very difficult, even impossible, to clear your name; even if you have never been convicted in a court of law, the stench of such accusations lingers around your personal reputation. And how can you sue a bot for libel? How can you get a take-down order for an inaccurate, personally damaging article about you that doesn't even exist? How can you convince anyone that you are innocent and that their version of "the truth", verified by the omniscient machine gods, is the wrong one?

Well, we are about to find out. In the meantime, maybe don't trust the machines any more than you trust your own fallible memory.

https://www.sciencealert.com/your-brain-can-create-a-false-memory-quicker-than-you-think

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6818307/

https://en.wikipedia.org/wiki/False_memory#Mandela_effect

By Bronwyn Williams; commissioned and published by ITWeb Brainstorm South Africa magazine.
