Embracing Spelling Erors as a Tool for Linguistic Solidarity
John C. Havens
Author, Heartificial Intelligence and Hacking Happiness. Director, IEEE Planet Positive 2030. Founding E.D. of IEEE AI Ethics program and IEEE 7000 Standards Series.
I used to write a lot for the tech publication Mashable. My last article for them was in 2016.
It was around that time that I updated my bio to say "human journalist," as you can see here.
I did that because I knew AI, even in its manifestations earlier than 2016, could already enable deepfakes of identity in various forms, one of them being to trick a reader into wondering whether an author was human or content was generated by AI.
Along these lines, I'm hoping to have the courage to start regularly keeping spelling errors in my communications and publications. My logic is that this will help a reader know that a human - in this case me - likely wrote whatever it is they're reading.
This is because, with hallucinations in GenAI, a person won't know which content put before them is erronenous (yes, on purpose) in the sense that said content didn't exist before and is false. Where I say "false," the fabricated content now exists, but it isn't an accurate or factual response to a person's query.
This is a big deal. Meaning, it matters to disclose where a human wrote something or where content is accurate. I'm always amazed how often companies producing tools that create hallucinations try to wave off this pervasive and constant result of all queries as "something that's being addressed and minimized," as if that should bring society and users comfort.
Where in fact telling someone, "any query in an LLM could result in hallucinations" is the same as saying, "every query in an LLM will result in responses you won't know are hallucinations or not." Which also means, "you can't trust any response provided by any LLM."
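For the technically inclined, here's a toy sketch of why that follows. The per-query rates below are assumptions for illustration, not measured figures, and the function is my own naming; the point is simply that when no response is labeled, even a small error rate compounds into near-certainty that something you've read is false.

```python
# Toy illustration with assumed hallucination rates (not measurements):
# if no response is labeled as erroneous, the chance that a session
# contains at least one undetected falsehood grows quickly.

def chance_of_hidden_error(p: float, n_queries: int) -> float:
    """Probability that at least one of n unlabeled responses is false,
    given an assumed per-query hallucination rate p."""
    return 1 - (1 - p) ** n_queries

for p in (0.01, 0.05, 0.20):
    print(f"assumed rate {p:.0%}: over 50 queries, "
          f"{chance_of_hidden_error(p, 50):.1%} chance of a hidden error")
```

Even at a generously low 1% rate, fifty queries leave roughly a two-in-five chance that you've absorbed at least one falsehood you had no way to flag.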
Audo Correct
Over the years there have been a lot of great articles about the misdirection and erasure that autocorrect can cause. I've always pictured autocorrect as a verbose but somewhat annoying friend at a party who interrupts you when you pause to look for a word: "Did you mean 'platonic'? Or maybe 'paternal'?"
While their intentions may be more than just showing off their vocabulary, my irritation comes from the interruption of the joy of searching for the right word to say. As a writer, this is akin to a chef looking for the right ingedient (yes, on purpose) for a meal, or to a musician (I play blues) lingering in a moment to see what music the muse will bring. I may know the song is shifting from the key of A to the key of D in a 12-bar blues, but that doesn't mean I have to play a certain note at a certain time.
Autocorrect can also be mendacious and a form of erasure. Who says that the way I start to type a word means I wanted to type the word the algorithm suggests? And when words are suggested, do I start using those words and forget the other ones I would have or could have used?
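To make that narrowing concrete, here's a toy sketch of the kind of matching an autocorrect-style suggester performs. The tiny vocabulary is my own assumption for the example; real systems use language models and keyboard layouts, but the effect is the same: the suggestions come from the algorithm's word list, not from whatever word you were still reaching for.

```python
# Toy autocorrect-style suggester (illustrative only): map whatever was
# typed onto the closest words in a fixed vocabulary. The vocabulary is
# assumed for this example; real autocorrect draws on far richer models.
import difflib

VOCABULARY = ["platonic", "paternal", "patent", "planet", "plate"]

def suggest(typed: str, n: int = 3) -> list[str]:
    """Return up to n vocabulary words the algorithm decides you 'meant'."""
    return difflib.get_close_matches(typed, VOCABULARY, n=n, cutoff=0.5)

print(suggest("platernal"))  # the algorithm's guesses - its words, not yours
```

Notice that the suggester can only ever answer from its own list; the word you were actually lingering over may not be in it at all.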
Now take that logic and apply it to queries in LLMs, and the situation is amplified in ways many people have written about before: where hallucinations could occur (meaning at any time, in any query) without genuine disclosure, then all information is hallucinatory in nature.
Note - the genuine disclosure bit is the critical point here. I'm not decrying LLMs as a sometimes useful tool where IP is protected and anthropomorphism is avoided (neither of which generally happens with most LLMs, along with the hallucinations, and we haven't even talked about the massive misuse of water and energy involved in all queries). But where it's not absolutely clear that any query typed could lead to a hallucination, then we as a society are back in the old riddle of someone saying, "I'm a compulsive liar," at which point you say, "wait - if he's a compulsive liar, then is he lying about saying he's a compulsive liar?"
Technically, the answer is "probably." Which in general makes most people assume a cerebral form of the fetal position or seek alcohol.
Rise Up and Revold!
For today my idea is simple, and I'll see how it goes as an experiment. I'm going to start inserting spelling errors in my communications, and I'll also disclose that I'm doing so. Here's my logic: a deliberate, disclosed spelling error signals that a human - in this case me - likely wrote what you're reading.
Also - hallucinations are errors. And the term "hallucination" is anthropomorphic by design. So where these things aren't made clear, then any LLM, AI, or tool whose creators say only, "we're fixing those errors, stop being jerks and talking about the errors," is the same as someone talking about a car they designed that blows up in two out of every five tests or actual uses on the road. If that car manufacturer said, "leave us alone, we're working on it," you'd say, "I prefer not riding in cars that explode," and you wouldn't buy the car.
And those car designers would spend more time testing before releasing those cars to the general public.
So for now, where it will work for you, I welcome anyone who wants to join in this experiment.
Use you're words "incorrectly" to show you're humanity! Make your "mistakes" BOWLED!
Because YOU are lovely. YOU have words you are allowed to form in your mind and consciousness before you share them with the world as a form of music and joy and identity.
I can't wait to here what you think.
Comments

Love the creative approach to embracing individuality in communication. It's refreshing to see a focus on the value of human expression.
Sr. Consult for AI / InfoSec Strategic Initiatives; secure SDLC; data protect; privacy; symbolic AI; OPA; ABAC; metadata governance; compliance; 12 yrs finance & defense sector InfoSec; sustainability CRISC CDPSE CSQE
3 mo · My vote is to be the Puerto Rican crested toad, which can provide early warnings for endangered ecosystems and offer medical benefits.
Program and product visionary, specializing in driving transformational change through clear direction, transparency, and highly motivated teams.
3 mo · To air is human!
Founder, Dolus Advisors | Human Decision-Making + Behavior Expert | NIST Collaborator | Forbes Contributor | Speaker
3 mo · John C. Havens I so appreciate your calling out this important topic. Hallucinations: my fingers are soar from typing this over and over but it's worth repeating: Humans are prone to hallucination. Computational systems are prone to errors. No matter what folks decide to call the mistakes machines make, we should all be clear that these are NOT hallucinations. My objection is not linguistic purity; terminology can cross-migrate between fields to positive effect. But the application of psychiatric/neurological phenomena to computational systems is outright incorrect, and also perpetuates a misnomer that these agents are human-like. No. People make mistakes for many reasons. Ultimately, those reasons matter. But "to err is human" is an astute comment on the human condition. It should not become a way to establish you are not a machine.
Registered Psychotherapist
3 mo · Emile van Bergen livit Belta!