Embracing Spelling Erors as a Tool for Linguistic Solidarity
From Pinterest: https://www.pinterest.com/pin/63543044717583311/


I used to write a lot for the tech publication, Mashable. My last article for them was in 2016.

It was around that time that I updated my bio to say, "human journalist," as you can see here.

I did that because I knew AI, even in manifestations earlier than 2016, could already allow for deepfakes of identity in various forms, one of these being to potentially trick a reader into wondering whether an author was human or whether content was generated by AI.

Along these lines, I'm hoping to have the courage to start regularly keeping spelling errors in my communications and publications. My logic is that this will help a reader know that it is likely a human - in this case me - wrote whatever it is they're reading.

This is because, with hallucinations in GenAI, a person won't know which content put before them is erronenous (yes, on purpose) in the sense that said content didn't exist before and is false. Where I say "false," the fake content now exists, but it isn't an accurate or factual response to a person's query.

This is a big deal - meaning, disclosing where a human wrote something or where content is accurate. I'm always amazed at how often companies producing tools that create hallucinations try to wave off this pervasive and constant result of all queries as "something that's being addressed and minimized," as if that should bring society and users comfort.

Where in fact telling someone, "any query in an LLM could result in hallucinations" is the same as saying, "every query in an LLM will result in responses you won't know are hallucinations or not." Which also means, "you can't trust the response to any query provided to any LLM."

Audo Correct

Over the years there have been a lot of great articles about the misdirection and erasure that autocorrect can cause. I've always pictured autocorrect as a verbose but somewhat annoying friend at a party who interrupts you when you pause to look for a word: "Did you mean 'platonic?' Or maybe, 'paternal'?"

While their intentions may not only be to show off their vocabulary, my irritation comes from the interruption of the joy of searching for the right word to say. As a writer, this is akin to a chef looking for the right ingedient (yes, on purpose) for a meal, or, as a musician (I play blues), to lingering in a moment to see what music the muse will bring. I may know the song is shifting from the key of A to the key of D in a 12-bar blues, but that doesn't mean I have to play a certain note at a certain time.

Autocorrect can also be mendacious and a form of erasure. Who says that the way I start to type a word means I wanted to type the word the algorithm suggests? And what about when the words that are suggested mean I start using those words and forget the other ones I would have or could have used?

Now take that logic and think about it with queries in LLMs, and the situation is amplified in ways many people have written about before: where hallucinations could occur (meaning all the time, at all times) without genuine disclosure, then all information is hallucinatory in nature.

Note - the genuine disclosure bit is the critical point here. I'm not decrying LLMs as a sometimes useful tool where IP is protected and anthropomorphism is avoided (and both of those things don't generally happen with most LLMs, along with hallucinations, and we haven't even talked about the massive misuse of water and energy involved in all queries). But where it's not absolutely clear that any query typed could lead to a hallucination, then we as a society are back in the old riddle of someone saying, "I'm a compulsive liar," and then at some point you say, "wait - if he's a compulsive liar, then is he lying about saying he's a compulsive liar?"

Technically, the answer is "probably." Which in general makes most people assume a cerebral form of the fetal position or seek alcohol.

Rise Up and Revold!

For today my idea is simple and I'll see how it goes as an experiment. I'm going to start inserting spelling errors in my communications where I'll also disclose I'm doing so. Here's my logic:

  • My spelling errors will indicate I'm a human being, at least until the point someone invents an algorithm to make spelling errors on purpose to look like a human being (a sketch of how easy that would be follows this list).
  • By disclosing I'm including spelling errors on purpose, people know I didn't just forget to check for spelling errors. So I can still be considered "businesslike and professional" in my communications.
  • By choosing which words I misspell on purpose, I engender potential conversation about specific words. In this way I'll get to discuss my favorite words with people.
  • I believe this will help me have more conversations with people who don't speak English as their first language and will help me learn more about them, their words, and their cultures. This is a hope.
  • I get to try and remind people that the "errors" that remind them they are human, and that others are human, are a gift. For one thing, you get to help someone else feel smart when they say (as long as they're not rude about it), "Did you mean 'there' instead of 'their'?"
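
To show how low that bar already is, here's a rough sketch of what such an algorithm could look like. This is purely a hypothetical illustration of mine in Python - a made-up helper I'm calling add_typo, not any real product or library - that swaps two adjacent letters in one randomly chosen word. Which is exactly why the typo alone proves nothing; the disclosure is what carries the meaning.

```python
import random

def add_typo(text, seed=None):
    """Return text with one 'human-looking' typo: two adjacent letters
    swapped in a randomly chosen word of four or more letters.
    (Hypothetical sketch for illustration only.)"""
    rng = random.Random(seed)
    words = text.split()
    # Only consider words long enough to stay readable after the swap.
    candidates = [i for i, w in enumerate(words) if len(w) >= 4 and w.isalpha()]
    if not candidates:
        return text  # nothing safe to misspell, so leave the text alone
    i = rng.choice(candidates)
    w = words[i]
    j = rng.randrange(len(w) - 1)  # position where two neighboring letters get swapped
    words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)

print(add_typo("Embracing spelling errors as a tool for linguistic solidarity", seed=7))
```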

Also - hallucinations are errors. And the term "hallucination" is anthropomorphic by design. So where these things aren't made clear, then any LLM, AI, or tool whose creators say only, "we're fixing those errors, stop being jerks and talking about the errors" is the same as someone talking about a car they designed that blows up in two out of every five tests or actual uses on the road. If that car manufacturer said, "leave us alone, we're working on it," you'd say, "I prefer not riding in cars that explode," and you wouldn't buy the car.

And those car designers would spend more time testing before releasing those cars to the general public.

So for now, where it will work for you, I welcome anyone who wants to join in this experiment.

Use you're words "incorrectly" to show you're humanity! Make your "mistakes" BOWLED!

Because YOU are lovely. YOU have words you are allowed to form in your mind and consciousness before you share them with the world as a form of music and joy and identity.

I can't wait to here what you think.


Love the creative approach to embracing individuality in communication. It's refreshing to see a focus on the value of human expression.

Mark Underwood

Sr. Consult for AI / InfoSec Strategic Initiatives; secure SDLC; data protect; privacy; symbolic AI; OPA; ABAC; metadata governance; compliance; 12 yrs finance & defense sector InfoSec; sustainability CRISC CDPSE CSQE

3 mo

My vote is to be the Puerto Rican crested toad, which can provide early warnings for endangered ecosystems and offer medical benefits.

John Day

Program and product visionary, specializing in driving transformational change through clear direction, transparency, and highly motivated teams.

3 mo

To air is human!

Dr. Alexander Stein

Founder, Dolus Advisors | Human Decision-Making + Behavior Expert | NIST Collaborator | Forbes Contributor | Speaker

3 mo

John C. Havens I so appreciate your calling out this important topic. Hallucinations: my fingers are soar from typing this over and over but it’s worth repeating: Humans are prone to hallucination. Computational systems are prone to errors. No matter what folks decide to call the mistakes machines make, we should all be clear that these are NOT hallucinations. My objection is not linguistic purity; terminology can cross-migrate between fields to positive effect. But the application of psychiatric/neurological phenomena to computational systems is outright incorrect, and also perpetuates a misnomer that these agents are human-like. No. People make mistakes for many reasons. Ultimately, those reasons matter. But ‘to err is human’ is an astute comment on the human condition. It should not become a way to establish you are not a machine.

Kate Caldwell

Registered Psychotherapist

3 mo

Emile van Bergen livit Belta!
