What are we becoming?
The current state.
Large Language Models (LLMs), which power ChatGPT, Claude, Perplexity, Llama, and the rest, are trained on content found on the internet, and the results they produce are the sum of patterns found in those billions of pieces. It stands to reason, then, that the predictions or results of a prompt are a summarization of the internet.
Is that what we are becoming? The sum of the internet?
I recently wrote an article about how fly fishing is similar to product design. I started by asking ChatGPT to write four paragraphs for me - and it did a coherent job. I asked it about success and it wrote four more paragraphs on that alone. I copied the text from the window and pasted it into the LinkedIn article window... then I read it. Really read it, and I realized that while correct, it was boring. Maybe the word is clinical - I don't know. It definitely wasn't good. As an example, compare what it wrote to my favorite passage from David Coggins's book "The Believer".
David Coggins:
When we fish we imitate and disappear. Reflect what the fish eats and hide any trace of ourselves. We get closer to the natural world by trying to mirror cycles of life that have existed long before we have. Fishing is practiced and technical, yet out of our control, an act of submission. Not to a higher power, but to complexity of the natural world, where animals live, beside us. When I fish I feel leveled, penitent.
ChatGPT:
Attention to detail is a critical aspect of both fly fishing and product design. In fly fishing, the selection of the fly, the accuracy of the cast, and the subtle movements of the line all require precision. Small details can make a significant difference in whether a fish is attracted to the fly or not. Similarly, in product design, details such as the user interface, the choice of materials, and the ergonomics of a product can greatly impact its usability and appeal. Successful designers and anglers alike understand that meticulous attention to these finer points can enhance the overall effectiveness of their efforts.
You could replace fish with product and users in Coggins's passage and it would still work. One could argue that I didn't prompt ChatGPT to write like Coggins, which is true, but I also didn't prompt it to write with no appreciation for either fishing or products. "...meticulous attention to these finer points can enhance the overall effectiveness of their efforts." indeed.
Where are we heading?
Ten years ago I started to get concerned that the idealism of social media posts would cause a skewed anthropological understanding of my time. I still have that concern, but now it's not so much social media that worries me.
A recent article in Scientific American outlines a paper from Surfaces and Interfaces, a journal published by Elsevier, which found that 1% of scientific articles published in 2023 showed evidence of AI writing.
The most notable tells in AI writing are certain overused adverbs and adjectives.
If you're 'meticulous', you might notice something interesting when you compare the passage ChatGPT wrote for me with the findings of this study. What is worrisome is that these papers are destined to end up in the future training sets of AI models. We can also expect the study's findings to influence the training of AI and its engineers - resulting in a new set of overused, less precise words. This phenomenon is known as "model collapse", or as I like to call it, "the McDonald's effect": a lesser-known version of trained stupidity wherein pictograms of food items replace the numerical prices on cash registers.
So what now?
Stop cutting corners and accepting 'good enough', for one. If you are a company, don't allow processes to cut humans out of peer review. More specifically, don't let a single domain, like engineering, approve an experience. Create governance to ensure a humane outcome. Expect the pressures you are putting on your staff to lead them to use AI poorly; reflect on those pressures and improve top-down direction.
I also think one reason AI is stumbling is that it lacks knowledge of non-verbal inputs. For example, to write about the feeling of submission and penance, you have to have seen the look on other people's faces and the change in their posture when you are vulnerable. AI just can't do that yet, and it shows if you are looking.
Thankfully there are companies, like Archetype AI, working with non-verbal, non-image sensors to train models to understand more than words on a page. In the near term, that will help AI understand humans when they aren't talking or being photographed.
Creative technologists, sociologists, anthropologists, and psychologists will hopefully get involved by engaging the AI engineers who are steering the ships. Long-term, I hope we can create AI that has depth and understanding in ways that benefit us all.
Last summer during the writers' strike, one of the best picket line signs I saw said: ChatGPT doesn't have childhood trauma. When it comes to business writing we're most likely not drawing on our childhood trauma, but there's a very distinct difference between content that's been written by AI and content that's been written by a person with unique experiences and a point of view. As a writer, I have issues with LLMs beyond the above, though. The biggest one is the way they scrape content and the possible copyright violations that could eventually come into play. There have been a few interesting cases, especially surrounding fiction, over the past couple of years.
On the positive side, though, I got AI to create a word for this phenomenon: "Aibberish". Definition: Aibberish (noun) captures the phenomenon where the overwhelming presence of AI-generated text on the internet leads to a degradation of language, resulting in a flood of monotonous, formulaic, and uninspired content that lacks the richness and creativity of human expression. Let's get it trending!