ChatGPT beware: How to spot AI-generated text
Credit: The Author, using OpenAI


There are several key differences between text written by a human and text generated by a machine learning model like ChatGPT. Here are three of the most important ones:

  1. Style: Human-written text tends to have a more natural and varied style, with a wide range of sentence structures and word choices. In contrast, machine-generated text is often more repetitive and formulaic, using the same structures and words repeatedly. This is because machine learning models are trained on large amounts of data, and they often generate text that reflects the patterns they have learned from this data.
  2. Grammar and spelling: Human-written text is generally more polished and free of errors, while machine-generated text may contain grammatical errors or misspellings. This is because humans have a deep understanding of language and are able to self-correct and revise their writing, while machines simply output the text that is most likely based on the data they have been trained on.
  3. Content: Human-written text is often more diverse and creative, covering a wide range of topics and ideas. In contrast, machine-generated text is usually more focused and specific, addressing a limited set of topics and ideas. This is because machine learning models are trained to generate text that is similar to the data they have been trained on, and they may struggle to produce text on topics that are outside of their training data.

Overall, the best way to differentiate text written by a human from text generated by a machine learning model is to look for these key differences in style, grammar, and content. By paying attention to these factors, it is often possible to tell whether text has been written by a human or generated by a machine.


OK, you might have already suspected it; here comes the unsurprising bummer:

Source: ChatGPT, asking for the best ways to differentiate text written by a human from that written by ChatGPT
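
Purely for illustration, ChatGPT's own "style" point above can be turned into numbers. Below is a minimal Python sketch (standard library only, no external detector) that computes sentence-length variation, repeated three-word phrases and vocabulary richness for a passage of text. The function name, the chosen statistics and the sample text are my own assumptions; low variation, many repeated phrases or a small vocabulary are at best weak hints of "repetitive and formulaic" writing, nowhere near a reliable AI detector.

import re
from collections import Counter
from statistics import mean, pstdev

def style_stats(text: str) -> dict:
    """Return crude stylistic statistics for a passage of text (illustration only)."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    trigrams = Counter(zip(words, words[1:], words[2:]))
    repeated = sum(count for count in trigrams.values() if count > 1)
    return {
        # Low variation in sentence length can hint at formulaic writing.
        "sentence_len_stdev": pstdev(lengths) if lengths else 0.0,
        "mean_sentence_len": mean(lengths) if lengths else 0.0,
        # Many repeated three-word phrases can hint at repetitive phrasing.
        "repeated_trigram_rate": repeated / max(len(words), 1),
        # A low share of unique words can hint at a limited vocabulary.
        "type_token_ratio": len(set(words)) / max(len(words), 1),
    }

if __name__ == "__main__":
    sample = "This is only a short sample. It merely shows how the statistics are computed."
    print(style_stats(sample))

Any thresholds on these numbers would have to be calibrated on texts known to be human- or machine-written, and even then they only operationalise the surface-level differences ChatGPT itself listed, which is rather the point of what follows.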

OK, now here is the actual problem: none of what ChatGPT has come up with is technically wrong. It is just nowhere close to what essentially differentiates text written by generative AI from human-written text. In fact, being able to differentiate between human- and machine-written content requires us to remember what makes us human in the first place, what only we humans have and what robots phenomenologically lack. Here is my short list of candidates, but I am very much hoping for feedback, comments and critique to improve and/or amend it, so that we humans have a fighting chance to actually spot machine-written content and hence appreciate the human effort even more:

  1. Errare humanum est: If we can believe Seneca, then making mistakes is uniquely human. This is not to say that generative AI models don't err. It is just that our human errors have something specifically human about them. They are nothing like the super-polished, always correctly spelled, generic "bullshittery" that ChatGPT spits out; they truly reflect those highly biased, evolutionarily formed and hence often opinionated and distractible brains of ours. This unfortunately also means that our human writing is much more often riddled with misspellings, grammatical errors, logical flaws and other rather obvious bumps on the linguistic and semantic road to perfection.
  2. Real-world experience: AI-written text is generated by an essentially non-intelligent, bodiless, radically inexperienced entity divorced from the real world, because AI only "exists" in an abstract corpus of linguistic data, devoid of any possibility of real-world feedback. This is why a lot of AI-generated text feels so generic: it only makes sense "syllogistically", lacking all the depth and colour that comes from the real-world experience humans have to go through during their lives.
  3. Childish confabulations: Generative AI is similar to a child in that it is eager to please, to deliver what it has been asked for, even if that does not make any sense and means simply confabulating and making things up. In this way it is not dissimilar to certain characteristics of (specifically juvenile and senile) brains, in which memory and sometimes the capacity for critical reflection fail us, leading our brains to simply "spin stories" that only superficially make sense.
  4. Emotional flatness: Another important differentiator is the shallow level of emotionality of AI-generated text, which most probably also has to do with some general parameters guiding the inner workings and articulation skills of ChatGPT. In fact, I have yet to see ChatGPT-generated output containing really bad swearwords, cursing or emotionally laden argumentative speech. Maybe it is actually good that this type of output is algorithmically precluded, because otherwise these systems would surely be ill-fated and short-lived, as we saw with early-retired AI chat systems like Microsoft's Tay. Nonetheless, AI-generated text generally misses the authenticity and depth of emotion that relies on actually having human emotions in the first place, delivering instead a shallow imitation of a human-like emotional palette.
  5. No conviction or ethics: ChatGPT and similar generative AI systems also do not develop any real opinion on the matters they write about. In fact, they can't. They rather excel at having no conviction at all, even changing any incidentally assumed position the second you question their output. More importantly, beyond lacking conviction they also lack any ethical position, as all the knowledge they can access does not amount to actual ethical guidelines.


Now here comes the real question and the critical consequence of the above: what are we to do with the "human" parts within ourselves, which have been busy doing all of this fancy "wordsmithing" that has now suddenly been replaced and made obsolete by a few robots? Who are we, if not "linguistic" animals, animals who love to put themselves at the very top of the imagined "pyramid" of all living beings because of our wonderfully nuanced and artistic gift of language?

Well, if we were actually serious about challenging this notion, we would quickly find out about all the shortcomings of our much beloved and much hailed human language, for example by reading more about it in the research of Noam Chomsky. Or we could simply rely on the very fundamental knowledge of the Eipo, an indigenous people of New Guinea, who do not define humans by their unique linguistic skills, but describe them simply by naming their three core characteristics as they perceive them: Balamle, Dilamle, Foklamle. Or, translated into our "lingua franca", English: it walks, it eats, and it copulates.

Now here we are, and we have clearly found it: the reason why we are all so bamboozled by ChatGPT and the likes of it. It imitates us, or at least the version of us we thought defined us, and in a way that hits way too close to home. By doing so it delivers another, this time potentially final, fourth essential blow to our human egos. So, to stop keeping you in suspense and finally release the full torment of human self-deprecation triggered by AI, I will reveal what this last blow is and at the same time remind you of the other three. I shall hence conclude this absolutely and very truly human text with this short, four-item list of "wounding blows" to our human ego:

  1. The earth is actually not the centre of the universe.
  2. We are actually not created by god but are mere descendants of ordinary apes.
  3. Our superb ego is actually not the master of our own behaviour.
  4. Our intellect, specifically our linguistic intelligence, is actually not what makes us human.

Scene from 2001: A Space Odyssey. Credit: Scott Myers / Medium
Justin Garcia

Supply Chain @ United States Air Force | BAS, Project Management

1y

I completely agree with your assessment of the differences between human-written and machine-generated text. You've identified some key points that really set the two apart, such as the fact that human writing often contains errors and reflects real-world experience, while machine-generated text is typically more polished and lacks the depth and nuance that comes from personal experience. It's also interesting to consider the idea that machine-generated text can be seen as similar to a child's confabulations, in that it is trying to please and provide the requested information but may not fully understand the context or implications of what it is producing. These differences highlight the importance of recognizing the value of human writing and the unique qualities it brings to the table. So it is very important to be able to differentiate between text written by a human and text written by a machine.

Nadaav S.

Expert filmmaker. 25 years as Producer, Director, Writer, Editor. Documentary, comedy, branded content. I devise and create innovative content for broadcasters, brands & digital channels incl BBC, ITV, C4, C5, CNBC

1y

The main thing I use ChatGPT for is to reword my writing more concisely. Based on this article, I recommend you give it a go too! Tldr > tldr!

Cordula Lochmann

Building AI literacy, identifying and implementing AI use cases. Measurable productivity gains through integrating generative AI into the value chain.

1y

Wonderful text, Thomas, really enjoyed reading it. I do not agree with your wounding blow 3, but this would be a topic for a long and deep discussion. I absolutely agree that AI-based linguistic capabilities are already fantastic, especially when using tools you can train. Besides not making as many mistakes as we humans, there are more advantages in the business context: you can train a certain style that does not change when the people supposed to write those texts leave the company or are replaced by somebody else for other reasons.
