How Can You Detect If Content Was Created By ChatGPT And Other AIs?
Bernard Marr
Internationally Best-selling #Author | #KeynoteSpeaker | #Futurist | #Business, #Tech & #Strategy Advisor
Thank you for reading my latest article, How Can You Detect If Content Was Created By ChatGPT And Other AIs? Here at LinkedIn and at Forbes I regularly write about management and technology trends.
To read my future articles, simply join my network here or click 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Slideshare or YouTube.
---------------------------------------------------------------------------------------------------------------
Artificial Intelligence (AI) is capable of producing increasingly human-like writing, pictures, music, and video. There have been reports of students using it to cheat, and an industry has emerged around AI-authored books that people pass off as their own work.
However, there is also at least one reported case of a teacher (apparently ineptly) using AI to incorrectly “prove” his students had cheated – leading him to fail all of them.
There is also a recent case of a photographer winning a competition by submitting an AI-generated picture rather than one he took himself. In this case, the photographer had good intentions and returned his award after exposing what he had done.
Fortunately, some fairly accurate – for the moment – methods exist for detecting when works have been created with the help of AI. In this article, I will look at what tools exist, how they work, and why they could be vital for security and for protecting academic and artistic integrity.
Why Is AI Content Detection Important?
As AI-created content becomes more commonplace, its potential to cause disruptive and harmful consequences increases. A prime example is the phenomenon of "deepfakes," in which realistic images or videos can be made of real people appearing to do or say things they have never done. This has already been used to create pornographic content of people without their consent and to put words in the mouths of politicians, including Barack Obama. You can find videos of Trump being arrested (made even before he was) and of Joe Biden singing Baby Shark (which, as far as I know, he has never done!).
Some of this might seem funny, but there’s the potential for it to have damaging consequences for the people involved – or for society at large if it influences democratic processes.
AI has also been used to clone human voices to commit fraud. In one case, scammers attempted to trick a family into believing their daughter had been kidnapped in order to extort ransom money. In another, a company executive was persuaded to transfer more than $240,000 by a deepfaked voice he believed to be his boss's.
If it’s used by students to cheat on essays and exams, it could damage the integrity of education systems and the reputations of schools and colleges. This could result in students being inadequately prepared for the careers they hope to enter and the devaluation of diplomas and certificates.
All of this highlights the importance of robust countermeasures: educating the public on the dangers of AI and, where possible, detecting or even preventing its misuse. If the issue goes unaddressed, AI could lead to widespread disinformation, manipulation, and damage. So, what exactly can be done?
Methods for Detecting AI-Generated Content
Fortunately, there are a number of methods available for detecting AI-generated content.
Firstly, there are digital tools that use their own AI algorithms to attempt to determine whether a piece of text, an image, or a video was created using AI.
You can find several AI text detectors freely available online. The AI Content Detector claims to be 97.8% reliable and can examine any piece of text for signs that it wasn't written by a human. It does this by training on the patterns that tools like ChatGPT and other Large Language Models follow when they generate text, then comparing those patterns against the submitted text to judge whether it is natural human writing or AI-generated.
This works because, to a computer, AI-generated content is relatively predictable: it is built by repeatedly choosing high-probability words. A measure called "perplexity" captures how predictable a passage is to a language model. If the text consistently uses the most probable wording, its perplexity is low and there is a higher chance it was created by AI.
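To make the idea of perplexity concrete, here is a minimal sketch of how such a score can be computed. It assumes the Hugging Face transformers and torch packages are installed and uses the small, public GPT-2 model as the scorer; the threshold at the end is purely illustrative and is not how the detectors named above actually work.

```python
# Minimal perplexity-scoring sketch (illustrative only, not a real detector).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"  # small public language model used as the scorer
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for the text.
    Lower perplexity = the text is more 'predictable' to the model,
    which detectors treat as a hint that it may be AI-generated."""
    encodings = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the tokens as labels makes the model return the
        # average cross-entropy loss over the sequence.
        outputs = model(encodings.input_ids, labels=encodings.input_ids)
    return torch.exp(outputs.loss).item()

sample = "Artificial intelligence is transforming the way we live and work."
score = perplexity(sample)
print(f"Perplexity: {score:.1f}")

# Illustrative decision rule only -- real detectors combine many signals.
if score < 40:
    print("Low perplexity: text is highly predictable (possible AI).")
else:
    print("Higher perplexity: text looks more like natural human writing.")
```

In practice, commercial detectors combine signals like this with many others, which is another reason to treat any single score with caution.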
If you need to know with a high degree of assurance, you can check the text against multiple AI detectors. Other useful tools include the Writer AI Content Detector and Crossplag.
For detecting deepfakes, companies including Facebook and Microsoft are collaborating on the Deepfake Detection Challenge. The project regularly releases datasets that can be used to train detection algorithms, and it has inspired a contest on the collaborative data science portal Kaggle, with users competing to build the most effective algorithms.
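As a rough illustration of what "training a detection algorithm" on such a dataset involves, here is a minimal sketch of a binary real-versus-fake frame classifier in PyTorch. The folder layout ("frames/real", "frames/fake"), the ResNet-18 backbone, and the hyperparameters are all my own assumptions, not the challenge's official baseline.

```python
# Minimal real-vs-fake frame classifier sketch (illustrative assumptions).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Expects a directory layout like frames/real/*.jpg and frames/fake/*.jpg
train_data = datasets.ImageFolder("frames", transform=transform)
loader = DataLoader(train_data, batch_size=32, shuffle=True)

# Reuse a standard image backbone and replace its head with two outputs.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:  # a single pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Competition-grade detectors go much further, analysing faces frame by frame and across time, but the basic recipe of labelled data plus a trained classifier is the same.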
Recognizing the threat that AI-generated video and images could pose to national security, military organizations have joined the fight too. The US Defense Advanced Research Projects Agency (DARPA) has created tools that aim to determine whether images have been created or manipulated by AI. One of them, known as MediFor, compares a suspect image against real-world imagery, looking for telltale signs such as lighting and coloring effects that don't correspond with reality. Another, known as SemaFor, analyzes the consistency between pictures and the text captions or news stories accompanying them.
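For a feel of how simple image forensics can work, here is a minimal sketch of one classic signal, error level analysis (ELA). This is not how MediFor or SemaFor operate; it merely illustrates the general idea of hunting for regions of an image that behave inconsistently. It assumes Pillow is installed and that "photo.jpg" is a local JPEG file.

```python
# Minimal error level analysis (ELA) sketch -- a classic forensic signal,
# not DARPA's method. Edited or pasted-in regions often recompress
# differently from the rest of a JPEG and stand out in the difference.
from PIL import Image, ImageChops

original = Image.open("photo.jpg").convert("RGB")

# Re-save the image at a known JPEG quality, then compare with the original.
original.save("resaved.jpg", "JPEG", quality=90)
resaved = Image.open("resaved.jpg")

diff = ImageChops.difference(original, resaved)

# Scale up the differences so they are visible to the eye.
extrema = diff.getextrema()
max_diff = max(channel_max for _, channel_max in extrema) or 1
scale = 255.0 / max_diff
ela = diff.point(lambda value: min(255, int(value * scale)))
ela.save("ela.png")  # unusually bright areas warrant a closer manual look
```

Real forensic pipelines layer many such cues together, which is exactly the approach programs like MediFor are reported to take at a far larger scale.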
Finally, we shouldn’t overlook the role that human judgment and critical thinking can play in AI content detection. Humans have a sense of “gut instinct” that – while certainly not infallible – can help us when it comes to determining authenticity. Casting a critical eye and applying what we know – is Joe Biden really likely to create a video of himself singing along to Baby Shark? – is essential, rather than delegating all responsibility to machines.
The Future of AI Detection – An Arms Race?
It’s likely we are only witnessing the very early stages of what will be an “arms race” scenario as AI becomes more efficient at creating lifelike content, and the creators of detection tools race to keep up.
This isn't a battle that will be fought only between technologists. As the implications for society become clearer, governments and citizens' groups will find they have an important role as legislators, educators, and custodians of "the truth." If we discover that we can no longer trust what we read, watch, and hear, our ability to make informed decisions in every walk of life, from politics to science, will be compromised.
Bringing together technological solutions, human judgment, and, when necessary, the informed oversight and intervention of regulators and lawmakers will be our best defense against these emerging challenges.
To stay on top of the latest on new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books Future Skills: The 20 Skills and Competencies Everyone Needs to Succeed in a Digital World and The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society.
---------------------------------------------------------------------------------------------------------------
About Bernard Marr
Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 21 books, writes a regular column for Forbes and advises and coaches many of the world's best-known organisations. He has over 2 million social media followers, 1.7 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.
Bernard’s latest books are ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’ and ‘Future Skills: The 20 Skills and Competencies Everyone Needs To Succeed In A Digital World’.