The Danger Of AI Content Farms

Thank you for reading my latest article, The Danger Of AI Content Farms. Here at LinkedIn and at Forbes I regularly write about management and technology trends.

To read my future articles, simply join my network here or click 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Slideshare or YouTube.

---------------------------------------------------------------------------------------------------------------

Using artificial intelligence (AI) to write content and news reports is nothing new at this stage. The Associated Press began publishing AI-generated financial reports as far back as 2014, and since then, outlets, including the Washington Post and Reuters, have developed their own AI writing technology.

Initially, it was used to create templated copy, such as sports reports. The AI simply grabs data such as team and player names, times, dates, and scores from feeds, then augments it with natural language generation, adding color and flavor to turn it into a readable article.
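
To make this concrete, here is a minimal sketch in Python of how such templated copy can be assembled. The match data and phrasings are invented for illustration; real systems are far more sophisticated, but the principle is the same.

import random

# Structured match data as it might arrive from a feed (invented here).
game = {"home": "City FC", "away": "Rovers", "home_score": 3,
        "away_score": 1, "scorer": "J. Smith", "minute": 78}

# Canned phrasings; picking one at random adds a little "color and flavor"
# so that repeated reports don't all read identically.
openers = ["{home} beat {away} {home_score}-{away_score}.",
           "{away} slumped to a {home_score}-{away_score} defeat against {home}."]
color = ["{scorer} sealed the win with a goal in the {minute}th minute.",
         "A {minute}th-minute strike from {scorer} put the result beyond doubt."]

article = " ".join(random.choice(options).format(**game)
                   for options in (openers, color))
print(article)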

Just a few short years ago, this technology was entirely proprietary and available only to the media corporations that could afford to buy and run it. Today, anyone can use AI to generate an article in seconds and, with just a little technical knowledge, can set up a “content farm” designed to churn out and publish online content 24/7.

Recently, an investigation by NewsGuard uncovered nearly 50 websites publishing content created entirely by generative AI. The investigation describes the articles as "low quality" and "clickbait." Some appear to be designed simply to generate money by showing advertising and affiliate links to readers. Others may have a more nefarious purpose, such as spreading disinformation, conspiracy theories, or propaganda.

So, let’s take a look at some of the threats posed by this new breed of automated content farm and explore some of the steps that can be taken to protect ourselves from them.

Disinformation and Propaganda

Even without robots churning out content day and night, there’s a lot of bad information on the internet. Given the speed at which AI can generate articles, this is only likely to increase. The real danger comes when this information is used to maliciously deceive or advance a false narrative. Conspiracy theories exploded during the global Covid-19 pandemic, causing confusion and alarm among an already frightened general public. We’ve also seen a huge increase in the emergence of “deepfakes” – convincing AI-generated images or videos of people saying or doing things they never did. In combination, these tools can be used by those who want to push a political or social agenda to deceive us in ways that could be very damaging.

Many of the websites highlighted by NewsGuard obfuscate their ownership as well as the details of those who have editorial control. This can make it difficult to determine when agendas might be in play, as well as to establish accountability for defamation, disseminating dangerous information, or malicious falsehoods.
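
One practical check anyone can run is to look up a site's public WHOIS record to see whether ownership details are disclosed or hidden behind a privacy proxy. Here is a minimal sketch in Python, assuming the standard whois command-line tool is installed; the domain is a placeholder.

import subprocess

def lookup_registration(domain: str) -> str:
    # Query the public WHOIS record via the standard command-line tool.
    result = subprocess.run(["whois", domain], capture_output=True,
                            text=True, timeout=30)
    return result.stdout

record = lookup_registration("example-news-site.com")  # placeholder domain

# Records naming only a privacy service are one sign, as described above,
# that a site's ownership is being deliberately obscured.
flags = [line for line in record.splitlines()
         if "redacted" in line.lower() or "privacy" in line.lower()]
print("\n".join(flags) or "No obvious privacy-proxy markers found.")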

Copyright Infringement

Some of the content farms identified so far appear to exist solely to rewrite and republish articles produced by mainstream media outlets such as CNN. It also has to be noted that the training data these AI models learn from is often taken from copyrighted works created by writers and journalists.

This can make life difficult for anyone who makes a living from writing and content creation of all sorts, including artists and musicians. It has already led to the creation of The Human Artistry Campaign, which aims to protect the work of human songwriters and musicians from being plagiarized by AI. As already noted, many of these content farms are effectively anonymous, making it very difficult to find and take action against the humans who are using AI to infringe copyright. As things stand, this is a legal "grey area": nothing currently stops the creation of AI works that are “inspired” by human works, and society has yet to establish exactly how far this will be tolerated in the long run.

The Spread of Clickbait

Many of the AI-farmed articles that have been found are clearly there just to put adverts in front of audiences. By instructing the AI to include particular keywords, their operators hope the articles will rank highly on search engines and attract an audience. The AI can also be instructed to give the articles intriguing, shocking, or frightening headlines that will encourage users to click on them.

The danger here is that it makes it more difficult for us to get genuine, valuable information. Distributing advertisements via the internet obviously isn’t a crime – it funds a huge amount of the media we consume and the services we use online. But the speed and consistency with which AI content can be churned out create a risk that search results will be muddied and genuine content drowned out. It’s already far cheaper to create AI content than human content, and the output of these farms can be scaled almost infinitely at very little cost. This leads to a homogenization of content and makes it harder for us to find unique perspectives and valuable, in-depth investigative journalism.

The Consequences of Biased Data

Bias is an ever-present danger when working with AI. But when it is present in the training data used to power algorithms creating farmed content at scale, it can have particularly insidious consequences. An AI system is only as good as the data it is trained on, and the old computing adage that “garbage in, garbage out” applies with even more force to machines producing content around the clock. Any bias contained in the training data will infect the generated content, perpetuating misinformation and prejudice.

For example, if a badly constructed survey that forms part of an AI’s training data over-represents the views of one segment of society while minimizing or under-representing the views of another, the AI-generated content will reflect the same bias. This can be particularly harmful if those whose views are marginalized are vulnerable or in a minority. We’ve already seen that the operators of these content farms appear to exercise little oversight over their output, so it’s possible that the dissemination of this kind of biased, prejudiced, or harmful material could go unnoticed.
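
As a toy illustration of how that skew passes straight through to the output, consider this short Python sketch, which trains a tiny bigram text generator on an invented, deliberately lopsided "survey" corpus.

import random
from collections import defaultdict

# An invented corpus in which one view is over-represented nine to one.
corpus = (["the new policy is a great success"] * 9
          + ["the new policy is a costly failure"] * 1)

# Train a simple bigram model: record which word follows which.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate(start="the", max_words=8):
    # Sample a sentence by walking the learned transitions.
    words = [start]
    while len(words) < max_words and words[-1] in transitions:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

# Because 90% of the training data carries one view, roughly 90% of the
# generated sentences will repeat it: the bias passes straight through.
print(generate())

The arithmetic is the same however large the model: skewed inputs produce skewed outputs.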

In the end, biased AI output is bad for society as it perpetuates inequality and creates division. Amplifying this by publishing it across thousands of articles churned out day and night is unlikely to lead anywhere good.

What Can Be Done?

No one would claim that there’s never an agenda behind human-authored journalism or that human-run media outlets never make mistakes. But most countries and jurisdictions have safeguards in place, such as guidelines stipulating that news reporting and opinion must be kept separate, and laws covering libel, slander, and editorial accountability.

Regulators and legislators need to ensure that these frameworks are still fit for purpose in an age where content can be created and distributed autonomously at a massive scale.

Additionally, the responsibility for mitigating harm clearly lies with the tech companies that create AI tools. They must take steps to ensure that they reduce the impact of bias as much as possible and build in systems that encompass accuracy, fact-checking, and recognition of copyright.
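
As a minimal illustration of what such safeguards might look like, here is a Python sketch of a simple pre-publication gate. The three checks are hypothetical placeholders standing in for real fact-checking, bias-detection, and copyright-matching services, not any vendor's actual pipeline.

def passes_fact_check(text: str) -> bool:
    # Placeholder: a real system would verify claims against trusted sources.
    return "unverified" not in text.lower()

def passes_bias_review(text: str) -> bool:
    # Placeholder: a real system would run a trained bias classifier.
    return True

def passes_copyright_scan(text: str) -> bool:
    # Placeholder: a real system would match the text against indexed works.
    return True

def review_before_publishing(text: str) -> str:
    # Release AI-generated text only if every check passes; otherwise
    # route it to a human editor rather than publishing automatically.
    checks = (passes_fact_check, passes_bias_review, passes_copyright_scan)
    return "publish" if all(check(text) for check in checks) else "human review"

print(review_before_publishing("An unverified claim about a public figure."))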

And as individuals, we need to take steps to protect ourselves, too. An important skill we all need in the age of AI is critical thinking. This simply means being able to evaluate the information we come across and make a judgment on its accuracy, truthfulness, and value, particularly if we aren’t sure whether it was created by a human or a machine. Education certainly has a part to play here, and an awareness that not everything we read may have been written with our best interests at heart should be instilled at a young age.

Altogether, addressing the dangers posed by large-scale, autonomous, and often anonymous content distributors is likely to require smart regulators, responsible businesses, and a well-informed general public. This will ensure we are able to continue to enjoy the benefits of ethical, responsible AI while mitigating the harm that can be done by those looking to make a quick buck or mislead us.


To stay on top of new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books, Future Skills: The 20 Skills and Competencies Everyone Needs to Succeed in a Digital World and The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society.

---------------------------------------------------------------------------------------------------------------

About Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 21 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1.7 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest books are ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’ and ‘Future Skills: The 20 Skills and Competencies Everyone Needs To Succeed In A Digital World’.

Benedikt Backhaus

AI for SMEs & solopreneurs | Twice as productive in marketing & sales with ChatGPT & Co. | Keynote Speaker | AI Trainings | Video Tutorials | ChatGPT & Prompt Engineering

1y

Very insightful, thanks for sharing Bernard Marr. It is high time we debate these issues and how responsible companies and developers can be held accountable.

Birdie G.

Friendly | Focused | Ethical

1y

Great summary of the basic concerns. My only criticism on first read-through is this part: “the responsibility for mitigating harm clearly lies with the tech companies that create AI tools. They must take steps to ensure that they reduce the impact of bias as much as possible and build in systems that encompass accuracy, fact-checking, and recognition of copyright.” Unfortunately, private endeavour consistently shows us a notable profit bias. OpenAI might claim to care a bit about ethics, but they still white-knuckle a ‘profit for purpose’ model which allows them to shrug off any ideas of transparency re training data. That leaves us with regulatory levers that are very slow to act. In Australia we are only just now seeing demands that the telco industry do something about caller ID cloaking in VOIP, after decades of scam callers being enabled to make it look like they are calling from a local mobile number or landline. We cannot afford that kind of lag.

Marcin Burakowski

IT Program Manager & Executive | Technology strategy implementation & management | ESM & ITSM, ITOM & AIOps, IT Modernization & Cloud, Scaled Agile | Managing Partner at Evergo

1y

If we thought bot farming and misinformation was bad before... Just wait and see how effective a fully automated system powered by AI can be!

Joseph Pareti

AI Consultant @ Joseph Pareti's AI Consulting Services | AI in CAE, HPC, Health Science

1y

LinkedIn is brimming with clickbait.

Olga Demeter

Independent Professional Training & Coaching Professional

1y

Thank you for making your article about the dangers of AI freely available. We've heard a lot about the dangers, but little detail is widely available about what those dangers might actually be. For sure there are many more perils with AI. Your recommendation of using critical thinking is of primordial importance!

