The Subtle Menace: Is AI-Generated Content Killing Web Quality?

Before the advent of #generativeai, writing a quality long-form article would take anywhere from a week to a fortnight, and sometimes even a month. Then came OpenAI’s ChatGPT and a truckload of similar tools, and suddenly clients were demanding 150-200 articles a day!

So what’s the point here? Now that the euphoria over generative AI has dissipated and the haze has cleared a little, the content journey from what we were to what we are becoming looks daunting. For the whole of mankind, because content affects us all, one way or the other.

Last week’s edition of this newsletter looked at how AI was on its way to replacing the Internet/World Wide Web with personalized, AI-powered chatbots. But the current version of the Web (2.0), the fountainhead of information, may die even sooner, for altogether different reasons!

Generative AI presents a two-fold concern. Problem 1 is the mushrooming of fake sites, fake news, content farms, and malicious-intent content. Problem 2 is the production of low-quality or subpar content. Both have the capacity to kill the World Wide Web and, eventually, the human intellect.

The advent of generative AI models has resulted in a surge of machine-generated #content on the #Web. Companies and clients are now extracting #data from the open Web and employing AI to produce inexpensive yet less dependable content.

Already, this has placed considerable strain on prominent platforms such as Reddit, Wikipedia, Stack Overflow, and even Google. The latter, being the dominant search engine, has also felt the impact of AI-powered alternatives like Bing AI and ChatGPT. In response, Google is said to be experimenting with AI-generated summaries as potential replacements for its traditional search results (blue links). This, however, has already raised concerns about the dependability and quality of these AI-generated summaries.

Be very clear that the #internet and the #worldwideweb are undergoing a transformation. It would not be an exaggeration to say that the old Web, or Web 2.0, is gradually fading away, while the new one struggles to take shape.

Let’s first look at Problem 1. It has already triggered alarm bells in the short, seven-month span that ChatGPT has been in existence. A recent report by the online misinformation tracking company NewsGuard, which calls itself a “journalism and tech tool”, has shed light on the use of AI chatbots to produce subpar content that lures advertisers to "made for advertising" websites.

These are the same guys who, a few weeks ago, published an alert that almost 50 AI-generated “content farms” were being run by chatbots posing as journalists.

The new report found that over 140 prominent #brands were unknowingly paying for advertisements that ended up on unreliable #websites generated by #ai. These sites lacked human oversight and were plagued with the typical errors associated with generative AI systems. Surprisingly, programmatic #advertising serves as the primary revenue stream for these AI-generated websites, with the average cost of a programmatic ad being US $1.21 per thousand impressions. The report revealed that numerous Fortune 500 companies and well-known brands inadvertently supported these sites through their advertising efforts.

It did not end there. NewsGuard also reported that it came across approximately 25 new AI-generated sites each week, many of which were also created without human oversight.

Supercharged Fake Content

Twenty-five! And that’s just one estimate. Which means some 100 such sites a month spewing out rubbish! To be fair, clickbait sites existed even in the pre-generative-AI days, ever since the advent of the online search engine. But with AI in the mix, launching such fake or malicious sites has become far easier and quicker, down from a month or so earlier to a couple of days now. And tracing ownership has become even more problematic.

As Lorenzo Arvanitis, a researcher at NewsGuard, has explained, the use of generative AI supercharges the process of creating misleading or false content. This trend is likely to become even more pervasive as these language models become more advanced and accessible. The result? A rise in unreliable AI-generated news and information sites, all vying for a piece of the advertising money pie.

These kinds of fake or poor-quality websites pose a significant threat in the form of misinformation. Generative AI can be harnessed to create fake news and misleading information, which then spreads widely and rapidly.

Which brings us to Problem 2: using generative AI to produce shallow, inferior-quality content. Most of us know by now that generative AI is typically trained on extensive sets of existing data, limiting what it can generate to what it has learned from that data. But what works in its favor, compared with a human writer, is that an AI system can scale effortlessly with financial resources and computing power.

This means that if you are an enterprise, producing machine-generated content far faster than human-written copy is well within your means, quality be damned.

To read the rest of this article, please subscribe.
