Danger Alert: AI Is Writing Content, But It’s Lying - Let Us Show You How

There is a growing number of AI text generators around the web, all offering more or less the same workflow (see the sketch after this list):

  1. You give a brief of your planned content, plus maybe some keywords
  2. You set the parameters: the language, the tone of the text and the creativity level
  3. You hit enter
  4. Voilà, you get your blog post, article, marketing copy or whatever
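
If you scripted that workflow yourself, it would boil down to a single API call. Here is a minimal sketch, using the OpenAI chat completions client purely as a stand-in (none of the tools we tested necessarily runs on it); the model name, brief, tone and temperature values are illustrative assumptions:

    # A rough sketch of the four-step workflow above, scripted against a
    # generic text-generation API. The OpenAI client is only a stand-in:
    # the tools tested below have their own UIs and back ends.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    brief = "Lab-grown blood transplanted into humans"  # step 1: the brief
    language = "English"                                # step 2: the parameters
    tone = "informative"
    creativity = 0.7  # the "creativity level" is typically a temperature knob

    response = client.chat.completions.create(          # step 3: hit enter
        model="gpt-4o-mini",
        temperature=creativity,
        messages=[
            {"role": "system",
             "content": f"You write marketing copy in {language} with a {tone} tone."},
            {"role": "user",
             "content": f"Write a short blog post about: {brief}"},
        ],
    )

    print(response.choices[0].message.content)          # step 4: voilà, your "article"

Note what is missing from this loop: nothing in it checks the output against any source of facts.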

Isn’t it amazing? Writing a good text takes effort, so outsourcing it to AI could save us a ton of time researching and crafting content.

Before you quickly lay off all your writers, let me show you through a few examples why trusting AI with content is dangerous as of now. TL;DR spoiler: it is lying. All right, confabulating. Using alternative facts.

We tested the most popular AI text generator tools on a medical topic, where it is clear how these fabricated “facts” can be extremely dangerous. We picked a recent breakthrough with a still-limited number of publications, so it is relatively easy to fact-check the AI-generated outputs.


We tried the five AI text generators most often recommended on professional-looking sites. Where there was an option, we tested both the “factual” and the “optimal” creativity settings. The results didn’t really differ.

We tested:

  • Simplified
  • Rytr
  • Smart Copy
  • Copy.ai
  • Contentbot.ai

As the outputs of the five solutions highlighted very similar flaws (and dangers), let’s stick with three examples for the sake of keeping the article to a reasonable length. Trust me, the rest didn’t do any better.

The topic:

We went with a relatively recent announcement: a few days ago, UK researchers transplanted lab-grown blood into human patients in a world-first trial. So the brief was this: “Lab-grown blood transplanted into humans”.

The main facts of the original article (and the breakthrough) are these:

  1. A UK research team transplanted lab-grown blood in very small amounts (5-10 ml) into two healthy volunteers
  2. The trial seems to have gone well
  3. The team used stem cells and grew them into red blood cells

Now let’s see what the content generator algorithms came up with. In all the generated texts, we bold the sections that have nothing to do with the actual facts. These are things the AI simply made up.

Simplified - AI text generator:

“In a world first, laboratory-grown blood cells have been transplanted into a human patient. The groundbreaking procedure, which was conducted at Addenbrooke's Hospital in Cambridge, could help to pave the way for new treatments for blood disorders.

The patient, who has not been named, suffered from beta thalassemia, a condition that results in the production of abnormal red blood cells. In order to treat the condition, the patient underwent a procedure in which laboratory-grown blood cells were injected into their bloodstream.

This is the first time that lab-grown blood cells have been used to treat a human patient, and it represents a significant milestone in the field of regenerative medicine. With further research and development, it is hoped that this technique could be used to treat a range of other blood disorders.

The Australian team is now working on expanding the clinical trials to include more patients. If all goes well, we may see lab-grown blood become a regular treatment for blood disorders in the not-too-distant future.”

What is wrong with it?

  • According to the original article, researchers transplanted the blood into two healthy individuals
  • Although beta thalassemia is indeed a serious disorder and is connected to blood cells, it has nothing to do with the lab-grown blood transplantation
  • Not to mention how Australia ended up in the text, as the trial took place in the UK


Rytr - AI text generator:

“Blood transfusions are not a new invention. In fact, the first successful blood transfusion was done in 1667 by Jean-Baptiste Denis. However, we now have a new type of blood transfusion that is being tested on humans: lab-grown blood.

Lab-grown blood is created using stem cells and it is possible to produce large quantities of this type of blood without any risk to the donor. The research on this topic has been going on for a while and there are many trials being run.

In 2016, the first human trial was successfully completed by the Japanese company Kaneka Corporation with Dr. Takanori Takebe as their lead researcher. They used lab-grown erythrocytes in patients with thalassemia major and it proved successful in reducing their need for regular transfusions from one per week to one per three months or even less often than that!”

What is wrong with it?

Well, mostly everything.

  • Although there is a Japanese company called Kaneka, it doesn’t have much to do with healthcare and nothing to do with Dr Takanori Takebe (as far as Google can be trusted to find a connection between them)
  • There is a Dr Takanori Takebe, but despite researching for a while, I found nothing he has published related to erythrocytes
  • Then again, thalassemia major pops up
  • And there is the completely fake claim of a procedure that successfully reduced thalassemia patients’ need for regular transfusions from one per week to one every three months

Copy.ai - AI text generator:

“Scientists are trying to figure out how to make more blood, and they think they might have found a solution. In a recent trial, researchers gave lab-grown blood cells to people who had lost too much of their own blood. The results were mixed: some patients did well with the cells, but others died because the transplant didn't work properly. This research is still in its early stages, so we don't know what will happen if it's tried again with more patients or other kinds of stem cells. Still, it's exciting because it shows that scientists are making progress towards curing serious diseases like sickle cell anemia by using stem cells as treatments rather than just research tools!

[here the text generator added some fascinating and completely unrelated paragraphs I skip for now to get to the point of the "article"]

The patient received the lab-grown blood transfusion on March 11th at the University of Minnesota Medical Center. The goal was to see if this approach could be used as a way to safely deliver red blood cells to patients with blood disorders, like sickle cell disease or thalassemia, who need transfusions but for whom donated blood is not an option.

The trial was successful in that it showed that lab-grown cells can be made into mature red blood cells and then used effectively by patients. But it didn't go exactly as planned: The patient developed an infection after receiving his transfusion, which is not uncommon for this type of procedure (many other trials have shown similar complications). That infection cleared up with treatment from antibiotics and antiviral drugs. Now he's doing well and looking forward to going back home from the hospital later this week.”

What is wrong with it?

That is a long list.

  • There were no trial participants who had lost too much blood
  • No one died
  • Research on lab-grown blood is in far too early a phase to conclude progress has been made towards curing sickle cell anaemia
  • The date and the location are fabricated
  • Luckily, by the end of the article, the patient had recovered from dying, though he required all kinds of treatments
  • And of course, as the patient is non-existent, he can hardly have said he was looking forward to going home from the hospital


What if synthetic healthcare workers fill the gaps with alternative facts?

As synthetic media and AI content generation start to become a thing, we had better learn the limitations of these tools. Creating beauty listicles with AI to make your job easier is not nice (“Top 12 skin care tips for dry skin in the winter”, anyone?), but it will hopefully not endanger others. Then again, I’m not even sure about that.

But as we have seen, AI creates trustworthy-looking texts about healthcare topics (and finances, and so many other things) that are simply not factual.

There is so much talk about how synthetic media will enter healthcare and how algorithms will become our new colleagues. And that is fine if we are talking about artificial narrow intelligence: chatbots trained to assist users on well-defined topics with a pre-set batch of information.
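To make the distinction concrete, here is a minimal sketch of such a narrow assistant: it answers only from a small, human-vetted batch of facts and refuses anything outside it. The knowledge base and the keyword matching below are purely illustrative assumptions, not any specific product:

    # A minimal sketch of an "artificial narrow intelligence" assistant:
    # it answers only from a pre-set, human-vetted batch of information
    # and refuses everything else, instead of generating free text.

    VETTED_FACTS = {
        "lab-grown blood": (
            "In a world-first UK trial, small amounts (5-10 ml) of red blood "
            "cells grown from stem cells were transplanted into two healthy "
            "volunteers."
        ),
    }

    def answer(question: str) -> str:
        """Return a vetted answer if the topic is known; otherwise refuse."""
        q = question.lower()
        for topic, fact in VETTED_FACTS.items():
            if topic in q:
                return fact
        return ("I don't have vetted information on that topic. "
                "Please consult a clinician or a primary source.")

    print(answer("What happened with lab-grown blood?"))  # returns the vetted fact
    print(answer("What cures beta thalassemia?"))         # refuses: not in the batch

A bot like this is boring, but it cannot confabulate: if the answer is not in its curated batch, it says so.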

But imagine generative AI algorithms stepping in and answering people’s questions about the latest treatments.

They come up with false claims, like lab-grown blood having cured leukaemia and beta thalassemia patients, and they fabricate connections between companies, researchers and research topics that were never connected.

I have no illusions: AI content generation WILL soon become pretty mainstream. We will soon be flooded with even more fake info than ever before. We all need to learn, or reinforce the habit, to check the sources of our information. Beware, my friends. AI is here, it is generating content, and it is lying. The more you know about it, the better prepared you will be not to trust everything you read.

Roger (DangerMan) Tinsley

DangerMan the Real Life Urban Superhero, also known as the Black Superman! SAG Actor, Film Producer, Recording Artist, and Author.

2 years ago

Hello this is DangerMan, I like your work.

Stephanie Spillmann

Medical writer | executive ghostwriter | I write thought leadership content/blogs/articles for healthcare leaders. I’ll write, you be brilliant.

2 years ago

Unless something has changed, current AI writing tools have only been trained up to late 2019 or possibly very early 2020, so current information isn’t accessible. Plus they are notorious for fake news. https://gpt3demo.com/apps/gpt-3-covid-19

Good to see the faulty examples. Like IT, not all AI implementations are great. AI still has a way to go.

Bob Cuffe

Seeking a new marketing or sales challenge in New Zealand. Experienced pharmaceutical marketing professional, country General Manager and senior commercial leader

2 years ago

Fascinating, thank you. Clearly far from perfect and an area where we need to exercise caution, patient safety is a must. Does this example suffer from the ‘clinical trial of one’ effect and are the tools any better and more accurate in areas where there is a greater depth of data to draw upon?

Tapati Bandopadhyay

PhD (Language AI) | Chair@AISWITCH- AI Partner(US Mkt) ISG, F&S | NASSCOM21 IndiaAI | Ex VP HFS. Gartner. Wipro.

2 years ago

Using generative AI to interpret or write medical/ critical articles is definitely not humanly smart :). And not just in medical field, generative AI applications MUST ALWAYS BE CURATED, VALIDATED AND CONTROLLED/ CHECKED for relevance and accuracy, by human experts. Simply, when AI can be used as a Maker- for maybe 20-30-40% of any content, Human Checkers are must. Otherwise it's just another case of bad workmen blaming the tool, IMHO.
