How do you ensure your AI writing assistant is telling the truth? (spoiler – you can’t)
Photo by Magda Ehlers

One of the challenges with AI writing assistants is that they are optimised for writing fluently, not for writing truthfully.

This means they can quickly and convincingly write on any topic, often sounding like an authentic expert, without actually being one.

For general content this can be fine. If the ‘top five reasons you should blog’ is opinion rather than fact, no-one is harmed. We’re entitled to opinions, and an AI writing assistant using probabilities from previously seen text to write a list of reasons isn’t doing harm by ignoring actual research.

However, if you ask an AI writing assistant to write on the progress of cold fusion, it is more likely to write ‘creatively’ than factually. What the AI is doing is calculating the probability of each next word it writes based on its training data.

If that training data is up-to-date and weighted towards current information, it has a higher probability of writing relevant facts.

More likely, the AI will write what it knows – guaranteeing a well-written answer, but one based on old and potentially outdated information. Or it may get ‘creative’, deviating from fact to ensure you get a well-written (albeit false) answer.
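To make that mechanism concrete, here is a minimal sketch in plain Python. It is a toy bigram model, nowhere near the scale of a real assistant, and the training text is invented for illustration; but it shows exactly what ‘pick the next word by probability learned from training data’ means:

```python
from collections import Counter, defaultdict

# Toy illustration (not any real product's code): a tiny 'language model'
# that chooses each next word by probability learned from training text.
training_text = (
    "cold fusion research continues . cold fusion remains unproven . "
    "fusion research is ongoing ."
)

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current_word, following_word in zip(words, words[1:]):
    next_word_counts[current_word][following_word] += 1

def next_word_probabilities(word):
    """Estimate P(next word | current word) from the training text."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("fusion"))
# -> {'research': 0.67, 'remains': 0.33} (approximately)
```

Notice that the model ranks continuations purely by how often they appeared in its training text. If that text is stale, the most probable continuation is stale too; nothing in the mechanism checks whether the sentence being produced is true.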

This poses a challenge for anyone writing technical, legal, scientific or medical information. While an AI writer can be useful for writing linking bits between facts, or finding some references (though there’s a chance these are ‘invented’), it can’t be trusted to write factually on, well, any topic.

This, again, is because AI writing assistants are tuned to write well, not write factually. Their probability-based writing approach can simulate a research paper or homework assignment closely. Yet the information in this simulated content is tuned to fluency, not accuracy.

So how does a technical writer seeking to accelerate their work gainfully use AI writing assistance without sacrificing factual accuracy?

This is a problem we have taken steps to solve through our reKnow Summarizer solution. In Summarizer we removed the burden of factual accuracy from the AI and returned it to the human.

Our AI takes what has been written previously and summarizes it, preserving the main facts and arguments of the original. This removes the AI’s flexibility to invent facts, putting accuracy back in the hands of a responsible sentient entity, the original human writer.
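The principle is easy to see in a toy extractive summarizer. The sketch below is purely illustrative, a simple word-frequency scorer in plain Python, and is not how Summarizer itself is implemented; but it demonstrates the key property: because every output sentence is lifted verbatim from the source document, the program has no mechanism for inventing a fact.

```python
import re
from collections import Counter

def summarize(document, max_sentences=2):
    """Return the highest-scoring source sentences, in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    word_freq = Counter(re.findall(r"[a-z']+", document.lower()))

    def score(sentence):
        # Score a sentence by the average document-wide frequency of its words.
        words = re.findall(r"[a-z']+", sentence.lower())
        return sum(word_freq[w] for w in words) / max(len(words), 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Emit the chosen sentences in document order, verbatim.
    return " ".join(s for s in sentences if s in top)

source = (
    "Cold fusion would produce energy at room temperature. "
    "The 1989 claims were never reliably replicated. "
    "Most physicists remain sceptical of cold fusion."
)
print(summarize(source))
# Every sentence printed appears word-for-word in the source above.
```

Whatever this function outputs was written by the original human author; the code can only select and reorder, never fabricate.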

In this way Summarizer can rewrite highly technical documents for a general reading audience, or repurpose them accurately as blogs, articles, social posts, scripts, FAQs, white papers… the list goes on.

The accuracy of the output is maintained, based on the accuracy of the input.

Our AI doesn’t invent fake facts. It (very quickly) repurposes, rewrites and reorganises existing facts from a document for new audiences in new formats.

Now if you’re seeking to write new content, particularly low-level filler content, and don’t have a ‘source’ you wish to draw on for factual accuracy or modelling, you may find another AI assistant useful, such as our SimpleMarketing.AI solution, which writes original content from your keywords.

However if, like many organisations, you have a lot of pre-existing content, just not in useful forms for a given audience, you can rewrite it quickly, preserving its accuracy, using Summarizer.

That means you can be more confident facts are preserved and your message gets to your audience accurately as well as eloquently.
