Bing CoPilot GPT-4 – Smooth Writing, Spooky (Un-)Certainty
Image: Thoth presenting the invention of writing, Greenfield Papyrus, c. 950 BC (Britannica)

Bing has been running on OpenAI GPT-4 since March 2023, and MS CoPilot has been generally available since December 2023. When I published a series of four LinkedIn articles based on tests and observations of ChatGPT over a year ago, I was impressed by the writing capability and the apparently unlimited range of topics covered, but troubled by one major, nagging problem: ChatGPT very confidently generated information that was simply wrong. We’ve all heard about misguided lawyers submitting AI-generated briefs full of fictional citations, and we’ve heard the developers admitting to this issue of “hallucinations.” Upon exercising the Bing CoPilot (formerly Bing Chat) feature, based on OpenAI GPT-4, I discovered results that are both amazing and amusing, if disconcerting.

The auto-generated responses to the “Ask me anything” prompt indicate very flexible access to web search and highly creative compilation of multiple data sources. And there’s the rub: the output can be rather too creative, with the AI making associations and drawing deductions that a typical human would not consider. We now see this functionality everywhere: AI-powered Takeaways on LinkedIn, auto-summaries on search engines, and CoPilot in Office 365. With Bing CoPilot, you can choose your Conversational Style: More Creative, More Balanced, or More Precise.
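
Microsoft has not published how these style settings are implemented, but a plausible analogue in the underlying OpenAI API is the temperature sampling parameter, which trades conservatism against creativity. Here is a minimal sketch of that idea in Python; the style-to-temperature mapping, model name, and values are my own illustrative assumptions, not CoPilot’s actual internals:

```python
# A hedged sketch: Bing CoPilot's "Conversational Style" plausibly maps to
# a sampling setting such as temperature in an OpenAI-style chat call.
# The temperature values below are illustrative guesses, not Microsoft's
# actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STYLE_TEMPERATURE = {
    "More Creative": 1.0,   # freer sampling: more invention, more risk
    "More Balanced": 0.7,   # middle ground (the default style)
    "More Precise": 0.2,    # conservative sampling, closer to sources
}

def ask(question: str, style: str = "More Balanced") -> str:
    """Send a single-turn question with a style-dependent temperature."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        temperature=STYLE_TEMPERATURE[style],
    )
    return response.choices[0].message.content

print(ask("Who is Dr. Applebreath?", style="More Precise"))
```

Lower temperatures make sampling more deterministic, which tends to reduce, though not eliminate, the kind of invented associations described below.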

This is great: computing is really starting to work the way it always has in sci-fi, where we can just talk to the machines, even literally speak to them in solutions like Microsoft Conversational AI for Healthcare and Customer Engagement, built on the acquisition of Nuance for its voice recognition technology. So we have reached the Mount Olympus of Search, where information on the web is not only available but rapidly presented to us in handsomely composed summaries. But what if the AI doesn’t understand the information?

To introduce this study, some initial clarifications will help. I published a sci-fi book via Kindle Direct Publishing, available on Amazon, called "The President's Suit," which features a hero named Dr. Applebreath. I have a web site I named drapplebreath.com, where I published my first book about PDF, "From Paper to Web," as well as several articles on PDF, OCR, and data capture in the cloud. I added a joke link at the top of the web page called Dr. Applebreath Labs. This overlap of terms led to the following confusion.

I decided to ask Bing CoPilot about my science fiction character: “Who is Dr. Applebreath?” I was incredibly surprised to find that I am Dr. Applebreath, and that he wrote my book “From Paper to Web” (Adobe Press, 1997). Curious?

The results are very exciting, but the more I followed CoPilot’s suggested queries, the stranger the trip became. The first answer found my sci-fi book on Amazon, which is perfectly normal for a search result.

"Dr. Applebreath?is an intriguing character associated with various creative works. Here are a couple of contexts where you might encounter Dr. Applebreath:

1. The President’s Suit: How Dr. Applebreath and the Little White Ball Saved the World from Nuclear Annihilation: In this fictional story by Tony McKinley…"

Then Bing CoPilot demonstrated more impressive comprehension than normal search results, as we expect AI to do. It apparently accomplished this leap by finding my web site at drapplebreath.com, where the search term matches the URL. This web site does not mention the sci-fi book, but it includes my articles on PDF, OCR, and AI. But now the fictional character from my sci-fi book is “associated with” my non-fiction work. This is where CoPilot leaps into imaginary connections.

2. “From Paper to Web – 25th Anniversary – PDF Edition:

Dr. Applebreath is also associated with a publication titled “From Paper to Web”

This next behavior must be caused by the fact that I jokingly included a link at the top of my site to Dr. Applebreath Labs; CoPilot assumes all the articles I have published on that site are the product of this fictional character’s work.

3. “Data Extraction in the Cloud – Amazon Textract:

Dr. Applebreath Labs has expertise in data extraction using Amazon Textract”

I admit I contributed to the AI’s confusion, but the web is a confusing place, and now we start to see how differently AI “thinks” from the way we do, with the following reminder that is both funny and jarring. I see I am becoming Dr. Applebreath, and in addition to his sci-fi skills, he has gained my expertise.

“In summary, Dr. Applebreath is a fictional character known for his innovative inventions and contributions to various fields. Whether it’s energy weapons, PDF expertise, or cloud-based data extraction, he’s always at the forefront of technological advancements!”

Stepping back from the fascinating creativity of GPT-4, we return to the subject of this article: AI just makes stuff up! Or, more accurately in the above examples, GPT-4 draws illogical conclusions from lexically, but not logically, related information. For example, a suggested query offers to tell me “What other inventions has Dr. Applebreath worked on?” As humans, we remember that the AI has just told us “Dr. Applebreath is a fictional character,” and now we read that this fictional character has replaced me as the author of my book, in a statement blended with text copied directly from my web site:

“Dr. Applebreath authored a book titled “From Paper to Web,” which was simultaneously published in both paperback and PDF formats on April 1, 1997.”

Well, as I mentioned above, the quality of the writing is much improved over ChatGPT, to the point where it is quite entertaining and assumes the tone of the text you would expect to find on the cover of a sci-fi paperback, at least in the default "More Balanced" mode selected for this example. But here again, beyond the impressive creative writing style, it makes up a lot of stuff that has nothing to do with the book, the character, or anything relevant to the topic it is discussing. It prompts me to ask, “What is the origin of the Little White Ball?” and when I do, it becomes wildly imaginative!

“Ah, the Little White Ball—a mysterious enigma that has captured imaginations and sparked countless theories! Let’s delve into its origin, shall we?

Once upon a time, in the dimly lit corridors of Dr. Applebreath’s secret laboratory, the air crackled with anticipation. Dr. Applebreath, a brilliant scientist with wild hair and a penchant for eccentricity, had been working tirelessly on his latest invention—the President’s Suit. But hidden away in a dusty corner of the lab sat something smaller, more unassuming: the Little White Ball.”

It then goes on to suggest fantastic origin stories including “Cosmic Serendipity, Quantum Anomalies, Whispers in Dreams,” and so on. All great stuff, but if I am looking for the Little White Ball’s origin story, this isn’t it.

Again, we’re back to that crucial problem with trusting AI for search and auto-summarization: it just makes stuff up. We can dress up this critical flaw with diagnostic terms like “hallucination,” but that does nothing to resolve the corruption of information, the building of false narratives, and the creation of inaccurate deductions.

In conclusion, with GPT-4, both the creativity and the quality of the writing are vastly improved, but if you need to find facts rather than fiction, you need to double-check the output. If CoPilot does this with light-hearted topics like this one, what does it do with business data? Again, if you are intimately familiar with the subject matter, you can easily sort the wheat from the chaff, use the good stuff, and delete the “hallucinations.” But if you are dependent on CoPilot and GPT-4 to understand complex data for you, be careful!

This situation reminds me of the myth that Socrates relates in Plato’s “Phaedrus,” where the Egyptian god Theuth has discovered writing and wants to give it to humans, but first he runs it by King Thamus. In expressing the downside of this new invention, Thamus could be talking about the current state of AI:

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
