ChatGPT isn't a Trekkie (AI research)
John Ogilvie
Since 2007, I've helped GC clients sort out their IT. Since 2017, it's been a dozen clients in cloud. As a serial startup CEO, I help GC build great solutions. Fast, clean, secure, and no drama.
My original title for this was "ChatGPT sucks at Research" but that seemed unkind.
This observation is based on a number of informal experiments I've done lately. The basic format is to ask the AI a question to which I already know the answer.
The best example is "How many starships were built by Starfleet in the original Star Trek? What were their names?" because everyone knows the answer to that.
[What? You don't know the answer? It's TWELVE. Kirk says it explicitly in one episode. (About HALF of them were described as lost or visibly destroyed onscreen. It wasn't just the redshirts who regretted joining Starfleet.)]
Trying this identical question on Google produces the correct answer, provided you use your human brain to skim past the first few results.
My OpenAI transcript originally produced an answer that was incomplete and inaccurate. It produced the correct response only after I outright pointed it at an authoritative source.
You could quibble that my original question was open to interpretation. But, in an ideal world, the AI would appreciate that and respond with clarifying questions.
Here are my takeaways.
Sooo, don't ask ChatGPT a question to which you don't already know the answer.
However, if I wanted the (known) answer presented at some length, with background material and a clear, fluid presentation, I'd use ChatGPT and save myself the time of preparing that material.