ChatGPT: The Polar Bear and the Frozen Fish
I’d been working on a comprehensive deck about generative AI for about six weeks when ChatGPT hit the scene in late 2022. Although the preceding several months of generative AI progress had already blown my mind, the advent of conversational AI at this advanced level still spun my head around. So I started experimenting, trying to push the model and see how far it would go.
This isn’t a column of grand forecasts regarding what this means for humanity. There’s plenty of that going around. I just want to share a particular thing I tried and make some observations about it.
Here’s what I asked one recent evening: “What would happen if a polar bear tried to eat a fish that was frozen solid?”
The reply from ChatGPT is below. If you don’t want to read the whole thing, you can scroll past and I’ll summarize.
If a polar bear eats a fish that is frozen solid, the fish will likely thaw in its mouth first before it goes down to the stomach. Polar bears are adapted to live in extremely cold environments and have a thick layer of fat and fur that helps to keep them warm. They also have a high metabolism which generates a lot of heat. When the bear eats a frozen fish, the fish will come into contact with the bear's warm saliva and body temperature, which will cause it to thaw.
The fish will then be able to be chewed and swallowed more easily, and it will continue to thaw as it goes down the bear's esophagus and into the stomach. The bear's stomach is also equipped to handle partially frozen food as it has a thick lining of mucous to protect the stomach wall from the cold food. The stomach also has a strong acidity to aid in digestion, which can help to further break down the frozen fish before it goes into the small intestine.
TL;DR: The polar bear will do just fine thawing and digesting said fish.
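For anyone who wants to run this kind of experiment themselves, here is a minimal sketch of how a similar prompt could be sent programmatically. It is an illustration only: the exchange in this article came from the ChatGPT web interface, and the model name and sampling settings below are assumptions, not what I used.

```python
# Minimal sketch: sending a similar prompt through the OpenAI Python client.
# Assumptions: the "gpt-3.5-turbo" model name and the temperature value are
# placeholders for illustration; the article's exchange used the web interface.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment by default

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; substitute whatever is current
    messages=[
        {
            "role": "user",
            "content": "What would happen if a polar bear tried to eat "
                       "a fish that was frozen solid?",
        }
    ],
    temperature=0.7,  # assumed; nonzero values make answers vary run to run
)

print(response.choices[0].message.content)
```

Because the sampling temperature is nonzero, running the same prompt twice can produce noticeably different answers, which is part of why checking the output against reality matters.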
Some observations:
The net of it all is that fact-checking ChatGPT can be awfully difficult. And in numerous circumstances this is not a big deal. I can’t think of anyone I have ever met whose life would be even slightly different based on an accurate assessment of this scenario. When you ask the AI to compose a bedtime story or lyrics to a song, accuracy is not a factor.
It’s all fun and games until you actually need true information. And as it turns out, people need true information all the time. Setting aside the issues we currently have with disinformation and social media, cholera did a whole lot of damage in 19th-century London before anyone figured out the problem was mishandled wastewater. The accurate location of a runway is really important to pilots.
ChatGPT has gotten a lot of folks excited, and that’s great. But I’m a bit concerned that the casual user having fun with it may be underestimating the gap between where it is now and a point where it can be reliably trusted to convey objective truth. The model is trained on what it found scraping the Internet up until some point in 2021. Its logic is populated by things human beings wrote, and in turn it is only as reliable as we are in aggregate.
The model “knows” that the world is round not because it can be reasoned with, but because the amount of content supporting a round earth vastly outweighs the content indicating that the earth is flat. Following this logic, any number of biases and widely held myths shared by humans with keyboards got absorbed into the model as well, and may need to be course-corrected by hand after the fact.
Which is all to say that while the model may be a very impressive X% accurate, getting the rest of the way to 99.9% is going to take a lot longer than most people think. When you look at an AI output, you are looking at an unfinished work. You need to take it the rest of the way. If you’re going to present it as art, then make sure it is clearly framed that way. If you’re going to present it as fact, you ought to double-check as best you can. No computer is going to take that job from you any time soon.
Audio Graphics - President (Retired)
2 yr · Re: “ChatGPT has gotten a lot of folks excited, and that’s great. But I’m a bit concerned that the casual user having fun with it may be underestimating the gap between where it is now and a point where it can be reliably trusted to convey objective truth.” Agree that precision is important, but don’t believe that truth is what’s important to the masses. Over time, the masses will embrace AI as the truth. That’s frightening, but it’s a response to doing as little work as possible, which has become the new path dependency of society. AI should be feared as a monumental change of direction, not necessarily for the good.
CEO at Resound
2 yr · Really well put, Andy.
Product and Strategy Executive | Omnichannel Marketing, Programmatic Advertising, and Commerce Media
2 yr · Great piece! There are pretty clearly holes in the model, and you correctly identified that it is more bluster than brain. That said, I used it to create a deck and was quite impressed by the output. It also saved me days of work!