Conversations with Chad
Yesterday morning, I decided to do a little experiment for entertainment purposes. As a long-time player of the New York Times game Wordle, I decided to see if ChatGPT-4 could help with a recent challenge. Read on…
Apparently, this request overloaded our little robot friend. Chad is now rolling up its sleeves and is determined to do a better job. Come on, Chad. You got this, buddy!
Wait, did it just tell me this answer provides a “decent selection”? Not even gonna go there. But there is another problem. It seems 3 of the 7 words it gave me were repeats, even though I had asked for new entries. Read on:
Ok. You promise? This is a genuine list of 6 new words that meet the criteria? Are you sure, Chad? Come on, buddy. Work with me here…
Unfortunately, there are some serious issues with this genuine list.
Are you seeing any issues with this corrected list? At this point, this isn’t funny anymore, and I am starting to lose confidence in Chad’s ability to carry out this seemingly simple task.
So, after several attempts, Chad got it right thanks to its advanced double-checking logic. Who is this “we”, anyway?
Still, I was curious why this simple activity created so much trouble for our AI buddy.
This really is something! First, let’s get one thing straight. ChatGPT doesn’t have hands. There’s no such thing as “manually creating” anything. The first algorithm it used failed, so it switched to another programmatic method. There’s no “manually” anything here…
Lastly, is it really blaming its errors on “human-like” methods? Perhaps it doesn’t understand irony yet. In one paragraph it says it is making “human-like errors”, and one paragraph later, it praises me, a mere human, for helping it reach the correct outcome.
Oh, don't worry, I'll be back with another episode of Conversations with Chad.