ChatGPT Isn't Always Easy
We all know the craze that is ChatGPT and generative AI. Many of us have likely used it or toyed with it some - asking it to help rephrase something on your resume, to solve word problems in math class (or life), or to make some task faster and easier than what you're used to. A few days ago, I tried to use it to help with my pickleball hobby, and let me tell you, it didn't go as imagined. I thought I'd share my experience and thoughts (or frustrations) on this little 6-hour endeavor. Note that I'm using the free version of ChatGPT (GPT-3.5).

Pickleball brackets - easy, right?

My tasks and ideas were simple. I'm responsible for organizing a series of pickleball matches for an upcoming gathering. I was provided some printed sheets I could easily fill out to organize everything. But I work in IT, and I hate redoing work and carrying around papers. I know what I'll do: I'll just have ChatGPT make the brackets for me, and then I can change everything up with a copy/paste and a click of a button!

Complexity is in the mind of the executor

At no point while digging into this did I believe things were going to be hard. And yes, I'm not a ChatGPT expert by any stretch of the imagination, but I could easily describe what I needed:

I need to create a bracket of pickleball doubles matches where there will be 8 rounds of matches on 4 courts with 17 people. There should be a different player sitting out every round and I want the player sitting out to be printed on a separate line so I can see who is sitting out

Seems pretty simple overall, right? The reality is that a machine finds this very complicated. Seventeen isn't an even number, and ChatGPT defaults to sticking to a pattern or program that matches everyone up so every player plays with someone new (commendable). But having to sub a person in and out each round mucks that all up. In our human minds, we just sub people in and out and keep things going, no big deal. In the end, I was never able to get a direct output with what I was looking for, even when I tried to guide it through every step and round, prompt by prompt. I got close, and the rest I had to do manually to eliminate players being missed or duplicated each round. It's amazing how much more complex this becomes just by adding one extra person, and how rigid things turn out to be when they only work easily with the "right" number of people. And even the standard roster of 16 people proved to have misses that just don't make sense.
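For the curious, here is a minimal sketch in Python (which ChatGPT sometimes handed me instead of the schedule itself) of the kind of rotation I was after. The player names are placeholders, and the simple rotating sit-out approach is my own assumption, not ChatGPT's output - notably, it does not guarantee that everyone gets a brand-new partner every round, which is exactly the constraint that makes the problem harder than it looks.

# A rough sketch of the schedule I was trying to get: 17 players, 8 rounds,
# 4 courts, one person sitting out each round. Player names are placeholders.
# This naive rotation does NOT enforce "new partner every round."

ROUNDS = 8
COURTS = 4
players = [f"Player {i + 1}" for i in range(17)]

def build_schedule(players, rounds, courts):
    schedule = []
    for rnd in range(rounds):
        sit_out = players[rnd % len(players)]          # rotate who sits out
        active = [p for p in players if p != sit_out]  # 16 players remain

        # Shift the active list each round so the matchups change
        rotation = active[rnd:] + active[:rnd]

        games = []
        for c in range(courts):
            group = rotation[c * 4:(c + 1) * 4]        # 4 players per court
            games.append(((group[0], group[1]), (group[2], group[3])))
        schedule.append({"round": rnd + 1, "sit_out": sit_out, "games": games})
    return schedule

for entry in build_schedule(players, ROUNDS, COURTS):
    print(f"Round {entry['round']} (sitting out: {entry['sit_out']})")
    for court, (team_a, team_b) in enumerate(entry["games"], start=1):
        print(f"  Court {court}: {team_a[0]} & {team_a[1]} vs {team_b[0]} & {team_b[1]}")

Getting from this naive rotation to a schedule where nobody repeats a partner is a genuine combinatorial puzzle (a cousin of the "social golfer" scheduling problem), which I suspect is part of why the model kept stumbling.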

Wording is everything

English (especially American English) is often considered one of the more difficult languages to learn because it's so nuanced. Now take that into the machine world, where everything is a 1 or a 0. The slightest change in phrasing or wording can get you a completely different interpretation. In that sense, the challenge is a very human one: how often do you talk to another person about what they want, and they walk away with a completely different interpretation of what you said? No one can read your mind and automatically understand you all of the time, and the machines are probably worse yet, because they don't understand us the way humans of a similar culture do.

Consistency is surprisingly lacking

I was really surprised how frequently the results differed every time I started a new session in ChatGPT and pasted in a previously used inquiry. Over time I learned this was occasionally because I had built up context in my previous session before asking the question I pasted. However, there were also numerous times I would ask the exact same starting question and get a different output - maybe I'd get Python code instead of output text, or I'd get Player ABC instead of Player 123. For something coming from a machine, it was really surprising how inconsistent it was. As someone in IT who is used to consistency, this was shocking and a bit unnerving. It becomes all the more important to check the output and make sure it's right, which isn't something people are used to doing. It took another pair of eyes to help me catch that some people were being assigned to two games in a round while others were left out entirely. The error rate greatly increases, it seems.

Does this change my professional view of AI?

In a word, no. When blockchain was all the hype and rage of the day along with Bitcoin, I wasn't on the bandwagon. I really struggled to come up with business cases where blockchain could make a fundamental and, more importantly, a financial difference in the business world. I still struggle to find them. With AI, I see much more room for possibility, but the business cases are still lacking today. I think my experience here highlights some of the challenges yet to be addressed to really make interactive AI flourish. It's time we recognize just how special human interaction is: the ability of our minds to quickly process and conclude things that make a difference in an experience, and to recognize and consume not just words but emotions and culture, so that we understand one another. AI will be challenged to truly make a person feel heard and understood. How many times do I have to see "My apologies for the confusion" or "I appreciate your patience" before I go numb or fly into a fit of rage? How long will I tolerate getting a long set of output only to painstakingly comb through it to make sure the data is correct, consistent, and stable? How does that make our lives "easier"? How will it help me or anyone else? I'm fortunate my curiosity and drive to reach success were so strong this time around. There's no way I want to spend another 6 hours doing this all over again. But when it comes to AI, I really sense it's a matter of time and investment until it reaches a point where we get something we can call our assistant.
