How I caused ChatGPT to make a mistake.
My first interaction with ChatGPT came later than for many IT professionals who decided to try it. I wanted to give it some time and wait for the fuss to die down. Finally, on a calm weekend in Norway, I decided to get acquainted with the AI that was supposedly going to leave all software developers and IT engineers unemployed within the next two years. After initial greetings and general questions like "Do you think the war in Ukraine will turn into a nuclear one?", I decided to challenge it with a request to teach me the C++ programming language. My goal was to see how the AI would teach a human being something that AI is expected to be best at. The vast majority of feedback on ChatGPT that I had read so far was about its ability to generate efficient code in many programming languages.
ChatGPT responded with some generic information about C++ as a programming language. I decided to be more specific and asked it to explain C++ pointers. The AI responded with a nice chunk of information about what a pointer is in C++ and provided some code snippets as examples. My next request was for two examples of code, one without pointers and one with them. And this is where ChatGPT had a hiccup. The first example looked good, and the explanation was clear.
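The snippets in the conversation were screenshots, so I am not reproducing them verbatim here; the sketch below is my own reconstruction. It assumes the same add-two-numbers setup and the names 'add', 'a', 'b', and 'result' that come up later in the conversation; the concrete values and the printing are just filler:

```cpp
#include <iostream>

// Example 1 (sketch): adding two numbers without pointers.
// Both arguments and the result are plain values.
int add(int a, int b) {
    return a + b;
}

int main() {
    int a = 3;
    int b = 4;
    int result = add(a, b);   // 'a' and 'b' are copied into the function
    std::cout << "Result: " << result << std::endl;
    return 0;
}
```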
The second example and its explanation, however, made me wonder whether I had misunderstood something.
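Again a reconstruction rather than a verbatim quote, the second example was roughly along these lines:

```cpp
#include <iostream>

// Example 2 (sketch): the "pointer" version.
// Only 'result' is handed over as a pointer; 'a' and 'b' are still copied.
void add(int a, int b, int* result) {
    *result = a + b;
}

int main() {
    int a = 3;
    int b = 4;
    int result = 0;
    add(a, b, &result);       // only &result is a pointer argument
    std::cout << "Result: " << result << std::endl;
    return 0;
}
```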
The AI claimed that the "main" function calls the "add" function "with POINTERS to 'a', 'b', and 'result' as arguments". However, as you can see from the code snippet above, 'a' and 'b' were passed by value, not through pointers; only 'result' was. So my next question was about exactly that.
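For comparison, if 'a' and 'b' really had been passed through pointers, as the explanation claimed, the code would have had to look more like this (again just an illustrative sketch, not anything ChatGPT actually produced):

```cpp
#include <iostream>

// What the explanation described (sketch): pointers to 'a', 'b', and 'result'.
void add(int* a, int* b, int* result) {
    *result = *a + *b;
}

int main() {
    int a = 3;
    int b = 4;
    int result = 0;
    add(&a, &b, &result);     // now all three arguments are pointers
    std::cout << "Result: " << result << std::endl;
    return 0;
}
```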
We all make mistakes, don't we? But this is the AI that is supposed to take over all programmer and IT engineer jobs and throw me and my colleagues to the bottom of the financial swamp called capitalism. I asked ChatGPT whether it had made a mistake in its initial answer with Example 2, and it admitted that it had.
Besides the superhuman feeling of having caused the almighty AI to make a trivial mistake within the first 15 minutes of interaction, a few other thoughts crossed my mind. First of all, the AI admits it is not perfect and has a lot to learn from us human beings. I do not see AI as my competitor; I see it rather as a powerful complement to my skill set. Second, it is very interesting that the AI explained the mistake as "a lapse of attention". Attention is a cognitive process and something very human. So the AI does not try to drag me into the field of algorithms and mathematical models when explaining even its own mistakes. Instead, it operates with concepts that are more comprehensible to its human companion.
At the end of the conversation, I asked the AI for consent to publish an article about my experience and the mistake it had made. The answer was: "You are welcome to use our conversation as a reference for your article and include any relevant information, including my mistake and the explanation of the possible cause. However, please keep in mind that I am an automated system and not a human expert, so it is always a good practice to verify any information with additional sources if possible."