ChatGPT is not great at non-linear logic
Stephen Redmond
AI Visionary | Head of Data Analytics and AI at BearingPoint Ireland. Delivering real business value to our clients by harnessing the transformative power of data and AI.
One of my weekend relaxation activities is to have a go at the logic puzzle shared by Irish Mensa on their Twitter page. These types of puzzles require a mostly non-linear approach, in that you can't really apply a straightforward, step-by-step algorithm to them. ChatGPT struggles with this type of task.
The first puzzle this weekend asks us to work out which two consonants, along with the vowels already supplied, could be used to solve the crossword. The words across and down must be common English words, and no word can be repeated.
This does require a bit of a search methodology, but it also needs you to stop and go back if the letters don't seem to work: there is a fair bit of back and forth and trial and error.
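To make that trial and error concrete, here is a minimal sketch in Python of the kind of backtracking search involved. This is purely my own illustration, not the puzzle setter's method: the grid template and the is_valid_fill checker are hypothetical stand-ins for the actual puzzle.

```python
from itertools import combinations
import string

VOWELS = set("AEIOU")
CONSONANTS = [c for c in string.ascii_uppercase if c not in VOWELS]

def solve(grid_template, is_valid_fill):
    """Try each pair of consonants; abandon a pair as soon as it fails.

    grid_template -- the crossword with the vowels fixed and blanks for
                     the two unknown consonants (hypothetical structure).
    is_valid_fill -- hypothetical checker: fills in the blanks and
                     verifies that every across and down entry is a
                     common English word and that no word is repeated.
    """
    for pair in combinations(CONSONANTS, 2):
        if is_valid_fill(grid_template, pair):
            return pair  # first pair that satisfies every constraint
    return None          # search space exhausted: no solution exists
```

The point is the control flow: when a candidate fails, the search abandons it and moves on to the next one, which is exactly the step ChatGPT couldn't take.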
ChatGPT with GPT-4 was pretty poor. It would take a guess at what the letters were and, even when later words broke the constraints, it couldn't backtrack and try something new. Eventually, after several multi-step attempts to help it along, it threw its hands up and stated that the puzzle could not be solved within the constraints.
The Sunday puzzle was based on positioning symbols in a grid to match the numbers provided, with constraints that symbols cannot repeat horizontally or vertically. It really requires working out a good starting strategy and then logically working out the horizontal and vertical positions step by step.
Again, ChatGPT was not good at this. It went straight in with incorrect assumptions and produced a final result that didn't work. Even when the mistake was pointed out, it couldn't rectify it. In fact, I went so far as to give it a starting clue (the three Cs on the middle row must mean that the pattern is C_C_C), but even that didn't help.
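For comparison, this is roughly what that step-by-step placement logic looks like as a classic backtracking fill, again as a hedged sketch in Python. The real puzzle also has number constraints along the rows and columns, which I've omitted here; this only enforces the no-repeats rule and respects pre-filled clues such as the C_C_C row.

```python
def place(grid, symbols, r=0, c=0):
    """Fill an n x n grid so no symbol repeats in any row or column.

    grid    -- list of lists; None marks an empty cell, and pre-filled
               cells (e.g. the C_C_C clue row) are left untouched.
    symbols -- the set of symbols available to place.
    Returns True if the grid was completed consistently.
    """
    n = len(grid)
    if r == n:
        return True                        # every cell placed consistently
    nr, nc = (r, c + 1) if c + 1 < n else (r + 1, 0)
    if grid[r][c] is not None:             # respect the given clues
        return place(grid, symbols, nr, nc)
    for s in symbols:
        row_ok = s not in grid[r]
        col_ok = all(grid[i][c] != s for i in range(n))
        if row_ok and col_ok:
            grid[r][c] = s
            if place(grid, symbols, nr, nc):
                return True
            grid[r][c] = None              # undo the placement and retry:
                                           # this is the backtracking step
    return False                           # nothing fits; caller must revise
```

Notice the explicit undo step: hitting a dead end forces the solver to revisit an earlier choice, which is precisely the behaviour ChatGPT failed to show even with the clue supplied.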
ChatGPT has some extraordinary capabilities. It can work out some quite complex tasks and give some stunning, humanlike results. But it is still not perfect. Because the model works by predicting the next token, it struggles when something it generated earlier was incorrect. It won't reconsider that a previous step might have been wrong; it just ploughs on regardless.
Actually, maybe that is a very human feature!
Good to know that humans are not completely redundant for solving difficult problems. ChatGPT is a great tool to assist with that kind of problem solving, but we need to be cognisant of its limitations.