Some wrong notes from Google Bard
I had a quick play today with Google Bard, Google's AI competitor to ChatGPT. I was surprised that it didn't seem any better at simple puzzles (in fact maybe a little worse).
A few months ago I wrote a post about ChatGPT's rather surprising (to me at least) inability to reason accurately about something very simple - a tower of three bricks called A, B and C.
I tried the same sorts of questions with Google Bard:
It apparently misunderstood the meaning of "A is above B but below C", and said the order going from top to bottom is C, B, A while also saying that C must be on the bottom. Oh dear.
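For the record, the constraints pin down exactly one stack. Here's a quick Python sketch (mine, not Bard's) that brute-forces all six orderings, listing bricks from top to bottom so that "above" just means "earlier in the tuple":

```python
from itertools import permutations

# A stack is a tuple listed top-to-bottom, so "X above Y" means
# X appears at an earlier index than Y.
def satisfies(stack):
    above = lambda x, y: stack.index(x) < stack.index(y)
    return above("A", "B") and above("C", "A")  # A above B, A below C

print([s for s in permutations("ABC") if satisfies(s)])
# [('C', 'A', 'B')]  -- C on top, A in the middle, B on the bottom
```

The one valid answer is C, A, B from top to bottom, which is neither of the two mutually contradictory things Bard claimed.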
So I thought I'd make my "A is above B but below C" clearer:
Hmm... that's even worse. Maybe slightly salty at being wrongly accused of being contradictory, I thought I'd try being patronising:
Well, I let the passive-aggressive "If you have any other questions, please let me know" slide, and went for the jugular:
Well, I give up. I feel slightly guilty, like maybe I'm using it the wrong way. But it's not like I'm asking it trick questions or anything obscure.
I do realize that its forte is nicely regurgitating things it has read, rather than reasoning, and that there are doubtless productive ways to use this (and ChatGPT), but I think it comes down to trust. I find it hard to put my faith in the answers of an "artificial intelligence" that is so clearly cretinous at basic intelligence tasks.