The AI arms race heats up: Google's Gemini challenges ChatGPT for the crown
Siva Koppula
Enabling Digital transformation through Digital Twins in the built environment
The past year has been a whirlwind in the world of AI, one in which we all learned a new term, ChatGPT, and the pace shows no signs of slowing down. Just before Christmas, Google unveiled its highly anticipated Gemini, a powerful language model aiming to dethrone OpenAI's ChatGPT as the king of generative AI.
The battle lines are drawn, and early benchmarks suggest a thrilling fight ahead. Google's blog (https://blog.research.google/2022/05/language-models-perform-reasoning-via.html) paints a picture of Gemini outperforming ChatGPT4 on several key parameters, even becoming the first model to surpass human experts on MMLU (Massive Multitask Language Understanding).
However, when we delve into text-based interactions, the performance dynamics between Gemini and ChatGPT become more nuanced. Here, Gemini edges out ChatGPT with higher accuracy in factual language tasks and question answering on most parameters. While on most parameters the difference is no more than a couple of percentage points, I have highlighted two areas below where the gap is widest: over 7 percentage points.
Common sense reasoning (HellaSwag): For an AI application, I think this is the most significant parameter. While this might seem like a technical term, its impact is far-reaching. This ability to understand and apply everyday knowledge is crucial for AI to interpret context, make sound predictions, and provide human-like responses. Surprisingly, ChatGPT4 holds a 7.5% lead over Gemini in this domain, something we should keep an eye on.
Python code generation (HumanEval): On the other hand, Gemini dominates when it comes to generating functional Python code, outperforming ChatGPT4 by 7.4%. This is undoubtedly impressive, but we must remember that Python expertise caters to a niche audience. In essence, HumanEval is about solving predefined coding problems accurately, while Natural2Code is about translating free-form natural-language descriptions into working code. For a general user, Natural2Code (where the gap between Gemini and ChatGPT shrinks to 1%) might be more indicative of an AI's ability to assist with a broad range of practical programming tasks than HumanEval.
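To make the distinction concrete, here is a sketch of the kind of task HumanEval poses: the model receives only a function signature and docstring, and must generate a body that passes hidden unit tests. The example below is modeled on a well-known problem from the public HumanEval set; the solution body and the specific test values shown are illustrative.

```python
# HumanEval-style task: the prompt is the signature plus docstring;
# the model must produce the function body.
def has_close_elements(numbers: list[float], threshold: float) -> bool:
    """Return True if any two numbers in the list are closer to each
    other than the given threshold."""
    # A straightforward pairwise comparison is enough to pass the tests.
    for i, a in enumerate(numbers):
        for b in numbers[i + 1:]:
            if abs(a - b) < threshold:
                return True
    return False

# The benchmark then scores the generated body against unit tests like:
assert has_close_elements([1.0, 2.0, 3.0], 0.5) is False
assert has_close_elements([1.0, 2.8, 3.0], 0.3) is True
```

A Natural2Code-style prompt, by contrast, would describe the same goal in free-form prose ("tell me if any two numbers in this list are nearly equal") without handing the model a ready-made signature, which is closer to how a non-developer would actually ask.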
I think for the "average user," common sense reasoning arguably carries more weight than Python code generation. It underpins everyday interactions, from language translation to question answering and recommendations. A strong grasp of everyday logic helps AI seamlessly integrate into our lives, anticipating needs and responding in a way that aligns with human expectations.
Python code generation, while valuable for developers, remains specialized. It shines in automating tasks and streamlining software development, but its impact is limited to a specific user base.
The takeaway? As AI strides into the future, its ability to navigate the nuances of everyday life holds paramount importance. While both Gemini and ChatGPT4 showcase impressive capabilities, stronger common sense reasoning might prove the difference-maker for widespread user adoption. The race is on, and it's going to be fascinating to see how these AI titans evolve and shape the way we interact with technology in the years to come. Exciting times ahead!!