ChatGPT-4o: The Next Evolution in Interactive AI with Advanced Vision and Listening Capabilities
Exciting News from OpenAI: Yesterday, I had the pleasure of watching OpenAI's CTO, Mira Murati, unveil the revolutionary GPT-4o ('four-oh') model, boasting groundbreaking AI interaction capabilities: it can now see, hear, and speak. Stay tuned as it rolls out to everyone over the coming weeks, completely free of charge, accessible via desktop and smartphone, and complete with a sleek new user interface.
GPT-4o marks a significant leap forward in AI capabilities, seamlessly integrating text, vision, and audio interactions into its reasoning. The demos were very impressive: lightning fast, and conveyed with a voice that Alan Turing couldn't have told apart from a human's (albeit a Californian one). It understands emotion and facial expressions, and can talk back, translating 50 different languages "as spoken by 97% of the world's internet population". So, yes, making AI available to all (restated as OpenAI's mission).
The demos showcased GPT-4o's ability to recognise emotions and objects through your camera, interpret handwritten messages (including a badly drawn 'I love ChatGPT'), and answer questions about what a graph was describing. From explaining complex code in lay terms to providing personalised assistance, GPT-4o was shown to make interacting with AI very simple. More than that: enjoyable. OpenAI is working with governments and organisations to ensure safety is a key element built into its persona.
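For the technically curious, here is a minimal sketch of what asking GPT-4o about an image looks like from a developer's side, using OpenAI's Python SDK (v1.x). The model name, question, and image URL are illustrative assumptions on my part; this is not a reconstruction of how the live demo was wired up.

    # Sketch: send a text question plus an image to GPT-4o and print the answer.
    # Assumes OPENAI_API_KEY is set in the environment; the image URL is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is this graph describing?"},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/chart.png"}},
                ],
            }
        ],
    )

    print(response.choices[0].message.content)

The same endpoint accepts plain text, images, or a mix of both in one message, which is the "seamless integration" the launch emphasised.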
The effect of the new GPT-4o? It will swell the 100 million people who already interact regularly with ChatGPT, and you'll probably see GPT-4o integrated into cars and other appliances given its wide abilities. Its voice interaction is so compelling that even my aged mum would hold, and enjoy, great conversations with 'her'. You'll use it to practise for interviews, to sing the kids a lullaby, or to watch your golf swing and suggest improvements.
You'll use it as your assistant, providing an educated second opinion on many aspects of your life, in whatever way you want to interact: talk to it, draw a diagram, or turn on your camera and let it 'see' the scenario you are in.
There are chargeable options which will be faster and have 5x greater capacity. It's believed that free-tier GPT-4o will revert to GPT-3.5 once it runs out of steam.
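To picture what that fallback might feel like from a developer's seat, here is a rough sketch that tries gpt-4o first and drops to gpt-3.5-turbo when the rate limit bites, again using OpenAI's Python SDK (v1.x). The client-side fallback loop is my own illustration of the rumoured behaviour, not a documented OpenAI feature.

    # Sketch: try gpt-4o first; if capacity is exhausted, fall back to gpt-3.5-turbo.
    # The fallback loop itself is illustrative, not OpenAI's documented behaviour.
    from openai import OpenAI, RateLimitError

    client = OpenAI()

    def ask(prompt: str) -> str:
        for model in ("gpt-4o", "gpt-3.5-turbo"):
            try:
                response = client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": prompt}],
                )
                return response.choices[0].message.content
            except RateLimitError:
                continue  # this model has run out of steam; try the next one
        raise RuntimeError("All models are rate-limited; try again later.")

    print(ask("Summarise the GPT-4o announcement in one sentence."))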
GPT-4o will also increase demand for Nvidia's newest AI semiconductors, and its undoubted popularity will place even greater demands on data centres' already daunting water and power supplies. But yes, you are going to love it. Mira admitted at the end that the demo was "only possible due to the availability of the most advanced GPUs" (computing power), so will the rest of us have a lesser experience? Find out in a few short weeks.
So, no AGI, but this is, after all, "just" a 'Spring Update'. Mira hinted at "progress towards the next frontier and the next big thing"... aka GPT-5, and AGI?
#GPT-4o #AI
Comment (6 months ago): Do you think this has been rushed out because there are further delays with training and safety guardrails on GPT-5? OpenAI have been losing out as other models, including "free" models, were launched that outperformed GPT-4. Sora was the press darling but has yet to see the true light of day. I do appreciate that this is a "new" model, but it's still fundamentally built on the GPT-4 architecture. Also, OpenAI were crafty releasing this the day before Google's big developer conference. I have read that this new model now leads the standard AI benchmark tests, and that the mysterious "gpt2" chatbot that appeared on an AI review website recently, to much hype, was actually GPT-4o in disguise, getting some real-world testing incognito. The proof of the pudding is in the eating, as they say... but at least even "free" users will be able to have some sort of limited interaction.