GPT-4o: OpenAI’s Enhanced Model to Improve the ChatGPT Experience
Discover the key updates in GPT-4o, now available to all ChatGPT users, which promises faster, smarter, and more natural AI interactions with enhanced voice, vision, and language processing abilities.
Launched in November 2022, ChatGPT is a generative AI chatbot and virtual assistant developed by OpenAI. Nearly two years after its initial release, ChatGPT has undergone several updates, including a customizable free version, GPTs, GPT-3.5 Turbo, and GPT-4 Turbo. Its image and voice recognition capabilities have also been enhanced.
This year, OpenAI introduced the latest update to ChatGPT: GPT-4o. In its announcement, OpenAI claims that the new model represents a significant step towards more natural human-computer interaction.
What is GPT-4o?
Launched in May 2024, GPT-4o is the latest ChatGPT update. The “o” in GPT-4o stands for “omni”, reflecting its broader capabilities compared to older ChatGPT models.
Since its official release, GPT-4o has become the default model for ChatGPT. Users on the free plan have full access to GPT-3.5 and limited access to GPT-4o, while users on the Plus plan get higher usage limits for GPT-4o, as well as early access to new features such as advanced data analysis, DALL-E image generation, and GPTs.
One of the most significant enhancements in GPT-4o is its ability to process all types of inputs and outputs, whether text, vision, or audio, end-to-end with a single neural network.
For example, previous ChatGPT models required three separate steps for voice input: one model to transcribe the audio to text, one to process the text with GPT-3.5 or GPT-4, and a third to convert the text back to audio.
OpenAI acknowledges that this pipeline lost valuable information before it ever reached the main intelligence source, GPT-4: the model could not directly observe tone, multiple speakers, or background noise. With GPT-4o, OpenAI addresses this by collapsing the multi-step operation into a single model that processes any type of input and output with the same neural network.
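To make the contrast concrete, here is a minimal sketch of that legacy three-step pipeline using the openai Python SDK. The file names and the specific model choices (whisper-1, gpt-4, tts-1) are illustrative assumptions, not a prescribed setup.

```python
# Legacy three-step voice pipeline (illustrative sketch).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 1: transcribe the user's audio to text.
with open("question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: run the transcribed text through the language model.
completion = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
answer = completion.choices[0].message.content

# Step 3: synthesize the text answer back into speech.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.write_to_file("answer.mp3")
```

With GPT-4o, the stated goal is for such an exchange to run through one model end-to-end, so details like tone and background sounds are no longer discarded at the transcription step.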
GPT-4o Key Updates
GPT-4o introduces several other significant enhancements that collectively enable more efficient, versatile, and high-quality interactions with the AI.
Improved Information Processing
As mentioned above, previous GPT versions chained multiple models to process audio input, losing a lot of important information along the way. GPT-4o simplifies this by using a single neural network, which lets it capture subtle details like the speaker’s tone or background noise, resulting in higher-quality responses.
Lower Latency in the New Voice Mode
GPT-4o responds faster in voice mode than previous models. DataCamp reports an average latency of 0.32 seconds for GPT-4o, roughly 9x faster than GPT-3.5 (about 2.8 seconds) and 17x faster than GPT-4 (about 5.4 seconds). This speed is close to average human response time in conversation, making real-time conversation with the AI model possible.
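Voice-mode latency is measured end-to-end on OpenAI’s side, but as a rough, hedged illustration, one can time the first streamed token of a text request with the openai Python SDK. This measures text time-to-first-token only, not the audio pipeline, and the prompt is arbitrary.

```python
# Rough time-to-first-token measurement for a text request
# (not the full audio pipeline).
import time
from openai import OpenAI

client = OpenAI()

start = time.perf_counter()
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Say hello in one word."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(f"First token after {time.perf_counter() - start:.2f}s")
        break
```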
Enhanced Vision Capabilities
In addition to smarter voice input, GPT-4o can understand and generate output based on visual inputs. Users can upload pictures or screenshots as part of a query and ask GPT-4o to answer a question shown in the picture, or to answer questions about the image itself. This lets users interact with the chatbot more flexibly.
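For developers, image input works through the same Chat Completions API. Here is a minimal sketch; the image URL and the prompt are placeholders, and a base64 data URL can be substituted for a hosted image.

```python
# Send an image alongside a text question to GPT-4o.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What question is shown in this screenshot?"},
                {
                    "type": "image_url",
                    # Placeholder URL; a base64 data URL also works here.
                    "image_url": {"url": "https://example.com/screenshot.png"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```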
Better Tokenization for Non-Roman Alphabets
Tokenization is the process of breaking text into smaller units for the model to process. GPT-4o’s new tokenizer represents text in non-Roman scripts with far fewer tokens than previous models. For example, a sample Gujarati sentence dropped from 145 tokens to 33, Urdu from 82 to 33, and Vietnamese from 46 to 30. This means queries in those languages are processed faster, and combined with the new voice mode, it enables near real-time speech translation.
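This change can be inspected directly with OpenAI’s tiktoken library, which ships both the GPT-4 encoding (cl100k_base) and the GPT-4o encoding (o200k_base). The Vietnamese sentence below is an arbitrary sample; exact counts vary with the text.

```python
# Compare token counts between the GPT-4 and GPT-4o encodings.
import tiktoken

old_enc = tiktoken.get_encoding("cl100k_base")  # used by GPT-3.5 Turbo / GPT-4
new_enc = tiktoken.get_encoding("o200k_base")   # used by GPT-4o

text = "Xin chào, hôm nay bạn thế nào?"  # arbitrary Vietnamese sample
print("cl100k_base:", len(old_enc.encode(text)), "tokens")
print("o200k_base: ", len(new_enc.encode(text)), "tokens")
```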
Final Thoughts
GPT-4o is OpenAI’s first model trained end-to-end across text, vision, and audio. In its announcement, OpenAI also acknowledged the limitations and risks associated with the new audio modalities.
While the advancements in GPT-4o are still early, OpenAI views it as a foundational step towards practical deep learning applications. By emphasizing its omni-capabilities, GPT-4o aims to be a more versatile AI, offering more reliable support for users across various fields.