IMO Weekly Highlights - 02/19/2024
IMO picked up a few notable pieces of AI and tech news from the past week. This week, OpenAI unveiled Sora, a video AI model capable of generating 60-second clips.
OpenAI is not content with just being known as the ChatGPT, or even the LLM, company: it unveiled a demo of Sora, its new AI text-to-video generation model, with co-founder and CEO Sam Altman posting on X (formerly Twitter) that it was a “remarkable moment.”
While the product is not yet officially available to the masses (Altman said in his post that OpenAI was “starting red-teaming,” or adversarial testing of the model’s security defenses, flaws, and potential misuses), he did note that it is being made available to a “limited number of creators,” with public expansion to come at a later date.
On Feb 15th, Google unveiled Gemini 1.5, the latest iteration of its conversational AI system, touting major advances in efficiency, performance, and long-form reasoning capabilities.
The new system, detailed in a blog post by Google AI chief Demis Hassabis, incorporates significant architectural improvements that allow its core model, Gemini 1.5 Pro, to perform on par with the company’s largest model, Gemini 1.0 Ultra, while using less compute. Gemini 1.0 Ultra itself was introduced only last week.
Mark Zuckerberg, the founder and CEO of Facebook, which became Meta Platforms back in 2021, has some thoughts about Apple’s new Vision Pro spatial computer, and he’s not afraid to call the other company out for what he sees as deficiencies, especially compared with his own company’s rival Quest 3 headset.
On Feb 14th, Zuckerberg posted on his Instagram account a roughly three-and-a-half-minute video, recorded in a candid and only lightly produced manner, with him seated on his couch.
Nvidia is introducing Chat with RTX to create personalized local AI chatbots on Windows AI PCs.
It’s the latest attempt by Nvidia to turn AI on its graphics processing units (GPUs) into a mainstream tool used by everyone.
The new offering, Chat with RTX, lets users run personalized generative AI directly on their local devices, showcasing the potential of retrieval-augmented generation (RAG) and Nvidia’s TensorRT-LLM software. Because inference happens locally, it avoids consuming data center compute and keeps conversations private, so users don’t have to worry about their AI chats leaving their machine.
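For readers unfamiliar with the RAG pattern that Chat with RTX builds on, here is a minimal, illustrative sketch: retrieve the most relevant local document for a query, then prepend it as context to the prompt sent to a language model. The function names and the crude word-overlap scoring are assumptions for illustration only, not Nvidia’s actual API or retrieval method (real systems use vector embeddings).

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# All names and the scoring scheme are illustrative assumptions,
# not the Chat with RTX / TensorRT-LLM implementation.

def score(query: str, doc: str) -> int:
    """Crude relevance score: number of words shared between query and doc."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the local document most relevant to the query."""
    return max(docs, key=lambda d: score(query, d))

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user query with retrieved local context for the LLM."""
    context = retrieve(query, docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Example: the user's private files never leave the machine; only the
# retrieved snippet is placed into the locally-run model's prompt.
local_docs = [
    "Quarterly report: GPU revenue grew on strong AI demand.",
    "Meeting notes: kitchen renovation budget approved.",
]
print(build_prompt("What drove GPU revenue growth?", local_docs))
```

In a production pipeline the keyword overlap would be replaced by embedding similarity over a vector index, and the assembled prompt would be fed to a GPU-accelerated local model rather than printed.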