AI in the News: OpenAI's o1 Launch, Fei-Fei Li's Vision for Spatial Intelligence, and Military AI Readiness

Hi! Here's your Friday, September 13, 2024 edition of AI in the News. I've gathered quite a few interesting articles that are perfect for bookmarking and diving into over the weekend. If you enjoy what you read, don’t hesitate to share it with your friends, and give me a shout-out when you pass along anything you find here!


OpenAI o1

OpenAI introduced a new AI model yesterday, o1, claiming it can 'reason' and tackle complex problems in science, coding, and math more effectively than previous models. I’ve selected two articles for you: one to help you understand what the model is about, and the other to offer further reflection.

OpenAI releases o1, its first model with ‘reasoning’ abilities - The Verge

  • OpenAI has released o1, a model focused on coding and math, though it is not as capable as GPT-4o in many other areas.
  • The new model uses a different training methodology, including reinforcement learning.
  • OpenAI aims to develop autonomous systems with “reasoning capabilities,” stating, “If a model is capable of more than pattern recognition, it could unlock breakthroughs in areas like medicine and engineering.”
  • In the API, o1-preview is $15 per 1M input tokens and $60 per 1M output tokens. For comparison, GPT-4o costs $5 per 1M input tokens and $15 per 1M output tokens.
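
To put those prices in perspective, here is a minimal back-of-the-envelope sketch in Python. It hard-codes the per-million-token rates quoted above and uses a made-up request size for illustration; prices can change, so treat this as an estimate rather than a billing reference.

    # Rough per-request cost comparison at the rates quoted above (USD per 1M tokens).
    PRICES_PER_1M = {
        "o1-preview": (15.00, 60.00),  # (input, output)
        "gpt-4o": (5.00, 15.00),
    }

    def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
        """Estimate the dollar cost of a single request for the given model."""
        in_price, out_price = PRICES_PER_1M[model]
        return (input_tokens / 1_000_000) * in_price + (output_tokens / 1_000_000) * out_price

    # Hypothetical request: a 2,000-token prompt with a 1,000-token answer.
    for model in PRICES_PER_1M:
        print(f"{model}: ${estimate_cost(model, 2_000, 1_000):.4f}")

One wrinkle worth noting: o1 also generates hidden reasoning tokens that are billed as output, so real-world o1 costs tend to run higher than this simple comparison suggests.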

OpenAI’s Big Reset - The Atlantic

  • "Perhaps the most important consequence of these longer processing times is not technical or financial costs so much as a matter of branding. “Reasoning” models with “chains of thought” that need “more time” do not sound like stuff of computer-science labs, unlike the esoteric language of “transformers” and “diffusion” used for text and image models before."

  • "Instead, OpenAI is communicating, plainly and forcefully, a claim to have built software that more closely approximates our minds. Many rivals have taken this tack as well."

"The language of humanity might be especially useful for an industry that can’t quite pinpoint what it is selling. Intelligence is capacious and notoriously ill-defined, and the value of a model of “language” is fuzzy at best."

Deep Dive

Microsoft’s Hypocrisy on AI - The Atlantic

  • Microsoft is promoting AI as a solution to climate issues while simultaneously marketing its technology to fossil-fuel companies like ExxonMobil and Chevron, raising concerns about hypocrisy in its environmental commitments.
  • Internal documents reveal that Microsoft has sought to optimize oil and gas production through AI, with executives noting the potential for significant revenue opportunities in the fossil-fuel sector, despite the company's public climate goals.
  • Critics within Microsoft argue that the company's partnerships with fossil-fuel companies contradict its sustainability efforts.

The Godmother of AI Wants Everyone to Be a World Builder - Wired

  • Fei-Fei Li is cofounding World Labs to develop spatial intelligence, aiming to create systems that can construct immersive worlds with realistic physics and logic, despite skepticism about the AI industry's current state.
  • Li emphasizes the need for a new generation of AI models, stating, “The physical world for computers is seen through cameras, and the computer brain behind the cameras. Turning that vision into reasoning, generation, and eventual interaction involves understanding the physical structure, the physical dynamics of the physical world.”
  • World Labs plans to first build a deep understanding of three-dimensionality and physicality before advancing to augmented reality and robotics, with the potential to enhance technologies like autonomous vehicles and humanoid robots.

This Chatbot Pulls People Away From Conspiracy Theories - The New York Times

  • The DebunkBot, an AI chatbot, effectively persuades users to abandon conspiracy theories, with participants' belief ratings dropping by an average of 20% after conversations.
  • "It is the facts and evidence themselves that are really doing the work here," said David Rand, highlighting the importance of personalized information in debunking beliefs.
  • Researchers are exploring real-world applications for the chatbot, such as integrating it into forums or doctor's offices to counter misinformation about vaccines.

Opinion - A.I. Is Changing War. We Are Not Ready. - The New York Times

  • The U.S. military is unprepared for the rapid integration of AI-powered autonomous weapons, as evidenced by the Ukrainian military's withdrawal of advanced tanks due to drone attacks, according to the authors.
  • The Pentagon continues to invest heavily in outdated legacy systems, with only a small fraction of its budget allocated to innovative technologies like the Replicator initiative, which accounts for just 0.059% of defense spending.
  • "The history of failure in war can almost be summed up in two words: Too late," warns the article, emphasizing the urgent need for the U.S. to adapt its military technology to compete with adversaries like China.

Briefly Noted

Google’s new tool lets large language models fact-check their responses

Alibaba’s Taobao shopping app launches AI-powered English version in Singapore, jumps to first place in Apple’s App Store
