The AI Spotlights

Meta releases the biggest and best open-source AI model yet

Back in April, Meta teased the AI industry with an ambitious promise: an open-source model that rivals the top private models from companies like OpenAI. Today, that promise has been fulfilled with the release of Llama 3.1, the largest-ever open-source AI model. Meta claims this model outperforms GPT-4o and Anthropic’s Claude 3.5 Sonnet on several benchmarks. Alongside this, Meta is expanding the availability of its AI assistant based on Llama, adding a feature that generates images based on specific likenesses. CEO Mark Zuckerberg confidently predicts that Meta AI will become the most widely used assistant by the end of this year, surpassing ChatGPT.

The Power Behind Llama 3.1

Llama 3.1 is a significant leap from its predecessors. The largest version boasts 405 billion parameters, trained with the aid of over 16,000 Nvidia H100 GPUs. Although Meta hasn’t disclosed the cost, the Nvidia chips alone suggest an investment of hundreds of millions of dollars.
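
To put those numbers in perspective, a quick back-of-the-envelope calculation (an illustrative sketch, not figures Meta has published) shows why a model this size cannot fit on a single GPU:

```python
# Back-of-the-envelope estimate: memory needed just to store 405B parameters
# in 16-bit precision, versus the 80 GB on a single H100. Illustrative only;
# these are not figures Meta has published.
params = 405e9            # parameter count
bytes_per_param = 2       # bf16 / fp16
h100_memory_gb = 80       # memory of one H100, roughly

weights_gb = params * bytes_per_param / 1e9
min_gpus_for_weights = -(-weights_gb // h100_memory_gb)  # ceiling division

print(f"Weights alone: ~{weights_gb:.0f} GB")
print(f"H100s needed just to hold the weights: {int(min_gpus_for_weights)}")
# Training also needs gradients, optimizer states, and activations, which is
# why the job runs on thousands of GPUs rather than a handful.
```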

Why Open-Source?

Given the enormous cost, why is Meta offering Llama 3.1 as an open-source model with a license that only requires approval from companies with hundreds of millions of users? Zuckerberg, in a letter on Meta’s blog, argues that open-source AI models will soon surpass proprietary models in development speed and efficiency. He draws a parallel to the success of Linux, the open-source operating system that now powers most phones, servers, and gadgets.

Collaboration and Deployment

Meta is partnering with over two dozen companies, including Microsoft, Amazon, Google, Nvidia, and Databricks, to facilitate the deployment of Llama 3.1. The model’s weights are being released to allow companies to train it on custom data and fine-tune it to their needs. Meta claims that running Llama 3.1 in production costs roughly half that of OpenAI’s GPT-4o.
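
In practice, "releasing the weights" means developers can pull a checkpoint and run or fine-tune it themselves. A minimal sketch using the Hugging Face transformers library, assuming access to the gated meta-llama/Llama-3.1-8B-Instruct repository has been granted (the smallest variant; the 70B and 405B models require multi-GPU setups):

```python
# Minimal sketch: load an open-weight Llama 3.1 checkpoint and generate text.
# Assumes `transformers` (plus `accelerate` for device_map) and that access to
# the gated meta-llama repository has been granted under Meta's license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # 8B variant; larger ones need multiple GPUs
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain in two sentences why open model weights matter."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```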

Benchmark and Training

Despite the impressive performance claims, Meta’s benchmark comparisons exclude Google’s Gemini due to difficulties in replicating its results using Google’s APIs. Meta is also vague about the data used to train Llama 3.1, citing trade secrets and potential copyright issues. However, it is known that synthetic data played a significant role, with the 405-billion-parameter model being used to enhance the smaller 70-billion and 8-billion-parameter versions.

Future of AI Training

Ahmad Al-Dahle, Meta’s VP of generative AI, acknowledges the industry’s concern about the availability of quality training data but suggests there’s still some runway left. He foresees Llama 3.1 being popular with developers as a “teacher” for smaller, more cost-effective models.
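
The "teacher" role Al-Dahle describes is usually realized through knowledge distillation, where a small student model is trained to match the output distribution of a larger one. A generic sketch of that pattern in PyTorch (not Meta's actual training code):

```python
# Generic knowledge-distillation sketch (the common pattern, not Meta's pipeline):
# the student is trained to match the teacher's softened output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t**2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t ** 2)

# Toy example with random logits standing in for real model outputs.
teacher_logits = torch.randn(4, 32000)                       # batch of next-token logits from the large model
student_logits = torch.randn(4, 32000, requires_grad=True)   # logits from the smaller model
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```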

Advanced Capabilities

For the first time, Meta’s red teaming of Llama 3.1 included testing for cybersecurity and biochemical applications. The model exhibits emerging “agentic” behaviors, such as integrating with search engine APIs to retrieve information and execute tasks based on complex queries. It can, for example, plot the number of homes sold in the U.S. over the past five years by generating and executing Python code.
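
For a concrete picture of the homes-sold example, the snippet below is roughly the kind of Python such a query might lead the model to generate and run. The figures are placeholders for illustration, not real housing data.

```python
# Illustrative example of the kind of Python an "agentic" query might generate.
# The figures are placeholders for demonstration, not real U.S. housing data.
import matplotlib.pyplot as plt

years = [2019, 2020, 2021, 2022, 2023]
homes_sold_millions = [5.3, 5.6, 6.1, 5.0, 4.1]  # placeholder values

plt.bar(years, homes_sold_millions)
plt.xlabel("Year")
plt.ylabel("Homes sold (millions)")
plt.title("U.S. homes sold over the past five years (placeholder data)")
plt.tight_layout()
plt.savefig("homes_sold.png")
```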

Meta’s release of Llama 3.1 marks a pivotal moment in the AI industry, potentially setting a new standard for open-source development and collaboration.


Pearson Leads the AI Charge with New Innovations to Enhance Student Learning and Educator Efficiency

Pearson, the world's lifelong learning company, has unveiled a suite of new generative AI-powered tools designed to revolutionize teaching and learning across various age levels. This announcement comes as part of Pearson's broader strategy update to investors.

Last summer, Pearson became the first major higher education publisher to integrate generative AI study tools into its proprietary academic content. Building on this momentum, Pearson is set to introduce additional features next month, aimed at providing a richer, more engaging experience for both students and educators.

Key Announcements:

Expanded Reach of AI Study Tools:

  • Pearson’s AI study tools will soon be available to millions more students, expanding to an additional 80 titles, including international editions.
  • Over 70,000 students have already started using these tools in Pearson's platforms, Mastering and MyLab, and their closely integrated Pearson+ eTextbooks.

New AI Study Tool Features:

  • Personalized Learning Experiences: Students can upload their syllabus to generate a tailored learning experience sequenced in the order their course specifies.
  • AI Tutor Assistance: A new AI tutor will provide step-by-step problem-solving help, unique video content, and practice questions.
  • Enhanced Video Interactivity: An AI tutor will be available on top of each video to answer questions about the concepts presented.

AI Instructor Tool:

  • A generative AI tool designed to help instructors efficiently build assignments will be introduced to 25 business, math, science, and nursing titles in the US.
  • This tool aims to reduce the time spent on assignment design, allowing educators to focus more on direct student interaction.

AI Tools for High School Learners and Teachers:

  • Pearson's Connections Academy will adapt AI tools from higher education courseware for specific high school subjects.
  • Connections Academy teachers will also have AI-based tools to design assessments.

English Language Teaching Assistant:

  • Pearson is developing an AI teaching assistant to support English language teachers in planning and creating lessons for English language learners.

Commitment to Responsible AI Application:

Pearson's use of generative AI is backed by learning scientists and vetted by subject matter experts, ensuring the integration of trusted content. The company's approach aims to responsibly advance product innovation and enhance the learning experience.

Tony Prentice, Chief Product Officer, expressed his enthusiasm: "It's encouraging to see students so actively engaged with our AI-powered tools, embracing this massive change in the technology landscape and driving new demand. Our unique approach combines AI with our trusted content to unlock new ways to personalize learning and teaching that will help people realize the life they imagine through learning."


GE HealthCare Teams Up with AWS to Revolutionize Medical Diagnostics with AI

GE HealthCare has entered into a strategic collaboration with Amazon Web Services (AWS) to develop generative AI models and applications aimed at enhancing medical diagnostics and patient care. This partnership will leverage AWS's robust cloud solutions, with AWS becoming GE HealthCare’s preferred cloud provider.

Key Collaboration Highlights:

Generative AI Applications:

  • GE HealthCare will use AWS’s Bedrock platform to develop specialized generative AI applications designed to improve patient care.
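
As a rough illustration of what building on Bedrock involves, here is a minimal sketch using the AWS SDK for Python; the model choice, region, and prompt are placeholders for illustration, not details GE HealthCare has disclosed:

```python
# Minimal sketch of invoking a foundation model on Amazon Bedrock with boto3.
# The model ID, region, and prompt are illustrative placeholders, not GE HealthCare's setup.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model choice
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [
            {"role": "user",
             "content": "Summarize this radiology report for a referring physician: ..."}
        ],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```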

AI-Powered Development Assistance:

  • GE HealthCare's developers will use AWS's AI-powered software development assistant, Amazon Q Developer. This tool will provide coding suggestions, aiding in the development of solutions based on multimodal clinical and operational data.

Modernizing Applications with SageMaker:

  • GE HealthCare plans to modernize existing applications using Amazon SageMaker. This platform will help build machine learning-based systems, including models for web-based medical imaging applications.
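
For context on what building with SageMaker typically looks like, the sketch below launches a managed training job with the SageMaker Python SDK; the container image, IAM role, and S3 paths are placeholders, not GE HealthCare's actual configuration:

```python
# Generic sketch of launching a managed training job with the SageMaker Python SDK.
# The image URI, IAM role, and S3 paths are placeholders, not GE HealthCare's setup.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/imaging-train:latest",  # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",                   # placeholder
    instance_count=1,
    instance_type="ml.g5.2xlarge",
    output_path="s3://example-bucket/models/",                                      # placeholder
    sagemaker_session=session,
)

# Point the job at training data in S3; SageMaker provisions, trains, and tears down.
estimator.fit({"train": "s3://example-bucket/imaging-data/train/"})
```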

Integration of AWS Solutions:

  • The collaboration includes integrating AWS solutions such as HealthLake and HealthImaging. This will enable customers to analyze various types of patient data effectively.

Leadership Insights:

“This new collaboration with AWS allows us to build on our legacy of innovation by embracing the power of AI to expedite the creation of medical technologies that we expect will redefine clinical workflows and the delivery of care,” said Peter Arduini, GE HealthCare’s president and CEO.

GE HealthCare’s AI Investments:

GE HealthCare has a history of investing in AI capabilities:

  • Caption Health Acquisition: In February 2023, GE HealthCare acquired Caption Health, integrating its AI-powered image guidance into GE HealthCare’s digital ecosystem.
  • Deep Learning in MRI Scans: The company has been leveraging deep learning to improve MRI scans, facilitating faster diagnoses and reducing scan times.
  • Ultrasound Image Segmentation: Before partnering with AWS, GE HealthCare enhanced ultrasound image segmentation using a foundation model, achieving over 90% accuracy in identifying anatomical structures with minimal human input.

"By combining generative AI with our deep expertise, we're igniting a new era in health care,” said Taha Kass-Hout, GE HealthCare’s global chief science and technology officer. “Our work with AWS is a big step towards helping clinicians make medical care simpler, more efficient, and deeply personalized. It's about advancing the way we care for people everywhere, one innovative solution at a time.”


NVIDIA Unveils Generative AI Models and NIM Microservices for OpenUSD


NVIDIA announced significant advancements to the Universal Scene Description (OpenUSD) framework, aiming to extend its adoption in robotics, industrial design, and engineering. These advancements, built on the NVIDIA Omniverse™ platform, will enable developers to create highly accurate virtual worlds, propelling the next evolution of AI.

Key Announcements:

New Offerings and NIM Microservices:

  • NVIDIA NIM™ Microservices: AI models that can answer OpenUSD knowledge queries, generate OpenUSD Python code, apply materials to 3D objects, and understand 3D space and physics, accelerating digital twin development.
  • USD Connectors: Integrations with robotics and industrial simulation data formats allow users to stream massive, fully NVIDIA RTX™ ray-traced datasets to Apple Vision Pro.

Generative AI Applications:

  • USD Code NIM Microservice: Automatically generates OpenUSD-Python code based on text prompts for visualization (a sketch of such code follows this list).
  • USD Search NIM Microservice: Allows natural language or image-based searches through libraries of OpenUSD, 3D, and image data.
  • USD Validate NIM Microservice: Checks compatibility of uploaded files against OpenUSD release versions and generates RTX-rendered, path-traced images.
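
To give a flavor of the USD Code microservice's target output, the snippet below is a hand-written sketch of simple OpenUSD Python, using the open-source pxr modules; it is not actual microservice output:

```python
# Hand-written sketch of the kind of OpenUSD Python the USD Code microservice
# is aimed at producing: build a small stage containing a single cube.
# Uses the open-source pxr modules; this is not actual microservice output.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scene.usda")         # create a new USD layer on disk
UsdGeom.Xform.Define(stage, "/World")             # root transform prim
cube = UsdGeom.Cube.Define(stage, "/World/Cube")  # a cube prim under the root
cube.GetSizeAttr().Set(2.0)                       # edge length of 2 units
stage.SetDefaultPrim(stage.GetPrimAtPath("/World"))
stage.GetRootLayer().Save()                       # write scene.usda
```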

Upcoming Microservices:

  • USD Layout NIM Microservice: Assembles OpenUSD-based scenes from text prompts based on spatial intelligence.
  • USD SmartMaterial NIM Microservice: Predicts and applies realistic materials to CAD objects.
  • fVDB Mesh Generation NIM Microservice: Generates OpenUSD-based meshes from point-cloud data.
  • fVDB Physics Super-Res NIM Microservice: Performs AI super resolution on frames to create high-resolution physics simulations.
  • fVDB NeRF-XL NIM Microservice: Generates large-scale neural radiance fields in OpenUSD.

Industry Impact:

Foxconn:

  • Using NIM microservices and Omniverse to create digital twins of factories, enhancing industrial manufacturing and autonomous machine development.

WPP:

  • Implementing USD Search and USD Code NIM microservices in its generative AI-enabled content creation pipeline for clients like The Coca-Cola Company.

New Connectors and Tools:

Robotics and Industrial Workloads:

  • Siemens Collaboration: Integrating OpenUSD pipelines with its Simcenter portfolio for high-fidelity, real-time visualization of simulation data.
  • Unified Robotics Description Format Connector: Enables seamless integration of robotic data across applications.

OpenUSD Ecosystem Expansion:

  • OpenUSD Exchange SDK: Allows developers to build robust OpenUSD data connectors.
  • Omniverse Developer Tools: Enable streaming of large-scale OpenUSD scenes to Apple Vision Pro via the NVIDIA Graphics Delivery Network.

Quotes from Leaders:

“The generative AI boom for heavy industries is here,” said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA. “With the enhancements and accessibility NVIDIA NIM microservices are bringing to OpenUSD, industries of all kinds can build physically based virtual worlds and digital twins to drive innovation while preparing for the next wave of AI: robotics.”

“OpenUSD is revolutionizing the way we create and interact with 3D content,” said Steve May, CTO of Pixar and chairman of the Alliance for OpenUSD (AOUSD). “With these new services and APIs for OpenUSD built by NVIDIA, we expect accelerated growth and adoption of USD.”

Availability:

  • USD Search, USD Code, and USD Validate NIM Microservices: Available in preview on the NVIDIA API catalog.
  • OpenUSD to URDF Connector: Available with NVIDIA Isaac Sim™.
  • Generative AI Integration: Developers can start with new Omniverse tools and reference workflows for building generative AI-enabled synthetic data pipelines with OpenUSD.


OpenAI Unveils SearchGPT: The Future of AI-Powered Search Engines


OpenAI is stepping into the search engine arena with its latest innovation, SearchGPT, an AI-powered search engine designed to revolutionize how we access and interact with information online. This new tool, initially available as a prototype, promises to offer real-time access to web data, delivering organized and context-aware search results.

Key Highlights of SearchGPT:

Real-Time Information Retrieval:

  • SearchGPT aims to provide immediate access to information across the internet, organizing it into concise summaries rather than just presenting a list of links.

Interactive Search Experience:

  • Users can input queries into a large text box, receiving detailed answers with short descriptions followed by attribution links. For example, a search for "music festivals in Boone, North Carolina" in August 2024 will yield summarized festival details with source links.

Follow-Up Questions:

  • The search engine allows users to ask follow-up questions, maintaining the context of the original query for a more seamless search experience.

Visual Answers:

  • A unique feature called “visual answers” includes AI-generated videos and images related to the search query. Details on this feature are expected soon.

Prototype Launch and Future Integration:

Limited Release:

  • SearchGPT will initially be accessible to 10,000 test users. This limited release helps OpenAI refine the tool and address any inaccuracies in real-world usage.

Integration with ChatGPT:

  • OpenAI plans to eventually incorporate SearchGPT’s capabilities into ChatGPT, enhancing the AI assistant’s ability to provide up-to-date information.

Collaboration with News Partners:

Responsible Development:

  • OpenAI emphasizes its collaboration with various news organizations, including The Wall Street Journal, The Associated Press, and Vox Media. These partners provided valuable feedback to ensure ethical and accurate use of information.

Publisher Controls:

  • Publishers can manage how their content appears in SearchGPT results, including opting out of having their data used to train AI models while still being surfaced in search results.

Competitive Edge and Industry Implications:

Rivaling Google and Perplexity:

  • SearchGPT positions OpenAI as a direct competitor to established players like Google and Perplexity, which are also incorporating AI features into their search engines.

Enhanced User Experience:

  • Unlike traditional search engines, SearchGPT offers clear, in-line attributions and links, helping users quickly verify sources and engage with additional information through a sidebar.

Strategic and Financial Considerations:

Cost Management:

  • OpenAI’s operational costs for AI training and inference are significant, with projections reaching $7 billion this year. As SearchGPT will be free during its initial launch, the company will need to develop monetization strategies to sustain the service.

Conclusion:

SearchGPT represents a significant leap forward in the search engine market, harnessing the power of AI to provide more intuitive and accurate search results. By prioritizing context, interactivity, and ethical collaboration with publishers, OpenAI aims to set new standards for information retrieval.


Stay tuned for more updates and insights from the cutting edge of AI technology.

#ileafainewsletter #ai
