AI-Powered news roundup: Edition 8

Not a week goes by without dozens of artificial intelligence and generative AI articles making the headlines, and we're here to help. Our bi-weekly AI news roundup will get you up to speed with the most important developments – in less than 5 minutes.


In this edition of the newsletter:

  1. EU AI Act officially enters into force
  2. YouTuber files class action against OpenAI over video transcript scraping
  3. OpenAI launches SearchGPT to challenge Google and Bing
  4. OpenAI collaborates with U.S. AI Safety Institute for its next model's safety testing
  5. Meta launches Llama 3.1, heralds a new era for Open Source AI
  6. Meta announces SAM 2 for advanced object segmentation


1. EU AI Act officially enters into force

Source: European Commission

The European Union's AI Act, the first comprehensive regulation for artificial intelligence, came into force on August 1, 2024. This landmark legislation introduces a risk-based approach to AI governance, aiming to balance innovation with safety and ethical standards.

  • Risk Categories: AI systems are classified into four risk levels: minimal, limited, high, and unacceptable. High-risk applications, like AI used in law enforcement or medical software, must comply with stringent requirements, including risk mitigation and human oversight. Unacceptable-risk AI, such as systems enabling social scoring, is banned outright.
  • Timeline: Most rules will apply from August 2026. Bans on unacceptable-risk AI practices take effect after six months, and provisions for general-purpose AI begin to apply after one year.
  • Penalties: Companies failing to comply with the Act could face fines up to 7% of their global annual turnover.

The Act also includes transparency requirements for limited-risk AI systems, such as chatbots, to inform users they are interacting with AI. The EU aims to lead in responsible AI development, ensuring technologies benefit society while safeguarding citizens' rights.


2. YouTuber files class action against OpenAI over video transcript scraping

Source: TechCrunch

YouTube creator David Millette has filed a class action lawsuit against OpenAI, accusing the company of using transcripts from his and other creators' YouTube videos without consent to train AI models like ChatGPT. The lawsuit, filed in the U.S. District Court for the Northern District of California, claims OpenAI violated copyright laws and YouTube's terms of service, profiting from the creators' work without credit or compensation.

Represented by Bursor & Fisher, Millette seeks over $5 million in damages for affected creators. This case adds to OpenAI's growing legal challenges, including a recent lawsuit by Elon Musk.


3. OpenAI launches SearchGPT to challenge Google and Bing

Source: OpenAI

OpenAI is testing SearchGPT, a new search engine prototype designed to provide timely answers by drawing from web sources. Similar to ChatGPT, it offers direct responses with links to relevant sources and supports follow-up queries. SearchGPT collects general location data to enhance search accuracy and allows users to share precise location details via settings.


Image credits: OpenAI

Powered by OpenAI's GPT-4 family of models, including GPT-4o, SearchGPT is available to a limited group of users and aims to integrate responsibly with publishers rather than cannibalizing their content.

You can join the SearchGPT waitlist here: https://chatgpt.com/search



4. OpenAI collaborates with U.S. AI Safety Institute for its next model's safety testing

Source: TechCrunch

OpenAI CEO Sam Altman announced a partnership with the U.S. AI Safety Institute, providing early access to its next major generative AI model for safety testing. This move aims to bolster AI safety efforts following criticism of OpenAI's previous safety measures, including disbanding an internal safety team earlier this year.

In response to ongoing concerns, OpenAI has recommitted to allocating 20% of its computing resources to safety research and has established a safety commission. The U.S. AI Safety Institute, part of the National Institute of Standards and Technology, works with major tech companies to develop AI safety guidelines. This collaboration comes as OpenAI supports the Future of AI Innovation Act, which would formalize the institute's role in setting AI standards.


5. Meta launches Llama 3.1, heralds a new era for Open Source AI

Source: Meta

Meta has unveiled Llama 3.1 405B, which it describes as the first frontier-level open source AI model, offering flexibility, control, and state-of-the-art capabilities. The model supports eight languages and extends the context length to 128K tokens, rivaling leading closed-source models such as GPT-4 Turbo and Claude 3.5 Sonnet. It aims to empower developers with tools for synthetic data generation and model distillation, fostering new workflows and innovation.

Meta is enhancing the Llama ecosystem with additional components, including a reference system and new security tools like Llama Guard 3 and Prompt Guard. They have also released a request for comment on the Llama Stack API to standardize third-party integration. Over 25 partners, including AWS, NVIDIA, and Google Cloud, are offering services for Llama 3.1 405B.
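For developers who want to experiment with the release locally, the sketch below shows one common route via Hugging Face transformers. It is a minimal example under stated assumptions, not Meta's reference system: the smaller 8B Instruct checkpoint is used for practicality (the 405B model requires multi-GPU or hosted inference through the partners above), access to the gated model must first be granted on Hugging Face, and the prompt is purely illustrative.

    # Minimal sketch (assumed setup): run a Llama 3.1 Instruct model with Hugging Face transformers.
    # Requires `pip install transformers accelerate torch` and accepting the Llama 3.1 license on Hugging Face.
    import torch
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # smaller sibling of the 405B model
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    # Chat-style input; the pipeline applies the Llama 3.1 chat template automatically.
    messages = [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What does a 128K-token context window make possible?"},
    ]
    result = generator(messages, max_new_tokens=200)
    print(result[0]["generated_text"][-1]["content"])  # last message is the model's reply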


6. Meta announces SAM 2 for advanced object segmentation

Source: Meta

Meta has introduced the Meta Segment Anything Model 2 (SAM 2), an advanced version of its object segmentation model, capable of real-time segmentation in both images and videos. SAM 2, released under the permissive Apache 2.0 license, improves segmentation accuracy and efficiency, requiring roughly a third of the interaction time of its predecessor. It also excels in zero-shot generalization, meaning it can segment unseen objects without custom adaptation.


Video source: Meta press release

Key features of SAM 2 include:

  • Enhanced performance: SAM 2 provides superior accuracy in image and video segmentation compared to existing models.
  • Open access: Both the SAM 2 code and the extensive SA-V dataset (approximately 51,000 videos and 600,000+ masklet annotations) are available under permissive licenses, fostering open research and development.
  • Versatile applications: From video effects and scientific research to autonomous vehicles and medical imaging, SAM 2's capabilities support a wide range of real-world use cases.

The official release from Meta includes a web-based demo for real-time interactive segmentation, reflecting Meta's commitment to open science and collaborative innovation.
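As a concrete illustration of how the open release can be used, here is a minimal sketch of prompting SAM 2 with a single foreground point on an image, following the pattern in Meta's segment-anything-2 repository. The checkpoint path, config name, and input image are assumptions for illustration; consult the repository README for the exact files shipped with the release.

    # Minimal sketch (assumed paths and config): single-point image segmentation with SAM 2.
    # Requires the facebookresearch/segment-anything-2 package and a downloaded checkpoint.
    import numpy as np
    import torch
    from PIL import Image
    from sam2.build_sam import build_sam2
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    checkpoint = "./checkpoints/sam2_hiera_large.pt"   # assumed local checkpoint path
    model_cfg = "sam2_hiera_l.yaml"                    # assumed config name from the repo
    predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

    image = np.array(Image.open("example.jpg").convert("RGB"))  # any RGB image

    with torch.inference_mode():
        predictor.set_image(image)
        # Prompt with one point (x, y); label 1 = foreground, 0 = background.
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[500, 375]]),
            point_labels=np.array([1]),
            multimask_output=True,
        )
    print(masks.shape, scores)  # candidate masks ranked by predicted quality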
