AI4Future: Top AI News (22-28th July)
Kate Shcheglova-Goldfinch
Research Affiliate at CJBS, regulatory innovations consultant and Freeman of the WCIB
Tracking the news stream this week, I was impressed by the technological breakthroughs from industry giants: Google DeepMind’s success in solving mathematical problems at the International Mathematical Olympiad (IMO), Google's update to the free version of the Gemini chatbot, and OpenAI’s SearchGPT test, challenging Google’s dominance.
There were also significant regulatory developments: the Senate passed the Deepfake Bill, two of the European Parliament's committees set up a joint working group to monitor the implementation of the EU AI Act (as reported by Euractiv), and the OECD published a new report on AI and privacy.
The Data Provenance Initiative's research revealed a sharp decline in the volume of data available for AI model training, providing many intriguing insights. Lastly, Sam Altman, CEO of OpenAI, provoked much thought with his column in the Washington Post, where he reflects on the future of AI — whether it will be authoritarian or democratic.
Here is my news roundup for this week.
OpenAI Tests SearchGPT to Challenge Google’s Dominance
OpenAI is testing a new search engine, SearchGPT, which uses generative AI to produce search results, potentially challenging Google’s dominance in online search. The prototype is initially available to select users and publishers, and OpenAI plans to integrate these features into ChatGPT rather than offer a standalone product.
The Senate Passed a Deepfake AI Bill
The Senate unanimously passed a bill on Tuesday (23rd July) letting victims of nonconsensual intimate images created by AI — or “deepfakes” — sue their creators for damages. The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act lets victims of sexually explicit deepfakes pursue civil remedies against those who produced or possessed the image with the intent to distribute it. Victims who are identifiable in these kinds of deepfakes can receive up to $150,000 in damages under the bill, and up to $250,000 if the incident was connected to “actual or attempted sexual assault, stalking, or harassment” or was “the direct and proximate cause” of those harms. It’s now up to the House to take up the bill before it can be moved to the president’s desk to be signed into law.
Google Upgrades and Expands Gemini Chatbot
To compete with AI rivals, Google is enhancing the no-fee tier of its Gemini chatbot. Starting 25th July, the faster Gemini 1.5 Flash model is available on web and mobile in 40 languages across 230 countries. Google claims significant improvements in quality, latency, reasoning, and image understanding.
AI Reaches Silver-Medal Standard in Solving Math Olympiad Problems
Google DeepMind's breakthrough AI models AlphaProof and AlphaGeometry 2 achieved silver-medal standard by solving four of the six problems at this year's International Mathematical Olympiad. These advances in mathematical reasoning could unlock new frontiers in science and technology.
The European Parliament Forms Group to Monitor AI Act Implementation
The European Parliament's committees on Internal Market (IMCO) and Civil Liberties (LIBE) have created a joint working group to oversee the AI Act's implementation, sources told Euractiv. This move follows concerns about transparency and civil society involvement in the AI Office's staffing and processes. The European Commission is coordinating the Act's rollout.
OECD Releases Report on AI and Privacy
The OECD's new report examines privacy risks and opportunities in AI, highlighting synergies and potential areas for international cooperation. It maps OECD Privacy Guidelines to AI Principles, reviews national and regional initiatives, and calls for collaborative efforts to ensure AI systems respect privacy.
The Data That Powers AI Is Disappearing Fast
New research from the Data Provenance Initiative has found a dramatic drop in the content made available to the collections used to train artificial intelligence models.
The internet has become the primary data source for general-purpose and multi-modal AI systems. The scale and diversity of online datasets underpin both open and closed AI systems, such as OLMo, GPT-4o, and Gemini.
However, using internet content for AI raises ethical and legal issues, prompting initiatives to improve data quality and provenance, and to isolate public and consented data for AI training.
Recent research sheds light on the internet’s role in AI development, revealing significant shifts in data collection practices and regulations, as well as changes in consent structures online, which notably impact AI developers.
Who will control the future of AI?
"A democratic vision for artificial intelligence must prevail over an authoritarian one," Sam Altman, co-founder and CEO of OpenAI, says in his column to Washington Post.
"That is the urgent question of our time. The rapid progress being made on artificial intelligence means that we face a strategic choice about what kind of world we are going to live in: Will it be one in which the United States and allied nations advance a global AI that spreads the technology’s benefits and opens access to it, or an authoritarian one, in which nations or movements that don’t share our values use AI to cement and expand their power?
There is no third option — and it’s time to decide which path to take. The United States currently has a lead in AI development, but continued leadership is far from guaranteed. Authoritarian governments the world over are willing to spend enormous amounts of money to catch up and ultimately overtake us."
Insightful papers I came across this week: