AI 2030 Evangelist Digest 005 - AI4Future: Top AI News (2-8 September)
By Kate Shcheglova-Goldfinch
AI-gov Lead & Research Affiliate at CJBS and regulatory innovations consultant, AI 2030 Evangelist
Original newsletter link:
https://www.dhirubhai.net/pulse/ai4future-top-ai-news-2-8-september-kate-shcheglova-goldfinch-jqgse/
This week has underscored that the development of the AI market is far from plain sailing, even for leading companies. AI developers increasingly face copyright infringement claims, a problem compounded as they approach the "data frontier": the limits of available training data on which new technological breakthroughs depend. As a result, ever more aggressive, and legally fraught, data collection remains their recourse. Anthropic, for instance, faces a lawsuit from authors alleging the theft of hundreds of thousands of copyright-protected books. Another case this week highlights the necessity of collecting data ethically and in line with current legislation, particularly where sensitive information falls under the GDPR: Clearview AI, a US-based facial recognition firm, was fined €30.5 million for what the Dutch data regulator DPA termed an illegal database, and may face an additional penalty of up to €5 million for non-compliance with the imposed requirements.
Additionally, market volatility and sensitivity to "big names" were on full display. Nvidia’s market capitalization plummeted by $279 billion, a stark indication that investors are taking a more realistic view of AI and that maintaining a leadership position amidst fierce competition is no simple task. Meanwhile, OpenAI co-founder Ilya Sutskever was riding high, with his new safety-focused AI startup, Safe Superintelligence (SSI), attracting $1 billion in funding.
Moreover, the week was marked by a historic regulatory event - the signing of the Council of Europe’s Framework Convention on AI (CETS No. 225). The United States, European Union, United Kingdom, and Israel were among the first to sign the international treaty, which emphasizes human rights and democratic values as key to regulating AI models in both public and private sectors.
Finally, an intriguing trend emerged from the University of Oxford, which has established an AI lab aiming to unite leading philosophers and AI practitioners. The lab’s goal is to cultivate a new generation of philosopher-technologists - a profile that is rare as yet, but crucial for progress in AI development.
A round-up of this week’s key developments.
AI hit by copyright claims as companies approach ‘data frontier’
This month, a trio of authors filed a lawsuit against Anthropic for the “theft of hundreds of thousands of copyright-protected books.” The class-action suit adds to a growing list of ongoing copyright infringement cases, the most notable of which was brought by The New York Times against OpenAI and Microsoft late last year.
U.S. AI Safety Institute Signs Agreements Regarding AI Safety Research, Testing and Evaluation With Anthropic and OpenAI
The U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) announced agreements that enable formal collaboration on AI safety research, testing and evaluation with both Anthropic and OpenAI.
Each company’s Memorandum of Understanding establishes the framework for the U.S. AI Safety Institute to receive access to major new models from each company prior to and following their public release. The agreements will enable collaborative research on how to evaluate capabilities and safety risks, as well as methods to mitigate those risks. Additionally, the U.S. AI Safety Institute plans to provide feedback to Anthropic and OpenAI on potential safety improvements to their models, in close collaboration with its partners at the U.K. AI Safety Institute.
Clearview AI fined by Dutch agency for facial recognition database
U.S. facial recognition company Clearview AI has been fined 30.5 million euros ($33.7 million) for building what Dutch data protection watchdog DPA said on Tuesday was an illegal database. DPA also issued an additional order, imposing a penalty of up to 5 million euros on Clearview for non-compliance. "Clearview AI does not have a place of business in the Netherlands or the European Union, it does not have any customers in the Netherlands or the EU," Jack Mulcaire, Clearview AI's chief legal officer, told Reuters.
$279bn wiped off Nvidia stock in Wall Street sell-off
On 3rd September, Nvidia experienced a record $279 billion market value loss due to a Wall Street downturn. The AI giant’s shares (NVDA.O) fell by 9.5%, marking the largest single-day market value drop for any US company, as investors curbed their enthusiasm for AI amidst poor economic data.
OpenAI co-founder Sutskever's new safety-focused AI startup SSI raises $1 billion
Safe Superintelligence (SSI), newly co-founded by OpenAI's former chief scientist Ilya Sutskever, has raised $1 billion in cash to help develop safe artificial intelligence systems that far surpass human capabilities, company executives told Reuters.
SSI, which currently has 10 employees, plans to use the funds to acquire computing power and hire top talent. It will focus on building a small, highly trusted team of researchers and engineers split between Palo Alto, California, and Tel Aviv, Israel.
Council of Europe opens first ever global treaty on AI for signature
The Council of Europe Framework Convention on artificial intelligence and human rights, democracy, and the rule of law (CETS No. 225) was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius. It is the first-ever international legally binding treaty aimed at ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.
The Framework Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, and the United Kingdom, as well as Israel, the United States of America and the European Union.
The Framework Convention is an open treaty with a potentially global reach. The treaty provides a legal framework covering the entire lifecycle of AI systems. It promotes AI progress and innovation, while managing the risks it may pose to human rights, democracy and the rule of law. To stand the test of time, it is technology-neutral.
Oxford launches Human-Centered AI Lab
The University of Oxford has announced the establishment of the Human-Centered AI Lab (HAI Lab), a pioneering research initiative supported by the Cosmos Institute. This ground-breaking lab will create a space for technologists and philosophers to collaborate on translating philosophical concepts into open-source software and AI systems, fostering a vibrant community for big-picture thinking about a future of AI that enhances human flourishing.
Novel Chinese computing architecture 'inspired by human brain' can lead to AGI, scientists say
Scientists in China have created a new computing architecture that can train advanced artificial intelligence (AI) models while consuming fewer computing resources, and they hope that it will one day lead to artificial general intelligence (AGI).
LLMs are currently limited because they cannot perform beyond the confines of their training data and cannot reason as humans do.
Insightful papers I came across this week:
About Kate Shcheglova-Goldfinch
Kate has over 20 years of expert experience in the financial market, including 5 years as an EBRD (NBU) consultant on fintech projects, among them the development of the NBU Fintech Strategy 2025 and the creation and launch of the NBU regulatory sandbox. She has extensive experience in creating and moderating educational programmes for the financial market and regulators on topics such as fintech, digital assets (blockchain, DeFi), open banking, open finance, and AI. Currently, she is focused on AI regulation at the global level and in Ukraine, particularly on ethical implementation in the financial sector, and is preparing to launch an educational programme on AI for regulatory institutions. She has successfully launched educational programmes with Cambridge Judge Business School over the past three years. Since 2019, Kate has been ranked in global lists such as TOP50 Fintech Global, TOP100 Women Thought Leaders, Influential Fintech Women UA and UK, TOP10 Regulatory Experts and Policy Makers UK, TOP3 UK Banker of the Year 2023 (Women award), and TOP100 Thought Leaders in Govtech by Thinkers360 (2024). She is an AI 2030 community fellow. In 2024, Kate was elected as a delegate of United Nations Women UK. Kate sees her mission as spreading innovative knowledge at all levels, including the professional financial and regulatory spheres, enhancing Ukrainian expertise by creating global collaborations, and improving the representation of women in the tech industry and the AI sector.
About AI 2030: AI2030 is a member-based initiative aiming to harness the transformative power of AI to benefit humanity while minimizing its potential negative impact. Focused on Responsible AI, AI for All, and AI for Good, we aim to bridge awareness, talent, and resource gaps, enabling responsible AI adoption across public and private sectors.
AI 2030 does not claim ownership of the newsletter; it is the intellectual property of its authors. AI 2030 disclaims all liability for the content, errors, or omissions within the newsletter. Readers are advised to use their own judgment when assessing the information presented.
Contact us at: [email protected]
Become an AI 2030 Member: https://ai2030.org/membership/
Stay Connected & Engaged:
Join our LinkedIn Group: https://lnkd.in/e_CrPkc
Join the next Tech Pulse 2030 at 1871: https://lnkd.in/deWDaz32
Don't miss out on our AI 2030 Summit Series: https://www.ai2030.org/
Sponsor our Summits, click here: https://lnkd.in/gNvxHXdJ
Join the Movement: Become a Member of AI2030: https://lnkd.in/gN5PcCUc