AI 2030 Evangelist Digest 006 - AI4Future: Top AI News (September 9-15)


By Kate Shcheglova-Goldfinch

AI Governance Lead & Research Affiliate at CJBS, regulatory innovations consultant, and AI 2030 Evangelist

Image generated using FLUX.1 by Black Forest Labs with a detailed prompt, without modifications (Photo credit: AI4FUTURE & FLUX.1)


Original link to this newsletter:

https://www.dhirubhai.net/pulse/ai4future-top-ai-news-9-15-september-kate-shcheglova-goldfinch-qwooe/


This week brought significant regulatory developments concerning safety and harm prevention in the implementation of AI. Australia’s Department of Industry, Science and Resources published two strategic AI documents, while China unveiled a new framework for classifying AI risks, which appears more detailed than both the EU’s AI Act and California’s AI bill (the latter has been passed but awaits the governor’s signature by 30 September).


Additionally, regulators have introduced preventive mechanisms to curb AI model misuse, effectively “cleaning up the market.” EU regulators, for instance, have launched an investigation into Google’s AI model, while in the US an incident involving Grok, X’s AI chatbot, which spread misinformation about ballot submission deadlines, alerted authorities to the potential risks of AI use in the public sector.


The week also highlighted the leaders and laggards in AI adoption across the global labour market. According to research by The Peninsula Group, employers in Australia and New Zealand are the most frequent users of AI, while Irish employers are the least likely to adopt it. On the practical side, the UK took further steps to scale up AI-powered sensors aimed at improving logistics and traffic management: this week, cyclists became the beneficiaries of a trial in which AI prioritises them over cars.


Among the week’s breakthroughs, Microsoft and Quantinuum set a record in the creation of logical qubits, a key step toward the next generation of reliable quantum hardware. Additionally, OpenAI unveiled its o1 model family, which it claims delivers PhD-level performance.

A round-up of this week’s key developments:

Nvidia-Backed Sakana AI Eyes Strategic Partnerships in Japan

Sakana AI sees potential for more strategic investors in Japan as rising US-China tensions spur the country to boost its own AI ecosystem. Growing geopolitical risks are heightening interest in the Tokyo-based startup, which recently announced a $100 million-plus funding round that includes Nvidia Corp., according to Sakana’s co-founder and Chief Executive Officer David Ha. “We think that a strong economy like Japan would want to advance their own AI ecosystem, and we want to be part of that,” Ha said in an interview with Bloomberg TV on Monday.

More here


Employers in Australia and New Zealand are the most frequent users of AI, while Irish employers are the least likely to adopt it, research finds

The survey of 79,000 businesses across Australia, Canada, Ireland, New Zealand, and the UK was conducted by HR firm The Peninsula Group. Nearly one in four Irish employers said they were fearful of the unknown when it came to AI, and over half of Irish SMEs said they were concerned about the risk AI poses to security.

More here


Apple launches iPhone 16 as it bets on AI future

Apple on Monday unveiled its long-awaited, artificial intelligence-boosted iPhone 16 and promised improvements in its Siri personal assistant as it rolled out new software, beginning in test mode next month. "The next generation of iPhone has been designed for Apple Intelligence from the ground up. It marks the beginning of an exciting new era," Chief Executive Tim Cook said at a product launch.

More here


China releases AI security governance framework

A framework for the security governance of artificial intelligence (AI) was released on Monday at the main forum of this year's China Cybersecurity Week, held in Guangzhou, capital of south China's Guangdong Province. Issued by China's National Technical Committee 260 on Cybersecurity, which was formed by the Standardization Administration, the framework sets out principles for managing AI security, such as staying accommodative and prudent to ensure safety, managing risks for swift governance, and opening up to cooperation for joint governance.

More here


UK council tests AI traffic lights that prioritise cyclists over cars

Developed by VivaCity, the sensors aim to promote active travel and prioritise cyclists over motor vehicles, as the next step in what has been dubbed the 'war on motorists'. The council claims that by detecting cyclists earlier, the sensors both reduce the chance of collisions and cut waiting times at crossings.

More here


Microsoft makes quantum breakthrough, plans commercial offering

At Quantum World Congress on Tuesday, Microsoft announced that it and partner Quantinuum had broken a record in the creation of logical qubits. Microsoft also announced that it is working with partner Atom Computing to build what it described as the world’s most powerful quantum machine. “Through this collaboration, we’re bringing a new generation of reliable quantum hardware to customers by integrating and advancing Atom Computing’s neutral atom hardware into our Azure Quantum compute platform,” Jason Zander, executive vice president of strategic missions and technologies at Microsoft, wrote in a blog post Tuesday.

More here


Google's AI model faces European Union scrutiny from privacy watchdog


European Union regulators said Thursday they're looking into one of Google's artificial intelligence models over concerns about its compliance with the bloc's strict data privacy rules. Ireland's Data Protection Commission said it has opened an inquiry into Google's Pathways Language Model 2, also known as PaLM2. It's part of wider efforts, including by other national watchdogs across the 27-nation bloc, to scrutinize how AI systems handle personal data. Google's European headquarters are based in Dublin, so the Irish watchdog acts as the company's lead regulator for the bloc's privacy rulebook, known as the General Data Protection Regulation, or GDPR.

More here


X’s AI chatbot spread voter misinformation – and election officials fought back

When the Grok tool gave false information, a collection of election officials sprang into action to tamp it down. Soon after Joe Biden announced he was ending his bid for re-election, misinformation started spreading online about whether a new candidate could take the president’s place.

Screenshots claiming that a new candidate could not be added to ballots in nine states moved quickly around Twitter, now X, racking up millions of views. The Minnesota secretary of state’s office began getting requests for fact-checks of these posts, which were flat-out wrong: ballot deadlines had not passed, giving Kamala Harris plenty of time to have her name added to ballots.

More here


Australia’s Department of Industry, Science and Resources has released two key AI regulation documents for public consultation

The first, the proposals paper “Introducing Mandatory Guardrails for AI in High-Risk Settings”, outlines 10 proposed guardrails aimed at mitigating AI-related risks and harms, enhancing public trust, and providing greater regulatory certainty for businesses. The second, the “Voluntary AI Safety Standard”, serves as a preliminary counterpart, offering guidance for organisations to harness the benefits of AI while addressing and mitigating associated risks.

More here

https://www.industry.gov.au/sites/default/files/2024-09/voluntary-ai-safety-standard.pdf

https://consult.industry.gov.au/ai-mandatory-guardrails


California Governor Gavin Newsom has until 30 September to decide whether to sign the California AI Bill, a decision that will have repercussions far beyond the state’s borders

California’s move to regulate artificial intelligence has divided Silicon Valley, with opponents warning that the legal framework could undermine competition and weaken the US’s position as a global technology leader. After fierce battles to amend or soften the bill during its passage through the California legislature, business leaders, including those from OpenAI and Meta, are anxiously awaiting Governor Newsom’s decision on whether he will approve the measure. He has until 30 September to make his choice.

More here


OpenAI launches new AI model family o1 claiming PhD-level performance

OpenAI announced its “o1” AI model family, beginning with two models: o1-preview and o1-mini, which the company says are designed to “reason through complex tasks and solve harder problems” than the GPT series models. Both models are available today for ChatGPT Plus users but are initially limited to 30 messages per week for o1-preview and 50 for o1-mini. However, OpenAI also cautions that “As an early model, it doesn’t yet have many of the features that make ChatGPT useful, like browsing the web for information and uploading files and images. For many common cases GPT-4o will be more capable in the near term.”

More here


Insightful papers and books I came across this week

AI and policing: The benefits and challenges of artificial intelligence for law enforcement

The Law of Artificial Intelligence by Matthew Lavy and Matt Hervey


ABOUT Kate Shcheglova-Goldfinch

Kate has over 20 years of expert experience in the financial market, including 5 years as an EBRD consultant to the NBU on fintech projects, including the development of the NBU Fintech Strategy 2025 and the creation and launch of the NBU regulatory sandbox. She has extensive experience in creating and moderating educational programmes for the financial market and regulators on topics such as fintech, digital assets (blockchain, DeFi), open banking, open finance, and AI. Currently, she is focused on AI regulation at the global level and in Ukraine, particularly on ethical implementation in the financial sector, and is preparing to launch an educational programme on AI for regulatory institutions. She has successfully launched educational programmes with Cambridge Judge Business School over the past three years. Since 2019, Kate has been ranked in global lists such as TOP50 Fintech Global, TOP100 Women Thought Leaders, Influential Fintech Women UA and UK, TOP10 Regulatory Experts and Policy Makers UK, TOP3 UK Banker of the Year 2023 (Women award), and TOP100 Thought Leaders in Govtech by Thinkers360 (2024). She is an AI 2030 community fellow. In 2024, Kate was elected as a delegate of United Nations Women UK. Kate sees her mission as spreading innovative knowledge at all levels, including the professional financial and regulatory spheres, enhancing Ukrainian expertise by creating global collaborations, and improving the representation of women in the tech industry and the AI sector.


About AI 2030: AI 2030 is a member-based initiative aiming to harness the transformative power of AI to benefit humanity while minimizing its potential negative impact. Focused on Responsible AI, AI for All, and AI for Good, we aim to bridge awareness, talent, and resource gaps, enabling responsible AI adoption across public and private sectors.


AI 2030 does not claim ownership of the newsletter; it is the intellectual property of its authors. AI 2030 disclaims all liability for the content, errors, or omissions within the newsletter. Readers are advised to use their judgment when assessing the information presented.

Contact us at: [email protected]

Become an AI 2030 Member: https://ai2030.org/membership/

