AI4Future: Top AI News (12-18th August)
Kate Shcheglova-Goldfinch, MSc MBA
Research Affiliate at CJBS, regulatory innovations consultant and Freeman of the WCIB
This week has been marked by several trends. Firstly, we saw an initiative from one of the world’s leading regulators, the Hong Kong Monetary Authority (HKMA), which introduced a regulatory sandbox for testing AI-driven ideas in the financial sector within a safe environment. A regulatory sandbox is a specialised regime that facilitates the testing of innovations under a simplified version of the existing regulatory framework. This makes the HKMA the second global regulator to publicly announce such a sandbox, establishing a safe environment for testing AI models with the aim of reducing risks to end consumers. It is a crucial tool, recommended by leading regulatory and consulting institutions (including the OECD), and we can expect to see more of these sandboxes in the future.
Secondly, we observed practical cases of AI implementation and the first conclusions drawn from them, which are less optimistic than expected. This, however, signals a shift from the realm of inflated expectations to the reality of the situation. For instance, AI is not yet capable of effectively analysing patient records and diagnosing conditions, nor can it produce scientific papers of the same quality as humans, or even write a CV competently without human intervention. Moreover, in its current form, AI is too risky to be connected to strategic infrastructure, and all testing must be conducted in isolated environments to prevent any potential harm.
Amidst this, a third and very intriguing trend has emerged: the battle between the big tech giants over AI capabilities, particularly in the production of super-powerful AI chips and the computers needed for data processing. These developments are currently securing tech companies a “place in the sun,” but they are also overheating the market, making a correction inevitable sooner or later, especially as the supply of high-quality, human-generated data, essential for training AI models, continues to dwindle. This could become the most critical battle, and one that few are focusing on: high-quality (consent-based) data will be essential for ensuring a robust AI-driven future for humanity.
A round-up of this week’s key developments.
New supercomputing network could lead to AGI, scientists hope, with 1st node coming online within weeks
Scientists hope to accelerate the development of human-level artificial general intelligence (AGI) using a worldwide network of extremely powerful supercomputers. The first node is due to come online in September and to be fully operational by 2025.
US, UK and Australia take next step in integrating AI defense systems
The latest technology integrating artificial intelligence (AI) with unmanned aerial vehicles (UAV) in "contested environments" has passed the test following trials conducted by the U.S., U.K. and Australia's military alliance, AUKUS, officials said Friday.
According to all three defense agencies in the alliance, the cutting-edge sensing technology was put to the test to determine whether UAVs could "complete their missions and preserve network connectivity" across multi-domain battlespaces, including land, maritime, air and cyberspace. Under Pillar Two of the AUKUS agreement, all three nations are working to "harmonize" AI technologies for defense and security applications, largely in the face of growing Chinese aggression in the Indo-Pacific.
Apple is betting on AI to boost iPhone 16 sales
Tim Cook and his team are set to unveil the iPhone 16 and iPhone 16 Pro next month, with Apple finally offering its loyal fans a cutting-edge suite of AI tools. The Apple Intelligence ecosystem, first announced in June at the Worldwide Developers Conference, has already made its debut in the beta version of iOS for developers, ahead of its public release alongside the iPhone 16 and iPhone 16 Pro in September.
While last year's iPhone 15 Pro and 15 Pro Max paved the way, owners of older models eager to experience the AI-driven capabilities will need to upgrade to a new iPhone, given the demanding technical specifications required to run the software. Apple is hoping that the iPhone 16 lineup will unlock demand for AI, leading to a surge in sales.
Women use generative artificial intelligence tools less than men do
The World Economic Forum recently published an article on the subject. It reported that 59 per cent of male workers aged between 18 and 65 use generative artificial intelligence at least once a week, compared with 51 per cent of women. Women are less likely to adopt this new technology. This is a worrying finding since, according to a study by Oxford Economics and Cognizant, 90 per cent of jobs will be affected by generative AI by 2032.
The Kenan Institute has established that nearly 80 per cent of today’s female workers are in jobs exposed to automation via generative AI, compared with 58 per cent of men. The women in these roles will not be replaced by artificial intelligence per se, but by people who have mastered AI. At the moment, that mostly means men. To reverse this trend, women are being urged to redefine and expand their knowledge and skills in this area.
Jobhunters flood recruiters with AI-generated CVs
According to various surveys, around half of candidates worldwide are using AI tools to write their CVs and cover letters. For instance, a survey of 2,500 UK workers conducted by HR startup Beamery found that 46% of jobseekers are turning to generative AI for help in searching and applying for positions. Another survey, carried out by Canva among 5,000 jobseekers globally, revealed that 45% of respondents admitted to using AI to create or enhance their CVs.
This trend is causing serious concern among employers and recruiters, who have noted a decline in the quality of applications received. While generative AI does enable candidates to produce content quickly, without proper editing the results often appear unprofessional.
Google's live demo of Gemini ramps up pressure on Apple as AI reaches smartphone users
International Data Corporation (IDC) estimates that "Gen AI" capable smartphones — phones with the chips and memory needed to run AI — will more than quadruple in units sold in 2024 to about 234 million devices. During a 100-minute presentation last Tuesday, Google showcased several new AI capabilities. For instance, a demonstration involving questions about the content of a poster in a photo highlighted a technical breakthrough called "multimodal AI," which Apple currently has no plans to implement. Google also introduced a feature that allows users to take screenshots of what they are viewing, with Google organising this information into notes that can be easily accessed later.
SoftBank discussed AI chips tie-up with Intel to rival Nvidia
A partnership with Intel could have been a strategic move for SoftBank, which planned to combine the chip designs of its leading asset, Arm, and of Graphcore, the Japanese giant's latest acquisition, with Intel's manufacturing capabilities. If successful, the collaboration could have significantly accelerated the launch of new competitive chips onto the market.
However, these plans were thwarted as Intel failed to meet SoftBank’s stringent demands regarding production volumes and speed. This setback occurred against the backdrop of Intel announcing significant cost cuts and the layoff of thousands of employees, further complicating the situation. At the same time, Masayoshi Son, the founder and CEO of SoftBank, remains hopeful and is seeking support from major tech giants such as Google and Meta.
Research AI model unexpectedly modified its own code to extend runtime
Tokyo-based AI research firm Sakana AI announced a new AI system called "The AI Scientist" that attempts to conduct scientific research autonomously using large language models (LLMs) similar to the one that powers ChatGPT. During testing, Sakana found that its system unexpectedly began attempting to modify its own experiment code to extend the time it had to work on a problem.
While the AI Scientist's behaviour did not pose immediate risks in the controlled research environment, these instances show how important it is not to let an AI system run autonomously in an environment that is not isolated from the outside world.
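To make the isolation point concrete, here is a minimal, hypothetical Python sketch (not Sakana's actual setup) of running model-generated experiment code in a separate process with a hard wall-clock timeout enforced from outside, so the generated script cannot simply grant itself more runtime:

```python
import subprocess
import sys
import tempfile

# Hypothetical model-generated experiment code; in a real system this would
# come from an LLM rather than a hard-coded string.
GENERATED_CODE = 'print("running experiment...")\n'


def run_generated_experiment(code: str, timeout_seconds: int = 60) -> str:
    """Write the generated code to a temporary file and run it in a child process.

    The time budget is enforced by the parent process, so the generated code
    cannot extend its own runtime by editing timeouts inside the script.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, "-I", path],  # -I runs Python in isolated mode
            capture_output=True,
            text=True,
            timeout=timeout_seconds,       # hard wall-clock limit
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return "experiment terminated: exceeded its time budget"


if __name__ == "__main__":
    print(run_generated_experiment(GENERATED_CODE, timeout_seconds=10))
```

Real research sandboxes go much further (containers, no network access, CPU and memory limits), but the principle the Sakana episode illustrates is the same: the time and resource budget must live outside the process that the model can edit.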
HKMA and Cyberport Launch GenA.I. Sandbox to Bolster A.I. Adoption in Financial Sector
The Hong Kong Monetary Authority (HKMA), in collaboration with the Hong Kong Cyberport Management Company Limited (Cyberport), announced the launch of the new Generative Artificial Intelligence (GenA.I.) Sandbox on 13 August at FiNETech2, the second edition of the FiNETech series.
Mr Eddie Yue, Chief Executive of the HKMA, said, “The new GenA.I. Sandbox is a pioneering initiative that promotes responsible innovation in GenA.I. across the banking industry. It will empower banks to pilot their novel GenA.I. use cases within a risk-managed framework, supported by essential technical assistance and targeted supervisory feedback. Banks are encouraged to make full use of this resource to unlock the power of GenA.I. in enhancing effective risk management, anti-fraud efforts and customer experience.”
China’s Huawei is reportedly set to release new AI chip to challenge Nvidia amid U.S. sanctions
Chinese technology giant Huawei is set to challenge Nvidia with a new artificial intelligence chip amid U.S. sanctions that have sought to curb its technological progress, according to a Wall Street Journal report. Huawei told potential clients that its upcoming processor, Ascend 910C, is on par with Nvidia’s H100, the report said, citing people familiar with the matter. Huawei is targeting shipments as early as October.
Insightful papers I came across this week:
The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery. The paper introduces the first end-to-end framework for fully automated scientific discovery in Machine Learning research, enabled by frontier LLMs. The fully automated process includes idea generation, experiment design, execution, and visualising and writing up the results into a full manuscript (a minimal illustrative sketch of this loop follows after the list of papers).
Faithfulness Hallucination Detection in Healthcare AI. This study investigates faithfulness hallucinations in medical record summaries generated by LLMs such as GPT-4o and Llama-3. The detection framework, developed in collaboration with clinicians and supported by a web-based annotation tool, categorises five types of medical event hallucination.
The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks From Artificial Intelligence
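As flagged above, here is a minimal, hypothetical Python sketch of the four-stage loop described in The AI Scientist paper (idea generation, experiment design, execution, write-up). The `ask_llm` helper and the prompts are illustrative assumptions, not the authors' actual code:

```python
from typing import Callable

# Hypothetical outline of a fully automated research loop, loosely following
# the stages described in "The AI Scientist". `ask_llm` stands in for any
# chat-completion call; nothing here mirrors the authors' real implementation.


def run_research_loop(ask_llm: Callable[[str], str], topic: str) -> str:
    # 1. Idea generation: propose a concrete, testable research idea.
    idea = ask_llm(f"Propose one novel, testable ML research idea about {topic}.")

    # 2. Experiment design: turn the idea into a runnable experiment plan.
    plan = ask_llm(f"Design a small experiment (data, model, metric) to test: {idea}")

    # 3. Execution: in the real system the plan is turned into code and run in an
    #    isolated environment; here we only ask the model to summarise the outcome.
    results = ask_llm(f"Summarise the plausible results of running this plan: {plan}")

    # 4. Write-up: draft a manuscript from the idea, plan and results.
    return ask_llm(
        "Write a short paper with Abstract, Method, Results and Discussion sections.\n"
        f"Idea: {idea}\nPlan: {plan}\nResults: {results}"
    )
```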