AI 2030 Evangelist Digest 010 - AI4Future: Top AI News (October 7-13, 2024)

By Kate Shcheglova-Goldfinch

AI-gov Lead & Research Affiliate at CJBS and regulatory innovations consultant, AI 2030 Evangelist

Image generated using FLUX.1 by Black Forest Labs with a detailed prompt, without modifications. (Photo credit: AI4FUTURE & FLUX.1)

ORIGINAL LINK TO THIS NEWSLETTER

https://www.dhirubhai.net/pulse/ai4future-top-ai-news-7-13-october-kate-shcheglova-goldfinch-msc-mba-drqfe/


This week has truly been a “Nobel moment” for the AI industry, underscoring the profound importance of scientific breakthroughs in this and adjacent fields for humanity. The Nobel Prize in Physics was awarded to AI’s “Godfather” Geoffrey Hinton and to John Hopfield, while Demis Hassabis, head of Google DeepMind, shared the prize in Chemistry. This mirrors last week’s trend of top scientists taking key roles across the commercial AI sector, big tech, and the shaping of AI regulation.


The market continues to see intensifying competition, with AMD preparing to release an AI chip to rival Nvidia’s Blackwell.


The trend towards widespread AI adoption has been evident across sectors, including the regulatory-actuarial field (the UK’s Financial Reporting Council published revised Technical Actuarial Standards to support the growing use of AI and machine learning in actuarial work) and academia (a study from the Centre for Decent Work and Industry revealed that 71% of staff in Australian universities now use AI). At boardroom level, however, the outlook is less promising: a recent survey by Deloitte’s Global Boardroom Programme, covering nearly 500 directors and executives across 57 countries, found that AI is not being discussed at 45% of board meetings. AI adoption is also shedding light on the role AI skills play in market competitiveness: PwC UK has launched a review of its operations that includes creating a dedicated technology and AI division, a reorganisation affecting around 2,700 staff and partners.


A round-up of this week’s key developments.


PwC launches UK operations overhaul to include standalone tech and AI unit


The new head of PwC UK has initiated a review of the firm’s operations, which includes the establishment of a dedicated technology and artificial intelligence division. Management has acknowledged that this may be “unsettling” for staff. As the company informed its employees last week, the reorganisation will impact around 2,700 staff and partners, and forms part of a new strategy aimed at positioning the firm as a “market leader,” according to a document seen by the Financial Times.


More


71% of Australian university staff are using AI

A new study by the Centre for Decent Work and Industry surveyed more than 3,000 academic and professional staff at Australian universities about how they use generative AI. It included academics, sessional academics (employed on a session-by-session basis) and professional staff. Overall, 71% of respondents said they had used generative AI for their university work. Around one-third of those using AI had used only one tool, and a further quarter had used two. A small number of staff (around 4%) had used ten tools or more. General-purpose AI tools were by far the most frequently reported: ChatGPT was used by 88% of AI users and Microsoft Copilot by 37%.

More


AI’s Role in Revolutionizing Anti-Money Laundering Efforts

As financial institutions defend against increasingly sophisticated criminal tactics, AI is becoming a critical differentiator. This transformation is particularly notable in the anti-money laundering (AML) space. Experts predict the AML market will balloon to $16.37 billion by 2033, up from $3.18 billion in 2023 (a compound annual growth rate of roughly 18%), and AI will be a key driver of that growth. AI brings three key advantages to AML: enhanced data processing, intelligent risk analysis, and streamlined due diligence.
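
To make the “intelligent risk analysis” point concrete, here is a minimal sketch of anomaly-based transaction scoring, assuming scikit-learn’s IsolationForest is available; every feature, figure, and threshold below is an illustrative assumption, not drawn from the article or from any production AML system.

```python
# Illustrative only: a minimal anomaly-scoring sketch for transaction
# monitoring. Features and values are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per transaction: log10(amount), hour of day,
# and number of transactions by the same account that day.
normal = np.column_stack([
    rng.normal(4.0, 1.0, 1000),   # amounts clustered around $10k
    rng.normal(13, 3, 1000),      # mostly business hours
    rng.poisson(2, 1000),         # a few transactions per day
])
suspicious = np.array([
    [6.2, 3, 40],   # very large amount, 3 a.m., rapid-fire activity
    [5.9, 2, 35],
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Lower scores = more anomalous; the lowest-scoring transactions are
# flagged for enhanced due diligence by a human analyst.
scores = model.score_samples(np.vstack([normal[:5], suspicious]))
for i, s in enumerate(scores):
    print(f"transaction {i}: anomaly score {s:.3f}")
```

In practice such scores would only triage transactions for human review, feeding the due-diligence workflow rather than replacing it.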

More


'Godfather of AI' shares Nobel Physics Prize

The Nobel Prize in Physics has been awarded to two scientists, Geoffrey Hinton and John Hopfield, for their work on machine learning. British-Canadian Professor Hinton is sometimes referred to as the "Godfather of AI" and said he was flabbergasted. He resigned from Google in 2023 and has warned about the dangers of machines that could outsmart humans. The announcement was made by the Royal Swedish Academy of Sciences at a press conference in Stockholm, Sweden. American Professor John Hopfield, 91, is based at Princeton University in the US, and Prof Hinton, 76, is a professor at the University of Toronto in Canada. The Academy listed some of the crucial applications of the two scientists’ work, including improving climate modelling, the development of solar cells, and the analysis of medical images.

More


Google DeepMind boss wins Nobel for proteins breakthrough

British computer scientist Professor Demis Hassabis has won a share of the Nobel Prize in Chemistry for "revolutionary" work on proteins, the building blocks of life. Prof Hassabis, 48, co-founded the artificial intelligence (AI) company that became Google DeepMind. Professor John Jumper, 39, who worked with Prof Hassabis on the breakthrough, shares the award with US-based Professor David Baker, 60. Proteins are found in every cell in the human body, and better understanding them has driven huge breakthroughs in medicine. Speaking about the prize, Prof Jumper said it felt “so unreal at this moment” but that “the prize represents the promise of computational biology”.

More


The UK’s Financial Reporting Council (FRC) has published revised Technical Actuarial Standards (TAS) to support the growing use of artificial intelligence and machine learning (AI/ML) techniques in actuarial work


The updated guidelines aim to assist practitioners in applying the principles of Technical Actuarial Standard 100 (TAS 100) when using these methods, ensuring the continued quality of actuarial work in this rapidly evolving field. The new guidance provides examples relating to model bias, understanding and communication, governance, and stability when using AI/ML models in technical actuarial work.

More


Governance of AI: A critical imperative for today’s boards

In a new Deloitte Global Boardroom Program survey of nearly 500 board directors and executives across 57 countries, 45% say AI is not yet on the boardroom agenda. Over three-quarters of respondents (79%) say their boards have limited, minimal, or no knowledge or experience with AI. Just 2% say their boards are highly knowledgeable and experienced with AI.

More


AMD launches AI chip to rival Nvidia's Blackwell

AMD launched a new artificial intelligence chip that takes direct aim at Nvidia's data center graphics processors, known as GPUs. The chip, the Instinct MI325X, will start production before the end of 2024. If AMD's AI chips are seen by developers and cloud giants as a close substitute for Nvidia's products, it could put pricing pressure on Nvidia, which has enjoyed roughly 75% gross margins while its GPUs have been in high demand over the past year.

More

TikTok to slash 'hundreds' of jobs as it shifts towards AI: UK staff receive email about 'difficult' cuts

TikTok is axing 'several hundred' jobs in the UK and Malaysia as part of a drive to use more artificial intelligence in its content moderation. Around 125 people have been told they might be made redundant, according to the Communication Workers Union. TikTok employs about 500 workers in its UK moderation division, and an internal email to staff seen by MailOnline warned of 'difficult' decisions.

More


Apple's study proves that LLM-based AI models are flawed because they cannot reason

A new paper from Apple's artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills. The group has proposed a new benchmark, GSM-Symbolic, to help others measure the reasoning capabilities of various large language models (LLMs). Their initial testing reveals that slight changes in the wording of queries can result in significantly different answers, undermining the reliability of the models.

The group investigated the "fragility" of mathematical reasoning by adding contextual information to their queries that a human could understand, but which should not affect the fundamental mathematics of the solution. This resulted in varying answers, which shouldn't happen.
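
To make the testing idea concrete, here is a minimal hedged sketch of this kind of robustness check: the same word problem is templated, surface details (names, numbers) are varied, and an irrelevant clause is appended; a robust reasoner should return the same answer for every variant. The template, distractor clause, and ask_model stub are illustrative assumptions, not Apple's actual benchmark code.

```python
# A minimal sketch of GSM-Symbolic-style robustness testing: vary the
# surface form of a grade-school math problem and check whether a model's
# answer changes. `ask_model` is a placeholder for any LLM API call.
from itertools import product

TEMPLATE = ("{name} picks {a} apples on Monday and {b} apples on Tuesday. "
            "{distractor} How many apples does {name} have in total?")

names = ["Sophie", "Liam"]
values = [(31, 17), (45, 23)]
distractors = [
    "",  # original problem
    "Five of the apples are slightly smaller than average.",  # irrelevant
]

def ask_model(question: str) -> int:
    """Placeholder: call your LLM API here and parse an integer answer."""
    ...

def ground_truth(a: int, b: int) -> int:
    return a + b  # the distractor must not change this

for name, (a, b), d in product(names, values, distractors):
    q = TEMPLATE.format(name=name, a=a, b=b, distractor=d).replace("  ", " ")
    expected = ground_truth(a, b)
    # answer = ask_model(q)
    # A fragile model may subtract the "smaller" apples and answer a + b - 5.
    print(f"{q!r} -> expected {expected}")
```

Per the paper's finding, a model whose answers drift across such surface variants is pattern-matching rather than reasoning, which is exactly the fragility the benchmark is designed to expose.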

More


ABOUT Kate Shcheglova-Goldfinch

Kate has over 20 years of expert experience in the financial market, including 5 years as an EBRD (NBU) consultant on fintech projects, including the development of the NBU Fintech Strategy 2025 and the creation and launch of the NBU regulatory sandbox. She has extensive experience in creating and moderating educational programmes for the financial market and regulators on topics such as fintech, digital assets (blockchain, DeFi), open banking, open finance, and AI. Currently, she is focused on AI regulation at the global level and in Ukraine, particularly on ethical implementation in the financial sector, and is preparing to launch an educational programme on AI for regulatory institutions. She has successfully launched educational programmes with Cambridge Judge Business School over the past three years. Since 2019, Kate has been ranked in global lists such as TOP50 Fintech Global, TOP100 Women Thought Leaders, Influential Fintech Women UA and UK, TOP10 Regulatory Experts and Policy Makers UK, TOP3 UK Banker of the Year 2023 (Women award), and TOP100 Thought Leaders in Govtech by Thinkers360 (2024). She is an AI 2030 community fellow. In 2024, Kate was elected as a delegate of United Nations Women UK. Kate sees her mission as spreading innovative knowledge at all levels, including the professional financial and regulatory spheres, enhancing Ukrainian expertise through global collaborations, and improving the representation of women in the tech industry and the AI sector.


About AI 2030: AI 2030 is a member-based initiative aiming to harness the transformative power of AI to benefit humanity while minimizing its potential negative impact. Focused on Responsible AI, AI for All, and AI for Good, we aim to bridge awareness, talent, and resource gaps, enabling responsible AI adoption across the public and private sectors.


AI 2030 does not claim ownership of this newsletter; it is the intellectual property of its authors. AI 2030 disclaims all liability for the content, errors, or omissions within the newsletter. Readers are advised to use their own judgment when assessing the information presented.

Contact us at: [email protected]

Join our LinkedIn Group: https://lnkd.in/e_CrPkc

AI 2030 Summit Series: https://www.ai2030.org/

Sponsor our Summits: https://ai2030.org/sponsor/

Join the Movement: Become an AI 2030 Member: https://lnkd.in/gN5PcC
