Stanford's HAI report is out
(c) Stanford, HAI, https://aiindex.stanford.edu/report/


HAI (Human-Centered Artificial Intelligence) at Stanford University has just released its 7th AI Index report.

Here is the link: https://aiindex.stanford.edu/report/

The summary points (1) and (7) below are well established by now: AI, especially generative AI, can improve worker productivity for certain tasks but not for others. We also see that lower-skilled workers benefit more from AI assistance than higher-skilled workers, partly because the AI systems are trained with the explicit or tacit knowledge of the expert workers (the so-called levelling hypothesis).

While this US-based university reports a US lead over China and Europe in the number of notable AI models developed (4), China leads the rest of the world in robots installed and in data availability, owing to different levels of privacy concern.

The report indicates that people are increasingly concerned about the impact of AI (10). That's a good thing, because we should be concerned. As I have said before, AI is not just a tool; in combination with other technologies, it is a potential threat to societies, democracies, and humanity. That is why AI regulations have to be established not just in the US (9) but worldwide, in the hope that we are not already too late.



The following text is a direct quote from the cited website:


“The HAI report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). The mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.


The 2024 Index is the most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, HAI has broadened the scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine. The top take-aways are:

  1. AI beats humans on some tasks, but not on all.

AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.

  2. Industry continues to dominate frontier AI research.

In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.

  3. Frontier models get way more expensive.

According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.

  4. The United States leads China, the EU, and the U.K. as the leading source of top AI models.

In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.

  5. Robust and standardized evaluations for LLM responsibility are seriously lacking.

New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.

  6. Generative AI investment skyrockets.

Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.

  7. The data is in: AI makes workers more productive and leads to higher quality work.

In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still other studies caution that using AI without proper oversight can lead to diminished performance.

  8. Scientific progress accelerates even further, thanks to AI.

In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications—from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.

  9. The number of AI regulations in the United States sharply increases.

The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.

  10. People across the globe are more cognizant of AI’s potential impact—and more nervous.

A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 38% in 2022.”
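As a quick sanity check on two of the quoted figures, the 2022 baselines can be back-computed from the stated growth rates. Note that these baseline values are inferred, not given in the quoted text, so treat them as approximations:

```python
# Back-compute the implied 2022 baselines from the report's 2023 figures.
# These baselines are inferred from the stated growth rates; the quoted
# text does not give them explicitly.

regs_2023 = 25                       # US AI-related regulations in 2023
growth = 0.563                       # "grew by 56.3%" last year
regs_2022 = regs_2023 / (1 + growth)
print(f"Implied 2022 US AI regulations: {regs_2022:.1f}")        # ~16

gen_ai_2023 = 25.2                   # generative AI funding, $ billions
gen_ai_2022 = gen_ai_2023 / 8        # "nearly octupling" => roughly 8x
print(f"Implied 2022 generative AI funding: ${gen_ai_2022:.2f}B")  # ~3.15
```

Both implied baselines are plausible round numbers, which suggests the quoted percentages are internally consistent.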


Ciprian G. Florea

Transformation Leader, Operations Strategy Advisor - End-to-end Supply Chain, Digitalization, M&A

7 months ago

Thank you Stefan Michel for sharing! I am convinced the productivity increase for low-skilled workers is an immediate benefit where resources are scarce. Equally, it is a risk of mediocrity and a limitation on creativity for high-skilled workers. Once again we nourish a lazy "go with the flow". The biggest risk for me is the manipulation of people: how do we keep democracies alive in a "deep-faked" environment, with people who are increasingly easy to manipulate? We may need a different kind of education! I look forward to seeing other opinions.

John Edwards

AI Experts - Join our Network of AI Speakers, Consultants and AI Solution Providers. Message me for info.

7 months ago

Such an insightful report, looking forward to discussing the implications.


This yearly publication is really great. Some of my recent articles on digital infrastructure and open technologies have benefited from it. You are right to emphasize, Stefan Michel, that behind an AI model there is a wide range of interconnected technologies. I believe the issue is not who wins on a single technology but who depends on whom for what. I also hope that regulations will focus on defining the applications where we need to keep humans in the loop for decisions, and on the associated transparency.

Michael Watkins

Author of The Six Disciplines of Strategic Thinking | Leadership transition acceleration expert | Best-selling author of The First 90 Days | Speaker on leadership and organizational transformation

7 months ago

Thank you for highlighting this, Stefan Michel. I think we need to be careful with statements such as "AI beats humans on some tasks, but not on all." We need to add the word "yet" to every such statement, and quickly add "And for how long?" If, as Anthropic CEO Dario Amodei credibly asserted in his recent New York Times interview with Ezra Klein, AI capability is growing exponentially, then it will be a short time before AI beats humans on all cognitive tasks. I'm not a doomer, but the economic and social impacts will be vast and fast in coming.
