State of AI 2024

Image: Mark Zuckerberg as a hippie in an ASI doomsday scenario in his secret Hawaii compound.

People talk about the "State of AI" a lot these days, but which annual report is the best? In my mind it's actually the AI Index Report by Stanford HAI.

Short on time? No problem: this summary is mostly infographics.

Hello Everyone,

This week I’m working on summaries of AI-related reports for my paid subscribers. I’m also launching my new subreddit, "Singularity News," where I will explore breaking emerging-tech news, if that’s your thing.

I’m a news fanatic, always analyzing the latest in exponential and emerging tech like robotics, semiconductors and AI chips, quantum, synthetic biology, and other fields, as well as Big Tech in AI. I’m so passionate about this stuff that I’m one of the more active emerging-tech nerds on Substack!

  • I’m always trying to get high-quality guest contributors to talk about subjects related to AI’s development.
  • I’m eager to discover new ways of covering geopolitical and sovereign AI angles, and increasingly policy.

This week I’m also going through and thinking a lot about the HAI’s AI Index Report. It’s one of the best “State of AI” reports.

The Stanford Institute for Human-Centered Artificial Intelligence (HAI)

The AI Index 2024 is out.

Read the Report (500+ Slides)

What is Stanford HAI?

I resonate with this organization. The mission of HAI is to advance AI research, education, policy and practice to improve the human condition. This is the 7th edition of this AI Index.

Source. https://aiindex.stanford.edu/report/

What is It?

“The AI Index is an initiative to track, collate, distill and visualize data relating to artificial intelligence. It aims to become a comprehensive resource of data and analysis for policymakers, researchers, executives and other interested parties who want to be able to rapidly develop intuitions about the complex field of AI.”

  • This year’s report measures and evaluates the rapid rate of AI advancement from research and development to technical performance and ethics, the economy and education, AI policy and governance, diversity, public opinion and more.

Chart of the Day

Generative AI investment skyrocketed in 2023

  • Question: will massive investment in generative AI deliver returns (ROI) in revenue and make the world a better place?

As of April 2024, I propose we don’t actually know the answer.

We do appear to stand on a precipice of sorts.

Hey, Harry Potter, Is That You?

The tremendous progress of generative models is evident when you compare how, over time, Midjourney has responded to the prompt: "a hyper-realistic image of Harry Potter." - Source.

Midjourney’s Results Chart a Generative AI Voyage to Hyper-Realism

Vision for 2024 - Progress in AI

  • A decade ago, the best AI systems in the world were unable to classify objects in images at a human level. Today, AI systems routinely exceed human performance on standard benchmarks.
  • Progress accelerated in 2023. New state-of-the-art systems like GPT-4, Gemini, and Claude 3 are impressively multimodal.

Companies are racing to build AI-based products, and AI is increasingly being used by the general public.

But current AI technology still has significant problems. It cannot reliably deal with facts, perform complex reasoning, or explain its conclusions.

  • AI faces two interrelated futures. In the first, technology continues to improve and is increasingly used, with major consequences for productivity and employment. It can be put to both good and bad uses. In the second, the adoption of AI is constrained by the limitations of the technology.
  • As AI rapidly evolves, the AI Index aims to help the AI community, policymakers, business leaders, journalists, and the general public navigate this complex landscape.

By comprehensively monitoring the AI ecosystem, the Index serves as an important resource for understanding this transformative technological force.

On the technical front, this year’s AI Index reports that the number of new large language models released worldwide in 2023 doubled over the previous year.

  • Although global private investment in AI decreased for the second consecutive year, investment in generative AI skyrocketed.

The AI Index program is part of the Stanford Institute for Human-Centered Artificial Intelligence, led by Denning Co-Directors John Etchemendy and Fei-Fei Li.

Stanford HAI relies on philanthropic support as it pursues its mission to advance AI research, education, policy, and practice to improve the human condition.

A growing annual report and AI Index for all of Human Civilization

The AI Index, led by an independent and interdisciplinary group of AI thought leaders from industry and academia, is one of the most comprehensive annual reports on progress in AI; it tracks trends in research and development, technical performance, responsible AI, economics, policy, public opinion, and more.

Join me in exploring this massive document and the key summaries. Take this in slowly and at your leisure; it’s a lot of information.

This article is simply a holistic introduction to the subject material and is the first in a series of posts.

Subscribe now

Note: This report summary is best viewed on a big PC screen and read in a web browser, not in an email inbox. Also, a reminder to ↗ increase the font size if you are on the Substack app.

Get the app

How do you summarize a report with over 500 slides? For this introduction, I want it to be as visual and skimmable as possible.

Essentially, AI reports, and "State of AI" reports especially, help us frame where we are and where we are going in AI. The AI Index is certainly recognized globally as one of the most credible and authoritative sources for data and insights on artificial intelligence.

Industry Invests in AI

Business leaders have certainly paid attention. Global private investment in generative AI skyrocketed, increasing from roughly $3 billion in 2022 to $25 billion in 2023. Nearly 80 percent of Fortune 500 earnings calls mentioned AI, more than ever before.

While AI private investment has steadily dropped since 2021, generative AI is gaining steam. In 2023, the sector attracted $25.2 billion, nearly ninefold the investment of 2022 and about 30 times the amount from 2019 (call it the ChatGPT effect). Generative AI accounted for over a quarter of all AI-related private investments in 2023.
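To make the growth multiples above concrete, here is a quick back-of-the-envelope check in Python. It uses the approximate figures quoted in this section; the 2019 value is back-solved from the "about 30 times" claim, so all numbers are rough rather than exact report values:

```python
# Generative AI private investment, USD billions (approximate figures
# quoted above; the 2019 value is an assumption implied by "about 30 times").
inv_2019 = 0.8
inv_2022 = 3.0
inv_2023 = 25.2

growth_2022_2023 = inv_2023 / inv_2022  # ~8.4x, i.e. "nearly ninefold"
growth_2019_2023 = inv_2023 / inv_2019  # ~31.5x, i.e. "about 30 times"

print(f"2022 -> 2023: {growth_2022_2023:.1f}x")
print(f"2019 -> 2023: {growth_2019_2023:.1f}x")
```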

U.S. Wins $$ Race

And again, in 2023 the United States dominated AI private investment. In 2023, the $67.2 billion invested in the U.S. was roughly 8.7 times greater than the amount invested in the next highest country, China, and 17.8 times the amount invested in the United Kingdom. That lineup looks the same when zooming out: Cumulatively since 2013, the United States leads investments at $335.2 billion, followed by China with $103.7 billion, and the United Kingdom at $22.3 billion.
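For readers who like to sanity-check the ratios, the implied 2023 dollar figures for China and the U.K. can be back-solved from the numbers above. These are rough estimates derived from rounded ratios, not figures taken from the report:

```python
# 2023 AI private investment, USD billions, implied by the ratios quoted above.
us = 67.2
china = us / 8.7    # U.S. was ~8.7x China
uk = us / 17.8      # U.S. was ~17.8x the U.K.

print(f"Implied China investment: ~${china:.1f}B")  # roughly $7.7B
print(f"Implied U.K. investment:  ~${uk:.1f}B")     # roughly $3.8B
```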

American BigTech Dominates

This past year Google edged out other industry players in releasing the most models, including Gemini and RT-2. In fact, since 2019, Google has led in releasing the most foundation models, with a total of 40, followed by OpenAI with 20. This year we are expecting Llama-3 by Meta next month and GPT-5 later this year, as well as a rumored LLM by Amazon called Olympus.

Where is Corporate Adoption? And What’s Next?

More companies are implementing AI in some part of their business: In surveys, 55% of organizations said they were using AI in 2023, up from 50% in 2022 and 20% in 2017. Businesses report using AI to automate contact centers, personalize content, and acquire new customers. However, a new trend many are interested in is called agentic AI.

Agentic AI, the Next Frontier?

Agentic AI also saw significant gains. Researchers introduced several new benchmarks — including AgentBench and MLAgentBench — that test how well AI models can operate semi-autonomously. Although there are already promising signs that AI agents can serve as useful computer science assistants, they still struggle with more complex tasks like conducting our online shopping, managing our households, or independently operating our computers. Still, the introduction of the aforementioned benchmarks suggests that researchers are prioritizing this new field of AI research.

AI is Claiming Better than Human Performance in More Areas

I consider this somewhat controversial.

Move Over, Human - Oh Really?

As of 2023, AI has hit “human-level performance” on many significant AI benchmarks, from those testing reading comprehension to visual reasoning. Still, it falls just short on some benchmarks like competition-level math. Because AI has been blasting past so many standard benchmarks, AI scholars have had to create new and more difficult challenges. This year’s index also tracked several of these new benchmarks, including those for tasks in coding, advanced reasoning, and agentic behavior.

The benchmarks used by machine learning scientists, however, are not related to real-life situations outside of an LLM chatbot. To say that the term AGI is ill-defined is an understatement.

Western Nations Increasingly Mistrust AI

Download Trust in AI

This report by KPMG is fairly detailed for a number of nations.

  • deepfakes
  • hallucinations
  • proliferation of synthetic media
  • more bots related to advertising
  • more misinformation related to elections
  • centralized models and corporations abusing their power.

There are many reasons to mistrust AI in 2024, and this will likely continue to be the case. But it also has to do with trust in AI products in our day-to-day lives.

While the Commonwealth Worries About AI Products

When asked in a survey whether AI products and services make them nervous, 69% of Aussies and 65% of Brits said yes. Japan is the least worried about AI products, at 23%. The difference between Western nations and developing nations is tremendous.

Training the Best Models Will Soon Cost Billions, and Costs Will Likely Keep Rising

Anthropic’s CEO has made some interesting comments around this recently. A $10 Billion model by 2026 perhaps?

  • Given the rising cost of compute to train the best models of the future, few will actually be able to compete.
  • OpenAI, Google, Meta, the Beijing Academy of AI, Amazon, and maybe Anthropic or Mistral if they get vastly more funding?
  • So even as open-source and "open-weight" models become more efficient, the best models will become more expensive, making it a more exclusive race.

Generative AI Fueling Science and Emerging Tech

While large language models captured the world’s attention last year, these were not the only technical advancements at the frontier of AI. Promising developments in generation, robotics, agentic AI, science, and medicine show that AI will be much more than just a tool for answering queries and writing cover letters.

Last year's AI Index first noted AI’s use in accelerating science. In 2023, significant new systems included GraphCast, a model that can deliver extremely accurate 10-day weather predictions in under a minute; GNoME, which unveiled over 2 million new crystal structures previously overlooked by human researchers; and AlphaMissense, which successfully classified around 89 percent of 71 million possible missense mutations.

Regulation Ramps Up

Policymakers are also responding. There were 2,175 mentions of AI in global legislative proceedings in 2023, nearly double the number from the previous year. U.S. regulators passed 25 AI-related regulations in the last year, a new record. Some of these regulations included copyright guidance for generative AI material and cybersecurity risk management frameworks. More broadly, 2023 was a banner year for major AI policy; the EU proposed its comprehensive AI Act and President Biden unveiled the Executive Order on AI.

Public Sentiment Dips Negative

The general public took notice of AI, too, and responded with nervousness. Survey data from Pew suggests that in 2022 only 38 percent of Americans reported feeling more concerned than excited about AI technology. In 2023, that figure rose to 52 percent. Across the globe, 52 percent of surveyed respondents felt nervous about AI, a 13 percentage-point increase from the previous year.

Privacy and Copyright Issues Loom Large

"Obtaining genuine and informed consent for training data collection is especially challenging with LLMs, which rely on massive amounts of data," the report says. "In many cases, users are unaware of how their data is being used or the extent of its collection. Therefore, it is important to ensure transparency around data collection practice."

Follow Stanford HAI on Twitter/X

The #AIIndex2024 shows business leaders are paying attention, while regulations are ramping up.

Download the full Report

Top Takeaways

  1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.
  2. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.
  3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI’s GPT-4 used an estimated $78 million worth of compute to train, while Google’s Gemini Ultra cost $191 million for compute.
  4. The United States leads China (some claim by as much as 2 years now), the EU, and the U.K. as the leading source of top AI models. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union’s 21 and China’s 15.
  5. Robust and standardized evaluations for LLM responsibility are seriously lacking. New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.
  6. Generative AI investment skyrockets (in 2023 relative to 2022 that is). Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.
  7. The data is in: AI makes workers more productive and leads to higher quality work. In 2023, several studies assessed AI’s impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI’s potential to bridge the skill gap between low- and high-skilled workers. Still, other studies caution that using AI without proper oversight can lead to diminished performance.
  8. Scientific progress accelerates even further, thanks to AI. In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications— from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.
  9. The number of AI regulations in the United States sharply increases. The number of AI-related regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.
  10. People across the globe are more cognizant of AI’s potential impact—and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 37% in 2022.

Is the pace of Generative AI Accelerating?

Although AI was already showing exciting new capabilities in 2022, in 2023 the technology accelerated. Could the evolution of LLMs and more efficient open-weight SLMs be compounding this further in 2024?

Is Google Really Behind OpenAI in LLM Innovation?

Follow GDM on Twitter/X

What Does the AI Boom Look Like for AI Careers?

AI hiring has been growing at least slightly in most regions around the world, with Hong Kong leading the pack; however, AI careers are losing ground compared with the overall job market, according to the 2024 AI Index Report.

Stanford’s AI Index looks at the performance of AI models, investment, research, and regulations. But tucked within the 385 pages of the 2024 Index are several insights into AI career trends, based on data from LinkedIn and Lightcast, a labor market analytics firm.

AI Hiring is up but not where you’d expect

But don’t get too excited—as a share of overall labor demand, AI jobs are slipping

This snapshot of 2023 busts some myths around the supply-demand market for AI roles.

Hype Around Generative AI Obscures Real Demand

AI productivity gains may be smaller than you’re expecting

According to ING they may be way smaller than you might have expected in 2024.

Source.

Rise of Fear, Uncertainty, and Doubt Related to Generative AI

The share of people who think AI will “dramatically” affect their lives in the next 3-5 years rose from 60 percent to 66 percent globally. Over half now express nervousness about AI products and services.

I believe there’s misinformation about both the potential for job disruption and the productivity gains. Certainly, hallucinating, confabulating closed-source models have limited viability for most enterprise settings. Just don’t tell the shareholders listening to earnings calls.

Self-Reported AI Benchmarks Are as Ill-Defined as AGI

A few standard benchmarks for general capabilities evaluation were commonly used by these developers, such as MMLU, HellaSwag, ARC Challenge, Codex HumanEval, and GSM8K. However, consistency was lacking in the reporting of responsible AI benchmarks. Unlike general capability evaluations, there is no universally accepted set of responsible AI benchmarks used by leading model developers.

With so many new models flooding into Hugging Face, which now claims to host over one million models, benchmarking is becoming a chaotic sea of bragging and credibility-seeking.

Evolutions in Synthetic Media

Language Insights Power Non-Language Models

The last year also saw exciting developments outside of language modeling. In 2023, researchers used insights from building LLMs, specifically transformer architectures for next-token prediction, to drive progress in non-language domains. Examples include Emu Video (video generation) and UniAudio (music generation). You can now make videos and generate music with AI models powered by some of the same ideas that brought you ChatGPT.

It’s not clear what synthetic video, music, and culture might look like next year, never mind in ten years.

Closed Models Still Outperform Open-Source and Open-Weight Models in 2023 & First half of 2024

Household Robots That Tell Jokes

While 2023 set some foundations, in 2024 we expect to see real gains from generative AI converging with robotics.

Robotics is another domain recently accelerated by language modeling techniques. Two of the most prominent robotic models released in 2023, PaLM-E and RT-2, were both trained on combined corpora of language and robotic trajectories data. Unlike many of its robotic predecessors, PaLM-E can engage in manipulation tasks that involve some degree of reasoning — for example, sorting blocks by color. More impressive, it can also caption images, generate haikus, and tell jokes. RT-2, on the other hand, is especially skilled at manipulating in never-before-seen environments. Both these systems are promising steps toward the development of more general robotic assistants that can intelligently maneuver in the real world and assist humans in tasks like basic housework.

Carbon and Energy Footprint of Biggest Closed Source Models will Become a Serious Problem

The AI Index team also estimated the carbon footprint of certain large language models. The report notes that the variance between models is due to factors including model size, data center energy efficiency, and the carbon intensity of energy grids. Another chart in the report (not included here) shows a first guess at emissions related to inference—when a model is doing the work it was trained for—and calls for more disclosures on this topic. As the report notes: “While the per-query emissions of inference may be relatively low, the total impact can surpass that of training when models are queried thousands, if not millions, of times daily.”
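The report’s point about inference eventually overtaking training is easy to illustrate with a break-even calculation. The numbers below are hypothetical placeholders I chose for illustration, not figures from the report:

```python
# Break-even between one-time training emissions and ongoing inference emissions.
# Both values are hypothetical, for illustration only.
training_emissions_t = 500.0   # tonnes CO2e to train the model (hypothetical)
per_query_g = 4.0              # grams CO2e per inference query (hypothetical)

# Convert tonnes to grams, then divide by the per-query cost.
break_even_queries = training_emissions_t * 1_000_000 / per_query_g
print(f"Inference overtakes training after ~{break_even_queries:,.0f} queries")
```

At, say, 10 million queries a day, that hypothetical threshold would be crossed in under two weeks, which is why the report’s call for per-query inference disclosures matters.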

Many believe new energy solutions will be needed to power the AI datacenters and supercomputers of the future. Microsoft and Sam Altman are aligned in their investments here (e.g., Helion).

More PhDs are flocking to “Industry”

Extreme Centralization of AI/ML Talent Stockpiles (a symptom of Monopoly Capitalism)

In 2022 (the most recent year for which the Index has data), 70 percent of new AI PhDs in North America took jobs in industry. It’s a continuation of a trend that’s been playing out over the last few years. Competition for top machine learning researchers and engineers means that even in “industry,” only a few startups and firms will be able to compete for the top 1% of these PhDs in generative AI and machine learning positions.

AI Whitewashing on Earnings Calls was Rampant in 2023 and in 2024

  • This is one of the symptoms of a textbook AI bubble.
  • Nvidia has inflated the entire semiconductor class of stocks to unreasonable levels in 2024 which will invariably lead to investor pain.

AI Papers have Been Exploding in the 2020s

Machine Learning Itself and Generative AI Became the Key Topic Since 2017

AI Related Patents Began to take off in 2018

  • The increases since the birth of generative AI (circa 2017) are notable.

China has Applied and Been Granted Way More Patents than the U.S. or Europe

The APAC Region is a Hotbed for AI Patents

The R&D Rich: AI Patents Per Capita

  1. South Korea
  2. Luxembourg
  3. United States
  4. Japan
  5. China
  6. Singapore
  7. Australia
  8. Canada
  9. Germany
  10. Denmark

Singapore, South Korea, and China experienced the greatest increase in AI patenting per capita between 2012 and 2022 according to the data.

Where was AI Patent Growth Taking Place at the Fastest Rates in the Decade of 2012 to 2022?

  1. Singapore
  2. South Korea
  3. China (all at substantially higher rates than in Europe or North America).

Here ends our first article of coverage, at slide 45 of 502.

Browse the Report

I hope this has given you a broad introduction to the topic of AI Index 2024. Some opinions in the text are my own.







Credits

This report was made possible by: Nestor Maslej, Loredana Fattorini, Raymond Perrault, Vanessa Parli, Anka Reuel, Erik Brynjolfsson, John Etchemendy, Katrina Ligett, Terah Lyons, James Manyika, Juan Carlos Niebles, Yoav Shoham, Russell Wald, and Jack Clark, “The AI Index 2024 Annual Report,” AI Index Steering Committee, Institute for Human-Centered AI, Stanford University, Stanford, CA, April 2024.

