AI and Electricity: A System Dynamics Approach - Explained (4/10) - "Limits To Growth" Scenario
Rémi Paccou
Sustainability Researcher | Energy System Analysis, Climate Change Mitigation, Sustainable AI/Digital, Data Centers & ICT | PhD Student at CIRED, Chair Prospective Modeling for Sustainable Development
Welcome to the fourth episode of our series "AI and Electricity Scenarios: A System Dynamics Approach".
Today, we'll explore the results of the "Limits To Growth" scenario, a perspective developed by the Demand Dynamics Analysts school of thought.
This scenario imagines a future where AI becomes powerful, but its growth is checked by real-world limitations. These limits could be things like scarce resources, social problems, or environmental damage. This scenario emphasizes the need to find a balance between AI progress and the well-being of our planet and society.
This scenario draws inspiration from the seminal 1972 report The Limits to Growth, led by Donella Meadows and commissioned by the Club of Rome. The report used computer models to simulate the consequences of exponential economic and population growth against finite resources. It highlighted the potential for economic collapse if growth continued unchecked, challenging prevailing notions of limitless progress.
General Analysis of Energy Consumption Trends (2025-2030)
The "Limits To Growth" scenario depicts a constrained trajectory for AI development, hindered by both endogenous and exogenous limitations. Energy consumption rises from 100 TWh in 2030 to 510 TWh by 2030, as shown in the exhibit below.
Traditional AI shows diverging growth rates in energy consumption between 2025 and 2030. Training energy consumption doubles from 20 TWh to 40 TWh, indicating increased computational demands, possibly due to more complex models or larger datasets. In contrast, inferencing energy consumption rises more modestly, from 18 TWh to 30 TWh, over the same period. This slower growth in inferencing energy use suggests some efficiency gains in the deployment and execution of AI models, potentially through improved hardware or optimized algorithms.
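To make these trajectories concrete, the short Python sketch below converts the scenario's endpoints into compound annual growth rates. The 20/40 and 18/30 TWh figures come from the scenario above; the five-year horizon is 2025-2030.

```python
# Compound annual growth rates implied by the scenario's 2025 -> 2030
# trajectories for traditional AI (figures taken from the narrative above).
def cagr(start_twh: float, end_twh: float, years: int) -> float:
    """Compound annual growth rate between two energy levels."""
    return (end_twh / start_twh) ** (1 / years) - 1

trajectories = {
    "Training (20 -> 40 TWh)":    (20.0, 40.0),
    "Inferencing (18 -> 30 TWh)": (18.0, 30.0),
}

for label, (start, end) in trajectories.items():
    print(f"{label}: {cagr(start, end, 5):.1%} per year")
# Training (20 -> 40 TWh):    14.9% per year
# Inferencing (18 -> 30 TWh): 10.8% per year
```

The gap between the two rates, roughly 15% versus 11% per year, is what the efficiency-gains interpretation above rests on.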
Overall, total AI energy use is projected to grow significantly even as the industry runs up against several fundamental constraints. In the training phase, Gen AI faces challenges such as grid power availability in key data center hubs, manufacturing bottlenecks for specialized AI chips, and data scarcity for large language models. The inferencing phase grapples with operational deployment issues, potential network latency problems, and the prospect of reduced consumer and industry adoption once the hype peaks, expected around 2026.
Key Insight #1
Generative AI training is likely to face constraints due to limitations in power availability, chip manufacturing, data scarcity, and cost challenges
In the Limits To Growth scenario, Gen AI development faces a three-pronged challenge: limited power, chip shortages, and data constraints. Together, these factors may limit AI advancement, making it increasingly difficult and concentrated among a select few industry players. The exponential growth in computational requirements for training large language models has led to unprecedented energy demands. Projections suggest that by 2030, data center campuses may require 1 to 5 GW to support training runs of 1e28 to 3e29 FLOP (where "e" denotes "times 10 to the power of"). This represents a staggering increase from GPT-4's estimated 2e25 FLOP, underscoring the escalating power needs of advanced AI models. Such energy-intensive processes raise concerns about sustainability and the feasibility of continued AI model scaling.
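The 1-5 GW figure can be sanity-checked with a back-of-envelope estimate. The sketch below is ours, not the report's: it assumes an H100-class accelerator sustaining roughly 4e14 effective FLOP/s (peak throughput discounted for realistic utilization), about 1.5 kW per accelerator once cooling and facility overhead are included, and a 100-day training run.

```python
# Back-of-envelope: campus power needed for a 1e28-FLOP training run.
# Hardware figures are illustrative assumptions, not sourced from the
# report: ~4e14 effective FLOP/s per H100-class GPU, ~1.5 kW per GPU
# including cooling and facility overhead.
TRAINING_FLOP = 1e28            # low end of the 2030 projection
RUN_DAYS = 100                  # assumed duration of the training run
EFFECTIVE_FLOPS_PER_GPU = 4e14  # assumed sustained throughput per GPU
KW_PER_GPU = 1.5                # assumed all-in power draw per GPU

seconds = RUN_DAYS * 86_400
required_flops = TRAINING_FLOP / seconds        # sustained FLOP/s needed
gpus = required_flops / EFFECTIVE_FLOPS_PER_GPU
power_gw = gpus * KW_PER_GPU / 1e6              # kW -> GW

print(f"GPUs needed: {gpus:,.0f}")
print(f"Campus power: {power_gw:.1f} GW")
# ~2.9 million GPUs, ~4.3 GW -- consistent with the 1-5 GW range
```

Pushing the same arithmetic toward 3e29 FLOP, or shortening the run, lands well above 5 GW, which is why power availability appears as a binding constraint in this scenario.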
Chip manufacturing bottlenecks further complicate this scenario, as the production of advanced AI chips is constrained by packaging and high-bandwidth memory capacities. While current estimates suggest a capacity of 100 million H100-equivalent GPUs, potentially supporting a 9e29 FLOP training run, projections vary widely, from 20 million to 400 million H100 equivalents, corresponding to 1e29 to 5e30 FLOP. This uncertainty in chip production capabilities adds another layer of complexity to future AI development.

Data scarcity emerges as another significant hurdle in this scenario. By 2030, available training data could range from 400 trillion to 20 quadrillion tokens, potentially enabling training runs of 6e28 to 2e32 FLOP. This estimate factors in the projected 50% growth of the indexed web by 2030 and the potential tripling of available data through multimodal learning incorporating image, video, and audio inputs. However, as models grow larger, finding high-quality, diverse data becomes increasingly challenging, potentially limiting further improvements in model performance.
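The mapping from token counts to feasible training FLOP can be reproduced with standard scaling heuristics. The sketch below rests on two assumptions of ours, used only to recover the order of magnitude of the bounds quoted above: the common C ≈ 6·N·D training-cost estimate and a Chinchilla-style token-to-parameter ratio of about 20.

```python
# Mapping a token budget D to a compute-optimal training budget C.
# Assumptions (ours, for illustration): C ~= 6 * N * D, with the
# Chinchilla-style rule of thumb D ~= 20 * N, hence C ~= 0.3 * D**2.
def compute_optimal_flop(tokens: float) -> float:
    params = tokens / 20          # compute-optimal model size
    return 6 * params * tokens    # standard 6ND training-cost estimate

for tokens in (400e12, 20e15):    # 400 trillion to 20 quadrillion tokens
    print(f"{tokens:.0e} tokens -> ~{compute_optimal_flop(tokens):.0e} FLOP")
# 4e+14 tokens -> ~5e+28 FLOP   (report: 6e28)
# 2e+16 tokens -> ~1e+32 FLOP   (report: 2e32)
```

The residual factor-of-roughly-two gaps are plausibly explained by different token-to-parameter ratios or overtraining assumptions in the underlying estimates.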
These constraints, combined with escalating training costs projected to exceed a billion dollars by 2027, may create substantial barriers to entry in the AI industry. As a result, AI development is likely to become concentrated among a few key players with the resources to overcome these challenges. This scenario, with its prohibitively high cost of generative AI training, means that very few companies will be capable of developing their own models. Consequently, the AI landscape may evolve into an oligopoly, where a handful of tech giants and well-funded organizations can afford to push the boundaries of AI technology. It could lead to reduced diversity in AI development and applications, increased concentration of AI technologies, and a potential slowdown in AI innovation due to limited competition. However, this concentration of AI capacities might also shift the focus from pure scale to efficiency and optimization of existing models, as well as drive greater emphasis on specialized, task-specific AI models that require fewer computational resources.
Key Insight #2
Generative AI inferencing growth is susceptible to potential constraints from power and infrastructure
While our findings suggest that global generative AI inference could reach 212 TWh by 2030, its development is constrained by power availability and infrastructure limitations. This scenario paints a picture of an AI landscape grappling with the challenges of scaling inference capabilities to meet growing demand. The sheer volume of compute required for inference workloads presents significant hurdles, even though these workloads can be more distributed than training.
The evolution of AI inference is limited by aggregate capacity in various regions and the rapid advancement of AI models. These limitations are underscored by ClearML surveys, which revealed that 52% of organizations were actively exploring alternatives to GPUs for inference in 2024, and that only 25% of organizations believe their GPU infrastructure achieves 85% utilization. Indeed, despite efforts to optimize GPU utilization, many data centers report underutilization during peak times. In this scenario, AI infrastructure efficiency could become a critical bottleneck. Underutilized GPUs may exacerbate energy consumption, with Gen AI queries potentially consuming four to five times more power than typical internet searches. Without significant efficiency breakthroughs, AI growth could be limited by energy availability and costs, potentially slowing adoption and development and creating significant barriers to the widespread scaling of generative AI inference capabilities.
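The link between utilization and wasted energy can be illustrated with a toy calculation. All figures in the sketch below are illustrative assumptions, not measurements: a 700 W inference GPU serving 10 queries per second at full load and idling at 30% of peak power. The point is the shape of the relationship, not the absolute numbers.

```python
# Toy model of why GPU underutilization inflates energy per query.
# Illustrative assumptions: 700 W peak, 10 queries/s at full load,
# idle draw at 30% of peak power.
PEAK_W, IDLE_FRACTION, FULL_LOAD_QPS = 700.0, 0.30, 10.0

def joules_per_query(utilization: float) -> float:
    """Average energy per query at a given utilization level (0 < u <= 1)."""
    idle_w = IDLE_FRACTION * PEAK_W
    avg_power_w = idle_w + utilization * (PEAK_W - idle_w)
    queries_per_s = utilization * FULL_LOAD_QPS
    return avg_power_w / queries_per_s

well_run, poorly_run = joules_per_query(0.85), joules_per_query(0.25)
print(f"85% utilization: {well_run:.0f} J/query")
print(f"25% utilization: {poorly_run:.0f} J/query "
      f"({poorly_run / well_run:.1f}x more energy per query)")
# 85% utilization: 74 J/query
# 25% utilization: 133 J/query (1.8x more energy per query)
```

Because idle power is amortized over fewer queries, a poorly utilized fleet in this toy setup spends nearly twice the energy per query of a well-utilized one.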
These challenges collectively contribute to a scenario where the growth of AI inference capabilities is constrained by physical and infrastructural limits. Organizations are increasingly hindered by external constraints such as limited energy resources and inadequate infrastructure. This may lead to a more reserved approach to AI deployment, with companies focusing on optimizing existing infrastructure and exploring more energy-efficient alternatives. The Limits To Growth scenario also implies potential regional disparities in AI inference capabilities: areas with more robust power infrastructure and cooler climates may have advantages in scaling their AI operations, potentially leading to geographical concentrations of AI inference capacity.
Key Insight #3
Generative AI deployment may be constrained in scaling due to adoption barriers and lack of proven Return On Investment
In the context of the Limits To Growth scenario, the generative AI market is approaching a critical inflection point. Despite the initial rapid adoption of Gen AI, particularly on the consumer side, the technology is encountering systemic barriers reminiscent of resource limitations in traditional growth models. The potential stagnation or decline in adoption rates reflects predictions of diminishing returns as generative AI technology reaches certain thresholds. Industrial companies face challenges in understanding and integrating generative AI into their performance and productivity processes. A Goldman Sachs report notes that high costs associated with generative AI adoption may lead to diminishing returns if organizations cannot effectively integrate these solutions.
This trend is further confirmed by Gartner's forecast that 30% of generative AI projects might be abandoned by 2025 due to a lack of ROI, highlighting a growing emphasis on demonstrable value in an increasingly cautious market. After 2026, these trends may become structural, marking the official end of the generative AI hype. While generative AI holds immense promise, with the potential to contribute up to $4.4 trillion annually to the global economy, many enterprises are struggling to move beyond the experimentation phase and demonstrate clear returns on investment. The Limits To Growth scenario aligns with Gartner's 2024 prediction, which cites factors such as poor data quality, inadequate risk controls, escalating costs, and unclear business value.
As the industry potentially enters Gartner’s “Trough of Disillusionment,” companies are being forced to reevaluate their AI strategies, focusing on more targeted, value-driven applications. As described in the forecast results, generative AI’s electricity use begins to plateau around 2029, stabilizing or even declining until 2035. This plateau may be contingent on the emergence of more efficient AI models or the next AI generation such as Meta’s world model.
In this scenario, initial barriers like power availability and infrastructure limitations are early signs of a broader shift in the AI landscape. As these constraints become more apparent, the industry is moving away from inflated expectations and toward a focus on demonstrable ROI. This shift is forcing organizations to adopt a more measured approach, prioritizing efficient AI deployment over speculative investment. This aligns with the Limits To Growth scenario, where the initial optimism surrounding AI is being tempered by the realities of resource constraints and the need for practical, application-driven AI.
Learnings for "Limits To Growth"
We observed that the "Limits To Growth" scenario, inspired by the 1972 report, highlights crucial considerations for the future of AI. We identified one key risk:
Reduced AI electricity consumption is not indicative of a sustainable and resilient development trajectory
It's important to note that reduced AI electricity consumption within the "Limits To Growth" scenario is not necessarily indicative of a sustainable and resilient development trajectory. The forecast evolution instead reflects the scenario's specific system dynamics: a constrained, limited growth trajectory shaped by multiple balancing feedback loops that impede AI's development.
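For readers who want to see the mechanism rather than just its outcome, here is a minimal stock-and-flow sketch in Python: one stock (AI electricity demand) driven by a reinforcing adoption loop and throttled by a balancing loop as demand approaches a resource ceiling. The ceiling and growth-rate parameters are illustrative assumptions tuned to land near the scenario's figures; this is a toy model, not the study's calibrated one.

```python
# Minimal stock-and-flow model of constrained ("Limits To Growth") growth.
# One stock: AI electricity demand. One reinforcing loop (adoption) and
# one balancing loop (remaining headroom under a resource ceiling).
# Parameters are illustrative assumptions, not the study's calibration.
CEILING_TWH = 600.0    # assumed combined ceiling from power, chip, data limits
GROWTH_RATE = 0.67     # assumed unconstrained growth rate, per year
STEPS_PER_YEAR = 10    # Euler integration resolution
DT = 1 / STEPS_PER_YEAR

demand_twh = 100.0     # 2025 starting point from the scenario
for year in range(2025, 2036):
    print(f"{year}: {demand_twh:5.0f} TWh")
    for _ in range(STEPS_PER_YEAR):
        headroom = 1 - demand_twh / CEILING_TWH        # balancing loop
        inflow = GROWTH_RATE * demand_twh * headroom   # reinforcing loop
        demand_twh += inflow * DT
# Demand climbs from 100 TWh (2025) to ~500 TWh by 2030, then flattens
# toward the ceiling -- the plateau described in the forecast above.
```

The resulting S-curve is not a prediction; it shows how balancing feedback alone, without any collapse, is enough to produce the plateau-then-stagnation pattern of this scenario.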
In the next episode, we will present the results for the scenario: "Abundance Without Boundaries" from the School of Thought: Techno-Efficiency Optimists. This scenario embodies the Jevons Paradox where improvements in AI efficiency paradoxically lead to increased overall energy consumption.
Looking forward to sharing this soon!