Microchips over megabytes
Understand what matters as AI enters your organisation – subscribe to my weekly newsletter for free.
Thanks to our sponsor OctoAI. OctoAI delivers 5x more affordable LLMs at higher performance.
Two critical bottlenecks in developing LLMs are the scaling of computation and the scaling of data. Focusing on these inputs is one way of effectively regulating pioneering foundation models. Data, being distributed and intangible, is hard to control. Compute, on the other hand, is tangible and rooted in physical constraints, which makes it far easier to govern. Compute requires specialised hardware, such as advanced chips, produced through a highly complex and resource-intensive supply chain, a process epitomised by the cutting-edge technologies developed by companies such as ASML.
A recent research paper (co-authored by EV readers Adrian Weller & Diane Coyle) argues that these physical constraints of compute make it an ideal target for AI regulation. This approach could allow governments to see who is training the most capable models, steer compute towards beneficial uses, and enforce rules on frontier AI development.
The US has already started to embrace this concept, as evidenced by the AI Executive Order, which mandates reporting and oversight for models trained with more than 10^26 computational operations, a threshold nearly reached by Google’s latest Gemini Ultra model. In addition, the recent US semiconductor restrictions on China are a clear example of compute-based enforcement in action.
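To give a sense of scale, here is a minimal back-of-the-envelope sketch of how such a threshold can be checked, using the common approximation that training compute is roughly 6 × parameters × training tokens. The parameter and token counts below are illustrative assumptions, not disclosed figures for any particular model.

```python
# Rough check against the Executive Order's 10^26-operation reporting threshold,
# using the common approximation: training compute ~ 6 * parameters * tokens.

THRESHOLD_OPS = 1e26  # reporting threshold set by the US AI Executive Order


def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * parameters * tokens


# Hypothetical frontier-scale run: 1 trillion parameters, 15 trillion tokens.
ops = estimated_training_ops(parameters=1e12, tokens=15e12)

print(f"Estimated training compute: {ops:.2e} operations")  # 9.00e+25
print(f"Exceeds 10^26 threshold: {ops > THRESHOLD_OPS}")    # False, just under
```

A run of this illustrative size lands just under the line, which shows how close today’s frontier models sit to the reporting threshold.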
The implications of surveillance and intervention in computing go far beyond AI. Computing is beginning to be seen as public infrastructure. In my latest commentary, I highlight the fundamental role of compute in improving social welfare through wider access to information:
There are no energy-poor, information-scarce economies that have good social outcomes for their people.
However, there are risks to centralising the regulation of computing even in well-governed societies. It raises the stakes for corporate capture and may unhelpfully redirect innovation, hamper economic development, or foster a surveillance culture.
Today’s edition is supported by OctoAI.
OctoAI is one of the fastest, most affordable platforms for generative AI inference.
Early innovators building LLM-powered apps are flocking to open-source models on OctoAI, moving away from closed-source models like GPT for faster speeds, lower costs, and greater scalability and flexibility.
Read how OctoAI delivered 5x cost savings for LatitudeAI, makers of the hit AI role-playing game AI Dungeon, while achieving superior performance at scale. Then learn how you can do the same.