How Biden’s AI policies pushed Silicon Valley toward Trump
[Photo: ANGELA WEISS/AFP via Getty Images]


Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I look at Donald Trump’s efforts to woo AI companies with promises of self-regulation. I also look at the latest training data scandal, as well as a new study about generative AI’s demand on the power grid.

Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected], and follow me on X (formerly Twitter) @thesullivan.


Trump and his allies woo Silicon Valley with hands-off AI policy

The tech media has for months been reporting on a supposed shift to the right by Silicon Valley founders and investors. The narrative seems driven by recent pledges of support for Trump from the likes of Marc Andreessen, Joe Lonsdale, and Elon Musk. The Trump camp has been wooing tech companies and investors, including those within the AI sector.

The Washington Post reported Tuesday that a group of Trump allies and former cabinet members has written a draft executive order that would mark a radical shift away from the Biden administration’s current approach to AI regulation. The draft comes from the America First Policy Institute, a right-wing think tank led by Trump’s former chief economic adviser Larry Kudlow. The document proposes an AI regulatory regime that relies heavily on the industry to police the safety and security of its own models, and would establish “industry-led” agencies to “evaluate AI models and secure systems from foreign adversaries,” the Post reports. It would also create “Manhattan Projects” to develop cutting-edge AI for the military.

By contrast, the Biden administration’s current executive order (EO) on AI is chiefly concerned with the security risks that the very largest AI models might pose to U.S. people and interests. The administration seems particularly worried that such models, delivered as a service via an API, could be used to wage some form of cyberwar against the U.S. The Biden order, signed last October, requires makers of such models to report regularly to the Commerce Department on the development, safety testing, and distribution of their products. The EO’s reporting requirements apply only to the very largest AI models, those hosted in very big data centers; right now, only a few well-funded AI companies have built such models.

Click here to read more about the AI sector’s growing interest in Trump.


YouTube is a victim of AI’s original sin: web scraping

AI models are trained largely on vast corpora of text scraped from the internet. Huge training datasets were assembled before online publishers and creators had any idea it was happening; that’s how GPT-2 first began showing hints of real language savvy and some semblance of intelligence. Now, of course, publishers are wise to the situation, and many have found new revenue sources by licensing their data to AI companies for training.

Google, whose AI researchers opened the door to LLMs, was also a victim of the web data harvesting practiced by AI developers. A new investigation by the nonprofit news organization Proof News finds that Anthropic, Nvidia, Apple, and Salesforce used the subtitles and transcripts of thousands of YouTube videos to train their language models. These included videos by popular creators such as MrBeast and Marques Brownlee, and from the channels of MIT, Harvard, NPR, Stephen Colbert, John Oliver, and others. The investigators found that, in all, the training dataset included text from 173,536 YouTube videos across more than 48,000 channels.

Click here to read more about how AI companies are using YouTube videos to train their large language models.


CoreWeave CEO Mike Intrator on generative AI’s effect on the power grid

Recent studies have shown that the advance of generative AI models may significantly increase demand on the power grid. A new study released Wednesday by Columbia University estimates that by 2027, the GPUs that run generative AI models will account for about 1.7% of total electricity use in the U.S., or 4% of total projected electricity sales. “While this might seem minimal, it constitutes a considerable growth rate over the next six years and a significant amount of energy that will need to be supplied to data centers,” the report says.

People within the AI infrastructure business have been thinking about the problem for a while now. “I think that the U.S. is in a position where the amount of power that's going to be required and the scale of the power that's required for these data centers is going to put increasing pressure on the grid,” says Mike Intrator, CEO of CoreWeave, which offers cloud computing designed for AI training and inference.

Click here to read more about how CoreWeave CEO Mike Intrator is thinking about AI’s power grid problem.


More AI coverage from Fast Company:


Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
