Making sense of the White House’s plan to catch up with the fast-moving AI industry
[Source images: Mandel Ngan/AFP via Getty Images; Rawpixel]

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company covering emerging tech, AI, and tech policy.

This week, I’m looking at the White House’s new executive order on AI, a sprawling document describing the government’s plan to control the technology’s risks—and benefit from its utility. Also, I’m checking in on the latest mega-investments in hot AI labs.

If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected], and follow me on X (formerly Twitter) @thesullivan.


Assessing Biden’s executive order on AI

President Biden released a sprawling executive order on Monday designed to help the federal government ensure the safety of AI systems. AI companies have for months been promising to build large language models (LLMs) responsibly and protect their products from cyberattacks; Biden’s 100-page executive order codifies those commitments and lays out the government’s expectations of AI developers.

The executive order is an impressive attempt to catch the federal government up with rapid advances in AI research. It also reveals a pragmatic approach to safety by addressing what the government sees as the most immediate security threats, including bad actors’ use of AI to generate misinformation that could destabilize national security, to speed up the manufacture of biological weapons, or (perhaps more practically) to discriminate against certain classes of people.

The executive order calls on tech companies building larger AI models to report the details of their safety testing to the government on an ongoing basis. It also calls on the National Institute of Standards and Technology to begin developing the government’s own set of safety evaluations. In addition, the executive order mandates that a number of government agencies (like the Departments of Homeland Security, Energy, and Commerce) begin working on managing AI risks within their jurisdictions. The Department of Homeland Security, for example, is directed to assess how adversarial AI systems might threaten government infrastructure. Biden’s executive order directs government agencies to begin recruiting new AI talent to help with such work.

It remains to be seen how the order changes the behavior of the big AI labs. The tech companies building the riskiest models have already pledged to develop AI responsibly, and already spend some of their R&D dollars on safety research. As the government gets more involved, the difficulties of oversight and enforcement may become more apparent. Will the government really be able to pierce the veil of secrecy at the largest, wealthiest, and increasingly powerful AI companies enough to discover poor safety practices and/or a lack of transparency? Even if it can, it’s not clear that the executive order has the legal teeth to push recalcitrant AI companies back into compliance. (Executive orders typically find their authority in existing law.) It would require an act of Congress to really ensure the government is able to control the AI sector, and the chances are low that the current Congress can pass any kind of meaningful tech regulation.


Money continues pouring into the AI arms race

Follow the money and it becomes clear that the AI space is dominated by a fairly small set of research labs. Some of them exist within companies like Meta, but many are independent startups. And the advancements and discoveries that happen in those specialty labs are apparently very hard to duplicate, even by super-rich tech titans. That’s exactly why Microsoft, which had been working on natural language research for years, put a staggering $10 billion behind OpenAI rather than trying to compete with its own language models.

Microsoft is hardly alone. Even Google, the “AI-first” company with thousands of people working on the technology, is placing bets on smaller research labs. The Wall Street Journal reported last week that Google intends to put $2 billion behind Anthropic. Google is following in the footsteps of Amazon, which just weeks ago invested $4 billion in the AI startup. Meanwhile, The Information reports that the AI startup Mistral (which bills itself as “Europe’s OpenAI”) is trying to raise $300 million just four months after a $113 million seed round. Tech giants have also placed smaller bets: Microsoft invested in Inflection AI, Oracle backed Cohere, and Google and Samsung invested in Israeli AI lab AI21. Expect to see more mega-deals as Silicon Valley’s AI arms race intensifies.


Are we being led into an AI bubble? Part 2

I’m constantly asking the question: Does a given generative AI system actually reinvent the way a business or personal app works in a way that can be measured in time, dollars, and quality of life? My take is that such tools do exist, but they aren’t yet good enough, or secure enough, to have a dramatic impact.

Mike Volpi, a partner at Index Ventures who invests in AI companies, contends that generative AI is far more useful than crypto and blockchain, and will eventually pay dividends. LLMs and ChatGPT showed us a lot of promise this year, but, in tech, promises sometimes need years to pan out. Volpi recalls that the advent of the internet in the mid-’90s didn’t start transforming business and life until years later, after an internet bubble at the end of the decade.

He believes that many will grow frustrated over the next few years that the promises of generative AI haven’t yet come to fruition. “All the promises were promised too early,” he tells me. “We hear about all these wonderful things that generative AI can do, and I believe they actually really are going to happen, I just think we are over-indexing on how quickly they will happen.”

The main problem, Volpi says, is that a “knowledge gap” exists between the generative AI model (which most big enterprises get via an API) and the end user application. In other words, in real life an LLM is usually just one component of a working AI system, and it must work with private, proprietary data sets in a safe and secure way before it can deliver value to the end user via an app. Many Fortune 500 companies, he suggests, are still figuring out how to do this. One source who spoke to me on background said that, so far, generative AI’s “killer app” within big companies is enterprise search—the act of locating and recalling bits of corporate knowledge when needed.
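The pattern Volpi describes, an LLM grounded in a company’s own documents so it can answer questions about them, is essentially retrieval-augmented enterprise search. Below is a deliberately simplified Python sketch of that idea; the document snippets, the keyword-overlap scoring, and the prompt format are illustrative assumptions rather than any company’s actual system, and the sketch stops at assembling the prompt instead of calling any particular model API. A production setup would typically swap the toy scoring for vector embeddings and put access controls around the proprietary data.

```python
# A toy illustration of the enterprise-search pattern described above.
# Assumptions (not from the article): the document snippets, the keyword-overlap
# scoring, and the prompt format are all made up for illustration. A real system
# would typically use vector embeddings and send the prompt to a model API.

from collections import Counter

# Stand-in for a private, proprietary corpus of corporate knowledge.
INTERNAL_DOCS = {
    "hr-policy": "Employees accrue 1.5 vacation days per month, capped at 30 days.",
    "it-runbook": "Password resets are handled by the IT desk via the #help channel.",
    "q3-plan": "Q3 priorities: ship the billing migration and cut cloud spend by 10%.",
}


def score(query: str, doc: str) -> int:
    """Sum the occurrences of each query word in the document (case-insensitive)."""
    query_words = set(query.lower().split())
    doc_words = Counter(doc.lower().split())
    return sum(doc_words[word] for word in query_words)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k internal documents that best match the query."""
    ranked = sorted(INTERNAL_DOCS.values(), key=lambda text: score(query, text), reverse=True)
    return ranked[:k]


def build_prompt(query: str) -> str:
    """Assemble retrieved context plus the user's question into an LLM prompt."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return f"Answer using only this internal context:\n{context}\n\nQuestion: {query}"


if __name__ == "__main__":
    # The prompt built here would normally be sent to whatever LLM the company uses.
    print(build_prompt("How many vacation days do employees get?"))
```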


