The Leading Edge: OpenAI Under Federal Investigation
National Journal
Solutions and tools to help government affairs professionals navigate the intersection of policy, politics, and people.
By Philip Athey, Editor
In November 2022, OpenAI released ChatGPT, kicking off the current AI revolution and launching both the company and its CEO Sam Altman into stardom. Already celebrated by the tech community, the young CEO gained fans on Capitol Hill when he asked for his company and the new technology to be strictly regulated.
On Wednesday, Altman got his wish—and then some. The New York Times reported that the Federal Trade Commission has cleared the way to investigate OpenAI and its main partner Microsoft for possible antitrust violations. The investigation is part of a bigger antitrust probe into the meteoric rise of AI-related companies. While the FTC looks into OpenAI and Microsoft, the Justice Department will investigate the dealings of Nvidia, the primary manufacturer of AI semiconductor chips.
For Altman and OpenAI, the investigation is just the latest plunge in a larger fall from grace that took the company from being seen as a shiny beacon of ethical technological progress to an example of all of Silicon Valley’s ills.
OpenAI was founded in 2015 as a non-profit dedicated to researching safe AI for the betterment of humanity. Only later did Altman launch the for-profit subsidiary that released ChatGPT. But even after profit entered the equation, OpenAI maintained that safety and improving humanity were its main goals. Though its developers were handsomely compensated, Altman himself took a salary of just $65,000 a year.
But cracks started to show when in November 2023 OpenAI’s board removed Altman as CEO over fears that he was pushing too quickly and not taking safety seriously. At Microsoft’s urging, the board reinstated Altman just five days later. Board members who led the ouster effort soon left, replaced by leaders more friendly to Microsoft and the for-profit side of the business.
More safety fears flared in May, when OpenAI co-founder Ilya Sutskever announced that he would leave the company along with AI safety researcher Jan Leike, saying that “safety culture and processes have taken a backseat to shiny products.” Soon after, OpenAI dissolved its team focused on the long-term risk of AI.
OpenAI’s board has since set up a new safety and security committee run by Altman, The Verge reported.
As for the regulation that Altman has called for? Well, the tech industry seems to like the idea of regulation more than the practice of it. Tech leaders have mobilized lobbyists to fight specific pieces of federal legislation, including any bill that would hold AI developers liable for the misuse of their technology.
Even Altman’s humble salary is belied by his $2.8 billion in investments in tech startups and companies, many of which have direct ties to OpenAI, The Wall Street Journal reported.
OpenAI is far from the first Silicon Valley company to start with a noble goal that slowly gets pushed aside once it becomes clear just how much money there is to make. But what has happened at the company over the past year should be a warning sign to lawmakers not to always trust tech CEOs bearing gifts—or asking to be regulated.
PolicyView: AI is a twice-monthly intelligence report from National Journal that provides a comprehensive view of AI legislation at the state and federal levels. We track what’s gaining momentum in specific areas of the country, what industries are most likely to be affected, and which lawmakers and influencers are driving the conversation.
To learn more and request the latest report, visit policyviewresearch.com