The fog around Sam Altman’s 2023 ouster is beginning to clear
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I’m focusing on new revelations around the reasons for Sam Altman’s firing by OpenAI’s board last November. I also talk to a consultant about how companies are deploying generative AI, and explore why the leader of OpenAI’s super-alignment group decamped for Anthropic.
Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected], and follow me on X (formerly Twitter) @thesullivan.
Will Sam Altman be honest about safety as the stakes get higher?
Last week, Scarlett Johansson accused OpenAI CEO Sam Altman of ripping off her voice for the “Sky” persona in ChatGPT. The allegation seriously called into question Altman’s honesty. One crisis communications expert I spoke with said OpenAI’s response to the Johansson letter, including a story in the Washington Post that seemed to absolve the company, had a very “orchestrated” look to it.
The question of Altman’s honesty has carried over into this week, mostly because of new information about the reasons for the OpenAI leader’s (somewhat mysterious) firing by the company’s board last November. At the time, the board said only that Altman hadn’t been straight with them. Now, one of the board members who ousted Altman, researcher Helen Toner, has for the first time given some concrete examples, during an episode of The TED AI Show, of how Altman was less than honest with the board.
Click here to read more about Helen Toner’s claims about Sam Altman.
How big companies are using generative AI right now
Big companies, despite some setbacks, continue pushing to integrate generative AI into their workflows. While every company has different needs, it’s possible to identify some general patterns in how generative AI is being used in enterprises today.
Bret Greenstein, who leads PricewaterhouseCoopers’s generative AI go-to-market team, says AI tools are changing the way coders work within enterprises. Once they start using AI to create requirements and test cases, and optimize code, they don’t want to go back to writing things from scratch. “Once you see it, you can’t unsee it,” he says. PwC announced this week that it’s now a reseller of OpenAI’s ChatGPT Enterprise solution, which includes access to the GPT-4 model, the DALL-E image generation tool, and enhanced security features.
Companies are aggregating huge volumes of customer feedback, Greenstein tells me, then using large language models to find patterns in the data that are addressable and actionable. “Most businesses look for high volumes of documents either coming in or going out—high volumes of labor transcribing, reading, summarizing, and analyzing those kinds of content, whether it's code or customer feedback or contracts,” he says.
Click here to read more about how companies are using generative AI.
AI safety researcher Jan Leike lands at Anthropic
AI safety researcher Jan Leike has taken a job at rival Anthropic after his high-profile departure from OpenAI. Leike left OpenAI (along with company cofounder Ilya Sutskever) after the “super-alignment” safety group he led was disbanded; he will lead a similar group at Anthropic. “Super-alignment” refers to a line of safety research focused on making the superintelligent AI systems of the near future harmless to humans. Leike is the latest in a growing list of safety researchers to depart OpenAI, adding to concern that the company is giving short shrift to safety.
Click here to read more about Jan Leike’s move to Anthropic.
More AI coverage from Fast Company:
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.