Sam Altman is out as OpenAI CEO, a dispatch from one of AI’s buzziest conferences, and more AI and tech news this week
LinkedIn News
Bringing you the business news and insights you need to stay informed.
Welcome back to LinkedIn News Tech Stack, which brings you news, insights and trends involving the founders, investors and companies on the cutting edge of technology, by Tech Editor Tanya Dua. Check out our previous editions here.
First, some breaking news: Sam Altman is out as OpenAI’s CEO because the "board no longer has confidence" in his ability to lead. Here’s more.
A deep dive into one big theme or news story every week.
If the second Cerebral Valley AI Summit , held this week in San Francisco, is any indication, the city is seeing an AI-driven resurgence.
San Francisco – or at least the Hayes Valley neighborhood to which the name Cerebral Valley is ascribed – was alive and kicking, with everyone I encountered seemingly drunk on the promises of AI. The energy was truly infectious.
This was in stark contrast to the pandemic days of 2020 – and even to a year ago, when I was last there. On the Uber drive from SFO to my Airbnb, I counted at least seven gigantic billboards peddling AI products from companies ranging from C3.ai to Zoom.
But the real action was at SFJAZZ, where a who’s who of AI – close to 300 people – gathered for the event on Wednesday, a few miles from the political action at the APEC Summit.
If Presidents Biden and Xi were the main draws there, Cerebral had AI heavyweights like Clara Shih and Mustafa Suleyman, founders like Ali Ghodsi and Naveen Rao, and investors like Reid Hoffman and Vinod Khosla – sometimes agreeing, other times duking it out over everything from regulation to the question of open-source versus proprietary large language models.
Here are the key highlights:
The open-source versus proprietary model debate
When Meta released its open-source Llama AI models earlier this year, it positioned the move as decisively different from proprietary models like OpenAI’s GPT or Google’s Bard, which charge based on usage – setting off a debate on the merits and demerits of both approaches.
Since then, the AI world has been squarely divided into two camps, with the discourse showing no signs of slowing down, even this week.
Databricks CEO Ali Ghodsi said open-source models were “absolutely essential” for the sake of transparency and for putting the necessary guardrails in place to protect against their drawbacks, given how little clarity researchers have about just how LLMs work.
“We understand how we built them, but we don't understand why they exactly work – it’s terrifying,” he said. Do we want “two companies that have two secret models that they don't want to share anything about? Or do we want the researchers – all the labs around the world – to spend time trying to understand what's going on and make progress toward understanding how these things work and how we can control them and how we can align them?”
For others, like Salesforce’s AI division chief Clara Shih, an open, model-agnostic architecture is a no-brainer because the company serves a broad swath of enterprise clients that use a range of platforms and technologies. To that end, Salesforce allows clients to use anything from their own models to open-source as well as proprietary models in its AI offerings.
But investor Vinod Khosla and Kanjun Qiu, CEO of the AI startup Imbue, were on the opposite end of the spectrum, with Khosla arguing that models needed to be kept under lock and key owing to the geopolitical context and to keep China’s AI ambitions in check. Qiu, meanwhile, said that more specialized proprietary models were necessary to build autonomous AI models like the ones Imbue is working toward.
Regulation and the AI Executive Order
The overarching sentiment toward the White House’s recent executive order on AI was positive, with most praising the tone and scope of the Biden administration’s approach.
“Some amount of regulation is needed, and the White House executive order, we thought, was very well done,” said Salesforce’s Shih.
One of the smartest elements of the executive order was how closely it involved the Department of Commerce, said Hoffman, because it showed that the administration was trying to make sure AI is ultimately good for American industry, jobs and workers.
Databricks’ Naveen Rao called out the focus on transparency, and the role played by the National Institute of Standards and Technology in setting standards and ensuring safety, as positive developments. But he opposed the compute threshold the order set, which requires disclosures for models trained using more than 10^26 FLOPs – floating-point operations, a measure of the amount of computation used to train an AI system.
“These things change constantly – something that was really big a year ago is really not that big anymore,” Rao said. “So I don't think it's a great idea to start dictating these kinds of limits.”
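To get a feel for what the 10^26 threshold means in practice, here is a back-of-the-envelope sketch. It uses the common ~6 × parameters × tokens approximation for dense transformer training compute; that rule of thumb and the example model scales are illustrative assumptions, not figures from the executive order itself.

```python
# Rough check: would a training run cross the executive order's
# 10^26 floating-point-operation reporting threshold?
# Uses the common ~6 * N * D estimate for dense transformer training
# compute (N = parameters, D = training tokens) -- an approximation.

THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * params * tokens

def must_disclose(params: float, tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    return training_flops(params, tokens) >= THRESHOLD_FLOPS

# Hypothetical scales, purely for illustration:
# a 70B-parameter model on 2 trillion tokens stays well under the bar,
# while a 1T-parameter model on 20 trillion tokens crosses it.
print(must_disclose(70e9, 2e12))   # -> False (~8.4e23 FLOPs)
print(must_disclose(1e12, 20e12))  # -> True  (~1.2e26 FLOPs)
```

This is also why Rao’s objection has teeth: a fixed numeric threshold sits still while both model and dataset sizes grow quickly.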
Khosla took a contrarian point of view, saying that aggressive AI regulation could lead the U.S. to lose some momentum in its “techno-economic race with China.”
Khosla and Hoffman also pushed back on the idea that the FTC should be monitoring the AI industry for anti-competitive conduct, with Khosla calling FTC chair Lina Khan a “crazy, left-wing kooky.”
Existential risk or not?
Another polarizing issue in the AI community since before ChatGPT even came out has been whether AI systems can be conscious or sentient – and pose an existential threat to humanity.
Holden Karnofsky, the director of AI strategy at Open Philanthropy and husband of Anthropic co-founder Daniela Amodei, echoed a group of high-profile signatories who earlier this year asked companies to pump the brakes on "giant AI experiments" until the risks are manageable. He warned about AI’s risks and called for “strong red lines” that would force companies to halt their AI work if they were unable to contain their foundation models.
Others like Hoffman and Khosla didn’t seem particularly worried about existential risks of AI, with Khosla dismissing it as “nonsensical” talk from academics who had nothing better to do.
That’s not to say that they aren’t taking other risks associated with AI seriously. Hoffman highlighted how companies like OpenAI and Inflection AI employ safety teams so that people can’t get models to teach them how to make bombs, for instance. Databricks’ Rao agreed, saying that instead of existential risk the focus should be on real threats like disinformation and robot safety.
Meanwhile, dozens of top VC firms like General Catalyst just signed voluntary commitments to have the startups they invest in build AI responsibly.
For Inflection’s Mustafa Suleyman, a bigger concern is how to ensure that the tangible benefits from AI’s advances are equitably distributed.
“The great opportunity over a 20-year period is that if we can truly create value at the scale that we in Silicon Valley are now all predicting, then we have a different problem, which is how do we capture that value and redistribute it so that everybody can enjoy the benefits of that kind of life?” he said.
A new challenger to Nvidia?
In the AI era, everybody wants what Nvidia has: a trillion-dollar business built on booming demand for – and a mass shortage of – the GPUs without which no company can actually run AI models.
And while cloud providers like Microsoft (LinkedIn’s parent), Google and AWS have recently invested in developing their own chips to reduce their dependence on the chipmaker and seize the opportunity, another contender may be the one to emerge as Nvidia’s challenger.
People at the conference (both on-stage and off) were buzzing about AMD’s MI300x chips, which are designed to compete against Nvidia’s advanced H100 chips. AMD has said that it expects to sell $2 billion worth of its AI chips next year, with CEO Lisa Su adding that the company had won commitments from "multiple large hyperscale customers," referring to large tech and cloud computing companies.
“We're very bullish and very optimistic on that MI300x,” said Chase Lochmiller, founder and CEO of Crusoe Energy. “The interesting component here is that if you look at an H100, it has 80 gigs of high-bandwidth memory. The H200 that was just announced has 141 gigs. But the MI300x has 192 gigs – so you can do a lot more with a much bigger model.”
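To put those memory figures in perspective, here is a rough sizing sketch. It assumes half-precision (fp16) weights at 2 bytes per parameter and counts weights only – ignoring the KV cache, activations and framework overhead, so real capacity is meaningfully lower. The memory figures come from the quote above.

```python
# Rough capacity check: how many fp16 parameters fit in a GPU's
# high-bandwidth memory, counting weights only (2 bytes per parameter)?
# Real headroom is lower once KV cache, activations and overhead
# are accounted for.

BYTES_PER_PARAM_FP16 = 2

def max_params_billions(hbm_gb: float) -> float:
    """Upper bound on fp16 parameters (in billions) that fit in HBM."""
    return hbm_gb * 1e9 / BYTES_PER_PARAM_FP16 / 1e9

# HBM capacities from the quote above.
for name, hbm_gb in [("H100", 80), ("H200", 141), ("MI300x", 192)]:
    print(f"{name}: ~{max_params_billions(hbm_gb):.0f}B fp16 params per GPU")
```

By this crude yardstick, the MI300x’s 192 GB holds a model more than twice the size of what an H100’s 80 GB can – which is the point Lochmiller is making.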
Here’s where we bring you up to speed with the latest advancements from the world of AI.
Catch up on the tech headlines you may have missed this week and what our members are saying about them on LinkedIn.
Here’s where we keep tabs on key executives on the move and other big pivots in the tech industry. Please send me personnel moves within emerging tech.
Thanks for reading. Please share Tech Stack and forward it around if you like it! Pitch me the interesting investors, founders, ideas and companies powering emerging technologies like AI to reach the inboxes of 750,000+ subscribers and millions more on LinkedIn.