Issue 36: Anthropic, the rising star
Sharmilli Ghosh
Product Management | GTM | ISV & SI Partnerships | Startup Founder | Board Member | Investor |
Last October (’23), we bought shares of Anthropic at a whopping $18B valuation. At first I thought it was very expensive, but compared to its peers and its promise, it was easy to be convinced that it was perhaps cheap. In May, Elon Musk's xAI closed a new round at $24B post-money. Fast forward to this October (’24): Anthropic is in discussions for a new funding round at approximately $40B in valuation. Putting it in perspective, OpenAI went from $80B to $157B in 12 months, and Anthropic is the next rising star – see chart below. Unconfirmed until the round closes, but that is phenomenal and worthy of a deep dive.
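To make those jumps concrete, here is the implied one-year growth multiple for each company, computed from the figures above (taking the reported and still-unconfirmed valuations at face value):

```python
# Implied one-year valuation multiples, in $B, from the reported figures.
# The Anthropic round was unconfirmed at the time of writing.
openai_multiple = 157 / 80    # $80B -> $157B over ~12 months
anthropic_multiple = 40 / 18  # $18B (Oct '23) -> ~$40B rumored (Oct '24)

print(f"OpenAI:    {openai_multiple:.2f}x")     # ~1.96x
print(f"Anthropic: {anthropic_multiple:.2f}x")  # ~2.22x
```

Both companies roughly doubled in value in about a year, with Anthropic's implied multiple slightly ahead of OpenAI's.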
I wanted to cover them in a newsletter, but ran into one from Sabrina Wu and Vivek Ramaswami: Anthropic: Closing the Gap? Thought I’d come back to it in a bit, and here I am. This is written with a different lens: educational rather than analytical.
Let us start with a view of the top Gen AI startups as of November 2024:
Anthropic Overview
Overview: Anthropic is an AI safety and research company, founded in 2021 by former OpenAI employees, including siblings Dario Amodei and Daniela Amodei, along with Jared Kaplan, Benjamin Mann, Jack Clark, Sam McCandlish, and Tom Brown. The company is headquartered in San Francisco, California, and is structured as a Delaware Public Benefit Corporation (PBC), which mandates a balance between private and public interests in its operations.
Mission and Vision: Anthropic's mission centers on creating AI systems that are both useful and safe. Anthropic differentiates itself from the competition by (a) focusing on the enterprise market (while OpenAI has targeted consumers with its GPT models) and (b) pricing its models based on factors that matter to enterprises.
Anthropic introduced "Constitutional AI," a framework that guides behaviors of AI systems, using a predefined set of rules called the "constitution." This allows the AI to evaluate its own outputs and adapt its behavior in accordance with principles.
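The core idea of that evaluate-and-adapt loop can be sketched in a few lines. This is a minimal illustration only: the model calls are stubbed out (a real system would invoke an LLM for each step), and the two principles shown are hypothetical placeholders, not Anthropic's actual constitution.

```python
# Minimal sketch of a constitutional critique-and-revise loop.
# draft/critique/revise are stubs standing in for real LLM calls;
# the principles below are illustrative, not Anthropic's real constitution.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that is most honest and transparent.",
]

def draft(prompt: str) -> str:
    # Stub: a real system would sample an initial answer from the model.
    return f"Draft answer to: {prompt}"

def critique(answer: str, principle: str) -> str:
    # Stub: a real system would ask the model to critique its own
    # answer against the given principle.
    return f"Critique of '{answer}' under: {principle}"

def revise(answer: str, critique_text: str) -> str:
    # Stub: a real system would rewrite the answer to address the critique.
    return answer + " [revised]"

def constitutional_pass(prompt: str) -> str:
    """Run one critique-and-revise pass per constitutional principle."""
    answer = draft(prompt)
    for principle in CONSTITUTION:
        answer = revise(answer, critique(answer, principle))
    return answer

print(constitutional_pass("How do I pick a strong password?"))
```

The point of the loop is that the same model both generates and polices its output, with the written principles (rather than per-example human labels) supplying the feedback signal.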
Products and Technology: Anthropic's primary AI model is known as "Claude," along with its evolutions (Claude Instant, Claude 2, and Claude 3). Anthropic has an interesting naming convention for its models – Haiku is the smallest, then Sonnet, then Opus. With Claude 3.5 Sonnet, they introduced features to understand humor, handle complex workflows, and interpret charts and graphs. Following 3.5 Sonnet, Opus 3.5 and Haiku 4.0 are expected. Opus adds capabilities such as image interpretation and complex code generation.
Funding and Financials: Anthropic has raised significant capital since its inception, with key investments from tech giants like Amazon and Google. As of 2024, Amazon has invested $4 billion, while Google has contributed $2 billion. Other notable investors include Menlo Ventures and various venture capital firms. The company has raised approximately $8 billion to date, carries a valuation of around $18.4 billion, and is currently in talks to raise its next round.
Competitive Landscape: Anthropic competes with several other AI-focused companies, including OpenAI and others described in the table above. Despite its relatively recent establishment, it has positioned itself as a leader in AI safety and interpretability research. The company's long-term strategy includes expanding its research portfolio and commercial applications through partnerships and new product offerings.
Corporate Structure and Governance: Anthropic has established a unique corporate governance model. It is structured around a "Long-Term Benefit Trust," which guides the company’s priorities toward public benefits rather than short-term profitability. The board of trustees includes leaders from prominent organizations in AI safety and public health, such as the Clinton Health Access Initiative and the Alignment Research Center.
Future Outlook: Anthropic aims to continue advancing AI safety research and developing models that are interpretable and reliable. With substantial funding and strategic partnerships, the company plans to expand its product offerings, which now include Claude enterprise plans and integrations with cloud platforms like Amazon Web Services (AWS).
Anthropic's rapid valuation acceleration is justified by several key factors:
Anthropic has several strategic partnerships that are shaping its product roadmap and direction, per their annual investor report. The key partners listed include:
- Amazon Bedrock
- Quora Poe
- SAP
- SK Telecom in South Korea
Products
Anthropic has developed a suite of products centered around its advanced AI models, collectively known as the Claude family. These models are designed to be reliable, interpretable, and steerable, catering to various applications across industries.
The Claude family comprises several models tailored to different performance and speed requirements:
Anthropic's products are designed to serve businesses of various sizes, from small startups to large enterprises and of course developers. Here's a breakdown of Anthropic's product offerings based on the size of the business they target:
1. Products for Small Businesses and Startups
2. Products for Medium-Sized Businesses
3. Products for Large Enterprises
4. Products for Developers and Independent Creators
Strategic Partnerships & Key Customers
Anthropic has primarily been working with organizations focused on responsible AI development, improving large language models, and creating safe AI systems. Some early customers and collaborators have used Anthropic’s AI models to shape the company's roadmap and direction:
Competitive landscape
Anthropic operates in a highly competitive AI landscape, contending with major players such as:
Setbacks and controversies
Anthropic has encountered a few setbacks and controversies since its inception. Here are some notable instances:
1. Disagreement Leading to Founding
The founding of Anthropic was itself rooted in controversy. The company was established in 2021 by former OpenAI researchers who left over internal disagreements about OpenAI’s direction, particularly its close partnership with Microsoft. This move reflected a divide within the AI research community over corporate influence and commercialization strategies for AI technology.
2. Data Leak (2024)
In January 2024, Anthropic confirmed that it had suffered a data leak, which exposed sensitive information about its research and internal operations. This incident raised concerns about the security and safety measures the company had in place to protect its proprietary information, and about the ethical implications of handling such breaches.
3. Regulatory Scrutiny
Anthropic has faced scrutiny from regulators and industry stakeholders regarding the safety and potential risks associated with advanced AI models. As Anthropic’s research focuses on safety and reliability, it has been involved in discussions about the ethical use of AI and the potential for misuse. In particular, Anthropic’s approach to "Constitutional AI" has sparked debate about how ethical principles are encoded and enforced in AI systems, and whether these principles truly align with diverse global perspectives.
4. Funding Controversies
Anthropic has received funding from multiple large corporations, including Amazon and Google. While this has enabled rapid growth and development, it has also led to questions about the influence these corporations may have on the company’s research direction and priorities. Given Anthropic's focus on AI safety, some industry observers have expressed concerns that the company could be pressured to prioritize commercial objectives over ethical considerations.
5. Dispute over AI Safety Standards
As a company deeply involved in AI safety research, Anthropic has occasionally been at odds with other AI labs and research institutions over what constitutes appropriate safety standards. This includes debates around the transparency and interpretability of AI models and how best to implement safeguards against misuse or unintended consequences of AI technologies.
These controversies reflect both the challenges and growing pains that Anthropic faces as it navigates its role as a leader in AI safety while balancing commercial and ethical considerations.
Risks to the future
Anthropic faces several risks to its future growth and stability:
Anthology Fund
One of the most interesting recent PR announcements by Anthropic is the launch of the Anthology Fund, a $100 million initiative in partnership with Menlo Ventures. The fund aims to support startups that leverage Anthropic's technology to develop AI solutions across key areas such as healthcare, legal services, education, energy, and scientific research. The initiative provides startups with access to Anthropic's AI models, $25,000 in credits, and venture support, helping drive responsible AI innovation and growth in the AI ecosystem.
Closing
Anthropic stands at the forefront of a transformative era in AI, with its unwavering focus on safety, alignment, and ethical AI development. By addressing critical concerns around the responsible deployment of artificial intelligence, Anthropic has carved out a distinct niche in an increasingly competitive landscape. As the world continues to grapple with the rapid integration of AI into everyday life, Anthropic’s mission provides a crucial counterbalance to the industry’s pace, ensuring that progress does not come at the expense of ethical standards. The company's commitment to transparency and collaboration underscores its potential to be a guiding light for the future of AI—one that is not only powerful but also profoundly human-centric.