The Interface-Infrastructure Flywheel
Better experiences demand better infrastructure. This week: AI companies start moving into hardware.
Controversial thought: The future of AI experiences will be won at the hardware level.
Every great interface shift—desktop, mobile, now AI—follows the same playbook. Better experiences demand better infrastructure.
When the iPhone 3G launched in 2008, it ushered in a new era of computing and rewired how we thought about software. Web apps suddenly felt clunky. Apple’s tight vertical integration of hardware and software made native apps feel buttery smooth (roughly 50ms latency vs. 200ms on web) and pushed the world toward mobile-first everything. The shift was instant:
- The App Store launched with just 500 native apps
- 10M downloads in the first 3 days
- 16 years later, in 2024, 88% of users still preferred mobile apps over the web
Apple knew this from the start: own the stack, control the experience. By designing its own chips—the A-series, M-series, and accelerators like Neural Engine—it optimized iOS all the way down to the silicon. Faster interactions, smoother animations, better battery life.
Computing evolves in cycles: better infrastructure enables better interfaces, and better interfaces push infrastructure forward. The infrastructure-interface flywheel is bidirectional.
The pattern will repeat. It may be a coincidence, but this week’s news shows AI companies leaning into hardware: Perplexity is partnering with Cerebras for high-speed inference chips, while OpenAI is rumored to be announcing its in-house chip this year.
If AI is the next dominant interface, its infrastructure has to evolve too. Just as Apple designed its own silicon to shift how we interact with software, AI-first experiences will need a new foundation entirely.
Have a great weekend.
Tara
Must-know News in AI this week
OpenAI cancels its o3 AI model in favor of a ‘unified’ next-gen release: OpenAI has effectively canceled the release of o3, which was slated to be the company’s next major AI model, in favor of what CEO Sam Altman is calling a “simplified” product offering. In a post on X, Altman said that in the coming months, OpenAI will release a model called GPT-5 that “integrates a lot of [OpenAI’s] technology,” including o3, in its AI-powered chatbot platform ChatGPT and API.
Cerebras-Perplexity deal targets $100B search market with ultra-fast AI: Cerebras Systems and Perplexity AI have announced a partnership that promises to deliver near-instantaneous AI-powered search results at speeds previously thought impossible. The collaboration centers on Perplexity’s new Sonar model, which runs on Cerebras’s specialized AI chips at 1,200 tokens per second – making it one of the fastest AI search systems available.
AI chip startup Positron raises $23.5 million seed round to take on Nvidia: Positron, a startup chip maker that aims to compete with Nvidia, has raised $23.5 million to scale production of its U.S.-made artificial intelligence chips. Valor Equity Partners, known for backing Elon Musk’s companies, Atreides Management, Flume Ventures, and Resilience Reserve were among the investors who participated in the round. Reno-based Positron says its chips, which are manufactured in Arizona, use less than a third of the power of Nvidia’s top-of-the-line H100 graphics processing units while maintaining the same performance.
OpenAI is reportedly getting closer to launching its in-house chip: OpenAI remains on track to start producing its in-house AI chip next year, according to a report from Reuters. Sources tell the outlet that OpenAI plans to finalize its design over the next few months before sending it to the Taiwan Semiconductor Manufacturing Co (TSMC) for fabrication.
Meta in talks to acquire AI chip firm FuriosaAI: Meta is reportedly in talks to acquire a South Korean chip firm as the social media giant looks to bolster its AI hardware infrastructure. Meta may announce its intent to purchase FuriosaAI, a chip startup founded by former Samsung and AMD employees, as soon as this month, per Forbes. FuriosaAI develops chips that speed up the running and serving of AI models, including text-generating models like Meta’s Llama 2 and Llama 3.
Mistral AI unveils Le Chat, a fast AI chatbot: French AI startup Mistral recently launched its new chatbot app, Le Chat, for both iOS and Android platforms, aiming to compete directly with established names like ChatGPT, Claude, and Gemini. Positioned as Europe's alternative to these popular applications, Le Chat stands out for its impressive speed – boasting the ability to generate responses up to 1,000 words per second, powered by innovative technology developed in partnership with Cerebras Systems.
Coursera co-founder Andrew Ng and his team announced Agentic Object Detection, an AI approach that requires no prior training: simply describe an object, and the AI will reason, plan, and locate it, akin to the capabilities of OpenAI's o1/o3 models.