More Pixels Don't Mean a Better Camera—Why Bigger AI Models Aren't Always Better

If you've ever bought a camera or a smartphone, you've probably seen the "megapixel" hype. More pixels must mean a better camera, right?

Actually, not always.

For most common uses—social media posts, viewing on laptops, or even printing at sizes up to A3—12 to 24 megapixels are more than sufficient. Beyond that, you're paying for storage space and computational resources without gaining noticeable improvement. The law of diminishing returns kicks in swiftly.
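
A quick back-of-the-envelope check makes this concrete. Here is a minimal sketch in Python, assuming the common 300 DPI "photo quality" print standard:

```python
# Back-of-envelope check: megapixels needed for a sharp A3 print.
# Assumes the common 300 DPI print standard; A3 is 297 x 420 mm.

MM_PER_INCH = 25.4
DPI = 300  # typical "photo quality" print resolution

width_px = (297 / MM_PER_INCH) * DPI   # ~3508 px
height_px = (420 / MM_PER_INCH) * DPI  # ~4961 px

megapixels = width_px * height_px / 1e6
print(f"A3 at {DPI} DPI needs ~{megapixels:.1f} MP")  # ~17.4 MP
```

Even a full A3 print needs only about 17 megapixels, comfortably inside that 12 to 24 range.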

My friend Shankar Maruwada, who works closely with Nandan Nilekani—pioneer of India's digital public infrastructure and mentor to Shankar and the EkStep team—recently captured this perfectly while talking about AI models:

"Solving bigger problems > Building bigger models."

Read about it here

Foxconn’s FoxBrain: Small, Efficient, and Purpose-Driven

Taiwanese tech giant Foxconn recently launched its first Large Language Model (LLM), FoxBrain, developed using just 120 of Nvidia’s H100 GPUs over about four weeks. For perspective, Grok-3, Elon Musk's recent AI launch, runs on a staggering 200,000 GPUs.
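
To put those numbers side by side, here is a rough GPU-hours comparison based on the figures above. The four-week window applied to Grok-3's cluster is my own assumption made purely for comparison, since its actual training duration isn't quoted here:

```python
# Rough compute comparison from the figures quoted above.
# Assumption: FoxBrain ran ~120 H100s for ~4 weeks; Grok-3's
# 200,000-GPU cluster is taken at face value over the same window.

HOURS_PER_WEEK = 7 * 24

foxbrain_gpu_hours = 120 * 4 * HOURS_PER_WEEK    # ~80,640
grok3_gpu_hours = 200_000 * 4 * HOURS_PER_WEEK   # ~134.4 million

ratio = grok3_gpu_hours / foxbrain_gpu_hours
print(f"FoxBrain: ~{foxbrain_gpu_hours:,} GPU-hours")
print(f"Grok-3 (same window): ~{grok3_gpu_hours:,} GPU-hours")
print(f"Ratio: ~{ratio:,.0f}x")  # ~1,667x more raw GPU capacity
```

Even granting generous assumptions, the gap is more than three orders of magnitude.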

FoxBrain represents the innovation we truly need—small, efficient, and directly addressing real-world enterprise challenges like manufacturing, supply chain management, and intelligent decision-making.

Initially designed for internal applications, FoxBrain excels in data analysis, decision support, mathematics, reasoning, problem-solving, and code generation. It’s specifically optimized for traditional Chinese and Taiwanese languages, offering tailored solutions rather than generalized, resource-intensive capabilities.

Foxconn plans to open-source FoxBrain, fostering collaboration and expanding practical applications in industries like manufacturing and supply chains.

Grok-3 vs. FoxBrain: The Megapixel Race of AI

Grok-3 is like a 100-megapixel camera when a 24-megapixel one will do

When Grok-3 launched, I watched the event live. Musk proudly announced its massive computational resources. Yet despite the hype, I waited in vain for a "wow" moment. Grok-3 offered incremental improvements without transformative breakthroughs:

  • No groundbreaking shifts in natural language processing.
  • No revolutionary enhancements in reasoning or efficiency.

It reminded me of a camera boasting 100 megapixels—impressive on paper, but irrelevant for everyday users who don't print billboard-sized images.

My post on this here

The Law of Diminishing Returns in AI

AI models today are caught in a similar "megapixel" race. Big tech companies continue scaling models, stacking GPUs, but this approach quickly reaches diminishing returns. More GPUs, more compute, higher energy consumption—but minimal real-world benefit.
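
A toy sketch shows why. The power-law relationship below mimics the shape of the scaling curves reported in the research literature; the constants are invented purely for illustration, not taken from any published model:

```python
# Illustrative only: a power-law scaling curve, L(C) = a * C^(-alpha).
# The constants a and alpha are made up to show the *shape* of
# diminishing returns, not any real model's numbers.

a, alpha = 10.0, 0.05

def loss(compute):
    return a * compute ** (-alpha)

prev = None
for c in [1e21, 1e22, 1e23, 1e24]:
    current = loss(c)
    delta = "" if prev is None else f" (improvement: {prev - current:.3f})"
    print(f"compute {c:.0e}: loss {current:.3f}{delta}")
    prev = current
# Every 10x jump in compute buys a smaller absolute drop in loss:
# the curve flattens, which is the "megapixel race" in numbers.
```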

That's why models like FoxBrain, DeepSeek, and Stanford's S1 matter more. They prioritize efficiency, intelligence, and solving tangible problems for enterprises and everyday users over building ever-bigger models that serve the purposes of a few.

Why Efficient AI Matters More Than Ever

Foxconn’s FoxBrain, DeepSeek and Stanford's S1 symbolize a critical shift toward AI that:

  • Enhances productivity without massive infrastructure.
  • Addresses data privacy and security by enabling enterprises to host smaller, efficient models internally (see the sketch after this list).
  • Reduces environmental impact through lower energy consumption.
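
On that second point, here is a minimal sketch of what on-premises inference with a small open model can look like, using Hugging Face's transformers pipeline API. The model id is a hypothetical placeholder; substitute whichever vetted open-weights model fits your needs:

```python
# A minimal sketch of on-premises inference with a small open model.
# MODEL_ID is a hypothetical placeholder, not a real model: swap in
# whichever small open-weights model your enterprise has vetted.
# Prompts and outputs never leave your own hardware.

from transformers import pipeline

MODEL_ID = "your-org/small-efficient-llm"  # hypothetical placeholder

generator = pipeline("text-generation", model=MODEL_ID)

prompt = "Summarize yesterday's supply-chain exceptions:"
result = generator(prompt, max_new_tokens=128)
print(result[0]["generated_text"])
```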

What Does Nilekani, Architect of India's Digital Revolution, Say About Solving India's Education Challenge with AI?

Nandan Nilekani, along with my friend Shankar Maruwada, CEO and Co-founder of the EkStep Foundation, emphasizes solving India's biggest problems at scale through efficient technology. EkStep leverages technology, big data, and mobile platforms through initiatives like 'Sunbird'—an open, free digital infrastructure designed to transform education by embedding interactive QR codes in textbooks, reaching millions of students across India.


India's education can only be solved with small, efficient AI models

Nilekani believes India will soon be the largest daily user of AI globally. Why? Because India doesn't just want AI—it needs it. India's challenges are unique: 1.4 billion people, nearly 20,000 dialects, vast literacy gaps, and complex socioeconomic issues.

India has the ability to solve a lot of problems for its population, but the hard part has always been in the distribution, not the solution. In India, AI can help bridge this access gap.

AI can be a game changer in education, helping close the literacy gap. AI’s applications are useful not only for students; they extend to teachers, who are often overwhelmed by administrative tasks that detract from teaching.

India has built the Unified Payments Interface (UPI), a real-time payment system that handles more than 10 billion transactions a month. It is low-cost, real-time, and the largest interoperable payment system in the world. It's super efficient.
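
The arithmetic behind that claim, averaged over a 30-day month (actual peak loads run far higher than this average):

```python
# Quick arithmetic on the UPI figure quoted above:
# 10 billion transactions a month, averaged over a 30-day month.

transactions_per_month = 10_000_000_000
seconds_per_month = 30 * 24 * 3600  # 2,592,000

avg_tps = transactions_per_month / seconds_per_month
print(f"Average load: ~{avg_tps:,.0f} transactions per second")  # ~3,858
```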

In my opinion, Nilekani’s vision of solving significant societal issues with AI cannot be achieved with models like Grok-3 or even OpenAI's GPT models, which consume massive computational resources but offer incremental benefits. Instead, purposeful, efficient models like Stanford’s S1, DeepSeek, and Foxconn's FoxBrain are needed. Not only India but enterprises and individuals globally require models like these—models that solve real-world problems effectively without excessive computational demand.

My Final Take

Borrowing a leaf from Shankar's book, I would say:

A few hundred GPUs (FoxBrain) > hundreds of thousands of GPUs (Grok-3)

The future belongs to smaller, smarter, and more efficient AI models like FoxBrain, S1 and DeepSeek. It’s time the industry—and all of us—recognize that solving bigger problems always matters more than building bigger models.

Just as you don’t need a 100-megapixel camera for everyday photography, we don't need 200,000-GPU AI models for most practical applications.

Saria Chowdhury

AI & Analytics | Innovation | Data & Project Management

1 week

Optimization, not just scale, will be key to AI’s mass adoption. Love seeing India making its mark in this space. Thanks for sharing this insightful perspective, Ekhlaque!

Rakesh Kumar

Cybersecurity Leader | Expert in Information Security Strategy | Risk Management & Compliance Specialist | Cloud Security Architect | 18+ Years of Empowering Organizations with Innovative Solutions

1 week

The analogy between camera megapixels and AI model size is insightful. Just as increasing megapixels doesn't automatically enhance photo quality, scaling up AI models doesn't guarantee better performance. Recent studies, such as "When Do We Not Need Larger Vision Models?", highlight that smaller, efficiently designed models can match or surpass larger ones. This perspective encourages a shift from sheer size to thoughtful architecture and data utilization in AI development.

Ekhlaque Bari

Human+AI Evangelist | Founder & CEO XdotO | AI Strategist MINFY | Generative AI | Keynote Speaker | AI Masterclasses | AI Workshop | ISB | IIM | Boards and CXOs | Ex-CIO GE, Max Life, Jubilant, SMFG, SBI Card

1 week

Big Tech is yet to wake up to the reality of small, efficient AI models. They are too deeply invested in the theory of scaling and fail to notice the law of diminishing marginal returns.
