The Possible, the Practical, and the Profitable
How to Evaluate Technology: From the Light Bulb Moment to the Last Mile Implementation
Originally posted on the Outsideshot substack.
We all know the story: Thomas Edison and his team tested over 6,000 materials before creating a practical incandescent light bulb in 1879. It was the quintessential "light bulb moment" – that breakthrough instant when innovation seems to solve everything. By 1882, Edison's Pearl Street Station was powering 85 customers in lower Manhattan, and the electric future seemed assured. The technology worked: electricity could be generated reliably, transmitted effectively, and transformed into light and power. Public demonstrations were spectacular, making gas lamps seem antiquated overnight.
Edison famously quipped that “Genius is one percent inspiration and ninety-nine percent perspiration.” This is almost always read as describing how his discovery succeeded: through tireless effort to try every possible combination and find the best filament for the light bulb. But the lesson is broader. Edison was emphasizing that it’s not just about having brilliant ideas, or breakthroughs, or demos, or even products; the real achievement comes from the sustained effort and labor that turn those developments into real-world solutions that scale.
So Edison's light bulb moment, revolutionary as it was, marked the beginning rather than the end of electricity's journey to universal utility. The gaps between "working technology" and "compelling use cases" and then "working system" would take decades to bridge. Success demanded far more than generators and bulbs: it required standardization of electrical systems, development of safe wiring practices, creation of intuitive interfaces like the modern wall plug, and establishment of reliable transmission infrastructure. It required solving countless "last mile" problems – the final connections and implementations that turned brilliant technology into practical reality.
Today's artificial intelligence is having its own two-year-long (and counting) run of light bulb moments. LLMs perform spectacular feats in demos, computer vision systems achieve superhuman accuracy, and each new breakthrough seems to prove the inevitability of an AI-powered future. We’re almost there. But there is that pesky Last Mile.
Consider a recent AI scheduling system deployment my colleagues and I were asked to review (some details have been changed to protect the confidentiality of the project). In controlled demonstrations, the language model flawlessly understood scheduling logic and generated natural conversations. When focused on outbound scheduling calls, it proved its business case convincingly. Performance metrics for the test cases were strong: faster scheduling, higher success rates, reduced human intervention.
Yet deep into implementation, that initial brightness dimmed against the harsh reality of system integration. While the core AI performed exactly as demonstrated, the actual system struggled with modern "last mile" problems: telephony integration latencies created awkward conversational gaps, calendar API calls introduced unexpected delays, and the interaction between various system components degraded the smooth experience shown in controlled demonstrations. We found that a combination of telephony knowledge gaps and unexpected chain reactions was conspiring to create performance-crushing latency. All of it was correctable with the appropriate domain expertise and systems engineering, but very difficult for even brilliant software developers to identify.
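To make the latency arithmetic concrete, here is a minimal sketch of why a chain of individually reasonable dependencies blows past a conversational budget. All component names and numbers below are illustrative, not measurements from the reviewed system:

```python
# Hypothetical per-turn latency budget for a voice scheduling agent.
# Each component looks fine in isolation; in sequence they do not.
COMPONENT_LATENCY_MS = {
    "speech_to_text": 300,
    "llm_response": 800,
    "calendar_api": 600,      # external dependency, often overlooked
    "text_to_speech": 250,
    "telephony_transport": 150,
}

# Roughly how long a silence can last before a phone call feels awkward.
CONVERSATIONAL_BUDGET_MS = 1000

def turn_latency_ms(components: dict[str, int]) -> int:
    """Worst-case latency for one conversational turn when components run serially."""
    return sum(components.values())

def over_budget(components: dict[str, int], budget_ms: int) -> list[str]:
    """Greedy diagnosis: which components must move off the critical path
    (overlapped, cached, or prefetched) to get back under budget."""
    total = turn_latency_ms(components)
    culprits = []
    for name, ms in sorted(components.items(), key=lambda kv: -kv[1]):
        if total <= budget_ms:
            break
        culprits.append(name)
        total -= ms  # assume this call no longer blocks the turn
    return culprits

total = turn_latency_ms(COMPONENT_LATENCY_MS)
print(f"turn latency: {total} ms (budget {CONVERSATIONAL_BUDGET_MS} ms)")
print("move off critical path:", over_budget(COMPONENT_LATENCY_MS, CONVERSATIONAL_BUDGET_MS))
```

The point of the sketch is that no single component is "broken" – the serial composition is. Fixing it means systems work (streaming, caching, overlapping calls), not a better model.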
This parallel illuminates a crucial truth about AI implementation: having a "light bulb moment" – even thousands of them – isn't enough. Just as electricity required more than working generators and bright bulbs to transform society, AI needs more than impressive models and compelling demos to deliver real value.
Three Phases of Emerging Technologies: The Possible, The Practical, and The Profitable
There are three milestones on the path to widespread successful adoption of emerging technologies. They can be thought of as capabilities (what’s possible), functionalities (what’s practical as a use case), and last mile solutions (what’s ultimately production ready – and profit driving). Understanding where any given solution exists along this spectrum can make a huge difference in gauging its expected value.
The excitement level decreases as you move along the progression, but the total realizable value increases massively along the same path.
When selling transformative technology, whether Edison's electricity or modern AI, vendors typically race to reach each phase first, highlighting their achievements through three key elements that shine brightest in demonstrations:
1. Possible: This is when a new capability is unleashed. In its most exciting form, it’s the stunning moment that Arthur C. Clarke crystallized when he wrote that “any sufficiently advanced technology is indistinguishable from magic.” For Edison, it was the bright, steady light of his bulb. For modern AI, it was most notably the introduction of ChatGPT, exposing the ability to understand and generate human language, recognize images, or solve complex problems. But it’s echoed in each new frontier push or modality. These capabilities are often demonstrated in controlled environments with perfect conditions, or in very simple workflows.
2. Practical: This is how these capabilities apply to specific use cases that solve real business needs. Edison showed how electric light could illuminate homes, power machinery, and enable new industries. AI vendors demonstrate how their technology can schedule appointments, analyze documents, or automate customer service. These demonstrations transform abstract abilities into concrete applications, framed with competitive differentiators about speed, accuracy, or efficiency. Edison proved electric light was cleaner, safer, and more efficient than gas. AI vendors tout faster processing times, higher accuracy rates, and improved efficiency metrics. These statistics, while genuine, are typically measured under optimal conditions.
3. Profitable: The real downstream value of technology innovation can only be realized when the practical use cases are actually delivered in live, complex, dynamic environments. This, especially in the beginning, can be an enormous lift that requires a nuanced understanding of all the moving parts. For the adoption of electricity in factories, major hurdles had to be overcome to distribute power to the locations, to retrofit equipment to leverage the new power source, to understand and accommodate the new safety risks, and to train personnel to install, maintain, and service the new hardware.
From Practical to Profitable: The Systems Thinking Challenge
The journey from practical to profitable demands more than just functional technology. It requires robust systems thinking. What transforms promising technology into valuable solutions is the complete system: the infrastructure, interfaces, error handling, and management capabilities that turn capabilities and functionalities into reliable real-world results.
The challenges of new technology implementations like AI remind me of the insights of two seminal thinkers in systems and organizational theory: Russell Ackoff (nerd out by reading Redesigning the Future or watching his lectures; he is hilarious and brilliant) and Charles Perrow (nerd out by reading Normal Accidents: Living with High-Risk Technologies). Their frameworks, developed well before the era of artificial intelligence, demonstrate remarkable prescience in explaining today's implementation challenges. Ackoff's work from the 1970s and Perrow's analysis from the early 80s weren't contemplating AI assistants or agents, yet their insights about complex systems and organizational challenges map perfectly onto today's AI implementation landscape. It's a testament to the power of their ideas that frameworks developed for manufacturing, healthcare, power, and early computer systems so precisely diagnose the challenges of implementing modern artificial intelligence.
One of Ackoff's key insights was that systems deliver real value only when all component pieces work together effectively. He illustrated this powerfully with the example of an automobile, a system that moves you from A to B. While an automobile can easily be described as a collection of parts, none of them can move you from A to B or even provide much value on their own until those parts work together as a system. Even more tellingly, he noted that if you took the "best" components from different manufacturers - say, a Ferrari engine, Porsche brakes, a Jeep suspension, and a Rolls Royce chassis - they wouldn't work together at all without redesigning interfaces and connections, despite each being "best in class." Individual components, no matter how impressive in isolation, provide little value without successful integration into a complete system. Consider an AI voice agent: despite its sophisticated natural language capabilities, it does not deliver value without reliable telephony infrastructure to connect it with users, integration to contact center and CRM platforms to identify users and tasks, and external tools to take actions. Each component must not only work, but work in concert with others.
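Ackoff's integration argument has a simple quantitative cousin that applies directly to that voice agent: in a serial system, component availabilities multiply, so several individually excellent parts can still add up to a mediocre whole. A toy sketch, with availability figures invented purely for illustration:

```python
# Toy model of Ackoff's integration argument: end-to-end reliability
# compounds multiplicatively, so a system of individually good parts
# can still be unacceptably unreliable as a whole. Numbers are made up.
COMPONENT_AVAILABILITY = {
    "llm": 0.995,
    "telephony": 0.99,
    "crm_integration": 0.98,
    "calendar_api": 0.97,
}

def system_availability(components: dict[str, float]) -> float:
    """A serial system works only when every component works at once."""
    result = 1.0
    for availability in components.values():
        result *= availability
    return result

avail = system_availability(COMPONENT_AVAILABILITY)
print(f"every component is 97%+ available, yet the system is {avail:.1%}")  # ~93.6%
```

Four parts each "best in class" on its own metric, and the assembled system still fails roughly one interaction in sixteen – which is exactly why the integration work, not the component selection, determines the outcome.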
However, implementing these components proves far more challenging than simply checking items off a list. A second crucial insight of Ackoff's was about the interdependence of system components - how changes to one part of a system can produce unexpected effects in others, even when those changes appear innocuous in isolation.
This is where Perrow's writing becomes particularly relevant. Charles Perrow, in his theory of Normal Accidents, argued that in complex, tightly coupled systems, accidents are inevitable and should be considered "normal." The more complex a system (with many interacting parts) and the more tightly coupled (with little slack or buffer between parts), the more prone it is to cascading failures that are difficult to predict or prevent.
Through these lenses, we can better understand why many AI implementation challenges are particularly resistant to traditional solution approaches.
Understanding AI implementation challenges through Ackoff and Perrow's frameworks suggests several important implications:
1. Solutions must be systemic rather than piecemeal. Attempting to solve individual problems without considering their interactions will likely make the overall mess worse.
2. Tight coupling needs to be explicitly acknowledged and designed for. Rather than trying to prevent all possible failures, systems should be designed to fail gracefully and recover quickly.
3. Implementation strategies need to focus on managing messes rather than solving problems. This means developing approaches that can handle complexity and uncertainty rather than trying to eliminate them.
4. Organizations need to build in slack and buffers – what Perrow would call creating "loose coupling" – even at the cost of some efficiency, to prevent cascading failures.
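Implications 2 and 4 above can be sketched as a generic timeout-plus-fallback wrapper: a slow or failing dependency is cut off at a deadline and replaced with a degraded response, so one component's trouble doesn't cascade through a tightly coupled chain. This is a pattern sketch, not code from any deployed system; the calendar lookup and the phrasing are invented:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def with_fallback(call, fallback, timeout_s=0.5):
    """Run a dependency with a hard deadline; degrade instead of cascading.

    This is Perrow-style loose coupling bought at the cost of some
    fidelity: the conversation keeps moving even when a downstream
    system stalls or breaks.
    """
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        future = pool.submit(call)
        try:
            return future.result(timeout=timeout_s)
        except FutureTimeout:
            return fallback()   # too slow: answer in a reduced mode
        except Exception:
            return fallback()   # failed outright: same graceful exit
    finally:
        pool.shutdown(wait=False)  # don't make the caller wait on the straggler

# A calendar lookup that takes 1s, against a 0.2s conversational deadline:
slow_lookup = lambda: (time.sleep(1), "3pm works")[1]
holding_line = lambda: "Let me double-check that and get right back to you."
print(with_fallback(slow_lookup, holding_line, timeout_s=0.2))  # the holding line, not a 1s silence
```

The slack Perrow recommends shows up here as the fallback path: it costs some efficiency (the user gets a holding response instead of an instant answer), but it converts a potential cascading failure into a recoverable hiccup.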
The organizations that succeed in AI implementation will be those that recognize they're dealing with messes rather than problems, and that build systems resilient enough to handle the normal accidents that will inevitably occur in such complex, tightly coupled systems.
Caveat Emptor: Strategic Implications for Buyers
Understanding the primacy of last mile excellence fundamentally changes how organizations should evaluate and implement AI solutions:
1. Invert the Evaluation Process: Rather than starting with capabilities and working forward, begin by examining the last mile challenges in your environment and work backward. The most impressive AI means little if it can't operate within your actual constraints.
2. Prioritize Integration Over Innovation: When choosing between solutions, superior integration capabilities should outweigh marginal improvements in core AI performance. A 10% more accurate model that takes twice as long to integrate is often the wrong choice.
3. Value Battle-Tested Over Brilliant: Seek evidence of successful real-world deployment over impressive demonstrations. Like early electricity providers, the best AI vendors are often those who have learned hard lessons about system integration.
4. Focus on Failure Modes: Examine not just how systems perform when everything works, but how they degrade when things go wrong. The best solutions maintain grace under pressure.
Identifying Last Mile Excellence (what to look for from sellers)
How can organizations identify solutions likely to excel in last mile implementation? Look for:
1. Reference Architectures
- Detailed integration documentation
- Clear system requirements
- Comprehensive API specifications
- Performance optimization guides
2. Implementation Experience
- Track record in similar environments
- Case studies with concrete metrics
- Documented lessons learned
- Active developer community
3. Support Infrastructure
- Robust monitoring tools
- Debugging capabilities
- Performance optimization tools
- Implementation partnerships for specific aspects
Looking Forward
As AI technology continues to advance, the gap between capability and implementation will remain a crucial consideration. Like electricity before it, AI will ultimately become a utility – but only through the patient solving of last mile challenges.
Organizations that understand this reality will make better technology choices, focusing not just on the bright light of AI capabilities but on the crucial infrastructure that makes those capabilities practically useful. The winners in the AI era will be those who excel not just at selecting powerful technology, but at implementing it effectively.
In the end, Edison's greatest achievement wasn't the light bulb – it was the creation of a system that made electric light practical and universal. Similarly, AI's greatest achievements won't be its impressive capabilities, but the systems that make those capabilities consistently useful in the real world.
Testing This Framework
If this analysis is correct, we should see several developments over the next few years:
1. The most successful AI solutions won't be differentiated by their models, but by the most robust implementation frameworks and integration capabilities.
2. Enterprise AI adoption will increasingly favor vendors who excel at last mile solutions over those with marginally better core capabilities.
3. The next wave of prominent AI startups will focus more on solving implementation challenges than on developing new core AI capabilities.
4. Large enterprises will prioritize their internal "AI implementation infrastructure" over buying access to more sophisticated models.
However, if this framework is wrong and last mile excellence isn't the key differentiator, we should instead see:
1. Continued market dominance by companies with the most advanced core AI capabilities, regardless of their implementation sophistication.
2. Enterprise customers consistently choosing solutions with the best demos and raw performance metrics, even at the cost of more complex integration.
3. The next wave of AI startups focusing primarily on developing more powerful models rather than better implementation tools.
4. Large enterprises investing primarily in access to more sophisticated AI models rather than in implementation infrastructure.
Time will tell which path the industry takes, but the history of transformative technologies suggests that mastering the last mile is where the real value lies.
Very insightful post, Fish. As the core technologies become increasingly undifferentiated and commoditized, it seems clear that it is in the last mile that most of the value will be created. And you are spot on in arguing that it’s in systems that the value will be found. I would also argue that, while the systems integration aspects are critical, we also need to learn to tame this new technology and develop a good understanding of the new capabilities that it unlocks. I still find it hard to grapple with a technology that can be both amazingly powerful and surprisingly unreliable. But this is not a novel problem. When I started working on conversational agents, over 25 years ago, we were working with technology – speech recognition – that was then both amazing and somewhat unreliable. What distinguished great applications from bad ones was not, as you rightly pointed out, marginally better technology. It was all the great engineering and conversational design that went into creating a solution that leveraged the technology’s strengths while walking around its limitations.
Systems Engineer at SIG | AWS SAA | DevOps | M.Sc. Eng
Very insightful, Fish. One of the notable choices from DeepSeek has been the decision to adopt an API format compatible with OpenAI. This approach seems to trace the same footsteps as S3, where its API became (pretty much) the standard. I wonder if the same will happen in this case. By leveraging the AI API, developers can seamlessly utilize AI as a SaaS, and the product built on top of the AI API is what will bring in value/profit. For startups, this strategy enables rapid growth through the utilization of the API, while long-term stability and maturity will be achieved by developing their own AI infrastructure. The points that you raised then come into play, as integration with the APIs and the "last mile" implementation become the big challenges to overcome.
Chief Data Officer for Retail & DTC | Snowflake Pro
This is great. I think the entire debate around AI hinges on this “Last Mile” problem you’re describing.