The Edge Marketplace – Bigger than the Internet?
The Edge Marketplace, the internet-style marketplace of edge opportunities over the next 10 years, has the potential to dwarf the first 10 years of the modern internet age (1995-2005). However, there are risks and significant barriers to the development of the edge marketplace.
We often speak and write about the 'edge'. We speak about the technologies that will enable the edge (5G, distributed cloud, low-power and automated infrastructure, easy access to facility space/power/cooling, fiber and more). We also speak of the tech trends that will benefit from an enabled edge environment (IoT, robotics, AR/VR, AI/ML, drones, autonomous vehicles, smart cities, etc.) and business trends, like Digital Transformation, that demand improved customer intimacy and experience. Michael Dell has said that the edge will be 100 times bigger than the public cloud, and the well-known VC Peter Levine has suggested that edge computing will kill the cloud. Yet beyond the shared assumption that the edge will be "big", most of the discussions I'm a part of and the articles I've read say little about what the Edge Marketplace could become with the appropriate push.
How the internet started
While there is significant debate about when the internet was born, many agree that the genesis was the first message passed between an Interface Message Processor (IMP) and its attached computer at UCLA in Los Angeles and a second IMP and computer at the Stanford Research Institute on October 29th, 1969. Twelve years after that first message on the ARPANET there were still only 213 computers on the network. Fourteen years after that, an estimated 44 million people were using the network. Still, that's 26 years to get to just 44 million (out of a possible 6 billion) people on the network (internet). Growth really accelerated after 1995: between 1995 and 2005 internet users grew from 44 million to over 1 billion, an average of more than 250,000 new users every day.
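A quick back-of-the-envelope check of that growth rate (a sketch only; the endpoint figures are rounded estimates, so the daily average is approximate):

```python
# Back-of-the-envelope check on 1995-2005 internet growth (rounded estimates).
users_1995 = 44_000_000        # approx. internet users in 1995
users_2005 = 1_000_000_000     # approx. internet users in 2005
days = 10 * 365.25             # roughly ten years of days

new_users_per_day = (users_2005 - users_1995) / days
print(f"~{new_users_per_day:,.0f} new users per day")   # roughly 262,000
```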
Why is “How the Internet Started” important?
Most of us, as casual observers, see the internet as having started when we could use a browser to easily hunt for things on the web, circa 1995. In fact, the internet started 26 years earlier, and the barriers to entry for the other 99.9% of us meant it remained a pretty small marketplace for more than 25 years. Imagine if you had started a company like VMware, Amazon or a web page design firm in 1969; to say you were early would be a gross understatement. The edge, as we commonly reference it today, started in about 1998 with the introduction of the Akamai CDN. Delivering content cached at the edge was really the first major enabler of the edge. It's now 2019, 21 years later, and we're only beginning to leverage the edge the way we could. The internet began to blossom as a market maker, or marketplace, in 1995 for a number of reasons, including but not limited to: lower-cost PCs, greater access to an internet connection at home or the office, and software that made it easy for the average customer to use. While we may have had PCs or Macs with different interfaces and operating systems, and we might have used Netscape rather than Mosaic, access to content was for the most part unified by HTTP and HTML. This meant that when new users came onto the internet, they weren't forced to select the Mosaic or Apple portion of the internet; they could see and use it all. In effect, the internet was always there; we just didn't have many of the enabling technologies needed to make it useful and efficient, and we needed to reduce the barriers to entry.
Barriers to entry could continue to drag on the pace of development of the edge marketplace
One of my arguments here is that we run the risk of delaying the growth of the Edge Marketplace by letting APIs (cloud design) be the gatekeepers to access and utilization of edge capacity and capability. We might be forced to pick a public cloud provider (e.g., AWS or AT&T) and then be limited to deploying on infrastructure that is part of that provider's ecosystem. In truth, you could also deploy to other cloud providers, but that means unique development and design requirements depending on whom you deploy to and what you're deploying. The overhead of having to deploy to multiple cloud providers means that many applications or solutions won't see the light of day, because the barrier to entry is too high. Reducing the barriers to entry is the key to giving the Edge Marketplace its 1995 internet moment.
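As a rough illustration of that overhead, here is a hedged sketch: the classes and method names below are hypothetical stand-ins, not real provider SDKs, but they capture how each provider today imposes its own packaging, APIs and rollout model on the same workload.

```python
# Hypothetical sketch only: neither class maps to a real SDK. The point is that
# each provider currently needs its own packaging, auth and deployment path.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    artifact: str          # path to the packaged code
    max_latency_ms: int    # latency target the business case needs
    regions: list[str]     # where the end users are

class ProviderADeployer:
    """Imaginary provider A: wants a container image and region-by-region rollout."""
    def deploy(self, w: Workload) -> None:
        image = self.build_container(w.artifact)      # provider-specific packaging
        for region in w.regions:
            self.push_to_region(image, region)        # provider-specific API

    def build_container(self, artifact: str) -> str: ...
    def push_to_region(self, image: str, region: str) -> None: ...

class ProviderBDeployer:
    """Imaginary provider B: wants a zip bundle and one global endpoint config."""
    def deploy(self, w: Workload) -> None:
        bundle = self.zip_bundle(w.artifact)          # different packaging model
        self.configure_global_endpoint(bundle, w.max_latency_ms)

    def zip_bundle(self, artifact: str) -> str: ...
    def configure_global_endpoint(self, bundle: str, latency_ms: int) -> None: ...

# Every additional provider multiplies the code paths an application owner must
# build, test and operate: the overhead this article calls a barrier to entry.
```

An "HTML moment" for the edge would collapse these per-provider paths into a single declarative interface, which is what the later sections argue for.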
What barriers to entry remain?
The barriers to entry vary depending on the perspective you approach the market from. If you're a gaming company looking to offer ultra-low latency in most urban environments, then what you need is access to effectively and efficiently distributed capacity (space, power, cooling, security, network and maybe compute). If you're an enterprise that wants to provide a new customer experience requiring localized data and low latency, then what you need is a common cloud platform that reaches the majority of urban areas. In the interest of brevity, I've listed below, in no particular order, a limited set of the existing barriers to the rapid creation of a global edge marketplace:
- Latency options spanning the gamut from < 10 ms to 100 ms
- A unified model for deploying code to the majority of end users
- Security and governance for edge-based devices and associated data
- Data center capacity in 1000s of locations that is easily acquired in small increments
- Simplified billing
- Localization for data sensitivity
- 5G deployments in most urban areas
- The ability to effectively work across telco operators (workloads span providers)
- The ability to effectively work across cloud providers (workloads span providers)
I believe these are hard barriers because I don't see a few big-money, killer applications forcing the development of global edge infrastructure and deployment methods anytime soon. If a few big-money applications were to hit the market, they could push through the barriers, because their potential value per customer would warrant it. I see it as much more likely that the edge marketplace will be developed by a million little applications that are made easily available to millions or billions of potential users almost instantly (similar to an iPhone app). The problem with this type of development is that no one buyer is a catalyst for major global edge infrastructure development. In other words: show me the money.
Unlike the internet of 1990
In 1990 we didn’t sit around contemplating whether “I should look for new shoes online” or “Should I book my flight to Hawaii from the app on my phone”. Why didn’t we? We didn’t because there was no reason to believe we could and nothing really telling us that the possibility was around the proverbial corner. The iPhone is a similar example, we weren’t mountain biking the Coyote hill trails wondering what the one millionth application should be on our phones. In fact, if anyone had suggested that there could possibly be more than 1 million application options on your phone you would have laughed at them. However, today isn’t like 1990 or 2004, we know there are millions of potential opportunities at the edge and millions more that we haven’t even imagined yet. We also have most of the technologies we need at the price point to make sense for enabling the edge, but we don’t have everything. General availability of 5G is likely a necessity. I argue that even if 5G is widely available in four years, we still won’t be able to leverage it to its ultimate potential. I believe our risks can be captured in a few key areas:
- A splintered market, caused by the lack of a standard for global code deployment
- No easy access to a network of global capacity (compute, network, space, power & cooling)
- No single place to buy “edge”
- Difficulty in addressing a complex set of global government requirements (especially as a lone developer)
- No models for sharing resources among friends and competitors
- No options for sharing workloads across networks (e.g., AT&T and BT)
The next five years
I find it especially interesting that I'm effectively advocating for a true commoditization of cloud offerings. It's interesting because I didn't think it would happen, and yet now I believe that without it, the opportunity at the edge will be limited in scope and delayed in delivery. In effect, we need an HTML for edge computing. Maybe an open or standard serverless framework for applications could help, but will that accommodate the majority of workloads at the edge? We need ways to abstract code delivery and execution across a variety of network and infrastructure offerings. A developer or application owner should be able to select delivery based on regions/countries/latency and then deploy. Today, attempting to deploy to a group of customers that isn't already close to an AWS, GCP, Azure or Alibaba cloud campus is difficult at best, impossible at worst. All that being said, I have no doubt that the next five years will be interesting and that people will find a way past the existing hurdles. I also think we'll see companies making money by reducing barriers to entry, allowing for increased sales of hardware, software and services. I hope we see an organic ability for the seller (coder/application developer) and the user to benefit at the edge in thousands of new ways, but I fear it will take a few years longer to get there than many of us want.
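To make the "HTML for edge computing" idea a little more concrete, here is a minimal sketch of what a provider-neutral deployment request could look like. Everything in it is an assumption for illustration (the EdgeDeployment fields and the submit_to_marketplace function are hypothetical, not an existing standard): the developer declares regions, a latency target and data-residency constraints once, and the marketplace decides which operators and facilities actually run the workload.

```python
# Hypothetical, provider-neutral edge deployment manifest -- not an existing standard.
from dataclasses import dataclass, field

@dataclass
class EdgeDeployment:
    name: str
    artifact: str                          # packaged code (container image, zip, etc.)
    regions: list[str]                     # markets where end users are, e.g. ["us-west", "uk", "de"]
    max_latency_ms: int = 20               # target latency from end user to workload
    data_residency: list[str] = field(default_factory=list)  # countries data must stay in

def submit_to_marketplace(deployment: EdgeDeployment) -> None:
    """Stand-in for a marketplace API that would place the workload on whichever
    operators and facilities satisfy the declared constraints."""
    print(f"Requesting placement for {deployment.name}: "
          f"{deployment.regions}, <= {deployment.max_latency_ms} ms")

# A game server that needs sub-20 ms latency in three markets, with EU data kept in-region.
submit_to_marketplace(EdgeDeployment(
    name="match-server",
    artifact="registry.example.com/match-server:1.4",
    regions=["us-west", "uk", "de"],
    max_latency_ms=20,
    data_residency=["DE", "GB"],
))
```

As with HTML in 1995, the value would lie not in any particular schema but in the fact that one declaration could reach every participating operator's infrastructure.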