Edge: The Next Battleground for the Infrastructure Giants

There’s a lot of clickbait out there to the effect of “the edge will eat the cloud.” To jump to the punchline: this isn’t going to happen. Edge computing is actually a huge opportunity for cloud providers: a chance to extend the cloud-native best practices they have established in their centralized data centers over the past decade (e.g., platform independence, loosely coupled microservice architecture, and continuous delivery) to regional and on-premises locations. This trend also sets the industry up for a major battle among the IT tech giants to capture and retain on-premises customer infrastructure.

In this article, I share my thoughts on how this will play out among the major technology providers, including the cloud scalers and the large IT hardware and software OEMs. For a primer on the edge continuum and related tradeoffs, I recommend the recent LF Edge community white paper, Sharpening the Edge II: Diving Deeper into the LF Edge Taxonomy and Projects, which I collaborated on with the LF Edge community.

Edge: The Last Cloud to Build

It’s important to understand that edge computing inherently has the cloud in mind in some form or another. Otherwise, we’re just talking about traditional on-premises applications. Solutions can be more cloud- or edge-centric, but the edge and the cloud have a symbiotic relationship, and the two compute models are inherently complementary.

When it comes down to it, the edge is the last cloud to build. Initially, the cloud providers focused their edge efforts on regional and metro data centers; however, we’re starting to see them go even further. Early examples include the 1U and 2U form factors of Microsoft’s Azure Stack Edge and AWS Outposts offerings. We’re even seeing AWS start to offer “easy button” IoT gateway hardware and applications as a service with its new Monitron industrial sensing solution.

What’s a Server, Anyway?

When I say “server,” you probably envision a big rectangular metal box installed in a rack. This is the classic form factor and remains popular today. However, as the market evolves, it’s best to think about a server’s purpose instead of its physical manifestation. From this standpoint, a “server” is any computing resource that provides cloud-like services for downstream users and devices. Contrast this with UI-centric client devices like smartphones, tablets, and PCs, which are typically focused on the needs of individual end users, and with IoT devices, which may do some local processing to filter and act on data but otherwise feed it upstream for visibility and further analysis.

In the early days of computing, the server function was provided by expensive, specialized mainframes. Fast-forward to today, and low-cost, off-the-shelf servers are commonplace. All of this has led to a general commoditization of data center infrastructure, with “servers” taking on new forms. For example, we’re now seeing $30 Raspberry Pis used to power Kubernetes clusters: in effect, a tiny $100 “cloud.”

Things have, of course, accelerated on the device side as well. For example, smartphones have an order of magnitude more processing power than mission control did during the early space missions. We’re even seeing more and more computing happening in microcontroller-based devices with the rise of TinyML.

An Increasingly Software-Defined World

The rise of software-defined infrastructure through the virtualization of functions like processing, storage, and networking has further accelerated the commoditization of the underlying hardware. Abstracting software from hardware makes it easier to move application workloads from node to node, rendering the hardware increasingly fungible. Cloud-native development principles are about how software is built and delivered, not where, or on what, it’s actually deployed.
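To make the fungibility point concrete, here is a minimal sketch of a declarative workload definition in the Kubernetes style. All names (the `edge-web` app, the image choice) are illustrative assumptions, not anything from the article; the point is that the workload is described in software, so the same definition can be scheduled onto a cloud VM, an on-premises server, or a Raspberry Pi cluster without modification, as long as a matching container image architecture exists.

```yaml
# Hypothetical Deployment manifest: the workload declares *what* it needs
# (image, replica count, resource requests), not *which* hardware runs it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-web            # illustrative name, not from the article
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-web
  template:
    metadata:
      labels:
        app: edge-web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # multi-arch image runs on x86 and Arm nodes alike
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
```

Applying this manifest with `kubectl apply` works the same way against a managed cloud cluster or a small cluster on commodity hardware; the scheduler, not the operator, decides which physical node runs the pods, which is exactly what makes the hardware interchangeable.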

We’ve seen this trend manifested in Hyper-Converged Infrastructure (HCI). In the early days, the processing, networking, and storage elements of data centers took the form of discrete hardware. Initial convergence was driven through standards that facilitated interoperability between these discrete components. The HCI trend has since consolidated all of these functions into single nodes through software-defined virtualization. Convergence is common in the technology world; today’s smartphones, for example, have consolidated the functions of many different devices (e.g., phone, camera, GPS, and web browser).

Edge: The Next Battleground

Over the past ten years, the public clouds have pioneered the concept of delivering centralized computing to many users as-a-service. Customers pay as they go, based on their needs for processing, storage, and networking, including data egress. One result of offering centralized processing as-a-service is that it eliminates hardware considerations for users in terms of server architecture and physical form factor. As a user consuming computing resources from a public cloud, I don’t care if the hardware is a single pizza box, delivered through 20 different boxes, “hyper-converged,” pink…whatever. I just want the right performance at the right cost and availability.

The Tier 1 global hardware OEMs such as Dell and HPE used to do a significant amount of business with the public cloud providers, selling them both servers for their main data centers and fully integrated modular data centers to expand capacity when they ran out of existing indoor real estate. This business has declined significantly for the hardware OEMs over the years because the public clouds have achieved so much scale that it’s more cost-effective for them to cut the traditional OEMs out of the loop and procure commoditized hardware directly from low-cost ODMs based overseas. In a precursor to this trend, Google pioneered the use of white-box desktop PCs in its data centers in the early 2000s. The big hardware OEMs have since been left to focus on their historical dominance serving end users’ private and on-premises data center needs.

The rise of edge computing brings specific hardware needs back into focus because it inherently involves the physical world. Homogenized servers suited to data centers with robust physical security and well-defined network perimeters often don’t cut it for distributed edge deployments, which carry unique considerations for physical space, power and cooling, form factor, I/O, ruggedization and certification, processing constraints, manageability, and security.

For the public clouds, the edge represents an opportunity to attract more customers by extending the same curated, as-a-service consumption model that they have perfected in their centralized data centers out into the field. As a result, they are all investing heavily in edge services such as Azure Stack Edge, AWS Outposts, and Google Anthos. In the process, they are largely doing what they have done in their centralized data centers in recent years: procuring their edge hardware directly from overseas suppliers. This poses a massive threat to the Tier 1 hardware OEMs, because every customer a cloud scaler locks in with an Edge-as-a-Service offering effectively locks the hardware OEMs out.

The cloud scalers’ “easy button” attack on the edge is now encroaching on the Tier 1 infrastructure OEMs’ on-premises bread and butter, so the race is on for these OEMs to retain their customer base as the clouds look to lock customers into their own ecosystems. This is why we’re seeing them make significant investments in Infrastructure-as-a-Service (IaaS) offerings such as Dell APEX and HPE GreenLake, with the intention of providing customers with the same Edge-as-a-Service “easy button,” but with a cloud-agnostic twist. Similarly, the Tier 1 software infrastructure OEMs such as VMware, Red Hat, and SUSE that offer solutions for management, orchestration, virtualization, and containerization are also investing heavily here, and they have the same opportunity to provide the cloud-agnostic “easy button” while also being hardware-agnostic.

Working in the hardware and software infrastructure OEMs’ favor is the fact that customers eventually get the bill for the clouds’ “easy button” once their edge solution hits scale. At that point, customers face a tradeoff between simplicity and being locked into a high price tag. While the hardware providers’ Edge-as-a-Service promise is no cloud lock-in, the software infrastructure providers can make the same promise while also offering choice of hardware. For this reason, I’m most bullish about the software infrastructure OEMs’ ability to square up against the clouds. However, many customers will still want their choice of hardware bundled into a complete Edge-as-a-Service package. The ultimate winners will be those that simplify the consumption of computing services while providing choice, not those that make the best widgets.

In Closing

Early adoption of new technologies is often driven by providers that take a highly curated, easy-to-consume, albeit proprietary, approach. The market then evolves toward a more open model for the masses, and open always wins in the end at scale. As we saw with AOL in the early days of the consumer internet and Apple at the dawn of smartphones, each trend started with proprietary approaches but really took off once open alternatives came into the picture.

In the case of AOL, people realized they could simply use their ISP and favorite browser to get to the internet, and Android enabled more choice for smartphones across an open ecosystem. AOL is now, of course, AWOL, and Android holds 80%+ of the global smartphone market. Similarly, with Edge-as-a-Service offerings, we’ll first see walled gardens from the public clouds, and then more open approaches that give customers very similar benefits while retaining flexibility and choice.

Organizations that adopt these emerging services will continue to run some workloads at the edge over the long run, while other workloads that start at the edge will eventually migrate back into the cloud. Where workloads are deployed across the continuum ultimately comes down to balancing performance and cost while meeting any specific needs for autonomy, security, privacy, and data sovereignty.

As enticing as the cloud scalers’ “easy button” value proposition may seem, the long-term reality for customers is that the costs will likely become untenable once their data really starts flowing. After all, the clouds’ business model is to make it easy and inexpensive to get data in, and expensive to keep it there and get it out.

As such, it’s critical to invest in a multi-cloud strategy rooted in an open edge to retain maximum long-term flexibility and bargaining power. By working with cloud-agnostic infrastructure providers that offer Edge-as-a-Service, organizations can “have their cake and eat it too”: solutions that address their business challenges with the pay-as-you-grow benefits of the public cloud, while maintaining maximum control over their data.

It will be interesting to see how this all plays out over the next few years, but I certainly wouldn’t bet against the traditional IT hardware and software infrastructure players.

Oscar De Leon

Congratulations

Maybe, or public cloud networks will provide the best-quality dial tone they can to the edge sites we own, where the majority of the important work gets done

Good educational read, thanks Jason!

Joe Pearson

Thanks for posting this, Jason Shepherd . I rely on your insights when pondering possible futures.

Bill O'Such

100% agree it is edge + cloud. I think the subtleties are in the dynamics of which part of the edge and the ferocity of the battle among the players: network/telco edge, on-premises edge, far edge, and probably others.
