New cloud math involves clustering
The VMworld conference happening this week is a good reminder of just how dominant a position VMware has in enterprise private cloud. Yes, they have made multiple missteps with their public cloud strategy (remember vCloud Air?). But they still have a $38B market cap, and this year is particularly important because all the major public cloud players (Amazon, Microsoft Azure, Google, IBM and Oracle) are beginning to aggressively target VMware workloads.
In August 2016 VMware and AWS announced their partnership, but this week marks the initial availability of VMware Cloud on AWS -- which essentially allows enterprises to run vSphere on AWS infrastructure. But the devil is definitely in the details when it comes to figuring out what to run, how to run it and, ultimately, what it will cost.
One company that truly understands all of these details, to an incredible degree of precision, is CloudPhysics. As an investor and board member since the company's founding, I have watched their original premise of "VMware workload analytics" evolve into a powerful solution that can uniquely analyze granular resource utilization (CPU, memory, network and storage) and turn it into insights that drive infrastructure decisions. They have amassed a huge data lake of workload resource metrics and recently put their data science team on answering the key question: "Is VMC on AWS cost-competitive with traditional public clouds?" And their answer is pretty shocking.
Most VMware customers can save significant money moving clusters to VMC on AWS.
The initial adoption of VMware was predicated on server consolidation: leveraging virtualization to run multiple guest machines on shared hardware. So moving individual guest workloads to a public cloud is generally not very cost-effective. But when you work at the cluster level, things get much more interesting, and more complicated. The team at CloudPhysics can provide far more technical detail, but at the highest level the efficiencies come from the economics of resource utilization at the cluster level versus at the individual machine level.
Think about the most prevalent model for resourcing applications, namely sizing infrastructure for peak load. Whether you pick the 99th or the 95th percentile, most of the time resources are (by definition) massively underutilized. Now factor that unused capacity across RAM, CPU, SSD/disk and bandwidth, and the potential savings become very significant.
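To make that arithmetic concrete, here is a minimal sketch of the cluster-versus-individual-machine effect. This is not CloudPhysics' model and the utilization numbers are randomly generated for illustration only; it simply contrasts sizing each VM for its own 95th-percentile peak with sizing a whole cluster for the 95th percentile of aggregate demand.

```python
# Illustrative sketch only (hypothetical data, not CloudPhysics' methodology).
# Compares per-VM peak sizing against cluster-level peak sizing.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical CPU demand (GHz) sampled every 5 minutes for a week,
# for 40 guest VMs whose peaks rarely coincide.
samples_per_vm = 12 * 24 * 7
vm_demand = rng.gamma(shape=2.0, scale=0.5, size=(40, samples_per_vm))

# Per-VM sizing: provision each VM for its own 95th-percentile peak.
per_vm_capacity = np.percentile(vm_demand, 95, axis=1).sum()

# Cluster-level sizing: provision for the 95th percentile of the
# cluster's aggregate demand, letting uncorrelated peaks overlap.
cluster_capacity = np.percentile(vm_demand.sum(axis=0), 95)

savings = 1 - cluster_capacity / per_vm_capacity
print(f"Sum of per-VM p95 peaks : {per_vm_capacity:.1f} GHz")
print(f"p95 of aggregate demand : {cluster_capacity:.1f} GHz")
print(f"Capacity reduction from cluster-level sizing: {savings:.0%}")
```

Because the peaks of different guests rarely line up, the aggregate percentile sits well below the sum of the individual percentiles; repeat the same exercise for memory, storage and bandwidth and the unused headroom compounds.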
Eventually, we will want to pack workloads as densely as possible into clusters/pods, into larger groupings like data centers, or even into logical groups based on criticality (BTW, take a look at HashiCorp Nomad). But today, the average private cloud vSphere cluster is only 6-8 host machines, and the vast majority of workloads running on those clusters are Windows guests. So, despite the hopes of Azure Stack, Kubernetes and others, VMware is by far the biggest player in private cloud software. Their annual revenue of $7B is heavily weighted toward vSphere, whereas AWS just crossed $4B in quarterly revenue.
VMware might just have found their silver lining with VMC on AWS.