Cost Wars: Data Center vs. Public Cloud
Gajendra Singh
Nine years ago Amazon launched Amazon Web Services (AWS) and kicked off perhaps the most transformative shift in the history of the $300B data center industry.
The idea stemmed from something simple: Amazon had built such a massive and highly efficient data center that it could sell off capacity with pay-as-you-go pricing. In doing so, Amazon started a revolution that’s on track to completely redefine how computing works in the enterprise. And as with any transition, there will be winners and losers, and therefore different points of view.
One of the biggest debates AWS has unleashed centers on a simple question: which is cheaper, deploying and hosting your own infrastructure or using infrastructure in the public cloud? Numerous studies have weighed in on this debate (from Gartner, InfoWorld, ITWorld, and InformationWeek).
The math around this issue tends to get very fuzzy, because a true apples-to-apples comparison requires capturing a lot of soft costs. The cost of a CPU chip is easy to measure. But the calculus gets more complicated when you consider that on the public cloud you get a lot more than a bare chip: you get a fully managed service. You therefore need to capture the cost of the labor associated with deploying and maintaining the server. That cost is nontrivial, and the measurements I’ve seen vary widely depending on which outcome one is rooting for, on-premises servers or cloud-based servers. In addition, cost needs to be amortized over the life of the equipment, a figure that varies. And the cost of cloud computing is dropping faster than the cost of on-premises equipment, which needs to be factored in as well. Given all of these variables, I’ve found the true cost of computing on the cloud to be roughly twice the cost of a fully loaded do-it-yourself data center.
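To make that comparison concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (hardware price, labor and facilities cost, amortization period, cloud hourly rate) is an illustrative assumption rather than measured data; the point is only to show which terms enter a fully loaded comparison, not to settle it.

```python
# Back-of-the-envelope comparison of fully loaded on-premises cost vs. public cloud.
# Every number below is an illustrative assumption, not a benchmark.

HOURS_PER_YEAR = 24 * 365

def on_prem_cost_per_hour(hardware_price=5000.0,     # assumed server purchase price
                          annual_labor=1500.0,        # assumed ops labor allocated per server, per year
                          annual_power_space=800.0,   # assumed power, cooling, and rack space per year
                          amortization_years=4):      # assumed useful life of the equipment
    """Fully loaded cost of owning a server, per hour of its life (busy or idle)."""
    total = hardware_price + amortization_years * (annual_labor + annual_power_space)
    return total / (amortization_years * HOURS_PER_YEAR)

def cloud_cost_per_hour(hourly_rate=0.80):            # assumed on-demand price per hour
    """Cloud cost is simply the metered rate for the hours you actually run."""
    return hourly_rate

print(f"on-prem: ${on_prem_cost_per_hour():.2f} per hour of ownership")
print(f"cloud:   ${cloud_cost_per_hour():.2f} per hour of use")
```

With these assumed numbers the cloud comes out to roughly twice the hourly cost of owning the server, in line with the ratio above.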
When comparing costs, however, we must also consider utilization, which is a key parameter. On the public cloud, you pay only for what you use. When you build your own server, you pay for it all the time, whether it’s busy or not. Since the cloud costs roughly twice as much per hour, the break-even point is about 50 percent utilization: if you can keep an application running at 50 percent utilization or more around the clock, it might be cheaper to build the server yourself. But in practice it’s really hard to keep a server running at 50 percent or more utilization around the clock. Very few apps work this way. Gartner estimated that properly managed storage infrastructure has server utilization of less than 15 percent, CIO Magazine cited a Gartner analyst putting it at about 25 percent, and most recently AWS pegged on-premises utilization at less than 20 percent. When I talk to actual IT ops people, they smirk and say, “More like 10 percent.”
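Continuing the same assumed figures (cloud at roughly twice the hourly cost of ownership), the utilization argument reduces to a one-line break-even calculation: you own the server around the clock, but the cloud bill stops when the instance does, so the on-premises cost per useful hour is the ownership cost divided by utilization. The numbers below are illustrative, not measured.

```python
# Effective cost per *useful* hour, continuing the assumed figures above:
# on-prem ~$0.40 per hour of ownership, cloud ~$0.80 per hour of actual use.

ON_PREM_HOURLY = 0.40   # assumed fully loaded cost per hour of ownership
CLOUD_HOURLY = 0.80     # assumed on-demand rate, paid only while the workload runs

def on_prem_per_useful_hour(utilization):
    """You pay for an owned server whether it is busy or idle."""
    return ON_PREM_HOURLY / utilization

# Break-even: the utilization at which owning and renting cost the same per useful hour.
break_even = ON_PREM_HOURLY / CLOUD_HOURLY   # = 0.5 when the cloud costs ~2x
print(f"break-even utilization: {break_even:.0%}")

for u in (0.10, 0.15, 0.25, 0.50, 0.75):
    diy = on_prem_per_useful_hour(u)
    verdict = "cloud cheaper" if diy > CLOUD_HOURLY else "on-prem cheaper (or equal)"
    print(f"utilization {u:>4.0%}: on-prem ${diy:.2f} vs. cloud ${CLOUD_HOURLY:.2f} per useful hour -> {verdict}")
```

With the cloud at about twice the ownership cost per hour, the break-even sits at roughly 50 percent utilization, which is why the 10-to-25 percent figures cited above tilt the pure cost math toward the cloud.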
Where does the truth lie? In my opinion, it doesn’t matter. The driver for using the public cloud is not a 10 percent or even a 90 percent cost improvement. It’s about something more important. Formerly I was the general manager for a line of business at Cisco, a company that prides itself on having a world-class IT organization that sets benchmarks for efficiency. Because I was the head of a BU, my customers were asking me to take products we delivered as appliances and turn them into cloud services. This request shouldn’t have been too hard: we had a very well-run IT team, and these were software products with no custom hardware, so all we needed was a big data center. But the process of launching these services was painful, because our rock-star IT team had their hands tied by infrastructure limits and had to make new infrastructure appear out of thin air. I needed a forecast of customer count and type, large versus small. Based on this forecast, I needed to commit a large amount of CAPEX up front, and it took rounds and rounds of executive review. Once we finally got the green light, we needed to get in line and wait for the “new data center buildout” somewhere in the heart of Texas. The problem: my business stood still while my team and I were waiting for bulldozers in Texas to turn over a cow field and build a data center. If I could have simply deployed our software on the public cloud, knowing that it was as secure as or more secure than when running on-prem, and never needed a forecast, I would have asked, “Where do I sign?”
This example illustrates a desire that has been pent up in the enterprise for decades. The business needs agility: the freedom to deploy new services as soon as they are ready, without being bogged down in forecasts that are almost certainly wrong.
In a highly competitive market, great companies are defined as much by what they don’t do as by what they do. I once heard an IT executive from a Fortune 100 bank say, “I have no doubt we can build a massive scale-out white-box OpenStack cloud infrastructure to run Exchange. The question is why would we do that?”
Why would we? In this new world, why are there still bulldozers turning up cow pastures for new corporate data centers? Perhaps it’s just because old habits die hard.