When customers want/need private/hybrid/shared cloud and REIT datacenters over the public cloud

This is an English version of my article here, adapted to be less focused on the Brazilian audience.

It may seem a little odd to many IT industry observers that, despite all the exponential growth of the public cloud segment (AWS, Azure, Google, Oracle, etc.) and several industry analysts advising that "the public cloud will kill private IT in x years", we are still seeing fast adoption of private IT tools. This is particularly visible in segments like OpenStack (software to orchestrate private clouds, pushed by companies like Red Hat, Ubuntu, Cisco, Mirantis, Intel, etc.), Eucalyptus (software to orchestrate private clouds, pushed by HPE), Microsoft Cloud OS, Apache Mesos and Kubernetes(*) (software to orchestrate clusters/datacenters, pushed by Apache, Mesosphere and Google), containers(*) (the path to serverless, portable architecture, like LXC and Docker) and even REIT datacenters (Equinix, Digital Realty, CyrusOne, etc.). Don't take my word for it; you can Google all the information above.

(*) Although containers and Mesosphere's DC/OS can run on public clouds, there are many private deployments of both technologies.

So what's going on? Are all those industry analysts wrong? In fact, the IT industry is creating the future market, but the customer is still king.

Let me establish two concepts, very briefly summarized, because there are already several definitions on the Internet.

PRIVATE CLOUD

 

It's a combination of hypervisor/container software, orchestration software and hardware, hosted on-premises or off-premises in a commercial datacenter like Equinix or Digital Realty, dedicated to a single organization.

Since the private platform runs only known workloads, it offers cost and performance predictability, and it completely avoids common public cloud issues like the "noisy neighbor" effect.

Additionally, some datacenters and CSPs (Cloud Service Providers) offer a "shared cloud" service, where the workloads are still known and controlled but both the hardware and hypervisor layers are shared among more than one organization. This setup is cheaper (much cheaper!) than a traditional private cloud setup while remaining predictable, stable and robust.

HYBRID CLOUD

 

It's when a project uses a private cloud platform and the public cloud at the same time to run different workloads. The two cloud environments communicate with each other using any kind of connectivity, like a VPN over the Internet or, better, a direct connection like Azure ExpressRoute or AWS Direct Connect.

There are two main reasons to use a hybrid cloud setup. The first one is to start migrating a project from private to public, giving people time to adapt the systems, and even themselves, to the new reality. It helps break paradigms while teaching the staff the ropes of the public cloud.

But far more interesting is the second main reason, where the project consumes the best of the two architectures. By mixing both environments, users can enjoy the better pricing, performance, predictability, compliance and latency of the private cloud and still consume new public cloud services, like pay-per-use, autoscaling, machine learning, vision and voice APIs, CDN, cheap cloud storage, etc.

Rather than searching for a "silver bullet", users are learning today to choose the best platform (or platforms) their project needs.

CHOOSING THE BEST WORKLOAD PLACEMENT

 

Among all the ways to start deciding what should go public and what should stay private, one important starting point is to understand whether the project is a PET or CATTLE.

PET ARCHITECTURE

 

We call the traditional IT architecture a "PET" because it's something we have to take care of. Good examples are static SQL database servers, traditional block storage platforms and all statically developed systems like ERPs, e-learning and enterprise systems. PETs usually require high performance and low latency, as they can't autoscale by themselves, and they also require replication and high availability, as they cannot recover from platform failures (like server hangs, database crashes, outages, etc.). Almost all IT systems created before the cloud are PET-style and won't be rewritten to cloud style soon. Even TODAY a lot of systems are being created as PETs because of the lack of knowledge in cloud development.

PET systems are, in general, a great opportunity for private/hybrid cloud providers and REIT datacenters. PET users usually want to run their systems with fixed pricing and known stability, performance and SLAs.

CATTLE ARCHITECTURE

 

On the other side, the public cloud brought immediate provisioning of pay-per-use servers and services, which created the concept of the "CATTLE architecture". Rather than taking care of a problematic server, just kill it and spin up another one. It's possible to do the same with containers when using microservices-style systems. The catch: the system has to be tailor-made to support this new possibility. It has to be built using a "stateless approach", without SPOFs (single points of failure). One of AWS's biggest cases of stateless programming is the NETFLIX platform. NETFLIX even created a project (called "Chaos Monkey"; it's worth a visit) to randomly kill its servers in an effort to test the resiliency of the platform.
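The cattle idea above can be sketched in a few lines of Python. This is a toy simulation under stated assumptions, not Netflix's actual Chaos Monkey: the fleet is just a set of made-up instance IDs, and "killing" and "replacing" a server are plain set operations standing in for real orchestration calls.

```python
import random

def chaos_monkey(fleet, rng):
    """Terminate one random instance; a cattle-style fleet just replaces it."""
    victim = rng.choice(sorted(fleet))             # pick a random "server"
    fleet.discard(victim)                          # kill it without warning
    replacement = f"i-{rng.randrange(16**8):08x}"  # hypothetical fresh instance id
    fleet.add(replacement)                         # spin up another one; no mourning
    return victim, replacement

fleet = {"i-aaaa0001", "i-aaaa0002", "i-aaaa0003"}
victim, replacement = chaos_monkey(fleet, random.Random(0))
print(len(fleet))  # fleet size is back to normal: cattle, not pets
```

The point of running this continuously in production, as Netflix does, is that any hidden dependency on one specific server (a PET hiding in the herd) surfaces as a failure long before a real outage does.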

Another great thing that comes with this "fast and easy" creation of new servers is that stateless systems can now be "horizontally scaled" (adding more servers instead of growing RAM, CPU, bandwidth, etc.), something impossible to do in the physical world. Again, the system has to be built from scratch to support this (e.g. Microsoft SQL Server databases don't scale this way).
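A minimal sketch of what "stateless" buys you, using hypothetical names: session state lives in a shared external store rather than in any one server's memory, so a load balancer can route each request to any server, which is the precondition for horizontal scaling.

```python
# SHARED_STORE stands in for an external store like Redis or DynamoDB;
# a real deployment would use one of those, not a module-level dict.
SHARED_STORE = {}

def handle_request(server_id, session_id, item):
    """Any server can serve any session, because state lives outside the server."""
    cart = SHARED_STORE.setdefault(session_id, [])
    cart.append(item)
    return {"served_by": server_id, "cart": list(cart)}

# Two different "servers" handle the same user session interchangeably:
first = handle_request("server-1", "sess-42", "book")
second = handle_request("server-2", "sess-42", "lamp")
print(second["cart"])  # the cart survives the server switch
```

If the cart were kept in a per-server variable instead, killing or adding servers would lose or split user state, and the system would be a PET no matter how it was deployed.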

Notice that it is hard to design a truly stateless system. Even NETFLIX got hurt when AWS crashed in 2011. In fact, traditional IT has been around for decades, and software runs the same way on Dell, IBM or HP servers, while cloud computing providers still fight for dominance with very different sets of APIs, services, models and behaviors. The new challenge of cloud software is how to stay cloud-neutral while using the best services available.

Note: it's true that users can host a CATTLE system on a private cloud platform, but they won't get the benefits of "unlimited" scale and pay-per-use (e.g. shutting servers off won't cut the bills).

SHORT SUMMARY OF PLACEMENT

 

Users should go with ... 

... a private/shared cloud and/or REIT datacenters if they want/need fixed costs, predictable performance, low latency, compliance, known behavior, long-running servers (up 24x7x365), or to host legacy or new systems built with traditional IT architecture that won't be rewritten. Aware of this niche, all public clouds have their own "private cloud" platforms available, but these are still paid per use, which brings cost unpredictability.

... the public cloud if they want/need variable costs, horizontal scalability (adding capacity by adding more servers), the ability to deal with spikes, pay-per-use (e.g. to shut down servers during weekends to save money, or to pay extra for instant additional capacity), specific services (cloud databases, cloud storage, machine learning, etc.) or to host short-lived servers (e.g. a test platform).

... a hybrid cloud setup if they want/need to connect their existing private platform to a public cloud to take advantage of both platforms. For example, an e-commerce system where the front end is hosted on some public cloud (taking advantage of autoscaling) but the storage is hosted within a REIT datacenter to keep the data private due to compliance requirements. Or some legacy ERP system that needs voice/vision capabilities but where it doesn't make sense to rewrite the whole system. Or a transient test platform hosted on a public cloud while the production system runs on a private cloud.

Of course, there's a gray area of personal taste, vision, knowledge and experience. Users will choose what they are more comfortable with, although in general hosting long-term static systems on public clouds is more expensive and requires more performance fine-tuning than on a private cloud platform.

WHAT ABOUT THE FUTURE ?

"We know from chaos theory that even if you had a perfect model of the world, you'd need infinite precision in order to predict future events." (Nassim Nicholas Taleb)
Personal note: We don't have even a perfect model of the world... (Pina)

A new generation of developers has been learning cloud programming, microservices and SDI (software-defined infrastructure) for a while, well prepared for CATTLE architecture development. There is fast evolution in several fields like containers, clusters, PaaS, serverless computing, wearables, IoT, big data, etc. We're seeing new platforms empowering developers who don't want to care about "servers" or "infrastructure" anymore. Industry analysts say that the future of IT will be made of several services running on the public cloud, talking to each other through APIs, and that the "Ops" part of "DevOps" will eventually die.

I think it might be too early to predict that. The public clouds themselves will evolve and change. It's possible that the traction of wearables, VR, IoT and big data will create so many new needs that they will change the IT industry again, just like when Nokia was concerned with Motorola's competition and both were smashed by Apple. And speaking of competition, more than a decade ago Bill Gates used to say he was more worried about "2 guys in a garage".

In my opinion, it's not obvious that companies will immediately rewrite their legacy systems that are running well, especially when their end users cannot tell the difference between the "old" system and the "new" one. Humans tend not to mess with something that's working well, and that decision can last for years. There are even brand new systems being migrated to private cloud platforms in the quest for better pricing, predictability, compliance, management, low latency, etc. This movement (or lack of it) opens a fantastic opportunity for private cloud providers and REIT datacenters.

The user is still king, and a very large number of them are waiting for the next wave before recoding, no matter what the IT industry analysts say. Thankfully, the IT landscape is so big that there are plenty of opportunities for both the public and the private cloud as they are today.

Antonio Carlos Pina
