Moving to autonomous and self-migrating containers for cloud applications.
David Linthicum
The trouble with existing approaches to cloud computing, including leveraging IaaS and PaaS, is that they have a tendency to come with platform lock-in. Once you've ported an application to a cloud-based platform, including Google, AWS, IBM, and Microsoft, it's tough, risky, and expensive to move that application from one cloud to another.
This is not by design. The market moved so quickly that public and private cloud providers were unable to build portability into their platforms and still keep pace with demand. There's also the fact that portability is not in the best interests of cloud providers.
Enter new approaches based upon old approaches, namely containers, and thus Docker and container cluster managers such as Google's Kubernetes, as well as hundreds of upstarts. The promise is to provide a common abstraction layer that allows applications to be localized within the container, and then ported to other public and private cloud providers that support the container standard.
Containers are also much more efficient for creating workload bundles that are transportable from cloud to cloud. In many cases, virtualization is too cumbersome for workload migration. Thus, containers provide a real foundation for moving workloads around hybrid clouds and multiclouds, without having to alter much, if any, of the application.
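To make the "transportable workload bundle" idea concrete, here is a minimal sketch using the Docker SDK for Python. The image name and host endpoints are hypothetical placeholders, and TLS setup for the remote daemons is omitted; the point is simply that the same client code and the same image run against a container daemon on any cloud.

```python
# Minimal sketch using the Docker SDK for Python (docker-py).
# "inventory-app:1.0" and the host URLs are hypothetical placeholders;
# TLS configuration for the remote daemons is omitted for brevity.
import docker

IMAGE = "inventory-app:1.0"

# The same client code can target a Docker daemon on any cloud or on-premises host.
aws_host = docker.DockerClient(base_url="tcp://aws-host.example.com:2376")
gcp_host = docker.DockerClient(base_url="tcp://gcp-host.example.com:2376")

def run_workload(host: docker.DockerClient):
    """Pull the identical image and start it; nothing in the bundle changes per cloud."""
    host.images.pull(IMAGE)
    return host.containers.run(IMAGE, detach=True, ports={"8080/tcp": 8080})

# Only the target endpoint differs; the application bundle itself is untouched.
container = run_workload(aws_host)
```

Contrast that with moving a virtual machine image, where hypervisor formats and per-cloud drivers typically get in the way of this kind of like-for-like portability.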
More specifically, containers provide these advantages:
· Reduced complexity through container abstractions.
· The ability to use automation with containers to maximize their portability.
· Better security and governance from placing services around, rather than inside, containers.
· Better distributed computing capabilities, because an application can be divided into many separate domains, all residing within containers.
· The ability to provide automation services that offer policy-based optimization and self-configuration.
Containers provide something we've been trying to achieve for years: a standard application architecture that offers both managed distribution and service orientation.
Most compelling right now is the portability advantage of containers. However, I suspect we'll discover more value over time. In fact, I suspect that containers will become a part of most IT shops, whether they are moving to the cloud or not.[1]
Defining a new value for containers
“Containers are predicated on the goal of deploying and managing n-tier application designs. By their nature, containers manage n-tier application components, e.g. database servers, application servers, web servers, etc., at the operating system level. Indeed, portability is inherent because all operating system and application configuration dependencies are packaged and delivered inside a container to any other operating system platform. Containers are preferable to virtual machines here because they share compute platform resources very well whereas virtual machine platforms tend to acquire and hold resources on a machine-by-machine basis.”[2]
The essence of the assertions I make in this article is that containers can move from cloud to cloud, and system to system, and that this process can be automated. In other words, the ability not only to leverage containers, but to have them automatically "live migrate" from cloud to cloud as needed to support the requirements of the application.
At the center of this container evolution is a cloud orchestration layer that can provision the infrastructure required to support the containers, as well as perform the live migration of the containers, including monitoring their health after the migration occurs (see Figure).
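A rough sketch of that orchestration layer's core loop might look like the following. Everything here, including ProviderClient and its provision(), start(), and health() methods, is a hypothetical placeholder rather than a real SDK; it simply shows the provision, migrate, and monitor steps in order.

```python
# Hypothetical sketch of the orchestration layer's provision/migrate/monitor loop.
# ProviderClient and its methods are placeholders, not a real cloud SDK.
import time

class ProviderClient:
    """Stand-in for a cloud provider API (AWS, Google, and so on)."""
    def __init__(self, name: str):
        self.name = name
    def provision(self, spec: dict) -> None:
        print(f"provisioning on {self.name}: {spec}")   # create hosts/networks for the containers
    def start(self, image: str) -> str:
        print(f"starting {image} on {self.name}")
        return f"{self.name}-container-1"                # return a container id
    def health(self, container_id: str) -> bool:
        return True                                      # pretend the migrated workload is healthy

def live_migrate(image: str, source: ProviderClient, target: ProviderClient, spec: dict) -> str:
    target.provision(spec)                  # 1. stand up infrastructure on the target cloud
    new_id = target.start(image)            # 2. start the container there
    for _ in range(5):                      # 3. watch the migrated workload before retiring the source copy
        if not target.health(new_id):
            raise RuntimeError(f"post-migration health check failed on {target.name}")
        time.sleep(1)
    print(f"retiring workload on {source.name}")
    return new_id

live_migrate("inventory-app:1.0", ProviderClient("aws"), ProviderClient("google"),
             {"cpus": 4, "memory_gb": 16})
```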
While the use of containers is nothing new, and certainly predates Docker, the concepts of auto-provisioning and auto-migration are often promoted but remain elusive in practice. These concepts have a few basic features and advantages, including:
· The ability to reduce complexity by leveraging container abstractions. The containers remove the dependencies on the underlying infrastructure services, which reduces the complexity of dealing with those platforms. They are truly small platforms that support an application, or an application's services, inside a very well-defined domain: the container.
· The ability to leverage automation with containers to maximize their portability, and thus their value. Through the use of automation, we script things we could also do manually, such as migrating containers from one cloud to another, or reconfiguring communications between the containers, such as tiered services or data service access. However, today it's much harder to guarantee portability and the behavior of applications when using automation. Indeed, automation often relies upon many external dependencies that can break at any time. Automation remains a problem that we need to solve, but it is indeed solvable.
· The ability to provide better security and governance services by placing those services around, rather than within, containers. In many instances, security and governance services are platform-specific, not application-specific. The ability to place security and governance services outside of the application domain provides better portability, and less complexity during implementation and operations.
· The ability to provide better distributed computing capabilities, considering that an application can be divided into many different domains, all residing within containers. These containers can be run on any number of different cloud platforms, including those that provide the most cost and performance efficiencies, so applications can be distributed and optimized as to their utilization of the platform from within the container. For example, an I/O-intensive portion of the application can be placed on a bare-metal cloud that provides the best performance, a compute-intensive portion can run on a public cloud that provides the proper scaling and load balancing, and perhaps a portion can even run on traditional hardware and software. They all work together to form the application, which has been separated into components that can each be optimized.
· The ability to provide automation services that offer policy-based optimization and self-configuration. None of this works without an automation layer that can "auto-magically" find the best place to run the container, as well as deal with changes in configuration and other specifics of the cloud platforms where the containers reside (a minimal sketch of such a placement decision follows this list).
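As promised above, here is a minimal sketch of what a policy-based placement decision could look like. The Candidate fields, the numbers, and choose_placement() are illustrative assumptions for the sake of the example, not any real orchestration API.

```python
# Illustrative policy-based placement; the candidates, numbers, and choose_placement()
# are assumptions for this sketch, not a real orchestration API.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    provider: str
    monthly_cost: float      # projected cost for this workload
    p99_latency_ms: float    # observed or benchmarked performance
    meets_policy: bool       # governance checks such as data residency

def choose_placement(candidates: List[Candidate], max_latency_ms: float = 50.0) -> Optional[Candidate]:
    """Among candidates that satisfy policy and performance limits, pick the cheapest."""
    eligible = [c for c in candidates
                if c.meets_policy and c.p99_latency_ms <= max_latency_ms]
    return min(eligible, key=lambda c: c.monthly_cost) if eligible else None

candidates = [
    Candidate("aws",        monthly_cost=100_000, p99_latency_ms=35, meets_policy=True),
    Candidate("google",     monthly_cost=50_000,  p99_latency_ms=40, meets_policy=True),
    Candidate("bare-metal", monthly_cost=80_000,  p99_latency_ms=12, meets_policy=False),
]
print(choose_placement(candidates).provider)  # -> "google"
```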
However, we have learned that n-tier applications have inherent limitations. “They are designed to scale up with very little focus paid on scaling down and no attention paid to scaling out or in. They typically are rife with single points of failure and tend to manage their own state via the use of cluster-style computing. Each tier of the n-tiered architecture must be scaled independently of the other tiers.”[3]
Also, keep in mind that it is not always true that the required automation/orchestration will itself be portable. Indeed, that's likely the new "lock-in" layer: once you've built out the operational side, how easy is it to migrate from cloud to cloud? We think it's true that portability of container clustering and orchestration is going to quickly become the bottleneck.[4]
Making the business case
The problem with technical assertions is that they need to define a business benefit in order for them to be accepted by the industry as a best practice. The technical benefits I've defined above need to be translated into direct business benefits that provide a quick return on investment.
These business benefits will include:
The ability to automatically find the least-cost cloud provider. Part of the benefit of moving from cloud to cloud is that you can leverage this portability to find the least-cost provider. Assuming that most things are equal, the applications that exist within a set of containers can live-migrate over to a cloud that offers price advantages for similar types of cloud services, such as storage.
For example, an inventory control application that exists within a dozen or so containers may have some storage-intensive components that cost $100K a month on AWS, but only $50K a month on Google for the same types of resources. Understanding this configuration possibility within the orchestration layer, the containers can auto-migrate/live-migrate over to the new cloud, where there is a 50 percent savings. If Google raises prices and AWS lowers prices, the reverse could occur.
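Here is how that trigger might look as a few lines of code, using the numbers from the example above. The price quotes and trigger_migration() are placeholders standing in for a real cost feed and the orchestration layer's migration call.

```python
# Sketch of the cost-based migration trigger from the example above.
# The price quotes and trigger_migration() are placeholders, not real APIs.
def trigger_migration(workload: str, target: str) -> None:
    """Stand-in for the orchestration layer's live-migration call."""
    print(f"live-migrating {workload} containers to {target}")

current_provider = "aws"
monthly_storage_quotes = {"aws": 100_000, "google": 50_000}   # dollars per month

SAVINGS_THRESHOLD = 0.20   # only move when savings clearly outweigh the migration effort

cheapest = min(monthly_storage_quotes, key=monthly_storage_quotes.get)
savings = 1 - monthly_storage_quotes[cheapest] / monthly_storage_quotes[current_provider]

if cheapest != current_provider and savings >= SAVINGS_THRESHOLD:
    trigger_migration("inventory-control", cheapest)   # 50 percent savings here, so the containers move
```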
The ability to support better reliability. We've all done business cases around uptime and downtime. In some instances, businesses can lose as much as $1 million an hour when systems are not operating. Even if the performance issue lasts for only an hour or two, the lost productivity can push costs well into thousands of dollars per minute.
This architecture can avoid outages and related performance issues by opening up other cloud platforms to which the container workloads can relocate if issues occur on the primary cloud. For example, if AWS suffers an outage, the containers can be relocated to Google in a matter of minutes, where they can operate once again until the problem is resolved. You may also choose to run redundant versions of the containers on both clouds, supporting an active/active type of recovery platform.
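A stripped-down version of that failover watch could look like the sketch below. Here probe_health() and relocate() stand in for real monitoring and the orchestrator's migration call; an active/active setup would simply keep both sides running instead of relocating on failure.

```python
# Hypothetical failover watch for the primary/secondary scenario described above.
# probe_health() and relocate() are placeholders for real monitoring and orchestration.
import time

PRIMARY, SECONDARY = "aws", "google"

def probe_health(provider: str) -> bool:
    """Placeholder health probe, e.g. a synthetic transaction against the app."""
    return provider == PRIMARY   # pretend the primary is currently healthy

def relocate(workload: str, target: str) -> None:
    """Placeholder for the container relocation performed by the orchestrator."""
    print(f"relocating {workload} containers to {target}")

def watch(workload: str, failures_allowed: int = 3, poll_seconds: int = 60) -> None:
    failures = 0
    while True:
        if probe_health(PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures > failures_allowed:    # a sustained outage, not a momentary blip
                relocate(workload, SECONDARY)  # minutes of disruption rather than hours
                return
        time.sleep(poll_seconds)
```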
Facing realities
Containers may sound like distributed application Nirvana. They certainly offer a better way to utilize emerging cloud-based platforms. However, there are many roadblocks in front of us, and there is a lot of work to get done.
We need to consider the fact that today's automation and orchestration technology can't yet provide this type of automation. While it can certainly manage machine instances, and even containers, using basic policy and scripting approaches, automatically moving containers from cloud to cloud using policy-driven automation, including auto-configuration and auto-localization, is really not there yet.
Also, we've only just begun our Docker container journey, and we still have a lot to learn about the potential of this technology, as well as its limitations. Judging from the use of containers and distributed objects years ago, the only way this technology will provide real value is through coordination among the cloud providers that support containers. While having a standard here is a great thing, history shows that vendors and providers have a tendency to march off in their own proprietary directions for the sake of market share. If that occurs, all is lost.
The final issue is that of complexity. It only seems like we're making things less complex. Over time, the use of containers as the means of platform abstraction will result in applications that morph toward architectures that are much more complex and distributed. Moving forward, it may not be unusual to find applications that exist within hundreds of containers, running on dozens of different models and brands of cloud computing. The more complex these things become, the more vulnerable they are to operational issues.
All things considered, containers may be a much better approach to building applications in the cloud. PaaS and IaaS clouds will still provide the platform foundations, and even development capabilities. But these things will likely commoditize over time, moving from true platforms to good container hosts. It will be interesting to see if the larger providers want to take on that role. Considering provider interest in Docker, that indeed may be their direction.
The core question now: If this is the destination of this technology, and of application hosting on cloud-based platforms, should I redirect resources toward this new vision? I suspect that most enterprises already have their hands full with the great cloud migration. However, as we get better at cloud application architectures, using approaches that better account for both automation and portability, we'll eventually land on containers.
[1] https://www.infoworld.com/article/3032164/cloud-computing/fad-no-containers-are-here-to-stay.html
[2] https://containerjournal.com/2015/06/05/containers-are-designed-for-an-antiquated-application-architecture/
[3] https://containerjournal.com/2015/06/05/containers-are-designed-for-an-antiquated-application-architecture/
[4] Lori MacVittie, F5.com