7 Mindset Shifts for Successful Cloud Migration
TNG Technology Consulting
We solve hard IT problems: Agile Software Development, Artificial Intelligence, DevOps & Cloud!
Imagine a company and its central applications: an Oracle database in the backend. A nice monolith, maybe Java, maybe C#, or maybe even PL/1 or COBOL, which has sprawled over time, hosted on a VM, or maybe even a mainframe? And some frontend: Tomcat, ancient JavaScript (granted, that could be anything older than two years). Maybe there is some CI, but there is certainly no CD. The teams around the central applications probably don't even own the database server, because the license is too expensive and is shared by various teams. And here comes management, setting a new task: we'll go to the cloud! Everything will be better!
In this position, whether you are an engineer or a manager, what can you do to help succeed?
Migrating old applications to the cloud: prerequisites
The first and most important question is "why migrate?". This will also set the most important immediate goals.
Most reasons I have encountered are invalid, such as "Upper management told us to" or "Because it's fancy". In contrast, valid goals include:
The goal "all of the above" is asking too much.
Other than the first, any one of these goals could involve lots of changes and considerable effort, so all of them put together will just lead to frustration and endless projects. Yes, it can be the end goal, but it shouldn't be the first goal. It is also fair to point out that for any application, any one of the goals above may not be achievable. The cloud is not a free solution for everything.
Migrating old applications to the cloud: Mindset
But, you say, wasn't I promised more? Well, a monolithic (or even microservicey) application that has sprawled for years and is not containerized won't gain many advantages from being shifted to the cloud. Its architecture is suited to the way the application is currently deployed and operated, and that might be very different from the cloud. A classic "lift and shift" will probably make most of the problems that the application stack has worse. Changing architecture is hard and it takes time. And in changing the architecture, the most important thing is that the mindset also needs to change.
So what's the difference? We all love our agile manifesto, so let's try this again:
These items fulfill three different purposes:
I believe that if you understand this, your migration can be a success - but you'll also understand that it is rarely simple.
Those last three points encapsulate most of the valid migration reasons I mentioned earlier. The only goals left are saving money and improving availability. The latter will be achieved by some form of redundancy and better deployments; the former is uniquely specific to your software, your architecture, and the cloud providers you choose.
Why does this mean "changing your mindset"? What I mean is that if you really follow these principles, the developers will have to change as much as your whole organization. It's all about reducing time-consuming interdependence and enabling developers to fix issues themselves and deliver features fast, while keeping at least the same safety and security standards as before. Let's have a deeper look:
Cross-functional teams over split into Dev and Ops
Cross-functional teams are already common in agile transformations. However, adding Ops still seems daunting to many. I've heard many developers say, "Oh, so that means I need to learn all that stuff, too?". Yes, but most of it is already done by the cloud provider! Operating a modern cloud system isn't much more difficult than writing full-stack software. That was not true in the past:
So in principle, you don't have to learn that much more. There is still a classical "Ops" team - it's the cloud provider. It's just that the boundary between the teams has moved and the abstractions are now "right" in a way that they require less communication between the teams.
Here is the mindset shift: learning cloud "DevOps" tooling is a bit like learning library X, a build system, or even some intricacies of the industry you are programming for: it's part of your job, and it's no more difficult than any other part of your work. And besides, "cross-functional team" doesn't necessarily mean that everybody needs to understand every technology at the same level. It just means that the team as a whole has enough expertise with every technology it needs.
So if you want to migrate to the cloud and if you want to actually gain speed, you need to invest in DevOps. Train your teams, give them tools, and allow them to make mistakes.
More granular resources over making one resource do lots of things
There is a tendency in larger organizations to have dedicated teams that provide infrastructure to everybody else. This is often the result of the experience that developer teams start out using a tool but don't have the time to properly maintain it (I'm looking at you, Jenkins servers!), which means the tooling is often broken. So, why not just have a central team provide the software for everybody? Often, the approach works well, but it also creates a fair amount of friction: budget concerns and multi-tenancy configuration overhead can slow teams down considerably.
Often, organizations try to migrate this "platform" thinking to the cloud: I've heard about organizations deploying a central key store and thinking about deploying some IAM solution to restrict access to the key store in the cloud. But that's not going to help you achieve speed and better deployments!
Here is the mindset shift: the cloud provider should handle that! They know how to do IAM for their resources, and it's very simple to deploy dozens of instances of the same service. Therefore, in most cases the right solution is:
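To make "more granular resources" concrete, here is a minimal Python sketch that generates one isolated, provider-managed key store per team instead of squeezing every team into a single shared, multi-tenant instance. All names and the resource schema are hypothetical; the resulting dicts stand in for whatever your provisioning tool actually consumes.

```python
# Sketch: one dedicated key store per team, owned and accessed only by
# that team. The schema is invented for illustration.

TEAMS = ["checkout", "billing", "search"]

def key_store_for(team: str) -> dict:
    """Describe a dedicated key store with a minimal access policy."""
    return {
        "name": f"{team}-keystore",
        "owner_team": team,
        # Access policy: only the owning team's role may read or write.
        "allowed_roles": [f"role/{team}-developers"],
    }

definitions = [key_store_for(t) for t in TEAMS]
for d in definitions:
    print(d["name"], "->", d["allowed_roles"])
```

The point of the sketch: because the definitions are generated, a dozen isolated services cost no more effort than one shared one, and the provider's IAM does the tenant separation for you.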
Auditing of Infrastructure over upfront application processes
I've had the pleasure of devising an architecture and then applying to IT security for permission to implement it in older organizations. It's never quick, it's rarely painless, your architecture must be approved beforehand and is forgotten thereafter. The major cloud providers have put a lot of thought into security, and although their own security may still be lacking at times, they have developed a lot of tools to sell you that help you go fast and NOT break security:
For most organizations, this is a mindset shift: instead of sharing services and depending on upfront processes, IT and IT security share knowledge and provide guidance so that each team can generally move on its own and in the direction it needs to, reducing interdependence and friction.
Enforcing security over upfront security review
After reading the last item, you may feel a little insecure: how can I catch mistakes? Developers often don't have a lot of knowledge about the organization's security architecture, and of course you shouldn't let everyone go wild when IT security is at stake. So let's focus on how to maintain your current level of security while letting teams handle most of their day-to-day tooling and operations themselves. The key is to enforce security:
In an on-premises environment with partly manual deployment, much of this is impossible or prohibitively expensive, but in the cloud it is much cheaper. This is a mindset shift because you improve security before going to production and enforce your rules with tools instead of documentation. Since it's automated, it's even faster and less error-prone.
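"Enforcing security with tools" can be as simple as a policy check that runs in CI against your parsed infrastructure definitions and fails the pipeline on a violation. Here is a minimal Python sketch; the resource schema is invented for illustration, whereas real policy-as-code tools such as Open Policy Agent work on your actual plan output.

```python
# Sketch of a policy-as-code check: block deployment if any resource
# violates a rule. The resource fields are made up for this example.

def find_violations(resources: list[dict]) -> list[str]:
    violations = []
    for r in resources:
        if r.get("type") == "storage_bucket" and r.get("public_read"):
            violations.append(f"{r['name']}: public read access is forbidden")
        if r.get("type") == "database" and not r.get("encrypted_at_rest"):
            violations.append(f"{r['name']}: encryption at rest is required")
    return violations

plan = [
    {"type": "storage_bucket", "name": "logs", "public_read": True},
    {"type": "database", "name": "orders", "encrypted_at_rest": True},
]
problems = find_violations(plan)
print(problems)  # the public bucket is flagged; the encrypted database passes
```

In CI, a non-empty result would simply fail the build, so the rule is enforced automatically on every deployment instead of being checked once in an upfront review.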
Small zero trust networks over one company network to rule them all
Many companies still have most of their software within one gigantic network so that - in principle - everything can talk to everything. This means it is very hard to secure the boundary of that network.
Separating networks logically within a company wasn't easy in the past, and they were often segregated physically: one office, one network, and any reconfiguration required you to be directly connected to your networking devices. With modern software-defined networks, it is easier to segment your networks logically, not physically.
Zero trust means that you need to verify the authenticity and authority of the caller everywhere and grant only appropriate and minimal rights. This was hard with old basic authentication and per-application passwords, but it is no issue with current single sign-on solutions.
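The "verify everywhere" idea can be sketched in a few lines of Python. This is a toy: HMAC-signed claim strings stand in for a real SSO/OIDC token flow, and the key handling is deliberately simplified.

```python
# Sketch: every service verifies the caller itself instead of trusting
# the network. Authenticity = valid signature; authority = required role.
import hashlib
import hmac

SECRET = b"shared-signing-key"  # in reality: fetched from a key store

def sign(claims: str) -> str:
    return hmac.new(SECRET, claims.encode(), hashlib.sha256).hexdigest()

def verify(claims: str, signature: str, required_role: str) -> bool:
    # 1. Authenticity: does the signature match the claims?
    if not hmac.compare_digest(sign(claims), signature):
        return False
    # 2. Authority: does the caller hold the minimal required right?
    return required_role in claims.split(",")

token_claims = "user:alice,role:billing-read"
token_sig = sign(token_claims)
print(verify(token_claims, token_sig, "role:billing-read"))   # True
print(verify(token_claims, token_sig, "role:billing-write"))  # False
```

Note that the check runs inside the called service on every request; there is no "trusted" network zone where it may be skipped.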
Communicating across network boundaries requires encrypted transport. This was hard, and it is still easy to mess up certificate renewal, but it is much easier to automate nowadays with tools like Let's Encrypt and cert-manager in a Kubernetes cluster.
To simplify thinking about necessary network topologies, here is a mindset shift: since the architecture of your applications follows the topology of your organization anyway, why not let the boundaries of your networks also follow the boundaries of your teams or products? Sure, if you change your structure often, this will require some redesign, but as all networks are software-defined and documented as code, it won't be much of a problem! Since every team can provision its own tools most of the time, this network segmentation will not pose a problem: your team's tools live within your team's network.
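"Networks follow teams" can itself be expressed as code. A minimal Python sketch, using the standard library's `ipaddress` module; the parent range, team names, and the choice of a /24 per team are arbitrary assumptions.

```python
# Sketch: carve one small network per team out of a parent range, so
# network boundaries follow team boundaries. Sizes are arbitrary here.
import ipaddress

PARENT = ipaddress.ip_network("10.0.0.0/16")
TEAMS = ["checkout", "billing", "search"]

# Allocate one /24 subnet from the parent range to each team, in order.
subnets = dict(zip(TEAMS, PARENT.subnets(new_prefix=24)))

for team, net in subnets.items():
    print(f"{team}: {net}")
```

Because the mapping is generated from the team list, reorganizations become a code change plus a redeploy rather than a physical re-cabling exercise.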
Everything as Code over application code and manual deployment
Deploying a monolith is often easy: you build your code, copy your application to some server, and then just press "start". Deploying a bunch of microservices? Not so much.
Splitting up your networks, adding infrastructure to your developer teams, adding security measures to the infrastructure, splitting up your applications to be "serverless" and save costs: all of that means the landscape of your deployment will become more complex, often vastly so. As teams get faster, more errors happen. This means that the complexity and risk of a cloud system is often much higher than that of a standard monolithic architecture. In your old monolith, that complexity is completely hidden inside your code. Since you have good automated tests for it (you have, right?), you know that you can manage that complexity.
So the first step in handling the complexity is to actually write code for your architecture: Infrastructure as Code.
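In spirit, Infrastructure as Code means your architecture becomes a declarative artifact that tooling reconciles against reality. Here is a toy Python sketch of that reconciliation idea; the "cloud" is just a pair of dicts, and the service names are invented.

```python
# Toy sketch of the IaC idea: desired state lives in source control as
# data, and a reconciler computes what to create, delete, or update.

desired = {"web": {"replicas": 3}, "worker": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "legacy-batch": {"replicas": 1}}

to_create = {k: v for k, v in desired.items() if k not in actual}
to_delete = [k for k in actual if k not in desired]
to_update = {k: v for k, v in desired.items()
             if k in actual and actual[k] != v}

print("create:", to_create)   # worker is missing
print("delete:", to_delete)   # legacy-batch is no longer declared
print("update:", to_update)   # web: replicas 2 -> 3
```

Real tools like Terraform or Kubernetes controllers do exactly this diff-and-apply loop, just against real APIs instead of dicts.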
And while you're at it, automate the deployment, because complex deployments are error-prone. Automated deployments also mean that you can deploy much faster, much more often, and - with the right architectural choices in the application - more and more seamlessly.
And while you're at it, why not also add your configuration to source control, and then, why not add (nearly) EVERYTHING ELSE? Please keep your secrets out of source control, though...
Treating just about everything you type into your computer as "code" is certainly a mindset shift for most developers, but actually, this is just the first step:
Immutable, idempotent deployments over long lived infrastructure
To fully understand the depth of this concept, let me give you the whole history of programming languages in three bullet points:
Actually, we want to do the same with deployments:
And all major cloud providers have invested a lot of time in helping you achieve immutable infrastructure and idempotent deployments. There is so much more convenience around deploying containers than around deploying and maintaining VMs, for instance. Embrace it!
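Idempotency is easy to demonstrate: applying the same deployment twice must leave the system in exactly the same state as applying it once. A minimal Python sketch of that property; `apply` and the state dicts are illustrative, not any real API.

```python
# Minimal sketch of an idempotent, immutable "apply": the function
# converges the system to the desired state regardless of what was there
# before, so running it a second time changes nothing.

def apply(state: dict, desired: dict) -> dict:
    """Return a fresh state matching `desired` (replace, don't patch)."""
    return dict(desired)

desired = {"image": "app:v2", "replicas": 3}
once = apply({}, desired)
twice = apply(once, desired)
print(once == twice == desired)  # True: applying again is a no-op
```

This is the deployment-level analogue of the programming-language story above: you declare the result you want instead of scripting the mutations that get you there.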
What will happen next? I'm excited to find out!