Part Three : Microservices, DevOps and Continuous Delivery #microservices

Various trends have driven the evolution of software: object orientation, then reusability, and now the big question is how we deliver value to the customer faster. If we think about it, it is not really the amount of time a single delivery takes; it is the number of iterations we can get in. We should aim to deliver value to the customer quickly rather than over-engineer a product. Products get better every day, and each iteration can make the product much better than it was when initially delivered. So the quicker we deliver value to our customers, the quicker we can iterate and find out what the customer really wants. We have often observed that the first requirement document we release is never exactly what the customer needs, so we need more iterations to build a better understanding of what the customer really wants. Microservices help us get those iterations: each microservice has its own release cycle, and releases stay small and independent.

Refactoring Teams :

The bigger and harder change is human and organizational. People working on the same product in isolation from each other is not good for the team or the organization. For example: the requirements team isolated from the development team, the development team isolated from the testing team, and the testing team isolated from the ops team. In that setup nobody is accountable for the software being built. With DevOps we instead have one flat team, with requirements, designers, developers, testers and ops all working on one thing together. That minimizes the isolation and makes it quicker and easier to move code through development, testing and production, so the iteration is faster. The less isolation we have, the quicker we can ship code to production, and the more software we build that is actually fit for production.

Another big theme of DevOps is that we need to automate everything we use to deliver the code. As soon as you have a larger number of microservices, each with its own releases, you are looking at a sizable number of deployments every day. Hence you need an immutable infrastructure in which every process is automated: every build, deployment, code-quality check and test.

Continuous Delivery :

How do we reach a state where we can do multiple releases per day, run multiple things independently, and still keep track of all of it? To get there, we need a platform to develop microservices on top of, in which lots of different teams can write lots of different microservices, and all of them can be shipped smoothly through development, testing, staging and production. This is where Continuous Integration and Continuous Delivery come in: they automate the building of these pieces of software and their migration through the platform.

Implementation Details :

In the Java world we got into the habit of creating a .jar or .war file and moving that binary around the environments: development, testing, staging and production. This is perfectly fine, with one slight problem: the application server, the configuration, the JVM settings and the operating system are not part of the binary. It is a binary of compiled Java classes, so a .war that works fine on your Windows laptop might not work in the Linux production environment. There is nothing wrong with the code; the Java version on the Linux box might simply be different. You can face many issues when you move your .war to a new environment, because you are not testing everything: you are testing just the .war, and assuming the app server it runs in behaves the same everywhere. This is where Docker changes everything.

Docker :

Docker is a standard way of packaging software in a container that can then be installed and run on any machine. You can now put everything you need into the container: the JVM, operating system patches, configuration, environment variables, and the specific version of the app server you need. All of this is defined inside the Docker image and ported as-is, together with all its configuration.
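As an illustration, a minimal Dockerfile might bake the OS base, the exact JVM, the app server, its configuration and the .war into one portable image (the image, paths and file names here are hypothetical, a sketch rather than a prescribed setup):

```dockerfile
# Base image pins the OS and the exact app server + JDK version
FROM tomcat:9.0-jdk11

# JVM settings travel with the image instead of living on the host
ENV JAVA_OPTS="-Xms256m -Xmx512m"

# Application-specific configuration baked into the container
COPY server-config/ /usr/local/tomcat/conf/

# The .war itself
COPY target/myapp.war /usr/local/tomcat/webapps/

EXPOSE 8080
CMD ["catalina.sh", "run"]
```

Now "it works on my laptop" and "it works in production" mean the same thing, because the environment ships inside the image.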

Internally a Docker container is still a Linux process. Docker builds on Linux container primitives: cgroups (short for control groups), a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes; and namespaces, a kernel feature that isolates and virtualizes system resources for a collection of processes, such as process IDs, hostnames, user IDs, network access, inter-process communication and file systems. Namespaces are a fundamental aspect of containers on Linux. So all you are really running is a single Linux operating system process; the process thinks and feels as if it has its own VM, but in reality it is simply isolated from the other processes.

How is it different from Virtual Machines ?

Every time you run a VM, you run a whole new operating system that is pretending to be a whole new computer. With a Docker container you run a process in isolation, which means you share the host operating system. You are not running lots of Linux kernels and lots of device drivers pretending to be a filesystem, memory or a keyboard to mimic a new computer. A Docker container is just a process, but an isolated one: each container gets its own view of the disk, so processes do not clash over common resources. Two containers can both refer to the same folder, e.g. /tmp, because each has its own folder structure in isolation. Each container also gets its own ports: every container can listen on the same port, say 8080, because each has its own networking stack. And imagine a bad process getting into a CPU loop, starving other processes of resources; Docker lets you set limits on the amount of resources each container can consume, avoiding exactly that situation.
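To make the resource-limit point concrete, here is a sketch of `docker run` with the standard CPU and memory flags (the image name `myapp:latest` is hypothetical):

```shell
# Cap the container at one CPU and 512 MB of memory; a runaway
# loop inside it cannot starve the neighbouring containers.
docker run -d --cpus=1 --memory=512m --name web1 myapp:latest

# A second container can bind the same internal port 8080,
# because each container has its own networking stack; here
# it is simply mapped to a different host port.
docker run -d --cpus=1 --memory=512m -p 9090:8080 --name web2 myapp:latest
```

Under the hood these flags are exactly the cgroup limits described above, applied per container.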

So the idea behind Docker is: put your stuff in a container, and anybody can run it on any computer, and you can densely pack these containers together on the hardware. Hence your hardware utilization is high.

What is the Issue with just using Docker as it is ?

The thing about Docker on its own, though, is that it is like running a program from the command line, i.e. we run

> docker run 

Something will run for a while, but there is a possibility that it stops, that the process dies for some unknown reason, or that the box it is running on dies. Hence Docker by itself is not enough to rely on to run multiple microservices in a cluster. We need a manager that monitors the health of the containers and restarts them when needed.

Kubernetes :

Kubernetes is written in Go and is used to orchestrate containers. Kubernetes, or K8s, effectively gives you a container-as-a-service platform: it takes Docker containers and turns them into a self-healing, auto-scaling cloud. K8s can run in any environment, including on a laptop. When you start a workload on K8s you specify, say, replicas=3; this asks K8s to always run 3 instances of this container, forever. If any of those processes goes down, K8s swings into action and spins up a new instance to keep up the mandatory 3 instances of the container. It behaves like a cloud: it makes sure the specified number of container instances is always running, as long as there is enough computing power in your cluster, and K8s takes care of failures immediately.
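A quick sketch of that behaviour with `kubectl` (the image and deployment names are hypothetical):

```shell
# Ask Kubernetes to keep 3 replicas of this container running, forever
kubectl create deployment myapp --image=myapp:latest --replicas=3

# Kill one pod deliberately...
kubectl delete pod myapp-<some-pod-id>

# ...and Kubernetes immediately schedules a replacement
kubectl get pods
```

The cluster converges back to the declared state on its own; you never restart anything by hand.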

K8s Architecture Diagram :

In a typical architecture diagram, K8s has an API server (the master) that is highly available, and then a bunch of nodes. These nodes have a Docker daemon on them to start and stop containers, and a K8s agent called the kubelet, which runs on every machine. The master talks to the machines and instructs them when a container needs to be started, and the kubelet starts it. It is simple and lightweight, and if you want to boot up K8s using the OpenShift distribution, it is one binary; you type


  > openshift start

This will start up the K8s cluster.


Kubernetes Sub-features :

  • Pods
  • Replication Controllers
  • Services

Pod :

A Pod is one or more containers; you can imagine a Pod to be something like a JVM. The idea behind a Pod is that it is the atomic deployment unit that K8s works with: you define a Pod, and K8s will deploy, un-deploy or re-deploy it as a unit. You can have multiple containers in a Pod, and each Pod can have its own environment variables and defined ports, and can use persistent volumes. Suppose you want to run a database and have your application store information in it; that information has to go into a persistent volume that lives outside the Docker image. Think of the Docker image as an installation of the software: the data you want to keep goes onto the persistent volume, independent of the install. That way you can later upgrade the software to a new version while the data in the database stays isolated from it, and your state stays alive even if the container goes down.
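The pieces above can be sketched as a Pod manifest; the names, image and claim below are hypothetical, and a real setup would also define the PersistentVolumeClaim itself:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mydb
spec:
  containers:
  - name: postgres
    image: postgres:13
    env:
    - name: POSTGRES_DB          # per-Pod environment variables
      value: orders
    ports:
    - containerPort: 5432        # port the container exposes
    volumeMounts:
    - name: db-storage           # data lives outside the image...
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: db-storage
    persistentVolumeClaim:
      claimName: db-storage-claim   # ...on a persistent volume
```

If this Pod is re-deployed or its container crashes, the mounted volume, and therefore the database state, survives.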

Each Pod also has its own unique IP address. Say you are running two Tomcat servers on your laptop: ordinarily you would manually map all the ports in every container, but since every Pod has its own IP address, you can use that IP address as your host and give it any port you like, including the standard 8080 in both Pods.

Replication Controller:

Despite what the name suggests, a Replication Controller does not replicate controllers; it is the controller responsible for replication. It defines the kind of Pod you want to run and how many of them, and then makes that happen. It is declarative: you say "I want to run three of these" and K8s acts upon it. All the resources in K8s are just JSON or YAML files. You can get and set them over a REST API or with the command-line tool, and you can also use the K8s web console to create or edit them.
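As a sketch, such a YAML resource might look like this (names and image are hypothetical):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: tomcat-rc
spec:
  replicas: 3                 # the declarative part: "run three of these"
  selector:
    app: tomcat               # which Pods this controller owns
  template:                   # the kind of Pod to run
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat:9.0
        ports:
        - containerPort: 8080
```

Change `replicas` via the API, the CLI or the web console, and K8s adds or removes Pods until reality matches the declaration.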

Services :

Imagine you are running 3 Tomcat servers and want to interact with them; you would need their IP addresses to reach the services deployed on them. One way, the hard way, is to keep querying the K8s REST APIs, get the Pods running right now and their respective IP addresses, and build a client-side load balancer on top of that. But let's pause right here and ask: do we really need to do all this? A Service gives us this for free.

In the Service you define the name of the service; in the specification you give it the port the service is going to listen on, and the target port, which is the internal port used inside the container. The selector is a key-value pair that matches the labels on the Pods, so the Service routes traffic to every Pod carrying those labels.
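Putting those pieces together, a sketch of a Service manifest (names are hypothetical) could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service     # stable name; clients use this instead of Pod IPs
spec:
  selector:
    app: tomcat            # routes to every Pod carrying this label
  ports:
  - port: 80               # port the Service listens on
    targetPort: 8080       # internal port inside the container
```

Clients just call `tomcat-service:80`; K8s keeps track of the Pod IPs and load-balances across them, which is exactly the bookkeeping we did not want to do by hand.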

Retrospect :

To sum up, microservices give you the flexibility to create and run thousands of miniature services, on your own data centers or in the cloud. With this architecture we can scale and rewrite services at will, rather than endlessly refactoring an existing service. When you hit a scaling issue on a particular service, you can opt to rewrite it, create a new version, and route new customers to the new service that can handle the load.

