State & Logic : Building Scalable Applications - Part II - Scaling Application Logic

Scalability is the ability of a system to support a growing number of users and volume of data without degradation in performance. As discussed in the previous article, State & Logic : Building Scalable Applications - Part 1 - The Building Blocks, an application has two parts: state and logic. In this article, we will explore the options available for scaling the logic components on the server side.

The logic side of an app relies on the CPU for processing data: business rules, calculations, or transformations. The way to scale an application is to add more resources, and there are broadly two approaches:

  1. Scale Up
  2. Scale Out

Scale up is the mechanism by which we add more resources to the same server, for example by increasing the RAM or adding processors. Upgrading the server without adding any new instances of the application is known as scaling up. Scaling up has inherent limits, since a single machine can only hold so much RAM and so many processor cores, so it may not be the best approach.

Scale out is the mechanism by which we add more servers and run multiple instances of the application, for example by running several instances and distributing traffic across them via a load balancer. Scale out lets you scale, in principle, without limit. But for an application to scale out, the state it relies on must be externalized: it should not be stored alongside any single application instance.
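The load-balancing idea above can be sketched in a few lines. This is a minimal illustration, not a production balancer: the instance names and the round-robin policy are assumptions for the example, standing in for what a real load balancer (HAProxy, an ALB, etc.) does over the network.

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distributes incoming requests across application instances in turn."""

    def __init__(self, instances):
        # cycle() endlessly repeats the instance list: a, b, c, a, b, c, ...
        self._instances = cycle(instances)

    def route(self, request):
        # Pick the next instance in rotation and hand it the request.
        instance = next(self._instances)
        return instance, request

# Three hypothetical app instances behind the balancer.
balancer = RoundRobinBalancer(["app-1", "app-2", "app-3"])
for i in range(4):
    instance, _ = balancer.route(f"req-{i}")
    print(instance)   # app-1, app-2, app-3, then back to app-1
```

Real balancers add health checks and weighting, but the core contract is the same: any instance must be able to serve any request, which is exactly why state must live outside the instances.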

For example, session state, if used, should be stored in an external service so that whichever instance handles a request can load the same session. There are workarounds such as sticky sessions, but they limit the ability to scale and also hurt resiliency: if the instance holding the session goes down, the session data is lost.
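To make the externalized-session idea concrete, here is a minimal sketch. The class below is an in-memory stand-in for an external store such as Redis or Memcached; the session IDs and field names are hypothetical.

```python
import json

class ExternalSessionStore:
    """Stand-in for an external session store (e.g. Redis).

    In production the data lives outside every app instance, so any
    instance handling a request can load the same session state.
    """

    def __init__(self):
        self._data = {}  # in a real deployment this dict is a network service

    def save(self, session_id, state):
        # Serialize so the stored form is instance-independent.
        self._data[session_id] = json.dumps(state)

    def load(self, session_id):
        raw = self._data.get(session_id)
        return json.loads(raw) if raw is not None else {}

store = ExternalSessionStore()
# Request 1, handled by instance A, writes the session...
store.save("sess-42", {"user": "alice", "cart": ["book"]})
# ...request 2, handled by instance B, reads the very same state.
state = store.load("sess-42")
```

Because no instance owns the session, instances become interchangeable and disposable, which is the property scale-out depends on.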

Scaling out can be accomplished in multiple forms. In the beginning, bare-metal servers were added and applications were installed manually. When applications could not fully exploit the capacity of bare-metal servers, scaling out happened by spinning up new Virtual Machines with the applications on them. The VMs created an overhead and consumed resources for each guest OS. This led to the birth of containerization. The major difference between a VM and a container is that a VM virtualizes the hardware (each VM gets a slice of the shared hardware without impacting other VMs), while a container virtualizes the OS (sharing the same OS kernel while still isolating services such as network and memory from other applications), allowing multiple workloads to run on the same OS.

Containerization lets you distribute an application as a single image. This image contains the recipe, or steps, to start the application. One of the best-known containerization technologies is Docker. The ability to automatically start a new instance of an app from its image led to container orchestration, of which Kubernetes is the best-known example. Kubernetes allows you to declaratively specify the number of instances of an app or service required, and it continuously reconciles the running instances to meet that specification. When Kubernetes detects that an application instance is down, it uses the image to spawn a new container, ensuring the specified number of instances stays alive. Kubernetes can run across multiple servers in a clustered environment and supports zero-downtime deployments, A/B testing, and high availability.
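The declarative specification described above is expressed as a manifest. As an illustrative config fragment (the app name, image, and port are hypothetical), a minimal Kubernetes Deployment asking for three instances might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical application name
spec:
  replicas: 3                   # desired instance count; Kubernetes keeps this many alive
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # the container image described above
          ports:
            - containerPort: 8080
```

If a container crashes, Kubernetes notices that only two replicas are running and starts a third from the image, with no operator intervention.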

With cloud computing, we can run a Kubernetes cluster on Infrastructure as a Service (IaaS) by installing it on VMs ourselves, or use a managed Kubernetes service such as Amazon EKS, Azure Kubernetes Service, or Google Kubernetes Engine.

Another pattern gaining popularity is Functions as a Service (FaaS), or serverless computing. Here there is no need to provision or manage infrastructure at all: you provide only a function, and the platform takes care of scaling its instances up and down. This is especially useful in event-oriented architectures for handling events, and in IoT for handling data ingestion from edge devices. AWS Lambda, Azure Functions, Google Cloud Functions, and OpenFaaS for Kubernetes are some of the serverless platforms available from various vendors.
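A FaaS workload really is just a function. As a minimal sketch in the AWS-Lambda-style `handler(event, context)` shape (the event fields and the telemetry scenario are assumptions for illustration), an ingestion function for edge-device readings could look like:

```python
def handler(event, context=None):
    """Entry point the platform invokes once per event.

    Scaling is the platform's job: under load it runs many copies of
    this function concurrently; when idle it runs none.
    """
    readings = event.get("readings", [])   # e.g. sensor values from an edge device
    return {"count": len(readings), "sum": sum(readings)}

# Locally we can invoke the function directly, as the platform would per event.
print(handler({"readings": [3, 5, 4]}))   # {'count': 3, 'sum': 12}
```

Note the function holds no state between invocations; any state it needs must come from the event or an external store, which is the same externalization rule discussed earlier.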

Choosing a Kubernetes-based architecture can help avoid vendor lock-in and, if designed properly, allows us to migrate seamlessly from on-premise VMs to, or across, cloud providers.

With this many options for scaling the logic part of the application, scaling the state becomes the essential remaining challenge. In the next article in this series, we will look at the challenges of scaling state and how patterns such as microservices with their own databases, sharding, and eventual consistency help tackle them.

Views are my own. Not set in stone.

#scalability #kubernetes #docker #design #architecture

