MPL migrates to K8s in AWS and partners with Tetrate to Deliver a Better Experience to 90+ Million Gamers

Mobile Premier League (MPL) is India’s largest fantasy sports and online gaming platform. The Bengaluru-based startup has raised $396 million to date, according to market intelligence platform Tracxn. With offices spread across India, Singapore and New York, MPL reportedly has more than 90 million registered users on its platform.

At one time, MPL was running all of its workloads on Amazon Elastic Compute Cloud (EC2) virtual machines (VMs), using AWS Application Load Balancer to route incoming traffic to the proper resources based on Uniform Resource Identifiers (URIs). In an effort to modernize its infrastructure and reap the agility, scalability and reliability gains of modern microservices architectures, MPL set out to shift its VM-based workloads to microservices in Kubernetes. Tetrate assisted MPL in this transition, using the VM onboarding capability in Tetrate Service Bridge (TSB) to manage the risks and complexity of migrating to new infrastructure without interrupting service to its customers.

With more than 90 million gamers accessing its platform, MPL needs to be able to run at scale using an architecture that is flexible, reliable and efficient. That’s why MPL joined the ranks of enterprises that are shifting their workloads from monolithic to microservices architectures, or, phrased another way, moving from VMs to containers orchestrated by Kubernetes.

The Migration Challenge: How to Migrate Incrementally to Reduce Risk and Manage Complexity

In reality, the shift from VMs to containers is rarely completed in one fell swoop. The risk of failure or downtime and the complexity of migrating a whole application fleet to a new architecture and environment make a “big bang” cutover impractical. Instead, the shift is typically made in stages, with applications migrated incrementally to mitigate risk and manage complexity.

During the transition, however, the business operates a hybrid infrastructure with some components running on VMs and others in Kubernetes. Applications and services running in both environments must still communicate with each other securely and reliably across infrastructure boundaries to maintain business continuity. This poses significant operational challenges—especially for service discovery, rolling out new deployments, traffic management and security policy enforcement.

To meet this challenge, MPL turned to Tetrate’s expertise and to TSB’s ability to onboard EC2 virtual machine workloads into the Istio-based service mesh in Kubernetes, enabling exactly this kind of cross-boundary communication. Once the VMs are enrolled in the mesh, Istio and Envoy manage traffic, security and resiliency between them and the Kubernetes workloads.
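
For context, upstream Istio represents a VM workload in the mesh with a WorkloadGroup resource (from which per-VM WorkloadEntry objects are generated); TSB’s onboarding agent produces the equivalent configuration automatically. The following is only an illustrative sketch, with an assumed namespace, service account and labels rather than MPL’s actual configuration:

```yaml
# Illustrative sketch only: how upstream Istio models a pool of VM workloads.
# Namespace, names and labels are assumptions for this example.
apiVersion: networking.istio.io/v1beta1
kind: WorkloadGroup
metadata:
  name: payments-vm
  namespace: mpl-prod
spec:
  metadata:
    labels:
      app: payments          # labels applied to each generated WorkloadEntry
  template:
    serviceAccount: payments # SPIFFE identity assumed by the VM workloads
    network: vm-network      # network name used for cross-network routing
```

When the istio-agent on a VM registers against this group, Istio creates a WorkloadEntry for it and issues the workload a SPIFFE identity tied to the specified service account; with TSB, the onboarding agent drives this registration.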

TSB can manage workloads on heterogeneous infrastructure in on-premises data centers and public clouds like AWS, GCP and Azure. TSB supports onboarding standalone VMs as well as pools of virtual machines managed by Auto Scaling groups.

In this article, we will detail how to migrate VM-based applications to Kubernetes in AWS environments using Istio, Envoy and Tetrate Service Bridge. We will go into detail on how Mobile Premier League (MPL) successfully moved its VM-based applications into the service mesh with the help of TSB. Readers will walk away understanding the architecture and procedure of onboarding an EC2 instance to a service mesh.

Why Onboard VMs into a Service Mesh?

  • VMs are treated like Kubernetes pods and are accessed via Kubernetes Service objects.
  • VMs are assigned a strong SPIFFE identity for use in identity-based authentication and authorization operations such as mTLS.
  • VMs communicate securely (via mTLS) with other VMs and with Kubernetes-based services (see the policy sketch after this list).
  • VM workloads can leverage features built into the service mesh, such as mTLS, service discovery, circuit breaking, canary rollouts, traffic shaping, observability and security, alongside other workloads.
  • VMs become eligible workloads for a zero trust network architecture.
  • Migrating VM-based workloads to containerized Kubernetes microservices becomes easier.
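
As a concrete example of the mTLS point above, once VMs are enrolled in the mesh a single Istio policy can require mutual TLS for pod and VM workloads alike. This is a minimal sketch assuming a namespace named mpl-prod; it is not MPL’s actual policy:

```yaml
# Illustrative only: require mTLS for all workloads (pods and onboarded VMs)
# in the assumed mpl-prod namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: require-mtls
  namespace: mpl-prod
spec:
  mtls:
    mode: STRICT   # plaintext traffic from workloads outside the mesh is rejected
```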

MPL Experience Migrating to Kubernetes

Prior to implementing this change with TSB, MPL was running all of its workloads on EC2 virtual machines. Its infrastructure was composed of pools of EC2 VMs behind an AWS Application Load Balancer (ALB). Multiple Auto Scaling groups of EC2 VMs made up the fleet, with each group constituting one application, and the ALB routed traffic to these VMs based on the incoming URI. In this architecture, service discovery was managed by Consul, with Consul agents running on each virtual machine.

“We were fully in EC2 Virtual machines and our plan was to migrate all of our workloads to Kubernetes. With the help of TSB’s VM onboarding capability, this migration became very easy. We initially explored upstream Istio for this, but there we had to face a lot of config management and performance issues for each VM. However, with TSB it’s very minimal config and the VM onboarding agent did the rest for us. We could achieve our VM workloads migration to Kubernetes without any service disruption or any added complexity. Thanks to the Tetrate team who assisted us promptly when we were running into severe issues and making this transition successful.” —Swapnil Dahiphale, MPL Senior Cloud Executive
Figure 1: MPL’s initial deployment architecture


Strategies for Migrating EC2 Workloads to Kubernetes

The primary objective was to migrate all of the VM-based workloads to Kubernetes. As shown in the architecture diagram (Figure 1), a single production instance runs more than 100 VM-based applications, and each of these applications communicates with other applications with the help of Consul service discovery. This direct app-to-app communication creates interdependencies that add to the complexity of migrating them to Kubernetes.

To mitigate risk and to make the migration process manageable, we took an incremental approach rather than migrate all applications to Kubernetes en masse.

The basic migration strategy was as follows:

  • Choose one service at a time to migrate to Kubernetes.
  • Analyze its upstream and downstream service dependency graph.
  • Migrate this service to Kubernetes and onboard it into the Istio service mesh.
  • Also onboard the virtual machines of the upstream and downstream applications from its dependency graph into the service mesh.

At the end of this process:

  • We have migrated one service (application) to Kubernetes.
  • All of the other VMs in the dependency graph—both upstream and downstream—are now accessible via Kubernetes Service objects as part of the service mesh (see the sketch after this list).
  • All components in the dependency graph—those running in upstream and downstream virtual machines as well as the service migrated to Kubernetes—use Kubernetes-style service-to-service communication managed by Kubernetes and Istio.
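
To make the second point concrete, the sketch below shows how an onboarded VM becomes reachable through an ordinary Kubernetes Service: the Service’s label selector matches the labels carried by the VM’s WorkloadEntry (and by the migrated pods), so Istio adds the VM endpoints behind the Service name. The name, namespace, labels and ports here are illustrative assumptions:

```yaml
# Illustrative only: a plain Kubernetes Service fronting both migrated pods and
# onboarded VMs whose WorkloadEntries carry the matching "app: payments" label.
# Callers simply use the Service DNS name, regardless of where the workload runs.
apiVersion: v1
kind: Service
metadata:
  name: payments
  namespace: mpl-prod
spec:
  selector:
    app: payments
  ports:
    - name: http
      port: 80
      targetPort: 8080
```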

Using this process, we have migrated one application to Kubernetes, and the VMs in its upstream and downstream dependency graph are now part of the mesh. Those onboarded VMs have their own dependency graphs that still contain workloads outside of the mesh; that communication continues to use the older app-to-app method based on Consul discovery, so we operate both systems for the duration of the migration. By repeating this process for every service, we successfully migrated all workloads to Kubernetes and were able to decommission the Consul-based infrastructure.


Four Phases of Migration to Kubernetes

  • Phase 1: Transition Individual Workloads to Kubernetes
  • Phase 2: All VMs Onboarded to the Mesh
  • Phase 3: Replace Application Load Balancer with Istio Ingress Gateway (see the sketch below)
  • Phase 4: Decommission EC2 VMs

End State: Migration Complete, All VMs Decommissioned
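
To illustrate Phase 3, the URI-based routing previously handled by the ALB can be expressed as an Istio Gateway plus a VirtualService on the ingress gateway. The hostname, paths and destination services below are assumptions for the sketch, not MPL’s actual routes:

```yaml
# Illustrative only: path-based routing at the Istio ingress gateway, taking
# over the role previously played by the AWS Application Load Balancer.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: mpl-gateway
  namespace: mpl-prod
spec:
  selector:
    istio: ingressgateway        # bind to the default Istio ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "games.example.com"    # example hostname
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: mpl-routes
  namespace: mpl-prod
spec:
  hosts:
    - "games.example.com"
  gateways:
    - mpl-gateway
  http:
    - match:
        - uri:
            prefix: /payments
      route:
        - destination:
            host: payments       # Kubernetes Service (pods or onboarded VMs)
    - match:
        - uri:
            prefix: /leaderboard
      route:
        - destination:
            host: leaderboard
```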


Continue reading the details on the Tetrate blog.


