Most public cloud providers now support a GitOps model for deploying applications to their managed K8s clusters. The expectation is that a CI/CD system (or users) writes all application K8s resources to a git repository. A sync agent (typically Flux v2 or ArgoCD) in each cluster discovers new resources to be synchronized by reading the git repository and applies them locally by talking to the local K8s API server.
The following picture depicts a typical GitOps model.
A typical process flow looks like this:
- Developer updates the code in the git repository.
- CI system builds the code and validates it by running automated tests in a test/lab/staging environment.
- CI system pushes the container images to the container registry, updating the version string if needed.
- CI system also updates the K8s YAML files in a git repo.
- CI system informs the sync agent in the K8s cluster about the git repo (only needed the first time).
- Sync agent pulls the K8s resources from the git repo, computes any differences from the previous installation, and applies them, thereby ensuring that the running state matches the state declared in the K8s resource git repo.
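The sync agent's diff-and-apply step can be sketched roughly as below. This is a toy illustration, not Flux or ArgoCD code; the `reconcile` function and the resource-key format are invented for the example.

```python
# Toy sketch of the GitOps diff-and-apply step. A real sync agent would
# parse manifests from a git checkout and talk to the K8s API server;
# here both states are plain dicts mapping a resource key to its spec.

def reconcile(desired: dict, running: dict) -> dict:
    """Compute the changes needed to make `running` match `desired`."""
    to_apply = {k: spec for k, spec in desired.items()
                if running.get(k) != spec}            # new or changed
    to_delete = [k for k in running if k not in desired]  # pruned
    return {"apply": to_apply, "delete": to_delete}

# Example: one Deployment changed its image, one ConfigMap was removed
# from the repo and should therefore be deleted from the cluster.
desired = {"apps/v1/Deployment/web": {"image": "web:v2"}}
running = {"apps/v1/Deployment/web": {"image": "web:v1"},
           "v1/ConfigMap/old": {"data": "x"}}
changes = reconcile(desired, running)
```

Running this repeatedly is what keeps the cluster's running state converged to the declared state in git.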
This works well for simple applications and uniform K8s clusters. For geo-distributed applications and edge computing, there are a few additional requirements.
- On-demand instantiation of applications on K8s clusters: One may not want to deploy applications at every possible edge all the time. What is the point of deploying applications where there are no users nearby? So there needs to be some intelligent entity that decides when to place the workloads.
- Intelligent selection of clusters to place the workloads: If there is a choice of K8s clusters to place the workload on, some intelligent entity needs to select the right ones. Selection criteria could include the capabilities of the cluster, the cost of placing the workload on it, the distance of the cluster from users, the resource availability of the clusters, and even the greenhouse gas emissions associated with each cluster.
- On-demand scale-out (bursting) of the applications: There may be a need to bring up parts of the application in other clusters to absorb load. Again, some intelligent entity needs to decide on scale-out and scale-in to meet SLAs.
- Customization of resources for the applications: When an application is duplicated across multiple clusters, the K8s resources are not always exactly the same. In some cases, a microservice may need different CPU resources depending on the cluster it is deployed in. Any customization required by cluster type needs some intelligent entity that takes intents and applies them to the K8s resources.
- Automation of service mesh and other connectivity & security infrastructure: Some K8s resources need to be added based on the type of infrastructure in place. Intra- and inter-application communication within a cluster or across clusters requires additional automation around service meshes (such as Istio/Envoy), firewalls, NAT, and others. Hence, there is a need for some intelligent entity that automates this by creating new K8s resources based on the infrastructure present at each cluster.
- Dependency and order of priority of application deployments: In a few cases, there are dependency challenges: some microservices may not start correctly if they are brought up before others. To support these dependencies, one needs an intelligent entity that understands them and primes the git repositories with resources at the right time. As an example, if X depends on Y being up and running, then X-specific resources should be primed in the git repository only after Y's status is 'ready'.
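The dependency requirement above boils down to ordering: resources must be primed in dependency order. A minimal sketch using Python's standard `graphlib` (the app names and the `deps` mapping are invented for illustration):

```python
# Sketch of dependency-aware priming order. Each app maps to the set of
# apps it depends on; an orchestrator would prime the git repo with an
# app's resources only after all of its dependencies report 'ready'.
from graphlib import TopologicalSorter

deps = {"X": {"Y"}, "Y": {"db"}, "db": set()}  # X needs Y, Y needs db

# static_order yields every app after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
```

For the chain above, the resulting order is db, then Y, then X, which is exactly the priming sequence the example in the text requires.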
EMCO can be that intelligent entity. As shown in the picture below, EMCO can be primed with deployment intents. Once the deployment intents are specified, EMCO can be placed in the GitOps chain to ensure that the intents are honored.
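To make the placement criteria concrete, here is a toy sketch of intent-driven cluster selection. This is not EMCO's actual API: the intent structure, cluster fields, weights, and scoring formula are all invented for illustration of the criteria discussed above (cost, distance from users, resource availability, emissions).

```python
# Toy intent-driven placement: score each candidate cluster against the
# weights in a (hypothetical) placement intent; lower score is better.

def score(cluster: dict, intent: dict) -> float:
    """Weighted cost, user distance, and carbon intensity, minus a
    credit for spare capacity."""
    w = intent["weights"]
    return (w["cost"] * cluster["cost_per_hour"]
            + w["latency"] * cluster["distance_km"]
            + w["carbon"] * cluster["co2_g_per_kwh"]
            - w["capacity"] * cluster["free_cpu"])

clusters = [
    {"name": "edge-a", "cost_per_hour": 1.0, "distance_km": 10,
     "co2_g_per_kwh": 400, "free_cpu": 8},
    {"name": "edge-b", "cost_per_hour": 0.5, "distance_km": 200,
     "co2_g_per_kwh": 100, "free_cpu": 32},
]
intent = {"weights": {"cost": 1.0, "latency": 0.01,
                      "carbon": 0.001, "capacity": 0.1}}

best = min(clusters, key=lambda c: score(c, intent))
```

Changing the weights in the intent shifts the decision, e.g. a latency-heavy intent would favor the nearby cluster; this is the kind of knob a deployment intent exposes to the operator.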
There is some confusion in the industry about multi-cluster orchestration. Some treat the traditional GitOps model, where each cluster synchronizes resources from the git repository, as multi-cluster ready, and consider it good enough. But multi-cluster orchestration requires an intelligent entity that makes decisions on the many concerns listed in this post.