Containers on AWS (EKS vs ECS)
Containers on AWS - Generated using Amazon Titan on Bedrock


There are many ways to run container workloads on AWS, but the main two are EKS and ECS.

EKS and ECS are both container orchestration platforms. They manage the availability, scalability and resources of a collection of container workloads. Containers have transformed the way most software is written and delivered, but running individual compute instances, each with its own Docker daemon, quickly becomes a nightmare to manage efficiently.

A container orchestration platform is vital to running containers at any sort of scale: it helps manage resources, allows services to communicate, and helps ensure security and efficiency.

Both EKS and ECS have their own advantages and disadvantages. Let's look at them both in a bit more detail.

For reference, other options are Elastic Beanstalk and rolling your own solution on EC2 (for example with kOps or Docker Swarm).

An intro to EKS

EKS is Amazon's managed Kubernetes service. It is an easy way of deploying a Kubernetes cluster without having to manage the setup yourself. If you have not heard of Kubernetes, it is an open source container orchestration platform. It manages allocating containers to hardware, networking, internal DNS and storage. Kubernetes is a very flexible, industry-leading container platform with a rich ecosystem of open source add-ons to extend its capabilities.

EKS abstracts and automates a lot of the complexity of setting up a Kubernetes cluster, but there is still quite a bit of set-up to make it work nicely with IAM, persistent volumes, Secrets Manager and other AWS services. It may have improved now, but the first time I set up a Kubernetes cluster pre-EKS it was a nightmare!

EKS Architecture

EKS consists of a managed control plane and worker nodes. The control plane is managed by AWS and does not appear as a compute resource in your account.

Worker nodes are created by a node group. A node group in turn manages an auto-scaling group of EC2 instances. There is also the option to use Fargate (serverless compute); more on that later.

Probably the most common way to create an EKS cluster is with access to three public and three private subnets. These then need appropriate tags applied so EKS can identify them.

Private subnets are used for deploying compute nodes. Public subnets are used for deploying other resources like EKS managed load balancers to expose services.
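As a sketch of the tagging mentioned above (the subnet IDs and cluster name here are placeholders), the conventional EKS subnet tags look like this:

```shell
# Tag a public subnet so EKS-managed (internet-facing) load balancers can use it
aws ec2 create-tags --resources subnet-0aaa1111 \
  --tags Key=kubernetes.io/role/elb,Value=1

# Tag a private subnet for internal load balancers
aws ec2 create-tags --resources subnet-0bbb2222 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1

# Older versions of the load balancer tooling also expect a cluster ownership tag
aws ec2 create-tags --resources subnet-0aaa1111 subnet-0bbb2222 \
  --tags Key=kubernetes.io/cluster/my-cluster,Value=shared
```

Most IaC tools let you apply these tags at subnet-creation time instead, which avoids the separate tagging step.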

This is a very high level description and there is loads of room for customisation!

EKS public access

Because Kubernetes has its own control plane separate to AWS, you can choose whether the EKS control plane is available over the internet or not. You can lock it down by IP address, but that is not really possible if you want to use a tool like GitHub Actions to deploy. If you do expose the control plane, make sure you secure it and that no Kubernetes tokens are ever in source control or similar.
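As a sketch of locking the endpoint down by IP address (cluster name and CIDR are placeholders), you can update the cluster's VPC config with the aws cli:

```shell
# Keep private access on and limit public access to a known CIDR range
aws eks update-cluster-config \
  --name my-cluster \
  --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true,publicAccessCidrs="203.0.113.0/24"
```

Setting endpointPublicAccess=false instead removes the internet-facing endpoint entirely, at the cost of needing VPC connectivity (VPN or similar) for all kubectl access.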

Kubernetes also has a great feature to allow you to execute a command or gain a shell in a running container. While I would not encourage this in a production environment, it is a great tool for debugging and development. The only caveat is you must have network access to both the control plane and the container. If your containers are in a private subnet you may need to set up a VPN.
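That feature is kubectl exec. A quick sketch (the pod name is a placeholder, and the interactive variant assumes the container image ships a shell):

```shell
# Run a one-off command in a running container
kubectl exec my-pod -- ls /app

# Get an interactive shell in the container
kubectl exec -it my-pod -- /bin/sh
```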

One strong recommendation for using EKS is to use the IAM based access (see below) as much as possible. This will handle the rotation and management of control plane credentials far better than the Kubernetes native approach.

What about ECS

ECS is Amazon's home-grown container orchestration platform. It pre-dates EKS as an AWS service. It has many of the same concepts but is a lot less complex. It is also far more Amazon-specific. Integrating with other AWS services using IAM or Secrets Manager is far quicker and easier, with minimal set-up. The control plane is also serverless (and free). Control of container resources is much more tightly integrated into the AWS console and control plane.

With ECS you will have less flexibility and control than EKS but a vastly simplified set-up and management.

ECS is still a very capable product and integrates easily with other AWS services.

Common features

There are lots of common features. Here are some of the key highlights:

  • Both of them can work with AWS networking. They will use VPC IP addresses and can use security groups
  • Both of them can use AWS roles to allow the use of other AWS services securely
  • Both of them have the concept of grouping multiple containers into a single schedulable unit (pods in Kubernetes, tasks in ECS). This allows the use of sidecars or closely dependent containers that should be deployed together
  • Both of them can use Fargate as a serverless option (see below). This can be combined with EC2.
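To make the sidecar idea concrete, here is a hypothetical Kubernetes pod with an app container and a log-forwarding sidecar (image names are placeholders); an ECS task definition expresses the same grouping in JSON:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: app
      image: my-registry/app:1.0    # placeholder application image
      ports:
        - containerPort: 8080
    - name: log-forwarder
      image: fluent/fluent-bit:2.2  # sidecar shipping the app's logs
```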

Going head to head

When EKS was first launched, I thought that it might possibly replace ECS. But now, a few years later, they are both happily coexisting. They do have some key differences.

  • ECS is far easier to set up and use. It is tightly integrated into the AWS ecosystem. EKS has a lot more to set up and configure
  • ECS is slightly cheaper. The control plane is free to use.
  • Kubernetes (and therefore EKS) is a de-facto standard. It is the leading open source container platform. Kubernetes is common across cloud providers and can also be used on premise.
  • Kubernetes does have more controls around container placement.
  • EKS is designed to be customised.
  • EKS has more 3rd party options around scaling and monitoring
  • Some 3rd party apps are easier to deploy on Kubernetes as they come complete with a Helm chart*.

*NOTE: Helm is a common Kubernetes add-on used to manage all the resources for an application. It is similar to an IaC solution and has better templating facilities than conventional Kubernetes manifests.

Another big difference is how you deploy to each. ECS is integrated into the AWS control plane, so it is easy to deploy with your existing IaC (infrastructure as code) solution. Kubernetes has its own control plane. You will normally use IaC to create your cluster and manage the nodes, then deploy your Kubernetes resources either with Kubernetes' own YAML syntax or with a product like Helm. It is possible to use Terraform to deploy both AWS resources and Kubernetes resources, but you will end up with some dependency problems if you create a Kubernetes cluster and then try to manage it in the same Terraform stack!
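The two Kubernetes deployment styles look like this in practice (file, release and namespace names are placeholders, and the Helm example assumes the chart's repository has already been added):

```shell
# Apply raw Kubernetes manifests
kubectl apply -f deployment.yaml

# Or install a packaged application with Helm
helm install my-release bitnami/nginx --namespace web --create-namespace
```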

Interesting features

Running Anywhere

Both ECS and EKS now have 'Anywhere' options covering on-premise hardware and AWS Outposts. EKS also has EKS Connector to register existing Kubernetes clusters with AWS and view them via the console.

Fargate

Fargate is a serverless container option. It creates compute on demand, so you avoid the overhead of managing nodes. It is much quicker to start up than a traditional EC2 instance. It also has a higher degree of workload isolation, as individual workloads are spawned on separate Fargate instances. Fargate can also be combined with EC2. A good approach is to use EC2 for base load and long-running Kubernetes system workloads.

Fargate uses Firecracker, AWS's microVM framework, which they open-sourced a couple of years ago. It guarantees memory cleaning and complete isolation, and it has an optimised memory footprint and better hardware utilisation.

Fargate is a bit slower than Lambda to start up, as it is spinning up a micro-instance and then registering it with the Kubernetes or ECS cluster, but it is a very quick way to add capacity. It is significantly faster than launching a new EC2 instance. It can also simplify resource utilisation. Fargate uses a range of available CPU and memory size 'templates'. With ECS these align to the pre-determined task sizes. With EKS they are rounded up, and a small Kubernetes overhead is added. The link below explains it all:

https://docs.aws.amazon.com/eks/latest/userguide/fargate-pod-configuration.html

As long as you correctly specify your resource requirements you can eliminate wasted resources.
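As a rough illustration of the EKS rounding behaviour (image name is a placeholder, and the exact sizing is per the AWS docs linked above): a pod requesting 0.5 vCPU and 1 GiB gets roughly 256 MiB added for Kubernetes components, and the total is then rounded up to the next available Fargate combination.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sized-pod
spec:
  containers:
    - name: app
      image: my-registry/app:1.0  # placeholder image
      resources:
        requests:
          cpu: "500m"    # Fargate sizes the instance from these requests,
          memory: "1Gi"  # plus a small fixed overhead, rounded up
```

If you omit resource requests entirely, Fargate falls back to a small default size, which is a common source of surprise OOM kills.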

Which to choose

If the majority of these apply to you, choose ECS:

  • If you are new to containers or you are not already familiar with Kubernetes
  • If you are building an AWS only workload
  • If you want a simple setup
  • You want to reduce overhead and running costs
  • If you want to use multiple AWS native services


If this list sounds more like you, then choose EKS:

  • You want to standardise workloads across multiple cloud providers and/or on premise
  • You need a lot of flexibility and you have complex demands
  • You are deploying a large system and want a high level of control
  • You are deploying a mixture of instance types like ARM and x86 or Windows and Linux
  • You want to take advantage of the large open source community and 3rd party ecosystem

Conclusion

I have a long history with Kubernetes and have used it for many projects. That possibly makes me a bit too quick to overlook ECS. Having used it again recently, it was so much faster to get from zero to running workload. For even quite respectable workload sizes it has all the features you need. It is quick and reliable. EKS is clearly more powerful, but often you don't need that power.

If you are looking for a container platform for a single project then ECS can be a far quicker and easier option. If you are also using other Amazon features like App Mesh, SQS queues and load balancers then they will integrate easily.

If you are looking at a consolidated platform then EKS may be a better fit. You can isolate teams using namespaces but still share underlying infrastructure. If you have a very complex workload and multiple teams all running on the same platform then you may need some of the advanced features.

The one big caveat is that switching later means a lot of wasted effort (although it is not a one-way-door decision). That does not mean you should go straight for EKS because you will 'eventually' need it, though. A little planning is needed.

Good luck and happy building!

Kubernetes extra details

If you are going beyond the choosing stage and you want to get hands-on with EKS, here are some extra details that could help you...

eksctl

eksctl is the official command line tool for EKS. I believe it started life as an open source project and was then adopted by AWS; it is now officially maintained by AWS.

eksctl is a great tool for managing your EKS cluster, and it is normally the documented solution. My big problem with it, though, is that it is yet another tool. I already have my IaC tool and kubectl.

One of the great advantages of eksctl is that it accesses both the AWS control plane and Kubernetes. This is necessary for a lot of operations.

Another problem is most of the examples are a series of commands to run rather than a declarative style like IaC. You also have to transfer values (like the ID of a VPC or similar) from assets created using IaC. My personal preference is not to use eksctl: I use my chosen IaC to create all AWS components and then kubectl to manage anything through the Kubernetes control plane.

Terraform and EKS

If you are using Terraform and EKS there are a couple of bumps in the road. The biggest one is Terraform's inability to use dynamic information when creating a provider.

Terraform uses a number of providers, like the AWS provider, to manage different types of resource. The way Terraform works is to initialise the providers first and then carry out actions. It is therefore not possible to create an EKS cluster using the AWS provider and manage it using the Kubernetes provider in the same stack: Terraform tries to set up the Kubernetes provider before executing the create-cluster step, and it fails because there are no credentials.

If you have already created the EKS cluster and got credentials then you can work-around this problem. But that does not feel like the correct answer. It is better to separate out EKS creation and Kubernetes management into separate stacks. Similar issues can exist when trying to use other providers like creating an RDS instance and then using terraform to manage the schema using a Postgres provider.
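The separate-stack approach can be sketched like this: a second Terraform stack reads the already-created cluster via data sources and configures the Kubernetes provider from them (the cluster name is a placeholder):

```hcl
# Second stack: the cluster was created in a different stack,
# so these data sources resolve against an existing cluster
data "aws_eks_cluster" "this" {
  name = "my-cluster"
}

data "aws_eks_cluster_auth" "this" {
  name = "my-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```

Because the data sources are resolved at plan time against a cluster that already exists, the provider initialisation problem goes away.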

IAM and EKS

There are two distinct ways IAM interacts with EKS:

  • Using an IAM role to access the Kubernetes control plane
  • Using a service role to access an AWS resource requiring IAM permissions

One thing to think about is which IAM user is used to create an EKS cluster. The IAM user who creates an EKS cluster will 'forever' be an administrator of the cluster. It is best practice for a dedicated IaC user to be the owner of an EKS cluster and then to add other users as needed.

To add an IAM user to Kubernetes you need to first create the IAM user with the permissions to access the cluster. You then need to create the Kubernetes Role and RoleBinding.
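A minimal sketch of that Role and RoleBinding, granting read-only access to pods in one namespace (the role name is hypothetical; the group name must match what you put in the aws-auth ConfigMap):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: default
subjects:
  - kind: Group
    name: KUBE_GROUP_NAME   # must match the group mapped in aws-auth
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```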

You then map the two together by editing the aws-auth ConfigMap in Kubernetes.

kubectl edit configmap aws-auth -n kube-system

apiVersion: v1
kind: ConfigMap
data:
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::ACC_ID:role/NODE_ROLE_NAME
      username: system:node:{{EC2PrivateDNSName}}
    - groups:
      - KUBE_GROUP_NAME
      rolearn: arn:aws:iam::ACC_ID:role/IAM_ROLE_NAME
      username: KUBE_USERNAME
    - groups:
      - system:bootstrappers
      - system:nodes
      - system:node-proxier
      rolearn: arn:aws:iam::ACC_ID:role/FARGATE_ROLE_NAME
      username: system:node:{{SessionName}}

This is what my config map looks like on a new cluster. All the items in block capitals need to be filled in with the correct values.

You can then set it up so other IAM users (or roles) can access Kubernetes. You can use the aws cli to update your Kubernetes credentials (based on your IAM credentials) using this command:

aws eks update-kubeconfig --region REGION --name EKS_CLUSTER_NAME        

The other way round is allowing containers running in EKS to access AWS resources. This is done by creating a service account in Kubernetes and mapping it to an IAM role. Authentication is done using OIDC. There are a few steps:

  • Create an OIDC provider for IAM (you only need to do this once)
  • Create the IAM role and policy - create a new IAM role and add a policy as you normally would
  • Create a Kubernetes service account
  • Create an IAM trust relationship scoped to the specific service account and assign it as the assume role policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::AWS_ACCOUNT:oidc-provider/OICD_PROVIDER"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "$oidc_provider:aud": "sts.amazonaws.com",
          "$oidc_provider:sub": "system:serviceaccount:KUBE_NAMESPACE:KUBE_SERVICE_ACCOUNT"
        }
      }
    }
  ]
}        

  • Assign the IAM role to service accounts - this is done by adding an annotation to the service account:

metadata:
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT:role/IAM_ROLE_NAME        

  • Configure pods to use the service account - you just need to add serviceAccountName to the pod specification.
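That last step looks like this in a pod spec (image name is a placeholder; the namespace and service account placeholders match the trust policy above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: s3-reader
  namespace: KUBE_NAMESPACE
spec:
  serviceAccountName: KUBE_SERVICE_ACCOUNT  # the account annotated with the role ARN
  containers:
    - name: app
      image: my-registry/app:1.0  # placeholder image
```

AWS SDKs inside the container then pick up the role credentials automatically via the injected web identity token.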

WARNING - It is possible to just let your EKS node role have access to everything your pods need. This breaks the principles of least privilege and credential isolation and should be highly discouraged.

Back to the comparison with ECS - in ECS land you pretty much get all this for free.


More articles by Andrew Larssen
