Azure Functions on Kubernetes with KEDA
Orestis Meikopoulos
Engineering Manager | Cultivating Technical Leadership | C# & .NET Content Creator | Public Speaker
The Azure Functions runtime provides flexibility in hosting your functions where and how you want. Kubernetes-based Event Driven Autoscaling (KEDA) pairs seamlessly with the Azure Functions runtime and tooling to provide event-driven autoscaling capabilities in Kubernetes (K8S).
How Kubernetes horizontal pod auto-scaler works
In an Azure Kubernetes Service (AKS) cluster, a component called the horizontal pod autoscaler (HPA) is responsible for scaling the pods in your workloads in / out. By default, it can only use CPU and memory metrics from the pods to determine the desired number of replicas a workload should have. This can be limiting if you want to scale your application on other metrics.
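For illustration, this is roughly what a default CPU-based HPA definition looks like (a sketch only; the names here are hypothetical):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp              # the workload being scaled
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # target 70% average CPU across pods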
For example, if you had deployed an Application Gateway inside the cluster, you might want to scale on other metrics, such as how many requests per second it is receiving, to increase the number of pods it has. By default this cannot be done, although there are some advanced scenarios that let you feed extended metrics from other services into your autoscaling profile.
This is where KEDA, Kubernetes-based event-driven autoscaling, comes in. When you deploy it in your cluster, it lets you configure community-built plug-ins, called scalers. These scalers offer a standard way to expose external metrics to K8S, which can then be used for various things, like configuring the autoscaling behavior of your workloads.
KEDA Architecture
KEDA is a single-purpose and lightweight component that strives to make application autoscaling simple. It applies event-driven autoscaling to scale your application to meet demand in a sustainable and cost-efficient manner with scale-to-zero.
The KEDA add-on makes it even easier by deploying a managed KEDA installation, providing you with a rich catalog of 50+ KEDA scalers that you can scale your applications with on your AKS cluster.
KEDA provides two main components:
- KEDA operator: Allows end-users to scale workloads in / out from 0 to N instances, with support for K8S Deployments, Jobs, StatefulSets, or any custom resource that defines a /scale subresource.
- Metrics server: Exposes external metrics to HPA in K8S for autoscaling purposes such as messages in a Kafka topic, or number of events in an Azure Event Hub.
How Kubernetes-based Azure Functions work
The Azure Functions service is made up of two key components: a runtime and a scale controller. The Functions runtime can run anywhere; it executes your code and includes logic on how to trigger, log, and manage function executions. The other component, the scale controller, monitors the rate of events targeting your function and proactively scales the number of instances running your app.
Kubernetes-based Functions provides the Functions runtime in a Docker container with event-driven scaling through KEDA. KEDA can scale in to 0 instances and out to N instances. It does this by exposing custom metrics to the K8S HPA. Using Functions containers with KEDA makes it possible to replicate serverless function capabilities in any Kubernetes cluster.
Deploying a function app to Kubernetes
We are now going to create a demo function app deployment on AKS with KEDA. It will consist of a queue-trigger based function, which will just log messages read from an Azure Storage Queue and will scale in / out, from 0 to N instances, based on the load of messages present in the queue.
Prerequisites to run the demo
To run the demo, you will need the following prerequisites:
1) Install the following components: Docker, the Azure CLI, and Node.js with npm (each is used in the walkthrough below).
2) Install the Azure Functions Core Tools, which can be done with the following command:
npm install -g azure-functions-core-tools
Detailed steps to follow
Let’s start the demo by following the below steps:
1) Create a new folder “myfunction”, open a PowerShell window, and cd into it.
2) Create an Azure Storage account, create a queue named “myqueue”, and get the account's connection string:
- Example using Azure CLI (after you have successfully authenticated by running az login):
az login
az account set -n {subscriptionId}
az group create -l westeurope -n aks-functions-demo-rg
az storage account create --sku Standard_LRS --location westeurope -g aks-functions-demo-rg -n {storageAccountName}
az storage queue create -n myqueue --connection-string $(az storage account show-connection-string --name {storageAccountName} --query connectionString)
3) You can deploy any function app to a Kubernetes cluster running KEDA:
- Since your functions need to run in a Docker container to be able to be deployed in an AKS cluster, your project needs a Dockerfile.
- You can create a Dockerfile by using the --docker option when calling func init to create the project:
func init --docker
- Choose: dotnet -> C#
4) To create a new queue-trigger function, use the func new command, by typing:
func new
- Choose: QueueTrigger
- Name the function: QueueFunction
5) Open Function App in VS Code:
- E.g., using command:
code .
6) Check the created Dockerfile:
- It states that the build will simply publish everything from the function app into the container and then run the function there using the Azure Functions runtime; a sketch of a typical generated Dockerfile follows below.
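For a dotnet project, the generated Dockerfile looks roughly like the following (a sketch only; the exact image tags depend on your Core Tools and .NET versions):

# Build stage: publish the function app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS installer-env
COPY . /src/dotnet-function-app
RUN cd /src/dotnet-function-app && \
    mkdir -p /home/site/wwwroot && \
    dotnet publish *.csproj --output /home/site/wwwroot

# Runtime stage: run the published app on the Azure Functions runtime image
FROM mcr.microsoft.com/azure-functions/dotnet:4
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY --from=installer-env ["/home/site/wwwroot", "/home/site/wwwroot"]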
7) Open local.settings.json file:
- Add a key named "storageconnection" and set its value to the connection string of your Azure Storage account.
- To get the connection string you can run:
az storage account show-connection-string --name {storageAccountName} --query connectionString
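After this step, your local.settings.json should look roughly like the following (the connection string value below is a placeholder for the one returned by the command above):

{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "storageconnection": "DefaultEndpointsProtocol=https;AccountName={storageAccountName};AccountKey=...;EndpointSuffix=core.windows.net"
  }
}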
8) Open QueueFunction.cs and change the QueueTrigger queue name to “myqueue” and the connection to "storageconnection", as in the sketch below.
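After the change, the function should look roughly like the standard queue-trigger template:

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueFunction
{
    [FunctionName("QueueFunction")]
    public static void Run(
        // Read from "myqueue" using the "storageconnection" app setting
        [QueueTrigger("myqueue", Connection = "storageconnection")] string myQueueItem,
        ILogger log)
    {
        // The demo simply logs each message it dequeues
        log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
    }
}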
9) Run the docker build command, by typing:
docker build -t myfuncapp:v1 .
The above command:
- Builds the container image and verifies that your function app compiles (make sure you are logged in to Docker with docker login first, since you will push images later).
- If the command succeeds, your function compiles and you now have a local container image with the function in it.
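Optionally, you can sanity-check the image locally before deploying. The Functions host inside the container listens on port 80; for the queue trigger to actually process messages, the storageconnection setting must also be available as an environment variable:

docker run -e "storageconnection=<connection-string>" -p 8080:80 myfuncapp:v1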
Next, we are going to deploy the function app to AKS and make it scale with KEDA.
10) Create a simple AKS cluster using the az aks create command, with the "--enable-addons monitoring" option to enable Azure Monitor Container insights:
- The following example creates a cluster named aksFunctionsDemoCluster with one node and enables a system-assigned managed identity, by running the following command:
az aks create -g aks-functions-demo-rg -n {aksResourceName} --enable-managed-identity --node-count 1 --enable-addons monitoring --generate-ssh-keys
11) To manage a Kubernetes cluster, you can use the Kubernetes command-line client, kubectl:
- kubectl is already installed if you use Azure Cloud Shell.
- If you are working locally and don't already have kubectl, install it using the az aks install-cli command:
az aks install-cli
12) Configure kubectl to connect to your Kubernetes cluster using the az aks get-credentials command:
az aks get-credentials -g aks-functions-demo-rg -n {aksResourceName}
The above command:
- Downloads credentials and configures the Kubernetes CLI to use them.
- Uses ~/.kube/config, the default location for the Kubernetes configuration file. You can specify a different location for your Kubernetes configuration file using the --file argument.
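You can verify the connection by listing the cluster nodes:

kubectl get nodes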
13) Install KEDA in the cluster with the func kubernetes install command, by typing:
func kubernetes install --namespace keda --validate=false
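Before moving on, you can check that the KEDA components are up and running in the namespace you chose:

kubectl get pods --namespace keda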
14) To build an image and deploy your functions to Kubernetes, run the func kubernetes deploy command, by typing:
func kubernetes deploy --name queuefunctionapp --registry <container-registry-username>
- The registry can be your Docker Hub username, as we saw in our example, but you can use a private registry here as well.
The deploy command does the following:
- The Dockerfile created earlier is used to build a local image for the function app.
- The local image is tagged and pushed to the container registry where the user is logged in.
- A manifest is created and applied to the cluster that defines a Kubernetes Deployment resource, a ScaledObject resource, and Secrets, which includes environment variables imported from your local.settings.json file.
Sometimes, however, you will need a little more control over how the function gets deployed. In that case, we can add the --dry-run flag to the previous command and redirect its output to a file:
- This will not build, push, and deploy, but will instead write the desired Kubernetes state to a file.
- For example:
func kubernetes deploy --name queuefunctionapp --registry <container-registry-username> --dry-run > deploy.yaml
The above allows you to add a few settings to the ScaledObject resource, like pollingInterval, cooldownPeriod, minReplicaCount, and maxReplicaCount, as in the sketch below.
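As a sketch, the ScaledObject section of deploy.yaml might look like the following after you add those settings (field names follow the KEDA v2 API; the exact output of your Core Tools version may differ):

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queuefunctionapp
spec:
  scaleTargetRef:
    name: queuefunctionapp   # the Deployment created for the function app
  pollingInterval: 5         # how often (in seconds) KEDA checks the queue length
  cooldownPeriod: 60         # how long to wait after the last message before scaling to 0
  minReplicaCount: 0         # scale to zero when the queue is empty
  maxReplicaCount: 10        # upper bound for scale-out
  triggers:
    - type: azure-queue
      metadata:
        queueName: myqueue
        connectionFromEnv: storageconnection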
- Since we used the --dry-run option, we now need to build, push, and deploy our functions ourselves, by running the following commands:
docker build -t <your-docker-user-id>/queuefunctionapp .
docker push <your-docker-user-id>/queuefunctionapp
kubectl apply -f deploy.yaml
15) The container with the functions is now deployed to AKS. We can check this by running:
kubectl get deployments
Notice the 0 instances of the queuefunctionapp deployment. This is what we want, as there are no messages in the queue to process.
16) Test whether this scales by putting a lot of messages into the queue. We are going to simulate that by running a little console app, sketched below:
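The article does not include the console app's code, but a minimal sketch could look like this (it assumes the Azure.Storage.Queues NuGet package and a STORAGE_CONNECTION environment variable holding the connection string):

using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

class Program
{
    static async Task Main()
    {
        // Connection string from: az storage account show-connection-string ...
        string connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION");

        // The in-process Functions queue trigger expects Base64-encoded messages by default
        var options = new QueueClientOptions { MessageEncoding = QueueMessageEncoding.Base64 };
        var queueClient = new QueueClient(connectionString, "myqueue", options);
        await queueClient.CreateIfNotExistsAsync();

        // Flood the queue so KEDA has something to scale on
        for (int i = 0; i < 1000; i++)
        {
            await queueClient.SendMessageAsync($"message {i}");
        }

        Console.WriteLine("Sent 1000 messages to myqueue.");
    }
}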
17) Continue running kubectl get deployments and wait for the scaling system to kick in:
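For example, you can watch the replica count change live:

kubectl get deployments --watch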
Later, when it is done processing the messages, it will automatically scale back in.
18) Head over to Container insights for the AKS cluster, find the function app containers while they are reading messages from the queue, and use the live logs / events tabs to watch the messages from the demo code arrive in real time.
Conclusion
KEDA (Kubernetes-based Event Driven Autoscaling) is a lightweight component that brings event-driven autoscaling to Kubernetes. It provides autoscaling based on a variety of external metrics by integrating with the Kubernetes Horizontal Pod Autoscaler (HPA), and it allows scaling workloads from zero to N instances, making autoscaling more efficient and cost-effective. The KEDA architecture includes two main components: the KEDA operator and the metrics server. The operator allows end-users to scale workloads in / out from 0 to N instances, with support for Kubernetes Deployments, Jobs, StatefulSets, or any custom resource that defines a /scale subresource. The metrics server exposes external metrics to the HPA in Kubernetes for autoscaling purposes, such as the number of messages in a Kafka topic or the number of events in an Azure Event Hub. KEDA can be used to scale any workload running on Kubernetes, including Azure Functions.
Comment from a reader (Senior Cloud Data Engineer, Global Talent UK, 5 months ago):
The issue I had is that local.settings.json is not seen by the system, even if you remove it from the .dockerignore and .funcignore files. Because the trigger references the connection as "storageconnection", Azure looks for an environment variable named AzureWebJobsstorageconnection (described here: https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-storage-queue-trigger?tabs=python-v2%2Cisolated-process%2Cnodejs-v4%2Cextensionv5&pivots=programming-language-python#connection-string), which was not specified. This code might work when you test your container in local Docker, but not on AKS. The solution is simply to add an environment variable with the name I mentioned to the Dockerfile, so the connection string is built into the container.