Autoscaling .NET Core Azure Functions on Kubernetes
Everyone who has worked with containers on Kubernetes probably agrees that there is a pretty steep learning curve in the beginning. Once you overcome that, you’ll find that it’s an awesome and very capable product and that it will make your life better (when used in the right context, of course). One area where Kubernetes is not so great is event-based autoscaling. That’s where KEDA comes in. KEDA stands for Kubernetes-based Event-driven Autoscaler. In this blog I will show you how to write your first .NET Core Azure Function and build an AKS cluster with Virtual Nodes. Finally, we will deploy the function and have KEDA scale it based on the number of messages on an Azure Storage Queue.
Let’s have KEDA introduce itself:
KEDA is a single-purpose and lightweight component that can be added to any Kubernetes cluster. It works alongside standard Kubernetes components like the Horizontal Pod Autoscaler and can extend functionality without overwriting or duplication. With KEDA you can specify the apps you want to scale in an event-driven way while other apps continue to function. This makes KEDA a flexible and safe option to run alongside other Kubernetes applications and frameworks.
Create a simple .NET Core Azure Function
I am assuming that you’ve installed the Azure Functions Core Tools v3. If not, please do that now. Creating a new function is as simple as running a few commands:
mkdir hello-keda
cd hello-keda
# init directory, select option 1. dotnet
func init . --docker
# create a new function, select option 1. QueueTrigger
func new
Open <name-of-your-function>.cs. This is the function that will run on the cluster later on. There’s one thing we need to do here and that is to set the Connection property to ‘AzureWebJobsStorage’. (The Thread.Sleep is there to simulate a bit of work, so the scaling will be easier to observe later on.)
using System;
using System.Threading;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;
namespace hello_keda
{
    public static class hello_keda_queue
    {
        [FunctionName("hello_keda_queue")]
        public static void Run([QueueTrigger("myqueue-items", Connection = "AzureWebJobsStorage")] string myQueueItem, ILogger log)
        {
            Thread.Sleep(5000);
            log.LogInformation($"C# Queue trigger function processed: {myQueueItem}");
        }
    }
}
Open local.settings.json and you will find the definition of this setting. It currently holds the value ‘UseDevelopmentStorage=true’. We’re going to change that to an actual Azure Storage account connection string.
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=<yourStorageAccountName>;AccountKey=<YourKey>",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet"
  }
}
Storage Queue
Since we’re using a trigger based on an Azure Storage Queue, we obviously need one. Let’s create it now, as we need to get its connection string and set that in the config.
az group create -l westeurope -n akskedatest
az storage account create --sku Standard_LRS --location westeurope -g akskedatest -n akskedatest
CONNECTION_STRING=$(az storage account show-connection-string --name akskedatest --query connectionString -o tsv)
az storage queue create -n myqueue-items --connection-string $CONNECTION_STRING
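# print the connection string so you can copy it into local.settings.json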
az storage account show-connection-string --name akskedatest --query connectionString -o tsv
Go back to the local.settings.json file and paste the connection string into the AzureWebJobsStorage setting. If you want, you could now test and debug your function locally by running the following command:
func start
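To trigger it locally you’ll need a message on the queue. A quick sketch using the Azure CLI looks like this (note: depending on the version of the storage extension, the queue trigger may expect Base64-encoded message bodies, which is why the content is encoded first):
# drop a Base64-encoded test message on the queue
az storage message put --queue-name myqueue-items --content "$(echo -n 'hello keda' | base64)" --connection-string $CONNECTION_STRING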
The AKS cluster
Now that we’ve created our function, it’s time to create a cluster. Before we do that, let’s talk about Virtual Nodes for a moment. Why would you want to add Virtual Nodes to your AKS cluster?
AKS with Virtual Nodes
I find Azure Functions especially useful for handling background processes or batches: scenarios in which there is nothing to do for most of the day and then suddenly over a million messages land on your queue. If you used an Azure Function on a dedicated AKS cluster to handle that, you would probably need to scale the whole cluster to avoid impacting the non-batch processes that also run on it. That takes a while, and in the meantime the performance of everything else on your cluster suffers. This is where Virtual Nodes can help you out. They allow you to quickly spin up thousands of new containers without having to provision VMs in your cluster and wait for them to start. Azure simply extends your cluster with pods running on Azure Container Instances. Containers deployed there take only a few seconds to start. The beauty is that you still talk to one Kubernetes API, and based on the configuration in your YAML files and the current load on the cluster, your workload gets deployed.
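As a sketch of what that looks like in a pod spec: the fragment below follows the pattern from the AKS Virtual Nodes documentation for explicitly targeting the virtual node (it is not part of this post’s deployment; later on we’ll use a simpler catch-all toleration instead):
nodeSelector:
  kubernetes.io/role: agent
  beta.kubernetes.io/os: linux
  type: virtual-kubelet
tolerations:
- key: virtual-kubelet.io/provider
  operator: Exists
- key: azure.com/aci
  effect: NoSchedule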
Create the cluster
Here are the instructions to get from zero to a fully functioning Kubernetes cluster in Azure (AKS) with Virtual Nodes.
We first need to create the virtual network:
az network vnet create \
--resource-group akskedatest \
--name akskedatest \
--address-prefixes 10.180.0.0/23 \
--subnet-name aKSSubnet \
--subnet-prefix 10.180.0.0/24
The Virtual Nodes need their own subnet on our virtual network:
az network vnet subnet create \
--resource-group akskedatest \
--vnet-name akskedatest \
--name virtualNodeSubnet \
--address-prefixes 10.180.1.0/24
Create a Service Principal:
az ad sp create-for-rbac --skip-assignment
Get the ID of the VNet; we need it in the next step:
az network vnet show --resource-group akskedatest --name akskedatest --query id -o tsv
Make the Service Principal a ‘Contributor’ on the new virtual network:
az role assignment create --assignee <appId> --scope <vnetId> --role Contributor
Get the ID of the subnet for the AKS cluster; we need it in the next step:
az network vnet subnet show --resource-group akskedatest --vnet-name akskedatest --name aKSSubnet --query id -o tsv
Create the cluster, replacing the placeholders with the values you got from the previous commands:
az aks create \
--resource-group akskedatest \
--name akskedatest \
--node-count 1 \
--network-plugin azure \
--service-cidr 10.0.0.0/16 \
--dns-service-ip 10.0.0.10 \
--docker-bridge-address 172.17.0.1/16 \
--vnet-subnet-id <subnetId> \
--service-principal <appId> \
--client-secret <password>
Enable the Virtual Nodes add-on:
az aks enable-addons \
--resource-group akskedatest \
--name akskedatest \
--addons virtual-node \
--subnet-name virtualNodeSubnet
That’s it! Connect to the cluster:
az aks get-credentials --resource-group akskedatest --name akskedatest
Now let’s see if the cluster is up and running using the following command:
kubectl get nodes
It should show output similar to the one below, with two nodes: one dedicated VM for our AKS cluster and one for the Virtual Node.
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-39973734-vmss000000   Ready    agent   25m   v1.15.11
virtual-node-aci-linux              Ready    agent   23m   v1.14.3-vk-azure-aci-v1.2.1.1
Install KEDA
Now that we have our cluster up and running, it’s time to install KEDA. Luckily that’s a bit quicker than creating the cluster. I’ll be using the Helm 3 method here, but there are other options.
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
kubectl create namespace keda
helm install keda kedacore/keda --namespace keda
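To verify the installation, check that the KEDA pods are up and running in the keda namespace:
kubectl get pods --namespace keda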
Deploy the function
It’s time to deploy the function! The easiest way to do that is to run the following command:
func kubernetes deploy --name hello-keda --registry <docker-user-id>
That will build the function into a Docker container, push it to your registry and then deploy the function to AKS. Sometimes, however, you need a little more control over how the function gets deployed. We can add ‘--dry-run > deploy.yaml’ to the previous command. That will not build, push and deploy, but instead will output the Kubernetes desired state to a file.
func kubernetes deploy --name hello-keda-queue --registry <docker-user-id> --dry-run > deploy.yaml
That allows you, for example, to add a few settings to the ScaledObject resource, which could then look like the following example, in which I added the pollingInterval, cooldownPeriod, minReplicaCount and maxReplicaCount:
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: hello-keda-queue
  namespace: default
  labels:
    deploymentName: hello-keda-queue
spec:
  scaleTargetRef:
    deploymentName: hello-keda-queue
  pollingInterval: 5    # Optional. Default: 30 seconds
  cooldownPeriod: 5     # Optional. Default: 300 seconds
  minReplicaCount: 0    # Optional. Default: 0
  maxReplicaCount: 20   # Optional. Default: 100
  triggers:
  - type: azure-queue
    metadata:
      type: queueTrigger
      connection: AzureWebJobsStorage
      queueName: myqueue-items
      name: myQueueItem
If we want to allow our function to run on a Virtual Node, there is one last change we need to make. We need to add
tolerations:
- operator: Exists
to our Deployment. My complete deployment then looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-keda-queue
  namespace: default
  labels:
    app: hello-keda-queue
spec:
  selector:
    matchLabels:
      app: hello-keda-queue
  template:
    metadata:
      labels:
        app: hello-keda-queue
    spec:
      containers:
      - name: hello-keda-queue
        image: erwinstaal/hello-keda-queue
        env:
        - name: AzureFunctionsJobHost__functions__0
          value: hello_keda_queue
        envFrom:
        - secretRef:
            name: hello-keda-queue
      tolerations:
      - operator: Exists
Since we used the --dry-run option, we now need to build, push and deploy our function ourselves:
docker build -t <your-docker-user-id>/hello-keda-queue .
docker push <your-docker-user-id>/hello-keda-queue
kubectl apply -f deploy.yaml
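To check that KEDA picked up the configuration, you can list the ScaledObject and the Horizontal Pod Autoscaler that KEDA creates behind the scenes (the exact HPA name depends on the KEDA version):
kubectl get scaledobject
kubectl get hpa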
Now let’s see the scaling in action. Run the following command:
kubectl get deployments -w
Go to the Azure Portal and add a message to the queue. It should only take a few seconds before you see a change in your deployment and your message being picked up from the queue! If you wait a little longer, you should see the deployment scale down again.
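If you’d rather drive the scaling from the command line, a sketch like the following floods the queue with a batch of messages (Base64-encoded, as before) so you can watch KEDA scale the deployment out and back in:
# push 100 test messages onto the queue
for i in $(seq 1 100); do
  az storage message put --queue-name myqueue-items --content "$(echo -n "message $i" | base64)" --connection-string $CONNECTION_STRING
done
Happy auto-scaling!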