Alibaba Cloud Functions on Kubernetes with Event Driven Autoscaling Ability


In the cloud-native age, container images have become the standard for software development and delivery. The custom container runtime offers developers a simpler experience and makes development and delivery more efficient: developers deliver their functions as container images that interact with Alibaba Cloud Function Compute over HTTP.

As described on its official website, KEDA is a Kubernetes-based event-driven autoscaler that automatically scales containers in Kubernetes based on the number of events needing to be processed. In this blog, we will show you how to combine these two concepts: deploying Alibaba Cloud Function Compute functions in Kubernetes with the Kubernetes Event-Driven Autoscaler (KEDA).

How It Works

A runtime is a container with the function's runtime dependencies and everything needed for monitoring, logging, and so on. The container can be deployed in any Docker- or OCI-compliant environment, including Kubernetes. The event-driven system inside Alibaba Cloud Function Compute is complex, but Kubernetes with KEDA can handle some of the same scenarios.

KEDA can scale Kubernetes Deployments and other scalable objects to and from zero (0–1), and it can also work with the HPA to autoscale Deployments between one and n replicas (1–n) by exposing rich metrics to the HPA controller.
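To make this concrete, here is a sketch of a KEDA ScaledObject that wires an event-source trigger to a Deployment. The Kafka trigger and all of its values are illustrative assumptions, not part of this demo; any of KEDA's scalers could be used in its place:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: demo-scaled-obj              # illustrative name
spec:
  scaleTargetRef:
    name: demo-deployment            # the Deployment (or other scalable object) to scale
  minReplicaCount: 0                 # 0-1 scaling is handled by KEDA itself
  maxReplicaCount: 10                # 1-n scaling is delegated to the HPA controller
  triggers:
    - type: kafka                    # illustrative event source
      metadata:
        bootstrapServers: my-kafka:9092
        consumerGroup: demo-group
        topic: demo-topic
        lagThreshold: "10"           # scale out when consumer lag exceeds this
```

Here the "number of events needing to be processed" is the Kafka consumer lag; KEDA feeds it to the HPA controller as an external metric.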

Deploy Functions in Kubernetes with KEDA

Before You Begin

  • KEDA is installed in your Kubernetes cluster.

Build Function Image

git clone
# Customize your own image name, e.g.
export FC_DEMO_IMAGE=""
docker build -t ${FC_DEMO_IMAGE} .
# Docker login before pushing, replace {your-ACR-registry}, e.g.
# It's OK if you want to push your image to your dedicated registry.
# Make sure your Kubernetes cluster has access to your registry.
docker login
# Push the image
docker push ${FC_DEMO_IMAGE}

Deploy Function

export DEPLOYMENT_NAME={demo-deployment-name} # Customize your own deployment name, e.g. demo-java-springboot
export CONTAINER_PORT=8080 # In this case, container port should be 8080.
export SERVICE_PORT=80 # Customize service port, e.g. 80
export SERVICE_NAME=${DEPLOYMENT_NAME}-svc
# Create deployment
# Expose your deployment. WARNING: it will cost you some credit.
/bin/bash ./hack/
# Verify your deployment is available.
curl -L "http://`kubectl get svc | grep ${SERVICE_NAME} | awk '{print $4}'`:${SERVICE_PORT}/2016-08-15/proxy/CustomContainerDemo/java-springboot-http/"
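The hack script's manifests are not shown in this post; a Deployment and Service for the function image might look roughly like the following sketch, where the concrete values are assumptions substituted from the environment variables above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-java-springboot          # ${DEPLOYMENT_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-java-springboot
  template:
    metadata:
      labels:
        app: demo-java-springboot
    spec:
      containers:
        - name: function
          image: registry.example.com/demo/fc-demo:latest  # ${FC_DEMO_IMAGE}
          ports:
            - containerPort: 8080     # ${CONTAINER_PORT}
---
apiVersion: v1
kind: Service
metadata:
  name: demo-java-springboot-svc      # ${SERVICE_NAME}
spec:
  type: LoadBalancer                  # exposes an external IP, which costs credit
  selector:
    app: demo-java-springboot
  ports:
    - port: 80                        # ${SERVICE_PORT}
      targetPort: 8080
```

A LoadBalancer Service would explain both the cost warning above and the `awk '{print $4}'` in the verification command, which reads the EXTERNAL-IP column of `kubectl get svc`.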

Enable Autoscaling with KEDA

In this section, we take cron trigger and CPU trigger as examples.

Cron Trigger

export SCALED_OBJECT_NAME={cron-scaled-obj} # Customize your own ScaledObject name, e.g. cron-scaled-obj
# Create ScaledObject with cron trigger
/bin/bash ./hack/
## Deployment replicas will be 5 between minute 15 and minute 30 of every hour
kubectl get deployments.apps ${DEPLOYMENT_NAME}
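The ScaledObject the hack script creates is not shown; based on the schedule described above, a cron-triggered ScaledObject might look like this sketch (the timezone and names are assumptions):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaled-obj               # ${SCALED_OBJECT_NAME}
spec:
  scaleTargetRef:
    name: demo-java-springboot        # ${DEPLOYMENT_NAME}
  triggers:
    - type: cron
      metadata:
        timezone: Asia/Shanghai       # illustrative timezone
        start: "15 * * * *"           # minute 15 of every hour
        end: "30 * * * *"             # minute 30 of every hour
        desiredReplicas: "5"          # replicas held during the window
```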

The cron trigger works as shown below.


CPU Trigger

# Create ScaledObject with CPU trigger
/bin/bash ./hack/
## Put some stress on the deployment
## Add 30 QPS of stress for 120s
/bin/bash ./hack/ > /dev/null 2>&1 &
## All pods' CPU usage will increase,
## and the pod count will reach its limit shortly
kubectl top pod | grep ${DEPLOYMENT_NAME}
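Again the manifest itself is hidden in the hack script; a CPU-triggered ScaledObject could be sketched as follows (the target utilization, replica bounds, and names are assumptions):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-scaled-obj                # illustrative name
spec:
  scaleTargetRef:
    name: demo-java-springboot        # ${DEPLOYMENT_NAME}
  minReplicaCount: 1                  # the CPU scaler cannot scale to zero
  maxReplicaCount: 10                 # the limit the pod count reaches under load
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "50"                   # target average CPU utilization (%)
```

Note that unlike event-source triggers, the CPU scaler relies on resource metrics, so the container should declare CPU requests for the utilization target to be meaningful.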

The CPU trigger works as shown below.




Follow me to keep abreast with the latest technology news, industry insights, and developer trends.
