Deploying a High-Reliability Kubernetes Ingress Controller

In Kubernetes clusters, an Ingress is a collection of rules that authorize inbound access to cluster Services and provide Layer-7 load balancing capabilities. With Ingress, you can configure externally accessible URLs, load balancing, SSL/TLS termination, and name-based virtual hosting. Because Ingress is the access layer for cluster traffic, its reliability is critical. This document describes how to deploy an Ingress access layer that provides high performance and high reliability.

High-Availability Deployment Architecture

To achieve high reliability, single points of failure must be eliminated first. This is generally done by deploying multiple replicas, and the same principle applies here: use a multi-node deployment architecture for the high-reliability Ingress access layer in Kubernetes clusters. Because Ingress is the entry point for cluster traffic, we recommend that you dedicate nodes exclusively to Ingress to prevent business applications and Ingress services from competing for resources.

As shown in the preceding figure, multiple dedicated Ingress instances form a unified access layer that carries the traffic at the cluster entrance, and the Ingress nodes can be scaled out or in based on backend business traffic. If your cluster is small in the early stage, you can also co-locate the Ingress services with business applications, but we recommend that you limit and isolate their resources.
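In the co-located mode, the resource isolation mentioned above can be expressed with resource requests and limits on the controller container. The fragment below is a minimal sketch to be merged into the nginx-ingress-controller Deployment; the container name and the resource values are illustrative assumptions, not values from this document:

```yaml
# Fragment of the nginx-ingress-controller Deployment spec (illustrative values).
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller   # container name assumed
          resources:
            requests:
              cpu: "500m"      # reserve CPU so business pods cannot starve Ingress
              memory: "512Mi"
            limits:
              cpu: "1"         # cap usage so Ingress cannot starve business pods
              memory: "1Gi"
```

A request guarantees scheduling capacity for the controller, while the limit keeps a traffic spike at the access layer from evicting co-located business pods.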

Deploy a High-Availability Ingress Access Layer in the Container Service Cluster

A Kubernetes cluster created through the Container Service console has a default Nginx Ingress Controller service with two pod replicas, and the service is mounted to an Internet-facing SLB instance. Run the following commands to check the service:

~ # 1> Check the nginx-ingress-controller pod replicas.
~ kubectl -n kube-system get pod | grep nginx-ingress-controller
nginx-ingress-controller-674c96ffbc-7h4nt 1/1 Running 0 4h
nginx-ingress-controller-674c96ffbc-rvfcw 1/1 Running 0 4h
~ # 2> Check the IP address of the SLB to which the nginx-ingress-lb service is mounted.
~ kubectl -n kube-system get svc nginx-ingress-lb
nginx-ingress-lb LoadBalancer 80:30990/TCP,443:30076/TCP 4h

As the service scale of your cluster grows, perform the following operations to add Ingress Controller nodes and maintain high performance and availability.

Adjust the Number of Replicas

You can simply adjust the number of pod replicas of the Nginx Ingress Controller for rapid scale-out or scale-in.

~ # 1> Run the scale command to add a pod replica. (Determine the number of replicas to be added based on the specific service volume.)
~ kubectl -n kube-system scale --replicas=3 deployment/nginx-ingress-controller
deployment.extensions "nginx-ingress-controller" scaled
~ # 2> Check the pod replica status.
~ kubectl -n kube-system get pod | grep nginx-ingress-controller
nginx-ingress-controller-674c96ffbc-7h4nt 1/1 Running 0 4h
nginx-ingress-controller-674c96ffbc-rvfcw 1/1 Running 0 4h
nginx-ingress-controller-674c96ffbc-xm8dw 1/1 Running 0 12s
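As an alternative to the imperative scale command above, the replica count can be kept declaratively in the Deployment itself (for example via kubectl edit or a version-controlled manifest). This is a minimal sketch of the relevant fields only, not a complete manifest:

```yaml
# Declarative equivalent of "kubectl scale --replicas=3" (fields sketch).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 3   # desired number of Ingress Controller pods
```

Keeping the count in a manifest avoids drift between what is running and what is recorded, at the cost of an extra step for ad-hoc scaling.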

Specify the Nodes for Deployment

Load balancing requires high computing and I/O performance, so we recommend that you deploy the Nginx Ingress Controller on nodes with high CPU clock speeds and high I/O performance. When a Kubernetes cluster contains node instances with different specifications, you can label specific nodes and deploy the Nginx Ingress Controller only on those nodes.

~ # 1> Check the status of cluster nodes.
~ kubectl get node
cn-hangzhou.i-bp109znbuf1b19ik17i2 Ready <none> 4h v1.11.2
cn-hangzhou.i-bp109znbuf1b19ik17i3 Ready <none> 4h v1.11.2
cn-hangzhou.i-bp109znbuf1b19ik17i4 Ready <none> 4h v1.11.2
cn-hangzhou.i-bp14p7rlsw8mc28w5wof Ready master 4h v1.11.2
cn-hangzhou.i-bp1845cet96qo07msekf Ready master 4h v1.11.2
cn-hangzhou.i-bp19420uhlyv2e5k4kmh Ready master 4h v1.11.2
~ # 2> If you want to deploy the Nginx Ingress Controller on cn-hangzhou.i-bp109znbuf1b19ik17i3 and cn-hangzhou.i-bp109znbuf1b19ik17i4,
~ # label the two nodes with node-role.kubernetes.io/ingress="true".
~ kubectl label nodes cn-hangzhou.i-bp109znbuf1b19ik17i3 node-role.kubernetes.io/ingress="true"
node "cn-hangzhou.i-bp109znbuf1b19ik17i3" labeled
~ kubectl label nodes cn-hangzhou.i-bp109znbuf1b19ik17i4 node-role.kubernetes.io/ingress="true"
node "cn-hangzhou.i-bp109znbuf1b19ik17i4" labeled
~ # 3> Update the deployment by adding the nodeSelector configuration.
~ kubectl -n kube-system patch deployment nginx-ingress-controller -p '{"spec": {"template": {"spec": {"nodeSelector": {"node-role.kubernetes.io/ingress": "true"}}}}}'
deployment.extensions "nginx-ingress-controller" patched
~ # 4> Verify that the Nginx Ingress Controller has been deployed on the two specified nodes.
~ kubectl -n kube-system get pod -o wide | grep nginx-ingress-controller
nginx-ingress-controller-7cc9b5956c-fs8kf 1/1 Running 0 50s cn-hangzhou.i-bp109znbuf1b19ik17i4
nginx-ingress-controller-7cc9b5956c-xd77k 1/1 Running 0 1m cn-hangzhou.i-bp109znbuf1b19ik17i3
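The same scheduling constraint can also live in the Deployment manifest instead of being applied as a one-off patch. This fragment is a sketch that assumes a node label such as node-role.kubernetes.io/ingress="true" (the exact label key is an assumption here):

```yaml
# Pod template fragment pinning the controller to labeled nodes (sketch).
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/ingress: "true"   # label key assumed
```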


  1. Ensure that the number of labeled nodes is greater than or equal to the number of pod replicas so that the replicas can be scheduled on different nodes.
  2. We recommend that you do not deploy the Nginx Ingress Controller on master nodes.
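To enforce the first note above, a pod anti-affinity rule can keep two controller replicas from landing on the same node. This is a sketch only; the pod label app: ingress-nginx is an assumption, as this document does not show the controller's pod labels:

```yaml
# Pod template fragment preventing two controller pods on one node (sketch).
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: ingress-nginx        # pod label assumed
              topologyKey: kubernetes.io/hostname
```

With a required rule, a replica that cannot find a free labeled node stays Pending, which makes an undersized node pool visible immediately.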

All-Around Monitoring

Monitoring the Kubernetes Ingress Controller is mandatory. To perform all-around monitoring of Ingress Controller pods and nodes, follow the instructions in Container Service Monitor and CloudMonitor.

To learn more about Alibaba Cloud Container Service for Kubernetes, visit

