Getting Started with Kubernetes | Kubernetes CNIs and CNI Plug-ins

What Is A CNI?
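CNI stands for Container Network Interface, a CNCF specification for the contract between a container runtime and network plug-ins: the runtime invokes a plug-in binary with the operation in the `CNI_COMMAND` environment variable (ADD, DEL, CHECK, VERSION) and the network configuration as JSON on stdin, and on ADD the plug-in prints a JSON result describing the interfaces, IPs, and routes it set up. The sketch below illustrates that contract only; the addresses are made up, and a real plug-in would create interfaces rather than echo a canned result.

```python
# Minimal, illustrative sketch of the CNI plugin contract. A real plugin
# reads the config from stdin and CNI_COMMAND from the environment; here
# we call the ADD handler directly with a sample config.
import json


def cni_add(config: dict) -> dict:
    # A real plugin would create a veth pair, move one end into the pod's
    # network namespace, allocate an IP from an IPAM pool, and set routes.
    # This sketch returns a hardcoded result in CNI result format.
    return {
        "cniVersion": config.get("cniVersion", "1.0.0"),
        "ips": [{"address": "10.244.1.7/24", "gateway": "10.244.1.1"}],
        "routes": [{"dst": "0.0.0.0/0"}],
    }


if __name__ == "__main__":
    sample_config = {"cniVersion": "1.0.0", "name": "demo-net", "type": "demo"}
    print(json.dumps(cni_add(sample_config)))
```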

How to Use CNIs in Kubernetes

Select an Appropriate CNI Plug-in

  • In Overlay mode, the container network is independent of the host's IP address range. For cross-host communication, tunnels are established between hosts, and packets in the container CIDR block are encapsulated as packets exchanged between hosts over the underlying physical network. This mode removes the dependency on the underlying network.
  • In Routing mode, hosts and containers belong to different CIDR blocks, and cross-host communication is implemented through routing, with no tunnels or packet encapsulation between hosts. However, route interconnection partially depends on the underlying network: for example, the next-hop host must be reachable, which typically requires Layer 2 connectivity between hosts.
  • In Underlay mode, containers and hosts sit on the same network layer as peers, and network interconnection between containers is provided by the underlying network. This mode is therefore highly dependent on the underlying network's capabilities.
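The core idea of Overlay mode can be made concrete with a small sketch: a packet between two pod IPs is wrapped inside an outer packet between the two host IPs, so the physical network only ever routes host addresses. The dictionaries, VNI, and addresses below are purely illustrative; a real VXLAN header is a binary format, not a dict.

```python
# Conceptual sketch of Overlay-mode encapsulation (VXLAN-like).
def encapsulate(inner: dict, src_host: str, dst_host: str, vni: int = 42) -> dict:
    return {
        "outer_src": src_host,   # underlay: host-to-host addresses
        "outer_dst": dst_host,
        "vni": vni,              # VXLAN network identifier
        "payload": inner,        # the original pod-to-pod packet
    }


# A pod on host 192.168.0.11 talks to a pod on host 192.168.0.12.
pod_packet = {"src": "10.244.1.7", "dst": "10.244.2.9", "data": b"hello"}
wire_packet = encapsulate(pod_packet, "192.168.0.11", "192.168.0.12")
# The underlay forwards on outer_src/outer_dst; the receiving host strips
# the outer headers and delivers the payload to the destination pod.
print(wire_packet["outer_dst"], wire_packet["payload"]["dst"])
```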

Environmental Restrictions

  • A virtual environment, such as OpenStack, imposes many network restrictions: machines cannot reach each other directly over Layer 2, only packets with Layer 3 headers such as IP addresses are forwarded, and a host may only use specified IP addresses. On such a restricted underlying network, you can only select plug-ins in Overlay mode, such as Flannel-VXLAN, Calico-IPIP, and Weave.
  • A physical machine environment imposes few restrictions on the underlying network; for example, Layer 2 communication works within a switch. In such a cluster, you can select plug-ins in Underlay or Routing mode. In Underlay mode, you can install multiple network interface controllers (NICs) in a physical machine, or use hardware virtualization on the NICs. In Routing mode, routes are established through a routing protocol running on Linux, which avoids the performance degradation caused by VXLAN encapsulation. In this environment, you can select plug-ins such as Calico-BGP, Flannel-HostGW, and SR-IOV.
  • A public cloud environment is a type of virtual environment and also restricts the underlying capabilities. However, each public cloud tends to optimize for containers, for example by providing APIs to configure additional NICs or routes. If you run workloads on a public cloud, we recommend selecting the CNI plug-in provided by that cloud vendor for compatibility and optimal performance; for example, Alibaba Cloud provides the high-performance Terway plug-in.
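As a concrete illustration of the Overlay-versus-Routing choice, Flannel switches between the two through the `Backend.Type` field of its `net-conf.json` (distributed via the kube-flannel ConfigMap): `vxlan` gives Overlay behavior that works on a restricted virtual network, while `host-gw` gives Routing behavior that requires Layer 2 reachability between hosts. A typical Overlay configuration (CIDR shown is the common default, but yours may differ):

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

On a physical-machine cluster where all nodes share a Layer 2 segment, changing `"Type"` to `"host-gw"` drops the VXLAN encapsulation overhead.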

Functional Requirements

  • First, consider security requirements, such as whether the plug-in can enforce Kubernetes NetworkPolicy.
  • Second, consider whether resources inside the cluster need to interconnect with resources outside it.
  • Lastly, consider the plug-in's support for Kubernetes' service discovery and load balancing capabilities.
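On the security point: NetworkPolicy is a standard Kubernetes API, but it only takes effect if the chosen CNI plug-in implements it (Calico does, for example, while Flannel on its own does not). The policy below is a hedged illustration with made-up names showing the kind of rule a policy-capable plug-in would enforce: only pods labeled `app: frontend` may reach `app: backend` pods on TCP 8080.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend   # illustrative names
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```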

Performance Requirements

  • Pod Creation Speed
  • Pod Network Performance

How to Develop Your Own CNI Plug-in

Connect a Network Cable to a Pod

  • Step 1: Assign the allocated IP address to the pod's NIC.
  • Step 2: In the pod's network namespace, add a route for the cluster's CIDR block through the pod's NIC so that traffic to other pods leaves through it, and add a default route through the same NIC so that Internet-bound traffic does too.
  • Step 3: On the host, add a route for the pod's IP address pointing at the host-side peer of the pod's veth pair (for example, veth1). In this way, traffic can flow from the pod to the host, and traffic from the host to the pod's IP address is delivered to the pod's NIC through the veth peer.
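The three steps above can be rendered as equivalent `ip` commands. The sketch below only generates and prints the commands; the interface names (`eth0`, `veth1`), the namespace name, and the addresses are all illustrative, and a real plug-in would typically issue netlink calls rather than shell out.

```python
# Render the three pod-wiring steps as illustrative `ip` commands.
def wiring_commands(netns: str, pod_ip: str, cluster_cidr: str,
                    gateway: str, host_veth: str = "veth1") -> list[str]:
    return [
        # Step 1: assign the allocated IP to the pod's NIC inside its netns.
        f"ip netns exec {netns} ip addr add {pod_ip}/24 dev eth0",
        # Step 2: route the cluster CIDR and the default route via eth0.
        f"ip netns exec {netns} ip route add {cluster_cidr} dev eth0",
        f"ip netns exec {netns} ip route add default via {gateway} dev eth0",
        # Step 3: on the host, route the pod's IP to the veth peer.
        f"ip route add {pod_ip}/32 dev {host_veth}",
    ]


for cmd in wiring_commands("pod-ns", "10.244.1.7", "10.244.0.0/16", "10.244.1.1"):
    print(cmd)
```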

Connect the Pod to the Network

  • A CNI's daemon process runs on each node and learns the IP address of every pod in the cluster and the node each pod is located on. It does so by watching the Kubernetes API server, which notifies each daemon when nodes and pods are created.
  • Then, the daemon process configures network connection in two steps:
  • (1) The daemon process creates a channel to every other node of the cluster. The channel is an abstraction: it can be implemented as an Overlay tunnel, as a Virtual Private Cloud (VPC) route table entry in Alibaba Cloud, or as a BGP route in your own data center.
  • (2) The IP addresses of all pods are associated with the channels just created. Association is likewise an abstraction: it can be implemented through Linux routes, a forwarding database (FDB) table, or an Open vSwitch (OVS) flow table. A Linux route sends traffic for a pod IP to a specific node; an FDB entry maps a pod's IP to the tunnel endpoint of its node over an Overlay network; an OVS flow forwards traffic for a pod's IP to its node.
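The daemon's two bookkeeping steps can be sketched with in-memory stand-ins: a "channel" per node (standing in for a tunnel, VPC route entry, or BGP peering) and an association table from pod IP to channel. All class, node, and address names below are illustrative.

```python
# Sketch of the daemon's two-step bookkeeping with in-memory stand-ins.
class CNIDaemon:
    def __init__(self):
        self.channels = {}    # node name -> channel (e.g., tunnel endpoint)
        self.pod_routes = {}  # pod IP -> node name whose channel carries it

    def on_node_added(self, node: str, node_ip: str):
        # Step 1: create a channel to the new node (an Overlay tunnel,
        # a VPC route-table entry, or a BGP peering in practice).
        self.channels[node] = {"endpoint": node_ip}

    def on_pod_added(self, pod_ip: str, node: str):
        # Step 2: associate the pod IP with that node's channel (a Linux
        # route, an FDB entry, or an OVS flow in practice).
        self.pod_routes[pod_ip] = node

    def lookup(self, pod_ip: str) -> dict:
        # Which channel carries traffic destined for this pod?
        return self.channels[self.pod_routes[pod_ip]]


daemon = CNIDaemon()
daemon.on_node_added("node-b", "192.168.0.12")
daemon.on_pod_added("10.244.2.9", "node-b")
print(daemon.lookup("10.244.2.9"))  # {'endpoint': '192.168.0.12'}
```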

