Getting Started with Kubernetes | Kubernetes CNIs and CNI Plug-ins

What Is a CNI?

CNI stands for Container Network Interface, the standard interface through which Kubernetes configures pod networking. The kubelet calls different network plug-ins through the CNI to implement different network configuration methods. The CNI specification is implemented by plug-ins such as Calico, Flannel, Terway, Weave Net, and Contiv.

How to Use CNIs in Kubernetes

Kubernetes determines which CNI plug-in to use based on the CNI configuration file on each node: by convention, the first configuration file in lexicographic order under /etc/cni/net.d.
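
For illustration, here is a minimal sketch of what such a configuration file might look like. The file name, network name, and CIDR block are hypothetical; bridge and host-local are standard reference plug-ins shipped by the CNI project, and a real deployment would use whatever configuration its chosen plug-in installs.

    {
      "cniVersion": "0.3.1",
      "name": "mynet",
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }

Saved as, say, /etc/cni/net.d/10-mynet.conf, this tells the runtime to invoke the bridge plug-in for every pod and to let host-local allocate pod IP addresses from 10.244.0.0/16.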

Select an Appropriate CNI Plug-in

The community provides many CNI plug-ins, such as Calico, Flannel, and Terway. Before selecting an appropriate CNI plug-in in a production environment, let’s have a look at the different CNI implementation modes.

  • In Overlay mode, containers and hosts belong to different CIDR blocks. Cross-host communication is implemented through tunnels (such as VXLAN) established between hosts, with pod packets encapsulated in host packets, so this mode does not depend on the underlying network.
  • In Routing mode, hosts and containers likewise belong to different CIDR blocks, but cross-host communication is implemented through routing. No tunnels are established between hosts for packet encapsulation. However, route interconnection partially depends on the underlying network: for example, hosts must be reachable over the same Layer 2 network, or routes must be propagated through a protocol such as BGP. A sample host route table for this mode is sketched after this list.
  • In Underlay mode, containers and hosts are located at the same network layer and have equal status on the network. Network interconnection between containers depends on the underlying network, so this mode is highly dependent on the underlying capabilities.
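
To make Routing mode concrete, here is a hedged sketch of the route table on one host in a hypothetical two-node cluster, where node A (192.168.1.11) owns the pod CIDR block 10.244.0.0/24 and node B (192.168.1.12) owns 10.244.1.0/24; all addresses are made up for illustration:

    # On node A: traffic for local pods goes to the local bridge, while
    # traffic for node B's pods is routed directly to node B's host
    # address -- no packet encapsulation is involved.
    $ ip route
    10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
    10.244.1.0/24 via 192.168.1.12 dev eth0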

Environmental Restrictions

Different environments provide different underlying capabilities.

  • A physical machine environment imposes few restrictions on the underlying network. For example, Layer 2 communication can be implemented within a switch. In such a cluster environment, you can select plug-ins in Underlay or Routing mode. In Underlay mode, you can insert multiple network interface controllers (NICs) into a physical machine, or virtualize multiple devices on a NIC through hardware features such as SR-IOV. In Routing mode, routes are synchronized between hosts through a routing protocol such as BGP, which avoids the performance degradation caused by packet encapsulation such as VXLAN. In this environment, you can select plug-ins such as Calico-BGP, Flannel-HostGW, and SR-IOV.
  • A public cloud environment is a virtualized environment and places many restrictions on the underlying capabilities. However, public cloud vendors often adapt their platforms for containers, for example by providing APIs to configure additional NICs or routing capabilities. If you run businesses on the public cloud, we recommend selecting the CNI plug-in provided by your cloud vendor for compatibility and optimal performance. For example, Alibaba Cloud provides the high-performance Terway plug-in.

Functional Requirements

  • First, consider security requirements: for example, whether the plug-in supports Kubernetes NetworkPolicy, which defines rules for allowing or denying traffic between pods. Plug-ins such as Calico enforce NetworkPolicy, while Flannel on its own does not; a sample policy is sketched after this list.
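
As a hedged illustration (the name, namespace, and labels below are hypothetical), a NetworkPolicy that allows only pods labeled app=frontend to reach pods labeled app=backend might look as follows. Note that it takes effect only if the installed CNI plug-in enforces NetworkPolicy:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: backend-allow-frontend
      namespace: demo
    spec:
      podSelector:
        matchLabels:
          app: backend
      policyTypes:
        - Ingress
      ingress:
        - from:
            - podSelector:
                matchLabels:
                  app: frontend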

Performance Requirements

Pod performance can be measured in terms of pod creation speed (for example, how quickly a large batch of pods can be created when a business needs urgent scaling) and pod network performance (the throughput and latency of traffic between pods).
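
As a hedged sketch of how pod-to-pod network performance might be measured (the pod names are hypothetical, and networkstatic/iperf3 is one publicly available image that ships iperf3; any equivalent image works):

    # Start an iperf3 server in one pod.
    kubectl run iperf-server --image=networkstatic/iperf3 -- -s
    # Look up the server pod's IP address.
    kubectl get pod iperf-server -o wide
    # Run an iperf3 client from a second pod against that IP to
    # measure throughput across the CNI's pod network.
    kubectl run iperf-client --rm -it --image=networkstatic/iperf3 -- -c <server pod IP>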

How to Develop Your Own CNI Plug-in

The plug-ins provided by the community may not meet your specific requirements. For example, before Terway, only plug-ins using VXLAN encapsulation in Overlay mode could be used in the Alibaba Cloud VPC environment. Such plug-ins provide relatively poor performance and could not meet some business requirements of Alibaba Cloud, so Alibaba Cloud developed the Terway plug-in.

Connect a Network Cable to a Pod

A network cable can be connected to a pod in three steps (a shell sketch of these steps appears after the list):

  • Step 1: Create a pair of veth virtual NICs, move one end into the pod’s network namespace to serve as the pod’s NIC, and leave the peer end (for example, veth1) on the host.
  • Step 2: Allocate the pod’s IP address to its NIC, and configure routes inside the pod so that traffic to the cluster’s CIDR block, as well as the default route for Internet-bound traffic, goes out through this NIC.
  • Step 3: Configure a route on the host for the pod’s IP address that points to the veth1 virtual NIC at the host-side peer end. In this way, traffic can be routed from the pod to the host, and traffic from the host to the pod’s IP address can be routed to the peer end of the pod’s NIC.
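
Here is a hedged shell sketch of these three steps using the standard Linux ip tooling. The namespace name, interface names, and addresses are hypothetical, the sketch covers a single pod only, and real CNI plug-ins perform the same operations programmatically through netlink:

    # Step 1: create a veth pair and move one end into the pod's
    # network namespace (a named netns stands in for the pod here).
    ip netns add pod1
    ip link add veth1 type veth peer name eth0_tmp
    ip link set eth0_tmp netns pod1
    ip netns exec pod1 ip link set eth0_tmp name eth0
    ip link set veth1 up
    ip netns exec pod1 ip link set eth0 up

    # Step 2: assign the pod's IP address and routes inside the pod.
    # 10.244.0.1 acts as the gateway and lives on the host-side peer.
    ip netns exec pod1 ip addr add 10.244.0.5/32 dev eth0
    ip netns exec pod1 ip route add 10.244.0.1 dev eth0 scope link
    ip netns exec pod1 ip route add default via 10.244.0.1 dev eth0

    # Step 3: on the host, give the peer the gateway address and route
    # the pod's IP address to the host-side veth peer.
    ip addr add 10.244.0.1/32 dev veth1
    ip route add 10.244.0.5/32 dev veth1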

Connect the Pod to the Network

After connecting a network cable to the pod, the pod has an IP address and a route table. Next, you enable communication between pods by making each pod’s IP address reachable from anywhere in the cluster.

This is typically handled by the CNI plug-in’s daemon process on each node, which configures the network connection in two steps:
  • (1) The daemon process creates a channel from the current node to every other node of the cluster. The channel is an abstract concept; it can be implemented by an Overlay tunnel, by the Virtual Private Cloud (VPC) route table on Alibaba Cloud, or by BGP routes in your own data center.
  • (2) The IP addresses of all pods are associated with the created channels. This association is also an abstract concept; it can be implemented through Linux routing, a forwarding database (FDB) table, or an Open vSwitch (OVS) flow table. Linux routing configures a route from a pod’s IP address to a specific node. An FDB table forwards traffic for a pod to the tunnel endpoint of its node across an Overlay network. An OVS flow table directs traffic destined for a pod’s IP address to the correct node. A shell sketch of the first two mechanisms follows this list.
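
Here is a hedged sketch of the two most common association mechanisms; all addresses, device names, and the MAC address are hypothetical:

    # (a) Linux routing, as used in Routing mode: traffic for node B's
    # pod CIDR block is sent to node B's host address.
    ip route add 10.244.1.0/24 via 192.168.1.12

    # (b) FDB table, as used with VXLAN Overlay networks: the VXLAN
    # device is told that this MAC address is reachable through the
    # tunnel endpoint at node B.
    bridge fdb append 0e:2d:7a:11:22:33 dev vxlan0 dst 192.168.1.12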

Summary

In this article, we learned what a CNI is and how Kubernetes selects a plug-in through the CNI configuration file; how to choose an appropriate plug-in based on its implementation mode (Overlay, Routing, or Underlay), the restrictions of your environment, your functional requirements such as NetworkPolicy support, and your performance requirements; and the two jobs a custom CNI plug-in must perform: connecting a network cable to the pod and connecting the pod to the network.
