How to Install and Deploy Kubernetes on Ubuntu 16.04

By Hitesh Jethva, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

Kubernetes is a free, open-source container orchestration system. It provides a platform for automating the deployment, scaling, and operation of application containers across clusters of hosts. Kubernetes lets you take advantage of on-premises, hybrid, or public cloud infrastructure, freeing organizations from tedious manual deployment tasks.

Kubernetes was originally designed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). It is quickly becoming the standard for deploying and managing software in the cloud. Kubernetes follows a master-slave architecture, in which a master node provides centralized control over all agent nodes. A Kubernetes cluster is built from several components, including etcd, flannel, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, and Docker.

In this tutorial, we are going to set up a multi-node Kubernetes cluster on Ubuntu 16.04 servers.

Prerequisites

  • Two fresh Alibaba Cloud Elastic Compute Service (ECS) instances with Ubuntu 16.04 server installed.
  • A static IP address on each instance: 192.168.0.103 on the first instance (Master) and 192.168.0.104 on the second instance (Slave).
  • A minimum of 2GB RAM per instance.
  • A root password set up on each instance.

Launch Alibaba Cloud ECS Instance

First, log in to your Alibaba Cloud ECS Console at https://ecs.console.aliyun.com/. Create a new ECS instance, choosing Ubuntu 16.04 as the operating system with at least 2GB RAM. Connect to your ECS instance and log in as the root user.

Once you are logged into your Ubuntu 16.04 instance, run the following command to update your base system with the latest available packages.

apt-get update -y

Configuring Your ECS Server

Before starting, you will need to configure the hosts file and hostname on each server, so that the servers can communicate with each other by hostname.

First, open /etc/hosts file on the first server:

nano /etc/hosts

Add the following lines:

192.168.0.103 master-node
192.168.0.104 slave-node

Save and close the file when you are finished, then set the hostname by running the following command:

hostnamectl set-hostname master-node

Next, open the /etc/hosts file on the second server:

nano /etc/hosts

Add the following lines:

192.168.0.103 master-node
192.168.0.104 slave-node

Save and close the file when you are finished, then set the hostname by running the following command:

hostnamectl set-hostname slave-node

Next, you will need to disable swap memory on each server, because the kubelet does not support swap and will not start if swap is active or even present in your /etc/fstab file.

You can disable swap memory usage with the following command:

swapoff -a

You can disable swap permanently by commenting out the swap entry in /etc/fstab:

nano /etc/fstab

Comment out the line that defines the swap file or swap partition by adding a # at the beginning of that line.

Save and close the file when you are finished.
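If you prefer to script this step rather than edit the file by hand, a sed one-liner can comment out any swap entries. The following is a sketch that operates on a sample copy of the file; the /tmp/fstab path and its contents are just for illustration, and on a real server you would edit /etc/fstab itself (after making a backup):

```shell
# Create a sample copy of /etc/fstab for illustration; on a real server,
# back up and edit /etc/fstab itself.
cat > /tmp/fstab <<'EOF'
UUID=1234-5678 /        ext4 errors=remount-ro 0 1
/swapfile      none     swap sw                0 0
EOF

# Comment out every line whose filesystem type is "swap".
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab

cat /tmp/fstab
```

After editing the real /etc/fstab and running swapoff -a, the command swapon --show should print nothing, confirming that swap is fully disabled.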

Install Docker

Before starting, you will need to install Docker on both the master and slave server. By default, the latest version of Docker is not available in the Ubuntu 16.04 repository, so you will need to add the Docker repository to your system.

First, install required packages to add Docker repository with the following command:

apt-get install apt-transport-https ca-certificates curl software-properties-common -y

Next, download and add Docker’s GPG key with the following command:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

Next, add Docker repository with the following command:

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Next, update the repository and install Docker with the following commands:

apt-get update -y
apt-get install docker-ce -y

Install Kubernetes

Next, you will need to install kubeadm, kubectl, and kubelet on both servers. First, download and add the Google Cloud GPG key with the following command:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

Next, add Kubernetes repository with the following command:

echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Finally, update the repository and install the Kubernetes packages with the following commands:

apt-get update -y
apt-get install kubelet kubeadm kubectl -y

Configure Master Node

All the required packages are installed on both servers. Now, it’s time to configure Kubernetes Master Node.

First, initialize your cluster using the Master Node's private IP address with the following command:

kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.0.103

You should see the following output:

Note: Take note of the kubeadm join command, including the token, in the above output. It will be used to join the Slave Node to the Master Node in a later step.
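If you lose this output, both halves of the join command can be regenerated. On recent kubeadm versions, running kubeadm token create --print-join-command on the Master Node prints a complete new join command. The sha256 value is not arbitrary either: it is a hash of the cluster CA's public key. The sketch below demonstrates that derivation with openssl, using a throwaway self-signed certificate so the commands can run anywhere; on a real master you would point openssl at /etc/kubernetes/pki/ca.crt instead:

```shell
# Generate a throwaway self-signed CA certificate for demonstration only;
# on a real master, use /etc/kubernetes/pki/ca.crt instead of /tmp/ca.crt.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ca.key -out /tmp/ca.crt -subj "/CN=kubernetes-ca" 2>/dev/null

# Hash the DER-encoded public key of the CA certificate; this is the value
# kubeadm expects for --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -noout -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | sed 's/^.* //')
echo "sha256:${hash}"
```

The printed sha256:… value is what you would pass to kubeadm join as --discovery-token-ca-cert-hash.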

Next, run the following commands (they are also shown at the end of the kubeadm init output) to configure the kubectl tool:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Next, check the status of the Master Node by running the following command:

kubectl get nodes

You should see the following output:

In the above output, you should see that the Master Node is listed as NotReady. This is because the cluster does not yet have a Container Network Interface (CNI) plugin installed.

Let's deploy the Calico CNI plugin on the cluster with the following command:

kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

Make sure Calico was deployed correctly by running the following command:

kubectl get pods --all-namespaces

You should see the following output:

Now, run the kubectl get nodes command again, and you should see that the Master Node is listed as Ready.

kubectl get nodes

Output:

Add Slave Node to the Kubernetes Cluster

Next, log in to the Slave Node and add it to the cluster. Copy the kubeadm join command from the output of the Master Node initialization and issue it on the Slave Node as shown below:

kubeadm join --token 62b281.f819128770e900a3 192.168.0.103:6443 --discovery-token-ca-cert-hash sha256:68ce767b188860676e6952fdeddd4e9fd45ab141a3d6d50c02505fa0d4d44686

Once the Node is joined successfully, you should see the following output:

Now, go back to the Master Node and run kubectl get nodes again to verify that the Slave Node is now Ready:

kubectl get nodes

Output:

Deploy the Apache Container to the Cluster

Now that the Kubernetes cluster is ready, it's time to deploy an Apache container.

On the Master Node, run the following command to create an Apache deployment from the official httpd image:

kubectl create deployment apache --image=httpd

Output:

deployment "apache" created

You can list out the deployments with the following command:

kubectl get deployments

Output :

You can see more information about the Apache deployment with the following command:

kubectl describe deployment apache

Output:

Next, you will need to expose the Apache container to the network with the following command:

kubectl create service nodeport apache --tcp=80:80
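The kubectl create service shortcut above generates a Service object behind the scenes. The sketch below writes out an equivalent manifest; the field values are assumptions based on this tutorial (kubectl create deployment labels its pods app: apache, which the selector matches), and the /tmp path is just for illustration:

```shell
# Write an equivalent NodePort service manifest. "kubectl create deployment"
# labels its pods "app: apache", which this selector matches.
cat > /tmp/apache-service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: apache
spec:
  type: NodePort
  selector:
    app: apache
  ports:
    - port: 80
      targetPort: 80
EOF
```

Keeping the manifest in a file lets you version-control it and create the service with kubectl apply -f /tmp/apache-service.yaml instead. Because no nodePort is specified, Kubernetes assigns one from the default 30000-32767 range.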

Now, list out the current services by running the following command:

kubectl get svc

You should see the Apache service with its randomly assigned NodePort (30267 in this example):

Now, open your web browser and visit http://192.168.0.104:30267 (the Slave Node's IP address with the assigned NodePort). You should see the default Apache welcome page:


Congratulations! Your Apache container has been deployed on your Kubernetes Cluster.

Related Alibaba Cloud Products

After completing your Kubernetes cluster, it makes sense to scale it for production; that, after all, is the design goal of containers. To do this, we need to set up a database for our application. For production scenarios, I do not recommend running your own database. Instead, you can choose from Alibaba Cloud's suite of managed database products.

ApsaraDB for Redis is an automated and scalable tool for developers to manage data storage shared across multiple processes, applications or servers.

As a Redis-protocol-compatible tool, ApsaraDB for Redis offers exceptional read-write performance, serving reads at high speed from in-memory caches, and ensures data persistence by using both memory and hard disk storage.

Data Transmission Service (DTS) helps you migrate data between data storage types, such as relational databases, NoSQL, and OLAP. The service supports homogeneous migrations as well as heterogeneous migrations between different data storage types.

DTS can also be used for continuous data replication with high availability. In addition, DTS lets you subscribe to the change data function of ApsaraDB for RDS. With DTS, you can easily implement scenarios such as data migration, remote real-time data backup, real-time data integration, and cache refresh.

Reference:

https://www.alibabacloud.com/blog/how-to-install-and-deploy-kubernetes-on-ubuntu-1604_592719?spm=a2c41.11553776.0.0
