Kubernetes on CoreOS Cluster

By Alex Mungai Muchiri, Alibaba Cloud Tech Share Author. Tech Share is Alibaba Cloud’s incentive program to encourage the sharing of technical knowledge and best practices within the cloud community.

The Kubernetes system manages containerized applications in clustered environments, handling an application's entire lifecycle from deployment to scaling. We have previously looked at Kubernetes basics; in this session, we look at how to get started with Kubernetes on CoreOS. This tutorial demonstrates Kubernetes 1.5.1, but keep in mind that versions change frequently. To see your installed version, run the command below:

    kubectl version

Prerequisites and Goals

We shall start with a basic CoreOS cluster. Alibaba Cloud already provides configurable clusters, so we shall not dwell much on the details of provisioning. What we need in our cluster is at least one master node and one worker node. We shall assign our nodes specialized roles within Kubernetes, although the underlying machines are otherwise interchangeable. One of the nodes, the master, will run the controller manager and the API server.

This tutorial draws on the implementation guides available on the CoreOS website. With your CoreOS cluster all set up, let us now proceed. We are going to use two sample bare-metal nodes in our cluster.

Master Node

Our Node 1 shall be the master. We start by connecting to the CoreOS node over SSH.

Create a directory called certs.

Paste the script below into the certs directory as cert_generator.sh.

Make the script executable.

When the executable script runs, it prompts you to enter the details of your nodes. Thereafter, it generates the certificates your Kubernetes installation will run on.
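
The original generator script is not reproduced in this extract. Below is a minimal openssl-based sketch of what such a script can look like; the file names (ca.pem, apiserver.pem, worker.pem and their keys) and the prompts are illustrative assumptions, not the original script.

    #!/bin/bash
    # cert_generator.sh -- illustrative sketch only; your script's prompts
    # and file names may differ.

    # Cluster certificate authority.
    openssl genrsa -out ca-key.pem 2048
    openssl req -x509 -new -nodes -key ca-key.pem -days 365 \
      -out ca.pem -subj "/CN=kube-ca"

    # Prompt for node details, as the original script does.
    read -p "Master node public IP: " MASTER_IP

    # OpenSSL config so the API server certificate carries the right SANs.
    printf '%s\n' \
      '[req]' \
      'req_extensions = v3_req' \
      'distinguished_name = req_distinguished_name' \
      '[req_distinguished_name]' \
      '[v3_req]' \
      'basicConstraints = CA:FALSE' \
      'keyUsage = nonRepudiation, digitalSignature, keyEncipherment' \
      'subjectAltName = @alt_names' \
      '[alt_names]' \
      'DNS.1 = kubernetes' \
      'DNS.2 = kubernetes.default' \
      'IP.1 = 10.3.0.1' \
      "IP.2 = ${MASTER_IP}" \
      > openssl.cnf

    # API server certificate.
    openssl genrsa -out apiserver-key.pem 2048
    openssl req -new -key apiserver-key.pem -out apiserver.csr \
      -subj "/CN=kube-apiserver" -config openssl.cnf
    openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out apiserver.pem -days 365 \
      -extensions v3_req -extfile openssl.cnf

    # Worker certificate.
    openssl genrsa -out worker-key.pem 2048
    openssl req -new -key worker-key.pem -out worker.csr \
      -subj "/CN=kube-worker"
    openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out worker.pem -days 365

Create the directory, save the script into it, make it executable, and run it:

    mkdir certs && cd certs
    # paste the script into cert_generator.sh, then:
    chmod +x cert_generator.sh
    ./cert_generator.sh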

Create a directory to hold the generated keys, then copy the certificates from certs into /etc/kubernetes/ssl like so:
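
Assuming the file names from the sketch above:

    sudo mkdir -p /etc/kubernetes/ssl
    sudo cp certs/ca.pem certs/apiserver.pem certs/apiserver-key.pem \
      /etc/kubernetes/ssl/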

Network Configuration

Next, let us configure flannel to read its local configuration from /etc/flannel/options.env and source its cluster-level configuration from etcd. Create a file with the contents below (a sketch follows the list) and note the following:

  1. Replace ${ADVERTISE_IP} with your machine's public IP.
  2. Replace ${ETCD_ENDPOINTS} with your etcd endpoints.
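
A minimal /etc/flannel/options.env, following the CoreOS guides this tutorial is based on:

    FLANNELD_IFACE=${ADVERTISE_IP}
    FLANNELD_ETCD_ENDPOINTS=${ETCD_ENDPOINTS}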

You will then create the drop-in below, which applies the configuration above when flannel starts: /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf
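
Its contents symlink the options file into place before flanneld starts, as in the CoreOS guides:

    [Service]
    ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env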

Docker Configuration

You need to have Docker configured so that flannel can manage the cluster's pod network, which requires flannel to run before Docker starts. Let us apply a systemd drop-in, /etc/systemd/system/docker.service.d/40-flannel.conf
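
A sketch of the drop-in, per the CoreOS guides:

    [Unit]
    Requires=flanneld.service
    After=flanneld.service
    [Service]
    EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env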

Create a Docker CNI options file, /etc/kubernetes/cni/docker_opts_cni.env
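
Per the CoreOS guides, it clears Docker's own bridge and IP-masquerade options, since CNI manages these:

    DOCKER_OPT_BIP=""
    DOCKER_OPT_IPMASQ=""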

If you are using flannel networking, set up the flannel CNI configuration in /etc/kubernetes/cni/net.d/10-flannel.conf:
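
A typical flannel CNI delegate configuration, as used in the CoreOS guides:

    {
        "name": "podnet",
        "type": "flannel",
        "delegate": {
            "isDefaultGateway": true
        }
    }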

Create the Kubelet Unit

The kubelet is responsible for starting and stopping pods and other machine-level tasks. It communicates with the API server, which runs on the master node, using the TLS certificates we installed earlier.

Create /etc/systemd/system/kubelet.service, noting the replacements below; a sketch of the unit follows the list.

  1. Replace ${ADVERTISE_IP} with this node's public IP.
  2. Replace ${DNS_SERVICE_IP} with 10.3.0.10
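
A sketch of the unit for Container Linux's kubelet-wrapper, abridged from the CoreOS guides for v1.5.x; the KUBELET_VERSION tag and the exact flag set are version-dependent assumptions:

    [Service]
    Environment=KUBELET_VERSION=v1.5.1_coreos.0
    ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
    ExecStart=/usr/lib/coreos/kubelet-wrapper \
      --api-servers=http://127.0.0.1:8080 \
      --register-schedulable=false \
      --cni-conf-dir=/etc/kubernetes/cni/net.d \
      --network-plugin=cni \
      --container-runtime=docker \
      --allow-privileged=true \
      --pod-manifest-path=/etc/kubernetes/manifests \
      --hostname-override=${ADVERTISE_IP} \
      --cluster_dns=${DNS_SERVICE_IP} \
      --cluster_domain=cluster.local
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target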

Set Up the kube-apiserver Pod

The API server is where much of the workload is handled and most activity takes place. It is stateless: it handles requests, provides feedback, and stores results in etcd when necessary.

Create /etc/kubernetes/manifests/kube-apiserver.yaml, noting the replacements below; a sketch of the manifest follows the list.

  1. Replace ${ETCD_ENDPOINTS} with the etcd endpoints of your CoreOS hosts.
  2. Replace ${SERVICE_IP_RANGE} with 10.3.0.0/24
  3. Replace ${ADVERTISE_IP} with this node's public IP.
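
A sketch of the manifest, abridged from the CoreOS guides for v1.5.x; the hyperkube image tag and the exact flag set are version-dependent assumptions:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-apiserver
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-apiserver
        image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
        command:
        - /hyperkube
        - apiserver
        - --bind-address=0.0.0.0
        - --etcd-servers=${ETCD_ENDPOINTS}
        - --allow-privileged=true
        - --service-cluster-ip-range=${SERVICE_IP_RANGE}
        - --secure-port=443
        - --advertise-address=${ADVERTISE_IP}
        - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
        - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
        - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --client-ca-file=/etc/kubernetes/ssl/ca.pem
        - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        volumeMounts:
        - mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
          readOnly: true
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
      volumes:
      - name: ssl-certs-kubernetes
        hostPath:
          path: /etc/kubernetes/ssl
      - name: ssl-certs-host
        hostPath:
          path: /usr/share/ca-certificates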

Set Up the kube-proxy Pod

As we did with the API server, we are going to run a proxy, which is responsible for directing traffic to services and pods. The proxy keeps itself up to date through regular communication with the API server, and it runs on both master and worker nodes in the cluster.

Begin by creating /etc/kubernetes/manifests/kube-proxy.yaml; no extra configuration is necessary.
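
A sketch, under the same hyperkube image assumption:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-proxy
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-proxy
        image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
        command:
        - /hyperkube
        - proxy
        - --master=http://127.0.0.1:8080
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ssl-certs-host
          readOnly: true
      volumes:
      - name: ssl-certs-host
        hostPath:
          path: /usr/share/ca-certificates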

Set Up the kube-controller-manager Pod

Create a /etc/kubernetes/manifests/kube-controller-manager.yaml that uses the TLS certificates on disk.
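
A sketch, again under the same image assumption; the service-account key and root CA flags point at the certificates installed earlier:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-controller-manager
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-controller-manager
        image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
        command:
        - /hyperkube
        - controller-manager
        - --master=http://127.0.0.1:8080
        - --leader-elect=true
        - --service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
        - --root-ca-file=/etc/kubernetes/ssl/ca.pem
        volumeMounts:
        - mountPath: /etc/kubernetes/ssl
          name: ssl-certs-kubernetes
          readOnly: true
      volumes:
      - name: ssl-certs-kubernetes
        hostPath:
          path: /etc/kubernetes/ssl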

Set Up the kube-scheduler Pod

Next, set up the scheduler, which tracks unscheduled pods via the API, allocates them to machines, and updates the API with its decisions.

Create a /etc/kubernetes/manifests/kube-scheduler.yaml file.
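
A sketch, under the same image assumption:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-scheduler
        image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
        command:
        - /hyperkube
        - scheduler
        - --master=http://127.0.0.1:8080
        - --leader-elect=true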

Load Changed Units

Let us instruct systemd to rescan all the changes we have implemented, like so:
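
    sudo systemctl daemon-reload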

Configuring the Flannel Network

We have already mentioned that etcd stores the cluster-level configuration for flannel. Accordingly, let us set an IP range for our pod network. etcd is already running, so now is the right time to set it; alternatively, start your etcd first. Note the following for the command shown after the list:

  1. In place of $POD_NETWORK, use 10.2.0.0/16
  2. In place of $ETCD_SERVER, use a URL (http://ip:port) from $ETCD_ENDPOINTS
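
Following the CoreOS guides, the network key is set with a single etcd PUT:

    curl -X PUT -d "value={\"Network\":\"$POD_NETWORK\",\"Backend\":{\"Type\":\"vxlan\"}}" \
      "$ETCD_SERVER/v2/keys/coreos.com/network/config"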

Restart flannel for the changes to take effect; note that this, by extension, restarts the Docker daemon and may affect running containers:
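
    sudo systemctl restart flanneld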

Start Kubelet

We have everything configured, and kubelet is ready to be started. Starting it also brings up the API server, proxy, controller manager, and scheduler pods from the manifests we created:
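
    sudo systemctl start kubelet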

Make sure kubelet starts after reboots:
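
    sudo systemctl enable kubelet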

Worker Node

For the worker node, we begin by creating a directory and placing the SSL keys we generated onto the node, copying the certificates from certs to /etc/kubernetes/ssl like so:
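
Assuming the file names from the certificate sketch earlier:

    sudo mkdir -p /etc/kubernetes/ssl
    sudo cp certs/ca.pem certs/worker.pem certs/worker-key.pem /etc/kubernetes/ssl/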

Network Configuration

As we did previously, flannel's local configuration should be sourced from /etc/flannel/options.env while etcd stores the cluster-level configuration. Replicate this file and make the necessary adjustments:

  1. Replace ${ADVERTISE_IP} with this machine's public IP.
  2. Replace ${ETCD_ENDPOINTS} with your etcd endpoints.

Create a drop-in so that flannel uses the configuration above when it restarts: /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf
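
Both files are identical in form to their master-node counterparts:

    # /etc/flannel/options.env
    FLANNELD_IFACE=${ADVERTISE_IP}
    FLANNELD_ETCD_ENDPOINTS=${ETCD_ENDPOINTS}

    # /etc/systemd/system/flanneld.service.d/40-ExecStartPre-symlink.conf
    [Service]
    ExecStartPre=/usr/bin/ln -sf /etc/flannel/options.env /run/flannel/options.env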

Docker Configuration

Next, we configure Docker so that flannel can manage the cluster's pod network. The method is the same as on the master node: flannel must start before Docker does.

Let us apply a systemd drop-in, /etc/systemd/system/docker.service.d/40-flannel.conf.

Create a Docker CNI options file, /etc/kubernetes/cni/docker_opts_cni.env.

If you are relying on flannel networking, set up the flannel CNI configuration in /etc/kubernetes/cni/net.d/10-flannel.conf. All three files are shown together below.
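
A sketch of all three, identical to the master-node versions shown earlier:

/etc/systemd/system/docker.service.d/40-flannel.conf:

    [Unit]
    Requires=flanneld.service
    After=flanneld.service
    [Service]
    EnvironmentFile=/etc/kubernetes/cni/docker_opts_cni.env

/etc/kubernetes/cni/docker_opts_cni.env:

    DOCKER_OPT_BIP=""
    DOCKER_OPT_IPMASQ=""

/etc/kubernetes/cni/net.d/10-flannel.conf:

    {
        "name": "podnet",
        "type": "flannel",
        "delegate": {
            "isDefaultGateway": true
        }
    }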

Create the Kubelet Unit

In the worker node, let us create a kubelet service like so:

Create /etc/systemd/system/kubelet.service, noting the replacements below; a sketch of the unit follows the list.

  1. Replace ${ADVERTISE_IP} with this node's public IP.
  2. Replace ${DNS_SERVICE_IP} with 10.3.0.10
  3. Replace ${MASTER_HOST} with the master node's address.
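
A sketch for the worker node, again abridged from the CoreOS guides for v1.5.x; the KUBELET_VERSION tag and exact flag set are version-dependent assumptions:

    [Service]
    Environment=KUBELET_VERSION=v1.5.1_coreos.0
    ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests
    ExecStart=/usr/lib/coreos/kubelet-wrapper \
      --api-servers=https://${MASTER_HOST} \
      --cni-conf-dir=/etc/kubernetes/cni/net.d \
      --network-plugin=cni \
      --container-runtime=docker \
      --register-node=true \
      --allow-privileged=true \
      --pod-manifest-path=/etc/kubernetes/manifests \
      --hostname-override=${ADVERTISE_IP} \
      --cluster_dns=${DNS_SERVICE_IP} \
      --cluster_domain=cluster.local \
      --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
      --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
      --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target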

Set Up the kube-proxy Pod

Create a /etc/kubernetes/manifests/kube-proxy.yaml file without any configuration settings, like so:
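
A sketch, mirroring the master-node proxy manifest but authenticating through the kubeconfig created in the next step; ${MASTER_HOST} is assumed to be the master's bare address:

    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-proxy
      namespace: kube-system
    spec:
      hostNetwork: true
      containers:
      - name: kube-proxy
        image: quay.io/coreos/hyperkube:v1.5.1_coreos.0
        command:
        - /hyperkube
        - proxy
        - --master=https://${MASTER_HOST}
        - --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ssl-certs
        - mountPath: /etc/kubernetes/worker-kubeconfig.yaml
          name: kubeconfig
          readOnly: true
        - mountPath: /etc/kubernetes/ssl
          name: etc-kube-ssl
          readOnly: true
      volumes:
      - name: ssl-certs
        hostPath:
          path: /usr/share/ca-certificates
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/worker-kubeconfig.yaml
      - name: etc-kube-ssl
        hostPath:
          path: /etc/kubernetes/ssl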

Set Up Kubeconfig

Kubernetes components communicate securely using authentication settings defined in a kubeconfig file. Here, the configuration read by kubelet and the proxy enables them to communicate with the API server.

First, create the file /etc/kubernetes/worker-kubeconfig.yaml:
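
A minimal sketch, assuming the certificate file names used earlier in this tutorial:

    apiVersion: v1
    kind: Config
    clusters:
    - name: local
      cluster:
        certificate-authority: /etc/kubernetes/ssl/ca.pem
    users:
    - name: kubelet
      user:
        client-certificate: /etc/kubernetes/ssl/worker.pem
        client-key: /etc/kubernetes/ssl/worker-key.pem
    contexts:
    - context:
        cluster: local
        user: kubelet
      name: kubelet-context
    current-context: kubelet-context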

Start Services

The worker services are now ready to start.

Load Changed Units

Let us instruct systemd to rescan the units on disk so that our changes take effect, like so:
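
    sudo systemctl daemon-reload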

Start Kubelet and Flannel

Start flannel and kubelet; kubelet in turn starts the proxy:
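
    sudo systemctl start flanneld
    sudo systemctl start kubelet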

Ensure the services start on each boot:
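
    sudo systemctl enable flanneld
    sudo systemctl enable kubelet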

Conclusion

If you have made it this far, congratulations! You can now configure Kubernetes to run on your CoreOS cluster. For more information on Kubernetes, please refer to the other articles in this series. As a reminder, be sure to substitute your own values for those in this tutorial, especially the IP addresses, for the setup to run on your cluster. Cheers!


Reference: https://www.alibabacloud.com/blog/kubernetes-on-coreos-cluster_594228
