Getting Started with Kubernetes | A Brief History of Cloud-native Technologies

By Zhang Lei, a senior technical expert for Alibaba Cloud Container Platform and an official ambassador of CNCF

“In the future, software will definitely grow on the cloud.” This is the core assumption of the cloud-native concept. So-called “cloud-native” actually defines the optimal path for enabling applications to exploit the capabilities and value of the cloud. On this path, “cloud-native” is meaningless without “applications,” which act as its carrier. In addition, container technology is one of the key means of implementing this concept and sustaining the revolution in software delivery.

A Brief History of Cloud-native Technologies

Status Quo of the Cloud-native Ecosystem

Discussing the cloud-native ecosystem brings a huge set of technologies to the table. The CNCF cloud-native landscape includes more than 200 projects and products that fit within CNCF. Based on this landscape, today’s cloud-native actually covers the following aspects:

1) The cloud-native technology community. For example, the more than 20 projects officially hosted by CNCF constitute the foundation of the modern cloud computing ecosystem. Among them, the Kubernetes project has become the fourth most active open-source project in the world.
2) Enterprise adoption of Kubernetes. Currently, all major public cloud vendors around the world support Kubernetes, and more than 100 technology start-ups are making sustained investments in it. Alibaba is also moving all of its businesses to the cloud, and directly to cloud-native. This again reflects how major technology companies are embracing cloud-native.

We are at a Critical Moment for the Cloud-native Era

2019 was a critical year for the cloud-native era. Why? Let’s walk through the history.

In 2013, the Docker project was released. Docker popularized container sandboxing built on operating-system-level primitives. It allows users to conveniently and completely package their applications, giving developers an easily obtainable, minimal executable unit of an application that does not rely on any Platform as a Service (PaaS) capabilities. This seriously disrupted the conventional PaaS industry.
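As a concrete illustration of this “complete packaging” (the application name, base image, and file layout below are hypothetical, not from the original article), a minimal Dockerfile is enough to turn an application plus its dependencies into a self-contained, runnable unit:

```dockerfile
# Hypothetical example: package a small Python web app as a self-contained image.
FROM python:3.11-slim

WORKDIR /app

# Bake the dependency list and source code into the image itself,
# so the result runs anywhere a container runtime is available.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

# The image, not the host, defines how the application starts.
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces exactly the “minimum executable unit” described above; no PaaS layer is needed to run it.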

In 2014, the Kubernetes project was launched. This was significant in the sense that Google “reproduced” its internal Borg and Omega systems in the open-source community and proposed the concept of “container design patterns.” It is easy to understand why Google chose to open-source Kubernetes rather than open-sourcing the Borg project directly: a system like Borg or Omega is too complex to be used by anyone outside Google. Through Kubernetes, however, the design ideas of Borg and Omega became available to everyone. This is an important part of the background to open-sourcing Kubernetes.

The period from 2015 through 2016 was the era of the “Three Kingdoms” of container orchestration, in which Docker Swarm, Mesos, and Kubernetes competed against each other in the container orchestration field. The reason for the competition is obvious: despite their great intrinsic value, Docker and containers alone are hard to commercialize on the cloud. To capture that value, each player had to win in orchestration.

Swarm and Mesos each had their strengths: Swarm was more powerful in terms of ecosystem, whereas Mesos was more sophisticated in terms of technology. Kubernetes combined both advantages, finally won the “Three Kingdoms” battle in 2017, and has been the standard for container orchestration ever since. A landmark event in this process was Docker announcing that it had embedded Kubernetes into its core products, after which the Swarm project gradually fell out of maintenance.

In 2018, the concept of cloud-native technologies began to emerge. This occurred because Kubernetes and containers had become de facto standards among cloud vendors, and the concept of “cloud-centric” software research and development gradually came into being.

In 2019, the situation saw another shift.

What Is Cloud-native? How Is Cloud-native Implemented?

Definition of Cloud-native

Many people are asking, “What exactly is cloud-native?”

Actually, cloud-native is a path, or a set of best practices. In more detail, cloud-native is the best practice for exploiting the capabilities and value of the cloud in a user-friendly, agile, scalable, and replicable way.

Cloud-native is a concept that provides guidance on software architecture design. Software designed around this concept is born on, and grows with, the cloud.

The greatest value and vision of cloud-native are that future software is born and grows on the cloud and complies with a new model of software development, release, and O&M, so that it maximizes the use of cloud capabilities. Now, let’s think about why container technology is revolutionary.

In fact, the revolutionary nature of container technology in IT is very similar to that of the shipping container in transportation. Specifically, container technology enables applications to be defined as “self-contained” units. Only then can applications be released on the cloud in an agile, scalable, and replicable manner that fully exerts the cloud’s capabilities. This is the revolutionary impact of container technology on the cloud, and it is why container technology is the cornerstone of cloud-native technologies.

Technological Scope of Cloud-native

Cloud-native technologies cover a broad scope, from containers and container orchestration to the ecosystem built on top of them.

Two Theories of the Cloud-native Concept

After learning about the technological scope of cloud-native, you may conclude that it includes a great many technologies whose essentials are nevertheless similar. In essence, cloud-native technologies are based on two theories. The first, immutable infrastructure, is discussed below.

Infrastructure Evolution to the Cloud

The concept of “immutable infrastructure” reflects the fact that the infrastructure on which applications run is evolving toward the cloud. Consider the contrast. Conventional application infrastructure is mutable in most cases: to release or update software, you might SSH into the server, manually upgrade or downgrade software packages, adjust configuration files on the server one by one, or deploy new code directly onto an existing server. In such cases, the infrastructure is constantly adjusted and modified in place.

In contrast, the “cloud-friendly” application infrastructure is immutable on the cloud.

On the cloud, an application’s infrastructure never changes after the application is deployed. To update the application, you build a new image, create new instances from it, and replace the old instances directly. This direct replacement works because containers provide a self-contained environment that includes all the dependencies the application needs to run. Consequently, you never modify a running container; you only modify the container image. In conclusion, cloud-friendly infrastructure can be replaced at any time precisely because containers guarantee agility and consistency for application infrastructure in the cloud era.
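As a sketch of this replace-don’t-modify workflow (the Deployment name, registry, and image tags below are hypothetical), a Kubernetes Deployment ties running instances to an immutable image reference; updating the application means changing that reference, after which Kubernetes replaces old Pods with new ones rather than patching them in place:

```yaml
# Hypothetical Deployment: three identical replicas built from one image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          # To release a new version, change this tag (e.g. to web:v2)
          # and re-apply the manifest; Kubernetes rolls out fresh Pods
          # and removes the old ones -- no Pod is ever modified in place.
          image: registry.example.com/web:v1
```

Equivalently, `kubectl set image deployment/web web=registry.example.com/web:v2` triggers the same rolling replacement.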

In other words, infrastructure in the cloud era is like livestock that can be replaced at any time, whereas conventional infrastructure is a unique “pet” that cannot be replaced and requires careful tending. This is exactly the strength of immutable infrastructure in the cloud era.

Benefits of Infrastructure Evolution to the Cloud

The evolution of infrastructure toward immutability provides two important benefits.

First, cloud-native infrastructure allows applications to be deployed and maintained in a simple and predictable way. With images and self-contained applications, a container that runs from an image can effectively maintain itself, for example through operators in Kubernetes. Because the entire application is self-contained, it can be migrated to any location on the cloud, which also facilitates automating the whole process.

Second, immutable infrastructure allows an application to be scaled conveniently from 1 instance to 100 instances, or even to 10,000 instances; this scale-out process is routine for containerized applications. Finally, immutable infrastructure makes it easy to quickly replicate peripheral control systems and supporting components, because these components are also containerized and follow the same theory of immutable infrastructure.
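As a sketch of how lightweight this scale-out is in practice (the deployment name “web” is hypothetical, and the commands assume access to a running Kubernetes cluster), scaling becomes a one-line declaration rather than a provisioning project:

```shell
# Hypothetical deployment named "web"; every replica is an identical,
# self-contained copy created from the same immutable image.
kubectl scale deployment/web --replicas=100

# Inspect the result; Kubernetes converges the actual instance count
# to the declared one.
kubectl get deployment/web -o wide
```

Because each replica is stamped from the same image, going from 1 to 10,000 instances changes only a number, not the contents of any machine.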

2019 — The Popularization Year of Cloud-native Technologies

Why was 2019 a critical year? We believe that 2019 was the year when the popularization of cloud-native technologies started.

In 2019, Alibaba announced that it would migrate all of its businesses to the cloud, and “directly to cloud-native.” The concept of “cloud-centric” software R&D is gradually becoming the default choice for all developers. In addition, cloud-native technologies such as Kubernetes are becoming required knowledge for engineers, and a large number of related jobs are emerging.

In this context, “knowing about Kubernetes” is far from enough; “understanding Kubernetes” and “understanding cloud-native architecture” have become increasingly important. Since 2019, cloud-native technologies have been used extensively and at scale. This is also an important reason why so many people want to learn and invest in cloud-native technologies at this point in time.

Prerequisite Knowledge

You may be wondering what prerequisite knowledge is required to learn the basics of cloud-native. Generally, the required prerequisite knowledge is divided into the following parts:

1) Linux operating system knowledge: mainly general basic knowledge; Linux development experience is preferred.
2) Computer science and programming basics: the knowledge expected of an entry-level engineer or a senior undergraduate student is enough.
3) Experience in using containers: basic experience with containers, such as the `docker run` and `docker build` commands, is preferred. It is best to have some experience in developing Docker-based applications.
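For reference (the image and tag names below are hypothetical, and the commands assume a local Docker daemon), the two commands mentioned above cover the core build-and-run loop you should be comfortable with:

```shell
# Build an image named "myapp" from the Dockerfile in the current directory.
docker build -t myapp:v1 .

# Run a container from that image, mapping host port 8080 to the
# container's port 8080 and removing the container when it exits.
docker run --rm -p 8080:8080 myapp:v1
```

If these two commands feel routine, you have the container experience this series assumes.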

Original Source:

Follow me to keep abreast of the latest technology news, industry insights, and developer trends.