Getting Started with Kubernetes | A Brief History of Cloud-native Technologies
By Zhang Lei, a senior technical expert at Alibaba Cloud Container Platform and an official CNCF ambassador
“In the future, software will definitely grow on the cloud.” This is the core assumption of the cloud-native concept. So-called “cloud-native” actually defines the optimal path for enabling applications to exploit the capabilities and value of the cloud. On this path, “cloud-native” is meaningless without “applications,” which act as its carrier. In addition, container technology is one of the key means of implementing this concept and sustaining the revolution in software delivery.
A Brief History of Cloud-native Technologies
- From 2004 through 2007, Google applied container technologies such as control groups (cgroups) throughout the enterprise.
- In 2008, Google merged cgroups into the mainline Linux kernel.
- In 2013, the Docker project was officially launched.
- In 2014, the Kubernetes project was officially launched. The reason is easy to see: once containers and Docker were available, people needed a way to manage these containers conveniently, quickly, and gracefully. Kubernetes was developed precisely to meet this demand. After Google and Red Hat released Kubernetes, the project grew dramatically.
- In 2015, the Cloud Native Computing Foundation (CNCF) was jointly founded by top-tier cloud computing vendors such as Google, Red Hat, and Microsoft, as well as several open-source companies. CNCF started with 22 founding members, and Kubernetes was its first hosted open-source project. Since then, CNCF has developed rapidly.
- In 2017, CNCF grew to 170 members and 14 foundation projects.
- In 2018, on its third anniversary, CNCF had 195 members, 19 foundation projects, and 11 incubating projects. Such rapid growth is rare in the entire field of cloud computing.
Status Quo of the Cloud-native Ecosystem
Discussing the cloud-native ecosystem brings a huge set of technologies to the table. The CNCF cloud-native landscape includes more than 200 projects and products. Based on this landscape, today’s cloud-native actually covers the following aspects:
1) Cloud-native technology community. For example, the more than 20 projects officially hosted by CNCF constitute the foundation of the modern cloud computing ecosystem. Among them, the Kubernetes project has become the fourth most active open-source project in the world.
2) Industry adoption of Kubernetes. Currently, major public cloud vendors around the world support Kubernetes, and more than 100 technology start-ups are making sustained investments in it. Alibaba is also moving all of its businesses to the cloud, and directly to cloud-native. This again reflects how major technology companies are embracing cloud-native.
We are at a Critical Moment for the Cloud-native Era
2019 was a critical year for the cloud-native era. Why? Let’s explain in simple terms.
In 2013, the Docker project was released. Docker popularized operating-system-level sandbox technology: it allows users to conveniently and completely package their applications, so developers can easily obtain the minimum executable unit of an application without relying on any Platform as a Service (PaaS) capabilities. This dealt a heavy blow to the conventional PaaS industry.
In 2014, the Kubernetes project was launched. This was significant in that Google “reproduced” its internal Borg and Omega systems in the open-source community and proposed the concept of “container design patterns.” It is easy to understand why Google chose to open-source Kubernetes rather than the Borg project itself: a system like Borg or Omega is too complex to be used by anyone outside Google, but its design ideas are made available to users through Kubernetes. This is also important background for why Kubernetes was open-sourced.
The period from 2015 through 2016 was the era of the “Three Kingdoms” of container orchestration: Docker Swarm, Mesos, and Kubernetes all competed in the container orchestration field. The reason for the competition is obvious: although Docker and containers have great value in themselves, it is the orchestration layer that captures commercial and cloud value, so each player had to secure a favorable position in orchestration.
Swarm and Mesos each had real strengths: Swarm had the more powerful ecosystem, whereas Mesos was more sophisticated technically. Kubernetes, in contrast, had both advantages. Kubernetes finally won the “Three Kingdoms” battle in 2017 and has been the standard for container orchestration ever since. A landmark event in this process was Docker’s announcement that it had embedded Kubernetes into its core products, after which the Swarm project gradually fell out of maintenance.
In 2018, the concept of cloud-native technologies began to emerge. This occurred because Kubernetes and containers had become de facto standards among cloud vendors, and the concept of “cloud-centric” software research and development gradually took shape.
In 2019, the situation saw another shift.
What Is Cloud-native? How Is Cloud-native Implemented?
Definition of Cloud-native
Many people are asking, “What exactly is cloud-native?”
Actually, cloud-native is a best path or best practice. In more detail, cloud-native provides users with the best practice for exploiting the capabilities and value of the cloud in a user-friendly, agile, scalable, and replicable way.
Cloud-native is a concept that provides guidance on software architecture design. Software designed around this concept has the following advantages:
- First, it is naturally “born on the cloud and grows on the cloud.”
- Second, it naturally integrates with the cloud, making the best use of cloud capabilities and letting the cloud play its full role.
The greatest value and vision of cloud-native is that future software will be born and grow on the cloud, following a new model of software development, release, and O&M that maximizes the use of cloud capabilities. Now, let’s think about why container technology is revolutionary.
In fact, the revolutionary nature of container technology in IT is very similar to that of shipping containers in transportation. Specifically, container technology enables applications to be packaged as “self-contained” units. Only in this way can applications be released on the cloud in an agile, scalable, and replicable manner that fully exerts cloud capabilities. This is the revolutionary impact of container technology on the cloud, and it is why container technology is the cornerstone of cloud-native technologies.
Technological Scope of Cloud-native
Cloud-native technologies cover the following aspects:
- Definition and Development of Cloud Applications: This includes application definition and image creation, continuous integration and continuous delivery (CI/CD) configuration, messaging, streaming, and databases.
- Orchestration and Management of Cloud Applications: This is also the focus of Kubernetes. This process includes application orchestration and scheduling, service discovery and governance, remote calls, API gateways, and service mesh.
- Monitoring and Observability: This part covers how cloud applications are monitored, logged, and traced, and how destructive tests are run on the cloud, which is the idea behind chaos engineering.
- Underlying Technologies of Cloud-native: Technologies such as container runtimes, cloud-native storage, and cloud-native networking.
- Cloud-native Toolkit: In addition to the preceding core technologies, cloud-native relies on many supporting ecosystem and peripheral tools. For example, the toolkit includes process automation and configuration management, container image registries, cloud-native security technologies, and cloud-based secret management.
- Serverless: Serverless is a special form of PaaS. It defines a more “extreme and abstract” way of writing applications, incorporating concepts such as Functions as a Service (FaaS) and Backend as a Service (BaaS). The most typical feature of FaaS and BaaS is pay-as-you-go billing, so billing models are also an important part of serverless.
Two Theories of the Cloud-native Concept
After learning about the technological scope of cloud-native, you can see that it covers a great many technologies whose essentials are nevertheless similar. In essence, cloud-native technologies rest on two theories:
- Immutable Infrastructure. This is currently implemented through container images. Immutable infrastructure means that an application’s infrastructure must be immutable, self-contained, and self-described, so that it can be migrated freely between different environments.
- Cloud Application Orchestration. Currently, cloud-native technologies are implemented based on “container design patterns” proposed by Google, which will be discussed in Kubernetes articles.
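To make the idea of a self-contained, immutable image concrete, a container image is typically described by a Dockerfile. The sketch below is a hypothetical minimal example (the base image, file names, and start command are illustrative assumptions, not from this article); everything the application needs is baked in at build time, and the resulting image is never modified in place, only rebuilt and replaced.

```dockerfile
# Hypothetical minimal image: base OS, dependencies, and application
# code are all baked in at build time and never modified afterwards.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached across rebuilds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application itself into the image.
COPY app.py .

# The image fully describes how to run the application.
CMD ["python", "app.py"]
```

To release a new version under this model, you would not patch a running container; you would rebuild the image under a new tag and replace the running instances with it.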
Infrastructure Evolution to the Cloud
The concept of “immutable infrastructure” reflects the fact that the infrastructure on which applications run is evolving toward the cloud. Consider the contrast: conventional application infrastructure is mutable in most cases. For example, to release or update software, you might SSH into the server, manually upgrade or downgrade software packages, adjust the configuration files on each server in turn, or deploy new code directly onto existing servers. In such cases, the infrastructure is constantly adjusted and modified.
In contrast, the “cloud-friendly” application infrastructure is immutable on the cloud.
On the cloud, an application’s infrastructure is immutable after the application is deployed. To update the application, you build a new image and use it to create new instances that directly replace the old ones. Direct replacement is possible because containers provide a self-contained environment that includes all the dependencies required to run the application. Therefore, nothing on the infrastructure itself needs to change; only the container image is modified. In conclusion, cloud-friendly infrastructure may be replaced at any time, because containers guarantee the agility and consistency of the application infrastructure in the cloud era.
In other words, infrastructure in the cloud era is like “livestock” that may be replaced at any time, whereas conventional infrastructure is a unique “pet” that can never be replaced and requires careful care. This is exactly the strength of immutable infrastructure in the cloud era.
Benefits of Infrastructure Evolution to the Cloud
The process in which the infrastructure evolves to be “immutable” provides us with two important benefits.
- 1) Infrastructure is consistent and reliable. The same image behaves exactly the same in different countries or regions, presenting the application with an identical OS environment. Therefore, an application does not need to care where its container is running. This is the significance of infrastructure consistency.
- 2) An image in cloud-native is self-contained: it includes all the dependencies required to run the application. This is also why an image can be migrated to any location on the cloud.
In addition, cloud-native infrastructure allows applications to be deployed and maintained in a simple and predictable way. With self-contained images, the containers that run from them can even maintain themselves, for example through operators in Kubernetes. Because the entire application is self-contained, it can be migrated to any location on the cloud, which also facilitates automating the entire process.
Furthermore, immutable infrastructure allows an application to be scaled conveniently from 1 instance to 100 instances, or even to 10,000 instances. This kind of scale-out is routine for containerized applications. Finally, immutable infrastructure makes it quick to replicate peripheral control systems and supporting components, because these components are themselves containerized and comply with the theory of immutable infrastructure.
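The replaceability and scalability described above can be sketched with a hypothetical Kubernetes Deployment (the names, labels, and image tag below are made-up placeholders). Scaling from 1 to 100 instances is just a change to the replicas field, and updating the application is a change to the image tag, which causes Kubernetes to replace the old containers rather than modify them in place:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # hypothetical application name
spec:
  replicas: 3                # scale out by raising this number, e.g. to 100
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        # Updating this tag triggers a rolling replacement of the
        # running containers; none of them is patched in place.
        image: registry.example.com/myapp:v2
```

This declarative style is what makes the replication of supporting components quick as well: the same manifest, applied anywhere, produces the same result.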
2019 — The Popularization Year of Cloud-native Technologies
Why was 2019 a critical year? We believe that 2019 was the year when the popularization of cloud-native technologies started.
In 2019, Alibaba announced that it would migrate all of its businesses to the cloud, and “directly to cloud-native.” The concept of “cloud-centric” software R&D is gradually becoming the default choice for all developers. In addition, cloud-native technologies such as Kubernetes are becoming a required course for technical staff, and a large number of related jobs are emerging.
In this context, “knowing Kubernetes” is far from enough; “understanding Kubernetes” and “understanding cloud-native architecture” have become increasingly important. Since 2019, cloud-native technologies have been used extensively and at large scale. This is also an important reason why everyone wants to learn and invest in cloud-native technologies at this point in time.
You may be wondering what prerequisite knowledge is required to learn the basics of cloud-native. Generally, the required prerequisite knowledge is divided into the following parts:
1) Linux Operating System Knowledge: Mainly, general basic knowledge is required. Linux development experience is preferred.
2) Computer and Program Design Basics: The knowledge required for an entry-level engineer or a senior undergraduate student is enough.
3) Experience in Using Containers: Basic experience with containers, such as the docker run and docker build commands, is preferred. It is best to have some experience in developing Docker-based applications.