What Is the Difference Between KubeVela and PaaS? An In-Depth Interpretation


By Deng Hongchao, Senior Technical Expert at Alibaba Cloud and a core maintainer of the OAM and KubeVela projects, known as “the Second Kubernetes Operator”.

After the release of the KubeVela project, many community members around the world asked a similar question: Is KubeVela just another PaaS product, like Heroku? The question comes up so often because the out-of-the-box experience KubeVela offers is genuinely close to that of a PaaS; it can almost be described as a “Heroku on Kubernetes”.

Today, I’d like to talk about this topic: What is the difference between KubeVela and PaaS?

Note: The PaaS mentioned in this article includes both classic PaaS products, such as Heroku, and various Kubernetes-based “cloud-native” PaaS. Although their underlying implementations differ, they provide similar usage interfaces and experiences to users. OpenShift is an exception: it is a full-fledged Kubernetes distribution that is arguably more complex than Kubernetes itself, so it does not fall into the easy-to-use, user-oriented PaaS category discussed in this article.

Here is the conclusion: Although KubeVela can bring users an experience similar to that of PaaS, KubeVela is not a PaaS product.

Why Is KubeVela Not a PaaS Product?

The short answer: although the user experience of a PaaS is good, a PaaS itself is usually not extensible. Take a recent Kubernetes-based PaaS, Rancher Rio, as an example. It provides a good application deployment experience: rio run quickly deploys a containerized application and automatically sets up a domain name and access rules. But what happens when we want Rio to support more capabilities to meet different user demands?

For example:

  • Can Rio help run a scheduled task?
  • Can Rio help run an OpenKruise CloneSet workload?
  • Can Rio help run a MySQL Operator?
  • Can Rio perform horizontal scaling based on custom metrics?
  • Can Rio perform a progressive gray release with Flagger and Istio?

The key point is that these capabilities are common in the Kubernetes ecosystem, and some are even built into Kubernetes. However, supporting any of them in a PaaS requires a round of development on the PaaS itself, and earlier assumptions and design decisions often force a large-scale refactoring on top of that.

For example, suppose I have a PaaS that assumes all applications run as Deployments, so its release and scaling functions are implemented directly against the Deployment API. Now users ask for in-place updates, which requires CloneSet support in the PaaS, and the whole system may have to be refactored. The problem is even worse for O&M capabilities. Say my PaaS supports a blue-green deployment strategy, which already requires a lot of interaction and integration between the PaaS and traffic management, monitoring, and other systems. If I now want it to support a new strategy, canary release, all of that interaction and execution logic has to be rebuilt, which is a huge amount of work.

Of course, not every PaaS is completely inextensible. PaaS products with strong engineering capabilities, such as Cloud Foundry and Heroku, have their own plug-in mechanisms and plug-in marketplaces. On the premise of preserving the user experience and keeping capabilities controllable, they open up certain extension points, for example allowing users to attach their own databases or develop simple add-ons. However, no matter how such a plug-in mechanism is designed, it remains a small, closed ecosystem exclusive to that PaaS. In the cloud-native era, the open source community has already built an almost “unlimited” capability pool: the Kubernetes ecosystem. It dwarfs any small ecosystem exclusive to a PaaS.

The preceding problems can be collectively referred to as the “capability dilemma” of PaaS.


In contrast, KubeVela was designed from the beginning to use the entire Kubernetes ecosystem as its “plug-in center”, and every one of its built-in capabilities is deliberately designed as an independent, pluggable plug-in. Such a highly extensible model requires careful design and implementation. For example, how does KubeVela ensure that a completely independent trait can be bound to a specific workload type? How does it check whether two independent traits conflict with each other? These problems are solved by using the Open Application Model (OAM) as KubeVela’s model layer. In short, OAM is a highly extensible model for application definition and capability assembly.
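For instance, a TraitDefinition declares which workload types a trait may attach to. The snippet below is only a sketch with hypothetical names (the trait name and the workload references are illustrative, not taken from this article):

    apiVersion: core.oam.dev/v1alpha2
    kind: TraitDefinition
    metadata:
      name: autoscale                # hypothetical trait name
    spec:
      appliesToWorkloads:            # the trait may only be bound to these workload types
        - webservice
        - deployments.apps
      # definitionRef and other fields omitted for brevity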

Moreover, once the definition file for a workload type or trait has been designed and written, it can be published on GitHub and used by any KubeVela user in the world in their own Appfile. For more information, see the documentation of $ vela cap (the command group for managing plug-in capabilities).
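As a rough sketch of that workflow (the exact subcommands, registry URL, and capability name below are assumptions and may differ across KubeVela versions):

    # Register a capability center, i.e. a Git repository hosting definition files
    $ vela cap center config my-center https://github.com/oam-dev/catalog/tree/master/registry

    # List the capabilities published in that center
    $ vela cap ls my-center

    # Install one of them into the current KubeVela platform
    $ vela cap install my-center/kubewatch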

KubeVela therefore advocates a future-oriented cloud-native platform architecture, in which:

  1. The application platform has a completely modular architecture, and all of its capabilities are pluggable; the core framework provides a standardized capability encapsulation and assembly process through its model layer.
  2. This encapsulation and assembly process can seamlessly integrate any application management capability from the cloud-native ecosystem, letting platform engineers focus on developing capabilities and encapsulating them on top of the model. As a result, the platform team can respond quickly to users’ ever-changing application management demands while exposing easy-to-use abstractions at the platform layer.

Overall Architecture and Pluggable Capability Mechanism of KubeVela

[Image: overall architecture of KubeVela]

Architecturally, KubeVela consists of just one controller that runs on Kubernetes as a plug-in. It provides Kubernetes with application-layer abstractions and, on top of them, a user-facing interface called the Appfile. The core of the Appfile, and of KubeVela’s whole operating mechanism, is OAM. On top of OAM, KubeVela provides system administrators with a capability assembly process based on registration and self-discovery, which lets them plug any capability in the Kubernetes ecosystem into KubeVela. In this way, KubeVela can adapt to different scenarios (such as an AI PaaS or a database PaaS) by “matching one core framework with different capabilities”.

Specifically, system administrators and platform developers use this process to register any Kubernetes API resource (including CRDs) and its corresponding controller on KubeVela as a “capability”, and then encapsulate that capability into a user-facing abstraction (that is, a section of the Appfile) with the CUE template language.

Next, let’s demonstrate how to plug the alerting mechanism of the community project KubeWatch into KubeVela as an alert trait.

Step 1: Register the Platform Capability as an OAM Object

Since KubeWatch is an alerting mechanism, it naturally maps to a trait. It can be registered by writing a TraitDefinition YAML file:

[Image: TraitDefinition YAML that registers the kubewatch trait]
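A minimal sketch of such a file is shown below; note that the CRD name and API group of KubeWatch are assumptions that depend on the KubeWatch operator actually installed:

    apiVersion: core.oam.dev/v1alpha2
    kind: TraitDefinition
    metadata:
      name: kubewatch
      annotations:
        definition.oam.dev/description: "Send notifications about workload events"
    spec:
      appliesToWorkloads:
        - "*"                                  # the trait may attach to any workload type
      definitionRef:
        name: kubewatches.labs.bitnami.com     # CRD provided by the KubeWatch operator (assumed name)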

The server-side runtime built into KubeVela watches for this TraitDefinition registration event and incorporates the new capability into the platform’s management.

After this step, KubeWatch is registered and available in the KubeVela platform. However, it still needs to be exposed to users, so the next step is to define the interface through which this capability is used.

Step 2: Write a CUE Template to Encapsulate the Exposed Interface

[Image: CUE template that defines the kubewatch trait’s user-facing parameters]
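A sketch of such a template is shown below. It declares the parameters exposed to users and describes how they render into the underlying custom resource; the KubeWatch group, version, and field names are assumptions:

    // Parameters exposed to users in the Appfile.
    parameter: {
        // Webhook URL (for example, a Slack incoming webhook) that receives the alerts.
        webhook: string
    }

    // The KubeWatch custom resource rendered from the user's input.
    output: {
        apiVersion: "labs.bitnami.com/v1alpha1"   // assumed group/version of the KubeWatch CRD
        kind:       "KubeWatch"
        spec: handler: webhook: url: parameter.webhook
    }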

Add the template to the Definition file and apply it with $ kubectl apply -f in Kubernetes. KubeVela then automatically recognizes and processes the new capability, and users can declare it directly in their Appfile. For example, a user can have alert messages sent to a designated Slack channel:

[Image: Appfile using the kubewatch trait to send alerts to Slack]
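A hypothetical Appfile using the new trait might look like this (the application name, image, port, and webhook URL are placeholders for illustration):

    name: testapp
    services:
      testsvc:
        type: webservice                # built-in workload type
        image: crccheck/hello-world     # any container image
        port: 8000
        kubewatch:                      # the newly registered trait
          webhook: "https://hooks.slack.com/services/<your-webhook-token>"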

As you can see, the kubewatch section is a brand-new capability added by a third party, and managing such Kubernetes extension capabilities through the KubeVela platform is just this simple. With KubeVela, platform builders can quickly assemble a PaaS on Kubernetes and encapsulate any Kubernetes capability into an upper-layer abstraction for end users.

The preceding example shows only a small part of KubeVela’s extensibility. In subsequent articles, I will introduce more details of the KubeVela capability assembly process, such as:

  • How to define conflict and cooperation relationships between capabilities?
  • How to quickly write a CUE template file?
  • How to define powerful “capability modules” based on the CUE language and install them into KubeVela?
  • And much more!

Summary

The KubeVela project is an official project of the OAM community, maintained by several senior cloud-native community members from Alibaba and Microsoft. It is also the core component of Alibaba Cloud EDAS and of multiple internal application management platforms that supported the Double 11 shopping festival. KubeVela aims to build a future-oriented cloud-native PaaS architecture and to bring best practices such as horizontal extensibility and an application-centric design to everyone. It also hopes to promote, and even lead, the evolution of the cloud-native community at the application layer.

