The Development Trends of Six Major Container Technologies in 2021

Trend 1: Container Technologies Represented by Kubernetes Become a New Interface for Cloud Computing

Cloud-Native Accelerates Enterprises from Cloud Migration to Distributed Cloud Management

  • For enterprises, containers increasingly encapsulate the infrastructure below them, shielding applications from differences in the underlying architecture.
  • The new Kubernetes interface will further align basic cloud and edge capabilities. It will also promote the richness and standardization of edge products, accelerating the adoption of container applications in edge, IoT, and 5G scenarios.

High-Density, High-Frequency Container Applications Drive Continuous Refactoring of the Cloud Computing Architecture

  • Driven by high-density, high-frequency container application scenarios, technologies such as container-optimized operating systems (OS), bare-metal collaboration, and hardware acceleration continue to evolve. They further enhance full-stack optimization and hardware-software integration of cloud computing architectures, bringing extreme agility and elasticity to cloud computing users.
  • Above the new container interface, Serverless, next-generation middleware, and next-generation application PaaS are still on the rise.

Containers Are Applied at Large Scale, Bringing New Challenges in Automatic O&M, Enterprise IT Governance, and End-to-End Security

  • As more workloads, such as AI, big data, and databases, are containerized, the key requirement for large-scale container adoption is to unify containers and infrastructure resources into a single set of IT governance capabilities covering people, budgets, assets, and permissions.
  • With more customized controllers and increasingly diverse cloud-native products, there is strong demand for keeping large-scale Kubernetes clusters stable, which urgently requires data-driven, intelligent automation of cluster O&M and fine-grained SLO capabilities.
  • DevSecOps practices continue to build an end-to-end container security network through measures such as zero-trust security, container identity authentication, lifecycle management of cloud-native products, secure containers, and confidential computing.
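One concrete building block of such an end-to-end security posture is hardening each Pod with a restrictive security context. The sketch below uses standard Kubernetes fields; the Pod name and image are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                  # hypothetical name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true                  # refuse to run as root
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true        # immutable container filesystem
        capabilities:
          drop: ["ALL"]                     # drop all Linux capabilities
        seccompProfile:
          type: RuntimeDefault              # runtime's default seccomp filter
```

Policies like this are typically enforced cluster-wide by admission control rather than left to individual manifests.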

Trend 2: High Automation of Cloud-Native Applications

  • More Automated Application Deployment and O&M: Cloud-native business types are diverse. Whether in traditional IT, the Internet, or niche fields such as web services, search, games, AI, and edge computing, each type has its own special scenarios. Common core deployment and O&M requirements must be abstracted from these scenarios and converted into more automated capabilities to deepen cloud-native adoption.
  • More Automated Risk Prevention and Control: Final-state-oriented automation is a double-edged sword: it provides declarative deployment capabilities but can also amplify misoperations. For example, when an operation goes wrong, mechanisms such as replica-count maintenance, version consistency, and cascading deletion can magnify the damage. Therefore, prevention-and-control automation capabilities, such as protection, interception, rate limiting, and circuit breaking, are needed to contain the defects and side effects of other automation capabilities and to keep risk in check as cloud-native adoption expands rapidly.
  • More Automated Operator Runtime: Kubernetes has become the de facto standard scheduling and management engine for container clusters, and its powerful, flexible extensibility plays an important role. An Operator is both a special kind of application and an automated manager for many stateful applications. In the past, however, the number of Operators in Kubernetes grew wildly while the surrounding runtime mechanisms made little progress. In 2021, Operator runtimes will be comprehensively enhanced with automation in horizontal scaling, phased upgrades, tenant isolation, security protection, and observability.
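One example of the prevention-and-control guardrails described above is a standard Kubernetes PodDisruptionBudget, which blocks voluntary disruptions (such as node drains) that would leave too few replicas running. A minimal illustrative sketch; the names and labels are hypothetical:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                # hypothetical name
spec:
  minAvailable: 2              # never let voluntary evictions drop below 2 pods
  selector:
    matchLabels:
      app: web                 # hypothetical label of the protected workload
```

A budget like this does not add functionality; it exists purely to limit the blast radius of automated or manual operations, which is exactly the prevention-and-control role described above.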

Trend 3: Application-Centered Highly Scalable Upper-Layer Platforms

  • An easy-to-use, scalable upper-layer platform built on Kubernetes and standard application models will replace traditional PaaS as the mainstream. Although the variety of software in the cloud-native ecosystem keeps growing, application-centered software is still hard to learn and use, so ease of use will be the primary breakthrough point. Beyond guaranteed scalability, the ability to integrate open-source software that uses Kubernetes as its access point with little or no modification is an important feature of this type of application management platform.
  • The standardized application construction method with separated concerns is becoming more popular. Building an application delivery platform centered on Kubernetes has gradually become a consensus, and no PaaS platform wants to shield Kubernetes. However, that does not mean all of the information in Kubernetes should be exposed to users. Builders of PaaS want to give users an optimal experience, and a standardized application model with separated concerns solves this problem: platform builders focus on Kubernetes interfaces, including Custom Resource Definitions (CRDs) and Operators, while application developers (the users) focus on a standardized, abstract application model.
  • Application middleware capabilities are further integrated, gradually decoupling application logic from middleware logic. The cloud-native ecosystem and the wider software ecosystem keep developing and changing. The middleware field is expanding from the centralized Enterprise Service Bus (ESB) to Service Mesh in Sidecar mode. Instead of providing capabilities through a thick client, application middleware is becoming a standard access layer supplied by the application management platform via Sidecars at runtime. Sidecars will be applied in more middleware scenarios beyond traffic management, routing policies, and access control. This approach centers on applications, letting businesses stay focused.
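The separation of concerns described above is embodied, for example, in the Open Application Model (OAM) as implemented by KubeVela: the platform team defines component and trait types behind the scenes, while developers write only a high-level application spec. A minimal illustrative sketch; the application name, image, and property values are hypothetical:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: example-app              # hypothetical name
spec:
  components:
    - name: frontend
      type: webservice           # component type implemented by the platform team
      properties:
        image: registry.example.com/frontend:1.0   # placeholder image
        port: 80
      traits:
        - type: scaler           # operational trait, also platform-defined
          properties:
            replicas: 3
```

The developer never touches the Deployments, Services, or CRD controllers that the platform generates from this spec, which is precisely the division of labor between builders and users described above.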

Trend 4: Rapid Cloud-Edge Integration

  • With the integration of AI, IoT, and edge computing, more types of businesses will be involved with larger scales and higher complexities.
  • As an extension of cloud computing, edge computing will be widely used in hybrid cloud scenarios, which requires future infrastructure to enable decentralization, autonomous edge facilities, and edge cloud hosting.
  • The development of infrastructures, such as 5G and IoT, will induce the growth of edge computing.
  • The “cloud” layer retains the original cloud-native management and rich product capabilities and sinks them to the edge through the cloud-edge management channel, transforming massive numbers of edge nodes and edge businesses into workloads of the cloud-native system.
  • The “edge” side can interact better with end devices through traffic management and service governance, obtaining an O&M experience consistent with the cloud. It also gains better isolation, security, and efficiency, completing the integration of business, O&M, and ecosystem.
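With plain Kubernetes primitives, for instance, an edge workload can be pinned to edge nodes via node labels and taints so that the cloud control plane manages it like any other workload. The label/taint convention and image below are hypothetical illustrations, not a fixed standard:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-agent               # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: edge-agent
  template:
    metadata:
      labels:
        app: edge-agent
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""     # hypothetical edge-node label
      tolerations:
        - key: node-role.kubernetes.io/edge  # tolerate the matching edge taint
          operator: Exists
          effect: NoSchedule
      containers:
        - name: agent
          image: registry.example.com/edge-agent:1.0   # placeholder image
```

Edge frameworks built on Kubernetes add autonomy and tunneling on top, but the scheduling model remains this familiar label-and-taint mechanism.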

Trend 5: Data Transformation Driven by Cloud-Native Is the New Theme

  • Learning from Traditional Task Schedulers: Kubernetes focuses on resource scheduling. However, compared with traditional offline schedulers like Yarn, its scheduling capabilities for big data and HPC still need improvement. Recently, under the flexible Scheduler Plugin Framework of Kubernetes, capacity scheduling and batch scheduling adapted to big data and HPC scenarios are gradually being implemented.
  • Fine-Grained Scheduling of Containerized Resources: Kubernetes clusters use container-based, plug-in-based scheduling strategies to natively support GPU resource sharing, scheduling, and isolation. In addition, NVIDIA Ampere GPUs support MIG-native scheduling in Kubernetes. These capabilities are unique to Kubernetes. Resource sharing is not limited to GPUs; it is equally essential for RDMA, NPUs, and storage devices.
  • New Scenarios of Elastic Data Tasks: Once the elasticity of big data and AI applications catches on, it also becomes important to make data itself elastic (like a fluid), so it can be moved, replicated, evicted, transformed, and managed flexibly and efficiently between storage sources, such as HDFS, OSS, and Ceph, and upper-layer cloud-native applications on Kubernetes. In this way, big data and AI applications can be implemented across diverse cloud service scenarios.
  • A Unified Cloud-Native Base for AI and Big Data: Based on atomic capabilities such as job scheduling, resource utilization optimization, and data orchestration, more AI, machine learning, and big data analysis platforms are being built in container clusters. AI and big data share many similarities in their dependence on data, their demands on computing, network, and storage resources, their workload characteristics and operation strategies, their importance to online services, and their impact on IT costs. Therefore, how to build a unified cloud-native base to support both AI and big data workloads is a question CTOs and CIOs must think through carefully.
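At the most basic level, Kubernetes exposes GPUs to such workloads as extended resources registered by a device plugin, so a training Pod requests them just like CPU or memory. A minimal illustrative sketch; the Pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job         # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: registry.example.com/trainer:1.0   # placeholder training image
      resources:
        limits:
          nvidia.com/gpu: 1      # one whole GPU, via the NVIDIA device plugin
```

Finer-grained sharing, such as the MIG partitions mentioned above, is surfaced through the same mechanism as additional extended resource names.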

Trend 6: Container Security Becomes a Top Priority
