From Serverless Containers to Serverless Kubernetes

Industry Trends

Source: https://blogs.gartner.com/tony-iams/containers-serverless-computing-pave-way-cloud-native-infrastructure/
  • Extend serverless container application scenarios and product portfolios, and migrate more common container workloads to serverless container services.
  • Promote the standardization of serverless containers to ease users' concerns about cloud vendor lock-in.

Typical Scenarios and Value for Customers

  • Elastic Scaling of Online Businesses: Serverless containers support auto-scaling for online businesses based on ASK, scaling out by 500 application instances within 30 seconds to cope with both expected and unexpected traffic spikes. For example, several online education customers used the scaling capability of ASK and ECI to support their businesses during the COVID-19 pandemic.
  • O&M-free Serverless AI Platform: An intelligent and O&M-free AI application platform based on ASK allows developers to create their own algorithm model development environments. The platform is scalable as needed, which greatly reduces the complexity of system maintenance and capacity planning.
  • Serverless Big Data Computing: We built a serverless big data computing platform based on ASK. It runs data computing applications, such as serverless Spark and Presto, to flexibly meet the needs of rapidly growing business departments for elasticity, strong isolation, and zero maintenance across various computing tasks.
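The elastic-scaling scenario above can be sketched with a standard HorizontalPodAutoscaler. This is a minimal illustration, not ASK-specific configuration: the resource names are hypothetical, and the point is only that in a nodeless cluster each new replica is backed by an ECI instance, so no node capacity planning is needed before a burst.

```yaml
# Minimal sketch: scale a hypothetical "web" Deployment on CPU utilization.
# maxReplicas mirrors the 500-instance burst described above.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # hypothetical Deployment
  minReplicas: 10
  maxReplicas: 500
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```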

Thoughts on the Serverless Container Architecture

Secrets of Kubernetes’ Success

  • Declarative APIs: Kubernetes uses declarative APIs, so developers can focus on their applications rather than on system execution details. For example, resource types such as Deployment, StatefulSet, and Job abstract different types of workloads. The level-triggered implementation based on declarative APIs gives Kubernetes a more robust distributed system implementation than an edge-triggered one would.
  • Scalable Architecture: All Kubernetes components are implemented and interact with each other based on consistent and open APIs. Third-party developers provide field-specific extended implementations through Custom Resource Definition (CRD) or Operator, which greatly improves the capabilities of Kubernetes.
  • Portability: With abstractions such as Service load balancers, Ingress, the Container Network Interface (CNI), and the Container Storage Interface (CSI), Kubernetes shields business applications from differences in infrastructure implementations and allows applications to migrate flexibly.
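The declarative model above can be seen in any ordinary manifest. The fragment below (names and image are illustrative) states only the desired end state; Kubernetes controllers continuously reconcile the actual state toward it, which is the level-triggered behavior described above.

```yaml
# Desired state only: "3 replicas of this container should exist".
# Controllers reconcile toward this spec regardless of how the
# current state diverged (crash, scale-down, node loss).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo                   # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: app
        image: nginx:1.25      # illustrative image
```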

Design of Serverless Kubernetes

From Node-centric to Nodeless

  • Scheduler: The traditional Kubernetes scheduler selects a proper node from a batch of nodes to schedule pods. The selected node must satisfy various conditions such as resources and affinity. In serverless Kubernetes, no nodes are used, and resources are only limited by the underlying elastic computing inventory. Therefore, we only need to retain some basic concepts such as zone affinity. This greatly simplifies scheduler operations and significantly improves the execution efficiency. In addition, we have customized and extended the scheduler to orchestrate and optimize serverless workloads, which reduces computing costs while ensuring application availability.
  • Scalability: Scalability in Kubernetes is affected by many factors, such as the number of nodes. To ensure Kubernetes compatibility, AWS EKS on Fargate uses a model with a 1:1 ratio between pods and nodes (one pod is run on one virtual node), which limits the scalability of the cluster. A single cluster supports a maximum of 1,000 pods. This cannot meet the needs of large-scale application scenarios. ASK maintains compatibility with Kubernetes while allowing a single cluster to easily support 10,000 pods. The scalability of conventional Kubernetes clusters is subject to many other factors as well. For example, when kube-proxy deployed on a node supports the ClusterIP service, any endpoint change may lead to a “change storm” throughout the cluster. Serverless Kubernetes uses innovative methods to limit change propagation, which will be continuously optimized.
  • Cloud-based Controller Implementation: Based on cloud services provided by Alibaba Cloud, we have reimplemented the behavior of kube-proxy, CoreDNS, and the Ingress Controller, reducing system complexity.
  • Deep Optimization for Workloads: To fully utilize serverless containers, we need to further optimize the features of workloads.
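The "basic concepts such as zone affinity" retained by the serverless scheduler can be expressed with the standard zone topology label, as sketched below. This is a generic Kubernetes illustration under the assumption that ASK honors the well-known label; the exact zone-selection mechanism in ASK may differ, and the pod name and zone value are hypothetical.

```yaml
# Sketch: pin a pod to a specific zone using the standard
# topology.kubernetes.io/zone label. In a nodeless cluster this is
# one of the few scheduling constraints that still applies.
apiVersion: v1
kind: Pod
metadata:
  name: zone-pinned            # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: topology.kubernetes.io/zone
            operator: In
            values:
            - cn-hangzhou-h    # example zone
  containers:
  - name: app
    image: nginx:1.25          # illustrative image
```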

Serverless Container Infrastructure

  • Lower Computing Costs: The scaling cost of serverless containers is lower than that of ECS instances, and the cost of long-running applications is almost equal to that of ECS subscription plans.
  • Higher Scaling Efficiency: The scaling speed of container groups is much faster than that of ECS instances.
  • Greater Scaling Flexibility: Unlike traditional ECS scaling, a large-scale container application may require elastic computing capacity of tens of thousands of cores at once.
  • Similar Computing Performance: Container groups must provide similar computing performance as ECS instances with the same specifications.
  • Lower Migration Costs: Serverless containers are fully integrated with the existing container application ecosystem.
  • Lower Usage Costs: Serverless containers have fully automated security and O&M capabilities.

Secure Container Runtime Based on Lightweight Micro VMs

Pods and Standard and Open APIs

Pooling for ECI and ECS Instances

Challenges Facing Serverless Containers

  • Creation and assembly of underlying virtualization resources: Through end-to-end trace optimization, resource preparation on ECI can be completed in less than a second.
  • Micro VM operating system startup duration: The Kangaroo container engine profoundly tailors and optimizes the operating system in container scenarios, which significantly reduces the operating system startup time.
  • Image download duration: Downloading an image from a Docker image repository and decompressing it to a local path is time-consuming; depending on the image size, it can take 30 seconds to several minutes. In conventional Kubernetes, the worker node caches downloaded images locally, so an image is not downloaded and decompressed again on the next startup. To ensure cost efficiency and strong elasticity, ECI and ECS instances adopt a pooling policy and a computing-storage separation architecture, which means local disks cannot be used to cache container images in the traditional way. Therefore, we implemented an innovative solution that turns container images into data disk snapshots. If an image snapshot exists when an instance starts, a read-only disk is created from the snapshot and automatically mounted at startup, and the mounted disk serves as the rootfs for the container application. Based on the architecture of Apsara Distributed File System 2.0 and the I/O performance of Alibaba Cloud ESSDs, the image loading time is reduced to less than 1 second.
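From a user's perspective, the snapshot mechanism above surfaces through ECI's image cache feature. The sketch below is written from memory of the ECI documentation and should be verified against it: the ImageCache apiVersion, kind, and the pod annotation are assumptions, and the cache name and image list are hypothetical.

```yaml
# Assumed sketch of an ECI image cache (verify apiVersion/kind against
# the current ECI docs): pre-builds a disk snapshot of the listed images.
apiVersion: eci.alibabacloud.com/v1
kind: ImageCache
metadata:
  name: web-cache              # hypothetical name
spec:
  images:
  - nginx:1.25                 # illustrative image
---
# Assumed annotation asking ECI to match a suitable image cache
# automatically, so the pod's rootfs mounts from the snapshot.
apiVersion: v1
kind: Pod
metadata:
  name: fast-start             # hypothetical name
  annotations:
    k8s.aliyun.com/eci-image-cache: "true"
spec:
  containers:
  - name: app
    image: nginx:1.25
```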

Future Outlook

Alibaba Cloud

Follow me to keep abreast of the latest technology news, industry insights, and developer trends. Alibaba Cloud website: https://www.alibabacloud.com