What Improvements Container Instances and Engines Will Make in 2020
This is the fifth post in the Seven Major Cloud Native Trends for 2020 series.
In the last month of 2019, AWS finally released Fargate for EKS, confirming the industry’s recognition of serverless container instances as underlying runtime resources for cloud-based Kubernetes products. As the underlying execution environments, container instances let you focus on building your own business and services without configuring or managing servers, eliminating complex infrastructure O&M. At the same time, genuine pay-as-you-go billing and real-time scaling reduce users’ costs.
Currently, whether it is Amazon’s Fargate, Microsoft’s ACI, or Alibaba Cloud’s ECI, the specific architectures used to connect these products to Kubernetes still differ. For instance, Fargate transparently passes node information through to provide full support for Kubernetes features, while ACI and ECI connect to Kubernetes through virtual kubelets that manage container instances.
However, regardless of the interconnection method, container instance products must still deliver on elasticity, low cost, and Kubernetes compatibility. With auto scaling, your services scale in real time on demand, so you do not need to size instances and cluster capacity in advance or pay for idle server reserves; you pay only for the resources you actually use. Meanwhile, Kubernetes has become the de facto standard in container orchestration, so the applicability of a container instance product is ultimately determined by how compatible it is with Kubernetes features.
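The pay-as-you-go argument can be made concrete with a back-of-the-envelope comparison. The prices below are hypothetical round numbers chosen for illustration; real Fargate, ACI, and ECI pricing varies by region and resource shape.

```go
package main

import "fmt"

// Hypothetical prices for illustration only; real cloud pricing differs.
const (
	vmHourly       = 0.10 // always-on reserved node, per hour
	instanceHourly = 0.14 // serverless container instance, per hour, billed per second
)

// dailyCost compares a reserved node that runs 24 hours a day against a
// container instance that is billed only while the workload is active.
func dailyCost(activeHours float64) (reserved, payPerUse float64) {
	return vmHourly * 24, instanceHourly * activeHours
}

func main() {
	r, p := dailyCost(6) // workload busy 6 hours a day
	fmt.Printf("reserved node: $%.2f/day, container instance: $%.2f/day\n", r, p)
	// prints: reserved node: $2.40/day, container instance: $0.84/day
}
```

Even at a higher hourly rate, the per-second-billed instance wins whenever the workload is idle most of the day, which is exactly the scenario real-time scaling targets.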
As we enter the new decade, we believe container instance products will improve in three ways in 2020: greater elasticity, lower user costs, and deeper integration with Kubernetes. More cloud native applications will also migrate to Kubernetes plus container instances, where they can enjoy the benefits of cloud native technologies.
By comparing similar products from different vendors, we can identify their common design features:
- One instance corresponds to one pod.
- The product is connected to Kubernetes.
- A secure container serves as the underlying container engine.
Among these features, the use of secure containers as the underlying container engine is the capability that most attracts enterprises. The isolation provided by secure container technology became increasingly important in 2019. As an isolation layer, secure containers not only improve the security of the cloud native platform, but also significantly enhance maintainability, service quality, and user data protection. However, the fundamental reason users choose cloud native is the agility that containers bring: fast scheduling, fast startup, and flexible resource usage. Secure container technology cannot yet match traditional containers in these respects.
Open source secure container engines such as Kata Containers and gVisor made significant progress in 2019. Kata Containers explicitly proposed “pursuing cloud native-oriented virtualization” as a goal for 2020:
- Share resources between sandboxes while keeping the sandbox boundaries clear.
- Instantly and dynamically provide resources to the sandbox on demand, instead of using a fixed allocation method such as partitioning.
- Let user-space tools, the VMM, and even the host kernel work together to provide services for applications in the sandbox.
In 2020, virtualization-based containers such as Kata will gradually move away from traditional virtualization and become more application-centric. We also expect process-level virtualization, represented by gVisor, to focus more on application optimization. We do not believe a single, unified secure container technology will emerge in 2020. However, looking ahead to the first half of the 2020s, we expect software and hardware to evolve together, allowing mainstream container engines to combine strong isolation with good performance.