Cloud Native Is Here, but Not Evenly Distributed

Preface

The Internet has changed the way people live, work, study, and entertain themselves. The rapid development of technology has driven the cloud computing market to evolve from early physical machines to virtual machines and bare metal instances, and then to containers, while Internet architectures have evolved from centralized to distributed, and then to cloud-native architectures. Nowadays, the term “cloud-native” has been elevated by enterprises and developers to the status of an industry standard and the future of cloud computing. If I were to describe cloud-native technologies in one sentence, it would be “the future is here, but not evenly distributed.”

Going Back to the Source

Before getting into this topic, let’s look at how two industry influencers, Pivotal Software and the CNCF, define “cloud native.”

Pivotal Software

Pivotal Software is a leader in the field of agile development (it has previously done contract work for Google) and has an impressive pedigree (it was founded by EMC and VMware). It launched Pivotal Cloud Foundry (a big hit in the PaaS field between 2011 and 2013), maintains the Spring Framework, and is a pioneer in cloud-native technologies. Pivotal frames “cloud native” around four pillars: DevOps, continuous delivery, microservices, and containers.

CNCF

The Cloud Native Computing Foundation (CNCF) is a well-known industry organization, a foundation co-founded by leading open-source infrastructure companies such as Google and Red Hat. Its original mission was to compete in a container market then dominated by Docker. Through the Kubernetes project, the CNCF has maintained undisputed leadership in open-source orchestration and has become the champion in defining and promoting cloud-native architectures. In the CNCF’s definition, cloud-native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds, with containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplifying this approach.

Reaching a Consensus

As the community continues to grow the ecosystem and push the boundaries of cloud-native architectures, the definition of cloud native keeps changing. Different companies (such as Pivotal and the CNCF) define the concept differently, and even a single company may use different definitions at different times. If the industry keeps evolving at the relentless pace we have come to expect (think of how quickly hardware advances under Moore’s Law), the definition of cloud native will continue to shift in the future.

  • Having established itself as the innovator and reformer for the cloud-native ecosystem and technologies, CNCF emphasizes technologies, toolchains, and underlying infrastructure and has a great influence on its target audience made up of developers in the open-source community, Internet companies, and emerging businesses. It adopts a bottom-up approach.

My Personal View of Cloud Native

From Cloud-native Thinking to Cloud-native Applications

From the birth of the Internet to the present, we have adopted Internet thinking and then Internet+ thinking (which is essentially Internet-native thinking). When enterprises reach a certain stage, they need to develop value thinking (or value-native thinking). In the same way, cloud computing practitioners need to develop cloud-native thinking: in any technological reform or widespread adoption of new methods, abstract paradigms always precede tangible solutions. With that thinking in place, several questions follow:

  • How can we build cloud-native applications in a way that breaks away from the traditional methods?
  • What are the key characteristics of cloud-native applications?
  • What are the key technologies adopted in a cloud-native technology framework?

Capabilities of the Cloud

The emergence of cloud computing is closely tied to the development and maturity of virtualization technology. Cloud computing is an emerging IT infrastructure delivery model that relies on virtualization to standardize, abstract, and scale IT hardware resources and software components into product-like services that users pay for as they go. In a sense, this reconstructs the IT industry’s supply chain. Its service delivery models include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), Function as a Service (FaaS), and Data as a Service (DaaS). At the application layer, serverless offerings in particular come in two flavors:

  • Application-oriented solutions: Developers provide only the business application, with no need to purchase or manage server resources. Examples include Google Cloud Run and Alibaba Cloud EDAS Serverless (a minimal sketch of this handler-only model follows the list).
  • Container-oriented solutions: An upgraded version of application-oriented solutions that uses container images to shield environment differences and offers greater flexibility. Examples include Alibaba Cloud Serverless Kubernetes and AWS Fargate.
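
To make the application-oriented model concrete, here is a minimal sketch (not tied to any particular vendor) of the kind of handler-only code a developer ships to such a platform: the business logic is an HTTP handler, and the platform supplies the servers, scaling, and routing. Reading the port from the PORT environment variable is a common convention on such platforms, but treat the details here as assumptions rather than any specific product’s contract.

```go
// main.go - a handler-only "application": business logic with no server management.
// The platform (an application-oriented serverless service) builds, deploys,
// scales, and routes traffic to it; the developer ships only this code.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

// greet is the entire "business logic" in this toy example.
func greet(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "world"
	}
	fmt.Fprintf(w, "hello, %s\n", name)
}

func main() {
	// Many application-oriented platforms inject the listening port via $PORT.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/", greet)
	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```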

Construction of Cloud-native Applications

The preceding section discussed the strong capabilities of the cloud. Compared with traditional applications, new cloud applications need to adapt to these capabilities at every stage of the application lifecycle: software architecture design, development, build, deployment, delivery, monitoring, and O&M. I will discuss this process in terms of the issues users must face.

How to Design Cloud-native Architecture

Great architectures come into being by evolving and progressing over time; they are not created all at once. It is therefore meaningless to talk about architectural design in the abstract, divorced from the problems an architecture must solve. Working through the problems listed below is a good way to understand the design of cloud-native architectures:

  • Use a governance framework and monitoring solutions to solve communication problems between microservices (a toy service-registry sketch follows this list).
  • Use container services to solve the problem of deploying the many applications a microservice architecture produces.
  • Use Kubernetes to solve the problem of orchestration and scheduling for container services.
  • Use a service mesh to solve the intrusiveness of the microservice framework.
  • Run the service mesh on Kubernetes to provide better underlying support.
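
As a toy illustration of the first problem above (service-to-service communication under a governance framework), the sketch below shows the bare core of a service registry: instances register themselves, and callers look up an endpoint instead of hard-coding addresses. Real frameworks (Spring Cloud, Dubbo, or a mesh) add health checks, richer load balancing, and much more; the service and address names here are invented for illustration.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// Registry is a minimal, in-memory service registry: service name -> instance addresses.
type Registry struct {
	mu        sync.RWMutex
	instances map[string][]string
}

func NewRegistry() *Registry {
	return &Registry{instances: make(map[string][]string)}
}

// Register adds one instance address under a service name.
func (r *Registry) Register(service, addr string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.instances[service] = append(r.instances[service], addr)
}

// Lookup returns a random registered instance, i.e., trivial client-side load balancing.
func (r *Registry) Lookup(service string) (string, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	addrs := r.instances[service]
	if len(addrs) == 0 {
		return "", fmt.Errorf("no instances registered for %q", service)
	}
	return addrs[rand.Intn(len(addrs))], nil
}

func main() {
	reg := NewRegistry()
	reg.Register("orders", "10.0.0.11:8080")
	reg.Register("orders", "10.0.0.12:8080")

	addr, err := reg.Lookup("orders")
	if err != nil {
		panic(err)
	}
	fmt.Println("calling orders service at", addr)
}
```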

How to Deliver Cloud-native Applications

We introduced DevOps to address the problem of continuously delivering applications. Whether DevOps has truly taken hold can be checked against questions like these:

  • Tools: Are there mature O&M tool platforms and monitoring systems that allow the development team to easily handle online issues, faults, and rollbacks?
  • Culture: Do developers take direct ownership of the online user experience and accept responsibility for problems caused by their code defects, O&M failures, or code changes?
  • Delivery measurement: Are KPIs such as deployment frequency, change lead time, service recovery time, and change failure rate in line with industry benchmarks? (A sketch for computing these metrics follows this list.)
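
The delivery KPIs above are straightforward to compute once deployments are recorded somewhere. The sketch below uses invented data and an assumed record format to show how deployment frequency, change failure rate, and lead time might be derived from a simple deployment log emitted by a CI/CD pipeline; it is an illustration, not any particular tool’s schema.

```go
package main

import (
	"fmt"
	"time"
)

// Deployment is a hypothetical record a CI/CD pipeline might emit per release.
type Deployment struct {
	CommittedAt time.Time // when the change was committed
	DeployedAt  time.Time // when it reached production
	Failed      bool      // did it cause an incident or require a rollback?
}

func main() {
	now := time.Now()
	deploys := []Deployment{
		{now.Add(-72 * time.Hour), now.Add(-70 * time.Hour), false},
		{now.Add(-48 * time.Hour), now.Add(-47 * time.Hour), true},
		{now.Add(-24 * time.Hour), now.Add(-20 * time.Hour), false},
	}

	var failed int
	var totalLeadTime time.Duration
	for _, d := range deploys {
		if d.Failed {
			failed++
		}
		totalLeadTime += d.DeployedAt.Sub(d.CommittedAt)
	}

	days := 7.0 // measurement window in days
	fmt.Printf("deployment frequency: %.2f per day\n", float64(len(deploys))/days)
	fmt.Printf("change failure rate:  %.0f%%\n", 100*float64(failed)/float64(len(deploys)))
	fmt.Printf("average lead time:    %s\n", (totalLeadTime / time.Duration(len(deploys))).Round(time.Minute))
}
```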

Key Characteristics of Cloud-native Applications

  • Elastic scalability: Using elastic scaling policies, applications can scale automatically within seconds, dynamically allocating or releasing resources in line with business workloads. This helps users significantly reduce expenses. The key technologies are the lightweight containerization of services and the immutable infrastructure achieved through container services.
  • Fault tolerance: The applications support load balancing, automatic traffic shaping, degradation and circuit breaking, automatic scheduling of abnormal traffic, fault isolation, and automatic failover (a minimal circuit-breaker sketch follows this list).
  • Observability: The applications provide a wide range of fine-grained monitoring metrics, such as real-time metrics, tracing analysis, and logs, with second-level precision for automatic alerting and persistent queries.
  • Release stability: To cope with the stability risks caused by frequent changes, the applications have a fully automated change release system that supports canary (gray-scale) and blue-green release policies, establishes a monitoring baseline before, during, and after changes, and is capable of circuit breaking and automatic rollback when a change misbehaves.
  • Ease of management: To move from manual to automatic maintenance, the applications support automatic exception analysis and diagnosis without the need to log on to servers.
  • Ultimate user experience: The applications provide an all-in-one experience with smooth, easy-to-use features such as application allocation and creation, resource application, environment configuration, development and testing, release, monitoring and alerting, and troubleshooting. These features can be combined like building blocks, avoiding complex operations.
  • Flexible billing: The applications support various pricing strategies, such as pay-as-you-go (by traffic, storage, calls, or duration), subscription (by day, month, or year), reserved capacity, and preemptible (spot) instances. The business system can dynamically switch to the optimal billing method based on actual conditions.
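
To illustrate the fault-tolerance characteristic referenced above, here is a deliberately minimal circuit breaker: after a threshold of consecutive failures it “opens” and rejects calls immediately, then allows another attempt once a cool-down has passed. Production systems would rely on a mature library or a mesh-level policy rather than this sketch, which only shows the mechanism; thresholds and the flaky call are made up.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// Breaker is a minimal circuit breaker: it trips open after maxFailures
// consecutive errors and stays open until the cooldown has elapsed.
type Breaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
}

var ErrOpen = errors.New("circuit breaker is open")

func NewBreaker(maxFailures int, cooldown time.Duration) *Breaker {
	return &Breaker{maxFailures: maxFailures, cooldown: cooldown}
}

// Call runs fn unless the breaker is open, and updates the failure count.
func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.failures >= b.maxFailures && time.Since(b.openedAt) < b.cooldown {
		b.mu.Unlock()
		return ErrOpen // fail fast instead of piling load onto a sick dependency
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures == b.maxFailures {
			b.openedAt = time.Now()
		}
		return err
	}
	b.failures = 0 // a success closes the breaker again
	return nil
}

func main() {
	b := NewBreaker(3, 2*time.Second)
	flaky := func() error { return errors.New("downstream timeout") }

	// The first three calls fail through to the dependency; the rest are rejected fast.
	for i := 0; i < 5; i++ {
		fmt.Println("call", i, "->", b.Call(flaky))
	}
}
```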

Key Technologies of Cloud-native Architectures

Containers

The earliest form of container, the chroot jail, dates back to 1979. In 2008 the idea was redefined as LXC (Linux Containers), which combined the resource management of cgroups with the view isolation of namespaces to achieve process-level isolation. The greatest innovation in container technology, however, is the container image popularized by Docker. An image packages the complete environment an application needs to run (the file system of an entire operating system) and is consistent, lightweight, portable, and language independent. It lets users “build once, run anywhere” (that is, identically in development, testing, and production environments), completely standardizes building, distribution, and delivery, and lays the foundation for immutable infrastructure.
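
The process-level isolation mentioned above can be demonstrated in a few lines of Go. The sketch below is Linux-only, requires root privileges, and deliberately omits the cgroup limits, root filesystem, and mount setup a real runtime would add; it simply starts a shell in its own UTS and PID namespaces, so the shell can change the hostname and see itself as PID 1 without affecting the host. Treat it as a toy under those assumptions, not a container runtime.

```go
//go:build linux

// A toy demonstration of namespace isolation -- nowhere near a real container
// runtime, which would also configure cgroups, a root filesystem, mounts, etc.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// New UTS namespace: hostname changes are invisible to the host.
	// New PID namespace: the shell sees itself as PID 1.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID,
	}

	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```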

Kubernetes

Kubernetes is to cloud computing and cloud-native architectures what Linux is to a single machine: an operating system for the cluster. It orchestrates containers across a fleet of machines, handling scheduling, scaling, self-healing, and service discovery through a declarative API.
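
That declarative API can be consumed programmatically as well as through kubectl. The sketch below, assuming the official client-go library and a kubeconfig at the default path, lists the Deployments in a namespace and prints their replica status; the namespace name is a placeholder, and error handling is kept minimal for brevity.

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Load cluster credentials from the default kubeconfig (~/.kube/config).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List Deployments in the "default" namespace and print desired vs. ready replicas.
	deploys, err := clientset.AppsV1().Deployments("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range deploys.Items {
		fmt.Printf("%-30s %d/%d replicas ready\n", d.Name, d.Status.ReadyReplicas, d.Status.Replicas)
	}
}
```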

Service Mesh

A service mesh aims to decouple business logic from non-business logic so that developers can focus solely on the business logic. The approach takes the client SDK capabilities that are unrelated to business logic (such as service discovery, routing, load balancing, and traffic shaping and degradation) out of the application and moves them into a separate sidecar proxy process, pushing them down into the infrastructure as a middleware mesh (similar to the shift from TDDL to DRDS). With this approach, an application faces fewer risks from framework changes, becomes more streamlined and lightweight, and starts faster, which also makes it easier to move the application to a serverless architecture. The mesh itself can be iterated and upgraded on its own schedule, which facilitates global service governance, phased releases, and monitoring. The mesh boundary can also be extended to a database mesh, cache mesh, and message mesh. In this way, inter-service communication becomes truly standardized, much as TCP/IP standardized communication between hosts.
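
The essence of the sidecar is that cross-cutting concerns live in a proxy process next to the application rather than in an SDK inside it. The sketch below, using only the Go standard library, is a toy sidecar: it sits in front of a local application, forwards requests to it, and adds one piece of “governance” logic (request logging and latency measurement) that the application never sees. Real sidecars such as Envoy do far more (mTLS, retries, traffic shifting, telemetry), and the port numbers here are arbitrary choices for the example.

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// The local application the sidecar fronts (the port is an arbitrary example).
	app, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(app)

	// A cross-cutting concern implemented in the proxy, invisible to the application:
	// per-request logging and latency measurement. A real mesh would add mTLS,
	// retries, circuit breaking, traffic shifting, and metrics export here.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		log.Printf("%s %s -> upstream in %s", r.Method, r.URL.Path, time.Since(start))
	})

	// Inbound traffic enters through the sidecar, not the application itself.
	log.Println("sidecar listening on :15001, forwarding to :8080")
	log.Fatal(http.ListenAndServe(":15001", handler))
}
```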

Infrastructure as Code (IaC)

The infrastructure and its complete life cycle (creation, destruction, scaling, and replacement) are described in code and then orchestrated, executed, and managed with appropriate tools, such as Terraform, ROS, and CloudFormation. For example, users only need to define resources in code to create all the basic resources an application needs (such as Elastic Compute Service (ECS), Virtual Private Cloud (VPC), ApsaraDB for RDS, Server Load Balancer (SLB), and ApsaraDB for Redis), without repeatedly switching between console pages to apply for and purchase them. With this approach, the infrastructure code is version-controlled, reviewable, testable, traceable, and able to be rolled back; it maintains consistency and prevents configuration drift; and it is easy to share, template, and scale. Beyond improving overall O&M efficiency and quality, IaC gives users a full picture of their infrastructure at a glance.
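
Under the hood, every IaC tool does some version of the same thing: compare the resources declared in code with the resources that actually exist, then plan the creations and deletions needed to close the gap. The sketch below is a toy version of that plan step only; the resource names are invented, and real tools such as Terraform also handle dependencies, in-place updates, and state storage.

```go
package main

import "fmt"

// plan compares declared (desired) resources with the actual ones and returns
// what must be created and what must be destroyed -- the heart of a "plan" step.
func plan(desired, actual []string) (create, destroy []string) {
	want := map[string]bool{}
	have := map[string]bool{}
	for _, r := range desired {
		want[r] = true
	}
	for _, r := range actual {
		have[r] = true
	}
	for _, r := range desired {
		if !have[r] {
			create = append(create, r)
		}
	}
	for _, r := range actual {
		if !want[r] {
			destroy = append(destroy, r)
		}
	}
	return create, destroy
}

func main() {
	// Desired state as declared in code vs. what currently exists in the account.
	desired := []string{"vpc/main", "ecs/web-1", "ecs/web-2", "rds/orders-db"}
	actual := []string{"vpc/main", "ecs/web-1", "slb/legacy-lb"}

	create, destroy := plan(desired, actual)
	fmt.Println("to create: ", create)
	fmt.Println("to destroy:", destroy)
}
```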

Cloud IDE

With a cloud IDE, the entire R&D lifecycle moves to the cloud, providing a complete experience that integrates development, debugging, pre-release and production environments, and CI/CD releases. The cloud platform can also offer a variety of code repository templates, speed up compilation through distributed computing, and intelligently provide code recommendation and optimization, automatic bug scanning, and identification of logical and systemic risks. It is easy to imagine that the development model of the cloud era, quite different from today’s local development environments, will bring higher development efficiency, faster iteration, and better quality control.

Implementation of Cloud-native Architectures

As a member of the GTS delivery team tasked with empowering enterprises to succeed in their digital transformation, I have been thinking about ways to help traditional enterprises transform themselves and embrace cloud-native architectures by drawing on the experience of the Internet industry. Here is a roadmap for implementing cloud-native architectures.

  • Step 2: Build a PaaS platform. Alibaba Cloud Container Service for Kubernetes (ACK) shields O&M staff from the underlying resources and the complexity of O&M and provides high-performance and scalable container application management capabilities. It also provides developers with an environment in which they can build applications so as to accelerate application development, realize PaaS, and achieve business agility, elasticity, fault-tolerance, and observability.
  • Step 3: Implement DevOps based on PaaS. The PaaS platform boosts business agility by improving infrastructure agility, while DevOps does the same through process delivery. DevOps (Apsara DevOps in Alibaba Cloud) enables continuous integration and delivery of applications, accelerates the creation of value streams, and achieves fast business iteration.
  • Step 4: Establish microservice governance. The microservice-based transformation divides complex services into small, independent, loosely coupled units that can be deployed and updated independently, truly improving agility at the business layer. Users can implement microservices with Alibaba Cloud EDAS, which supports frameworks such as Spring Cloud and Dubbo. As technologies continue to develop, however, the optimal solution for microservice governance is now the service mesh (Alibaba Cloud ASM, which is compatible with Istio).
  • Step 5: Implement advanced management of microservices. The microservice architecture implements API management, distributed integration of microservices, and the automation of microservice processes. API management empowers enterprises to establish a multi-channel ecosystem (with self-owned channels, WeChat, and Tmall) and ultimately build an API economy. The distributed integration and process automation of microservices allow enterprises to set up a unified business mid-end.
  • Step 2: Set up multiple data centers. As the business grows and the data center becomes more critical, enterprises build a disaster recovery center or an active-active data center architecture to ensure that services remain available even when one data center fails completely.
  • Step 3: Construct a hybrid cloud. As public clouds become increasingly popular, many enterprise customers are migrating their front-end business systems to public clouds or using cloud services from multiple cloud service providers. In this way, the underlying IT infrastructure eventually becomes a hybrid cloud or multi-cloud implementation.

Afterword

In the cloud era, we require novel thinking and concepts to properly understand application architectures and IT infrastructure in order to correctly answer the question “what does it mean to be cloud-native.” The future is undoubtedly cloud-native. Therefore, in addition to tools, enterprises seeking to transform themselves need a complete philosophy that progresses from concepts to methodologies and then to tools. Only in this way can we better embrace the arrival of the cloud era and maximize the value of cloud-native architectures.
