New Frontend Technology Revolution Triggered by Serverless


By Jiang Hang (Zhuanggong), a frontend engineer at Alibaba Cloud

1) Evolution of the Frontend Development Model

1.1 Dynamic Page Rendering Based on Templates

1.2 Frontend and Backend Division Based on AJAX

1.3 Frontend Engineering Based on Node.js

Around 2013, React.js, Angular, and Vue.js released their early versions one after another, and page-based development shifted to component-based development. After development, tools such as Webpack could be used for packaging and building, and the build output could be published through a Node.js-based command line tool. Frontend development became increasingly standardized and engineering-oriented.

1.4 Full-stack Development Based on Node.js

On the other hand, both the frontend and the backend kept evolving. Almost from the time of Node.js' birth, the backend began moving from the monolithic application model to the microservices model. This changed the division of labor between the frontend and the backend. With the rise of microservices, backend interfaces gradually became atomic: microservice interfaces were no longer oriented directly to pages, and calling them from the frontend became complicated.

In response, the Backend For Frontend (BFF) model was developed. A BFF layer was added between the microservices and the frontend: it aggregated and tailored the microservice interfaces and exposed them to the frontend. Because the BFF layer does not take on core backend responsibilities and sits closer to the frontend, frontend engineers often implemented it in Node.js. This was also part of the wider adoption of Node.js on the server side.

1.5 Summary

2) Frontend Solutions in Serverless Services

2.1 Introduction to Serverless

In fact, Serverless is already associated with the frontend, but you may not be aware of it. Take Alibaba Cloud Content Delivery Network (CDN) as an example. After you publish static resources to the CDN, you do not need to consider the quantity and distribution of the CDN nodes or how they implement load balancing and network acceleration. Therefore, the CDN is Serverless for frontend engineers.

Object Storage Service (OSS) is similar to CDN. You only need to upload files to OSS and then use them directly without considering how the service accesses files or controls their permissions. Therefore, the OSS is also Serverless for frontend engineers. Some third-party API services are also Serverless because you do not need to consider servers while using these services.

However, this intuitive understanding is not enough. We need an accurate definition. From a technical perspective, Serverless is the combination of Function as a Service (FaaS) and Backend as a Service (BaaS).

Serverless = FaaS + BaaS.


In short, FaaS refers to platforms that run functions, such as Alibaba Cloud Function Compute and AWS Lambda. BaaS refers to backend cloud services, such as cloud databases, object storage services, and message queues. BaaS greatly simplifies application development.

Think of Serverless as functions that use BaaS and run on FaaS.
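Concretely, you can picture a Serverless function as a small Node.js handler that is invoked by an event and talks to BaaS services. The sketch below is illustrative only: the (event, context) handler style is common across FaaS platforms, and the storage client stands in for any BaaS SDK.

```javascript
// A sketch only: `storage` is a hypothetical wrapper around a BaaS client
// (for example, an object storage SDK); it is not a specific platform's API.
const storage = require('./storage-client');

module.exports.handler = async (event, context) => {
  // `event` is whatever triggered the function: an HTTP request, a timer, a storage event...
  const user = JSON.parse(event.toString()); // assume the event body is a JSON payload

  // Use a BaaS service instead of running storage infrastructure yourself.
  await storage.put(`users/${user.id}.json`, JSON.stringify(user));

  // The FaaS platform handles servers, scaling, and O&M; we only return a result.
  return { statusCode: 200, body: 'saved' };
};
```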

Serverless services include the following features:

  • Event-driven: On a FaaS platform, function execution is driven by events.
  • Stateless: Each execution may run in a different container, so memory and data cannot be shared between executions. To share data, you must use a third-party service such as Redis.
  • No maintenance: With Serverless, you do not need to worry about servers or their operations and maintenance (O&M), which is a core idea of Serverless.
  • Low cost: You only pay for each function execution. If no function is executed, no cost is incurred and no server resources are consumed.

2.2 Architecture of Frontend Solutions in Serverless Services

[Figure: major Serverless services and the corresponding frontend solutions]

The preceding figure shows the major Serverless services and the corresponding frontend solutions, with the infrastructure at the bottom and development tools above it. Cloud computing vendors provide the infrastructure: the cloud platforms themselves, various BaaS services, and the FaaS platforms that run functions. Frontend engineers are users of these Serverless services, so the development tools matter most to them; these tools are used to develop, debug, and deploy Serverless applications.

2.3 Framework

2.4 Web IDE

2.5 Command Line Tools

There are two types of command line tools:

1) Tools provided by cloud computing vendors, such as AWS's aws CLI, Azure's az CLI, and Alibaba Cloud's Fun.
2) Tools provided by Serverless frameworks, such as the Serverless Framework's serverless CLI and ZEIT's now. Serverless, Fun, and most other such tools are implemented in Node.js.

The following are examples of typical command line operations:

  • Creation: scaffold a new project, for example with the Serverless Framework's serverless create --template aws-nodejs or Alibaba Cloud Fun's fun init.
  • Deployment: publish the functions to the FaaS platform, for example with serverless deploy or fun deploy.
  • Debugging: run a function locally, for example with serverless invoke local or fun local invoke.

2.6 Scenarios

2.7 Comparison of Different Serverless Services

[Figure: comparison of Serverless services by supported languages, triggers, and prices]

The preceding figure compares different Serverless services in terms of supported languages, triggers, and prices. The results show that the services have both differences and similarities.

  • Almost all Serverless services support languages such as Node.js, Python, and Java.
  • Almost all services support triggers such as HTTP, object storage, scheduled tasks, and message queues.
  • These triggers are tied to each platform's own backend services. For example, Alibaba Cloud's object storage trigger fires on Alibaba Cloud OSS events, while AWS's fires on S3 events, so they are not interchangeable. This lack of uniform standards is one of the problems with Serverless today.
  • Almost all platforms use the same billing model. As mentioned earlier, Serverless services are charged by usage: typically around 1 million free calls per month and then about RMB 1.3 per million calls, plus about 400,000 GB-s (GB-seconds) of free execution time per month and then about RMB 0.0001108 per GB-s. Therefore, Serverless is very cost-effective when the application volume is small.

3) Frontend Development Based on Serverless

First, let’s review the traditional development process.


In the traditional process, frontend engineers write the pages and backend engineers write the interfaces. Once the interfaces are written and deployed, the two sides debug them together; after debugging, the interfaces are tested and published, and O&M engineers then maintain the system. The whole chain involves many roles and takes a long time, so communication and coordination are costly.

Based on Serverless, we can simplify backend development. Traditional backend applications are split into functions. Backend engineers only need to write functions and deploy the functions to Serverless services. Subsequently, no server O&M is required, which significantly lowers the threshold for backend development. Therefore, only one frontend engineer is needed to complete all the development.


When frontend engineers write backend code on Serverless, they do need some backend knowledge. For complex backend systems, or in scenarios where Serverless is not a good fit, dedicated backend development is still required, and the backend is pushed even further back.

4) BFF Based on Serverless

The following figure shows a universal BFF architecture.

[Figure: a universal BFF architecture]
Figure source: https://www.thoughtworks.com/insights/blog/bff-soundcloud

The bottom layer consists of the various backend microservices, and the top layer consists of the various frontend applications. The BFF layer sits between them and is usually developed by frontend engineers. This architecture solves the interface-coordination problem, but it creates new ones: if a separate BFF application is built for each device, a lot of work is duplicated; frontend engineers, who used to focus only on pages and browser rendering, now have to maintain multiple BFF applications; and the concurrency pressure they never had to think about before is now concentrated on the BFF layer. In short, the O&M cost is high, and O&M is not what frontend engineers are good at.

Serverless solves these problems by using functions to aggregate and tailor the interfaces. A request sent by the frontend to the BFF can be treated as an HTTP trigger on FaaS: it executes a function that implements the business logic for that request, for example calling several microservices, combining their data, and returning the result to the frontend. The O&M burden thereby shifts from traditional BFF servers to the FaaS service, and frontend engineers no longer need to worry about servers.
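For illustration, here is a sketch of such a BFF-style function. The microservice URLs, response fields, and the use of node-fetch are assumptions for the example, not part of any particular platform.

```javascript
// A sketch of a BFF function on FaaS: it aggregates two hypothetical microservices
// and tailors the result for the page. URLs and field names are illustrative only.
const fetch = require('node-fetch');

module.exports.handler = async (event, context) => {
  const { userId } = JSON.parse(event.toString()); // assume a JSON request body

  // Call several atomic microservices in parallel.
  const [user, orders] = await Promise.all([
    fetch(`https://internal/users/${userId}`).then((res) => res.json()),
    fetch(`https://internal/orders?user=${userId}`).then((res) => res.json()),
  ]);

  // Aggregate and tailor the data so the page gets exactly what it needs.
  return {
    statusCode: 200,
    body: JSON.stringify({
      name: user.name,
      recentOrders: orders.slice(0, 5),
    }),
  };
};
```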

[Figure: the Serverless-based BFF architecture]

The preceding figure shows the Serverless-based BFF architecture. To manage the APIs better, a gateway layer is added, for example an Alibaba Cloud API Gateway that groups the APIs and separates environments. With the gateway in place, the frontend no longer triggers a function directly over HTTP; instead, it sends a request to the gateway, which then triggers the corresponding function.

5) Server Rendering Based on Serverless

To solve these problems, such as slow first-screen rendering and poor SEO in purely client-side rendered applications, frontend engineers are trying server rendering. The core idea is similar to the earliest template rendering: the frontend initiates a request, the server renders the HTML, and the HTML is returned to the browser. The difference is that we used to rely on server-side template languages such as JSP and PHP, whereas now we build isomorphic applications based on React and Vue, which is the advantage of today's server rendering solutions.

However, server rendering brings additional O&M costs to the frontend, because frontend engineers now have to maintain the rendering servers, and reducing O&M is precisely the biggest advantage of Serverless. So, can Serverless be used for server rendering? Yes, why not!

In traditional server rendering, each request path corresponds to a server-side route, and that route renders the HTML for the path. The server application used for rendering is essentially the collection of these routes.

When Serverless is used for server rendering, each route is split out into its own function and deployed to FaaS, so each request path corresponds to an independent function. O&M is transferred to the FaaS platform, and frontend engineers get server rendering without having to maintain or deploy a server application.


ZEIT’s Next.js does a good job of implementing server rendering based on Serverless. The following is a simple example. The code structure is as follows:

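The original listing is not reproduced here; the sketch below is reconstructed from the description that follows, and the exact configuration may vary by Next.js version.

```javascript
// Project layout (per the description below):
//   pages/index.js    - a page
//   pages/about.js    - another page
//   next.config.js    - build configuration (this file)
//
// next.config.js: with Next.js of that era, the `serverless` target built
// each page as an independent serverless function.
module.exports = {
  target: 'serverless',
};
```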

Here, pages/about.js and pages/index.js are two pages, and next.config.js is configured to use ZEIT's Serverless services. The now command is then used to deploy the code in a Serverless manner. During deployment, pages/about.js and pages/index.js are compiled into two separate functions that render the corresponding pages.

6) Applet Development Based on Serverless

In traditional applet development, frontend engineers are responsible for applet development, while backend engineers are responsible for server development. The backend development of applets is essentially the same as that of other backend applications. Backend engineers need to focus on a series of deployment and O&M operations, such as load balancing, backup and disaster recovery, and monitoring and alerting for applications. If the development team is small, frontend engineers need to implement the server.

However, in cloud-based development, developers only need to focus on business implementation, and a single frontend engineer may develop the frontend and backend of the entire application. In cloud-based development, the backend is encapsulated into BaaS services and a corresponding SDK is provided for developers. The developers use various backend services in the same way they would call functions. In addition, application O&M is shifted to cloud development service providers.


The following illustrates Alipay's Basement cloud development: functions are defined in the FaaS service, and the client uses the SDK both for database operations and for calling those functions, as sketched below.
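The Basement SDK calls themselves are not shown here. Purely as an illustration, a cloud-development SDK of this kind is typically used along the following lines; every name below is hypothetical, not Basement's actual API.

```javascript
// Hypothetical cloud-development SDK usage; the `cloud` object and its methods
// are illustrative only.
const cloud = require('./cloud-sdk');

// Database operations: read the cloud database directly from the client.
async function listActiveUsers() {
  return cloud.database()
    .collection('users')
    .where({ active: true })
    .limit(10)
    .get();
}

// Call a function defined in the FaaS service.
async function sendWelcome(userId) {
  return cloud.callFunction({
    name: 'sendWelcomeEmail', // the cloud function's name
    data: { userId },
  });
}
```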

7) Universal Serverless Architecture

[Figure: a universal Serverless architecture]

In the figure above, the bottom layer implements the backend microservices used for complex business, while the FaaS layer implements business logic through a series of functions and serves the frontend directly. Frontend developers implement server logic by writing functions; for backend developers, the backend is pushed back even further. If the business is lightweight, the FaaS layer alone can implement the business logic, and the microservice layer is not needed at all.

In addition, the cloud platform provides BaaS services that can be consumed by the backend, by functions in the FaaS layer, or even directly by the frontend, which greatly reduces development difficulty and cost. In cloud-based applet development, for example, BaaS services are called directly from the frontend.

8) Best Practices in Serverless Development

Beyond the development model itself, two questions come up in practice: how do you test Serverless functions, and how do you improve their performance? This section introduces best practices for both.

8.1 Function Testing

  • Serverless functions are distributed. You do not know and do not need to know on which hosts the functions are deployed or running. Therefore, you need to perform unit testing on each function.
  • A Serverless application is a group of functions that may depend on other backend services (BaaS). Therefore, you must perform an integration test on the Serverless application.
  • It is also difficult to locally simulate the FaaS platform and the BaaS services in which the functions run.
  • FaaS environments and BaaS service SDKs or interfaces may vary with different platforms. This may cause some problems in testing and also increases the application migration cost.
  • The function execution is event-driven. It is difficult to locally simulate events that drive function execution.

So how can these problems be solved?

According to Mike Cohn's test pyramid, unit tests are the cheapest and most efficient, while tests at the top of the pyramid, such as UI and integration tests, are the most expensive and slowest. Therefore, we recommend writing as many unit tests as possible and keeping the number of integration tests small. The same applies to testing Serverless functions.

[Figure: the test pyramid]
Figure source: https://martinfowler.com/bliki/TestPyramid.html

To simplify unit testing on functions, separate the business logic from the function-dependent FaaS (such as Function Compute) and BaaS (such as cloud databases). After FaaS and BaaS are separated, test the business logic of a function in the same way as traditional unit testing. After that, write integration tests to verify whether the function works properly when integrated with other services.

8.2 A Bad Example

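The original snippet is not included here; a minimal sketch of such a function, assuming a Node.js handler and hypothetical db and mailer modules, might look like this:

```javascript
// A sketch only. The business logic sits directly in the FaaS handler, so it is
// coupled with the platform's event/context objects and with the real db and mailer.
const db = require('./db');         // hypothetical database client
const mailer = require('./mailer'); // hypothetical mail service

module.exports.saveUser = async (event, context) => {
  const user = JSON.parse(event.toString()); // event format depends on the FaaS platform

  await db.saveUser(user);                    // coupled with BaaS
  await mailer.sendWelcomeEmail(user.email);  // coupled with BaaS

  return { statusCode: 200, body: JSON.stringify({ saved: true }) };
};
```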

This example has two main problems:

  • The business logic is coupled with FaaS: it lives inside the saveUser handler, whose event and context parameters are provided by the FaaS platform.
  • The business logic is coupled with BaaS: the function uses the db and mailer backend services directly, so testing it requires depending on the real db and mailer services.

8.3 Write a Testable Function

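Again as a sketch: the Users class comes from the description below, and the method names are assumptions.

```javascript
// All business logic lives in Users, which depends only on whatever db and mailer
// objects are passed in, so it can be tested without FaaS or real BaaS services.
class Users {
  constructor(db, mailer) {
    this.db = db;
    this.mailer = mailer;
  }

  async save(user) {
    await this.db.saveUser(user);
    await this.mailer.sendWelcomeEmail(user.email);
    return { saved: true };
  }
}

// The FaaS handler becomes a thin adapter that wires platform input to the class.
const db = require('./db');         // hypothetical database client
const mailer = require('./mailer'); // hypothetical mail service

module.exports.saveUser = async (event, context) => {
  const users = new Users(db, mailer);
  const result = await users.save(JSON.parse(event.toString()));
  return { statusCode: 200, body: JSON.stringify(result) };
};

module.exports.Users = Users; // exported so it can be unit-tested in isolation
```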

In the refactored code, all the business logic is placed in the Users class, which does not rely on any external service. During testing, you do not have to pass in the real db or mailer; you can pass in simulated services instead.

The following is an example of a simulated mailer.

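This is a sketch matching the interface assumed above.

```javascript
const assert = require('assert');
const { Users } = require('./users'); // wherever the Users class is exported from

// A fake mailer for unit tests: it records calls instead of sending mail.
const testMailer = {
  sent: [],
  async sendWelcomeEmail(email) {
    this.sent.push(email);
  },
};

// Unit-test sketch: exercise the business logic with fakes only, no FaaS or BaaS involved.
async function testSaveSendsWelcomeEmail() {
  const fakeDb = { async saveUser() { /* pretend to persist */ } };
  const users = new Users(fakeDb, testMailer);

  await users.save({ email: 'test@example.com' });

  assert.deepStrictEqual(testMailer.sent, ['test@example.com']);
}
```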

In this way, as long as the Users class is fully covered by unit tests, you know the business code behaves as expected. You can then run a simple integration test that passes in the real db and mailer to check that the whole function works.

The refactored code also makes the function easier to migrate. To move it from one platform to another, you only need to adapt how the Users class is called to the parameters each platform provides; the business logic itself does not change.

8.4 Summary

  • Separate the business logic from the function-dependent FaaS and BaaS.
  • Perform full unit testing on the business logic.
  • Perform integration testing on functions to verify that the code works properly.

9) Function Performance

The following figure shows the lifecycle of a function.

[Figure: the lifecycle of a function]
Figure source: https://www.youtube.com/watch?v=oQFORsso2go&feature=youtu.be&t=8m5s

The cold start time is a key performance index of a function. To optimize the performance of the function, you need to optimize each stage in the function lifecycle.

9.1 Impact of Different Programming Languages on the Cold Start Time

Several public benchmarks have measured this impact:

  • Compare cold start time with different languages, memory and code sizes, by Yan Cui
  • Cold start/Warm start with AWS Lambda by Erwan Alliaume
  • Serverless: Cold Start War by Mikhail Shilkov
[Figure: cold start benchmark results]
Figure source: Cold start/Warm start with AWS Lambda

The following conclusions are drawn from these tests:

  • Increasing the memory size of functions helps reduce the cold start time.
  • The cold start time of programming languages such as C# and Java is about 100 times that of Node.js and Python.

Based on the preceding conclusions, if you require that the cold start time of Java be as short as that of Node.js, you can allocate more memory to Java. However, a larger memory means a higher cost.

9.2 Cold Start Time Points of Functions

When the first request (the event that drives the function) arrives, a runtime environment is started and the function is executed in it. The environment is then retained for a period of time and reused for subsequent invocations, which reduces the number of cold starts and the overall execution time. When the number of requests exceeds what the existing runtime environments can handle, the FaaS platform automatically starts additional ones.


Take AWS Lambda as an example. After a function is executed, Lambda maintains the execution context for a period of time, during which it is used for subsequent Lambda function calls. In effect, the service freezes the execution context after the Lambda function is completed. If AWS Lambda chooses to reuse the context when the Lambda function is called again, the context is unfrozen for reuse.

The following provides two small tests to illustrate the above content.

I deployed a Serverless function on Alibaba Cloud Function Compute, drove it with an HTTP trigger, and then sent 100 requests to it at different concurrency levels.

At first, the concurrency was 1:


In this case, the first request took 302 ms, and each of the remaining requests took about 50 ms. This shows that the first request triggered a cold start, while the remaining 99 requests were warm starts that reused the runtime environment created for the first request.

Then, I set the concurrency to 10:


In this case, the time required for the first 10 requests was 200ms to 300ms, and the time required for the remaining requests was about 50ms each. This shows that a cold start was used for the first 10 concurrent requests, and 10 runtime environments were started at the same time. A warm start was used for the remaining 90 requests.

This also demonstrates our previous conclusion that a function does not cold start every time, but can reuse a previous runtime environment within a certain period of time.

9.3 Reuse Execution Context

Here is an example:

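A sketch of that pattern; the mysql2 client and the connection options are assumptions for the example.

```javascript
// The database connection is created inside the handler, so every invocation reconnects.
const mysql = require('mysql2/promise');

module.exports.saveUser = async (event, context) => {
  const connection = await mysql.createConnection({ host: 'db-host', user: 'app', database: 'demo' });
  await connection.query('INSERT INTO users SET ?', JSON.parse(event.toString()));
  await connection.end();
  return { statusCode: 200 };
};
```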

In the preceding example, the saveUser function initializes the database connection inside the handler, so the connection is re-established on every invocation and each call pays the connection cost. Obviously, this is not good for the function's performance.

Since the execution context of the function can be reused within a short period of time, the database connection can be placed outside the function.

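A sketch of the improved version, under the same assumptions:

```javascript
// The connection lives in the execution context (module scope), so it is created
// once per runtime environment and reused by warm invocations.
const mysql = require('mysql2/promise');

let connectionPromise = null;

module.exports.saveUser = async (event, context) => {
  if (!connectionPromise) {
    connectionPromise = mysql.createConnection({ host: 'db-host', user: 'app', database: 'demo' });
  }
  const connection = await connectionPromise;
  await connection.query('INSERT INTO users SET ?', JSON.parse(event.toString()));
  return { statusCode: 200 };
};
```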

In this case, the database connection is initialized only when the runtime environment is started for the first time. When a subsequent request comes in and the function is executed, the connection in the execution context can be directly reused to improve the performance of the function.

In most cases, it is perfectly acceptable to sacrifice the performance of one request in exchange for better performance on most requests.

9.4 Preload Functions

Preloading (warming up) a function means invoking it periodically, for example with a timer trigger, so that a warm runtime environment stays available and real user requests avoid cold starts. This method is relatively effective at present, but pay attention to the following points:

  • Do not call a function too frequently. I recommend an interval of more than five minutes.
  • Directly call a function instead of indirectly calling the function using a gateway.
  • Create a function specifically used for preloading calls, instead of using a normal business function.

This is an effective but relatively advanced solution. If your business can tolerate a slower first request, it is unnecessary.
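As a sketch, one common pattern is a dedicated, timer-triggered warm-up function that invokes the business functions directly, while the business functions short-circuit on warm-up events. The faasClient wrapper and the event shape here are assumptions, not a specific platform's SDK.

```javascript
// `faasClient` is a hypothetical wrapper; real platforms expose an equivalent
// "invoke function" API that bypasses the API gateway.
const faasClient = require('./faas-client');

// A dedicated warm-up function, triggered by a timer (for example, every 5-10 minutes).
module.exports.warmUp = async () => {
  const targets = ['renderHome', 'renderAbout']; // functions to keep warm (illustrative)
  await Promise.all(
    targets.map((name) => faasClient.invoke(name, JSON.stringify({ warmUp: true })))
  );
};

// Business functions can short-circuit on warm-up events so no real work is done.
module.exports.renderHome = async (event, context) => {
  const payload = JSON.parse(event.toString());
  if (payload.warmUp) {
    return { statusCode: 200, body: 'warmed' };
  }
  // ...normal rendering logic...
  return { statusCode: 200, body: '<html>...</html>' };
};
```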

9.5 Summary

To summarize, consider the following measures to improve function performance:

  • Select a programming language with a short cold start time, such as Node.js or Python.
  • Allocate sufficient memory for function execution.
  • Reuse execution context.
  • Preload functions.

10) Summary

The Serverless architecture gives frontend engineers the greatest possible assistance in achieving their goals. With Serverless, you no longer need to worry about server O&M or unfamiliar fields; you only need to focus on business development and product implementation. Serverless will certainly bring great changes to the frontend development model, and frontend engineers will once again play the role of application engineers. To sum up Serverless in one sentence: Less is More.

Original Source:
