CloudFront can be simply defined as a CDN (Content Delivery Network), caching your static assets in a data center nearer to your viewers. But CloudFront is far more complex and versatile than this simple definition suggests. CloudFront is a “pull” CDN, which means that you don’t push your content to the CDN. Instead, the content is pulled into the CloudFront edge from the origin at the first request for any piece of content.
In addition to the traditional pull-and-cache usage, CloudFront can also be used as:
A Networking Router
A Web Server
An Application Server
Why is using a CDN relevant?
The main reason is to improve the speed of delivery of static content. By caching the content on the CDN edge, you not only reduce the download time from a few seconds to a few milliseconds, but you also reduce the load and the number of requests on your backend (network, IO, CPU, memory, …).
Static content can be defined as content that does not change between two identical requests made within the same time frame.
Edges, Mid-Tier Caches, and Origins
CloudFront isn’t “just” some servers in data centers around the world. The service is a layered network of Edge Locations and Regional Edge Caches (also called Mid-Tier Caches).
Edge Locations are distributed around the globe with more than 400 points of presence in over 90 cities across 48 countries. Each Edge Location is connected to one of the 13 Regional Edge Caches.
Regional Edge Caches are transparent to you and your visitors; you can’t configure them or access them directly. Your visitors interact with the nearest Edge Location, which connects to the attached Regional Edge Cache and finally to your origin. Therefore, in this article, we will refer to CloudFront as the combination of Edge Locations and Regional Edge Caches.
What Have We Learned?
CloudFront is more than just a simple “pull-cache-serve” service
You improve delivery speed to your visitors
You can increase resilience by always using a healthy backend
You improve overall speed to your backend by leveraging AWS’s backbone
You can modify any request to tailor the response to your visitor’s device or region
You don’t always need a backend
You protect your backend by reducing the number of calls reaching it
When thinking about APIs implemented using a serverless infrastructure, what usually comes to mind is the classic AWS Lambda + Amazon API Gateway combination, with one function for each endpoint.
However, what if I told you that there is an alternative way of building an API with attractive pricing by using only one AWS Lambda Function? And that it’s also possible to easily enhance the API by linking it with other services like Amazon CloudFront and AWS WAF?
That’s nice, right? Join me in this series of articles as I go with you through the concepts, overall benefits, and all the steps required to build a secure Serverless API using the Single-Function API pattern with Lambda Function URL fronted by the CloudFront and WAF services.
In this first article I’m going to focus on the solution’s concepts and its architecture by exploring the following topics:
Why build an API with Lambda Function URL?
What’s the Single-Function API pattern?
The architecture of a secure Serverless API using CloudFront and WAF
In other words, the Lambda Function URL feature makes it possible to invoke a Lambda Function directly, without any other AWS service, just by requesting a specific URL from anywhere. You can also configure the CORS headers of the Lambda Function URL, which is great if you are going to build an API with it.
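As a rough sketch of what this looks like in code (Python here, with an assumed handler name), a function behind a Function URL reads the HTTP details from the incoming event — Function URLs deliver events in the API Gateway payload format 2.0 — and returns a plain response object:

```python
import json

def handler(event, context):
    """Minimal Lambda Function URL handler (illustrative sketch).

    With payload format 2.0, the HTTP method and path live under
    event["requestContext"]["http"].
    """
    http = event.get("requestContext", {}).get("http", {})
    method = http.get("method", "GET")
    path = http.get("path", "/")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"{method} {path} received"}),
    }

# Simulate the slice of a Function URL event the handler actually reads.
sample_event = {"requestContext": {"http": {"method": "GET", "path": "/hello"}}}
response = handler(sample_event, None)
print(response["statusCode"], response["body"])
```

The returned dict is translated by the Function URL into an HTTP response, so there is no mapping template or integration to configure.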
But you may be asking yourself: “Why build a Serverless API using Lambda Function URL if there’s already the Amazon API Gateway service which was created exactly for this purpose?” Well, Jaymit Bhoraniya from Serverless Guru analyses this question by comparing both approaches and presenting some use cases that may fit well for one or another in his article entitled AWS Lambda Function URLs vs. Amazon API Gateway.
The two biggest benefits of using Lambda Function URL, from my perspective, are:
Cost-effectiveness: you don’t pay for the API call, the costs are only the ones already included in the Lambda Function invocation
Bigger Timeout: the API call timeout can be increased up to 15 minutes (the Lambda Function maximum timeout), far longer than the 30-second limit of API Gateway
It wouldn’t be a bold statement to say that, currently, the most common pattern when building serverless REST APIs is to use one Lambda Function per endpoint. When the Lambda Function URL is activated, a unique URL with the template 'https://<url-id>.lambda-url.<region>.on.aws' is generated, which makes the feature unsuitable for this pattern, since it’s not possible to share the same base URL between different Lambda Functions.
In spite of that, worry not, dear reader: there’s still light at the end of the tunnel! Both Alex Casalboni (2022) and Jaymit Bhoraniya (2022) mention the Single-Function API as a good use case for the Lambda Function URL feature, and that’s the concept we are going to dissect in the next section.
What on earth is a Single-Function API?
The Single-Function API term usually refers to a Serverless API pattern where a single Lambda Function is responsible for managing all the API endpoints, as opposed to the common pattern previously mentioned, where multiple Lambda Functions handle the API endpoints, which I like to call a Multi-Function API.
At the time of writing, a quick web search for Single-Function API won’t turn up much. But you may end up on an article AJ Stuyvenberg (2021) wrote exploring this Serverless API pattern. However, his article, entitled The what, why, and when of Mono-Lambda vs Single Function APIs, uses different terminology from the one I’m adopting here, which can be confusing going forward.
💡 What AJ Stuyvenberg (2021) calls a Mono-Lambda API I call a Single-Function API and what he calls a Single-Function API I call a Multi-Function API. Since I’m using the same terminology that Alex Casalboni (2022) used in the AWS Blog and Jaymit Bhoraniya (2022) on Serverless Guru, I’m going to stick with it even though I’m going to use the AJ Stuyvenberg (2021) article as a reference.
In the Single-Function API pattern, the API is built around a single Lambda Function whose source code is responsible for routing all the endpoints and executing them. This alternative pattern has some advantages and disadvantages compared to the good old Multi-Function API, and these differences are discussed in more detail in AJ Stuyvenberg’s article (2021).
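A minimal sketch of the pattern (in Python, with hypothetical routes and handler names — a real project might delegate this to a framework instead) could dispatch on the method and path found in the Function URL event:

```python
import json

# Hypothetical endpoint handlers for illustration only.
def list_users(event):
    return {"users": ["alice", "bob"]}

def get_health(event):
    return {"status": "ok"}

# Route table: (HTTP method, path) -> handler. Adding an endpoint is
# just adding an entry here, with no new infrastructure to deploy.
ROUTES = {
    ("GET", "/users"): list_users,
    ("GET", "/health"): get_health,
}

def handler(event, context):
    """Single entry point: routes every API request to the right function."""
    http = event.get("requestContext", {}).get("http", {})
    route = ROUTES.get((http.get("method"), http.get("path")))
    if route is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(route(event))}
```

Since all endpoints share one base URL and one function, this sidesteps the Function URL limitation described above.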
Here I want to focus only on the Single-Function API and point out some of the benefits gained from using this pattern, to help you assess whether these aspects would have a significant impact on your specific use case. In short, I would highlight the following key advantages taken from AJ Stuyvenberg’s article (2021):
Super flexible with routing
Bring your own framework, like Express
Simpler deployment/release and less concern about CloudFormation stack limits
Very easy to share code between resources
Less granular IAM permissions
Also, I would add that this pattern is a better fit if you want to migrate APIs from a microservice infrastructure to serverless without much effort. However, it’s worth mentioning that if you choose to build an API with this pattern, you need to take extra care, especially with how you store logs in CloudWatch and how you manage the function’s concurrency.
💡 Since all the function’s logs will be stored in the same log group, AJ Stuyvenberg (2021) suggests writing highly structured log messages using a custom logger and then relying on CloudWatch Logs Insights to filter them. This is a great way to manage the logs, and I would add that you can also implement the custom logger as a Lambda Layer in order to share the same logging structure between all your APIs.
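A minimal sketch of such a structured logger (Python, with illustrative field names) emits one JSON object per line, so CloudWatch Logs Insights can filter or aggregate on any field:

```python
import json
import time

def log(level, message, **fields):
    """Emit one JSON object per line; every field becomes queryable in
    CloudWatch Logs Insights (e.g. `filter route = "/users"`)."""
    line = json.dumps({
        "timestamp": time.time(),
        "level": level,
        "message": message,
        **fields,  # arbitrary structured context: route, status, duration...
    })
    print(line)  # anything printed in Lambda lands in CloudWatch Logs
    return line

log("INFO", "request handled", route="/users", status=200, duration_ms=12)
```

Packaged as a Lambda Layer, the same `log` helper keeps every API in the account emitting the same queryable structure.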
Securing the Serverless API
So we already have our Serverless API base infrastructure: a Single-Function API with Lambda Function URL. But besides using HTTPS and having CORS headers, this infrastructure falls short on the security side, since it’s not possible to link the Lambda Function directly with important services like AWS Shield for DDoS protection and AWS WAF for firewall protection.
Fortunately, there’s a solution for this inconvenience: Amazon CloudFront can point to a Lambda Function URL as an origin, which in turn enables integration with the AWS Shield and AWS WAF services as well. Besides, by fronting your Lambda Function URL with CloudFront, you also get all the other benefits of this CDN (Content Delivery Network) service, as described in the AWS blog post Using Amazon CloudFront with AWS Lambda as origin to accelerate your web applications, written by Jaiganesh Girinathan and Samrat Karak.
The benefits of using AWS CloudFront are so numerous that some architects even use it in front of an API Gateway instance, as discussed in Adam Novotný’s article, Does Putting CloudFront in Front of API Gateway Make Sense?. Based on the mentioned article (Novotný, n.d.) and AWS blog post (Girinathan & Karak, 2022), I want to point out the following benefits of using a CloudFront distribution in front of a Serverless API:
Shield & WAF: protect your API from DDoS attacks and common exploits
Edge Locations: better performance with lower latency
Caching: custom cache policies and behaviors
Encryption: enable HTTPS communication over SSL/TLS
Custom Domain: add a custom domain for your API
API Key: ensure the request is routed through CloudFront
Geo-Blocking: block requests using geographic restrictions
💡 You can also activate the AWS Shield Advanced for even more DDoS protection.
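One possible way to implement the API Key item above (the header name and values here are assumptions, not prescribed by any of the referenced articles): configure CloudFront to inject a custom origin header carrying a shared secret, and have the Lambda Function reject any request without it — that is, any request that bypassed CloudFront and hit the Function URL directly.

```python
import hmac

# In practice this shared secret would come from an environment variable or
# AWS Secrets Manager; hardcoded here only to keep the sketch self-contained.
ORIGIN_SECRET = "change-me"

def is_from_cloudfront(event):
    """Return True only if the request carries the secret that the
    CloudFront distribution is configured to add as a custom origin
    header (the header name "x-origin-verify" is an assumption)."""
    sent = event.get("headers", {}).get("x-origin-verify", "")
    # Constant-time comparison avoids leaking the secret via timing.
    return hmac.compare_digest(sent, ORIGIN_SECRET)

# A direct hit on the Function URL lacks the header and is rejected...
print(is_from_cloudfront({"headers": {}}))                                # False
# ...while a request routed through CloudFront carries the secret.
print(is_from_cloudfront({"headers": {"x-origin-verify": "change-me"}}))  # True
```

The handler can then return a 403 whenever this check fails, ensuring WAF and Shield are never bypassed.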
If you want to go further in the security aspects of CloudFront there’s also the possibility of adding a custom domain and creating a custom SSL certificate for it by using a combination of AWS Route53 and AWS Certificate Manager, as shown in David Sugden’s article, Configure a secure custom domain in CloudFront. Another idea to make the API with CloudFront even safer would be to rotate the configured API Key by running a scheduled job with a Lambda Function like in the example shown in the AWS blog post, Protecting your API using Amazon API Gateway and AWS WAF — Part 2 written by Chris Munns.
With that being said, for didactic purposes, I want to keep our Serverless API simple enough in this article to build its infrastructure only with what I consider the core components: a Single-Function API with Lambda Function URL fronted by a CloudFront distribution using Shield and WAF. Therefore, the final architecture of the Serverless API we are going to create together in the next part of the series will be the following:
You may have noticed by now that this alternative way of building a Serverless API can be very beneficial for particular use cases where the intention is to develop simple, secure, and performant API infrastructures. Also, by removing the need for an API Gateway instance and going straight from CloudFront to a Lambda Function, there are plenty of opportunities to reduce costs in your infrastructure, if this approach fits your project’s technical requirements. For example, a good use case for this architecture in a real-world scenario would be an internal synchronous API with a lot of processing (bigger timeout) and not many concurrent calls (less concurrency).
Besides reading this article, I suggest you go through the external references mentioned here to help you decide whether this type of architecture is a good choice for your specific needs. Going forward, I also suggest that you explore the more advanced security possibilities, like using a custom SSL certificate and automatically rotating API keys.
In the next article of the series, I’ll be covering the step-by-step on how to create the proposed architecture of a secure Serverless API with Lambda Function URL and CloudFront. If you want to know more about serverless infrastructures feel free to reach out to the Serverless Guru team or myself on LinkedIn.
Thank you for reading and stay tuned for Part 2!
Casalboni, A. (2022, April 6). Announcing AWS Lambda Function URLs: Built-in HTTPS Endpoints for Single-Function Microservices. AWS Blog.