Deep Dive Into Serverless

February 7, 2023
Ryan Jones
5 minutes to read

CloudFront can be simply defined as a CDN (Content Delivery Network): it caches your static assets in a data center nearer to your viewers. But CloudFront is a lot more complex and versatile than this simple definition.
CloudFront is a “pull” CDN, which means that you don’t push your content to the CDN. Instead, content is pulled into the CDN edge from the origin the first time any piece of content is requested.
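The pull-and-cache flow can be sketched in a few lines of Python. This is a simplified in-memory model to illustrate the behavior, not CloudFront's actual implementation:

```python
import time

class PullThroughCache:
    """Simplified model of a pull-CDN edge: fetch from the origin on a
    cache miss, then serve from the cache until the entry's TTL expires."""

    def __init__(self, origin_fetch, ttl_seconds):
        self.origin_fetch = origin_fetch  # callable standing in for the origin
        self.ttl = ttl_seconds
        self.store = {}                   # uri -> (body, expires_at)
        self.origin_hits = 0

    def get(self, uri):
        entry = self.store.get(uri)
        if entry and entry[1] > time.monotonic():
            return entry[0]               # cache hit: served from the edge
        body = self.origin_fetch(uri)     # cache miss: pull from the origin
        self.origin_hits += 1
        self.store[uri] = (body, time.monotonic() + self.ttl)
        return body

# Two identical requests within the TTL reach the origin only once.
edge = PullThroughCache(lambda uri: f"content of {uri}", ttl_seconds=60)
edge.get("/app.js")
edge.get("/app.js")
print(edge.origin_hits)  # 1
```

The second request never leaves the edge, which is exactly why a pull CDN reduces origin load without any publishing step on your side.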

In addition to the traditional pull-and-cache usage, CloudFront can also be used as:

  • A Network Router
  • A Firewall
  • A Web Server
  • An Application Server

Why is using a CDN relevant?

The main reason is to improve the speed of delivery of static content. By caching content on the CDN edge, you not only reduce download times from a few seconds to a few milliseconds, but you also reduce the load and number of requests hitting your backend (network, I/O, CPU, memory, …).

Static content can be defined as content that does not change between two identical requests made within the same time frame.

“Identical” can be as simple as the same URI, or as fine-grained as matching down to the Authorization header. The time frame can range from 1 second to 1 year.
The most common case is caching resources like JavaScript or CSS and serving the same file to all users forever. But caching a JSON response tailored to a user (keyed on the Authorization header) for a few seconds reduces backend calls when the user has the well-known “frenetic browser reload syndrome”.
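Both cases come down to the Cache-Control headers your origin emits. Here is a hedged sketch of two Lambda-style response builders (the handler shape and function names are illustrative, not a prescribed API); note that for the per-user case, CloudFront must also be configured with a cache policy that includes the Authorization header in the cache key:

```python
def asset_response(body: str) -> dict:
    # Long-lived static asset: safe to cache at the edge and in browsers "forever".
    return {
        "statusCode": 200,
        "headers": {"Cache-Control": "public, max-age=31536000, immutable"},
        "body": body,
    }

def user_json_response(body: str) -> dict:
    # User-tailored JSON: cache very briefly, and mark it private so shared
    # caches know it belongs to one user.
    return {
        "statusCode": 200,
        "headers": {"Cache-Control": "private, max-age=5"},
        "body": body,
    }

print(asset_response("...")["headers"]["Cache-Control"])
```

A 5-second `max-age` is invisible to the user but absorbs every reload in a rapid-fire burst.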

Edges, Mid-Tier Caches, and Origins

CloudFront isn’t “just” some servers in data centers around the world. The service is a layered network of Edge Locations and Regional Edge Caches (also called Mid-Tier Caches).

Edge Locations are distributed around the globe, with more than 400 points of presence in over 90 cities across 48 countries. Each Edge Location is connected to one of the 13 Regional Edge Caches.

Regional Edge Caches are transparent to you and your visitors: you can’t configure them or access them directly. Your visitors interact with the nearest Edge Location, which connects to the attached Regional Edge Cache and finally to your origin. Therefore, in this article, we will refer to CloudFront as the combination of Edge Locations and Regional Edge Caches.

What Have We Learned?

CloudFront is more than just a simple “pull-cache-serve” service:

  • You improve delivery speed to your visitors
  • You can increase resilience by always using a healthy backend
  • You improve overall speed to your backend by leveraging AWS’s backbone
  • You can modify any request to tailor the response to your visitor’s device or region
  • You don’t always need a backend
  • You protect your backend by reducing the number of calls reaching it
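As an illustration of tailoring requests per device, here is a sketch of a Lambda@Edge origin-request handler in Python. The event shape follows the CloudFront event structure, but the routing rule itself is made up, and it assumes the CloudFront-Is-Mobile-Viewer header is forwarded to the origin via the distribution's origin request policy:

```python
def handler(event, context):
    # Lambda@Edge origin-request sketch: rewrite the URI for mobile viewers.
    # Assumes CloudFront is configured to add CloudFront-Is-Mobile-Viewer
    # to the origin request; header names arrive lowercased.
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    is_mobile = headers.get("cloudfront-is-mobile-viewer", [{}])[0].get("value") == "true"
    if is_mobile:
        request["uri"] = "/mobile" + request["uri"]
    return request

# Synthetic event for local testing (no deployment needed).
event = {"Records": [{"cf": {"request": {
    "uri": "/index.html",
    "headers": {
        "cloudfront-is-mobile-viewer": [
            {"key": "CloudFront-Is-Mobile-Viewer", "value": "true"}
        ]
    },
}}}]}
result = handler(event, None)
print(result["uri"])  # /mobile/index.html
```

The same pattern works for region-based routing, A/B testing, or serving a response directly from the edge with no backend at all.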


Adopting a Serverless Architecture



Serverless architecture is a modern approach to building and deploying applications that has gained a lot of traction in recent years. It allows developers to focus on innovating and writing code rather than managing infrastructure, which results in significant cost savings, increased scalability, and accelerated developer velocity. However, making the transition to a serverless architecture can be challenging. In this article, we will discuss some tips to help make the transition to serverless smoother and less stressful. Let’s dive in!

First and foremost, it's essential to understand the limitations of serverless. While it can be a very cost-effective and scalable solution, it isn't the best fit for every application. For example, applications with high compute needs, or those that require long-running processes, may not be well suited to serverless. It's important to evaluate your specific use case and determine whether serverless is the right fit for your organization before starting the transition.

Secondly, you should have a solid plan in place for monitoring and debugging in a serverless environment. Because serverless functions are event-driven and short-lived, traditional monitoring and debugging methods often don't apply. Tools such as CloudWatch Logs can help, but it's important to have a strategy for identifying and resolving issues in a serverless environment. Additionally, it's important to understand the underlying infrastructure and how it affects the performance and scalability of your serverless applications.
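One common tactic is emitting structured (JSON) log lines from each function, so that CloudWatch Logs Insights can filter and aggregate on individual fields later. A minimal sketch, with the field names purely illustrative:

```python
import json
import time

def log(level: str, message: str, **fields) -> str:
    """Emit one JSON log line. In Lambda, anything printed to stdout is
    shipped to CloudWatch Logs automatically, and JSON fields can then be
    queried with CloudWatch Logs Insights."""
    record = {"level": level, "message": message, "timestamp": time.time(), **fields}
    line = json.dumps(record)
    print(line)
    return line

line = log("INFO", "order processed", order_id="o-123", duration_ms=42)
```

Compared to free-form strings, structured lines make questions like "what is the p95 of `duration_ms` for this function?" a one-line query instead of a grep session.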

Another important step is to break monolithic applications down into smaller, more manageable functions. This allows individual components to be deployed and scaled independently, and can also improve security by reducing the attack surface. It's also good practice to design your functions to be “stateless”, meaning they aren't influenced by previous events, so they can easily be scaled horizontally.

Lastly, it's important to have a good testing strategy in place, precisely because serverless functions are event-driven. Testing in a serverless context overlaps with testing non-serverless workloads, but relying only on unit tests that target small sections of business logic doesn't go far enough. In a serverless setup, we need unit tests that give developers fast feedback before deploying to the cloud, end-to-end tests against the real deployed infrastructure with automatic rollbacks in place, and performance tests to understand how our code will hold up under production-level traffic.
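At the unit level, keeping business logic separate from the Lambda entry point is what makes fast local feedback possible. A hedged sketch, with function names and the discount rule invented for illustration:

```python
# Business logic lives in a plain function, separate from the Lambda entry
# point, so it can be unit-tested in milliseconds with no AWS account.
def apply_discount(subtotal: float, code: str) -> float:
    if code == "SAVE10":
        return round(subtotal * 0.9, 2)
    return subtotal

def handler(event, context=None):
    return {"total": apply_discount(event["subtotal"], event.get("code", ""))}

# Unit tests exercise the logic directly; end-to-end tests would later
# invoke the deployed function through its real event source.
assert apply_discount(100.0, "SAVE10") == 90.0
assert apply_discount(100.0, "") == 100.0
assert handler({"subtotal": 50.0, "code": "SAVE10"})["total"] == 45.0
```

The same `handler` can then be covered by end-to-end and load tests after deployment, each layer catching failures the previous one cannot.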

In conclusion, adopting a serverless architecture can bring your organization many benefits, such as increased innovation, accelerated developer velocity, reduced costs and maintenance, and infrastructure that scales more effectively. However, the transition comes with its own set of challenges. By understanding the limitations of serverless, planning for monitoring and debugging, setting a clear foundation for breaking down your monolithic applications, and creating a good testing strategy, you will have a much smoother transition to serverless and experience the many benefits it has to offer.

Serverless Guru has been guiding enterprise companies through their digital transformations since 2018. As an AWS Advanced Partner, we are trusted serverless experts ready to tackle any obstacle thrown our way. From greenfield applications to full adoption, we are your one-stop solution. Being a global team means we can work in any time zone necessary to get the job done. If this sounds interesting, please reach out for more information!
