Deep Dive Into Serverless

February 7, 2023
Ryan Jones
5 minutes to read

CloudFront can be simply defined as a CDN (Content Delivery Network): it caches your static assets in a data center closer to your viewers. But CloudFront is far more complex and versatile than this simple definition suggests.
CloudFront is a “pull” CDN, which means you don’t push your content to the CDN. Content is pulled into the CDN edge from the origin on the first request for each piece of content.
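The pull model can be sketched in a few lines of Python. This is a toy illustration of the behavior, not CloudFront's actual implementation; the class name and TTL handling are invented for this example:

```python
import time

class PullThroughCache:
    """Toy model of a "pull" CDN edge: content is fetched from the
    origin on the first request (a cache miss) and served from the
    edge afterwards, until the cached entry's TTL expires."""

    def __init__(self, fetch_from_origin, ttl_seconds=60):
        self.fetch_from_origin = fetch_from_origin  # callable: uri -> body
        self.ttl = ttl_seconds
        self.store = {}  # uri -> (body, expires_at)

    def get(self, uri):
        entry = self.store.get(uri)
        if entry and entry[1] > time.time():
            return entry[0], "Hit"          # served from the edge cache
        body = self.fetch_from_origin(uri)  # pulled from the origin
        self.store[uri] = (body, time.time() + self.ttl)
        return body, "Miss"
```

The first request for a URI reaches the origin; every identical request within the TTL is answered from the cache, which is exactly the "pull and cache" behavior described above.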

In addition to the traditional pull-and-cache usage, CloudFront can also be used as:

  • A Networking Router
  • A Firewall
  • A Web Server
  • An Application Server

Why is using a CDN relevant?

The main reason is to improve the delivery speed of static content. By caching content at the CDN edge, you not only cut the download time from a few seconds to a few milliseconds, you also reduce the load and the number of requests reaching your backend (network, I/O, CPU, memory, …).


Static content can be defined as content that does not change between two identical requests made within the same time frame.

“Identical” can be as coarse as the same URI, or as fine-grained as matching down to the Authorization header. The time frame can range from 1 second to 1 year.
The most common case is caching resources like JavaScript or CSS files and serving the same file to all users forever. But caching a JSON response tailored to a user (keyed on the Authorization header) for a few seconds still reduces backend calls when the user has the well-known “frenetic browser reload syndrome”.
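The two caching strategies above differ only in what goes into the cache key. A minimal sketch of that idea (the function and parameter names are illustrative, not CloudFront's API; in CloudFront itself this is configured through cache policies):

```python
def cache_key(uri, headers, vary_on=()):
    """Build a cache key: by default just the URI (one shared copy for
    everyone); optionally include selected request headers, such as
    Authorization, so each user gets their own cached copy."""
    parts = [uri]
    for name in vary_on:
        # Missing headers contribute an empty value so keys stay stable.
        parts.append("%s=%s" % (name.lower(), headers.get(name, "")))
    return "|".join(parts)
```

With the default key, `/app.js` is cached once and served to all users; adding `Authorization` to `vary_on` gives each authenticated user a private, short-lived copy of `/me.json`.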

Edges, Mid-Tier Caches, and Origins

CloudFront isn’t “just” some servers in data centers around the world. The service is a layered network of Edge Locations and Regional Edge Caches (also called Mid-Tier Caches).

Edge Locations are distributed around the globe with more than 400 points of presence in over 90 cities across 48 countries. Each Edge Location is connected to one of the 13 Regional Edge Caches.

Regional Edge Caches are transparent to you and your visitors; you can’t configure them or access them directly. Your visitors interact with the nearest Edge Location, which connects to its attached Regional Edge Cache and finally to your origin. Therefore, in this article, we will refer to CloudFront as the combination of Edge Locations and Regional Edge Caches.
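The layered lookup order (Edge Location first, then the Regional Edge Cache, then the origin) can be sketched as follows. Again, this is a toy model for illustration only, with the caches reduced to plain dictionaries:

```python
def lookup(uri, edge_cache, regional_cache, fetch_from_origin):
    """Toy model of CloudFront's layered lookup: the Edge Location is
    checked first, then its Regional Edge Cache, and only on a miss at
    both layers does the request travel on to the origin."""
    if uri in edge_cache:
        return edge_cache[uri], "edge"
    if uri in regional_cache:
        body = regional_cache[uri]
        edge_cache[uri] = body            # populate the edge on the way back
        return body, "regional"
    body = fetch_from_origin(uri)         # miss at both layers
    regional_cache[uri] = body            # fill both cache layers
    edge_cache[uri] = body
    return body, "origin"
```

Because many Edge Locations share one Regional Edge Cache, a miss at one edge can still be served from the regional layer without touching your origin.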

What Have We Learned?

CloudFront is more than just a simple “pull-cache-serve” service:

  • You improve delivery speed to your visitors
  • You can increase resilience by always using a healthy backend
  • You improve overall speed to your backend by leveraging AWS’s backbone
  • You can modify any request to tailor the response to your visitor’s device or region
  • You don’t always need a backend
  • You protect your backend by reducing the number of calls reaching it
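Modifying requests at the edge is done with CloudFront Functions or Lambda@Edge. Below is a minimal Lambda@Edge sketch in Python that rewrites the URI per viewer country so each region gets tailored content. It assumes the CloudFront-Viewer-Country header has been whitelisted so CloudFront adds it to the request; the `/de/`-style path scheme is invented for this example:

```python
def handler(event, context):
    """Origin-request Lambda@Edge sketch: prefix the URI with the
    viewer's country code (lowercased), falling back to /default when
    the CloudFront-Viewer-Country header is absent."""
    request = event["Records"][0]["cf"]["request"]
    headers = request["headers"]
    # Lambda@Edge header values arrive as lists of {"key", "value"} dicts.
    country = headers.get("cloudfront-viewer-country", [{}])[0].get("value", "")
    prefix = "/%s" % country.lower() if country else "/default"
    request["uri"] = prefix + request["uri"]
    return request
```

Deployed as an origin-request trigger, this lets a German visitor transparently receive `/de/index.html` while the cache still works per country.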


Serverless Adoption: Slow Down to Speed Up


In this article, we are going to dive into why serverless adoption isn't just about the tech: it has the potential to change the culture of your company. Naturally, serverless has components that will influence the culture of building software, but we can and should take it further.

Let's jump in!

The Key Ingredient to Adoption

At Serverless Guru we focus on establishing a strong foundation for building serverless applications, and then we encourage our clients' development teams to use it, iterate on it, provide feedback, and ultimately make decisions around where things do and don't fit.

Similar to the various strategies Stephen Orban describes in his book, Ahead in the Cloud, there isn't a one-size-fits-all approach, and each client might be at a different point on the serverless adoption spectrum.

In situations where the foundation we helped set doesn't fit, we encourage client development teams to give us that feedback so we can create additional patterns for future use based on their use case. This allows us to continuously improve the blueprints, and it's critical to having developers still use them 3-6 months from now. If we are not actively working to keep things up to date, they will fall out of use and eventually be forgotten, which means that even if 90% of a service foundation exists, it will be rebuilt from scratch, and that's a tragedy.

Ultimately a pattern may not get you 100% of the way, but in many cases it will take you from a high-level use case to making deployments in minutes rather than hours or days. Past that point, it's up to the developers to build on the pattern.

There isn't a one-size-fits-all approach, and each client might be at a different point on the serverless adoption spectrum

And the beautiful thing about the collaboration between Serverless Guru and our clients is the combination of domain expertise in serverless best practices and domain expertise in how things work today at the client. When these two parties work in harmony, the result is highly tailored patterns that extend past what either could do in isolation and, most importantly, further buy-in from skeptics.

In a lot of cases, Serverless Guru acts as the calmer of worries and the bringer of confidence in the approach. Basically, when doubts arise, it's nice to have someone in your corner who knows it can work and knows how it can work. That extra boost of motivation and energy from having someone reassure doubts can make the difference, as it keeps everyone focused on moving the ball forward instead of getting stuck in a loop.

Let's keep going.

Cloud Center of Excellence

In the last article, "Serverless Adoption: Cloud Center of Excellence", we talked about "bottom-up support"; check that article out here. For bottom-up support to be successful, you need the entire development team to share knowledge back and forth. That takes leadership understanding that to build for the long term, some short-term goals need to be adjusted. This is why we have both leadership and individual contributors on the cloud center of excellence team.

This is often where we find the most friction. Without leadership buy-in, we may not be able to make a case for spending time feeding the system (e.g., templates, knowledge sharing, etc.). It seems obvious: if you invest in long-term activities that speed up development today, you reap exponential benefits downstream. Leadership may agree to it informally, but without the proper gates set up to make time for it, it won't happen.

What gets scheduled gets done - Michael Hyatt

Leadership has real concerns to think about. They may not understand the concept of slowing down to speed up, and in most cases they will need to be told by the development team, on an individual basis, when they need additional time to build for the future, not just for today.

Unfortunately, developers at most companies can feel like their opinion on this will fall on deaf ears, which means it's critical for leadership to create processes for asking their developers how future development could be sped up and what is missing from the development process.

There is a book Alex Debrie recommended to me called Ask Your Developer, by Jeff Lawson, and it goes quite in depth on the power that asking your developers can unlock. A lot of traditional companies create silos around who can and can't recommend improvements or dictate direction. We can take this other seemingly obvious concept (ask the person who builds the stuff how it can be built better or improved) and apply it to our serverless adoption strategy, incorporating it as a core principle of how the cloud center of excellence team should operate. If you haven't read our article "Serverless Adoption: Cloud Center of Excellence", check it out for more context.

Conclusion

When we think about serverless adoption, we often think purely about the tech side of things. However, I believe that with such a revolutionary change in how we build software, we can also address other areas that often plague traditional companies, such as gatekeeping and a lack of long-term vision.

None of the topics I've covered so far are new. They have been talked about for a long time but, by and large, still persist. We upgrade our tech, but what about everything around the tech?

This is why I believe we can treat the adoption of serverless as a clean slate: a chance to write a different story and address problems that have been sidelined for years. Serverless adoption in itself will have a positive impact, and that's already huge, but if we want to maximize our ability to compete in the marketplace, we need to address all of the bottlenecks. Otherwise, we end up with a Ferrari that can only drive 20 mph because everything surrounding the Ferrari is restrictive and unaddressed.

So as we move from the Toyota to the Ferrari, let's spend time making sure all the supporting infrastructure is in place to fully utilize the power of the Ferrari. Otherwise, as a poker player might say, "we are leaving some value on the table".

Thanks for reading and if you found any of this useful, I'd love to hear about it on Twitter @ryanjonesirl.
