CloudFront can be defined simply as a CDN (Content Delivery Network): it caches your static assets in a data center closer to your viewers. But CloudFront is far more complex and versatile than that simple definition suggests. CloudFront is a "pull" CDN, which means you don't push your content to the CDN; the content is pulled into the CDN edge from the origin on the first request for each piece of content.
In addition to the traditional pull-and-cache usage, CloudFront can also be used as:
A Networking Router
A Firewall
A Web Server
An Application Server
Why is using a CDN relevant?
The main reason is to improve the delivery speed of static content. By caching content on the CDN edge, you not only cut download times from a few seconds to a few milliseconds, but you also reduce the load and the number of requests hitting your backend (network, IO, CPU, memory, …).
Static content can be defined as content that does not change between two identical requests made within the same time frame.
"Identical" can be as coarse as the same URI, or as fine-grained as matching all the way down to the authentication header. The time frame can range from one second to one year. The most common case is caching resources like JavaScript or CSS and serving the same file to all users forever. But caching a JSON response tailored to a user (keyed on the authentication header) for a few seconds still reduces backend calls when a user has the well-known "frenetic browser reload syndrome".
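To make this concrete, here is a minimal sketch (a plain Node.js origin; the routes and max-age values are illustrative, not a recommendation) of how an origin can signal both caching policies through Cache-Control headers:

// Hypothetical origin: long-lived static assets vs. short-lived per-user JSON.
const http = require("http");

http.createServer((req, res) => {
  if (req.url.startsWith("/static/")) {
    // Fingerprinted JS/CSS: cache "forever" at the edge and in browsers.
    res.setHeader("Cache-Control", "public, max-age=31536000, immutable");
    res.end("/* static asset */");
  } else {
    // Per-user JSON: a few seconds of caching absorbs frantic reloads.
    // CloudFront would also need the Authorization header in its cache key
    // so that users don't share each other's entries.
    res.setHeader("Cache-Control", "max-age=5");
    res.end(JSON.stringify({ user: "example" }));
  }
}).listen(8080);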
Edges, Mid-Tier Caches, and Origins
CloudFront isn't "just" some servers in data centers around the world. The service is a layered network of Edge Locations and Regional Edge Caches (also called Mid-Tier Caches).
Edge Locations are distributed around the globe, with more than 400 points of presence in over 90 cities across 48 countries. Each Edge Location is connected to one of the 13 Regional Edge Caches.
Regional Edge Caches are transparent to you and your visitors; you can't configure them or access them directly. Your visitors interact with the nearest Edge Location, which connects to its attached Regional Edge Cache and finally to your origin. Therefore, in this article, we will refer to CloudFront as the combination of Edge Locations and Regional Edge Caches.
What Have We Learned?
CloudFront is more than just a simple "pull-cache-serve" service
You improve delivery speed to your visitors
You can increase resilience by always using a healthy backend
You improve overall speed to your backend by leveraging AWS’s backbone
You can modify any request to tailor the response to your visitor’s device or region
You don’t always need a backend
You protect your backend by reducing the number of calls reaching it
One of AWS Lambda's most appealing features is its ability to integrate seamlessly with your favorite operational tools and services. Developers can use it to construct impressive cloud infrastructures, but they sometimes have trouble managing Lambda function errors, which can lead to application downtime. Thankfully, in this blog we will go through how to properly handle function errors as we cover the following items:
What is a Lambda function error?
How to gracefully handle a Lambda function error
Building the solution
What is a Lambda function error?
A Lambda function error occurs when your function's code throws an exception or returns an error object [1]. What are some possible causes of a function error? Imagine this scenario: you have a Lambda integration with Stripe to process payments. Your Lambda function is invoked and sends a payment request to Stripe, but the request fails because Stripe's service is down. While Stripe is quite reliable, this is just an example.
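As a quick illustration, here is a hypothetical Node.js handler module (not part of the solution we build below) showing two shapes a function error can take:

// Hypothetical handlers: both invocations end in a function error.
module.exports.throwing = async () => {
  throw new Error("Stripe is unreachable"); // an unhandled exception
};

module.exports.rejecting = async () => {
  return Promise.reject(new Error("payment declined")); // a rejected promise
};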
How to gracefully handle a Lambda function error?
One approach to handling a Lambda function error is to use Amazon Simple Queue Service (SQS). In the scenario described earlier, a queue would sit in front of the Lambda function and hold incoming payment jobs. If a payment fails because Stripe is down, the invocation errors, and SQS moves the failed message to a dead-letter queue according to a redrive policy. This mechanism allows Lambda to re-process failed jobs later. Isn't that cool?
Building the solution
Init Serverless Framework
We will use the Serverless Framework to build our infrastructure. The commands below create a Serverless app from the AWS Node.js template.
mkdir lambda-error-handling
cd lambda-error-handling
touch .env
sls create --template aws-nodejs
Configure AWS Credentials
Paste the following code into the '.env' file. Enter your personal AWS Access Key ID & Secret Access Key.
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
Define Resources
Standard Queue
Dead-Letter Queue
Lambda Function
Paste the following code into the 'serverless.yml' file. The 'deadLetterTargetArn' references the dead-letter queue where failed jobs will be stored. If a job fails and the receive count for a message exceeds 'maxReceiveCount', Amazon SQS moves the message to the dead-letter queue.
#######################################################
#                LAMBDA ERROR HANDLING                #
#                 Happy Coding!! ;))                  #
#######################################################
service: lambda-error-handling
frameworkVersion: "3"
# Allow usage of environment variables
useDotenv: true

provider:
  name: aws
  runtime: nodejs18.x
  stage: dev
  region: us-east-1
  stackName: lambda-error-handling-${sls:stage}

## Lambda Function
functions:
  Consumer:
    handler: handler.consumer
    name: ${self:provider.stackName}-Consumer
    events:
      - sqs:
          arn: !GetAtt SourceQueue.Arn

# CloudFormation resource templates here
resources:
  Description: >
    This stack creates a solution for how to handle Lambda Function errors
    using Amazon Simple Queue Service (SQS).
  Resources:
    # Standard Queue
    SourceQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:provider.stackName}-SourceQueue
        RedrivePolicy:
          deadLetterTargetArn: !GetAtt DLQueue.Arn # Defines where to store failed jobs
          maxReceiveCount: 1
    # Dead-Letter Queue
    DLQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:provider.stackName}-DLQueue
  Outputs:
    SourceQueueURL:
      Description: "Source Queue URL"
      Value: !Ref SourceQueue
      Export:
        Name: ${self:provider.stackName}-SourceQueue
    DeadLetterQueueURL:
      Description: "Dead-Letter Queue URL"
      Value: !Ref DLQueue
      Export:
        Name: ${self:provider.stackName}-DLQueue
Create code for Lambda
The function code is pretty basic. We are simulating a Stripe payment request/response: the function reads a message from the queue and runs a conditional check. If the message body is 'PYMT_FAILED', the function throws an error. Otherwise, if the message body is 'PYMT_SUCCEED', it runs as normal and returns a successful response.
Paste the following code into the 'handler.js' file.
module.exports.consumer = async (event) => {
  const { Records } = event; // gets the Record payload from the queue
  let body = Records[0]?.body; // get the message from the body
  if (body === "PYMT_FAILED") {
    throw new Error("Payment request failed");
  } else if (body === "PYMT_SUCCEED") {
    return {
      statusCode: 200,
      body: JSON.stringify("Payment request succeeded!"),
    };
  }
};
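Before deploying, you can sanity-check the handler locally with a hand-built, SQS-shaped event. A minimal sketch, assuming it is saved as 'test.js' next to 'handler.js':

// Run with: node test.js
const { consumer } = require("./handler");

consumer({ Records: [{ body: "PYMT_SUCCEED" }] })
  .then((res) => console.log("success path:", res));

consumer({ Records: [{ body: "PYMT_FAILED" }] })
  .catch((err) => console.error("failure path:", err.message));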
Deploy Infrastructure
sls deploy --verbose
Once the stack is successfully deployed, you can go to CloudFormation in the AWS Console and retrieve the URLs for the Source Queue and the Dead-Letter Queue from the stack's Outputs tab.
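If you prefer not to click through the console, here is a small sketch using the AWS SDK for JavaScript v3 that prints the same outputs (it assumes the default 'dev' stage, so the stack name is 'lambda-error-handling-dev'):

// Prints the stack outputs (the two queue URLs) defined in serverless.yml.
const { CloudFormationClient, DescribeStacksCommand } = require("@aws-sdk/client-cloudformation");

const client = new CloudFormationClient({ region: "us-east-1" });

client
  .send(new DescribeStacksCommand({ StackName: "lambda-error-handling-dev" }))
  .then(({ Stacks }) => {
    for (const output of Stacks[0].Outputs ?? []) {
      console.log(`${output.OutputKey}: ${output.OutputValue}`);
    }
  });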
Sending Messages to the Queue
We are simulating a failed Stripe payment response by sending a message to the queue. For the queue URL, use the Source Queue URL you retrieved above.
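Here is a minimal sketch using the AWS SDK for JavaScript v3 (the queue URL is a placeholder; substitute your own). A body of 'PYMT_FAILED' makes the function throw, so the message will end up in the dead-letter queue:

// Sends a simulated failed-payment message to the source queue.
const { SQSClient, SendMessageCommand } = require("@aws-sdk/client-sqs");

const client = new SQSClient({ region: "us-east-1" });

client
  .send(
    new SendMessageCommand({
      QueueUrl: "https://sqs.us-east-1.amazonaws.com/123456789012/lambda-error-handling-dev-SourceQueue", // replace with your Source Queue URL
      MessageBody: "PYMT_FAILED",
    })
  )
  .then(() => console.log("message sent"));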
SQS did not originally offer a way to trigger a queue redrive programmatically, so the simplest option is the AWS Console. Head over to the console, search for SQS, and select the Dead-Letter Queue we created earlier. Starting a redrive submits the message back to the original source queue, which triggers the Lambda function to re-process it.
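That said, SQS now exposes a StartMessageMoveTask API for exactly this. If your SDK version includes it, a sketch like the following should start a redrive from the dead-letter queue back to the original source queue (the ARN is a placeholder):

// Starts a DLQ redrive; messages move back to the queue they originally came from.
const { SQSClient, StartMessageMoveTaskCommand } = require("@aws-sdk/client-sqs");

const client = new SQSClient({ region: "us-east-1" });

client
  .send(
    new StartMessageMoveTaskCommand({
      SourceArn: "arn:aws:sqs:us-east-1:123456789012:lambda-error-handling-dev-DLQueue", // replace with your DLQ ARN
    })
  )
  .then(({ TaskHandle }) => console.log("redrive started:", TaskHandle));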
Conclusion
In this article, you have seen what a Lambda function error is and one approach to handling it properly using a dead-letter queue. As a next step, you can extend the architecture by adding alarms that fire when messages land in the dead-letter queue, i.e., when a message has exceeded the redrive policy's maxReceiveCount.
You can find the complete source code on my GitHub.
For any questions, comments, or concerns, please feel free to message me on LinkedIn or Twitter.