CloudFront can be simply defined as a CDN (Content Delivery Network) that caches your static assets in a data center nearer to your viewers. But CloudFront is far more complex and versatile than that simple definition suggests. CloudFront is a “pull” CDN, which means that you don’t push your content to the CDN; instead, each piece of content is pulled into the CDN edge from the origin the first time it is requested.
In addition to the traditional pull-and-cache usage, CloudFront can also be used as:
A Networking Router
A Web Server
An Application Server
Why is using a CDN relevant?
The main reason is to improve the delivery speed of static content. By caching content on the CDN edge, you not only cut the download time from a few seconds to a few milliseconds, but you also reduce the load and the number of requests hitting your backend (network, I/O, CPU, memory, …).
Static content can be defined as content that does not change between two identical requests made within the same time frame.
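As an illustration, the origin controls how long CloudFront caches a response through standard Cache-Control headers. The helper below is a hypothetical sketch (the file extensions and max-age value are assumptions, not from this article):

```python
# Hypothetical origin-side helper: the Cache-Control header tells
# CloudFront (and browsers) how long a response may be cached.
def response_headers(path: str) -> dict:
    static = (".css", ".js", ".png", ".jpg", ".woff2")
    if path.endswith(static):
        # Static assets: cache at the edge for one day.
        return {"Cache-Control": "public, max-age=86400"}
    # Dynamic content: do not cache it anywhere.
    return {"Cache-Control": "no-store"}

print(response_headers("/assets/app.js"))  # {'Cache-Control': 'public, max-age=86400'}
```

With headers like these, the first request for an asset is pulled from the origin, and subsequent requests within the max-age window are served directly from the edge.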
Edges, Mid-Tier Caches, and Origins
CloudFront isn’t “just” some servers in data centers around the world. The service is a layered network of Edge Locations and Regional Edge Caches (or Mid-Tier Caches).
Edge Locations are distributed around the globe with more than 400 points of presence in over 90 cities across 48 countries. Each Edge Location is connected to one of the 13 Regional Edge Caches.
Regional Edge Caches are transparent to you and your visitors; you can’t configure them or access them directly. Your visitors interact with the nearest Edge Location, which connects to the attached Regional Edge Cache and finally to your origin. Therefore, in this article, we will refer to CloudFront as the combination of Edge Locations and Regional Edge Caches.
What Have We Learned?
CloudFront is more than just a simple “pull-cache-serve” service
You improve delivery speed to your visitors
You can increase resilience by always routing requests to a healthy backend
You improve overall speed to your backend by leveraging AWS’s backbone
You can modify any request to tailor the response to your visitor’s device or region
You don’t always need a backend
You protect your backend by reducing the number of calls reaching it
Lumigo is a platform that primarily focuses on debugging distributed serverless applications on the AWS cloud. Services like AWS X-Ray do a pretty good job of tracing requests within your application, but their support for event-driven systems isn’t quite there yet. X-Ray also falls short in piecing together fragments of chained transactions inside serverless architectures, because that requires you to navigate between CloudWatch and X-Ray to understand individual events.
Distributed applications are inherently complex, and because of that complexity, they have multiple points of failure. By tracing each and every request through your functions, Lumigo aims to simplify finding faults and figuring out fixes when breakdowns occur in your serverless architecture. In addition to analyzing code issues and performance hiccups, Lumigo’s insights let you plan the operational limits of your functions based on your usage, providing a foundation for optimizing costs. With Lumigo’s alerts, averting an impending system failure becomes possible, as long as the right course of action is taken based on the insights.
We’ll explore how the platform works with a simple Lambda function that converts an audio file to text using the Amazon Transcribe service.
To start using Lumigo, all you need to do is allow it to deploy a CloudFormation stack into your AWS account. Once the stack is deployed, you can select all the Lambda functions you want Lumigo to begin tracing automatically.
Lumigo’s auto-tracing works by adding a Lambda layer and environment variables to your Lambda function. In the unlikely event that you need to instrument functions manually, follow their documentation.
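Conceptually, such a layer wraps your handler so that timing and errors are recorded around every invocation. The decorator below is a simplified, hypothetical sketch of that idea, not Lumigo’s actual tracer:

```python
import functools
import time

def traced(handler):
    """Simplified sketch of what an auto-tracing layer does."""
    @functools.wraps(handler)
    def wrapper(event, context):
        span = {"start": time.time()}
        try:
            result = handler(event, context)
            span["status"] = "ok"
            return result
        except Exception as exc:
            span["status"] = "error"
            span["error"] = repr(exc)
            raise
        finally:
            span["duration_ms"] = (time.time() - span["start"]) * 1000
            # A real tracer would ship this span to its backend here.
    return wrapper

@traced
def handler(event, context):
    return {"statusCode": 200}
```

Because the wrapping happens in the layer, your handler code stays untouched, which is what makes the one-click auto-trace possible.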
Creating the Lambda Function:
To understand the platform, deploy a Lambda function using the following Python code.
After the function has been created, grant the associated function role full permissions for the S3 and Transcribe services (this is only for this sample application; as a general practice, grant only the permissions needed for the intended use).
This function is invoked whenever an audio file (.mp3) is uploaded to an S3 bucket. It converts the file’s content to text by calling the Amazon Transcribe service, which stores the output in another S3 bucket.
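Since the original code block is not reproduced here, the following is a hedged sketch of such a function based on the description above. The OUTPUT_BUCKET name is a placeholder, and deriving the transcription job name from the uploaded file’s name is an assumption (note that Transcribe only accepts job names made of characters in [0-9a-zA-Z._-]):

```python
import json
import time
import urllib.parse

# Placeholder: replace with the name of the output bucket you create.
OUTPUT_BUCKET = "your-transcribe-output-bucket"

def parse_s3_event(event):
    """Extract (bucket, key) from an S3 put-event record."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["object"]["key"])
    return bucket, key

def lambda_handler(event, context):
    import boto3  # boto3 ships with the Lambda Python runtime
    transcribe = boto3.client("transcribe")
    bucket, key = parse_s3_event(event)
    # Job name derived from the file name; Transcribe rejects names
    # containing characters outside [0-9a-zA-Z._-].
    job_name = key.rsplit("/", 1)[-1]
    transcribe.start_transcription_job(
        TranscriptionJobName=job_name,
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        MediaFormat="mp3",
        LanguageCode="en-US",
        OutputBucketName=OUTPUT_BUCKET,
    )
    return {"statusCode": 200, "body": json.dumps({"job": job_name})}
```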
Create an S3 bucket that will act as the trigger for this function and associate it from the function’s console view. Create another bucket to store the output from the Transcribe service, and add this bucket’s name to the above code block where indicated. (Ensure that both buckets are created in the same region as the function.)
Getting to Know Lumigo:
The dashboard view displays all the functions that Lumigo has picked up from your account.
Before we take a look at the other screens, upload an .mp3 file to the input bucket you created earlier. (A few deliberately erroneous invocations were made to generate sample data.)
From the left pane, navigate to Functions and filter for the function you created.
Function view: This view gives you a high-level overview of how your function is performing and the cost per invocation. Clicking on any invocation navigates to the transaction page, where you can understand what went on in that particular trace.
If you haven’t enabled Lumigo to begin tracing a function yet, the process is as simple as hitting Auto trace at the top of the screen. (Since I have already traced this function, that button is not shown.)
Issues view: This view shows all the issues your functions are facing in the account. Selecting any issue type opens the function view above, filtered by the chosen issue type. From those entries, you can then view each transaction individually and follow the course of that invocation.
Transaction view: This view lists all invocations of your functions and their metrics. I have filtered on our speech-to-text function to understand what went wrong and get a sense of why the invocations are failing to produce the expected result. Notice how each failed entry has an attached label indicating the cause of failure. Selecting any of the entries opens the transaction view below.
Transaction ID view: This view is where you truly get to know what Lumigo is capable of. From the log entries, it’s evident that the cause of failure is the filename not conforming to a pattern accepted by the Transcribe service.
Lumigo connects this view to the other views we have already looked at, so you can troubleshoot issues individually for each function call.
System map view: The system map view lets you visualize the services involved in an invocation. This is very useful when you have extensive integrations with third-party services and need to see how a trace flows between those services.
We’ve explored how simple it is to begin tracing Lambda functions with Lumigo. The console is intuitive, with each screen blending into the next for correlating metrics. There are also provisions to set up alerts for your functions so you can keep a close eye on them and fine-tune them depending on how frequently those alerts pop up. Alert notifications can be delivered to popular tools such as PagerDuty, Slack, and OpsGenie, to name a few. For better visibility, issues can be tracked by opening Jira tickets directly from Lumigo. Offering a complete package for serverless debugging and performance monitoring as a service, with a nimble approach, Lumigo is a compelling choice to consider when looking for a monitoring solution.
Serverless Guru is a serverless-first consulting company specializing in helping major brands in every step of their serverless journey.