CloudFront can be simply defined as a CDN (Content Delivery Network) that caches your static assets in a data center closer to your viewers. But CloudFront is far more complex and versatile than this simple definition suggests. CloudFront is a “pull” CDN, which means you don’t push your content to the CDN: content is pulled into the CDN edge from the origin on the first request for any piece of content.
In addition to the traditional pull-and-cache usage, CloudFront can also be used as:
A Networking Router
A Web Server
An Application Server
Why is using a CDN relevant?
The main reason is to improve the delivery speed of static content. By caching content on the CDN edge, you not only cut the download time from a few seconds to a few milliseconds, but you also reduce the load and the number of requests on your backend (network, I/O, CPU, memory, …).
Static content can be defined as content that does not change between two identical requests made within the same time frame.
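For example, an origin can declare how long its responses stay static via the Cache-Control header, which CloudFront honors when deciding how long an edge may serve the cached copy before pulling from the origin again. A minimal sketch of a Lambda-style origin handler (the handler shape and max-age value are illustrative):

```typescript
// Sketch: an origin response that marks its body as cacheable.
// A max-age of 86400 lets an edge serve the cached copy for one day.
export const handler = async () => ({
  statusCode: 200,
  headers: {
    // "public" allows shared caches (such as a CDN edge) to store the response
    "Cache-Control": "public, max-age=86400",
  },
  body: JSON.stringify({ message: "this response is static for one day" }),
});
```

Every identical request within that day is then answered from the edge without touching the origin.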
Edges, Mid-Tier Caches, and Origins
CloudFront isn’t “just” a set of servers in data centers around the world. The service is a layered network of Edge Locations and Regional Edge Caches (also called Mid-Tier Caches).
Edge Locations are distributed around the globe with more than 400 points of presence in over 90 cities across 48 countries. Each Edge Location is connected to one of the 13 Regional Edge Caches.
Regional Edge Caches are transparent to you and your visitors: you can’t configure them or access them directly. Your visitors interact with the nearest Edge Location, which connects to the attached Regional Edge Cache and finally to your origin. Therefore, in this article, we will refer to CloudFront as the combination of Edge Locations and Regional Edge Caches.
What Have We Learned?
CloudFront is more than just a simple “pull-cache-serve” service
You improve delivery speed to your visitors
You can increase resilience by always routing requests to a healthy backend (origin failover)
You improve overall speed to your backend by leveraging AWS’s backbone
You can modify any request to tailor the response to your visitor’s device or region
You don’t always need a backend
You protect your backend by reducing the number of calls reaching it
“Why don’t you use TypeScript with Serverless Framework?”
The real reason, I suspect, that you don’t use TypeScript:
“I don’t want to use TypeScript”.
Unfortunately, the main reason people give for not using TypeScript with Serverless Framework yet is:
“It’s hard to configure the bundle, run lambdas locally with hot reload, and set up a debugging process.”
Seriously, all you’ll want to do is run your lambda locally, make changes in your code without restarting 'serverless offline', and debug your lambda locally like this:
So, let’s move on and I will show you how to set up your project like this.
Serverless Framework JS vs. TS?
Understanding JS First
We configure trigger handlers (for example, API Gateway) in 'serverless.yml', pointing them to the '.js' files;
In order to run lambdas locally, we can use 'localstack' or 'sls invoke local --function functionName', but for this example I will use only 'serverless-offline';
For the packaging process we include/exclude some files as desired in 'serverless.yml';
We can run 'serverless package' or 'serverless deploy' and it’s all done, as simple as that;
In order to debug on VS Code we create a '.vscode' folder with a 'launch.json' configuration file running 'serverless-offline' for example.
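Put together, a minimal JavaScript setup along those lines might look like this (service, function, and file names are illustrative):

```yaml
# serverless.yml — plain JavaScript setup, no build step needed
service: my-service

provider:
  name: aws
  runtime: nodejs18.x

plugins:
  - serverless-offline

functions:
  hello:
    # the handler points straight at a .js file
    handler: src/hello.handler
    events:
      - http:
          path: /hello
          method: get

package:
  patterns:
    - '!tests/**'   # exclude files from the deployment artifact
```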
What’s the Difference When Using TS?
There are many ways to transpile: 'tsc', 'tsup', 'tsx', 'ts-node', 'swc', 'esbuild';
It generates a build folder with the transpiled '.js' files;
Instead of running only 'tsc', for example, you can combine it with 'webpack' to optimize bundles, minify, and remove dead code;
So you configure an 'npm run build' script to run this build command;
You need to point all handlers to the '.js' files in the generated build folder;
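As a sketch, the extra build step can be wired into 'package.json' like this (script names are illustrative), with every handler in 'serverless.yml' then pointing into the build folder, e.g. 'build/src/api/handler.handler':

```json
{
  "scripts": {
    "build": "tsc",
    "package": "npm run build && serverless package",
    "deploy": "npm run build && serverless deploy"
  }
}
```

Forget to run the build before deploying and you ship stale code, which is exactly the kind of friction the plugins below remove.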
So, is it that simple? Wait, calm down, because…
The Game Has Just Begun
⚔️ Round 1: Plugins, easier life
To simplify this configuration process, we are going to use 'serverless-esbuild' to run the code and create the bundle. I see the following advantages in this plugin:
You can point your handlers to '.ts' files directly instead of needing to point to '.js' in the build folder;
You don’t need an 'npm run build' script when you run 'serverless package' or 'serverless deploy', because this plugin bundles automatically before packaging;
'esbuild' is written in Go and compiled to native code, so it is very fast;
It integrates very well with 'serverless-offline';
So, let's configure it and include it along with 'serverless-offline' in the plugins section of 'serverless.yml':
# serverless.yml
plugins:
  - serverless-esbuild # ⚠️ this needs to be before serverless-offline
  - serverless-offline
# ... configuration here ...
You don’t need to point your handlers to '.js' files in the build folder; point them to the '.ts' files in the source folder instead:
# serverless.yml
handler: src/api/handler.handler # 😁 yeah, it's a .ts file in a handler
The folder structure will look similar to this:
# folder structure example
.esbuild/       # autogenerated build folder
src/            # source folder with .ts files
serverless.yml  # the serverless-esbuild configuration will be here
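For reference, the plugin is configured under 'custom' in 'serverless.yml'; a minimal sketch (the option values are illustrative, see the serverless-esbuild README for the full list):

```yaml
# serverless.yml
custom:
  esbuild:
    bundle: true    # bundle each handler and its imports into one file
    minify: false   # keep output readable while developing
    target: node18  # match the provider runtime
```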
You can now use 'serverless offline start' to run your API Gateway Lambda locally.
⚔️ Round 2: Hot Reload 🌶️
Do you want to change your TypeScript code and see the effects while the 'serverless offline start' command keeps running, without restarting it, to boost your productivity?
You need a watcher: when you save a '.ts' file, it transpiles it into the build folder automatically.
For this sample project I used 'serverless-esbuild'; to activate the watcher, put this in 'serverless.yml':
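A minimal sketch of the watcher, assuming the plugin's 'watch' option (the glob patterns are illustrative):

```yaml
# serverless.yml
custom:
  esbuild:
    watch:
      pattern: 'src/**/*.ts'               # rebuild when a source file is saved
      ignore: ['.esbuild', 'node_modules'] # don't watch the build output
```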
Next, run 'serverless offline start' and start making changes to your code as an experiment.
Did it work? No?
Sorry, but 'serverless-offline' reuses your local lambda, so even with a watcher your changes won’t take effect. But there is a way to skip this lambda “cache”.
To skip this Lambda “cache” you have to run 'serverless offline start --reloadHandler' instead. Oh yeah, now we have hot reloading working!
⚔️ Round 3: Log Messages
When you 'throw new Error('message')', what happens? The printed stack trace shows the error in the generated '.js' files, but it’s a bit too messy to actually help us understand anything about our code 😐.
We need to tell the lambda which '.ts' line number/position corresponds to the generated '.js' files; the solution is a 'sourcemap'. Fortunately, the 'serverless-esbuild' plugin helps us with it:
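A sketch of what that could look like, assuming the plugin's 'sourcemap' option together with Node's source-map flag (both names are worth double-checking against your versions):

```yaml
# serverless.yml
custom:
  esbuild:
    sourcemap: true  # emit .map files next to the bundled .js

provider:
  environment:
    NODE_OPTIONS: '--enable-source-maps'  # let Node rewrite stack traces to .ts
```

With both in place, stack traces point at your '.ts' sources instead of the bundled output.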
⚔️ Round 4: Debugging
Two obstacles remain when debugging:
You can wait more than 30 seconds for a breakpoint to be hit;
You are running '.js' code, but you need your breakpoints to stop in the '.ts' files.
If you want to start your server directly from VS Code and activate debugging mode, you need to create a 'dev' script in 'package.json' and a '.vscode' folder with a 'launch.json'. Here is my ready-to-use configuration for this:
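A minimal sketch of such a setup (the script name, configuration name, and settings are illustrative): first a 'dev' script in 'package.json' such as `"dev": "serverless offline start --reloadHandler"`, then:

```json
// .vscode/launch.json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug serverless-offline",
      "runtimeExecutable": "npm",
      "runtimeArgs": ["run", "dev"],
      "sourceMaps": true,
      "console": "integratedTerminal"
    }
  ]
}
```

Pressing F5 then launches 'serverless-offline' with the debugger attached, and with sourcemaps enabled your breakpoints stop in the '.ts' files.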
It’s done! All you need to do is run 'serverless package' or 'serverless deploy', and it will be bundled seamlessly, quickly, and well optimized.
Sometimes developers get discouraged from using TypeScript with Serverless Framework, and there are many reasons for this:
There are many possibilities for transpiling even a single '.ts' file: transpilers, bundlers, webpack… Oh… It’s configuration hell;
There is a lack of tutorials on setting up TypeScript not only for bundling but also for hot reloading and debugging, which is essential for developer survival.
In other words, this task should not be so complicated. What I intended to do here is remove the complexity barrier so you can painlessly get started with Serverless Framework and TypeScript today.
Check out the repository I made for this article; it’s more complete than this little tutorial, and it’s a ready-to-use template. Feel free to use it and enjoy 😄.