Deep Dive Into Serverless

February 7, 2023
Ryan Jones
5 minutes to read

CloudFront can be simply defined as a CDN (Content Delivery Network), caching your static assets in a data center closer to your viewers. But CloudFront is a lot more complex and versatile than this simple definition suggests.
CloudFront is a "pull" CDN, which means that you don't push your content to the CDN. Instead, the content is pulled into the CDN edge from the origin at the first request for any piece of content.

In addition to the traditional pull-and-cache usage, CloudFront can also be used as:

  • A Network Router
  • A Firewall
  • A Web Server
  • An Application Server

Why is using a CDN relevant?

The main reason is to improve the speed of delivery of static content. By caching the content on the CDN edge, you not only reduce the download time from a few seconds to a few milliseconds, but you also reduce the load and the number of requests on your backend (network, I/O, CPU, memory, …).


Static content can be defined as content that does not change between two identical requests made within the same time frame.

"Identical" can be as simple as the same URI, or as fine-grained as matching down to the authentication header. The time frame can range from one second to one year.
The most common case is caching resources like JavaScript or CSS and serving the same file to all users forever. But caching a JSON response tailored to a user (via the Authentication header) for a few seconds reduces backend calls when the user has the well-known "frenetic browser reload syndrome".
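As an illustration of that second case, a user-specific JSON response can opt into a few seconds of caching by sending the right headers; a CDN configured to include the authentication header in its cache key will then absorb rapid reloads at the edge. This handler and its field names are hypothetical, a sketch rather than code from any specific service:

```typescript
// Hypothetical handler: a user-specific JSON response that a CDN may
// cache briefly, provided the cache key includes the Authorization header.
interface HttpResponse {
  statusCode: number;
  headers: Record<string, string>;
  body: string;
}

export function getProfileResponse(userId: string): HttpResponse {
  return {
    statusCode: 200,
    headers: {
      "Content-Type": "application/json",
      // max-age=5: caches may reuse this response for up to 5 seconds,
      // absorbing frenetic reloads without another backend call.
      "Cache-Control": "max-age=5",
      // Vary: the response differs per Authorization header value.
      "Vary": "Authorization",
    },
    body: JSON.stringify({ userId }),
  };
}
```
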

Edges, Mid-Tier Caches, and Origins

CloudFront isn’t “just” some servers in data centers around the world. The service is a layered network of Edge Locations and Regional Edge Caches (or Mid-Tier Caches).

Edge Locations are distributed around the globe with more than 400 points of presence in over 90 cities across 48 countries. Each Edge Location is connected to one of the 13 Regional Edge Caches.

Regional Edge Caches are transparent to you and your visitors; you can’t configure them or access them directly. Your visitors interact with the nearest Edge Location, which connects to the attached Regional Edge Cache and finally to your origin. Therefore, in this article, we will refer to CloudFront as the combination of Edge Locations and Regional Edge Caches.

What Have We Learned?

CloudFront is more than just a simple “pull-cache-serve” service:

  • You improve delivery speed to your visitors
  • You can increase resilience by always using a healthy backend
  • You improve overall speed to your backend by leveraging AWS’s backbone
  • You can modify any request to tailor the response to your visitor’s device or region
  • You don’t always need a backend
  • You protect your backend by reducing the number of calls reaching it

Painless hot-reload and debug locally with Serverless Framework, TypeScript, and Esbuild


Introduction

It’s so easy to code, bundle and debug with pure JavaScript, so why use TypeScript? It’s a controversial topic, but it’s not about JavaScript vs. TypeScript. I don’t want a flame war here, rather, I just want to ask you a simple question:

“Why don’t you use TypeScript with Serverless Framework?”

The answer I would expect is simply:

“I don’t want to use TypeScript”.

Unfortunately, the main reason people are not using TypeScript with Serverless Framework yet is:

“It’s hard to configure the bundling, run Lambdas locally with hot reload, and set up a debugging process.”

So let’s simplify this. What I intend to do with this article is pave a road for you to explore TypeScript by yourself, without the pain of configuring it. Let’s make a project so simple to configure with TypeScript that you’ll swear it’s as simple as JavaScript.

Are there other alternative templates for TypeScript + Serverless Framework? Absolutely, but for now don't worry about how many solutions exist. If you are not a TypeScript expert but use the solution I present in this article, you already have an advantage over JavaScript, and this one is very simple to use and understand.

Seriously, all you’ll want to do is run your Lambda locally, make changes in your code without restarting 'serverless offline', and debug your Lambda locally like this:

So, let’s move on and I will show you how to set up your project like this.

Serverless Framework JS vs. TS?

Understanding JS First

Before showing you how to configure your project in TypeScript, let’s think a little bit about how we use JavaScript:

  • We configure trigger handlers (for example, API Gateway) in 'serverless.yml', pointing them to the '.js' files;
  • In order to run Lambdas locally, we can use 'localstack' or 'sls invoke local --function functionName', but for this example I will use only 'serverless-offline';
  • For the packaging process we include/exclude some files as desired in 'serverless.yml';
  • We can run 'serverless package' or 'serverless deploy' and it’s all done, simple like that;
  • In order to debug on VS Code we create a '.vscode' folder with a 'launch.json' configuration file running 'serverless-offline' for example.

What’s the Difference When Using TS?

We have to transpile TypeScript into JavaScript and there are many ways of doing this.

  • There are many ways to transpile: 'tsc', 'tsup', 'tsx', 'ts-node', 'swc', 'esbuild';
  • Transpiling generates a build folder with the resulting '.js' files;
  • Instead of running only 'tsc', for example, you can combine it with 'webpack' for optimized bundles, minification, and dead-code removal;
  • So you configure an 'npm run build' script to run this build command;
  • You need to point all handlers to the '.js' files in the generated build folder.
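As a sketch of that manual setup (paths and patterns here are illustrative, assuming the build output lands in a '.build' folder), the handlers in 'serverless.yml' would reference the transpiled output rather than the sources:

```yaml
# Hypothetical manual setup without plugins: handlers point at the
# transpiled output, and packaging ships the build folder, not the sources.
functions:
  status:
    handler: .build/src/api/handler.handler  # the generated .js file
package:
  patterns:
    - '.build/**'   # include transpiled output
    - '!src/**'     # exclude the original .ts sources
```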

So, is it that simple? Not quite, because…

The Game Has Just Begun

⚔️ Round 1: Plugins, easier life

To simplify this configuration process we are going to use 'serverless-esbuild' to run the code and create the bundle. This plugin has the following advantages:

  • You can point your handlers to '.ts' files directly instead of needing to point to '.js' files in the build folder;
  • You don’t need to run an 'npm run build' script before 'serverless package' or 'serverless deploy', because this plugin bundles automatically before packaging;
  • 'esbuild' is written in Go and compiled to native code, so it is very fast;
  • It integrates very well with 'serverless-offline'.

So, let's configure it and include it along with 'serverless-offline' on your plugins inside 'serverless.yml':

  
--[[ serverless.yml ]]--

...

package:
  individually: true

plugins:
  - serverless-esbuild --[[ ⚠️ this needs to be before serverless-offline ]]--
  - serverless-offline

custom:
  esbuild:
    ... configuration here ...
  

You don’t need to point your handlers to '.js' in the build folder; use '.ts' instead in the source folder:

  
--[[ serverless.yml ]]--

functions:
  status:
    handler: src/api/handler.handler --[[ 😁 yeah, it's a .ts file in a handler ]]--
    events:
      - httpApi:
          path: /status
          method: get
  

The folder structure will look similar to this:

  
--[[ folder structure example ]]--

.esbuild/ -- autogenerated build folder
src/ -- source folder with ts files
  api/
    handler.ts
serverless.yml -- the serverless-esbuild configurations will be here
  

You can now use 'serverless offline start' to run your API Gateway Lambda locally.
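For completeness, a minimal 'src/api/handler.ts' for that status endpoint might look like this (a sketch; the article's actual repository may differ):

```typescript
// src/api/handler.ts — minimal handler for the /status httpApi event.
export const handler = async () => ({
  statusCode: 200,
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ status: "ok" }),
});
```

With 'serverless offline start' running, the endpoint is typically served at http://localhost:3000/status (serverless-offline's default port).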

⚔️ Round 2: Hot Reload 🌶️

Do you want to change your TypeScript code and see the effect while the 'serverless offline start' command is running, without restarting it, in order to enhance your productivity?

You need a watcher: when you save '.ts' files, it transpiles them into the build folder automatically.

For this sample project I used 'serverless-esbuild'; to activate the watcher, put this in 'serverless.yml':

  
--[[ serverless.yml ]]--

...

custom:
  esbuild:
    watch:
      pattern: 
        - 'src/**/*.ts'
  

Next, run 'serverless offline start' and start making changes to your code as an experiment.

Did it work? No?

Sorry, but 'serverless-offline' reuses your local Lambda. So, even with a watcher, your changes won’t take effect; but there is a way to skip this Lambda “cache”.

To skip this Lambda “cache” you have to run 'serverless offline start --reloadHandler' instead. Oh yeah, now we have hot reloading working!

⚔️ Round 3: Log Messages

When you get a 'throw new Error('message')', what happens? The printed stack trace shows the error in the generated '.js' files, but it’s a little too messy to actually help us understand anything about our code 😐.

We need to tell the Lambda which '.ts' file line/position corresponds to the generated '.js' code; the solution is a 'sourcemap'. Fortunately, the 'serverless-esbuild' plugin helps us with it:

  
--[[ serverless.yml ]]--

...

provider:
  name: aws
  runtime: nodejs18.x
  region: us-east-1
  httpApi:
    cors: true
  environment:
    NODE_OPTIONS: "--enable-source-maps" --[[ ✅ enabled lambda sourcemaps support ]]--

custom:
  esbuild:
    sourcemap: true --[[ ✅ enabled esbuild sourcemaps generation ]]--
    watch:
      pattern: 
        - 'src/**/*.ts'
  

Before:

  
--[[ error output ]]--

{
  "errorMessage": "message",
  "errorType": "Error",
  "stackTrace": [
    "Error: message",
    "at handler (/home/jeferson/Code/sls-esbuild-template/.esbuild/.build/src/api/handler.js:3810:9)",
    "at InProcessRunner.run (file:///home/jeferson/Code/sls-esbuild-template/node_modules/serverless-offline/src/lambda/handler-runner/in-process-runner/InProcessRunner.js:87:20)"
  ]
}
  

After:

  
--[[ error output ]]--

{
  "errorMessage": "message",
  "errorType": "Error",
  "stackTrace": [
    "/home/jeferson/Code/sls-esbuild-template/src/api/handler.ts:15",
    "throw new Error('message')",
    "^",
    "",
    "Error: message",
    "at handler (/home/jeferson/Code/sls-esbuild-template/src/api/handler.ts:15:9)",
    "at InProcessRunner.run (file:///home/jeferson/Code/sls-esbuild-template/node_modules/serverless-offline/src/lambda/handler-runner/in-process-runner/InProcessRunner.js:87:20)"
  ]
}
  

⚔️ Round 4: Debugging in VS Code

There are some points to consider when debugging:

  • Lambda has a 30-second timeout;
  • When you stop on a breakpoint to debug, you can easily spend more than 30 seconds there;
  • You are running '.js' code, but you need breakpoints to work on the '.ts' files.

If you want to start your server directly from VS Code and activate the debugging mode, you need to create a dev script on 'package.json' and a '.vscode' folder with a 'launch.json'. Here is my ready-to-use configuration for this:

  
--[[ package.json ]]--

{
  "name": "sls-esbuild-template",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "dev": "export SLS_DEBUG=* && serverless offline start --reloadHandler --noTimeout --stage local"
  }
}
  
  
--[[ .vscode/launch.json ]]--

{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "launch",
      "name": "Debug Serverless",
      "runtimeExecutable": "npm",
      "cwd": "${workspaceRoot}",
      "runtimeArgs": [
        "run-script",
        "dev"
      ],
      "sourceMaps": true,
      "outFiles": [
        "${workspaceFolder}/.esbuild/.build/**/*.js"
      ],
    }
  ]
}
  
  • The '--noTimeout' flag on 'serverless offline start' disables the 30-second Lambda timeout, nice, right?
  • '"outFiles"' tells the debugger to read the bundle folder to understand where the running code came from;
  • '"sourceMaps": true' makes your breakpoints work correctly on '.ts' files while running the '.js' code.

I bet you are happy now because you can run the Debug mode from VS Code:

⚔️ Round 5: Optimize your bundle

Minifying, removing dead code and excluding dependencies from packaging. Let’s go!

  • First point: 'esbuild', when running in bundle mode, enables tree shaking, a common compiler optimization that automatically removes unreachable/dead code;
  • Second point: a common case is that you need 'aws-sdk' when running a Lambda locally, but not when running it on AWS;
  • Third point: minifying code locally may confuse the debugging process.

So, here is my ready configuration for solving these 3 points:

  
--[[ serverless.yml ]]--

...

params:
  default:
    esbuildMinify: true
    esbuildExclude:
      - aws-sdk
  local:
    esbuildMinify: false
    esbuildExclude: []

custom:
  esbuild:
    bundle: true
    minify: ${param:esbuildMinify}
    sourcemap: true
    keepNames: true
    watch:
      pattern: 
        - 'src/**/*.ts'
    exclude: ${param:esbuildExclude}
  

That’s it. All you need to do is run 'serverless package' or 'serverless deploy', and the bundle will be created seamlessly, quickly, and well optimized.

Conclusion

Sometimes developers get discouraged from using TypeScript with Serverless Framework, and there are many reasons for this:

  • There are many possibilities for transpiling even a single '.ts' file. Transpilers, bundlers, webpack… Oh… It’s a configuration hell;
  • There are many more tutorials for setting up Serverless Framework with JavaScript than with TypeScript;
  • There is a lack of tutorials for setting up TypeScript not only for bundling but also for hot reloading and debugging, which is essential for developer survival.

In other words, this task should not be so complicated. What I intended to do here is remove the complexity barrier, so you can painlessly introduce yourself to Serverless Framework with TypeScript today.

Check out the repository I made for this article; it’s more complete than this little tutorial and is a ready-to-use template. Feel free to enjoy 😄.
