Lift and Shift to Lambda with Docker and Web Adapters

May 3, 2024

If you are looking to move to AWS Serverless as fast as possible, but all of your HTTP servers live in Docker container images and you have no idea how to start the migration, here is an easy way to lift and shift to AWS Lambda.

Hi, I'm Felipe. For the past five years I've worked as a serverless expert, helping many clients lift and shift to AWS Serverless, adopt best practices, and make better, smarter decisions about their architecture.

The problem

One of the most common scenarios I've faced is a client who wants to migrate their containerized HTTP servers to AWS Serverless as fast as possible. In this situation, there are two major options: AWS ECS Fargate or AWS Lambda containers.

But when should you use one or the other?

As with many aspects of software development, the choice depends on several factors. For the sake of simplicity, I will focus on what I consider the most crucial factors for decision-making in a lift and shift scenario, discuss the pros and cons of each option, and finally decide between the two AWS serverless container services: AWS Lambda containers and AWS ECS Fargate.

To determine the most suitable service, we need to answer a few questions. What are the traffic patterns: are they spiky, or do they remain relatively constant? Which web protocols does the server use: exclusively HTTP, or also other protocols such as gRPC and WebSockets?

Fargate vs. Lambda Containers

Fargate

Fargate is an amazing tool for serverless containers. For applications with constant traffic and few spiky loads, it generally offers better performance and cost than Lambda, and it supports a much wider range of protocols and customization, such as TCP, UDP, and SSH.

Taking all of that into consideration, if you have constant traffic with very predictable daily access, or if you use protocols other than HTTP, I would certainly recommend Fargate.

AWS Lambda Containers

But that is not the case for most startups out there. The most common scenario I've faced is a startup that has built its product as a simple Docker container with a small HTTP server and, because of its business model, has unpredictable traffic. They don't even know whether they will become a unicorn next year and have to scale up their servers, or whether they will pivot their product. That is why they look for a serverless solution that fits their needs and scales on demand without breaking the bank.

For this specific case, in a lift and shift scenario, I usually recommend AWS Lambda containers together with the AWS Lambda Web Adapter. The Lambda Web Adapter is an extension written in Rust by AWS Labs that translates incoming Lambda events into real HTTP requests against the web server running inside the container.

With that, any simple HTTP server in a Docker container can be deployed to a Lambda environment immediately!
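
One detail worth knowing up front: by default, the adapter assumes your server listens on port 8080 inside the container. If your server listens somewhere else, the adapter's documentation says you can point it at the right port with an environment variable in your Dockerfile, for example:

  
# Only needed when your server does NOT listen on 8080 (the adapter's default)
ENV PORT=7000
  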

Hands-On!

Let me walk you through a simple example of deploying three small HTTP microservices behind the same AWS API Gateway, using the Serverless Framework and three different languages: Go, Python, and JavaScript with Node.js.

After that, I will cover best practices and the next steps for your serverless microservices, using the Strangler Pattern to break those servers down into modular and more secure Lambda functions.

First, we need the AWS CLI, the Serverless Framework, and Docker installed.
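
To confirm the tooling is in place (the install links are in the references at the end of this post), each of these commands should print a version:

  
aws --version
serverless --version
docker --version
  

With those in place, we can create our serverless.yaml file.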

  
service: lambda-container-microservices
frameworkVersion: "3"

provider:
  name: aws
  runtime: custom
  ecr:
    images:
      go-ms:
        path: ./goLang
      javascript-ms:
        path: ./javaScript
      python-ms:
        path: ./python
  

Then we add three folders, one for each microservice: Go, Node.js, and Python.

Microservices folder architecture: three folders, each containing a Dockerfile and the server code (Python, Go, and Node.js).

In each folder, let's create a Dockerfile based on the language's Alpine image, copy in the Lambda Web Adapter, and set the start command.

Python

  
FROM python:alpine
# Copy the Lambda Web Adapter into the extensions folder so Lambda loads it
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.7.0 /lambda-adapter /opt/extensions/lambda-adapter
WORKDIR "/var/task"
RUN pip install flask
# Copy the application code
ADD . /var/task

CMD ["python3", "index.py"]
  

Node.js

  
FROM node:alpine
# Copy the Lambda Web Adapter into the extensions folder so Lambda loads it
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.7.0 /lambda-adapter /opt/extensions/lambda-adapter
# ENV PORT=7000  (only needed if the server listens on a port other than 8080)
WORKDIR "/var/task"
ADD package.json /var/task/package.json
ADD package-lock.json /var/task/package-lock.json
# Install production dependencies only
RUN npm install --omit=dev
ADD . /var/task
CMD ["node", "index.js"]
  

Go

  
# Build the binary in a separate stage
FROM golang:alpine as build

WORKDIR /build
# Copy module files (use the commented commands below if you don't have a go.mod yet)
COPY go.mod go.sum ./
COPY . .
ENV GO111MODULE=on
ENV GOPATH=""
# RUN go mod init example.com/goLangMicroservice
# RUN go get -u github.com/gofiber/fiber/v2
RUN GOOS=linux go build -o ./main main.go

# Start the Lambda container from a fresh image
FROM golang:alpine

WORKDIR "/var/task"
COPY --from=build /build/main /var/task

# Copy the Lambda Web Adapter into the extensions folder so Lambda loads it
COPY --from=public.ecr.aws/awsguru/aws-lambda-adapter:0.7.0 /lambda-adapter /opt/extensions/lambda-adapter

CMD ["/var/task/main"]
  

Now let's create a simple HTTP server in each language that returns a string from its endpoint.

Python

  
from flask import Flask, Blueprint

app = Flask(__name__)
api = Blueprint('app', __name__, url_prefix="/py/api")

@api.get("")
def hello():
    return "Hello from Python Microservice!"

if __name__ == "__main__":
    app.register_blueprint(api)
    app.run(port=8080)  # 8080 is the Lambda Web Adapter's default port
  

Node.js

  
const express = require("express");
const {Router} = require("express");
const app = express();


app.use(express.json());
const api = Router();

app.use("/js/api", api);
api.get("/", (req, res, next) => {
  return res.status(200).send(
  "Hello from javascript Microservice!",
  );
});
app.listen(8080);
  

Go

  
package main

import (
	"github.com/gofiber/fiber/v2"
)

func main() {
	app := fiber.New()
	api := app.Group("/go/api") 

	api.Get("/", func(c *fiber.Ctx) error {
		return c.SendString("Hello from Golang Microservice!")
	})

	err := app.Listen(":8080")
	if err != nil {
		panic(err)
	}
}
  
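
If you want to sanity check a server before deploying, you can build and run its container locally with Docker; extensions are only started by the Lambda runtime, so locally you hit the plain HTTP server directly. A quick sketch for the Python service (the image tag is arbitrary):

  
docker build -t python-ms ./python
docker run --rm -p 8080:8080 python-ms
# In another terminal:
curl http://localhost:8080/py/api
# Hello from Python Microservice!
  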

Now let's create the endpoints for each microservice using the {proxy+} path parameter, so API Gateway forwards anything after the /js/, /py/, or /go/ prefix to the corresponding container.

  
service: lambda-container-microservices
frameworkVersion: "3"

provider:
  name: aws
  runtime: custom
  ecr:
    images:
      go-ms:
        path: ./goLang
      javascript-ms:
        path: ./javaScript
      python-ms:
        path: ./python

functions:
  jsMicroservice:
    image:
      name: javascript-ms
    events:
      - http:
          path: /js/{proxy+}
          method: ANY
  pythonMicroservice:
    image:
      name: python-ms
    events:
      - http:
          path: /py/{proxy+}
          method: ANY
  goLangMicroservice:
    image:
      name: go-ms
    events:
      - http:
          path: /go/{proxy+}
          method: ANY
  

Done! Now let's deploy and see each microservice in action. Just run sls deploy in your terminal and watch the magic happen:

  
sls deploy
  

After the deployment, accessing the API Gateway base URL with /py/api returns the Python microservice's response, and the same works for /js/api and /go/api.
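
For example, with curl (the API ID, region, and stage below are placeholders; use the endpoint URL printed by sls deploy):

  
curl https://<api-id>.execute-api.<region>.amazonaws.com/dev/py/api
# Hello from Python Microservice!
curl https://<api-id>.execute-api.<region>.amazonaws.com/dev/js/api
# Hello from javascript Microservice!
curl https://<api-id>.execute-api.<region>.amazonaws.com/dev/go/api
# Hello from Golang Microservice!
  

When you are done experimenting, sls remove tears the whole stack back down.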

Response from the Python microservice: status 200, "Hello from Python Microservice!"
Response from the Node.js microservice: status 200, "Hello from javascript Microservice!"
Response from the Golang microservice: status 200, "Hello from Golang Microservice!"


Conclusion

We learned when to choose AWS Lambda containers versus AWS Fargate in a lift and shift scenario, and how to perform the migration using Docker and the AWS Lambda Web Adapter across several programming languages.

Next Steps and Best Practices

Now, what's next? It is not good practice for a Lambda function to carry many responsibilities, broad permissions, or an entire microservice; we only did that to ease the lift and shift process.

From here, we apply the Strangler Pattern and slowly break each endpoint out into its own Lambda function, following the principle of least privilege. This can be done by adding a new function that overrides the route for that path, as shown below.

  
// ./IsolatedJS/index.js: a plain Lambda handler that takes over a single route
module.exports.handler = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: "Use me for the Strangler Pattern",
    }),
  };
};
  
  
service: lambda-container-microservices
frameworkVersion: "3"

provider:
  name: aws
  runtime: custom
  ecr:
    images:
      go-ms:
        path: ./goLang
      javascript-ms:
        path: ./javaScript
      python-ms:
        path: ./python

functions:
  jsIsolatedFunction: # Use specific routes to override your Micro-service when you are using the Strangler Pattern
    runtime: nodejs20.x
    memorySize: 512
    handler: ./IsolatedJS/index.handler
    events:
      - http:
          path: /js/api/isolated
          method: GET

  jsMicroservice:
    image:
      name: javascript-ms
    events:
      - http:
          path: /js/{proxy+}
          method: ANY
  pythonMicroservice:
    image:
      name: python-ms
    events:
      - http:
          path: /py/{proxy+}
          method: ANY
  goLangMicroservice:
    image:
      name: go-ms
    events:
      - http:
          path: /go/{proxy+}
          method: ANY
  

Now you have a separate JavaScript endpoint with its own CPU, memory, environment, and permissions, following some of the best serverless practices. The next step is to keep doing this for every endpoint until the Strangler Pattern is complete.
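
To push the least-privilege idea further, each isolated function can also get its own IAM role. One common way to do that with the Serverless Framework is the serverless-iam-roles-per-function plugin, which lets you declare IAM statements per function. Here is a sketch of what that could look like for the isolated endpoint (the DynamoDB table is a hypothetical resource your endpoint might need):

  
plugins:
  - serverless-iam-roles-per-function

functions:
  jsIsolatedFunction:
    runtime: nodejs20.x
    handler: ./IsolatedJS/index.handler
    # Only the permissions this one endpoint actually needs (hypothetical example)
    iamRoleStatements:
      - Effect: Allow
        Action:
          - dynamodb:GetItem
        Resource: arn:aws:dynamodb:*:*:table/example-table
    events:
      - http:
          path: /js/api/isolated
          method: GET
  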

That's it for this blog post, folks! Thank you for your time, and leave a comment below if you have any questions!

Research/References

https://github.com/awslabs/aws-lambda-web-adapter

Install or update to the latest version of the AWS CLI - AWS Command Line Interface

Setting Up Serverless Framework With AWS

Install Docker Engine

GitHub repo with the example: https://github.com/felipegenef/simple-lambda-containers-microservices
