Serverless API Gateway Service Proxy

Reduce Costs & Improve Performance

By Marcelo Andrade
November 13, 2020

Introduction

When developers think about Serverless, Lambda is the first thing that pops into anybody's head. But did you know that it's possible to integrate some of the most widely used AWS products directly with API Gateway, without the need for a Lambda function?

In this article, we are going to show how to configure the serverless-apigateway-service-proxy plugin on top of the Serverless Framework, and instantly reduce your AWS costs while also improving your serverless application's performance.

First things first

Before we start, you need to know that this solution doesn't work with every product in the AWS stack. The plugin supports:

  • Kinesis Streams
  • SQS
  • S3
  • SNS
  • DynamoDB (PutItem, GetItem and DeleteItem)
  • EventBridge

Why you should know about this:

Besides the cost reduction of removing the need for a Lambda function in certain flows, which by itself would justify this solution, all AWS accounts have a Lambda limitation called "Unreserved Account Concurrency". This means you have a maximum number of concurrent invocations shared across all your functions. The default value is 1,000, and AWS will normally raise it to 20,000 if you ask. For bigger limits, you need to tell AWS which functions need the increase, and that can be hard to pinpoint when you are dealing with a whole serverless application.

By the way, here at Serverless Guru we use the Serverless Pro Dashboard and it's awesome. You can check more details right here: https://www.serverless.com/pro/
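If you want to see where your account currently stands, the AWS CLI can report these limits (the command below is standard; the actual values will vary per account and region):

```shell
# Prints AccountLimit.ConcurrentExecutions and
# AccountLimit.UnreservedConcurrentExecutions for the current region.
aws lambda get-account-settings
```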

Let's bring it to a real-world scenario:

Imagine that you have an application that uses DynamoDB as your data store and you built it on top of Lambda. For each endpoint, you would have a Lambda function that authenticates the request and performs some operation on the database, like in this example:

Considering that each Lambda in the diagram solves only one use case of your application and you are running with the default AWS limit (1,000), your application's scaling limits would be something like:

  • ~1000 users hitting the same endpoint at the same time;
  • ~500 users hitting 2 different endpoints at the same time;
  • ~333 users hitting 3 different endpoints at the same time;
  • ...
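The figures above are just the shared concurrency pool divided by the number of distinct endpoints being hit at once. A quick sketch of that back-of-envelope arithmetic:

```python
# Back-of-envelope: how many users each endpoint can serve concurrently
# when one unreserved concurrency pool is shared by every Lambda-backed
# endpoint in the account.
ACCOUNT_CONCURRENCY = 1000  # AWS default "Unreserved Account Concurrency"

def max_users_per_endpoint(distinct_endpoints: int) -> int:
    """Approximate concurrent users each endpoint can handle."""
    return ACCOUNT_CONCURRENCY // distinct_endpoints

for n in (1, 2, 3):
    print(n, max_users_per_endpoint(n))  # 1000, 500, 333
```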

Users who go beyond those limits will be throttled, will run into cold starts, and will not get the best experience they can from your application.

When should I consider using it?

If you have a Lambda function whose only responsibility is to call one of the listed services and return the response, and you are already having trouble with cold start latency, you should definitely consider using this plugin.

Let's do some code

I'm going to show how to use it with SQS, but you should check the plugin repository for many more examples: https://github.com/serverless-operations/serverless-apigateway-service-proxy.

First, run this command in your Serverless project:

  
  serverless plugin install -n serverless-apigateway-service-proxy
  

In your serverless.yml file, add the plugin to the plugins section:

  
  plugins:
    - serverless-apigateway-service-proxy
  

Then, in the custom section, configure your service proxy. In this case, a direct call to SQS:

  
  custom:
    apiGatewayServiceProxies:
      - sqs:
          path: /sqs
          method: post
          queueName:
          'Fn::GetAtt': ['SQSQueue', 'QueueName']
  

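For that Fn::GetAtt reference to resolve, the queue itself has to exist in the same stack. A minimal sketch, assuming you declare it in the resources section of the same serverless.yml (the logical ID SQSQueue must match the reference above; the queue name is illustrative):

```yaml
resources:
  Resources:
    SQSQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: my-service-proxy-queue  # illustrative name
```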
You can also enable CORS and add authorization with AWS_IAM, Cognito, or custom authorizer functions. You can check more details in the plugin's documentation.
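As a hedged sketch of what that can look like, the plugin's README documents cors and authorizationType options on each proxy entry (the values below are illustrative):

```yaml
custom:
  apiGatewayServiceProxies:
    - sqs:
        path: /sqs
        method: post
        queueName:
          'Fn::GetAtt': ['SQSQueue', 'QueueName']
        cors: true                    # enable CORS on the proxy endpoint
        authorizationType: 'AWS_IAM'  # require SigV4-signed requests
```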

Conclusion

You can get huge improvements with really low effort using this pattern, and you will be able to handle millions of requests without headaches or high costs.

That's it! Just get in touch if you have any questions. Thanks.