Serverless C# - Lambda Layer and Serverless Framework

October 28, 2020

Working with Lambda lets developers focus on their code without worrying about the underlying infrastructure. Like any other infrastructure or methodology, serverless implies certain rules and best practices. Here we are going to talk about the DRY principle with Lambda.

Don't repeat yourself (DRY, or sometimes "do not repeat yourself") is a principle of software development aimed at reducing repetition of software patterns, replacing it with abstractions or using data normalization to avoid redundancy. (Wikipedia)

In this article we will explain how to use Lambda layers to share code between services. The motivation comes from the fact that the vast majority of examples are written in NodeJS. We will show a simple use case where two Lambda functions written in C# use the same class library deployed as a Lambda layer.

How Lambda works

In order to understand layers, we need to understand how Lambda works. We won't go deep into the Lambda runtime itself, as that is out of scope for this article; this is a high-level overview of the components that matter here.

Lambda functions are run in lightweight virtual machines, called microVMs, which combine the security and isolation properties provided by hardware virtualization technology with the speed and flexibility of containers.

Every time we execute a Lambda function, a microVM gets spawned (sometimes existing environments are reused), and then the code itself is executed. Firecracker keeps the memory overhead of each microVM under 5 MB, but your deployment package still has to be downloaded and unpacked into that environment: the larger the package, the longer the cold start.

If you want to find more information about Lambda runtime, have a look into Firecracker.

Problem

To keep cold starts short, we want to keep our deployment package as small as possible. This is particularly hard when we have binary files that we want to deploy with a Lambda.

The other problem that we might experience is code sharing between functions. Having a shared library in a monolith application is not an issue. You can load what you need, when you need it, because it is accessible from a single place. Because of the nature of Lambda functions we need to deploy all the code that Lambda is using to the environment.

Now imagine that you want to reuse parts of a helper library in your Lambda functions. How can you do it? Well, you can deploy the full library with the Lambda, or you can refactor the library and include only the parts you need within your Lambda.

That’s fine, but what if you need parts of that same code in another Lambda? You have to deploy it with your other functions too, which means copy-pasting code or playing with folder aliases. In both cases you are deploying the same code to multiple functions. Either way: bye, bye DRY.

You could also consider having a single Lambda with helper functions that your other Lambdas call, but that is not considered a best practice either: every helper call becomes a nested invocation with its own latency and cost.

Solution

Lambda layers to the rescue! We can publish binary files, or part of our code, to a Lambda layer. A layer takes a package and extracts its contents into the /opt folder of the runtime environment. You can then reference these files from any Lambda function that uses the layer.

Note that layers support resource-based policies, which means that you can grant cross-account permissions and use layers with Lambda functions from another account or organization.

It’s a good practice to keep infrastructure and application in separate stacks, and layers and Lambdas are no exception. Every time we deploy a layer, a new version of it is created, and it is hard to reference the current version because Lambda layers do not support a $LATEST alias. If we keep the stacks separate it is easier to track versions, or even to use Serverless Pro outputs to reference the correct one.
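As a sketch of the separate-stack approach (all names here are illustrative, not part of this example), the layer stack can export the versioned layer ARN as a CloudFormation output, and the function stack can import it:

```yaml
# layer stack's serverless.yml: export the layer version ARN
# ("LibLambdaLayer" is the logical ID the framework generates for a layer keyed "lib")
resources:
  Outputs:
    SharedLibLayerArn:
      Value: { Ref: LibLambdaLayer }
      Export:
        Name: shared-lib-layer-arn

# function stack's serverless.yml: import the exported ARN
functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Hello::Handler
    layers:
      - { Fn::ImportValue: shared-lib-layer-arn }
```

One caveat of this pattern: the export pins a specific layer version, so redeploying the layer stack changes which version downstream stacks pick up on their next deploy.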

The alternative is to use a single stack together with the serverless-latest-layer-version plugin, which essentially adds a latest-tag capability to the Serverless Framework. For the sake of simplicity, this is exactly what we are going to do in this example.
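A minimal sketch of that setup (the region and account ID in the ARN are placeholders; the layer name matches the one we define below):

```yaml
plugins:
  - serverless-latest-layer-version

functions:
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Hello::Handler
    layers:
      # the plugin rewrites the "latest" suffix to the newest published version at deploy time
      - arn:aws:lambda:us-east-1:123456789012:layer:my-shared-library:latest
```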

Create layer with Serverless framework

Creating, deploying and referencing layers with the Serverless Framework is incredibly easy. All you have to do is point to the location of your files on the local disk and give your layer a meaningful name.

Here is the relevant section from the serverless.yml file:

  
  layers:
    lib:
      path: ./my-layer-folder # path to layer contents on disk
      name: my-shared-library # deployed Lambda layer name
  

This takes the contents of “my-layer-folder” and deploys it as the “my-shared-library” layer. This step is the same for all supported languages, but for our example we will take a C# class library, publish it to a layer, and then reference that layer from our Lambda functions. Note that you can have multiple layers, so you can organize code to suit your needs.

Use layer in your lambda function

Now that we are all set, the question is: how do we reference the layer we created? The answer: even easier than publishing it. Here is the example:

  
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Hello::Handler
    layers:
      - my-shared-library-arn # ARN of the published layer version
  
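Since this example keeps the layer in the same stack, the hard-coded ARN can instead be a CloudFormation reference. The Serverless Framework names the generated resource by title-casing the layer key and appending LambdaLayer, so our layer keyed lib becomes LibLambdaLayer:

```yaml
  hello:
    handler: CsharpHandlers::AwsDotnetCsharp.Hello::Handler
    layers:
      # resolves to the ARN of the layer version deployed in this stack
      - { Ref: LibLambdaLayer }
```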

And that is all you have to do. Let’s say we have a C# class library compiled into a dll and deployed to our “my-shared-library” layer. That class library contains a “hi” method that we want to invoke. How can we do it?

Here is the code snippet:

  
  // Requires: using System.Reflection;
  Assembly myLib = Assembly.LoadFrom(@"/opt/layer.dll");
  var t = myLib.GetType("layer.Hello");
  var method = t.GetMethod("hi");
  var hello = System.Activator.CreateInstance(t);
  // Invoke the instance method "hi" on the object loaded from the layer
  method.Invoke(hello, null);
  
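For completeness, here is what the class library inside layer.dll might look like. The namespace layer, class Hello, and method hi are simply the names the reflection snippet above expects; they are illustrative, not prescribed:

```csharp
// Hypothetical contents of the class library published as layer.dll.
namespace layer
{
    public class Hello
    {
        // Instance method resolved and invoked via reflection by the consuming function.
        public void hi()
        {
            System.Console.WriteLine("Hi from the shared layer!");
        }
    }
}
```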

Conclusion

Lambda layers are a powerful feature that helps us keep our Lambda function packages smaller and share code between functions and services.

Smaller packages mean faster deployments and shorter cold starts. Use layers to share code and keep your functions DRY.

Full source code from this article is available on Serverless Guru GitHub at templates.serverlessguru.com.
