Serverless Development

Securing Against Supply Chain Disruption

An Exercise in Risk Management

Paul Chin Jr.
June 4, 2022

A Normal Morning

It’s a quiet weekday morning. You’ve just pushed a commit and take a break to doom-scroll on Twitter. Suddenly, a vulnerability in a popular open source package is trending. Frantically, you try to make sense of the attack and how it affects you. This time, you’re not affected, and you heave a sigh of relief. With a clearer mind, you take stock and review your code. How do we make the next incident a little less hectic?

Sharing Responsibility

Security is everyone’s responsibility. AWS shares this responsibility by providing security “of the cloud”, while we are responsible for security “in the cloud”. AWS takes care of securing the cloud infrastructure itself: the physical machines and their exposed APIs. It’s up to us to secure the rest.

As serverless developers, we opt in to insulate ourselves from the toil of managing hardware and other undifferentiated heavy lifting. We get to focus on building software in the domain that matters to our users. It’s the difference between spending time configuring and patching your environment and spending time understanding your users.

Our first task is to operate our environment with zero trust and to grant resources the least privileges they need. Zero trust environments authenticate and verify every request, regardless of origin. Our solution architectures assume that attacks from within the network are just as likely as external attacks. Even after a request is authenticated and verified, we restrict its access to the bare minimum, or least privilege. These practices minimize the blast radius of an attack and integrate well with architectures composed of smaller independent services. When an attacker does gain access to our system, they are prevented from going further. This security posture reinforces resiliency by anticipating that any resource can be corrupted, disposed of, and rebuilt through a managed process.
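As a concrete sketch of least privilege, here’s what scoping a policy down to exactly what one function needs can look like. The table name, actions, and account ID below are hypothetical, and this is only a minimal illustration of the idea, not a template for production use:

```python
import json

def least_privilege_policy(table_arn: str) -> str:
    """Build an IAM policy granting only the actions one function needs.

    The actions and resource here are illustrative; the point is to
    enumerate exactly what the function does and nothing more, instead
    of reaching for wildcards like "Action": "*".
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Only the two DynamoDB calls this function actually makes.
                "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
                # Only the one table it touches -- never "Resource": "*".
                "Resource": table_arn,
            }
        ],
    }
    return json.dumps(policy, indent=2)

# Hypothetical table ARN for illustration only.
print(least_privilege_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/orders"))
```

If this function is later compromised through a bad dependency, the attacker holds two DynamoDB actions on one table, not the keys to the account.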

Risky Business

Whether we run on containers, VMs, or Functions as a Service, and regardless of cloud provider, we are all vulnerable to supply chain attacks. In a supply chain attack, an attacker gains access through our least secure dependencies and build processes. We need to scrutinize the trust we place in code supplied to us.

Let’s think like an attacker! Here are the questions I ask myself:

  • What data is sensitive?
  • How can I access sensitive data?
  • How can I execute arbitrary code?
  • How can I control ingress and egress?

Twelve-Factor App is a methodology that we can apply to address all of the above. There’s even a serverless-specific version that I reference all the time. We need to recognize that an insecure cloud environment is especially dangerous because AWS offers us infrastructure superpowers through its APIs. Even read-only access gives an intruder plenty of room to scan for further vulnerabilities. And risks to your supply chain are even broader than a direct attack: open source maintainers can change their source at any time and cause a massive disruption to everyone who depends on their libraries.
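One Twelve-Factor practice that directly limits what leaks when code is stolen is keeping config in the environment rather than in the codebase. A minimal sketch, with hypothetical variable names:

```python
import os

def load_config(env=os.environ):
    """Read config from the environment (Twelve-Factor, factor III).

    The variable names here are hypothetical; the point is that
    stage-specific values and secrets never live in the repository,
    so a leaked codebase doesn't leak credentials with it.
    """
    try:
        return {
            "stage": env.get("STAGE", "dev"),  # optional, safe default
            "table_name": env["TABLE_NAME"],   # required: fail fast if absent
        }
    except KeyError as missing:
        raise RuntimeError(f"Missing required environment variable: {missing}")

# Simulated environment for illustration.
config = load_config({"TABLE_NAME": "orders-dev"})
print(config["stage"], config["table_name"])
```

Failing fast on a missing variable beats falling back to a baked-in default that quietly points at the wrong resource.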

Secure and Friendly

We’re tasked with being diligent over multiplying layers of complexity and with evaluating trade-offs between security and convenience for our users. Automation is one way to minimize those trade-offs. We can write Infrastructure as Code (IaC) that embeds our security posture into every environment we create. IaC also makes it easier to destroy corrupted resources and redeploy from a previous state tracked in version control. Making security friendlier for developers increases compliance and avoids shadow IT. We use automation to reduce manual inputs, increase security, and offer a friendlier developer experience.

One caveat: templates are quickly redistributed, and a single mistake can become your own supply chain nightmare. IaC and CI/CD pipelines should be required for operating in the cloud; however, build pipelines also carry unusually high permission levels to do what they need to do. It is crucial that we apply zero trust and least privilege to all of our automation code.

Friendly security practices start with our development environment and local tooling. We use IaC tools like Serverless Framework, Terraform, and Architect to make consistent deployments. We make sure we’re not hard-coding passwords in IaC templates, and instead use a secrets service like AWS Secrets Manager to securely hold key/value pairs. A secrets service also lets you vary values by stage environment and automatically rotate secrets. We can increase the trust factor of Lambda functions before deployment with AWS Signer, a managed code signing service. When it comes down to the code that we write, we should pin our dependencies, watch for security patches with a tool like Dependabot, and manage repository access.
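Pinning is easy to check mechanically. Here’s a toy linter for a Python requirements file that flags anything not pinned to an exact version; real tooling (lockfiles, pip-tools, Dependabot, `pip install --require-hashes`) goes much further:

```python
def unpinned(requirements: str) -> list[str]:
    """Return requirement lines that are not pinned with "==".

    A deliberately simple check: range specifiers like ">=" and bare
    names both float to whatever the registry serves at install time,
    which is exactly the opening a supply chain attacker wants.
    """
    flagged = []
    for line in requirements.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        if "==" not in line:
            flagged.append(line)
    return flagged

reqs = """\
boto3==1.26.0
requests>=2.0  # range specifier: could silently pull a bad release
pyyaml         # unpinned: resolves to whatever is newest
"""
print(unpinned(reqs))  # ['requests>=2.0', 'pyyaml']
```

A check like this fits naturally into a CI step, so a floating dependency fails the build instead of reaching production.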

The future will continue to expose new surface areas for attack. Within the Lambda environment alone, developers can use Layers to share dependencies across functions, attach networked storage, utilize an increasing amount of /tmp disk and memory, expose a function URL directly, and trigger execution from a long list of other AWS services. Each of these capabilities reduces isolation and gives attackers more entry points.

The next #vulnerability will come, but at least we can enjoy our cup of coffee first, knowing that the basics have been covered.