CloudFront can be simply defined as a CDN (Content Delivery Network) that caches your static assets in a data center nearer to your viewers. But CloudFront is a lot more complex and versatile than this simple definition suggests. CloudFront is a “pull” CDN, which means that you don’t push your content to the CDN. Instead, content is pulled into the CDN edge from the origin at the first request for any piece of content.
In addition to the traditional pull-and-cache usage, CloudFront can also be used as:
A Networking Router
A Firewall
A Web Server
An Application Server
Why is using a CDN relevant?
The main reason is to improve the delivery speed of static content. By caching content on the CDN edge, you not only cut the download time from a few seconds to a few milliseconds, but you also reduce the load and the number of requests on your backend (network, I/O, CPU, memory, …).
Static content can be defined as content that does not change between two identical requests made within the same time frame.
“Identical” can be as coarse as the same URI, or as fine-grained as matching down to the authentication header. The time frame can range from 1 second to 1 year. The most common case is caching resources like JavaScript or CSS and serving the same file to all users forever. But caching a JSON response tailored to a user (keyed on the authentication header) for a few seconds still cuts backend calls when the user has the well-known “frenetic browser reload syndrome”.
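As an illustration of that per-user, short-TTL case, here is a toy pull-through cache in Python. The cache key (URI plus authentication header) and the TTL are assumptions of this sketch, standing in for what you would configure in the CDN’s cache policy:

```python
import time

class EdgeCache:
    """Toy pull-through cache: entries are keyed on (URI, auth header)
    and live for `ttl` seconds. An illustration of the caching idea,
    not how CloudFront is implemented internally."""

    def __init__(self, origin, ttl=5):
        self.origin = origin        # callable, hit only on a cache miss
        self.ttl = ttl
        self.store = {}             # (uri, auth) -> (expires_at, body)

    def get(self, uri, auth=None, now=None):
        now = time.monotonic() if now is None else now
        key = (uri, auth)
        entry = self.store.get(key)
        if entry and entry[0] > now:
            return entry[1]                      # served from the edge
        body = self.origin(uri, auth)            # pulled from the origin
        self.store[key] = (now + self.ttl, body)
        return body

calls = []
def origin(uri, auth):
    calls.append(uri)
    return f"payload for {auth or 'anonymous'}"

cache = EdgeCache(origin, ttl=5)
cache.get("/me.json", auth="user-a", now=0)
cache.get("/me.json", auth="user-a", now=1)   # repeat within TTL: no origin call
cache.get("/me.json", auth="user-b", now=1)   # different auth header: separate entry
print(len(calls))  # 2 origin calls for 3 requests
```

Three requests, two origin calls: the user hammering reload only costs the backend one call per TTL window.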
Edges, Mid-Tier Caches, and Origins
CloudFront isn’t “just” some servers in data centers around the world. The service is a layered network of Edge Locations and Regional Edge Caches (also called Mid-Tier Caches).
Edge Locations are distributed around the globe with more than 400 points of presence in over 90 cities across 48 countries. Each Edge Location is connected to one of the 13 Regional Edge Caches.
Regional Edge Caches are transparent to you and your visitors: you can’t configure them or access them directly. Your visitors interact with the nearest Edge Location, which connects to its attached Regional Edge Cache and finally to your origin. Therefore, in this article, we will refer to CloudFront as the combination of Edge Locations and Regional Edge Caches.
What Have We Learned?
CloudFront is more than just a simple “pull-cache-serve” service
You improve delivery speed to your visitors
You can increase resilience by always using a healthy backend
You improve overall speed to your backend by leveraging AWS’s backbone
You can modify any request to tailor the response to your visitor’s device or region
You don’t always need a backend
You protect your backend by reducing the number of calls reaching it
In this article, we will talk about how to strengthen the security of your serverless application. Many different types of Distributed Denial of Service (DDoS) attacks target applications nowadays; one of them is the HTTP flood DDoS attack. This article explains how to prevent HTTP flood DDoS attacks on serverless applications to make your serverless security stronger.
What is a DDoS (Distributed Denial of Service) Attack?
A distributed denial-of-service (DDoS) attack is a malicious attempt to disrupt the normal traffic of a targeted server, service, or network by overwhelming the target or its surrounding infrastructure with a flood of Internet traffic. [1]
The attacker floods the target with fake requests; as a result, the target slows down or goes down completely and is no longer able to serve requests from valid users.
A Few Common DDoS Attack Types
HTTP Flood DDoS Attack
SYN Flood DDoS Attack
DNS Amplification DDoS Attack
What is an HTTP Flood DDoS Attack?
An HTTP flood DDoS attack loads a web application again and again from many different systems at once (sometimes referred to as a botnet). The huge number of HTTP requests flooding the servers consumes ever more resources until, in the end, the web application is not available to real users and denial of service occurs. In short, each HTTP request triggers an expensive database query (and other logical operations on the server), so when lots of HTTP requests hit the server at the same time, it goes down under the heavy resource consumption.
Preventing HTTP Flood DDoS Attack in Serverless Applications with AWS WAF
What is AWS WAF?
AWS WAF is a web application firewall that helps protect your web applications and APIs against common web exploits and bots. Attacks may affect availability, compromise security, or consume excessive resources. AWS WAF gives you control over how traffic reaches your applications by letting you create security rules that control bot traffic and block common attack patterns.
In the diagram below, we can see where AWS WAF sits in our serverless architecture. Basically, it’s our shield in front of all requests coming into our system. But don’t confuse this service with AWS Shield (lol, AWS has everything).
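A load test along these lines can be run with Artillery’s quick mode, assuming Artillery is installed; the endpoint URL below is a placeholder for your own API:

```shell
# 10 virtual users ("--count"), each sending 200 requests ("--num"):
# 10 x 200 = 2,000 requests against the protected endpoint.
# The URL is a placeholder for your own API Gateway endpoint.
artillery quick --count 10 --num 200 https://example.execute-api.us-east-1.amazonaws.com/prod/hello
```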
With this Artillery run, you send 2,000 requests to your API from 10 concurrent users, which triggers the rate-limit rule well within its 5-minute window. Once Artillery finishes its execution, calling the API again returns:
WAF Blocked:
{"message":"Forbidden"}
You can see the API response is Forbidden, as the request was blocked by AWS WAF. Your IP address will be removed from the blocked list once its request rate falls back below the limit.
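That block-then-unblock behavior is what a rate-based rule does: count each source IP’s requests over a trailing window, block the IP while it is over the limit, and let it through again once old requests age out. A minimal Python sketch of the idea (an illustration, not AWS WAF’s actual implementation):

```python
import time
from collections import defaultdict, deque

class RateBasedRule:
    """Toy model of a rate-based rule: block an IP once it has made more
    than `limit` requests within a trailing `window` of seconds."""

    def __init__(self, limit, window=300):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)   # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        q.append(now)
        while q and now - q[0] > self.window:   # age out old requests
            q.popleft()
        return len(q) <= self.limit

rule = RateBasedRule(limit=100, window=300)
print(all(rule.allow("198.51.100.7", now=i) for i in range(100)))  # True: under the limit
print(rule.allow("198.51.100.7", now=100))  # False: limit exceeded, blocked
print(rule.allow("198.51.100.7", now=500))  # True: old requests aged out of the window
```

The last call succeeds again because, 500 seconds in, every request older than the 300-second window has aged out, exactly like the IP dropping off WAF’s blocked list.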
As with Amazon API Gateway, you can also associate an AWS WAF Web ACL with regional resources such as an Application Load Balancer or AWS AppSync, and with global resources such as CloudFront distributions.
For CloudFront distributions, you need to create the Web ACL with the global (CloudFront) scope instead of the regional scope.
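As a sketch, creating a CloudFront-scoped Web ACL with the AWS CLI looks roughly like this; the name and metric name are placeholders:

```shell
# Web ACLs for CloudFront must be created with the CLOUDFRONT scope,
# which is only available from us-east-1; regional resources use REGIONAL.
# "my-cloudfront-acl" is a placeholder name.
aws wafv2 create-web-acl \
  --name my-cloudfront-acl \
  --scope CLOUDFRONT \
  --region us-east-1 \
  --default-action Allow={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=my-cloudfront-acl
```

Note that while regional resources are attached with `aws wafv2 associate-web-acl`, a CloudFront distribution is associated through the distribution’s own configuration (its Web ACL ARN) instead.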
Conclusion
Using AWS WAF as part of your serverless application architecture helps prevent HTTP flood DDoS attacks, and its many other security features make the serverless application more secure overall.
In this article we:
Gave a high-level overview of DDoS and HTTP flood DDoS attacks
Showed how to use AWS WAF to prevent this attack vector
Demonstrated how to load test our serverless application and watch requests get blocked past a specific threshold