CloudFront can be simply defined as a CDN (Content Delivery Network) that caches your static assets in a datacenter nearer to your viewers. But CloudFront is far more complex and versatile than this simple definition suggests. CloudFront is a “pull” CDN, which means you don’t push your content to the CDN; the content is pulled into the CDN edge from the origin at the first request for any piece of content.
In addition to the traditional pull-and-cache usage, CloudFront can also be used as:
A Networking Router
A Firewall
A Web Server
An Application Server
Why is using a CDN relevant?
The main reason is to improve the speed of delivery of static content. By caching the content on the CDN edge, you not only reduce the download time from a few seconds to a few milliseconds, you also reduce the load and the number of requests on your backend (network, I/O, CPU, memory, …).
Static content can be defined as content that does not change between two identical requests made within the same time frame.
Identical can be as simple as the same URI, or as fine-grained as matching down to the Authorization header. The time frame can range from 1 second to 1 year. The most common case is caching resources like JavaScript or CSS and serving the same file to all users forever. But caching a JSON response tailored to a user (keyed on the Authorization header) for a few seconds also reduces backend calls when the user has the well-known “frenetic browser reload syndrome”.
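These two extremes can be expressed with standard HTTP Cache-Control response headers, which CloudFront honors when deciding how long to keep an object at the edge (the values below are illustrative, not from the original article):

```
Cache-Control: public, max-age=31536000, immutable  <-- shared JS/CSS: same file for everyone, forever
Cache-Control: max-age=5                            <-- per-user JSON: cached for a few seconds
```

Note that caching a per-user response at the edge also requires including the Authorization header in the CloudFront cache key, so each user gets their own cached copy.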
Edges, Mid-Tier Caches, and Origins
CloudFront isn’t “just” some servers in datacenters around the world. The service is a layered network of Edge Locations and Regional Edge Caches (also called Mid-Tier Caches).
Edge Locations are distributed around the globe with more than 400 points of presence in over 90 cities across 48 countries. Each Edge Location is connected to one of the 13 Regional Edge Caches.
Regional Edge Caches are transparent to you and your visitors; you can’t configure them or access them directly. Your visitors interact with the nearest Edge Location, which connects to its attached Regional Edge Cache and finally to your origin. Therefore, in this article, we will refer to CloudFront as the combination of Edge Locations and Regional Edge Caches.
What Have We Learned?
CloudFront is more than just a simple “pull-cache-serve” service
You improve delivery speed to your visitors
You can increase resilience by always routing to a healthy backend
You improve overall speed to your backend by leveraging AWS’s backbone
You can modify any request to tailor the response to your visitor’s device or region
You don’t always need a backend
You protect your backend by reducing the number of calls reaching it
In this article, we will look at a use case showing how to Run Serverless Containers Using Amazon EKS and AWS Fargate.
Using Amazon EKS to run Kubernetes on AWS gives your team more time to focus on core product development instead of managing core Kubernetes infrastructure. Kubernetes on AWS scales well, is easy to upgrade, offers the AWS Fargate option to run serverless containers, and more.
The above architecture represents running Kubernetes on AWS using Amazon EKS.
What is Amazon EKS?
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that you can use to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or nodes. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. [1]
What is an Amazon EKS Cluster?
An Amazon EKS Cluster consists of two primary components:
The Amazon EKS Control Plane - Configure and Manage Kubernetes Services
Amazon EKS Worker Nodes - Configure and Manage User Applications
EKS provides different ways to configure the worker nodes that execute application containers: Self-Managed, Managed, and Fargate.
The EKS Cluster consists of these two components, deployed in two separate VPCs (an AWS-managed VPC for the control plane and your own VPC for the worker nodes).
AWS Fargate is a serverless, pay-as-you-go compute engine that lets you focus on building applications without managing servers. AWS Fargate is compatible with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). [2]
AWS recently announced that AWS Fargate delivers faster application scaling. Improvements made over the last year let you scale applications up to 16X faster and increase task launch rates, making it easier to build and run applications at a larger scale on Fargate.
Make sure to create an IAM user with programmatic access and the relevant credentials before running the AWS configuration step below. To learn how to create this AWS IAM user, see this article.
aws configure
AWS Access Key ID [None]: XXXXXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Default region name [None]: ap-south-1
Default output format [None]:
Create Amazon EKS Cluster
There are multiple ways to create an Amazon EKS Cluster (such as the AWS Console, the AWS SDK, and more).
Create an EKS Cluster with Fargate Nodes using the eksctl command below:
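As a sketch (not the article’s exact command: the cluster name is a placeholder, and the region matches the `aws configure` step above), the command might look like:

```shell
eksctl create cluster \
  --name sg-fargate-eks-cluster \
  --region ap-south-1 \
  --fargate
```

The `--fargate` flag creates a Fargate profile so that pods in the default and kube-system namespaces are scheduled onto Fargate instead of EC2 nodes.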
This single command provisions an EKS Cluster along with a VPC, subnets, IAM roles, route tables, a Fargate profile, Fargate nodes, and more via CloudFormation stacks.
The command takes 15-25 minutes, and your terminal will display all the resources created as part of the EKS Cluster.
Amazon EKS Cluster
Amazon EKS Cluster Fargate Profiles
CloudFormation Stack
Final Project Folders and Files Setup (For Reference)
Below is what your final project will look like after following each section of this article. I’m sharing it now so you can compare what I have with what you have as the article continues.
index.js <-- server
Dockerfile <-- instructions for container start-up
sg-sample-deployment.yaml <-- kubectl deploy spec
sg-sample-service.yaml <-- kubectl service spec
Create a NodeJs Application
Create a Node project folder, initialize npm, and install Express to run the server:
npm init
npm install express
Create an index.js file with the code below.
It creates a server that listens on port 80 using Express.
To deploy the application, we use kubectl, a CLI tool that communicates with the Kubernetes control plane.
Create Deployment
We create a Deployment-type Kubernetes workload to deploy the app.
sg-sample-deployment.yaml
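The manifest itself is not reproduced here; a minimal sketch consistent with the names, replica count, and port used in this article (the ECR image URI is a placeholder) might look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sg-fargate-eks-deployment
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sg-fargate-eks-app
  template:
    metadata:
      labels:
        app: sg-fargate-eks-app
    spec:
      containers:
        - name: sg-fargate-eks-app
          # Placeholder: substitute your ECR repository URI and tag
          image: <account-id>.dkr.ecr.ap-south-1.amazonaws.com/sg-fargate-eks-app:latest
          ports:
            - containerPort: 80
```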
This file creates a Deployment-type workload within the cluster. The container image referenced in this file is the one we pushed to the Amazon ECR repository in the previous steps. The container listens on port 80 for HTTP requests and runs as 1 replica.
The Deployment asks EKS to run 1 replica of the sg-fargate-eks-app container across the cluster. You can change that number on the fly, scaling the application up or down as needed. Running multiple replicas provides high availability, since pods are scheduled across multiple nodes; the replica count exists to keep the specified number of pod instances running in the cluster.
The above deployment file is used in the below command.
kubectl create -f sg-sample-deployment.yaml
Using kubectl, launch this Deployment. It creates a Deployment-type Kubernetes workload named ‘sg-fargate-eks-deployment’ in the ‘default’ namespace. The application is deployed to pods on Fargate-type nodes using the provided container image of the app.
The above image shows the EKS Kubernetes Deployment and its Pods.
The kubectl command below lists the pods running in the default namespace of your cluster.
kubectl get pods
NAME READY STATUS RESTARTS AGE
sg-fargate-eks-deployment-74b9d6f5b7-pzhh9 1/1 Running 0 25h
The kubectl command below lists the deployments running in the default namespace of your cluster.
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
sg-fargate-eks-deployment 1/1 1 1 25h
The above image shows an EKS Fargate node running the Pods.
Create Service
Exposing the internal components of a cluster directly to the outside world is not secure, so it is always better to put a Service in front of them.
For this, we will create a Kubernetes Service. Kubernetes supports different types of Services; see here for more details. For this article, we use a LoadBalancer-type Service.
Overall, we use Fargate for the nodes, an ELB to expose the Service to the outside world, and a VPC for networking inside the EKS cluster.
sg-sample-service.yaml
We create a Kubernetes Service of type LoadBalancer by creating the sg-sample-service.yaml file with the following contents:
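A minimal sketch of such a manifest, using the Service name and port that appear in the kubectl output below (the selector label is an assumption that must match the Deployment’s pod labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sg-fargate-eks-service
spec:
  type: LoadBalancer
  selector:
    app: sg-fargate-eks-app
  ports:
    - port: 80
      targetPort: 80
```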
The below command will create a Kubernetes service of the type LoadBalancer.
kubectl create -f sg-sample-service.yaml
Within a few minutes, the Kubernetes Service will be up and running and able to route traffic to the application. We can access the application using the Service’s “EXTERNAL-IP”.
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
sg-fargate-eks-service LoadBalancer 10.XXX.XXX.XX xxxxx.ap-south-1.elb.amazonaws.com 80:30953/TCP 28h
kubernetes ClusterIP 10.XXX.X.X <none> 443/TCP 31h
Above, the kubectl command shows the Kubernetes Service details along with its EXTERNAL-IP.
curl xxxxx.ap-south-1.elb.amazonaws.com
NodeJs App Running on Amazon EKS Fargate
The Service should be accessible from a browser or via command-line curl, as above, using the EXTERNAL-IP.
The kubectl and eksctl commands can also remove the resources created in this article: the Kubernetes Service, the Kubernetes Deployment, and the EKS Cluster itself.
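A sketch of that cleanup, assuming the manifest file names used earlier and a placeholder cluster name:

```shell
kubectl delete -f sg-sample-service.yaml      # removes the Service and its ELB
kubectl delete -f sg-sample-deployment.yaml   # removes the Deployment and its pods
eksctl delete cluster --name sg-fargate-eks-cluster --region ap-south-1
```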
Conclusion
To recap: running Kubernetes on AWS with Amazon EKS frees your team to focus on core product development instead of managing Kubernetes infrastructure, while offering good scalability, easy upgrades, and the AWS Fargate option for serverless containers.
Amazon EKS with AWS Fargate lets you run serverless containers. You can provision, manage, and deploy Amazon EKS resources using tools like eksctl, kubectl, and the AWS CLI.
This article covered the basic ideas around Amazon EKS with AWS Fargate, which should make it easy for you to explore further on your own.