Choose Your Character: Same Service, Different Tools Part 4

May 15, 2024

Greetings Curious Humans! We’re back with another installment of “Same Service, Different Tools”. Parts 1 to 3 covered SAM, CDK, and Terraform. Today, we’ll explore Architect, a serverless framework focused on speed and developer agility. Architect is fully open-source and governed by the non-profit OpenJS Foundation. It sets itself apart by curating interfaces for a subset of AWS services that are critical to building web apps. In my opinion, it is the only serverless framework that offers a good local development workflow. We’ll continue using GitHub Codespaces as a base environment; you can fork my repo at https://github.com/pchinjr/serverless-file-upload to follow along.

Getting Started

You’ll need to create an admin IAM user and set your ~/.aws/credentials file:

  
[arc-admin]
aws_access_key_id=xxx
aws_secret_access_key=xxx
  

These are the credentials that the Architect CLI will use to deploy your project. One cool thing about Architect is that we’ll be able to build and test everything locally before we deploy!

To bootstrap a project, we just need to open a terminal and run npm init @architect serverless-upload-arc. This creates a folder called /serverless-upload-arc that contains our initial project structure. Similar to main.tf or template.yaml, you will find an app.arc file in the root. The .arc file is our declarative manifest. You’ll notice right away that it’s quite small. Resources are declared with pragmas and minimal configuration.
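
For orientation, the bootstrapped project looks roughly like this (a sketch; exact files can vary slightly by version):

  
serverless-upload-arc/
├── app.arc              # declarative manifest
├── package.json
└── src/
    └── http/
        └── get-index/   # folder name comes from the route: get /
            └── index.mjs
  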

  
@app
arc-example

@http
get /

@aws
# profile default
region us-west-2
  

app.arc resource manifest

Let’s break it down:

  • @app is the project namespace
  • @http declares API Gateway routes and methods. Each route has a corresponding Lambda handler located in /src/http/
  • @aws is the global AWS provider configuration

Next, let’s look at the Lambda handler code in /src/http/get-index/index.mjs:

  
// learn more about HTTP functions here: https://arc.codes/http
export async function handler (req) {
  return {
    statusCode: 200,
    headers: {
      'cache-control': 'no-cache, no-store, must-revalidate, max-age=0, s-maxage=0',
      'content-type': 'text/html; charset=utf8'
    },
    body: `
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Architect</title>
  <style>
     * { margin: 0; padding: 0; box-sizing: border-box; } body { font-family: -apple-system, BlinkMacSystemFont, sans-serif; } .max-width-320 { max-width: 20rem; } .margin-left-8 { margin-left: 0.5rem; } .margin-bottom-16 { margin-bottom: 1rem; } .margin-bottom-8 { margin-bottom: 0.5rem; } .padding-32 { padding: 2rem; } .color-grey { color: #333; } .color-black-link:hover { color: black; }
  </style>
</head>
<body class="padding-32">
  <div class="max-width-320">
    <img src="https://assets.arc.codes/logo.svg" />
    <div class="margin-left-8">
      <div class="margin-bottom-16">
        <h1 class="margin-bottom-16">
          Hello from an Architect Node.js function!
        </h1>
        <p class="margin-bottom-8">
          Get started by editing this file at:
        </p>
        <code>
          arc-example/src/http/get-index/index.mjs
        </code>
      </div>
      <div>
        <p class="margin-bottom-8">
          View documentation at:
        </p>
        <code>
          <a class="color-grey color-black-link" href="https://arc.codes">https://arc.codes</a>
        </code>
      </div>
    </div>
  </div>
</body>
</html>
`
  }
}
  

/src/http/get-index/index.mjs

This might look strange at first, but it is a Lambda function triggered by API Gateway that returns an HTML string to render in the browser. Our example service doesn’t have a front-end client, but we can use this default route to try out local development. From the project root /serverless-upload-arc, open a terminal and run npx arc sandbox. You should see Architect start a local server, and GitHub Codespaces will forward the port to a private URL. Ctrl+click on http://localhost:3333 and you should see the index page. We can update the Lambda handler with custom text like “Praise Cage!” in the <h1> element and see the changes in the returned HTML.

sandbox is a local server that emulates AWS services like API Gateway and even DynamoDB. It does all of this with Node.js and doesn’t require any extra Java runtimes or Docker containers. We can build incrementally and keep our iteration cycles short in the sandbox environment without waiting for a full cloud deployment.
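
One quality-of-life note: if port 3333 is already in use, sandbox can run on a different port (check npx arc sandbox --help for the flags your version supports):

  
npx arc sandbox --port 3334
  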

Our First Route and S3 Integration

Let’s add a POST /upload route to app.arc, make a new file /src/http/post-upload/index.mjs for our Lambda handler, and use two plugins for creating a public S3 bucket and a local S3 server to emulate our FileUploadBucket. Architect is extendable with plugins that hook into the sandbox lifecycle and can modify the CloudFormation output with custom resources. Under the hood, Architect compiles the .arc file into CloudFormation. Architect takes care of service discovery, IAM roles, code packaging, and service integrations. Check out the Architect playground to see a side-by-side comparison of how much CloudFormation you don’t have to write.
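
If you want to see exactly what gets generated for this project, arc deploy supports a dry run that compiles the manifest to CloudFormation locally (written out as sam.json) without deploying anything:

  
npx arc deploy --dry-run
  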

  
@aws
region us-east-1
profile arc-admin

@app
serverless-upload-arc

@http
get /
post /upload # new POST route

@plugins
architect/plugin-storage-public # enables public buckets
ticketplushq/arc-plugin-s3rver # runs a local s3 server

@storage-public
FileUploadBucket # creates a public bucket
  

app.arc

  
import awsLite from '@aws-lite/client'

// the s3rver plugin injects local endpoint config via ARC_S3RVER_CONFIG;
// when deployed to AWS, the variable is absent and the default config applies
const s3Config = JSON.parse(process.env.ARC_S3RVER_CONFIG || '{}')
const aws = await awsLite({ ...s3Config, plugins: [ import('@aws-lite/s3') ] })

export async function handler(req) {
    try {
        const body = JSON.parse(req.body);
        const decodedFile = Buffer.from(body.file, 'base64');
        const params = {
            "Body": decodedFile,
            "Bucket": process.env.ARC_STORAGE_PUBLIC_FILEUPLOADBUCKET,
            "Key": body.filename,
            "ContentType": body.contentType
        };
        await aws.s3.PutObject(params).then(() => console.log('congrats!'))
        return {
            statusCode: 200,
            body: JSON.stringify({ message: "Praise Cage!" }),
        };
    } catch (err) {
        console.error(err);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: "Error uploading file", error: err.message }),
        };
    }

}
  

/src/http/post-upload/index.mjs

Architect has a default folder structure that is significant: the handler code above will be invoked on the POST /upload route. Because Architect is optimized for small, fast functions, we can’t copy the previous handler code verbatim, but it is very similar. Differences include @aws-lite, a minimalist replacement for the AWS SDK v3, and local configuration for the S3 server. Let’s install these dependencies with npm from the project root; since the handler imports @aws-lite/client and @aws-lite/s3, install those as well:

  
npm i @architect/plugin-storage-public
npm i @ticketplushq/arc-plugin-s3rver
npm i @aws-lite/client @aws-lite/s3
  

Now we can start our local dev environment from the terminal with npx arc sandbox. You should see the startup output in the terminal.

In GitHub Codespaces, Ctrl+click on http://localhost:3333, and you’ll be greeted by the index route in the browser on a private URL. Since the preview URL generated by Codespaces is private by default, we’ll need to toggle it to public so we can send a curl command from the terminal. You can find this option under the Ports tab by right-clicking on port 3333 and selecting Port Visibility >> Public from the context menu.

Now we can issue a curl command to POST a Base64-encoded text file. Our example file contains the text “Praise Cage! Hallowed be thy name.” which becomes “UHJhaXNlIENhZ2UhIEhhbGxvd2VkIGJlIHRoeSBuYW1lLg==”.
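
If you want to produce the Base64 string yourself, a quick Node one-liner does the trick:

  
node -e "console.log(Buffer.from('Praise Cage! Hallowed be thy name.').toString('base64'))"
  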

  
curl -X POST https://{YOUR_CODESPACE_PREVIEW_URL}-3333.app.github.dev/upload \
     -H "Content-Type: application/json" \
     -d '{
    "filename": "example.txt",
    "file": "UHJhaXNlIENhZ2UhIEhhbGxvd2VkIGJ5IHRoeSBuYW1lLg==",
    "contentType": "text/plain"
}'
  

Now you should see an object appear on the local file system under serverless-upload-arc/buckets/FileUploadBucket/example.txt._S3rver_object.

Pretty neat! We have a serverless API running locally with an inspectable S3 server, and we’ve only written 13 lines of config and a standard Lambda handler. Thank Cage for open-source developers!

Adding DynamoDB and SNS

Architect also has runtime helper functions that normalize the data interfaces between services. We’ll be using @architect/functions to interact with SNS and DynamoDB. Install it now from the project root with npm install @architect/functions. When that’s complete, let’s add to our app.arc file:

  
@aws
region us-east-1
profile arc-admin

@app
serverless-upload-arc

@http
get /
post /upload

@events
write-metadata # adds SNS Topic

@tables
FileMetadataTable # adds Dynamodb table
  FileId *String # primary key
  UploadDate **String # sort key

@tables-indexes
FileMetadataTable # adds Global Secondary Index
  SyntheticKey *String # primary key
  UploadDate **String # sort key
  name byUploadDate # index name

@plugins
architect/plugin-storage-public
ticketplushq/arc-plugin-s3rver

@storage-public
FileUploadBucket
  

app.arc file
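
A quick note on the table design: every record will be written with the same SyntheticKey value ("FileUpload"), so the byUploadDate index effectively puts all items in a single partition that can be range-queried by UploadDate. A stored item ends up looking like this:

  
{
  "FileId": "example.txt",
  "UploadDate": "2024-05-02T16:02:11.226Z",
  "SyntheticKey": "FileUpload"
}
  

This single-partition pattern is convenient at our scale, though a high write volume would concentrate load on one partition key.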

Now we can add the Lambda handler invoked by the SNS Topic by creating a new file /src/events/write-metadata/index.mjs

  
import arc from '@architect/functions'

// arc.tables() discovers the generated DynamoDB table names for us
let client = await arc.tables()
let FileMetadataTable = client.FileMetadataTable

// arc.events.subscribe wires this handler to the write-metadata SNS topic
export const handler = arc.events.subscribe(snsHandler)

async function snsHandler(event) {
  await FileMetadataTable.put({
    FileId: event.key,
    UploadDate: new Date().toISOString(),
    SyntheticKey: "FileUpload",
  })
}
  

/src/events/write-metadata/index.mjs SNS Lambda handler
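
Note that arc.events.subscribe takes care of the SNS plumbing: snsHandler receives the published payload directly rather than the raw SNS envelope, so for the publish call we’ll add next, the event argument is simply:

  
{ key: 'example.txt' }
  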

Go back to /src/http/post-upload/index.mjs and insert the code to publish a payload to our SNS topic.

  
import awsLite from '@aws-lite/client'
import arc from '@architect/functions' // new import
const s3Config = JSON.parse(process.env.ARC_S3RVER_CONFIG || '{}')
const aws = await awsLite({ ...s3Config, plugins: [ import('@aws-lite/s3') ] })

export async function handler(req) {
    try {
        const body = JSON.parse(req.body);
        const decodedFile = Buffer.from(body.file, 'base64');
        const params = {
            "Body": decodedFile,
            "Bucket": process.env.ARC_STORAGE_PUBLIC_FILEUPLOADBUCKET,
            "Key": body.filename,
            "ContentType": body.contentType
        };
        await aws.s3.PutObject(params).then(() => console.log('congrats!'))
        
        // add this publish() method to send data to SNS
        await arc.events.publish({
            name: 'write-metadata',
            payload: { key: body.filename }
        })
        
        return {
            statusCode: 200,
            body: JSON.stringify({ message: "Praise Cage!" }),
        };
    } catch (err) {
        console.error(err);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: "Error uploading file", error: err.message }),
        };
    }

}
  

/src/http/post-upload/index.mjs updated to publish an SNS message

Go ahead and curl another example and you’ll see sandbox logging the events.

sandbox is doing its job: subscribing the SNS handler and publishing a payload from the POST /upload handler. In previous articles, we registered an S3 event to write metadata asynchronously; with Architect, we have a mechanism for setting up SNS declaratively with local emulation. This still fulfills our desired user experience: the metadata write happens asynchronously while the API returns a response immediately.

Querying Metadata From DynamoDB

The final feature we need to implement is a GET route to query the metadata saved to DynamoDB. You guessed it: we’ll add one line to the app.arc file, get /metadata under the @http pragma. Then create the Lambda handler code in /src/http/get-metadata/index.mjs.

Here’s our full app.arc file

  
@aws
region us-east-1
profile arc-admin

@app
serverless-upload-arc

@http
get / # we don't need this, but handy to make sure sandbox is working
post /upload
get /metadata # one line config

@events
write-metadata

@tables
FileMetadataTable
  FileId *String
  UploadDate **String

@tables-indexes
FileMetadataTable
  SyntheticKey *String
  UploadDate **String
  name byUploadDate

@plugins
architect/plugin-storage-public
ticketplushq/arc-plugin-s3rver

@storage-public
FileUploadBucket
  

app.arc complete file

  
import arc from '@architect/functions'

let client = await arc.tables()
let FileMetadataTable = client.FileMetadataTable

// arc.http normalizes the incoming request and outgoing response
export const handler = arc.http(ddbHandler)

async function ddbHandler(req) {
    try {

        // Extract query parameters from the event
        const startDate = req.queryStringParameters?.startDate; // e.g., '2023-03-20'
        const endDate = req.queryStringParameters?.endDate; // e.g., '2023-03-25'

        // Validate date format or implement appropriate error handling
        if (!startDate || !endDate) {
            return {
                statusCode: 400,
                body: JSON.stringify({ message: "Start date and end date must be provided" }),
            };
        }

        let queryResults = await FileMetadataTable.query({
            IndexName: 'byUploadDate',
            KeyConditionExpression: 'SyntheticKey = :synKeyVal AND UploadDate BETWEEN :startDate AND :endDate',
            ExpressionAttributeValues: {
                ":synKeyVal": "FileUpload",
                ":startDate": startDate,
                ":endDate": endDate
            },
        })

        return {
            statusCode: 200,
            body: JSON.stringify(queryResults)
        }
    } catch (err) {
        console.error(err);
        return {
            statusCode: 500,
            body: JSON.stringify({ message: "Error querying metadata", error: err.message }),
        };
    }
};
  

/src/http/get-metadata/index.mjs GET /metadata Lambda handler

Notice that we’re using the @architect/functions helpers to write to DynamoDB in the SNS handler and read from DynamoDB in the GET /metadata handler. Architect’s optimized dependencies keep our functions small and fast.
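
As a rough sketch of that helper surface (the key values here are illustrative):

  
import arc from '@architect/functions'

let client = await arc.tables()

// every table declared under @tables becomes a property on the client, with
// convenience methods that resolve the generated table name for you
await client.FileMetadataTable.put({ FileId: 'demo.txt', UploadDate: new Date().toISOString(), SyntheticKey: 'FileUpload' })
await client.FileMetadataTable.get({ FileId: 'demo.txt', UploadDate: '2024-05-02T16:02:11.226Z' })
  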

Let’s make sure sandbox is running with npx arc sandbox and then issue a GET request with curl. Remember to make the port public if you’re using Codespaces.

  
curl -X GET "https://{YOUR_CODESPACES_URL}-3333.app.github.dev/metadata?startDate=2024-04-01&endDate=2024-05-31"
  

You should see the results of the query returned:

  
{"ScannedCount":2,"Count":2,"Items":[{"FileId":"example.txt","UploadDate":"2024-05-02T16:02:11.226Z","SyntheticKey":"FileUpload"},{"FileId":"example.txt","UploadDate":"2024-05-02T16:03:09.402Z","SyntheticKey":"FileUpload"}]}
  

The Final Frontier - Deploying to AWS

We’ve been able to create a full serverless API locally, with an in-memory DynamoDB, S3 mocks, and SNS topics, but now it’s time to deploy. Are you ready?

npx arc deploy

That’s it!

As long as you have your ~/.aws/credentials set, Architect will generate a CloudFormation stack and deploy everything for you. Then you can point your curl commands at the new endpoints and inspect the live resources like S3 buckets and DynamoDB tables. Take note that your Lambdas are scoped to least privilege and carry minimal dependencies, and that the CloudFormation stack uses SSM Parameter Store for service discovery. We didn’t have to write integrations or IAM policies; Architect took care of all of that.
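
One deployment detail worth knowing: by default, arc deploy ships a staging stack; promoting to production is a separate flag:

  
npx arc deploy              # deploys the staging stack
npx arc deploy --production # deploys a separate production stack
  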

When you’ve had enough fun and want to clean up your AWS account, just use npx arc destroy --app serverless-upload-arc --force and follow the prompts. The --force option will empty S3 buckets for you and force the deletion of DynamoDB tables.

Conclusion and Comparisons

Architect offers a streamlined and highly efficient approach to building serverless applications on AWS, distinguishing itself with a focus on simplicity and minimal configuration. Here's how it stacks up against SAM, CDK, and Terraform:

  • AWS SAM (Serverless Application Model): SAM is deeply integrated with AWS and is ideal for developers looking for a straightforward, YAML-based configuration. While SAM is powerful for certain use cases, Architect simplifies the process even further with minimalist dependencies and rapid local iterations.
  • AWS CDK (Cloud Development Kit): CDK allows developers to define their cloud resources using familiar programming languages. This is powerful but can introduce complexity. Architect's use of simple declarative syntax in .arc files makes it more accessible for beginners or for projects where simplicity and speed are priorities.
  • Terraform: Unlike Architect, which is AWS-centric, Terraform provides a vendor-agnostic approach to infrastructure as code, supporting multiple providers. Terraform is suitable for complex environments that span multiple cloud services. However, for AWS-specific applications, Architect can be faster to set up and deploy due to its focused toolset.

Architect is particularly well-suited for developers who prioritize quick deployments and minimal setup. Its plug-and-play nature and efficient local development environment make it an excellent choice for full-stack serverless applications. In contrast, if your requirements include multi-cloud support or extensive custom resources, you might lean towards Terraform or CDK.

By choosing the right tool based on project needs, team skills, and the specific characteristics of each framework, you can maximize development efficiency and application performance.

Until next time, stay curious, and Praise Cage!

References

Official Architect docs: https://arc.codes/docs/en/get-started/quickstart

S3 Server plugin: https://github.com/ticketplushq/arc-plugin-s3rver

Public Bucket plugin: https://www.npmjs.com/package/@architect/plugin-storage-public

aws-lite: https://awslite.org/
