
Using ReScript to build applications on AWS Lambda

ReScript is a type-safe language that compiles to JavaScript, delivering blazingly fast compile times and efficient, readable JavaScript output without sacrificing usability.

It’s most often used to build front-end applications with React but can also be used to build type-safe applications with NodeJS.

In this tutorial I’ll show you how to get up and running quickly with an AWS Lambda application in ReScript. I assume you have some knowledge of ReScript and building for AWS, but in any case I’ll explain what is happening as I go through.

The full code accompanying this article can be found on GitHub.

Prerequisites

To begin, you’ll need to have installed:

  • NodeJS: this can be installed directly or using a tool like nvm to manage multiple NodeJS installations. I’ve assumed version 14.x here, which is the latest stable version at the time of writing
  • AWS SAM CLI: the Serverless Application Model (SAM) is one of several AWS-native ways to develop serverless applications.

You’ll also need to have an AWS account and have configured SAM with your AWS credentials.

Set up project

  1. In a new directory, run npm init to generate a package.json:

    mkdir rescript-lambda-example
    cd rescript-lambda-example
    npm init -y
  2. Next, install some of the modules we will need:

    npm i rescript esbuild npm-run-all

    ReScript is installed as an npm module. In addition, we’ll be using esbuild to bundle our code and npm-run-all to coordinate our shell scripts.

API Handler

AWS Lambda API handlers for NodeJS consist of a JavaScript file that exports a function of the form:

export const handler = async (event, context) => {
  // some handler code
}

where the event is structured based on the Lambda trigger, and the context provides information about the Lambda runtime environment (such as the request ID and, for API Gateway contexts, authentication information).
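As a concrete (plain JavaScript) illustration of those two arguments, here is a minimal sketch of a handler that echoes back the raw body along with the runtime request ID from the context; the response shape here is a made-up example, not the API Gateway format we use later:

```javascript
// A minimal Lambda-style handler (sketch): echoes the raw request
// body and the request ID that the Lambda runtime provides on the
// context object.
const handler = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({
      requestId: context.awsRequestId, // set by the Lambda runtime
      received: event.body ?? null,
    }),
  };
};
```

In a real module this function would be exported as handler, as in the skeleton above.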

Our Lambda will be used with API Gateway, which has events structured like this:

{
  "body": "{\"name\": \"Chris\"}",
  "isBase64Encoded": false,
  "headers": {
    "content-type": "application/json",
    "accept": "*/*"
  }
}

We will build a simple hello-world handler that accepts a JSON-encoded body containing an object with a single name field, and echoes it back in the response to the HTTP client.

The Code

First, we define the structure of our incoming request body and outgoing response body and serialisation functions for them.

src/NameMessage.res

/**
 * The type of the incoming body
 */
type nameMessage = {name: option<string>}

/**
 * The type of the response body
 */
type nameResponse = {message: string}

/**
 * Decode a JSON-encoded string using JSON.parse()
 * and assume it is the nameMessage type
 */
@scope("JSON")
@val 
external parseNameMessage: string => nameMessage = "parse"

/*
 * Stringify the name response message
 */
@scope("JSON") 
@val 
external stringifyResponse: nameResponse => string = "stringify"

These serialisers are just JSON.parse (mapped as parseNameMessage) and JSON.stringify (as stringifyResponse). These aren’t particularly type safe, and as you’ll see, we can easily cause runtime errors as they effectively cast whatever input the user provides into the nameMessage structure.

We’ll see how we can build type-safe encoders/decoders in a future post, but for now, wrapping the JSON API directly lets us get started quickly with our example.
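To see the problem concretely, here is what plain JSON.parse (which parseNameMessage wraps) does when the name field isn’t a string:

```javascript
// JSON.parse performs no structural validation: it returns whatever
// shape the caller sent, and the external binding simply asserts
// that it matches nameMessage.
const parsed = JSON.parse('{"name": {}}');

// Interpolating the non-string value coerces it at runtime instead
// of failing at compile time.
const message = `Hello, ${parsed.name}!`;
console.log(message); // "Hello, [object Object]!"
```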

Next, we’ll define the structure of the API Gateway event and the expected response from the Lambda event handler:

src/ApiGateway.res

/*
 * An API Gateway Lambda Event.
 *
 * (with a simplified interface with just what
 * we need for this example)
 */
type awsAPIGatewayEvent = {
  body: option<string>,
  isBase64Encoded: bool
}

/**
 * An API Gateway response type. In this we
 * serialise the response body and specify the
 * status code (with any headers, if necessary)
 */
type awsAPIGatewayResponse = {
  body: string,
  headers: Js.Dict.t<string>,
  statusCode: int,
}

These are based on the API Gateway Event structure we showed before.

Next, we’ll define some wrappers around NodeJS Buffer.from() and Buffer.prototype.toString() (which we’ll need to decode base64-encoded API Gateway bodies).
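These wrappers live in their own module; a minimal sketch of src/Buffer.res, covering only the two encodings this example needs (the exact variant labels are an assumption, chosen to match the Buffer.from and Buffer.toString calls in the handler below), might look like:

```rescript
/*
 * Minimal bindings to the NodeJS Buffer API (sketch).
 */
type t

/* Buffer.from(string, encoding) */
@scope("Buffer") @val
external from: (string, ~encoding: [#base64 | #utf8]) => t = "from"

/* buffer.toString(encoding) */
@send
external toString: (t, ~encoding: [#base64 | #utf8]) => string = "toString"
```

The polymorphic variants compile directly to the strings "base64" and "utf8", so no conversion code is emitted in the JavaScript output.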

Lastly, we use the above two files and define our handler:

src/Hello.res

open Belt.Option
open ApiGateway
open NameMessage

let handler = (event: awsAPIGatewayEvent): Js.Promise.t<awsAPIGatewayResponse> => {
  // Convert body to a UTF-8 string if base64-encoded
  let body =
    event.body
    ->map(body =>
      event.isBase64Encoded
        ? body->Buffer.from(~encoding=#base64)->Buffer.toString(~encoding=#utf8)
        : body
    )
    ->flatMap(body => body != "" ? Some(body) : None)

  // Parse the event body string using JSON.parse
  let message = map(body, parseNameMessage)

  // Extract the name field
  let name = flatMap(message, message => message.name)

  // Construct the response body
  let responseBody = {
    message: `Hello, ${getWithDefault(name, "there")}!`,
  }
  let response = {
    statusCode: 200,
    headers: Js.Dict.fromArray([("content-type", "application/json")]),
    body: stringifyResponse(responseBody),
  }
  Js.Promise.resolve(response)
}

Our handler starts by decoding the body (if it happens to be base64-encoded) and checking whether it is an empty string (converting it to None if so, to short-circuit the remaining code).

It then uses parseNameMessage to parse the JSON-encoded body string, and then extracts the name property.

(Note that we use map and flatMap (from Belt.Option) throughout, as the event.body value is optional.)

Lastly, we construct the response body object, and then stringify it as part of the API Gateway response (which also includes the Content-Type: application/json header and 200 status code).

Note that if the name value is not specified (i.e. it is the special value None), we use there as the default mapping. This is to account for when a body is not provided by the caller.

Configuration

We need to configure two aspects of this project: the ReScript build and the serverless application deployment.

Building your code with ReScript

The ReScript build is mostly boilerplate, but we’ll cover it here to identify the main customisation points.

The configuration I demonstrate uses a bundler, which many consider unnecessary for server-side NodeJS code, but which is a good idea in AWS Lambda environments: it drastically reduces both your Lambda start-up time and the time it takes to upload your code for deployment. It can also improve the overall performance of your Lambda, as there are far fewer filesystem reads when loading the required source files.

The bundler I’ve chosen is esbuild, which is a great complement to ReScript as it is also incredibly fast.

  1. Create a new file called bsconfig.json in the main project directory and fill it as follows:

    {
      "name": "rescript-lambda-example",
      "sources": { "dir": "src" },
      "package-specs": {
        "module": "es6-global"
      }
    }

    Breaking it down:

    • name - this is the name of the project, which is unused here because we are compiling an application, but in the case of libraries, it sets the namespace
    • sources - set the source directory (this can also be an array when there are multiple source directories)
    • package-specs.module - set the output to ES6 modules using import/export style, instead of the default ES5/commonjs output. This ensures that the bundler (esbuild) can tree-shake out unused functions from our imports.
  2. In the package.json, add the following new scripts:

    "scripts": {
      "build": "run-s build:rescript build:bundle",
      "build:rescript": "rescript build",
      "build:bundle": "esbuild --bundle --outdir=build --target=node14 --platform=node lib/js/src/Hello.js",

      "develop": "run-p develop:rescript develop:bundle",
      "develop:rescript": "npm run build:rescript -- -w",
      "develop:bundle": "npm run build:bundle -- --watch"
    }

    We’ve added two main scripts: build, which does a full build and bundle (for CI/CD and deployment), and develop, which uses a watch mode to continuously rebuild and rebundle. npm-run-all has been used to delegate to two other npm scripts and run them sequentially (run-s) or in parallel (run-p) as needed.

    Both scripts first compile the code using rescript (which outputs to lib/js) and then bundle it using esbuild, which outputs to the build directory.

    The ReScript documentation has more information on configuring bsconfig.json, and rescript --help and esbuild --help will tell you all you need to know about their respective options.

Setting up a deployment template

SAM uses an extension of AWS CloudFormation Templates for its configuration.

Create a new file called template.yaml in the root project directory with the following declarations:

Transform: AWS::Serverless-2016-10-31

Resources:
  IndexHandler:
    Type: AWS::Serverless::Function
    Properties:
      Handler: Hello.handler
      CodeUri: build
      Runtime: nodejs14.x
      Events:
        ApiEvent:
          Type: HttpApi
          Properties:
            Method: POST
            Path: /hello
Outputs:
  ApiGatewayURL:
    Value: !GetAtt ServerlessHttpApi.ApiEndpoint
    Export:
      Name: !Sub "${AWS::StackName}-url"

This short template:

  1. Generates a Lambda function from our build/Hello.js file (specifying the entry point as the handler function)
  2. Adds an HttpApi event trigger to the function for path /hello and HTTP method POST
  3. Implicitly generates an AWS API Gateway HTTP API with default deployment and stage (based on the HttpApi event - the logical ID of the API is ServerlessHttpApi)

Test Handler Locally

We can test our handler locally by using the sam local command (you will need to have Docker installed for this to work):

# Build the application
> npm run build
# Start a local HTTP server
> sam local start-api

SAM should spin up an HTTP server on localhost:3000. We can run a quick test using curl in another terminal session:

# Send a payload to curl (the POST HTTP method is implied)
> curl http://localhost:3000/hello --data '{"name": "Chris"}'
{"message":"Hello, Chris!"}%

Note that the first time you run this may take a while, as SAM will need to download and build a Docker image to run your Lambda within.

We can also see what happens when we don’t provide a request body:

> curl http://localhost:3000/hello -XPOST
{"message":"Hello, there!"}% 

We can also see that the JSON.parse wrapper is no more type-safe than plain JavaScript:

# Send a payload to curl (the POST HTTP method is implied)
> curl http://localhost:3000/hello --data '{"name": {}}'
{"message":"Hello, [object Object]!"}%

Deploy to the environment

The first step is packaging, which uploads your code and transforms the template to refer to its S3 location.

sam package --resolve-s3 --output-template-file template.deploy.yaml

Once that is done, we can run the deployment command. This will create a new CloudFormation stack containing your Lambda and API Gateway.

sam deploy \
  --template-file template.deploy.yaml \
  --resolve-s3 \
  --stack-name rescript-example \
  --capabilities CAPABILITY_IAM

The output should contain something like the following, which identifies the URL where your API is deployed:

CloudFormation outputs from deployed stack
---------------------------------------------------------------------------------------------------------------------------------------------------------
Outputs                                                                                                                                                 
---------------------------------------------------------------------------------------------------------------------------------------------------------
Key                 ApiGatewayURL                                                                                                                       
Description         -                                                                                                                                   
Value               https://2uhonte28e.execute-api.us-east-1.amazonaws.com                                                                         
---------------------------------------------------------------------------------------------------------------------------------------------------------

We can then use curl again to test our deployed API to verify it is working correctly:

> curl https://2uhonte28e.execute-api.us-east-1.amazonaws.com/hello -XPOST --data '{"name": "World"}'
{"message":"Hello, World!"}% 

Next Steps

In a future post, we’ll look at:

  • Improving JSON handling
  • Adding multiple endpoints
  • Making AWS API calls