
AWS Lambda Images: How to Use Container Images with AWS Lambda Functions

December 19, 2021

Topics: Cloud Volumes ONTAP, AWS | Advanced | 8 minute read

As serverless has grown in popularity, so has Amazon’s serverless compute service, AWS Lambda. Given this prominence, AWS is giving a lot of support to the service, including the new capability to use container images to deploy Lambda functions.

In this blog, we’ll walk you through how to leverage container images to deploy AWS Lambda functions.

Read on below or jump down to How to Deploy Lambda Functions as Container Images.

A New Way to Use AWS Lambda: Container Images

Introduction to Lambda

Lambda is an event-driven, serverless compute service that enables teams to execute code for any backend service without having to deploy or configure their own servers. As one of its most essential features, Lambda reduces costs for running interactive backends and processing data at scale. With the rising adoption of serverless frameworks, AWS Lambda continues to be the most popular serverless computing platform, with 55% of organizations using it for their serverless workloads.

In its consistent push to support a seamless development and deployment platform for serverless workloads, AWS continues to offer new features and enhancements for Lambda. Recently, we covered the new support for using AWS EFS to share data across various Lambda functions, and how Cloud Volumes ONTAP extends the benefits of deploying NFS storage across a number of use cases.

AWS’ recent announcement of support for the deployment of Lambda functions using container images further expands the way users can take advantage of the service.

What Is a Lambda Container Image?

A Lambda container image is a package that bundles an operating system, the function handlers, necessary dependencies, and an implementation of the Lambda Runtime API required to run the code as a container in AWS Lambda. These packages allow development teams to deploy and execute arbitrary code and libraries in the Lambda runtime.

When creating a Lambda function, AWS supports the use of a container image as the deployment package. With this functionality, you can use the Lambda API or the Lambda console to create a function that is defined by the container image. Once the image is deployed, the underlying code can be further updated and tested to configure various Lambda functions.

Let’s take a look at the various methods you can follow, and the steps required to deploy Lambda functions using container images.

How to Deploy Lambda Functions as Container Images

In this section, we’ll go through the steps of creating Lambda container images.

Lambda Requirements for Using Container Images

To package Lambda code and dependencies in a container image, the following requirements should be met:

  • The image must be able to run on a read-only file system. Your function code can access a writable /tmp directory, which should be configured with at least 512 MB of storage.
  • The image should implement the Lambda Runtime API; AWS provides open-source runtime interface clients for this purpose. To make a base image compatible with Lambda, add the runtime interface client for your preferred runtime to the image.
  • Lambda’s default Linux user with the least privileges should be able to access all files needed to run the function code.
  • The base image must be Linux-based.
  • When using a multi-architecture base image, the Lambda function can target only one of the architectures.

AWS Lambda Images Deployment Methods

AWS supports multiple approaches for the creation of Lambda container images. These include:

  • Using the Serverless Application Model (AWS SAM) to define the container image that creates and deploys the function. This involves setting the function's PackageType to Image, then providing the container image URI in the AWS SAM template.
  • Creating images from AWS base images for Lambda. As Lambda supports multi-architecture and architecture-specific base images, you can use a specific image to preload runtimes and dependencies to create Lambda images.
  • Creating the Lambda container image using alternative base images. As Lambda supports base images from all Linux distributions, you can also use alternative base images to deploy Lambda functions.
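To illustrate the third approach, the Dockerfile for an alternative base image might look like the sketch below. This is a hedged example: it assumes the open-source aws-lambda-ric Node.js runtime interface client, and the exact install and entrypoint details can vary between distributions and client versions.

```shell
# Sketch: write a Dockerfile that makes a plain Node.js base image
# Lambda-compatible by adding the open-source runtime interface client.
# The package name (aws-lambda-ric) and paths are illustrative assumptions.
cat > Dockerfile.alt <<'EOF'
FROM node:14-buster
# Install the runtime interface client so the image implements the Lambda Runtime API
RUN npm install -g aws-lambda-ric
# Copy the function code into the image
COPY app.js /function/
WORKDIR /function
# Hand control to the runtime interface client, pointing it at the handler
ENTRYPOINT ["/usr/local/bin/aws-lambda-ric"]
CMD ["app.handler"]
EOF
```

The key difference from an AWS base image is that the runtime interface client must be installed and invoked explicitly, since alternative images do not ship with it.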

Deployment Steps

While there are three different approaches to deploying Lambda functions using a container image, as outlined above, for the purposes of this article we will use an AWS base image to deploy Lambda functions.

Prerequisites

  • Docker Desktop for Docker CLI commands
  • The AWS CLI for AWS service API operation calls
  • Function code
  • The machine's environment variables should include LAMBDA_TASK_ROOT and LAMBDA_RUNTIME_DIR. These point to where dependencies and function handlers are deployed so that the Lambda runtime can execute them when invoked.
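In AWS-provided base images these variables are already set; the values below are the defaults documented by AWS:

```shell
# Default locations inside AWS Lambda base images:
#   LAMBDA_TASK_ROOT   - where function code and dependencies are copied
#   LAMBDA_RUNTIME_DIR - where the runtime libraries live
LAMBDA_TASK_ROOT=/var/task
LAMBDA_RUNTIME_DIR=/var/runtime
echo "${LAMBDA_TASK_ROOT} ${LAMBDA_RUNTIME_DIR}"
```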

Procedure

  1. Starting on a local machine, make a new project directory which will be used by the new Lambda function. This exercise uses the lambda-test folder as the current directory. Within the directory, create a new folder named app.
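Step 1 can be sketched as the following shell commands:

```shell
# Create the project directory (lambda-test) and the app folder inside it,
# then make the project directory the current working directory.
mkdir -p lambda-test/app
cd lambda-test
```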
  2. Create the Dockerfile that adds the function handler code to the Lambda function’s directory.
    For a Node.js function, the Dockerfile will look similar to:
FROM public.ecr.aws/lambda/nodejs:14

COPY app.js package.json ${LAMBDA_TASK_ROOT}

RUN npm install

CMD [ "app.handler" ]

Quick Note: For a Dockerfile to work, there should be an app.js file for the function app and a package.json configuration manifest for Node.js packages in the app directory.
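As an illustration, a minimal hello world app.js matching the Dockerfile's CMD [ "app.handler" ] could be written as follows; this is a sketch, and any code conforming to the handler signature works:

```shell
# Write a minimal handler: "app.handler" in the Dockerfile CMD refers to
# the `handler` export of app.js.
cat > app.js <<'EOF'
exports.handler = async (event) => {
    return 'hello world';
};
EOF
```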

For this demonstration, we have used a hello world Node.js script in app.js, as shown in the output of the ls command:

Mode             LastWriteTime         Length Name
----             -------------         ------ ----
-a----       11/16/2021   1:28 PM      350 app.js
-a----       11/16/2021   1:26 PM      464 Dockerfile
-a----       11/16/2021 11:49 AM       220 Dockerfile.bak
-a----       11/16/2021   3:13 PM      147 index.js

 

  3. Run the docker build command next. Running this command builds an image from the Dockerfile; note the trailing dot, which tells Docker to use the current directory as the build context. For this demonstration, we named our image darwin.

$ docker build -t darwin .

  4. Start a container from the image using the docker run command:

$ docker run -p 9000:8080 darwin

This creates a container and maps port 9000 on the local machine to port 8080 inside the container, where the runtime interface emulator included in AWS base images listens.

  5. To verify the deployment of the Lambda container image, use the runtime interface emulator. This is done by posting an event to the container’s endpoint using the curl command.

$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'

This command invokes the function running in the container image, which returns a response:

hello world

  6. Next, authenticate the Docker CLI with the Amazon ECR registry. For systems that accept interactive login from TTY devices, this can be done by obtaining authentication credentials and piping them to the docker login command.

$ aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin 132053483863.dkr.ecr.us-east-2.amazonaws.com

Quick Note: If the client does not support interactive logins, first obtain the password.

$ aws ecr get-login-password

Then use it to log in with the command:

$ docker login -u AWS -p <password> https://132053483863.dkr.ecr.us-east-2.amazonaws.com

You will receive the following message upon successful login.

WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded

  7. Once you’re logged in, create an Amazon ECR repository using the following command:

$ aws ecr create-repository --repository-name darwin --image-scanning-configuration scanOnPush=true --image-tag-mutability MUTABLE

This creates an ECR repository named darwin, along with displaying the repository details.

Take note of the repository URI—in this case "132053483863.dkr.ecr.us-east-2.amazonaws.com/darwin"—as it will be used to tag and push the container images in the next steps.
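ECR repository URIs follow a fixed pattern, which is worth keeping in a variable for the tag and push steps; the account ID and region below are the ones used in this walkthrough:

```shell
# ECR URI pattern: <account-id>.dkr.ecr.<region>.amazonaws.com/<repository>
ACCOUNT_ID=132053483863
REGION=us-east-2
REPOSITORY=darwin
REPO_URI="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPOSITORY}"
echo "${REPO_URI}"
```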

  8. Once the repo is ready, tag the image using the command:

$ docker tag darwin:latest 132053483863.dkr.ecr.us-east-2.amazonaws.com/darwin:latest

  9. Then push the image to the repository using the command below:

$ docker push 132053483863.dkr.ecr.us-east-2.amazonaws.com/darwin

Once the image is published in the repository, the client displays the following information:

The push refers to repository [132053483863.dkr.ecr.us-east-2.amazonaws.com/darwin]
f42af8db8324: Pushed
180b071ee622: Pushed
2a710d85048d: Pushed
c7c0bc86ba5d: Pushed
33d73f3b21b2: Pushed
115e871ae9c2: Pushed
317d4532600e: Pushed
41bd29b368e7: Pushed
latest: digest:
sha256:bff77bf68fe2d16ead50a6735ae7454e43b84ea3ad9cdde446cf6eab4b6a7e88 size: 1998

Once done, you can also validate that the image was pushed to the ECR registry by logging in to the AWS console. The screenshot below shows details of our deployed image.

[Screenshot: the darwin image listed in the Amazon ECR console]

Once the container image is deployed and shows up in the repository, Lambda functions are ready to be configured.
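As a final hedged sketch, configuring a function from the pushed image comes down to an aws lambda create-function call with --package-type Image. The function name and IAM role ARN below are hypothetical placeholders, so the command is only assembled and printed here rather than executed:

```shell
# Hypothetical values: darwin-fn and the execution role ARN are examples only.
IMAGE_URI="132053483863.dkr.ecr.us-east-2.amazonaws.com/darwin:latest"
ROLE_ARN="arn:aws:iam::132053483863:role/lambda-execution-role"

# --package-type Image tells Lambda the deployment package is a container
# image in ECR rather than a .zip archive.
CREATE_CMD="aws lambda create-function --function-name darwin-fn --package-type Image --code ImageUri=${IMAGE_URI} --role ${ROLE_ARN}"
echo "${CREATE_CMD}"
```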

Conclusion

AWS supports multiple methods to deploy Lambda functions, including the use of base images in containers. These AWS Lambda images make it easy for development teams to deploy scalable serverless workloads that rely on varying dependencies.

Besides this, AWS also supports the use of custom runtimes by leveraging the Lambda Runtime API and a Lambda Extensions API for seamlessly integrating monitoring and security into an existing Lambda setup.


FAQs

Is Lambda a Container?

Lambda is a serverless platform that utilizes containers to operate and execute application code. In a typical Lambda ecosystem, containers enable isolation, immutability, and flexible control for Lambda functions. Once the container image is created, it is started as a container using a runtime such as Docker. The container can then be deployed to run functions on the Lambda runtime.

What is a Container Image on AWS?

Container images in AWS are used as base images for containerized workloads. These images include the code, operating system and other components that are used for creating deployment packages within local environments before they are pushed to container registries.

Bruno Almeida, Technology Advisor
