AWS Lambda Function from Docker Image

Since December 1, 2020, AWS Lambda has allowed developers to run any Docker image as a Lambda function. This brings plenty of benefits and conveniences.

New for AWS Lambda – Container Image Support | Amazon Web Services

To me, the three most compelling reasons are:

Simplified Dependency Management. There is no longer a need to pack dependencies into a zip file. This was especially annoying for Java and Python.

Better Local Testing. Since this is Docker based, the code we run on our machine is exactly the same code that runs in AWS. If a dependency or runtime setting is misconfigured, the function blows up locally rather than later in prod.

Greater Size Limit. Instead of the 50 MB zip file limit, Lambda allows container images of up to 10 GB. This is great news for data scientists around the world deploying their 500 MB+ models.


At a high level, there are three steps:

  1. Create a Docker image that can run on Lambda
  2. Upload it to Amazon ECR
  3. Create the function

The following steps show the details for Python.

Find a base image

We could use any correctly built custom image, but it is far easier to base it off the Amazon-provided images. We end up with an image that behaves very close to how AWS runs zip-file deployments in production today (CloudWatch logging, metrics export, and all).

Deploy Python Lambda functions with container images - AWS Lambda
Deploy your Python Lambda function code as a container image using an AWS provided base image or the runtime interface client.

Create a custom image

First, create the app code. This should look exactly like a normal Lambda function: we need a handler function that accepts an event and a Lambda context and returns the function result.

def handler(event, context):
    return {"hello":"world"}
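Because the handler is plain Python, we can smoke-test it before any Docker is involved. A minimal check, calling the handler directly:

```python
# Same handler as above; the event and context arguments are unused here.
def handler(event, context):
    return {"hello": "world"}

# Invoke it directly -- no Lambda runtime needed for a quick sanity check.
result = handler({}, None)
print(result)  # {'hello': 'world'}
```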

Copy the function into our Docker image, and point CMD at it using the <file_name>.<function_name> convention.


CMD ["app.handler"]      
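Putting it together, a minimal Dockerfile based on the AWS-provided Python base image might look like this (the Python version tag here is an assumption; pick whichever version your code targets):

```dockerfile
# AWS-provided base image for Python Lambda functions (version tag assumed)
FROM public.ecr.aws/lambda/python:3.9

# Copy the handler into the directory the base image expects
COPY app.py ${LAMBDA_TASK_ROOT}

# <file_name>.<function_name>
CMD ["app.handler"]
```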

Build our image

$ docker build . -t lambda-docker-python

Running and Testing

These steps should be very familiar to anybody who has been using Docker.

docker run -p 9000:8080 lambda-docker-python:latest

Now we can invoke our function using curl. Per the Lambda Runtime Interface Emulator docs, the route to the deployed handler is /2015-03-31/functions/function/invocations. (The path mirrors the Lambda Invoke API version; you'll get a 404 on anything else.)

This means we can invoke our function with this curl:

$ curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
{"hello": "world"}
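The same invocation can be scripted from Python. A small sketch, assuming the container from the docker run above is still listening on port 9000 (build_invoke_url and invoke are hypothetical helper names, not part of any AWS library):

```python
import json
import urllib.request

# Path format exposed by the Lambda Runtime Interface Emulator
INVOKE_PATH = "/2015-03-31/functions/function/invocations"

def build_invoke_url(port: int) -> str:
    """Build the local invocation URL for a given host port."""
    return f"http://localhost:{port}{INVOKE_PATH}"

def invoke(port: int, event: dict) -> dict:
    """POST an event to a locally running Lambda container and parse the reply."""
    req = urllib.request.Request(
        build_invoke_url(port),
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# With the container running: invoke(9000, {}) returns the handler's result.
```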

If we look at the docker run logs, we'd see very familiar Lambda logging -- this is great!
START RequestId: 826b34b9-498e-4468-9e4c-ffa7db08b050 Version: $LATEST
END RequestId: 826b34b9-498e-4468-9e4c-ffa7db08b050
REPORT RequestId: 826b34b9-498e-4468-9e4c-ffa7db08b050	Init Duration: 0.43 ms	Duration: 165.74 ms	Billed Duration: 200 ms	...

Adding Dependencies

In my opinion, this is the most compelling reason to use Docker-image-based Lambda. We no longer need dependency-packing gymnastics to make it work; we define and install dependencies just as we would on a local machine or in any other Docker image.

For Python, that means creating a requirements.txt and invoking pip install on it.


COPY requirements.txt ./
RUN pip install -r requirements.txt

CMD ["app.handler"]

The modified Dockerfile. The handler can now import requests (listed in requirements.txt):
import requests

def handler(event, context):
    r = requests.get("")
    j = r.json()

    return [{"states": row["states"], "positive": row["positive"]} for row in j]
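The reshaping in the return statement can be unit-tested without any network call by factoring it into a pure function (extract_positives is a hypothetical name for illustration, and the sample rows below are made-up data):

```python
def extract_positives(rows):
    """Keep only the states/positive fields from each API row."""
    return [{"states": row["states"], "positive": row["positive"]} for row in rows]

sample = [
    {"states": 56, "positive": 1000, "negative": 50},
    {"states": 56, "positive": 2000, "negative": 70},
]
print(extract_positives(sample))
# [{'states': 56, 'positive': 1000}, {'states': 56, 'positive': 2000}]
```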

Create ECR Repository

Follow the manual steps or use CDK. Optionally, we may want to keep only the last 10 images instead of keeping all versions indefinitely.

ecr_repo = ecr.Repository(self, "lambda-docker-python", repository_name="lambda-docker-python")
ecr_repo.add_lifecycle_rule(max_image_count=10)  # keep only the last 10 images

Note the created repository URI, which takes the form <account>.dkr.ecr.<region>.amazonaws.com/<repo_name>. We need it to log in and upload.
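The URI format can be captured in a tiny helper (ecr_repository_uri is a hypothetical name, and the account/region values below are placeholders, not real ones):

```python
def ecr_repository_uri(account: str, region: str, repo_name: str) -> str:
    """Build an ECR repository URI from its parts."""
    return f"{account}.dkr.ecr.{region}.amazonaws.com/{repo_name}"

print(ecr_repository_uri("123456789012", "us-east-1", "lambda-docker-python"))
# 123456789012.dkr.ecr.us-east-1.amazonaws.com/lambda-docker-python
```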

Upload Image to ECR

First, logging into ECR requires the AWS CLI to obtain a password, which we then pipe into docker login, per the ECR instructions. Replace 820792572713 with your AWS account number.

$ aws ecr get-login-password | docker login --username AWS --password-stdin 820792572713.dkr.ecr.<region>.amazonaws.com
Login Succeeded

Then we can tag and upload the image.

$ docker tag lambda-docker-python:latest 820792572713.dkr.ecr.<region>.amazonaws.com/lambda-docker-python:latest

$ docker push 820792572713.dkr.ecr.<region>.amazonaws.com/lambda-docker-python:latest

Create Function

Either create it manually (the UI is very easy to figure out).

Or, hacker-style 😎, via the CLI. Creating via the CLI requires a Lambda role to be created beforehand (which I recommend anyway).

aws lambda create-function \
    --role arn:aws:iam::820792572713:role/fn_lambda_role \
    --function-name lambda-docker-python \
    --package-type Image \
    --code ImageUri=820792572713.dkr.ecr.<region>.amazonaws.com/lambda-docker-python:latest

Invoke Function

Enough setup, let's invoke our function in prod.

$ aws lambda invoke --function-name "lambda-docker-python" /dev/stdout

That's it!

Updating Function

Later on, if we have any changes to our function, we'd want to:

  1. Rebuild the Docker image
  2. Re-upload the Docker image
  3. Update the function

aws lambda update-function-code \
    --function-name lambda-docker-python \
    --image-uri 820792572713.dkr.ecr.<region>.amazonaws.com/lambda-docker-python:latest

These steps can clearly be automated. The script itself is left as an exercise for the readers 😊.

Find a very minimal code sample in the varokas/lambda-docker-python repository on GitHub.