3 ways of recycling third-party code for AWS Lambda

In this article, I'll shed some light on AWS Lambda Layers, Lambda Extensions, and Docker images for Lambda, three ways to add third-party code to your Lambda functions. When and how should you use each method, and when should you mix and match?

Due to the circumstances in 2020, many software releases were postponed, and so the industry slowed its development speed quite a bit.

But at least at AWS, some teams got updates out of the door at the end of the year. AWS Lambda received two significant improvements: Lambda Extensions and support for Docker images for your Lambda functions. Thanks to these two new features and Lambda Layers, we now have three ways to add code to Lambda that isn't directly part of our Lambda function.

The question now is: When should we use what?

First things first. All these Lambda features can be used together. So if you’re thinking about where to put your code, at least your decisions aren’t mutually exclusive. You can upload a Docker image and attach a regular Lambda Layer and a Lambda Extension. The same is possible if your Lambda function is based on a ZIP archive.

Lambda Functions with Docker

Let’s look at Lambda functions based on Docker images. Many developers adopted Docker containers into their development environment because it helps to replicate the production environment as closely as possible. After all, many services today are running on Kubernetes, which manages Docker containers.

AWS Lambda takes a different approach to microservices. It's not based on long-running containers but on short-lived functions. It comes with its own way of doing development, and so people couldn't use their Docker skills anymore. But since re:Invent 2020, AWS Lambda supports Docker images to cater to developers who are already used to a Docker-based workflow.

Instead of packing your function code into a ZIP archive, you can also pack it into a Docker image. Many serverless advocates see this merely as a way to pull developers from the "container world" into serverless. Still, it also has a crucial benefit: you get 10GB of space.

That's right: your Docker images for Lambda can be significantly larger than your ZIP archives, which only allow for 250MB of unzipped code and assets. Sometimes this is just how things are; you pull in a huge dependency, and you're over 250MB in no time. This is also an important factor for legacy systems. If you have a Docker image you've used for years, size may never have been a concern, and getting all that code down to 250MB just isn't possible without a big refactor into many functions.

So if you have a specific use case that requires more than 250MB, you just won't get around Docker images for your Lambda function. And when your image is already based on other Docker images that handle monitoring and other concerns not directly related to your function code, you're also better off using Docker for your Lambda function instead of trying to refactor everything to fit into Lambda Layers or Extensions.

One note, though: your container image has to implement the Lambda runtime API. You can't run just any Docker image on Lambda; it has to receive Lambda events via the runtime API. Your containers also won't run indefinitely; like ZIP-based Lambda functions, they are paused between invocations, subject to cold starts, and limited to a maximum execution time of 15 minutes.
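The easiest way to satisfy the runtime API requirement is to build on one of the AWS-provided base images, which already ship a runtime interface client. As a minimal sketch (the file name `app.py` and handler name `handler` are hypothetical):

```dockerfile
# AWS base image for Python; it already implements the Lambda runtime API.
FROM public.ecr.aws/lambda/python:3.12

# Install dependencies into the task root alongside the function code.
COPY requirements.txt .
RUN pip install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"

# Copy the function code itself.
COPY app.py ${LAMBDA_TASK_ROOT}

# Tell the runtime which handler to invoke: <module>.<function>.
CMD ["app.handler"]
```

If you build from an arbitrary base image instead, you have to add a runtime interface client yourself so the container can receive events.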

Lambda Layers

AWS Lambda Layers are helpful in multiple ways. The most prevalent use of layers is custom runtimes. This means you can create a layer for a programming language that isn't officially supported on AWS Lambda. There are even runtime layers maintained by the community, so if you're lucky, you can just pull in an open-source runtime and start writing code in the programming language of your choice.

The other way layers are used is to share code between Lambda functions. The code of a layer will be available when your function runs, so you can import it as if it were part of your function, but since it is encapsulated in a layer, you can maintain it in a separate place. If you've got a library, a framework, or just a utils folder with code that gets used all over your Lambda functions, you could pack it into a layer.
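The mechanism behind this: a Python layer's contents are unpacked under `/opt/python`, which Lambda puts on `sys.path`, so layer modules import like local code. Here is a runnable sketch that simulates that locally; the `utils` module and `greet` helper are made-up examples:

```python
import os
import sys
import tempfile
import textwrap

# Simulate a layer: on Lambda, this directory would be /opt/python.
layer_dir = os.path.join(tempfile.mkdtemp(), "python")
os.makedirs(layer_dir)
with open(os.path.join(layer_dir, "utils.py"), "w") as f:
    f.write(textwrap.dedent("""
        def greet(name):
            return f"Hello, {name}!"
    """))

# Lambda adds /opt/python to sys.path automatically; we do it by hand.
sys.path.insert(0, layer_dir)
import utils  # imported as if it were part of the function

def handler(event, context):
    # The shared layer code is used like any local module.
    return {"message": utils.greet(event.get("name", "world"))}

print(handler({"name": "Lambda"}, None))
```

The payoff is that `utils.py` lives and is versioned in one place (the layer), while any number of functions import it unchanged.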

While layers can be used together with Docker images, you can see them as separate ways to do the same thing.

If you're starting a new project, put your functions into ZIP archives and use Lambda Layers to factor out code used in multiple places, provided you can stay within the 250MB limit. This limit applies to your whole function with all of its layers, and every function can have at most five layers. So you can't have five layers of 250MB each; the sum of your function code and layers has to stay below 250MB.

If you require more than 250MB or simply have a Docker-based legacy code-base that would never fit into 250MB, you're better off with Docker than with a big refactor, at least in the short term. Instead of using Lambda Layers, you can factor your libraries out into Docker images that become the base of all your Lambda functions, and you won't have to fight the five-layers-per-function limit.

Lambda Extensions

Lambda Extensions are based on Lambda Layers, so if you want to add an extension to your Lambda function but can't find that option in the AWS Console, it's because an extension is simply a layer.

Extensions are a special kind of layer because they don't just supply your function with a custom runtime or shared code. An extension can run either in the same process as your Lambda function (an internal extension) or in a separate process (an external extension). This separate process is started before your Lambda function is executed and keeps running until the execution environment shuts down.

Running in a separate process enables Lambda Extensions to do things regular layers couldn't do, for example gathering secrets before your function does its work, or tracking how long your function runs and gathering errors when it fails. Since an extension runs in its own process, a crash in your Lambda function usually won't crash the extension, so it can take its time to find out what went wrong and send it to your monitoring service.
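Under the hood, an external extension talks to the Lambda Extensions API over HTTP: it registers itself, then blocks on a long-polling endpoint for the next event. A minimal Python sketch, assuming the documented `2020-01-01` Extensions API endpoints (the extension name `my-monitoring-extension` is made up, and the loop only does real work inside a Lambda execution environment):

```python
import json
import os
import urllib.request

# Inside a Lambda execution environment, AWS_LAMBDA_RUNTIME_API points at
# the local API endpoint; the fallback host here is just a placeholder.
RUNTIME_API = os.environ.get("AWS_LAMBDA_RUNTIME_API", "localhost:9001")
BASE = f"http://{RUNTIME_API}/2020-01-01/extension"

def register(name, events=("INVOKE", "SHUTDOWN")):
    """Register the extension and return the ID Lambda assigns to it."""
    req = urllib.request.Request(
        f"{BASE}/register",
        data=json.dumps({"events": list(events)}).encode(),
        headers={"Lambda-Extension-Name": name},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.headers["Lambda-Extension-Identifier"]

def event_loop(extension_id):
    """Block on the next event; monitoring/telemetry work goes here."""
    while True:
        req = urllib.request.Request(
            f"{BASE}/event/next",
            headers={"Lambda-Extension-Identifier": extension_id},
        )
        with urllib.request.urlopen(req) as resp:
            event = json.load(resp)
        if event["eventType"] == "SHUTDOWN":
            break  # flush buffered telemetry, then exit cleanly

# Only start the loop when we're actually running inside Lambda.
if os.environ.get("AWS_LAMBDA_RUNTIME_API"):
    event_loop(register("my-monitoring-extension"))
```

Because this loop runs in its own process, it keeps polling for events even if a particular invocation of your function crashes.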

So if you have code that is concerned with monitoring or observability, or needs to perform actions before or after multiple functions run, you should consider putting that code into a Lambda Extension.

Operating a resilient and cost-efficient Lambda

Whichever way you decide to architect your system, chances are that as much as you enjoy the benefits of Lambda, you'll also be affected by its notorious cold starts and latency, which can slow you down and stop you from getting the most out of your serverless application.

Dashbird’s tailing functionality offers a real-time insight into the functions you’re running, and it provides you with all the crucial logs and metrics immediately out of the box and all in one place. It makes quite a big difference in helping you navigate serverless.

You’ll get aggregated Lambda performance and resource usage data for easy analysis on:

  • Invocations (total count)
    • Successful requests
    • Errors
    • Cold starts
  • Memory usage
    • Average
    • Maximum
    • Minimum
    • 99th percentile
  • Duration
    • Average
    • Maximum
    • Minimum
    • 99th percentile
  • Cost
    • Aggregated billed execution time

To sum it all up: see Docker images mainly as a way to deploy more code, or as an easier way to port legacy container code-bases to AWS Lambda. The 250MB limit of ZIP deployments often won't cut it, and the 10GB of a Docker image is the only way around it.

Lambda Layers are a way to add libraries, frameworks, util-folders, and custom runtimes to your Lambda function.

Lambda Extensions are Lambda Layers that can run in a separate process, which allows them to perform actions before and after a function executes. Typical use cases are monitoring and secret gathering.

With this knowledge, you’re now able to structure your Lambda code in a meaningful way and won’t despair when you hit a code size limit.

Further reading:

Using Lambda Layers for Better Serverless Architecture

Deploying AWS Lambda with Docker Containers
