AWS Lambda Cost Optimization Strategies That Work

Pay-as-you-go pricing has become common practice for businesses, and it’s no different with Amazon. It’s also the primary reason this article is an important read for anyone looking to reduce their AWS Lambda costs.

In this article, we will go over actionable strategies to optimize the costs related to your AWS Lambda usage.

What is AWS Cost Optimization?

One of the main reasons for moving to the cloud is the ability to reduce costs. It’s essential to optimize how much you spend, so you only pay for what you need and only when you need it. Optimizing costs helps your organization get the most out of its investment, meeting demand and capacity requirements with the most economical options AWS has to offer.

Cost optimization lets you decide how much, when, and in which cases you pay for the services provided to you. AWS makes it easy to pick the right size for your service and allocate only as much memory as you actually need.

AWS Lambda Cost Optimization

AWS Lambda uses a pay-per-use billing model, where you are billed only for the time your functions are running. The more your function runs, the more you pay. This model forever changes the relationship between application code and infrastructure costs. The hardware is automatically provisioned when needed and billed accordingly. There is no need to overprovision servers to cope with peak load.

As a result, traditional tools designed to monitor resource usage are of little practical use. Instead, you need to track application-level metrics like response time, memory utilization, and batch size to control infrastructure costs. In short, infrastructure costs and application performance are strongly linked.

It might look straightforward, but there’s a hidden risk here. Because AWS Lambda is very cheap to get started with, it tempts developers to ignore infrastructure costs during the development phase. At the end of the month, that can turn into an unpleasant surprise in the form of a significant bill.

How AWS Lambda Pricing Works

For each Lambda function, you can set the maximum memory size and maximum function execution time. For the moment, keep in mind that the maximum memory size impacts the processing power (CPU) allocated. The more memory you provision, the more CPU your function gets.

Lambda functions run only when triggered, and Amazon uses several indicators to calculate the cost of running your Lambda function:

  • number of executions
  • duration in milliseconds
  • memory size – the value set in the function configuration.

For each invocation, the billed duration and the memory size are multiplied to produce a unit called GB-sec (gigabyte-seconds). Although the idea is simple, practice has shown that GB-sec is not a very intuitive unit. To get an idea of the costs of your function, try this AWS Lambda cost calculator.
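To make the unit a bit more concrete, here is a minimal back-of-the-envelope sketch in Python. The per-GB-second and per-request rates are the standard published prices for most regions at the time of writing, and the free tier deduction is the one described below; check the current pricing page before relying on the exact numbers.

# Rough monthly cost estimate for a single Lambda function.
# Prices assumed: $0.0000166667 per GB-second and $0.20 per million requests.
GB_SECOND_PRICE = 0.0000166667
REQUEST_PRICE = 0.20 / 1_000_000
FREE_GB_SECONDS = 400_000        # monthly free tier (compute)
FREE_REQUESTS = 1_000_000        # monthly free tier (requests)

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute = max(gb_seconds - FREE_GB_SECONDS, 0) * GB_SECOND_PRICE
    requests = max(invocations - FREE_REQUESTS, 0) * REQUEST_PRICE
    return compute + requests

# Example: 5 million invocations a month, 250 ms average duration, 512 MB memory.
print(f"${monthly_cost(5_000_000, 250, 512):.2f}")  # roughly $4.55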

To whet your appetite, Amazon provides a monthly free tier of 400,000 GB-sec (plus 1 million requests), but you will soon learn that your Lambda bill can grow quickly if you don’t optimize your costs early in the development process.

For a better overview, you can use tools like Dashbird’s AWS Lambda cost tracking, which lets you monitor the cost of your Lambdas in real time with detailed insights. That way, you’re always on top of how much you’re spending on the service.

Monitoring AWS Lambda Functions

To start optimizing your AWS Lambda costs, you first have to set up a monitoring system. Amazon automatically sends logs to CloudWatch, where you can view the basic metrics. But CloudWatch isn’t great at surfacing key details about the execution of your functions. This is where Dashbird helps, providing time-series metrics for invocation counts, durations, memory usage, and costs. For a more in-depth comparison between CloudWatch and Dashbird serverless monitoring, see here.

Constant Monitoring is Essential

Software projects are constantly changing, which makes cost optimization a moving target. For that reason, it’s important to have proper monitoring and alerting for when your spending policies are not met, so you can act on these incidents and fix them before they become a financial nightmare.

AWS offers spending alerts and expenditure reports, but not at the granularity of an individual Lambda function, for example.

With services like Dashbird, you can set custom policies for one or more functions with very granular details. The example below will send an email and Slack message whenever the selected functions cost more than $10 over the past hour.

serverless cost monitoring

Strategies for optimizing AWS Lambda Costs

Minimize Lambda Usage

Don’t use Lambda functions for simple transforms. If you’re building an API with AppSync or API Gateway, this is often the case: you’ve implemented authentication with Cognito and custom authorizers in your API Gateway and now just want to push data directly to downstream services like DynamoDB or SQS.

API Gateway supports the Velocity Template Language (VTL), a simple templating language that can transform the JSON payloads of API Gateway requests. VTL mapping templates can’t do everything, but they have no cold starts and incur no extra cost the way Lambda functions do.

Keep in mind that working with VTL isn’t entirely straightforward, but it can be worth your time if you have frequently called endpoints that don’t require the full power of Lambda. Richard Boyd, a developer relations engineer at AWS, wrote a bit on that topic.

Caching Lambda Responses

Caching goes hand in hand with minimizing the use of Lambda functions. When you have to use one, try to make sure it’s only called when it’s really needed. 

For some Lambda functions, like the ones called from API Gateway, AppSync, or Lambda@Edge functions called from CloudFront, you can cache responses. A function that isn’t called doesn’t cost you any money, so make sure you don’t hammer your Lambda function if its responses don’t change often.

Lambda@Edge functions are more expensive than regular Lambda functions. Still, if you only call them once every few minutes and deliver the cached response to thousands of users per second, you can significantly reduce your bill.
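As a hedged sketch of the idea: when a Lambda function sits behind CloudFront, for instance as a Lambda@Edge origin-request handler that generates responses, it can tell CloudFront how long to serve the cached copy via a Cache-Control header. The five-minute TTL below is an arbitrary example value, and your distribution’s cache policy must be set to respect origin headers.

import json

def handler(event, context):
    # Generate the response once; CloudFront serves the cached copy for the
    # next 300 seconds instead of invoking this function again.
    body = json.dumps({"message": "hello from the edge"})
    return {
        "status": "200",
        "statusDescription": "OK",
        "headers": {
            "cache-control": [{"key": "Cache-Control", "value": "public, max-age=300"}],
            "content-type": [{"key": "Content-Type", "value": "application/json"}],
        },
        "body": body,
    }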

One benefit of caching is cheaper responses, because you don’t pay for Lambda. Another benefit is faster responses, because you remove the extra hop the request would have to make if Lambda were involved.

From a business perspective, this can also lift revenue by providing a better experience for the end user. Lambda cost optimization means you not only reduce costs; the same changes can end up driving more revenue.

To enable caching for API Gateway deployed with AWS SAM, you can use the MethodSettings attribute.

Resources:
  DemoApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Production
      CacheClusterEnabled: true        # provision a dedicated response cache for this stage
      CacheClusterSize: '0.5'          # cache size in GB
      MethodSettings:
        - HttpMethod: GET
          CacheTtlInSeconds: 480       # how long a cached response is served before refreshing
          ResourcePath: "/~1my-endpoint"   # forward slashes in the path are encoded as ~1
          CachingEnabled: true

You can use the * wildcard in HttpMethod and ResourcePath to configure caching for multiple routes in your API.

Utilize Queues to Batch Lambda Invocations

Batching data is a good idea for Lambda functions that work in the background and aren’t directly related to user interactions. 

One of the most overlooked aspects is that AWS Lambda cold starts happen for each concurrent execution. Therefore, as a first step in optimizing your costs, smooth out invocation spikes so your functions run at a steady rate and trigger as few cold starts as possible.

This can be achieved by using one of AWS’ many queuing services like SQS or Kinesis. Don’t call your function directly, but send the data to a queue to batch it accordingly. 
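Here is a minimal sketch of that pattern with SQS, assuming a hypothetical queue URL: the producer sends messages in batches of ten, and the consuming function receives a whole batch of records per invocation through its SQS event source mapping.

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/demo-queue"  # hypothetical queue

def produce(records):
    # Send work to the queue in chunks of 10 (the SQS batch limit)
    # instead of invoking a Lambda function once per item.
    for i in range(0, len(records), 10):
        chunk = records[i:i + 10]
        sqs.send_message_batch(
            QueueUrl=QUEUE_URL,
            Entries=[{"Id": str(n), "MessageBody": json.dumps(r)} for n, r in enumerate(chunk)],
        )

def handler(event, context):
    # Consumer Lambda: the SQS event source delivers up to a full batch per invocation.
    for record in event["Records"]:
        payload = json.loads(record["body"])
        # ... process payload ...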

Build Small Lambda Functions

The goal for all your AWS Lambda functions is to be small and purpose-built. If a Lambda function only does one thing, you only have to optimize that specific use case. When one function serves multiple use cases, you often have to make compromises to satisfy all of them.

Function size also factors into the costs. The Lambda runtime has to fetch your function’s code from S3 or a container image registry on every cold start, and downloading one gigabyte takes Lambda much longer than downloading one megabyte. That is waiting time you pay for.

Since Lambda moved to one-millisecond billing increments in late 2020, you save money for every millisecond your function finishes faster. Getting your Lambda code down to the absolute essentials can therefore add up to real savings for functions that run very often.
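To get a feel for what a few milliseconds are worth, here is a rough sketch assuming the standard ~$0.0000166667 per GB-second rate and a hypothetical high-volume function:

# A 512 MB function invoked 100 million times a month; shaving 20 ms per invocation saves:
saved_gb_seconds = 100_000_000 * (20 / 1000) * (512 / 1024)   # 1,000,000 GB-seconds
print(saved_gb_seconds * 0.0000166667)                        # roughly $16.67 per month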

Optimize Memory Allocation

After you’ve made sure you only call functions when needed and kept them small and purpose-built, you can control your AWS costs with a few straightforward steps.

First off, right-sizing: with AWS, you can set the memory, and in turn the CPU, of your Lambda functions to precisely the capacity you need.

There’s no need to over-provision or make compromises. Adapt your configuration to the actual business needs at any given time, without any penalties or hidden fees. As your demands change, it’s easy to switch to a configuration that covers the new requirements, and you can even run several configurations side by side while you compare cost and performance.

Another approach is to use Step Functions to find the optimal memory setting for your functions. Here’s an open-source module built by Alex Casalboni, a Senior Technical Evangelist at AWS.

AWS doesn’t let us configure CPU for Lambda functions directly, but the more memory we allocate, the more computing power we get, and the faster our functions execute our code. This can actually reduce the total execution cost.
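A minimal sketch of this experiment, assuming a hypothetical function name and measuring only wall-clock time from the caller’s side (the power-tuning module above and the benchmarker linked below do this far more rigorously):

import time
import boto3

lambda_client = boto3.client("lambda")
FUNCTION = "my-function"  # hypothetical function name

for memory_mb in (256, 512, 1024, 2048):
    # Reconfigure the memory size and wait until the update has been applied.
    lambda_client.update_function_configuration(FunctionName=FUNCTION, MemorySize=memory_mb)
    lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION)

    start = time.time()
    lambda_client.invoke(FunctionName=FUNCTION, Payload=b"{}")
    elapsed_ms = (time.time() - start) * 1000

    # Cost scales with memory (GB) x duration (s), so compare the product across sizes.
    print(memory_mb, round(elapsed_ms), round(memory_mb / 1024 * elapsed_ms / 1000, 4))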

There are a few caveats to this strategy, though. For example, above roughly 2 GB of RAM, Lambda allocates a second vCPU to the function, and beyond that point single-threaded programs won’t see any speed gains from increasing memory further. Take a look at this article if you’d like to explore how Lambda’s memory and CPU allocation work under the hood.

Below is an illustration of the strategy: when we increased memory from about 1.8 GB to 2 GB, the total billed duration dropped from 600 to 500 milliseconds. Although the memory cost is higher, the shorter duration offsets the additional memory cost, yielding an effective 5% cost reduction. And we get the extra benefit of lower latency.

lambda memory

We published a sample benchmarker in this GitHub repository, which you can plug into any of your Lambda functions to simulate requests and find the memory sweet spot.

Use Variables as a Local Cache

The Lambda internal memory can be used as a cheap and fast caching mechanism. Anything loaded outside the handler function remains in memory for subsequent invocations served by the same warm execution environment.

We can keep a copy of information retrieved from a database inside a global variable so that the data can be pulled from the Lambda internal memory in future requests.
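A minimal sketch of the pattern, with a hypothetical DynamoDB table standing in for the data source:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("settings")  # hypothetical table name

# Defined outside the handler, so it survives between invocations
# of the same warm execution environment.
_cache = {}

def handler(event, context):
    key = event["setting_id"]
    if key not in _cache:
        # Only hit DynamoDB on a cache miss (the first call in this environment).
        _cache[key] = table.get_item(Key={"id": key}).get("Item")
    return _cache[key]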

This article we published illustrates the approach with a couple of basic examples and covers a few points to pay attention to when implementing this strategy.

Never Call Lambda Directly from Lambda

This is most important for synchronous Lambda calls, such as the ones behind API Gateway.

If you call a Lambda function directly from within a Lambda function, you pay for both of them. The first one will wait for the second one to finish, and you’ll be paying for the waiting time. 

If you need to call multiple Lambda functions, finish the synchronous API Gateway-facing function early and hand the remaining work off to another service.

AWS offers several messaging and orchestration services for this: SQS, SNS, Kinesis, and Step Functions, to name a few. When the heavy task is done, you can notify the clients via WebSockets or email.
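As a sketch of the pattern (the topic ARN is a placeholder), the API-facing function publishes an event and returns immediately, and the heavy-lifting function is subscribed to the topic instead of being invoked directly:

import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:eu-west-1:123456789012:order-events"  # hypothetical topic

def api_handler(event, context):
    # Return to the API caller right away; the heavy lifting happens in a
    # separate function subscribed to the topic, so we never pay for waiting.
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps({"order_id": event["order_id"]}))
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}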

Reduce CloudWatch Costs

Lambda sends all logging output to CloudWatch Logs. This service is useful, but it isn’t free. If you log excessively, the CloudWatch costs can end up eating your Lambda savings.

Use configurable log levels for your Lambda functions, so you can debug them when needed but don’t log unneeded data all the time. Most logging frameworks let you set the log level dynamically.
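For example, with Python’s standard logging module and a hypothetical LOG_LEVEL environment variable:

import logging
import os

logger = logging.getLogger()
# Default to WARNING; flip the environment variable to DEBUG only while troubleshooting.
logger.setLevel(os.environ.get("LOG_LEVEL", "WARNING"))

def handler(event, context):
    logger.debug("Full event payload: %s", event)   # only written when LOG_LEVEL=DEBUG
    logger.warning("Something worth keeping an eye on")
    return {"statusCode": 200}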

You can enable access logs for API Gateway and AppSync, which are then sent to CloudWatch Logs. Make sure you filter out request fields you don’t need.

CloudWatch Logs keeps your Lambda logs forever by default, but you can export log archives to S3 (and on to Glacier) and set a retention policy so old logs are deleted from CloudWatch to save money.
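A sketch of capping retention with boto3; the 14-day value is an arbitrary choice, and the log group name follows Lambda’s /aws/lambda/<function> convention:

import boto3

logs = boto3.client("logs")

# Keep this function's logs for 14 days instead of forever; export an archive
# to S3/Glacier first if you need long-term retention.
logs.put_retention_policy(
    logGroupName="/aws/lambda/my-function",  # hypothetical function name
    retentionInDays=14,
)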

And last but not least, you can set up filtered subscriptions that will send your logs to third-party monitoring services like Dashbird. 

Cost Reduction with Observability

With a proper serverless observability system in place, your company can minimize the risks that inherently come with serverless architectures. You’ll also be able to manage the budget predictably, in a way that complies with policies requiring long-term commitments.

Dashbird single function view

This includes monitoring, tracking, analyzing, and alerting on your service usage. With a trusted advisor, you can provision your resources in line with best practices to improve system performance and reliability.

It will also increase security and create opportunities to save money. CloudWatch can help here: it collects and tracks metrics, monitors log files, and can respond automatically to changes within your AWS resources, so you can match increases or reductions in demand, for example by turning off non-production functions.

But it doesn’t give you full insight into your system or instant alerts when things break. For that, you need a tool that alerts you the moment your system misbehaves. Dashbird is such a tool, giving you insight into your Lambda functions, all in one place.

Conclusion

There are other ways to reduce costs and optimize them for your own needs. AWS Cost Explorer can help you analyze your usage and cost; it provides a set of default reports to identify cost drivers and usage trends. Dashbird’s own cost tracking, which you can view on an account-wide scale or per function, also gives you a real-time picture of how much your services are costing you.

There are various choices to make and strategies to reduce costs and optimize them for your own needs. The essential thing is to figure out which of these approaches best suits your situation. Once you know what you need, it’ll be easier to choose a tailored way to reduce costs.


Further reading:

AWS Lambda metrics that you should be monitoring

4 tips for tuning AWS Lambda for production

Quick ways to cut cost on your AWS Lambda
