By the end of this AWS Lambda optimization article, you will have a workflow for continuously monitoring and improving your Lambda functions and getting alerted on failures.
Serverless has been the MVP for the last couple of years, and I'm betting it's going to play an even bigger role in backend development next year.
AWS Lambda is the most used and mature product in the Serverless space today and is also at the core of Dashbird. That’s why I’m going to share some tips and best practices for building production-ready Lambdas with optimal performance and cost.
Before making any changes to your functions, I recommend setting up performance tracking for them. Monitoring invocation counts, durations, memory usage, and cost of your Lambda functions allows you to pinpoint issues and make informed decisions fast. You can use Dashbird for this since it's easy to set up: it relies on CloudWatch Logs but is much easier to use, giving you quick access to and visualization of your mission-critical AWS data. You can learn more about AWS CloudWatch vs Dashbird in this key feature comparison.
Let’s dive into the different strategies to turbocharge your functions.
The amount of virtual CPU power allocated to your Lambda function is linked to the memory provisioned for that function. A function with 256MB of memory will have roughly twice the CPU of a 128MB function. If you configure the current maximum of 10GB memory, you get 6 virtual CPU cores. Memory size also affects cold start time linearly.
Since more memory costs more, developers typically choose to optimize either for speed or for cost.
Here’s a good example from “Cost and Speed Optimization” by Alex Casalboni:
“In terms of cost, the 128MB configuration would be the cheapest (but very slow!). Interestingly, using the 1536MB configuration would be both faster and cheaper than using 1024MB. This happens because the 1536MB configuration is 1.5 times more expensive, but we’ll pay for half the time, which means we’d roughly save 25% of the cost overall.”
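The arithmetic behind that quote can be checked with a quick sketch. The price-per-GB-second constant and the durations below are illustrative assumptions, not measurements; Lambda compute cost is proportional to allocated memory times billed duration:

```javascript
// Assumed rate per GB-second (close to the published x86 price, but the
// exact number cancels out in the comparison).
const PRICE_PER_GB_SECOND = 0.0000166667;

function invocationCost(memoryMb, durationSeconds) {
  // Cost = memory in GB × billed duration in seconds × rate
  return (memoryMb / 1024) * durationSeconds * PRICE_PER_GB_SECOND;
}

// Assume the 1024MB run takes 2s and the 1536MB run finishes in half the time.
const costAt1024 = invocationCost(1024, 2.0);
const costAt1536 = invocationCost(1536, 1.0);

console.log((1 - costAt1536 / costAt1024) * 100); // roughly 25 (% saved)
```

The 1536MB configuration is 1.5x the price per second but runs for half the time, so each invocation costs 0.75x as much: the 25% saving from the quote.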
Lambda performance tuning can be done manually or with external tools that run your functions multiple times with different memory configurations. This way, you can choose the configuration you deem best for your particular use case.
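As a rough sketch of what such tuning boils down to (the measured durations below are made-up numbers, and `PRICE_PER_GB_SECOND` is an assumed constant), you can rank memory configurations by per-invocation cost once you have an average duration for each:

```javascript
// Assumed rate per GB-second; the ranking only depends on relative cost.
const PRICE_PER_GB_SECOND = 0.0000166667;

// Hypothetical measurements from running the same function at several
// memory sizes (a tuning tool would gather these for you).
const measurements = [
  { memoryMb: 128, avgDurationMs: 11500 },
  { memoryMb: 512, avgDurationMs: 2800 },
  { memoryMb: 1024, avgDurationMs: 1400 },
  { memoryMb: 1536, avgDurationMs: 700 },
];

// Compute cost per invocation for each configuration and sort cheapest-first.
const ranked = measurements
  .map((m) => ({
    ...m,
    costPerInvocation:
      (m.memoryMb / 1024) * (m.avgDurationMs / 1000) * PRICE_PER_GB_SECOND,
  }))
  .sort((a, b) => a.costPerInvocation - b.costPerInvocation);

console.log(ranked[0]); // cheapest configuration for this (made-up) workload
```

With these numbers, 1536MB comes out cheapest because the shorter duration more than offsets the higher per-second price, mirroring the example above.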
AWS often reuses Lambda execution environments for subsequent invocations that arrive shortly after the previous one (reuse isn't guaranteed, and the idle window varies). This allows developers to cache resources and implement connection pooling. Below are some examples you can go through when thinking in that direction.
Here’s some material on how to do that:
Don’t do this:
exports.handler = async (event, context) => {
  const client = clientModule.configure({ apiKey: process.env.API_KEY });
  ...
};
Do this:
const client = clientModule.configure({ apiKey: process.env.API_KEY });

exports.handler = async (event, context) => {
  ...
};
In the following code example, the in-memory cache only stores up to 100 items; then, it starts dropping the oldest ones. This way, the memory doesn't overflow when the data is unbounded. The expensiveApiCall function is only called if the item can't be found in the cache.
const cache = [];

function fromCache(id) {
  return cache.find(item => item.id === id);
}

function toCache(item) {
  cache.push(item);
  if (cache.length > 100) cache.shift(); // drop the oldest item
}

exports.handler = async (event, context) => {
  const { id } = event.queryStringParameters;
  let item = fromCache(id);
  if (!item) {
    item = await expensiveApiCall(id);
    toCache(item);
  }
  return { statusCode: 200, body: JSON.stringify(item) };
};
AWS Lambda running slow?
With servers, collecting performance metrics and tracking failed executions is normally done by an agent that collects telemetry and error information and sends it over HTTP. With AWS Lambda, this approach can slow down functions and, over time, add quite a bit of cost. Not to mention the extra overhead that comes from adding (and maintaining) third-party agents across a possibly large number of Lambda functions.
The great thing about Lambda functions is that all performance metrics and logs are sent to AWS CloudWatch. In itself, CloudWatch is not the perfect place to observe and set up error handling, but some services work on top of it and do a good job of providing visibility into your services.
Dashbird is a log-based monitoring and error handling solution for AWS Lambda. It's perfect for observing all layers of your serverless architecture and getting alerted, within seconds, about any failure that can happen to your service.
There's a lot of room to optimize your Serverless stack, and it all starts with knowing the right ways to do it and locating the issues. I recommend following all of the instructions above and testing the performance difference after making changes to your Lambda functions. Keep in mind that performance is critical in API endpoints and functions with high execution volumes. Of course, make sure to stay on top of your systems with an AWS Lambda monitoring tool.
Further reading:
Best Practices for Logging In AWS Lambda
How to optimize AWS Lambda cost (with examples)
AWS Well-Architected Framework: Cost Optimization Pillar
AWS Lambda Metrics That You Should Be Monitoring