Six major cloud platforms currently offer serverless compute products, with AWS Lambda being the pioneer. Our goal is to provide a quick way to compare and evaluate them all. For each service, we will be evaluating:
- Resource allocation (memory and CPU)
- Execution time limits
- Supported runtimes
- Integrations with other services
- Pricing
There are smaller service providers on the market that are focused on serverless, but we won’t cover them in the present analysis.
For the pricing comparison, we considered US East Coast regions.
Let the battle begin!
Due to its larger resource allocation options, as well as its unrivaled set of possible integrations, the winner is AWS Lambda.
The serverless paradigm enables and eases the implementation of some of the best practices in software development, such as event-based architectures. Being able to integrate AWS Lambda with many other services within the AWS portfolio, and having it respond automatically to events in those services, can greatly contribute to an application's health and stability.
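To make this concrete, here is a minimal sketch of what an event-driven Lambda function looks like in Python: a handler reacting to S3 object-creation notifications. The bucket name and processing logic are purely illustrative.

```python
import json
import urllib.parse

def handler(event, context):
    """Minimal AWS Lambda handler reacting to S3 notification events.

    The event shape follows the documented S3 notification format;
    what you do with each object is up to your application.
    """
    results = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 URL-encodes object keys in notification payloads
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Real code would fetch/process the object here (e.g. via boto3)
        results.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": results})}
```

Wiring the function to the bucket's event notifications is then a one-time configuration step; no polling code is needed.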
Dashbird integrates deeply with AWS Lambda to provide enterprise-grade monitoring, alerting and debugging features for serverless development teams. You can try it today for free (no credit card).
In terms of memory, AWS Lambda offers from 128 MB up to 3 GB of RAM, selectable in 64 MB increments.
AWS allocates CPU proportionally to the memory assigned. Above 2 GB of RAM, the function is allocated two vCPU cores, which is useful for parallelizing tasks.
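As an illustration of using that second core, CPU-bound work can be split across processes with Python's multiprocessing module. This is a generic sketch, not AWS sample code; note that Lambda's execution environment lacks /dev/shm, so `multiprocessing.Pool` fails there and `Process` plus `Pipe` is the usual workaround.

```python
from multiprocessing import Pipe, Process

def _partial_sum(conn, numbers):
    """Worker: compute a partial sum of squares and send it over a pipe."""
    conn.send(sum(n * n for n in numbers))
    conn.close()

def parallel_sum_of_squares(numbers, workers=2):
    """Split a CPU-bound computation across `workers` processes.

    Inside Lambda, set `workers` to the number of vCPU cores the
    function's memory setting grants it.
    """
    chunks = [numbers[i::workers] for i in range(workers)]
    parents, procs = [], []
    for chunk in chunks:
        parent, child = Pipe()
        proc = Process(target=_partial_sum, args=(child, chunk))
        proc.start()
        parents.append(parent)
        procs.append(proc)
    total = sum(parent.recv() for parent in parents)  # recv blocks until ready
    for proc in procs:
        proc.join()
    return total
```

For I/O-bound work (the more common Lambda case), threads or async code are usually a better fit than processes.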
Maximum execution time is 15 minutes.
AWS allows users to implement custom runtimes, which means it can support virtually any language. A few popular runtimes are provided out of the box, which makes it easier to get started: Node.js, Python, Ruby, Java, Go and .NET Core (C#).
Lambda integrates with a long list of AWS services (synchronously and/or asynchronously), including S3, DynamoDB, Kinesis, SNS, SQS, API Gateway, CloudWatch Events and Cognito.
AWS offers up to 1,000,000 invocations per month for free. Above this tier, the price is $0.20 per million invocations.
Memory is charged per MB-millisecond consumed. The price does not scale perfectly linearly with each level of RAM allocated, but every 128 MB used for 100 milliseconds costs, on average, $0.00000020838. The free tier includes 400,000 GB-seconds of compute time per month.
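Putting the numbers above together, a rough monthly bill can be estimated as follows. This is a simplification that assumes one uniform average duration and, per Lambda's billing at the time of writing, rounds duration up to 100 ms increments.

```python
import math

def lambda_monthly_cost(invocations, memory_mb, avg_duration_ms):
    """Estimate AWS Lambda monthly cost from the rates quoted above:
    $0.20 per million requests past 1M free, and ~$0.00000020838 per
    128 MB per 100 ms past 400,000 free GB-seconds."""
    # Request charge
    billable_requests = max(invocations - 1_000_000, 0)
    request_cost = billable_requests / 1_000_000 * 0.20
    # Compute charge: duration billed in 100 ms increments, rounded up
    rounded_s = math.ceil(avg_duration_ms / 100) * 0.1
    gb_seconds = invocations * (memory_mb / 1024) * rounded_s
    billable_gb_s = max(gb_seconds - 400_000, 0)
    # $0.00000020838 per (128 MB * 100 ms) = per 0.0125 GB-second
    compute_cost = billable_gb_s * (0.00000020838 / 0.0125)
    return round(request_cost + compute_cost, 2)
```

For example, 5 million invocations of a 512 MB function averaging 200 ms comes out to roughly $2.47 per month after the free tier.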
Azure Functions' memory allocation also starts at 128 MB, going up to 1,536 MB in increments of 128 MB.
CPU power is also allocated in proportion to memory. Azure also offers a premium version of Functions, in which the user can determine the number of vCPUs to be allocated, paying $0.00173 per 100 vCPU-seconds.
Maximum execution time is 10 minutes.
Azure Functions has official support for five runtimes, with two more runtimes supported as a preview and a few others available only in an experimental fashion, which is not recommended for production usage.
Azure's pricing model is very similar to AWS Lambda's. It offers the exact same free tier levels for both invocations and memory usage.
The price per invocation is the same as Lambda's, but the RAM pricing is slightly cheaper (about 4% less): $0.0000002 per 128 MB of RAM consumed over 100 milliseconds.
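A quick sanity check on that 4% figure, applying both quoted per-128 MB / 100 ms rates to an identical, hypothetical workload:

```python
AWS_RATE = 0.00000020838   # USD per 128 MB per 100 ms (quoted above)
AZURE_RATE = 0.0000002     # USD per 128 MB per 100 ms (quoted above)

def memory_cost(rate, invocations, memory_mb, duration_ms):
    """Gross memory cost in USD, ignoring free tiers, counted in
    128 MB / 100 ms billing units."""
    units = invocations * (memory_mb / 128) * (duration_ms / 100)
    return units * rate

# Same workload on both platforms: 10M invocations, 256 MB, 300 ms
aws = memory_cost(AWS_RATE, 10_000_000, 256, 300)
azure = memory_cost(AZURE_RATE, 10_000_000, 256, 300)
savings_pct = (aws - azure) / aws * 100  # comes out to ~4%
```

The difference is real but small; for most workloads, the choice between the two will hinge on ecosystem fit rather than this margin.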
Google offers five tiers of resource allocation (memory and CPU, respectively): 128 MB / 200 MHz, 256 MB / 400 MHz, 512 MB / 800 MHz, 1,024 MB / 1.4 GHz and 2,048 MB / 2.4 GHz.
The maximum execution time is 9 minutes, which is lower than the previous contenders.
Google Functions also has a different pricing model from the ones we've seen above: it charges for memory and CPU separately, although CPU is still fixed by the memory tier selected.
Memory costs $0.0000025 per GB-second, and compute power is charged at $0.00001 per GHz-second consumed.
Invocations cost twice as much as Lambda's, at $0.40 per million.
The free tier is more generous than AWS's and Azure's, offering 2 million invocations and 1 million compute-seconds per month for free.
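To see how this two-dimensional pricing plays out, here is a rough estimator using the quoted rates ($0.0000025 per GB-second, $0.00001 per GHz-second, $0.40 per million invocations). The memory-to-CPU tier pairings are an assumption based on Google's commonly documented tiers, and free-tier deductions are ignored for simplicity.

```python
# Assumed memory (GB) -> CPU (GHz) tier pairings
TIERS = {0.125: 0.2, 0.25: 0.4, 0.5: 0.8, 1.0: 1.4, 2.0: 2.4}

def gcf_monthly_cost(invocations, memory_gb, avg_duration_s):
    """Gross Google Cloud Functions monthly cost: invocations plus
    separate memory (GB-second) and CPU (GHz-second) charges."""
    invocation_cost = invocations / 1_000_000 * 0.40
    gb_seconds = invocations * memory_gb * avg_duration_s
    ghz_seconds = invocations * TIERS[memory_gb] * avg_duration_s
    return invocation_cost + gb_seconds * 0.0000025 + ghz_seconds * 0.00001
```

Notice that for small functions the CPU charge dominates the memory charge, so duration matters more here than on platforms that bill memory alone.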
Memory allocation in IBM Functions is limited in comparison to other cloud services, going from 128 MB to 1,024 MB. CPU is assigned proportionally to the memory.
Execution timeout can go up to 10,000 minutes, which is far higher than the other services. This could be a compelling selling point for long-running processing needs, but we would argue that serverless platforms are not well suited for this type of job in the first place.
IBM offers a REST API so that users can implement their own integrations with any other HTTP-compatible services. There are no out-of-the-box integrations offered by the platform, unfortunately.
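As a sketch of what such an integration looks like, the snippet below builds an HTTP request against the Apache OpenWhisk-style REST API that IBM Functions is built on. The host, namespace, action name and API key are all placeholders, not real values.

```python
import base64
import json
import urllib.request

def build_invoke_request(host, namespace, action, payload, api_key):
    """Build a POST request that invokes a Functions action.

    Follows the OpenWhisk REST convention:
    /api/v1/namespaces/{ns}/actions/{action}, with HTTP Basic auth.
    """
    url = (f"https://{host}/api/v1/namespaces/{namespace}"
           f"/actions/{action}?blocking=true&result=true")
    token = base64.b64encode(api_key.encode()).decode()
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

# Actually sending it would be:
#   urllib.request.urlopen(build_invoke_request(...))
```

Any service that can issue an HTTP request can trigger a function this way, which is flexible but leaves the plumbing (retries, auth, event filtering) to you, unlike AWS's managed integrations.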
Memory allocation is slightly more expensive than on AWS and Azure, costing $0.0000002125 per 128 MB per 100 milliseconds.
The advantage of IBM in pricing is that invocations are free.
The free tier also includes 400,000 GB-seconds per month, the same as AWS.
On Alicloud Function Compute, memory availability goes from 128 MB up to 1,536 MB, growing in increments of 128 MB. CPU is proportional to the memory allocated.
The execution time limit is 10 minutes.
Memory allocation costs slightly more than on AWS, at $0.0000002085 per 128 MB per 100 milliseconds.
Invocations are also priced at $0.20 per million calls, the same as AWS.
Alicloud offers the first 400,000 GB-seconds per month free of charge.
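Since AWS, Azure, IBM and Alicloud all quote memory prices in the same 128 MB / 100 ms unit, the figures above can be combined into a simple gross-cost comparison. Free tiers are ignored here, and Google is left out because it prices CPU separately.

```python
# Per-million-invocation and per-(128 MB * 100 ms) rates quoted in this article
PROVIDERS = {
    "AWS Lambda":      {"per_million": 0.20, "mem_rate": 0.00000020838},
    "Azure Functions": {"per_million": 0.20, "mem_rate": 0.0000002},
    "IBM Functions":   {"per_million": 0.0,  "mem_rate": 0.0000002125},
    "Alicloud":        {"per_million": 0.20, "mem_rate": 0.0000002085},
}

def monthly_cost(provider, invocations, memory_mb, duration_ms):
    """Gross monthly cost in USD (free tiers ignored)."""
    rates = PROVIDERS[provider]
    invocation_cost = invocations / 1_000_000 * rates["per_million"]
    units = invocations * (memory_mb / 128) * (duration_ms / 100)
    return invocation_cost + units * rates["mem_rate"]

# Rank providers for a sample workload: 10M invocations, 256 MB, 200 ms
ranking = sorted(PROVIDERS, key=lambda p: monthly_cost(p, 10_000_000, 256, 200))
```

For this sample workload IBM comes out cheapest on gross cost thanks to its free invocations, but the spread across all four providers is small; price alone rarely decides the choice.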
Oracle Functions isn't generally available yet, and many of the items we analyzed in this article are still unknown: resource capabilities, pricing, integrations. Nevertheless, we decided to mention it here because they appear to be heading in an interesting direction.
Instead of developing their own proprietary serverless platform, like all the others we analyzed above, Oracle went the open-source way and adopted the Fn Project, an Apache 2.0-licensed open-source platform, as their underlying infrastructure.
This brings two main advantages: functions built on Fn are not locked in to Oracle, since the same code can run anywhere the open-source platform runs; and the community can audit, extend and contribute to the underlying codebase.
We wish the Oracle team the best and hope their debut in serverless computing helps foster competition, innovation, a healthier market and, ultimately, better and cheaper services for the developer community.