AWS Lambda charges are based on two key metrics: the number of requests and the duration of execution.
Every time a function is invoked, AWS charges $0.0000002 per request. That translates to 1 Million requests for $0.20.
After a function is invoked, a second charge accrues based on how long the execution takes to finish. Lambda charges a fraction of a cent per 100 milliseconds; how much depends on the amount of memory allocated to the function.
A function with 1 GB allocated, for example, will cost $0.000001667 per 100 milliseconds. That translates to $16.67 for 1 Million requests lasting 1 second each.
For billing purposes, function execution time is rounded up to the next multiple of 100 milliseconds. When duration is 256 milliseconds, for example, Lambda bills it as 300 milliseconds to determine the cost.
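To illustrate, the rules above can be expressed as a small cost estimator. This is a minimal sketch using the per-request and per-100 ms rates quoted in this article; the function and constant names are ours, and actual AWS prices vary by region and change over time.

```python
import math

# Rates quoted in this article (assumptions; check AWS pricing for current values)
PRICE_PER_REQUEST = 0.0000002       # $ per invocation
PRICE_PER_GB_100MS = 0.000001667    # $ per 100 ms for 1 GB of allocated memory

def lambda_monthly_cost(requests, avg_duration_ms, memory_gb):
    # Billed duration is rounded up to the next multiple of 100 ms
    billed_blocks = math.ceil(avg_duration_ms / 100)
    request_cost = requests * PRICE_PER_REQUEST
    duration_cost = requests * billed_blocks * memory_gb * PRICE_PER_GB_100MS
    return request_cost + duration_cost

# The 1 GB / 1 second example above: ~$16.87 including the $0.20 request charge
print(lambda_monthly_cost(1_000_000, 1000, 1))
```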
Dashbird offers a Lambda Cost Calculator, in case you would like to simulate your own use cases.
AWS offers 1 Million invocations and 400,000 GB-seconds of execution time per month for free.
Unlike the free tier of many other services, this one does not expire 12 months after account creation. That means developers can continue enjoying it for an unlimited period of time.
There is no official guarantee, though, that AWS will keep the free tier in place forever.
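When the free tier applies, it can be folded into the earlier estimate. The sketch below is only an illustration based on the figures quoted in this article; the $0.0000166667 per GB-second rate is the per-second equivalent of the $0.000001667 per 100 ms figure above.

```python
import math

FREE_REQUESTS = 1_000_000     # free invocations per month
FREE_GB_SECONDS = 400_000     # free compute time per month

def lambda_cost_with_free_tier(requests, avg_duration_ms, memory_gb):
    billed_blocks = math.ceil(avg_duration_ms / 100)            # 100 ms blocks
    gb_seconds = requests * billed_blocks * 0.1 * memory_gb     # 0.1 s per block
    billable_requests = max(requests - FREE_REQUESTS, 0)
    billable_gb_seconds = max(gb_seconds - FREE_GB_SECONDS, 0)
    return (billable_requests * 0.0000002                       # $ per request
            + billable_gb_seconds * 0.0000166667)               # $ per GB-second

# The 1 GB / 1 second example drops from ~$16.87 to ~$10.00 with the free tier
print(lambda_cost_with_free_tier(1_000_000, 1000, 1))
```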
Lambda doesn’t work passively; it has to be invoked somehow in order to run. Whatever triggers a function is called an event source. Some event sources are free, while others add to the total execution cost.
By free event sources, we mean those that don’t add anything on top of the normal cost of the Lambda invocation and execution time.
Examples of free event sources are:
Event sources that add up to Lambda costs are:
How much each of these will cost depends heavily on the service and its usage patterns. It is necessary to study their pricing models and consider each particular use case. Check our Knowledge Base table of contents, since we have pages covering many other AWS services.
By default, Lambda will store all logs generated by applications running on the platform in CloudWatch Logs. It is highly recommended to keep this logging practice: it is the only way to gain visibility into what happens inside the Lambda function runtime and whether anything went wrong.
CloudWatch Logs charges for the amount of data ingested (generated by Lambda) and also for storage over time. It’s possible to set CloudWatch Logs to automatically delete logs older than a certain retention threshold (e.g. one week, or a month).
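As an example of that retention threshold, a policy can be set on the function's log group with a single API call. The sketch below uses boto3 and a hypothetical log group name; Lambda creates one log group per function under the /aws/lambda/ prefix.

```python
import boto3

logs = boto3.client("logs")

# Keep only one week of logs for this (hypothetical) function's log group
logs.put_retention_policy(
    logGroupName="/aws/lambda/my-function",
    retentionInDays=7,
)
```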
When a function invocation results in an error, AWS Lambda may retry the request with the same parameters a few times. This is called the Retry Behavior. Each retry is charged as a normal request. Depending on how many errors the application experiences, whether during transient failures or on an ongoing basis, retries can add to the total Lambda execution costs.
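For functions invoked asynchronously, one way to keep retries from multiplying invocation charges is to cap the number of retry attempts. This is a sketch rather than a recommendation; "my-function" is a hypothetical name, and the right setting depends on whether the application can tolerate dropped events.

```python
import boto3

lambda_client = boto3.client("lambda")

# Limit retries for asynchronous invocations of a (hypothetical) function
lambda_client.put_function_event_invoke_config(
    FunctionName="my-function",
    MaximumRetryAttempts=0,         # default is 2 for async invocations
    MaximumEventAgeInSeconds=3600,  # discard events older than one hour
)
```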
Consider an application running in the AWS US East (Ohio) region. It serves 1 Million requests per month, each lasting 250 milliseconds on average. The workload requires 2 GB of RAM.
Now let’s compare this application running on Lambda vs. EC2:
Request cost: 1,000,000 x $0.0000002 = $0.200
Duration cost: 1,000,000 x roundup(250/100) x $0.000003334 = $10.002
Total Lambda cost: $0.200 + $10.002 = $10.202
Since Lambda runs on Amazon Linux, let’s consider an EC2 instance running a similar type of OS.
To match the memory size (2 GB) and vCPU allocation (2 cores) of the Lambda function, we chose the t3a.small EC2 instance.
Also aiming at a fair comparison, we are using the EC2 on-demand pricing and assuming the application must be online 24×7, which better aligns with the Lambda pricing model.
t3a.small hourly price: $0.0188
Hours per month: 30 days x 24 hours = 720
Monthly EC2 cost: $0.0188 x 720 hours = $13.536
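The same comparison can be reproduced as a quick script. The sketch below simply re-runs the arithmetic of this example with the prices quoted in this article, which may no longer match current AWS rates.

```python
import math

requests = 1_000_000
billed_blocks = math.ceil(250 / 100)   # 250 ms rounds up to 3 blocks of 100 ms

# 2 GB function: $0.000003334 per 100 ms block
lambda_cost = requests * 0.0000002 + requests * billed_blocks * 0.000003334
ec2_cost = 0.0188 * 24 * 30            # t3a.small on-demand, online 24x7

print(f"Lambda: ${lambda_cost:.3f} / month, EC2: ${ec2_cost:.3f} / month")
# Lambda: $10.202 / month, EC2: $13.536 / month
```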
Even though Lambda offers a wide range of benefits over EC2 (it is fully managed, highly available, scalable, etc.), it can still be cheaper than provisioning and maintaining our own server instances.
A really fair comparison would consider a cluster of at least four EC2 servers: a couple of servers in each of two different Availability Zones. This would provide a level of availability similar to Lambda’s (although still not on par with it).
That alone would already quadruple the EC2 costs and management work. On top of that, add the need for a Load Balancer and an Auto Scaling service, and the total cost of the infrastructure would probably be five or six times higher than Lambda’s.
The main benefit of the Lambda pricing model is that it eliminates waste from idle resources. There’s no need to pay for servers sitting idle waiting for requests to your application. Developers only pay when the function is invoked.
If 24 hours, or even days or weeks, pass without any invocation, it costs nothing. Nevertheless, the function remains highly available throughout that time. Developers don’t have to worry about whether the application will be up when it’s needed.
Another major advantage of this pricing structure is reduced financial risk, which is especially beneficial to SMEs and startups. Reaching this level of availability and scalability with servers usually requires allocating a large budget to a cluster, without knowing how much of those resources will actually be needed.
Companies usually over-provision to avoid running out of compute or memory resources and failing during peak times, when auto-scaling systems cannot keep up with how fast demand grows.
High availability comes totally free of charge in AWS Lambda.
For workloads with a hard-to-predict duration, the Lambda model can actually increase financial risk.
With an EC2 server, for example, there is always some headroom of idle resources that can absorb fluctuations in compute time without increasing costs. In AWS Lambda, on the other hand, if execution time increases, the total cost increases proportionally.
Another downside is that, since pricing varies directly with application demand, there are no economies of scale as demand grows.
Lambda is used to its full extent when implementing decoupled microservices. Since a single-purpose user request is likely to invoke multiple functions in such architectures, costs and latency may add up.