How to best allocate resources and improve Lambda performance
Traditionally, provisioning cloud servers meant choosing from a wide variety of resource combinations: CPU, RAM, local storage. Developers were then charged for the time the server stayed up (usually billed per hour).
AWS Lambda dramatically changes this approach to computing resources. Only the amount of RAM has to be allocated; CPU power is allocated proportionally to the memory. This model has caveats and implications that are important to consider.
It is known that at 1,792 MB a function gets 1 full vCPU[1] (notice the v in front of CPU). A vCPU is “a thread of either an Intel Xeon core or an AMD EPYC core”[2]. This holds for the compute-optimized instance types, which are the underlying Lambda infrastructure (not a hard commitment by AWS, but a general rule).
If 1,024 MB is allocated to a function, it gets roughly 57% of a vCPU (1,024 / 1,792 ≈ 0.57). It is obviously impossible to divide a CPU thread, so in the background AWS divides CPU time instead: with 1,024 MB, the function receives 57% of the processing time, and the CPU may switch to other tasks during the remaining 43%.
The result of this CPU allocation model is that the more memory is allocated to a function, the faster it will accomplish a given task.
To increase CPU power above 1,792 MB, AWS increases the number of vCPUs provided to the function. From the developer's standpoint, two vCPUs can be seen as two processing cores[3].
For single-threaded programs, there are no speed gains from increasing memory above 1,792 MB. The only way to reap the benefits of more than 1 vCPU is writing code that runs in two threads (or processes) simultaneously.
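As a minimal sketch (a hypothetical handler, not from this article), the Python snippet below splits a CPU-bound job across two processes so a function configured above 1,792 MB can use its second vCPU. Note that multiprocessing.Pool and Queue depend on /dev/shm, which the Lambda execution environment does not provide, so Process and Pipe are used instead:

```python
from multiprocessing import Process, Pipe

def _sum_squares(start, end, conn):
    # Placeholder CPU-bound workload: sum of squares over [start, end).
    conn.send(sum(i * i for i in range(start, end)))
    conn.close()

def handler(event, context):
    n = 10_000_000
    mid = n // 2
    chunks = [(0, mid), (mid, n)]  # split the work across two processes

    procs, conns = [], []
    for start, end in chunks:
        parent, child = Pipe()
        p = Process(target=_sum_squares, args=(start, end, child))
        p.start()
        procs.append(p)
        conns.append(parent)

    # Collect partial results, then wait for the workers to exit.
    results = [conn.recv() for conn in conns]
    for p in procs:
        p.join()
    return {"total": sum(results)}
```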
In some cases, it is desirable that a Lambda function responds as fast as possible. The best choice would then be to increase memory to the maximum: even if the RAM is not entirely needed, it drives the allocation of more CPU power, speeding up the function and reducing processing time.
Since Lambda provides two vCPUs above 1,792 MB, function code that can't be parallelized shouldn't have memory allocated above this threshold.
Functions running time-insensitive jobs should have memory allocated at the minimum required. The less RAM is assigned, the cheaper the function is to run per 100 milliseconds.
There is another caveat in this model, though: Lambda charges per execution duration, and more memory reduces processing time. In some cases, increasing memory can actually reduce Lambda costs[4].
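A quick back-of-the-envelope sketch in Python (illustrative price, not official AWS pricing) shows why: cost scales linearly with memory but also linearly with duration, so if doubling memory more than halves the duration, the invocation gets cheaper.

```python
# Illustrative Lambda cost arithmetic. The rate below is an example
# on-demand price; check current AWS pricing before relying on it.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(memory_mb, duration_ms):
    # Cost = memory (GB) x duration (s) x price per GB-second.
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# A CPU-bound task at 512 MB taking 2,000 ms vs. 1,024 MB taking 900 ms:
print(invocation_cost(512, 2000))   # ~1.67e-05 USD
print(invocation_cost(1024, 900))   # ~1.50e-05 USD: cheaper despite double the memory
```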
To discover the optimal memory size for a given function, it's necessary to benchmark it with multiple options[5].
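A rough sketch of such a benchmark using boto3 follows (the function name and memory sizes are hypothetical; client-side timing also includes network latency, so the REPORT line in the logs remains the authoritative duration):

```python
# Benchmark a function at several memory sizes.
import json
import time
import boto3

lambda_client = boto3.client("lambda")
FUNCTION_NAME = "my-function"                # hypothetical
MEMORY_SIZES = [256, 512, 1024, 1792, 3008]  # MB
PAYLOAD = json.dumps({"test": True}).encode()

for memory in MEMORY_SIZES:
    lambda_client.update_function_configuration(
        FunctionName=FUNCTION_NAME, MemorySize=memory
    )
    # Wait until the configuration change has been applied.
    lambda_client.get_waiter("function_updated").wait(FunctionName=FUNCTION_NAME)

    timings = []
    for _ in range(10):  # small sample; more invocations give steadier numbers
        start = time.monotonic()
        lambda_client.invoke(FunctionName=FUNCTION_NAME, Payload=PAYLOAD)
        timings.append((time.monotonic() - start) * 1000)

    print(f"{memory} MB: avg {sum(timings) / len(timings):.0f} ms")
```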
At the end of each Lambda invocation, the log stored in AWS CloudWatch Logs contains a REPORT line indicating how much memory was allocated to and consumed by the invocation. Using CloudWatch Logs Insights[6], it is possible to extract this information from multiple logs and compile aggregated time-series metrics.
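For example, here is a boto3 sketch of such a Logs Insights query (the log group name is assumed; @memorySize, @maxMemoryUsed, and @duration are fields Logs Insights exposes for Lambda REPORT lines):

```python
# Aggregate memory and duration stats from Lambda REPORT lines.
import time
import boto3

logs = boto3.client("logs")

QUERY = """
filter @type = "REPORT"
| stats max(@memorySize / 1000 / 1000) as provisionedMB,
        max(@maxMemoryUsed / 1000 / 1000) as maxUsedMB,
        avg(@duration) as avgDurationMs
  by bin(1h)
"""

now = int(time.time())
response = logs.start_query(
    logGroupName="/aws/lambda/my-function",  # assumed log group name
    startTime=now - 7 * 24 * 3600,           # last seven days
    endTime=now,
    queryString=QUERY,
)

# Poll until the query finishes, then print one row per hourly bin.
while True:
    result = logs.get_query_results(queryId=response["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({field["field"]: field["value"] for field in row})
```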
Professional monitoring services dedicated to serverless, such as Dashbird, take care of this compilation automatically. They also allow analyzing performance at different percentiles. In these services, it's possible to set policies for expected, optimal memory performance and receive alerts whenever a Lambda function starts to deviate.