How to use Provisioned Concurrency to reduce function latency and improve overall performance
The Lambda model [1] is based on ephemeral microVMs (or containers). Once a request is received, AWS provisions resources on demand to run the developer's code and compute whatever is necessary.
Depending on the runtime [2] and on how large the codebase and its dependencies are, the start-up process of a function may take some time (from a few hundred milliseconds to several seconds). This is what is called a "cold start" (the function was "cold" when the request was received).
AWS allows developers to set a minimum concurrency level at which the function is kept warm, so that all requests are answered within double-digit milliseconds. It is also possible to use auto-scaling to increase the provisioned concurrency proactively when demand is on the rise.
Provisioned Concurrency is a step away from the serverless model of paying only for what is used. By enabling it on a Lambda function, developers go back to renting compute capacity for time, which defeats one of the main arguments in favor of serverless: not paying for idle time.
One major caveat is that, by enabling Provisioned Concurrency, the function becomes ineligible for the Lambda free tier.
Another problem is the complexity that the pricing model for Provisioned Concurrency [3] adds to the overall Lambda financials. The amount of memory needed by a function is usually fixed and determined by the workload, so developers are normally left with only one dimension to analyze Lambda costs: the duration of the executions.
With Provisioned Concurrency, in addition to duration, developers must also observe the other dimensions AWS charges for: the number of requests, and the amount of concurrency provisioned, billed per GB-second for as long as the configuration is enabled. Execution duration itself is billed at a lower GB-second rate than in the on-demand model.
Consider a function with 512 MB of memory allocated, running over a 31-day month. It receives 10 million requests, each with a duration of 2 seconds.
In the traditional, on-demand pricing model, this function is charged for requests and execution duration only. With Provisioned Concurrency and 50 concurrent instances provisioned for the entire period, the provisioned capacity is charged as well.
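Using the us-east-1 prices in effect at the time of writing (assumed here: $0.20 per million requests; duration at $0.0000166667 per GB-second on-demand, or $0.0000097222 per GB-second with Provisioned Concurrency enabled; provisioned capacity at $0.0000041667 per GB-second), the comparison works out roughly as follows. Actual rates vary by region and change over time.

On-demand:
- Duration: 10,000,000 requests × 2 s × 0.5 GB = 10,000,000 GB-s, at $0.0000166667 ≈ $166.67
- Requests: 10 × $0.20 = $2.00
- Total: ≈ $168.67 per month

Provisioned Concurrency:
- Provisioned capacity: 50 instances × 0.5 GB × 2,678,400 s (31 days) = 66,960,000 GB-s, at $0.0000041667 ≈ $279.00
- Duration: 10,000,000 GB-s, at $0.0000097222 ≈ $97.22
- Requests: 10 × $0.20 = $2.00
- Total: ≈ $378.22 per month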
With this example, we can see that using provisioned concurrency can greatly increase the costs of running serverless workloads on AWS Lambda. In light of that, developers should plan and anticipate costs carefully before using it.
The Provisioned Concurrency level counts toward the function's Reserved Concurrency [4] limit and also toward the account's regional concurrency limits [5].
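As an illustration, the current regional limits can be inspected with the AWS CLI. The command below takes no arguments and returns, among other fields, the account's total and unreserved concurrent execution quotas:

aws lambda get-account-settings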
It is possible to use Application Auto Scaling [6] to automatically scale the provisioned concurrency level up and down.
There are three ways to implement the auto-scaling. The first two options have some similarities in the way they work, reacting to metrics, and are suitable for applications with unpredictable load behavior. The last one, scheduled scaling, is suitable for applications that have predictable spikes in demand, such as an e-commerce site during the Black Friday period. A metric-based example follows below.
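As an illustration of metric-based scaling, a target-tracking policy can be attached to a function alias through the Application Auto Scaling CLI. This is a minimal sketch; the function name (my-function), alias (prod), capacity bounds, and target utilization are hypothetical values:

aws application-autoscaling register-scalable-target --service-namespace lambda --resource-id function:my-function:prod --scalable-dimension lambda:function:ProvisionedConcurrency --min-capacity 10 --max-capacity 100

aws application-autoscaling put-scaling-policy --service-namespace lambda --resource-id function:my-function:prod --scalable-dimension lambda:function:ProvisionedConcurrency --policy-name keep-utilization-at-70-percent --policy-type TargetTrackingScaling --target-tracking-scaling-policy-configuration '{"TargetValue": 0.7, "PredefinedMetricSpecification": {"PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"}}'

This policy keeps the ratio of concurrency in use to concurrency provisioned around 70%, scaling between the registered minimum and maximum.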
The provisioned concurrency can be set manually from the AWS Console. Under the "Provisioned concurrency configurations" option, click "Add" or "Add configuration". This opens a new screen where a version or alias of the function and the desired concurrency level can be selected.
With the AWS CLI, we can add, list, and delete provisioned concurrency configurations for our functions. See the examples below.
Add 50 as the concurrency level for version 123 of my-function:
aws lambda put-provisioned-concurrency-config --function-name my-function --qualifier 123 --provisioned-concurrent-executions 50
List concurrency settings for my-function:
aws lambda list-provisioned-concurrency-configs --function-name my-function
Delete the provisioned concurrency of version 123 of my-function:
aws lambda delete-provisioned-concurrency-config --function-name my-function --qualifier 123
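Provisioned instances take a few minutes to allocate. To check whether a configuration is ready, there is also a get command; its Status field reports IN_PROGRESS, READY, or FAILED:

aws lambda get-provisioned-concurrency-config --function-name my-function --qualifier 123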
In the AWS SAM template YAML, declare Provisioned Concurrency settings as in the example below. Bear in mind that AWS SAM will raise an error if this feature is used when AutoPublishAlias is not set.
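A minimal sketch of the relevant resource section (the function name, handler, runtime, and alias are placeholder values):

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs14.x
      AutoPublishAlias: live  # required for Provisioned Concurrency
      ProvisionedConcurrencyConfig:
        ProvisionedConcurrentExecutions: 50  # number of warm instances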
Provisioned Concurrency can be configured in the Serverless framework YAML file as in the example below.
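A minimal sketch, assuming a standard service definition (the function name and handler are placeholder values):

functions:
  hello:
    handler: handler.hello
    provisionedConcurrency: 50  # number of warm instances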