What defines a serverless system, its main characteristics, and how it operates
What are the types of serverless systems for computing, storage, queue processing, etc.
What are the challenges of serverless infrastructures and how to overcome them?
How systems can be made reliable and why reliability matters for cloud applications
What is a scalable system and how to handle increasing loads
Making systems easy to operate, manage and evolve
Learn the three basic concepts to build scalable and maintainable applications on serverless backends
The pros and cons of each architecture and insights to choose the best option for your projects
Battle-tested serverless patterns to make sure your cloud architecture is ready for production use
Strategies to compose functions into flexible, scalable and maintainable systems
Achieving loosely-coupled architectures with the asynchronous messaging pattern
Using message queues to manage task processing asynchronously
Asynchronous message and task processing with Pub/Sub
A software pattern to control workflows and state transitions in complex processes
The strategy and practical considerations behind AWS's physical infrastructure
How cloud resources are identified across the AWS stack
What makes up a Lambda function?
What is AWS Lambda and how it works
Suitable use cases and advantages of using AWS Lambda
How much AWS Lambda costs, pricing model structure and how to save money on Lambda workloads
Learn the main pros/cons of AWS Lambda, and how to overcome common FaaS development challenges
Main aspects of the Lambda architecture that impact application development
Quick guide for Lambda applications in Node.js, Python, Ruby, Java, Go, C# / .NET
Different ways of invoking a Lambda function and integrating with other services
Building fault-tolerant serverless functions with AWS Lambda
Understand how Lambda scales and deals with concurrency
How to use Provisioned Concurrency to reduce function latency and improve overall performance
What are Lambda Layers and how to use them
What are cold starts, why they happen and what to do about them
Understand the Lambda retry mechanism and how functions should be designed
Managing AWS Lambda versions and aliases
How to best allocate resources and improve Lambda performance
What is DynamoDB, how it works and the main concepts of its data model
How much DynamoDB costs and its different pricing models
Query and Scan operations and how to access data on DynamoDB
Alternative indexing methods for flexible data access patterns
How to organize information and leverage DynamoDB features for advanced ways of accessing data
Different models for throughput capacity allocation and optimization in DynamoDB
Comparing NoSQL databases: DynamoDB and MongoDB
Comparing managed database services: DynamoDB vs. MongoDB Atlas
How does an API gateway work and what are some of the most common use cases
Learn the benefits and drawbacks of using API Gateway
Picking the right API Gateway service provider can be difficult
Types of possible errors in an AWS Lambda function and how to handle them
Best practices for what to log in an AWS Lambda function
How to log objects and classes from the Lambda application code
Program a proactive alerting system to stay on top of the serverless stack
Resilience is the ability of a cloud system to anticipate and handle faults without disrupting or discontinuing services to its users.
Lambda offers a high level of resilience by benefiting from multi-AZ replication by default. Each function can run from one or more AZs within an AWS Region. Even in the event of a multi-machine failure or the loss of an entire data center, AWS is able to continue serving invocations to a Lambda function.
Cross-region replication, function versioning and retry behavior are additional mechanisms that increase application reliability.
Function versions and aliases enable developers to save a function's code and configuration. It is possible, for example, to run different versions of the same function at the same time.
This allows multiple consumers of a single function to upgrade to newer versions at their own pace, reducing the risk of service disruption that comes with upgrading a function for all consumers at once. Blue/green and rolling deployments are also possible by using versioning in Lambda functions.
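As a rough illustration of how such a canary-style rollout might be scripted with the AWS SDK for Python (boto3), here is a minimal sketch; the function name, alias name and traffic weight are assumptions, not values from this article:

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function and alias names; assumes new code is already deployed to $LATEST.
FUNCTION = "orders-service"
ALIAS = "live"

# Publish the current code and configuration as an immutable, numbered version.
new_version = lambda_client.publish_version(FunctionName=FUNCTION)["Version"]

# Keep the alias pointed at the currently stable version, but shift 10% of
# invocations to the new version (weighted alias routing enables canary rollouts).
stable_version = lambda_client.get_alias(FunctionName=FUNCTION, Name=ALIAS)["FunctionVersion"]
lambda_client.update_alias(
    FunctionName=FUNCTION,
    Name=ALIAS,
    FunctionVersion=stable_version,
    RoutingConfig={"AdditionalVersionWeights": {new_version: 0.10}},
)
```

Consumers invoke the alias rather than a specific version, so promoting the new version later is a single alias update with no changes on the caller side.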
When a function invocation fails for some reason, Lambda may retry multiple times until the execution is successful. A retry is simply invoking the same function again with the same event payload.
This behavior enables fault tolerance in Lambda applications, since it prevents transient faults from causing requests to fail permanently. For more information, please read the page about Lambda retry behavior.
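If the default retry policy does not fit a workload, the asynchronous invocation configuration can be set explicitly per function. The snippet below is a hedged sketch using boto3; the function name and the specific limits are assumptions:

```python
import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function name and limits. For asynchronous invocations, Lambda retries
# failed executions up to two more times by default; this makes the policy explicit.
lambda_client.put_function_event_invoke_config(
    FunctionName="orders-service",
    MaximumRetryAttempts=2,         # 0, 1 or 2 retries after the initial attempt
    MaximumEventAgeInSeconds=3600,  # discard events that have been queued for over an hour
)
```

Because the same payload can be delivered more than once, functions invoked this way should be designed to be idempotent.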
Although multi-AZ replication is enabled by default for all Lambda functions, cross-region replication must be implemented manually by developers. This can be accomplished by combining API Gateway regional endpoints with a Route 53 active-active setup.
For a detailed walk-through of the implementation, please check this AWS blog post.
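For illustration only, the active-active DNS piece of such a setup could look roughly like the boto3 sketch below. The hosted zone ID, domain names and health check IDs are hypothetical placeholders, and a real deployment would also need regional API Gateway custom domains and Route 53 health checks already in place:

```python
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z123EXAMPLE"  # hypothetical hosted zone for example.com
REGIONAL_ENDPOINTS = [
    # (set identifier, regional API Gateway domain, health check ID) - all hypothetical
    ("us-east-1", "d-abc123.execute-api.us-east-1.amazonaws.com", "hc-use1-example"),
    ("eu-west-1", "d-def456.execute-api.eu-west-1.amazonaws.com", "hc-euw1-example"),
]

changes = [
    {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "CNAME",
            "SetIdentifier": region,           # distinguishes the weighted records
            "Weight": 50,                      # 50/50 active-active split
            "TTL": 60,
            "HealthCheckId": health_check_id,  # unhealthy regions stop receiving traffic
            "ResourceRecords": [{"Value": domain}],
        },
    }
    for region, domain, health_check_id in REGIONAL_ENDPOINTS
]

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Active-active routing to regional API Gateway endpoints",
        "Changes": changes,
    },
)
```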
Lambda can scale very quickly to accommodate hundreds or thousands of concurrent requests across multiple functions. To protect the platform from abuse and DoS attacks, there is a limit to how much it can scale: the default is 1,000 concurrent executions (burstable to 3,000 to cope with short peaks).
It is very common for applications to rely on multiple functions. If a single function scales up to 1,000 concurrent executions, it exhausts that limit and prevents all other functions from running. To avoid this scenario, Lambda provides Reserved Concurrency. Read more about it on the Scalability and Concurrency page.
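As a sketch of how that protection might be applied with boto3 (the function name and the reserved value are assumptions):

```python
import boto3

lambda_client = boto3.client("lambda")

# Reserving 100 concurrent executions guarantees this function can always scale
# up to 100, while also capping it at 100 so it cannot starve other functions
# of the shared account-level concurrency pool.
lambda_client.put_function_concurrency(
    FunctionName="orders-service",       # hypothetical function name
    ReservedConcurrentExecutions=100,
)
```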
When asynchronous invocations fail, Lambda may retry the request multiple times. If the last retry still fails, Lambda can be configured to send the request payload to a Dead-Letter Queue (DLQ). This queue can store messages for several days, which allows developers to inspect failed requests, possibly fix the causes of failure and replay them.
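A minimal sketch of wiring up such a DLQ with boto3 is shown below; the queue and function names are hypothetical, and the function's execution role would also need sqs:SendMessage permission on the queue:

```python
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Create a queue that retains failed events for 14 days (the SQS maximum).
queue_url = sqs.create_queue(
    QueueName="orders-service-dlq",                    # hypothetical queue name
    Attributes={"MessageRetentionPeriod": "1209600"},  # 14 days, in seconds
)["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Attach the queue as the destination for events that exhaust all retries.
lambda_client.update_function_configuration(
    FunctionName="orders-service",                     # hypothetical function name
    DeadLetterConfig={"TargetArn": queue_arn},
)
```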