The Asynchronous Messaging pattern is very useful to decouple serverless functions and make systems more maintainable.
For example, a common anti-pattern in serverless projects is the extensive tight coupling between services. Having Lambda functions invoking each other directly usually leads to a project that is difficult to deploy, maintain, and extend.
This is a common outline of how serverless development roadmaps evolve: functions start invoking each other directly, and new features are added by chaining more direct invocations.
Although this seems easy to implement, it leaves services tightly coupled. Developers might not feel the pain in small implementations, but as the overall system grows, it becomes increasingly hard to maintain and evolve.
Before we get into more details about the architecture itself, let’s analyze why coupling is an issue for serverless functions.
An e-commerce system is something most of us are quite familiar with. Consider what happens when a customer submits a purchase. Several processes need to be performed:

- Authorize the payment
- Decrease (set aside) the product stock
- Send a confirmation message to the customer
The following architectural ideas are only for conceptual illustration purposes. They omit aspects that would be required in a production environment and might not be the most suitable for the particular context used as an example.
Consider there is a Lambda function to perform each of the actions above.
In a highly coupled architecture, there could be a central function called Submit Order that performs all of these actions imperatively, in a single run.
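As a rough sketch (plain Python functions standing in for individual Lambda functions, with hypothetical names), the tightly coupled version could look like this:

```python
import time

# Hypothetical step implementations; in a real system each would be its
# own Lambda function invoked synchronously by Submit Order.
def authorize_payment(order):
    time.sleep(0.01)  # simulate I/O wait
    return {"payment_id": f"pay-{order['order_id']}"}

def decrease_stock(order):
    time.sleep(0.01)  # simulate I/O wait
    return {"stock_ok": True}

def send_confirmation(order):
    time.sleep(0.01)  # simulate I/O wait
    return {"notified": True}

def submit_order(order):
    # Tightly coupled: each step blocks the main function, which sits
    # idle (and keeps billing) while waiting for downstream I/O.
    payment = authorize_payment(order)
    stock = decrease_stock(order)  # if this fails, what happens to the payment?
    notice = send_confirmation(order)
    return {"order_id": order["order_id"], **payment, **stock, **notice}

result = submit_order({"order_id": "123"})
print(result)
```

Note that `submit_order` cannot finish until every downstream call returns, which is exactly the blocking behavior discussed next.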
Let’s analyze each of the disadvantages of tight coupling in light of this example:
Observe that each step of the process (in grey) blocks the execution of the main function Submit Order (in blue). Also, multiple I/O-bound processes are running in the background (in yellow).
As a result, the main function will be waiting idly for considerable periods of time. This leads to poor resource usage and higher costs since serverless functions are charged per execution time.
What happens in the above outline if the Decrease Stock function fails? Perhaps the database was under heavy load and couldn’t process the decrement query.
Surely the request could be retried. But now there’s one more thing that needs to be coordinated by the Submit Order function, which increases complexity. As a result, the chances of experiencing even more errors only increase.
To reduce complexity, let’s not handle the retrying of individual components within the Submit Order function. If anything fails, the main function will be retried from the start.
How should a failure within the Decrease Stock step be handled? Running the entire Submit Order request again will authorize payment twice.
Should the previous payment authorization be canceled? If we keep it, the main function must skip the payment step in the retry. But now there’s one more logic to handle, which also adds complexity.
We can see that the tight coupling of services already makes it difficult to manage our system and keep it running reliably.
Idempotency is a “property of certain operations in mathematics and computer science whereby they can be applied multiple times without changing the result beyond the initial application”.
In other words: the Submit Order function would be idempotent if it was possible to retry it in case of failure without worrying about double charges. We dedicated an entire page in our Knowledge Base to cover the Lambda Retry behavior and Idempotency.
Every single serverless function should be idempotent. That is a trait of reliable and maintainable serverless systems.
What we need to know is: accomplishing idempotency for the Submit Order function would be a lot harder and complex in the tightly coupled architecture outlined above.
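A minimal idempotency sketch, using an in-memory dict as the idempotency-key store (in practice this would be a durable store, such as a DynamoDB table with a conditional write; names are illustrative):

```python
# Results are keyed by an idempotency key (the order ID), so retries
# return the stored result instead of re-running side effects.
processed = {}
charges = []

def submit_order(order):
    key = order["order_id"]
    if key in processed:      # retry: skip side effects entirely
        return processed[key]
    charges.append(key)       # side effect: charge the customer once
    result = {"order_id": key, "status": "confirmed"}
    processed[key] = result
    return result

first = submit_order({"order_id": "123"})
retry = submit_order({"order_id": "123"})  # safe to retry after a failure
print(charges)  # the customer was charged exactly once
```

The retry returns the same result as the first call without charging the customer a second time, which is the property we want for Submit Order.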
Let’s consider a different strategy and compare it against the first one. When a customer submits a purchase order, the Submit Order function only dispatches messages to be processed asynchronously.
The main function is now streamlined to interact only with a message queue service.
Each step in the process is now decoupled, having asynchronous messages as the means to make the system work cohesively.
Messages are then consumed by each step of the process independently. The “Set aside stock” step, for example, would pull pending messages from the queue and reserve products in the database.
Having each step working independently improves resource utilization and reduces costs by cutting down idle I/O time. It is also easier now to handle retries in case anything fails. When a failure occurs within one step of the process, the other functions will not be impacted.
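The decoupled dispatch can be sketched like this, with Python’s in-memory `queue.Queue` standing in for a managed queue service such as SQS (names are illustrative):

```python
import queue

# Submit Order only publishes messages; each step consumes them
# independently, on its own schedule.
orders = queue.Queue()

def submit_order(order):
    # The main function returns immediately after publishing.
    orders.put({"type": "order_submitted", "order": order})

def set_aside_stock(message):
    # Independent consumer: reserves products for the order.
    return f"reserved stock for {message['order']['order_id']}"

submit_order({"order_id": "123"})
submit_order({"order_id": "456"})

results = []
while not orders.empty():
    results.append(set_aside_stock(orders.get()))
print(results)
```

Submit Order no longer waits on any downstream step; a failure in the stock consumer leaves the message in the queue without affecting the other consumers.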
Another benefit is being able to replace or modify one service without having to touch the others. Consider that the system originally only sent a confirmation e-mail to the customer, and sending an SMS message now becomes a requirement. In the tightly coupled architecture, it would be hard to introduce this change independently. With an asynchronous messaging architecture, we only need to add one more message consumer.
Lastly, ensuring the scalability and reliability of a decoupled architecture is much easier. Consider that Service A publishes a message for Service B to get a task done. If Service B relies on a database that poses scalability difficulties, for instance, Service A can still scale up without a problem. Messages may pile up for some time, but Service B will eventually catch up.
Even if this situation is undesirable (a backlog of messages taking longer to process than usual), it is better than having Service B crash under a demand peak passed straight through from Service A.
Below is a brief introduction to the main options for handling asynchronous messages in a distributed serverless environment.
We mentioned a “message queue” in the example for the loosely-coupled architecture above. This is exactly what the name suggests: messages are received and piled up by the messaging system in a queue. A consumer can pull messages from the queue for processing. When processing is finished, the messaging system is notified to delete the message. If it fails, the message goes back to the queue.
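The pull/process/delete cycle can be simulated in a few lines (an in-memory deque stands in for the queue, and the transient failure is contrived for illustration):

```python
import collections

# A consumer pulls a message and deletes it only after successful
# processing; on failure, the message goes back to the queue for a retry.
q = collections.deque(["msg-1", "msg-2"])
fail_once = {"msg-1"}  # simulate a transient failure on the first attempt
processed = []

while q:
    msg = q.popleft()          # pull a message for processing
    try:
        if msg in fail_once:
            fail_once.remove(msg)
            raise RuntimeError("transient failure")
        processed.append(msg)  # success: message is deleted from the queue
    except RuntimeError:
        q.append(msg)          # failure: message returns to the queue

print(processed)
```

`msg-1` fails once, returns to the queue, and is processed on the second attempt, so nothing is lost.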
Read our detailed page about the Message Queue pattern.
The main components of a Pub/Sub system are:
In the “confirmation message” example, if we had only an e-mail sending mechanism and wanted to add SMS messaging as well, it would be a matter of deploying an SMS sender service and subscribing it to the “customer notifications” topic. The rest of the system remains unaware of this change. This makes the overall project a lot more manageable and extensible.
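A toy Pub/Sub sketch illustrates why: publishers emit to a topic, every subscriber receives a copy, and adding the SMS sender is a single extra subscription (all names here are illustrative):

```python
from collections import defaultdict

# Topic -> list of subscriber handlers; publishing fans the message
# out to every subscriber of that topic.
subscriptions = defaultdict(list)
sent = []

def subscribe(topic, handler):
    subscriptions[topic].append(handler)

def publish(topic, message):
    for handler in subscriptions[topic]:
        handler(message)

subscribe("customer-notifications", lambda m: sent.append(f"email: {m}"))
subscribe("customer-notifications", lambda m: sent.append(f"sms: {m}"))  # new requirement

publish("customer-notifications", "order 123 confirmed")
print(sent)
```

The publisher code is untouched by the new requirement; only the subscription list changed.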
Read our detailed page about the Pub/Sub pattern.
An Event Bridge is somewhat similar to the above two, but with a difference: messages (called events) are matched to subscribers depending on a fine-grained pattern matching mechanism. Events that match a certain pattern would be forwarded to a particular subscriber responsible for processing them.
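A simplified sketch of this routing (patterns and target names are made up for illustration; real services such as Amazon EventBridge use a richer pattern language):

```python
# Each rule pairs a field pattern with a target; an event is forwarded
# to every target whose pattern fields all match the event.
rules = [
    ({"source": "orders", "type": "order_submitted"}, "stock-service"),
    ({"source": "payments"}, "billing-service"),
]

def route(event):
    return [target for pattern, target in rules
            if all(event.get(k) == v for k, v in pattern.items())]

print(route({"source": "orders", "type": "order_submitted"}))  # -> ['stock-service']
print(route({"source": "payments", "type": "charge"}))         # -> ['billing-service']
```

Events that match no rule are simply not delivered, and new consumers are wired in by adding rules rather than changing publishers.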
A Stream Processing system is more suitable for continuous data ingestion: application or database logs, or tracking a series of clicks in a user interface for analytical purposes, for example. It is not quite in the same category as the others in this list, but it can be helpful for batching.
Stream Processing services usually allow grouping multiple data points and invoking a service processor every 10 minutes, for example, or whenever the amount of data ingested reaches 10 megabytes. This batching mechanism can help improve performance and reduce costs in serverless architectures.
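A size-based batching trigger can be sketched as follows (the threshold is illustrative; managed services also support time-based triggers):

```python
# Records accumulate in a batch, and the processor is invoked only when
# the batch reaches the size threshold, amortizing per-invocation cost.
BATCH_SIZE = 3
batch, invocations = [], []

def ingest(record):
    batch.append(record)
    if len(batch) >= BATCH_SIZE:
        invocations.append(list(batch))  # one processor call per full batch
        batch.clear()

for i in range(7):
    ingest(f"log-{i}")

print(invocations)  # two batches of three; 'log-6' is still waiting
```

Seven records produce only two processor invocations, with the leftover record held until the next threshold (or a timer) fires.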