Monitoring platform for keeping systems up and running at all times.
Full-stack visibility across your entire serverless infrastructure.
Detect and resolve any incident in record time.
Conform to industry best practices.
Dashbird continuously monitors and analyses your serverless applications to ensure reliability, cost and performance optimisation, and alignment with the Well-Architected Framework.
What defines a serverless system, its main characteristics and how it operates
What are the types of serverless systems for computing, storage, queue processing, etc.
What are the challenges of serverless infrastructures and how to overcome them?
How systems can be made reliable and why reliability matters for cloud applications
What is a scalable system and how to handle increasing loads
Making systems easy to operate, manage and evolve
Learn the three basic concepts to build scalable and maintainable applications on serverless backends
The pros and cons of each architecture and insights to choose the best option for your projects
Battle-tested serverless patterns to make sure your cloud architecture is ready for production use
Strategies to compose functions into flexible, scalable and maintainable systems
Achieving loosely-coupled architectures with the asynchronous messaging pattern
Using message queues to manage task processing asynchronously
Asynchronous message and task processing with Pub/Sub
A software pattern to control workflows and state transitions on complex processes
The strategy and practical considerations about AWS physical infrastructure
How cloud resources are identified across the AWS stack
What makes up a Lambda function?
What is AWS Lambda and how it works
Suitable use cases and advantages of using AWS Lambda
How much AWS Lambda costs, pricing model structure and how to save money on Lambda workloads
Learn the main pros/cons of AWS Lambda, and how to solve the FaaS development challenges
Main aspects of the Lambda architecture that impact application development
Quick guide for Lambda applications in Node.js, Python, Ruby, Java, Go, C# / .NET
Different ways of invoking a Lambda function and integrating to other services
Building fault-tolerant serverless functions with AWS Lambda
Understand how Lambda scales and deals with concurrency
How to use Provisioned Concurrency to reduce function latency and improve overall performance
What are Lambda Layers and how to use them
What are cold starts, why they happen and what to do about them
Understand the Lambda retry mechanism and how functions should be designed
Managing AWS Lambda versions and aliases
How to best allocate resources and improve Lambda performance
What is DynamoDB, how it works and the main concepts of its data model
How much DynamoDB costs and its different pricing models
Query and Scan operations and how to access data on DynamoDB
Alternative indexing methods for flexible data access patterns
How to organize information and leverage DynamoDB features for advanced ways of accessing data
Different models for throughput capacity allocation and optimization in DynamoDB
Comparing NoSQL databases: DynamoDB vs. MongoDB
Comparing managed database services: DynamoDB vs. MongoDB Atlas
How does an API gateway work and what are some of the most common use cases
Learn the benefits and drawbacks of using API Gateway
Picking the right API Gateway service provider can be difficult
Types of possible errors in an AWS Lambda function and how to handle them
Best practices for what to log in an AWS Lambda function
How to log objects and classes from the Lambda application code
Program a proactive alerting system to stay on top of the serverless stack
Originally, serverless started as Function-as-a-Service (FaaS), used only for computing purposes. Nowadays, it spans all sorts of cloud services.
Some services even preceded FaaS (such as certain storage and queue platforms), but are now called serverless because they fit the conceptual characterization.
In the following lines we cover the most common types of serverless systems and their main purposes.
Developers can deploy code and run it on-demand or on-schedule. FaaS platforms, such as AWS Lambda, will abstract all infrastructure – even the OS itself – to provide only a containerized runtime to the developer.
FaaS platforms usually support the most popular runtimes by default, such as:
Python
Node.js
.NET
Java
Some platforms, AWS Lambda among them, also allow developers to implement custom runtimes.
In a nutshell, the developer's responsibility is to implement a handler function (in Python 3.8, for example) that receives the invocation event, processes whatever it needs, and returns a response.
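A minimal sketch of what such a handler can look like in Python; the "name" field in the event and the HTTP-style response shape are illustrative assumptions, not a fixed Lambda contract:

```python
import json

def handler(event, context):
    # `event` carries the invocation payload; `context` exposes runtime
    # metadata such as the request ID and remaining execution time.
    name = event.get("name", "world")  # illustrative field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```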
Once invoked, a serverless function will provide the invocation parameters to the developer code, which is responsible for processing what it needs and returning a response.
Almost anything that could be done in a traditional infrastructure, such as authenticating a user login, calling an external API, reading or writing data to a database, etc., can be implemented in a serverless compute service.
The AWS service for serverless computing is Lambda. Find more information here.
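For the caller side, here is a hedged boto3 sketch of invoking a Lambda function synchronously; the function name and payload are placeholders:

```python
import json

import boto3

lambda_client = boto3.client("lambda")

# "RequestResponse" waits for the result; "Event" would invoke asynchronously.
response = lambda_client.invoke(
    FunctionName="my-serverless-function",  # placeholder name
    InvocationType="RequestResponse",
    Payload=json.dumps({"name": "Dashbird"}),
)

# The response payload is a stream; read and decode it.
result = json.loads(response["Payload"].read())
print(result)
```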
A queue or message service holds data for a limited period of time while it is moved from one part of an application to another.
The most common goals for using a queuing service are buffering workloads and decoupling services, as the two examples below illustrate.
The AWS serverless offering for queuing is SQS. Find more information here.
Consider a poll that gathers feedback from people on a particular topic.
It’s possible for the poll to intake so many requests at a given point in time that the database can’t handle persisting all of them.
The queue can work as a vote buffer. An internal job regularly reads votes from the queue according to the database write capacity. During peak load, the queue might build up. It can be gradually cleaned up in periods of lower demand.
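A rough sketch of that buffering pattern with SQS; the queue URL, message fields and batch size are placeholder assumptions:

```python
import json

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/votes-queue"  # placeholder

def enqueue_vote(vote):
    # Producer: accept the vote immediately and buffer it in the queue.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(vote))

def drain_votes(batch_size=10):
    # Consumer job: read votes at a pace the database can handle.
    response = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=batch_size
    )
    for message in response.get("Messages", []):
        vote = json.loads(message["Body"])
        # ... persist `vote` to the database here ...
        sqs.delete_message(
            QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"]
        )
```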
After a purchase on an e-commerce site, the store might send a confirmation email, a request to the fulfilling department, etc.
The store can send a message to the confirmation-queue, for example. An internal job regularly reads purchases and emails buyers.
The advantage is service decoupling. Changing the email service doesn't impact the online store code; the new email processor only needs to read messages from the same queue.
A stream processor receives a constant flow of data packages and can analyze and process data in real time.
What sets a stream processor apart from a queue buffer is its ability to analyze data, detect patterns, apply conditions and react accordingly.
Although it can be used as a buffer, like the queuing mechanism, it is more commonly employed when analytics is needed on the data being ingested.
Kinesis is the AWS offering for stream processing. It is capable of processing streams in four different specializations: Data Streams, Data Firehose, Data Analytics, and Video Streams.
With Kinesis Data Analytics, for example, it is possible to apply SQL queries to structured incoming data and set conditional triggers: combine data from the past 10 minutes, or once it reaches 5 MB, in order to trigger a custom data processor. The system is flexible for many use cases.
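To illustrate the ingestion side only, here is a hedged boto3 sketch that pushes records into a Kinesis data stream; the analytics layer (SQL queries, windows, triggers) would be configured on top of the stream, and the stream name and event fields are made up for the example:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

def publish_event(event):
    # Records with the same partition key land on the same shard,
    # preserving their relative order.
    kinesis.put_record(
        StreamName="clickstream-events",  # placeholder stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "anonymous")),
    )

publish_event({"user_id": 42, "action": "page_view", "page": "/pricing"})
```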
An event bus receives messages from multiple sources and delivers them to multiple destinations.
The line between a message queue and an event bus is somewhat blurred. While a queue handles messages of a particular kind, an event bus can process multiple topics and serve several publishers/subscribers.
An event bus allows custom rules to get each message delivered to the right subscriber(s). It is an alternative for reducing service coupling and offers an interface that is easy to change and maintain.
The AWS serverless offering is EventBridge, an event bus that also comes with integrations with third-party systems, from customer support (Zendesk) to sales solutions (SugarCRM) to security suites (Symantec).
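As a sketch of how a publisher would put a custom event onto a bus with boto3 (the bus name, source and detail type are placeholders), rules defined on the bus would then route the event to the right subscribers:

```python
import json

import boto3

events = boto3.client("events")

events.put_events(
    Entries=[
        {
            "EventBusName": "orders-bus",   # placeholder custom bus
            "Source": "shop.orders",        # placeholder source
            "DetailType": "OrderPlaced",    # placeholder event type
            "Detail": json.dumps({"order_id": "A-1001", "total": 49.9}),
        }
    ]
)
```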
The serverless database space is very rich nowadays, with options ranging from SQL to NoSQL, Graph, Analytics, you name it.
A SQL database, such as AWS Aurora Serverless, is ideal when the data is highly relational and queries need to remain flexible or analytical.
NoSQL options, such as AWS DynamoDB, usually scale better for large datasets or write-intensive applications, due to the way data is distributed. Transactional queries get better, more predictable performance, but NoSQL is not well suited for analytical needs.
Graph databases, such as AWS CloudDirectory, are a good fit when data relationships become too complex, or when both the transactional performance of NoSQL and connectivity support similar to a SQL model are required.
Finally, analytical services, as the name suggests, are usually applied in combination with another type of database. A NoSQL can be used for highly performant transactional persistence, while a service like AWS Athena can be plugged in to provide analytical insights in an efficient way.
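To make the NoSQL option above a bit more concrete, here is a hedged DynamoDB sketch with boto3; the table, key schema and item fields are assumptions for the example:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # placeholder table keyed by customer_id/order_id

# Single-item write: the kind of transactional operation DynamoDB
# serves with predictable latency.
table.put_item(Item={"customer_id": "c-42", "order_id": "A-1001", "total": 49})

# Key-based read of the same item.
response = table.get_item(Key={"customer_id": "c-42", "order_id": "A-1001"})
print(response.get("Item"))
```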
A blob storage system is used to store objects such as text files, images or videos.
The AWS offering in this space is the battle-tested S3.
S3 can integrate with AWS Lambda and Athena to build powerful big data analyses by leveraging structured data formats such as CSV or Apache Parquet.
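A small sketch of writing a structured object to S3 with boto3; the bucket and key are placeholders, and keeping the data in CSV or Parquet is what makes it queryable later by services such as Athena:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket and key; a CSV body keeps the object queryable
# by downstream analytics services.
csv_body = "order_id,total\nA-1001,49.90\n"
s3.put_object(
    Bucket="analytics-landing-zone",
    Key="orders/2021-01-01.csv",
    Body=csv_body.encode("utf-8"),
)
```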
The two most common models for API implementations are REST and Graph (not to be confused with a graph database).
REST is the older and more widely used model. It allows two systems to communicate by leveraging HTTP methods (GET/PUT/POST/DELETE). REST requires data consumers to know in advance which endpoints are exposed and which parameters or filters are supported.
Graph API is a model created by Facebook to support more flexible queries from data consumers. Requesters can combine filters, select which data points to return, or aggregate information before retrieval, for example.
AWS offers solutions for both architecture models: API Gateway for REST APIs and AppSync for GraphQL APIs.
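To contrast the two request styles, here is a hedged sketch against hypothetical endpoints; the URLs and fields are invented for the example:

```python
import json
import urllib.request

# REST: the URL path and HTTP method identify the resource and operation.
rest_request = urllib.request.Request(
    "https://api.example.com/orders/A-1001", method="GET"
)

# GraphQL: one endpoint; the query itself selects exactly the fields wanted.
graphql_payload = {"query": '{ order(id: "A-1001") { total status } }'}
graphql_request = urllib.request.Request(
    "https://api.example.com/graphql",
    data=json.dumps(graphql_payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Either request would be sent with urllib.request.urlopen(...).
```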
Most web applications need to manage, authenticate and authorize users at some point. Using a serverless user management system speeds up application development and lets the team focus on what really brings value to its users.
Security is also improved, since these services implement high security standards that are hard for a small team to replicate.
Cognito is the AWS contender in this space. There are external services as well, such as Auth0.
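As an illustration of how little code user sign-up takes with such a service, a hedged Cognito sketch with boto3; the app client ID and credentials are placeholders:

```python
import boto3

cognito = boto3.client("cognito-idp")

# Registers a new user against a Cognito user pool app client.
cognito.sign_up(
    ClientId="example-app-client-id",          # placeholder app client ID
    Username="jane@example.com",
    Password="CorrectHorseBatteryStaple1!",    # placeholder credential
    UserAttributes=[{"Name": "email", "Value": "jane@example.com"}],
)
```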