All-in-one serverless DevOps platform.
Full visibility across your entire stack.
Detect and resolve incidents in record time.
Conform to industry best practices.
Dashbird continuously monitors and analyses your serverless applications to ensure reliability, cost and performance optimisation, and alignment with the AWS Well-Architected Framework.
What defines a serverless system, its main characteristics and how it operates
What are the types of serverless systems for computing, storage, queue processing, etc.
What are the challenges of serverless infrastructures and how to overcome them?
How systems can be reliable and why reliability matters to cloud applications
What is a scalable system and how to handle increasing loads
Making systems easy to operate, manage and evolve
Learn the three basic concepts to build scalable and maintainable applications on serverless backends
The pros and cons of each architecture and insights to choose the best option for your projects
Battle-tested serverless patterns to make sure your cloud architecture is ready for production use
Strategies to compose functions into flexible, scalable and maintainable systems
Achieving loosely-coupled architectures with the asynchronous messaging pattern
Using message queues to manage task processing asynchronously
Asynchronous message and task processing with Pub/Sub
A software pattern to control workflows and state transitions in complex processes
The strategy and practical considerations about AWS physical infrastructure
How cloud resources are identified across the AWS stack
What makes up a Lambda function?
What is AWS Lambda and how it works
Suitable use cases and advantages of using AWS Lambda
How much AWS Lambda costs, pricing model structure and how to save money on Lambda workloads
Learn the main pros/cons of AWS Lambda, and how to solve the FaaS development challenges
Main aspects of the Lambda architecture that impact application development
Quick guide for Lambda applications in Node.js, Python, Ruby, Java, Go, C# / .NET
Different ways of invoking a Lambda function and integrating with other services
Building fault-tolerant serverless functions with AWS Lambda
Understand how Lambda scales and deals with concurrency
How to use Provisioned Concurrency to reduce function latency and improve overall performance
What are Lambda Layers and how to use them
What are cold starts, why they happen and what to do about them
Understand the Lambda retry mechanism and how functions should be designed
Managing AWS Lambda versions and aliases
How to best allocate resources and improve Lambda performance
What is DynamoDB, how it works and the main concepts of its data model
How much DynamoDB costs and its different pricing models
Query and Scan operations and how to access data on DynamoDB
Alternative indexing methods for flexible data access patterns
How to organize information and leverage DynamoDB features for advanced ways of accessing data
Different models for throughput capacity allocation and optimization in DynamoDB
Comparing NoSQL databases: DynamoDB vs. MongoDB
Comparing managed database services: DynamoDB vs. MongoDB Atlas
How does an API gateway work and what are some of the most common use cases
Learn the benefits and drawbacks of using API Gateway
Picking the right API Gateway service provider can be difficult
Types of possible errors in an AWS Lambda function and how to handle them
Best practices for what to log in an AWS Lambda function
How to log objects and classes from the Lambda application code
Program a proactive alerting system to stay on top of the serverless stack
When a function is invoked, Lambda checks whether a microVM is already active. If there’s an idle microVM available, it will be used to serve the new incoming request. In this particular case, there is no startup time, since the microVM was already up and had the code package in memory. This is called a warm start.
The opposite – having to provision a new microVM from scratch to serve an incoming request – is called cold start.
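The difference is easy to observe from inside a function. Below is a minimal Python sketch (assuming nothing beyond the standard Lambda runtime): module-level code runs only when a new microVM loads the code package, so a module-level flag distinguishes cold from warm starts.

```python
# Module-level code runs once per microVM, when the code package is loaded.
# Warm starts reuse the already-initialised module, skipping this assignment.
IS_COLD_START = True

def handler(event, context):
    global IS_COLD_START
    if IS_COLD_START:
        IS_COLD_START = False
        print("Cold start: a new microVM was provisioned for this request")
    else:
        print("Warm start: an idle microVM served this request")
    return {"status": "ok"}
```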
The total startup time depends on multiple factors. As a general rule, the most important ones are the runtime (language), the size of the code package and the amount of memory allocated to the function.
Cold starts add to the overall execution time. For time-sensitive workloads, this can be a problem.
How often cold starts occur depends largely on the variability of the application's demand. For frequent, low-variability traffic, cold starts will hardly be an issue. This is because the application will require the same number of microVMs most of the time. And since traffic is frequent (new requests every minute, for example), Lambda will find warm microVMs available for most invocations.
For applications with infrequent or highly variable traffic, the likelihood of cold starts increases considerably. Infrequent access means Lambda will terminate microVMs after long idle periods. And high variability increases the chances of multiple concurrent requests, which may require spinning up microVMs from scratch.
A simple solution is invoking functions on a scheduled basis (e.g. every 10 minutes), which makes Lambda keep some microVMs alive all the time. Developers will commonly need to ensure warm starts for multiple concurrent requests, so the scheduled process needs to trigger multiple invocations in parallel in order to force Lambda into keeping multiple microVMs alive, as sketched below.
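One way to implement this is a small "warmer" function triggered by a scheduled rule. A sketch under stated assumptions (the target function name, the `warmup` payload marker and the concurrency level are illustrative, not a prescribed setup):

```python
import json

import boto3

lambda_client = boto3.client("lambda")

TARGET_FUNCTION = "my-api-function"  # hypothetical function to keep warm
WARM_CONCURRENCY = 5                 # hypothetical number of microVMs to keep alive

def warmer_handler(event, context):
    """Runs on a schedule (e.g. every 10 minutes) and fans out several
    asynchronous invocations marked as warm-up calls."""
    for _ in range(WARM_CONCURRENCY):
        lambda_client.invoke(
            FunctionName=TARGET_FUNCTION,
            InvocationType="Event",  # asynchronous: returns immediately
            Payload=json.dumps({"warmup": True}),
        )
```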
Beware that the warming scheduled invocations are charged like any other Lambda request. Since there is nothing to actually process, the function can terminate right after it is invoked, keeping the cost of the warm-up process low.
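On the receiving side, the function can short-circuit warm-up calls. In the sketch below (using the hypothetical `warmup` payload key from above), a very brief pause is added before returning: the assumption is that warm-up invocations finishing instantly might all be served by the same microVM, while a small overlap window encourages Lambda to keep several alive. The billed duration stays minimal either way.

```python
import time

def handler(event, context):
    # Warm-up calls carry a marker payload; return early so the billed
    # duration of the warm-up process stays minimal.
    if isinstance(event, dict) and event.get("warmup"):
        time.sleep(0.1)  # small overlap window so parallel warm-up calls
                         # land on separate microVMs
        return "warmed"

    # ... regular business logic goes here ...
    return {"status": "ok"}
```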
Another approach is using traffic prediction modeling. By anticipating how many requests are likely to be received in the next 30 minutes, for instance, it’s possible to adjust the number of scheduled warm-up invocations accordingly. This would also contribute to keeping warming costs down.
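A very simple stand-in for such a model is a moving average of recent traffic, read from CloudWatch. The sketch below (a rough heuristic, not a real forecasting model) sizes the next round of warm-up invocations from the average invocations per minute over the recent window:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

def predicted_concurrency(function_name, window_minutes=30):
    """Estimate how many microVMs to keep warm: the average number of
    invocations per minute over the recent window."""
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Lambda",
        MetricName="Invocations",
        Dimensions=[{"Name": "FunctionName", "Value": function_name}],
        StartTime=now - timedelta(minutes=window_minutes),
        EndTime=now,
        Period=60,          # one datapoint per minute
        Statistics=["Sum"],
    )
    sums = [point["Sum"] for point in stats["Datapoints"]]
    if not sums:
        return 1  # no recent traffic: keep a single microVM warm
    return max(1, round(sum(sums) / len(sums)))
```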
Read more about ways to solve cold starts.
There are open-source projects that can help with both of these approaches.