A Lambda Layer works much like a folder containing a library inside the function's code package. The difference is that, instead of having to package this library within the function code, it can be packaged and deployed separately. Lambda loads the Layer together with the function when it's invoked.
Isolating common features in Layers makes it easier to share the same codebase across multiple functions. This avoids having to replicate the same code in different places, which is a bad practice.
Before Layers, the usual way to avoid replicating code was to implement common features as separate, isolated functions. In some cases, that might still be the ideal solution. The problem is that it requires an additional function invocation, which sometimes needs to run in parallel, increasing coupling between functions as well as costs.
With Layers, it's possible to share common libraries without an additional invocation. Consider a simple validator, used across an application, that checks JSON strings against a structured template. Instead of replicating this code in multiple functions, the validator could be implemented as a Layer, and each function requiring it can have the Layer attached.
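As a rough sketch of how that might look with a Python runtime (the file and archive names below are hypothetical), the shared validation code lives in its own directory and is zipped separately from any function package:
# validator.py contains the shared JSON validation logic
mkdir -p python
cp validator.py python/
zip -r json-validator-layer.zip python
Every function that needs the validator then gets this Layer attached, instead of bundling validator.py in its own deployment package. Publishing the zip as a Layer and attaching it to functions is covered below.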
A large codebase per function can be a problem: it makes the application harder to maintain and test, and it slows down deployments. This is especially true for large dependencies, such as mathematical/statistical packages or video-processing libraries, and when the same code features are replicated across multiple functions.
To keep packages smaller, Layers allow isolating features that are commonly used by two or more functions. Instead of deploying the same code multiple times, the Layer is deployed only once. The Lambda platform will make sure it’s loaded with the functions that depend on it.
Keeping the function's package smaller also makes deployments faster. With large dependencies, the main function code may be only a couple of megabytes, while the entire package can reach up to 50 MB.
Since these large dependencies rarely change, packaging them as Layers and deploying only once will save you from having to deploy every time the main function changes.
Isolating features in Layers, each in a single place, also makes the code easier to manage. Whenever a feature needs to change, only one Layer has to be updated. Its consumers may also choose to upgrade when it best suits them.
AWS Lambda allows each function to have up to five Layers. There is no limit to how many times the same Layer can be reused across different functions.
Beware that the Lambda code package limits apply to Layers as well: the function must stay below 50 MB zipped (250 MB uncompressed), and these limits are counted including the size of its Layers.
With the AWS CLI, use the publish-layer-version command to publish a new Layer (or a new version of an existing one):
aws lambda publish-layer-version --layer-name hello-world-layer --description "Hello World Layer" --license-info "MIT" --content S3Bucket=lambda-layers-us-east-1-1234567890,S3Key=hello-world-layer.zip
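The command above assumes the Layer archive was already uploaded to S3; reusing the bucket and key from the example, that upload could look like this:
aws s3 cp hello-world-layer.zip s3://lambda-layers-us-east-1-1234567890/hello-world-layer.zip
For small archives, the CLI can also take the file directly, replacing the --content argument with --zip-file fileb://hello-world-layer.zip.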
To attach Layers to a function, use the AWS CLI's update-function-configuration command with the --layers argument, as demonstrated below. Pass as many Layer ARNs as needed (observing the limit of five in total).
aws lambda update-function-configuration --function-name hello-world --layers arn:aws:lambda:us-east-1:1234567890:layer:helloworld-layer:1 arn:aws:lambda:us-east-1:1234567890:layer:foobar-layer:2
Every time the update-function-configuration command is used with the --layers argument, Lambda replaces the entire list of current Layers with the new ones.
To remove all Layers from a function, provide an empty list:
aws lambda update-function-configuration --function-name hello-world --layers []
In case only one Layer needs to be removed, call update-function-configuration with the list of existing Layers except the one to be removed.
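A practical way to do that, reusing the hello-world function and the Layer ARNs from the examples above, is to list the currently attached Layers first and then re-apply the list without the unwanted one:
# List the ARNs of the Layers currently attached to the function
aws lambda get-function-configuration --function-name hello-world --query "Layers[*].Arn"
# Re-apply the configuration with every ARN except the one being removed
aws lambda update-function-configuration --function-name hello-world --layers arn:aws:lambda:us-east-1:1234567890:layer:helloworld-layer:1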
Layers are loaded into the /opt directory within the Lambda MicroVM. All runtimes supported natively by Lambda (Node.js, Python, Go, etc.) include paths within the /opt folder in their search paths, so the function's code can access libraries provided by Layers normally.
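For reference, this is roughly how the contents of a Layer archive map to paths under /opt for the Python and Node.js runtimes (the validator module name is the hypothetical one from the earlier example):
python/validator.py                      # extracted to /opt/python, importable with "import validator"
nodejs/node_modules/validator/index.js   # extracted to /opt/nodejs/node_modules, loadable with require('validator')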
Layers can be shared by functions inside an AWS account, within an organization, or even across multiple accounts. To access a Layer, the Lambda function will need permission to call GetLayerVersion on the Layer version attached to it.
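As a sketch of how that permission can be granted (the account ID and statement ID below are placeholders), the Layer's owner adds a statement to the Layer version's resource policy with add-layer-version-permission:
aws lambda add-layer-version-permission --layer-name hello-world-layer --version-number 1 --statement-id share-with-account --action lambda:GetLayerVersion --principal 111122223333
To share a Layer version with a whole AWS Organization, the same command takes --principal "*" combined with --organization-id.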