All-in-one serverless DevOps platform.
Full visibility across your entire stack.
Detect and resolve incidents in record time.
Conform to industry best practices.
Dashbird continuously monitors and analyses your serverless applications to ensure reliability, cost and performance optimisation, and alignment with the AWS Well-Architected Framework.
What defines a serverless system, its main characteristics, and how it operates
What are the types of serverless systems for computing, storage, queue processing, etc.
What are the challenges of serverless infrastructures and how to overcome them?
How systems can be made reliable and why reliability matters for cloud applications
What is a scalable system and how to handle increasing loads
Making systems easy to operate, manage and evolve
Learn the three basic concepts to build scalable and maintainable applications on serverless backends
The pros and cons of each architecture and insights to choose the best option for your projects
Battle-tested serverless patterns to make sure your cloud architecture is ready for production use
Strategies to compose functions into flexible, scalable and maintainable systems
Achieving loosely-coupled architectures with the asynchronous messaging pattern
Using message queues to manage task processing asynchronously
Asynchronous message and task processing with Pub/Sub
A software pattern to control workflows and state transitions in complex processes
The strategy and practical considerations about AWS physical infrastructure
How cloud resources are identified across the AWS stack
What makes up a Lambda function?
What is AWS Lambda and how it works
Suitable use cases and advantages of using AWS Lambda
How much AWS Lambda costs, pricing model structure and how to save money on Lambda workloads
Learn the main pros/cons of AWS Lambda, and how to solve the FaaS development challenges
Main aspects of the Lambda architecture that impact application development
Quick guide for Lambda applications in Node.js, Python, Ruby, Java, Go, C# / .NET
Different ways of invoking a Lambda function and integrating with other services
Building fault-tolerant serverless functions with AWS Lambda
Understand how Lambda scales and deals with concurrency
How to use Provisioned Concurrency to reduce function latency and improve overall performance
What are Lambda Layers and how to use them
What are cold starts, why they happen and what to do about them
Understand the Lambda retry mechanism and how functions should be designed
Managing AWS Lambda versions and aliases
How to best allocate resources and improve Lambda performance
What is DynamoDB, how it works and the main concepts of its data model
How much DynamoDB costs and its different pricing models
Query and Scan operations and how to access data on DynamoDB
Alternative indexing methods for flexible data access patterns
How to organize information and leverage DynamoDB features for advanced ways of accessing data
Different models for throughput capacity allocation and optimization in DynamoDB
Comparing NoSQL databases: DynamoDB and MongoDB
Comparing managed database services: DynamoDB vs. MongoDB Atlas
How does an API gateway work and what are some of the most common use cases
Learn the benefits and drawbacks of using API Gateway
Picking the right API Gateway service provider can be difficult
Types of possible errors in an AWS Lambda function and how to handle them
Best practices for what to log in an AWS Lambda function
How to log objects and classes from the Lambda application code
Program a proactive alerting system to stay on top of the serverless stack
Fast adoption of serverless is fueled by the ability to build products faster, scale effortlessly, and benefit from an efficient pricing model. Nevertheless, serverless comes with challenges over server-centric architectures that should be considered before adopting it. This article outlines the main challenges and concerns around building serverless applications.
Running on Lambda can have negative performance implications for multiple reasons:
Cold starts can be optimised in several ways. The first thing to pay attention to is whether the function is deployed inside a VPC, since running a function inside a VPC can add significant cold start time. On top of that, different programming languages have different cold start times: Python and Node.js are among the fastest, while .NET and Java are among the slowest. Some services, such as Dashbird, allow you to analyse cold starts by duration, count and frequency, giving you an opportunity to estimate the impact on your users.
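One common way to soften the impact of cold starts is to do expensive initialisation once, at module scope, so warm invocations can reuse it. Below is a minimal Python sketch of this pattern; the table name and event shape are hypothetical.

```python
# Minimal sketch: initialise expensive clients once per container,
# outside the handler, so only cold starts pay the setup cost.
# The table name and event shape are placeholders.
import boto3

dynamodb = boto3.resource("dynamodb")    # runs once, during the cold start
table = dynamodb.Table("example-table")  # hypothetical table name

def handler(event, context):
    # Warm invocations reuse the client above and only do per-request work.
    response = table.get_item(Key={"id": event["id"]})
    return response.get("Item")
```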
Response times are also influenced by the distance between the function and the user. If your user base is mostly in Europe, it's probably suboptimal to host the function in us-east-1. Functions can be deployed to multiple regions to reduce delays; alternatively, with Lambda@Edge you can deploy a function to Amazon CloudFront, which hosts it in multiple locations around the world automatically.
Oftentimes, user requests require actions and information from multiple databases and different microservices. Lambda-based services can exacerbate this problem when logic is distributed across a large number of small functions, many of which must execute to serve the request. A single request can end up hitting three or four Lambda functions, API Gateway, databases, and external services. If you're designing a new service, always keep the request path you are building in mind to keep latency to a minimum.
Lambda has a predefined list of memory and compute configurations to choose from. The available memory allocations range from 128MB to 3008MB (in 64MB increments), and the CPU allocated to a function is proportional to the amount of memory provisioned. Lower memory and CPU settings are cheaper but can hurt performance for some types of tasks.
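Memory is also the main knob you can turn after deployment. As an illustration, the sketch below uses boto3 to raise a function's memory allocation; the function name is a placeholder, and AWS credentials are assumed to be configured in the environment.

```python
# Minimal sketch: adjust a function's memory (and, implicitly, CPU)
# with boto3. "my-function" is a hypothetical name; AWS credentials
# are assumed to be configured in the environment.
import boto3

client = boto3.client("lambda")

client.update_function_configuration(
    FunctionName="my-function",  # placeholder
    MemorySize=1024,             # MB; CPU scales with memory
)
```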
Serverless infrastructures are heavily decoupled and can span hundreds of functions and other infrastructure services (such as databases, APIs, and queues), making detecting issues, debugging problems, and getting a sense of overall health a challenge for developer teams. AWS has services that do the heavy lifting of gathering logs, metrics, and traces, but it's not trivial to stay on top of that data and diagnose the root cause of issues in a reasonable time frame. Third-party tools such as Dashbird, the Serverless Framework, Thundra, and others provide solutions here, automating visualisation, alerting, and insights.
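Whichever tooling you use, one habit that makes that log data far easier to mine is emitting structured (JSON) log lines instead of free text. A minimal Python sketch, with illustrative field names rather than a required schema:

```python
# Minimal sketch: structured JSON logs are much easier for log tooling
# to filter and aggregate than free-form text. Field names are
# illustrative, not a required schema.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info(json.dumps({
        "message": "request received",
        "request_id": context.aws_request_id,
        "path": event.get("path"),  # set for API Gateway proxy events
    }))
    # ... actual request handling ...
```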
Since serverless applications have far more surface area than traditional applications, they also present a larger attack surface to malicious parties.
At the very least, as a developer you should be looking out for over-permissive IAM roles (grant each function only what it needs), untrusted input arriving through the many event sources, and vulnerable third-party dependencies.
Serverless is specific to cloud providers, and migrating from one cloud platform to another can be challenging. Even though cloud providers are becoming more and more alike and tools exist to manage multi-cloud workloads (see: the Serverless Framework or Terraform), it can still be an enormous challenge to migrate from one cloud provider to another.
On the other hand, the richness and maturity of services in modern cloud providers begs the question of how likely you are to change providers at all. Lock-in also exists in other technologies, and even though the concept is somewhat exaggerated with serverless, the likelihood that you'll need to switch providers in the near future is still very small.
Pound for pound, Lambda is more expensive than EC2. If you're building something that needs to run 24/7 in a highly parallel fashion with consistent workloads, an EC2 instance will likely be cheaper. The same rule applies to serverless databases, APIs, and queues: the more managed a service is, the more it costs compared to hosting databases and queues yourself in a container.
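To see why, here is a back-of-the-envelope sketch under stated assumptions: the prices are publicly listed us-east-1 on-demand rates that may have changed, and the workload is assumed to be fully utilised around the clock.

```python
# Back-of-the-envelope cost sketch. Prices are illustrative us-east-1
# on-demand rates and may have changed; check current AWS pricing.
GB_SECOND_PRICE = 0.0000166667      # USD per GB-second of Lambda compute
SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

# A 1 GB Lambda function that is busy 24/7 for a month:
lambda_cost = 1.0 * SECONDS_PER_MONTH * GB_SECOND_PRICE
print(f"Lambda, 1 GB, 24/7: ${lambda_cost:.2f}/month")  # roughly $43

# A comparable small EC2 instance (t3.small at ~$0.0208/hour):
ec2_cost = 0.0208 * 24 * 30
print(f"t3.small, 24/7:     ${ec2_cost:.2f}/month")     # roughly $15
```

The picture flips for spiky or low traffic, where Lambda's pay-per-use model means you pay nothing while idle.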
Arguments can be made, however, that the speed of development outweighs the extra cost of paying a premium on your infrastructure, and that with serverless you can ultimately deliver more value to your users.
The main scenarios where server-centric architectures win over serverless are commonly associated with architectural or cost questions. In cases where the system load is large and fairly constant, using containers can be less expensive. On top of that, it can be easier to fit some complex logic into one container and have everything in one place.
The future looks serverless, and the community is working hard to overcome the multitude of challenges we see with serverless at the moment. In the years to come, developers will also gain more experience and know-how in preventing issues and navigating modern cloud environments. If you're considering going serverless, a good strategy might be to start with a small solution and see how it works out for you.