Serverless has been gaining more and more traction over the last few years. The global serverless architecture market was estimated at $3.01 billion in 2017 and is expected to hit $21.99 billion by 2025. This growth is reflected in the increasing number of enterprises looking for ways to decouple their current monolithic architectures and migrate their stacks to serverless.
Read more about the popular enterprise use cases for AWS Lambda.
Looking back at the container revolution and how long it took for serious enterprises to start banking on it, we're likely still a couple of years away from mainstream serverless adoption. But because the rate of adoption has been quicker for serverless, we might see the migration pick up sooner than expected.
Our team at Dashbird has done extensive research about how companies use serverless, what common pain points they have, and how developers go about solving them.
In this article, we will cover the most common use cases and give an honest breakdown of what to expect when using AWS Lambda.
If you’d like to learn more about the practical side of how to migrate to serverless and make your infrastructure really work post-transition, make sure to check out our webinar with Ryan Jones from Serverless Guru.
REST APIs (with the Serverless Framework, Apex, etc.)
This seems to be the most popular use case for AWS Lambda, and no surprise there, since everyone needs some sort of API in their stack. It's also one of the strongest use cases for AWS Lambda because of its big benefits in scalability, ease of use, simplicity, and of course low cost. However, decoupling an existing system into smaller logical units shifts the difficulty from code to orchestration and introduces some new challenges.
- Small learning curve. The Serverless Framework does an amazing job of abstracting, setting up, and managing resources in AWS, allowing you to build and launch apps quickly. There's really nothing complicated to learn when starting out building serverless APIs – the framework's popularity speaks for itself. In addition, it's not just deployments that get easier: the atomic nature of functions means code is easier to write and less likely to contain bugs.
- Faster time to proof of concept. Serverless enables developers to focus the majority of their time on business logic and solving the problems unique to the service rather than generic operational problems. Overall, serverless seems to have a dramatic improvement in development speed due to this.
- Scales by default. Out of the gate, serverless APIs are able to support large workloads, the main limitation being the account-level concurrency limit set by AWS (which can be raised with a simple support request).
- Operational visibility. Having logic distributed over a larger number of Lambda functions increases the surface area for failures, while the parallel nature of event-driven architectures acts as a multiplier of complexity. Moreover, event sources such as API Gateway, databases (Aurora, DynamoDB), notification systems, and queues add even more possible failures to the list. This is the reality of distributed architectures, but it's easily improvable with observability platforms such as Dashbird or AWS's own CloudWatch.
- Using non-serverless databases at scale. Parallelism can cause non-serverless databases (i.e., ones with limited connection counts) to run into scalability issues when Lambda functions use up all the connections. One remedy is connection pooling across subsequent requests, which the reuse of underlying Lambda containers makes possible.
- Latency in decoupled microservice architectures. This isn't a serverless issue per se, but rather a side effect of microservices requesting information from one another, producing chained requests that increase latency. It's avoidable by designing the microservices in a way that allows parallel querying and avoids dependencies for requesting data.
- Scaling Lambda functions in a VPC. Functions in a VPC that need an internet connection are limited by the IP addresses available in the subnet, meaning scaling can become an issue there. Make sure you allocate enough IP addresses and keep this in mind when designing the application.
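To make the API and connection-reuse points concrete, here's a minimal sketch of an API Gateway (Lambda proxy integration) handler in Python. The `get_connection` helper and its cached resource are hypothetical stand-ins for a real database client; the pattern of initializing it at module scope so warm containers reuse it is the point:

```python
import json

# Created once per container, then reused across warm invocations.
# In a real service this would hold a database connection or pool
# (hypothetical stand-in; swap in e.g. your own psycopg2/boto3 client).
_cache = {}

def get_connection():
    # Lazily initialize the shared resource so cold starts stay cheap.
    if "conn" not in _cache:
        _cache["conn"] = {"connected": True}  # placeholder for a real connection
    return _cache["conn"]

def handler(event, context):
    # API Gateway's proxy integration delivers the HTTP request in `event`.
    conn = get_connection()
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the connection lives outside the handler, subsequent invocations on the same container skip the setup cost – which is also the connection-pooling remedy mentioned above.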
Processing files uploaded to S3

An amazing and very useful use case for AWS Lambda is to pair it up with S3 storage. After a user uploads a file to S3, a Lambda function is triggered to process the data – image optimization, video transcoding, or whatever you may need. It's then up to you to handle the data further: store it in a database or return it to S3.
- Simplicity. It's easier to set up a Lambda function than it is to set up an EC2 instance. You're essentially using dedicated services for each need: S3, which is built to store files, holds the data, while serverless compute spins up to process the files, returns them to S3, and shuts down.
- 900-second timeout limit. With large files, this processing can take some time, and a Lambda function can't run for more than 900 seconds (15 minutes).
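A minimal sketch of such a trigger, assuming an image-optimization step – the `optimize` function and bucket layout below are placeholders, and the real S3 calls are left as comments since they need AWS credentials:

```python
import urllib.parse

def optimize(data: bytes) -> bytes:
    # Placeholder for the real work (image optimization, transcoding, ...).
    return data

def handler(event, context):
    # S3 invokes the function with a list of records, one per uploaded object.
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in the event (spaces become '+').
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # In a real function you'd fetch and write back with boto3, e.g.:
        #   s3 = boto3.client("s3")
        #   body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        #   s3.put_object(Bucket=bucket, Key="processed/" + key,
        #                 Body=optimize(body))
        results.append((bucket, key))
    return results
```

Note the `unquote_plus` step: forgetting to decode the object key is a common source of "NoSuchKey" errors in S3-triggered functions.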
Scheduled tasks (CRON jobs)
Using Lambda functions as CRON jobs is another excellent use case. In most cases, it's relatively easy to move an existing CRON server's tasks to AWS Lambda.
- Cost. Not having to add a separate EC2 instance can decrease the cloud bill.
- Simplicity. It's easier to set up a Lambda function than it is to set up an EC2 instance, especially because you don't have to worry about uptime and other basic DevOps questions. AWS Lambda also scales better for these cases if the workload happens to be large.
- 900-second timeout limit. Lambda functions have a maximum execution time of 900 seconds (15 minutes). If a CRON job takes longer than that, you'll have to ensure that the task gets continued somehow.
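As a sketch, a scheduled function is just a plain handler wired to a schedule rule in your infrastructure config (e.g. a CloudWatch Events / EventBridge rule, or `schedule: cron(0 3 * * ? *)` in the Serverless Framework). The nightly-cleanup job below is a hypothetical example; the `event["time"]` field is what the schedule event actually delivers:

```python
import datetime

def handler(event, context):
    # The schedule event carries the scheduled invocation time as an
    # ISO-8601 string in event["time"], e.g. "2024-01-31T03:00:00Z".
    now = datetime.datetime.fromisoformat(event["time"].replace("Z", "+00:00"))
    cutoff = now - datetime.timedelta(days=30)
    # Hypothetical nightly cleanup: delete records older than `cutoff`.
    # Keep each run well under the 900-second limit; if the job can grow,
    # page through the work and re-invoke (or use Step Functions).
    return {"cutoff": cutoff.isoformat()}
```

Deriving the cutoff from the event's scheduled time (rather than `datetime.now()`) keeps retried invocations deterministic.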
Stream processing

Processing a potentially infinite stream of messages is something we've been doing at Dashbird for a while now without much headache. The reasons Lambda functions are good for stream processing are the scalability aspect and the pay-as-you-go pricing model. Keep in mind that compute time is not cheaper than EC2, for instance, but if the events are infrequent and sent in bursts, you'll end up saving money.
- Development speed. You’ll only have to focus on how one shard is processed and everything else is abstracted away – this makes for a faster time to market.
- Automatic scalability.
- Cost. Compute isn't necessarily cheaper with AWS Lambda, so a constant and highly parallel flow of executions might end up costing the same as or more than EC2. Optimize for performance and cost by analyzing invocation-level data with Dashbird.
- Difficult to monitor. Another area where Dashbird gives a good overview of what's going on. Latency metrics, error reporting, and cost breakdowns help improve various aspects of your application.
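A sketch of the "focus on one shard" point above, using the shape of a Kinesis batch event – Lambda runs one handler instance per shard and scales the rest for you. The `process` function is a placeholder for your business logic:

```python
import base64
import json

def process(message: dict) -> dict:
    # Placeholder business logic for a single message.
    return {"seen": message["id"]}

def handler(event, context):
    # Kinesis delivers records base64-encoded, in order within a shard.
    out = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        out.append(process(json.loads(payload)))
    # Returning normally checkpoints the whole batch; raising an
    # exception makes Lambda retry the batch, so make processing idempotent.
    return out
```

The idempotency note matters in practice: a single poison message can otherwise block a shard through endless batch retries.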
The biggest benefit for anyone using serverless is the dramatic increase in development speed. Sure, it can be more cost-effective and is scalable by default, but for most tech companies, development time is actually the most crucial metric, which is why high levels of abstraction really pay off. In our experience (as a serverless operations company), observability and tooling around serverless are the biggest pain points for new AWS Lambda users. Another thing to keep an eye on is the design patterns you use.
Event-based architectures can be hard to debug: failures are harder to find, and latency can climb quickly. This tech stack also tends to produce lots of invocations, which can get expensive. That's what we want to solve: giving you the overview and insight you need to make changes that can have a huge impact on your overall system health and performance.
Benefits
- Speed of development
- Automatic scalability
Things to look out for
- Operational visibility
- Misuse and anti-patterns
This post is loosely based on the CNCF's (Cloud Native Computing Foundation) classification of the top 10 serverless use cases. We listed the use cases that are likely to encounter the biggest observability problems.