Architectural Pattern for Highly Scalable Serverless APIs

The most common API architecture for serverless backends is not necessarily the most scalable and resilient option. Many developers take for granted that an AWS Lambda function processing external requests requires an API Gateway endpoint connected directly to it.

How to decouple Lambda and API Gateway

One of the best options for decoupling a Lambda function from an API Gateway endpoint is an SQS queue. Requests come into API Gateway and are sent as messages to SQS. Lambda polls the queue and processes messages regularly, in batches.
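The consumer side of this flow can be sketched as an SQS-triggered Lambda handler. This is a minimal illustration, not a reference implementation: `process` is a hypothetical placeholder for your business logic, and the partial-batch response at the end assumes the event source mapping is configured with `ReportBatchItemFailures`, so that only failed messages return to the queue.

```python
import json


def handler(event, context):
    """Lambda handler invoked by the SQS event source mapping.

    Processes records in batches and reports per-message failures,
    so successfully handled messages are not reprocessed.
    """
    failures = []
    for record in event.get("Records", []):
        try:
            payload = json.loads(record["body"])
            process(payload)
        except Exception:
            # Report only this message as failed; SQS will retry it
            failures.append({"itemIdentifier": record["messageId"]})
    # Requires "ReportBatchItemFailures" on the event source mapping
    return {"batchItemFailures": failures}


def process(payload):
    # Hypothetical business logic, e.g. writing the order to DynamoDB
    if "order_id" not in payload:
        raise ValueError("missing order_id")
```

With this shape, a batch of ten messages where one fails leads to nine deletions and a single retry, instead of the whole batch returning to the queue.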

When should we use an SQS queue with API Gateway?

Anytime we’re receiving messages from clients that don’t have to be processed immediately. The client may need a response, but it may only need confirmation that “your message was received and will be processed shortly”. Usually, these are write requests to API endpoints that can accept an eventually consistent model.
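That acknowledgment typically takes the form of an HTTP 202 Accepted response. As a sketch, a small helper in API Gateway's proxy-response format might look like this (the helper name, body fields, and the idea of returning a correlation id are illustrative assumptions, not a prescribed contract):

```python
import json
import uuid


def accepted_response(request_id=None):
    """Build a 202 Accepted response in API Gateway proxy format.

    Returns a correlation id the client can use later to check
    the outcome of its asynchronously processed request.
    """
    rid = request_id or str(uuid.uuid4())
    return {
        "statusCode": 202,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "message": "your message was received and will be processed shortly",
            "requestId": rid,
        }),
    }
```

Returning 202 instead of 200 makes the eventual-consistency contract explicit to API clients.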

What are the advantages of using SQS with API Gateway and Lambda?

Of the three services we are discussing, SQS is the most scalable and Lambda the least. Failures due to scalability limits are more likely to happen in downstream resources, such as Lambda functions and DynamoDB tables. Pairing SQS with API Gateway makes our endpoints more scalable and resilient.

Putting a message on an SQS queue is also usually faster than running our code in Lambda synchronously with the API request. Often the client doesn’t have to wait for the full processing time.
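Where the enqueue does happen from code (rather than through a direct API Gateway-to-SQS service integration), it is a single `send_message` call. In this sketch, the queue URL and the parameter-building helper are illustrative; the builder is kept separate purely so it can be exercised without AWS credentials:

```python
import json


def build_enqueue_params(queue_url, payload):
    """Build the keyword arguments for sqs.send_message.

    Kept as a pure function so it can be unit-tested without
    touching AWS.
    """
    return {
        "QueueUrl": queue_url,
        "MessageBody": json.dumps(payload),
    }


def enqueue(queue_url, payload):
    """Send one message to the queue; needs AWS credentials to run."""
    import boto3  # AWS SDK; assumed available in the Lambda runtime

    sqs = boto3.client("sqs")
    return sqs.send_message(**build_enqueue_params(queue_url, payload))
```

The enqueue returns as soon as SQS durably accepts the message, which is what lets the API respond quickly regardless of how long downstream processing takes.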

By having SQS absorb the peaks in API traffic, we can allocate a lower concurrency threshold to the particular Lambda responsible for processing the messages. This leaves more of the account-level Lambda concurrency quota free for other functions in our system that may require higher throughput.
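Capping the consumer's concurrency is one API call; a hedged sketch, assuming a default regional quota of 1,000 concurrent executions (the actual quota varies by account and can be raised), plus a small pure helper showing how reserved slices eat into what is left for unreserved functions:

```python
# Default regional quota for concurrent executions; accounts can
# request increases, so treat this as an illustrative assumption.
ACCOUNT_CONCURRENCY_QUOTA = 1000


def remaining_quota(reserved_allocations):
    """Concurrency left for unreserved functions after carving out
    reserved slices for specific Lambdas."""
    return ACCOUNT_CONCURRENCY_QUOTA - sum(reserved_allocations)


def reserve_concurrency(function_name, reserved):
    """Cap one function's concurrency so queue bursts cannot starve
    the rest of the account; needs AWS credentials to run."""
    import boto3  # AWS SDK

    lam = boto3.client("lambda")
    return lam.put_function_concurrency(
        FunctionName=function_name,
        ReservedConcurrentExecutions=reserved,
    )
```

Reserving, say, 10 for the queue consumer leaves the remaining 990 for latency-sensitive functions while SQS buffers any backlog.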

Implications for application monitoring

Monitoring API endpoints that are decoupled from downstream compute resources, as described above, poses additional challenges if we don’t have a serverless-first monitoring tool such as Dashbird.

First, messages can take an unpredictable amount of time to be processed. Our system might do well with an eventually consistent model, but it may still be unacceptable for messages to take several hours to be processed, for example.

If demand stays high for a considerable amount of time, and depending on how restrictive the Lambda concurrency limit is, we might end up losing messages once they exceed the queue’s retention period, or see the backlog grow until the queue can no longer accept new requests from the API.
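A simple guard against both failure modes is alerting on the SQS CloudWatch metrics `ApproximateNumberOfMessagesVisible` (backlog depth) and `ApproximateAgeOfOldestMessage` (staleness). The thresholds below are illustrative assumptions, and the decision logic is factored into a pure function rather than a CloudWatch alarm definition:

```python
def backlog_alert(approx_visible, oldest_age_seconds,
                  max_depth=10_000, max_age=3_600):
    """Return the reasons a queue backlog warrants an alert.

    approx_visible      -- ApproximateNumberOfMessagesVisible
    oldest_age_seconds  -- ApproximateAgeOfOldestMessage
    Thresholds are example values; tune them to your SLOs.
    """
    reasons = []
    if approx_visible > max_depth:
        reasons.append("queue depth")
    if oldest_age_seconds > max_age:
        reasons.append("message age")
    return reasons
```

Depth catches a queue filling faster than the capped Lambda can drain it; message age catches the subtler case where throughput is fine but individual messages sit unprocessed too long.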

Debugging issues also becomes harder, since the request that triggered a potential issue and the resulting error report are now separated by a queue.

Systems running in production that need the full scalability a cloud provider such as AWS offers must rely on specialized monitoring tools. Dashbird can monitor not only Lambdas, but also API Gateways and SQS queues, all together in one place. It provides alerts and notifications when things go wrong, and can also catch potential failures in advance, such as messages piling up in a queue or unusual API latency.

Dashbird provides a free tier and also a trial of the entire feature-set. You can start right now with no credit card.
