TL;DR: yes, API Gateway can replace what a Load Balancer would usually provide, with a simpler interface and many more features on top of it. The downside is that it doesn’t come cheap.
Load balancers have long been one of the most common ways to expose a backend API to the public, or even to an internal/private audience. API Gateways seem to provide the same functionality: mapping and connecting HTTP requests to a backend service. So, are they the same, or are there differences? Can API Gateway actually balance load? Which one is best for serverless architectures?
For the purposes of this article, we will look into AWS offerings for API Gateway (API GW) and Application Load Balancer (ALB).
An ALB is a central interface that connects clients to backend services over HTTP, enabling better scalability. Each Load Balancer can offer multiple HTTP endpoints pointing to one or more infrastructure resources.
The client requests an endpoint /api/service/xyz and the load balancer is responsible for distributing that request to a healthy backend resource (e.g. an EC2 server or a Lambda function). It communicates with the backend service, waits for the results, and packages an HTTP response to the client.
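To illustrate, here is a minimal sketch of a Lambda function serving as an ALB target. The event fields and response shape follow the ALB–Lambda integration contract; the function name and logic are our own assumptions:

```python
# Minimal sketch of a Lambda function registered as an ALB target.
# The event shape (httpMethod, path, headers, body) is what the ALB
# sends to Lambda targets; the routing logic here is illustrative.
import json


def handler(event, context):
    path = event.get("path", "/")            # e.g. "/api/service/xyz"
    method = event.get("httpMethod", "GET")

    # ALB expects this exact response shape from a Lambda target.
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"path": path, "method": method}),
    }
```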
As the name suggests, one of the main purposes of using an ALB is to smooth out and balance demand across a set of resources.
Traditionally, load balancers have been used to distribute requests in a horizontally scaled infrastructure cluster, with systems replicated in multiple servers, where a single server can’t have sufficient power to handle all the demand.
Load Balancers also serve the purpose of decoupling clients and services, which is a good practice from a cloud architecture perspective.
API Gateway can manage and balance out network traffic just like a Load Balancer, only in a different way. Instead of distributing requests evenly across a set of backend resources (e.g. a cluster of servers), an API Gateway can be configured to direct requests to specific resources based on the endpoints being requested.
It plays an important role in microservices architectures, for example. Multiple services can be connected to the Gateway and mapped to particular HTTP endpoint representations. The Gateway is responsible for routing each request, on-demand, to the appropriate backend service.
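As a rough sketch of this endpoint-to-service mapping, here is how routes of an HTTP API could be wired to separate backend Lambdas with boto3. The API name and function ARN are placeholder assumptions:

```python
# Sketch: mapping HTTP endpoints to separate backend Lambdas with the
# API Gateway v2 (HTTP API) boto3 client. Names and ARNs are placeholders.
import boto3

apigw = boto3.client("apigatewayv2")

api = apigw.create_api(Name="my-microservices-api", ProtocolType="HTTP")

# One integration per backend service.
users_integration = apigw.create_integration(
    ApiId=api["ApiId"],
    IntegrationType="AWS_PROXY",
    IntegrationUri="arn:aws:lambda:us-east-1:123456789012:function:users-service",
    PayloadFormatVersion="2.0",
)

# Each route key maps an endpoint to the appropriate service.
# (The Lambda also needs resource-based permission so API Gateway can invoke it.)
apigw.create_route(
    ApiId=api["ApiId"],
    RouteKey="GET /users/{id}",
    Target=f"integrations/{users_integration['IntegrationId']}",
)
```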
When integrated with AWS Lambda, the API Gateway handles the network scaling in a seamless way. By default, API Gateway can handle up to 10,000 requests per second. Lambda will scale to match the demand of invocations coming from the API clients.
In fact, developers should not worry about configuring scalability parameters, since neither API Gateway nor Lambda exposes any: both scale according to their own internal rules. Depending on the use case, when large spikes in demand are expected, it may be necessary to request a service quota increase to make sure AWS will not throttle client requests.
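For illustration, here is a sketch of how the relevant API Gateway quotas could be inspected, and an increase requested, through the Service Quotas API. The name filter below is an assumption, since exact quota codes vary by account and service:

```python
# Sketch: inspecting the account-level API Gateway throttle quota with
# the Service Quotas API. The string match on the quota name is an
# assumption; verify the quota code before requesting an increase.
import boto3

quotas = boto3.client("service-quotas")

for quota in quotas.list_service_quotas(ServiceCode="apigateway")["Quotas"]:
    if "throttle" in quota["QuotaName"].lower():
        print(quota["QuotaName"], quota["Value"], quota["QuotaCode"])
        # Request more headroom ahead of an expected traffic spike:
        # quotas.request_service_quota_increase(
        #     ServiceCode="apigateway",
        #     QuotaCode=quota["QuotaCode"],
        #     DesiredValue=20000.0,
        # )
```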
The request routing based on endpoint rules can also be supported by ALB, especially when paired with Lambda functions.
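For example, a path-based forwarding rule on an ALB listener might be sketched with boto3 as follows (the ARNs are placeholders):

```python
# Sketch: path-based routing on an ALB listener. Listener and target
# group ARNs are placeholder assumptions.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/users/*"]}],
    # Forward matching requests to a target group (which can contain Lambdas).
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/users/...",
    }],
)
```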
Nevertheless, API Gateway offers many additional features missing in ALB. For example, it handles authentication and authorization, API token issuance and management, and can even generate SDKs based on the API structure. It also integrates with IAM (Identity and Access Management), simplifying access control over the underlying resources.
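As a rough sketch of the authorization piece, a minimal Lambda token authorizer could look like this; the token check is a placeholder for real JWT validation or a token lookup:

```python
# Sketch: a minimal Lambda (TOKEN) authorizer that API Gateway calls
# before forwarding a request. The hard-coded token comparison is a
# placeholder assumption, not a real validation strategy.
def authorizer_handler(event, context):
    token = event.get("authorizationToken", "")
    effect = "Allow" if token == "expected-secret-token" else "Deny"

    # API Gateway expects an IAM policy document in the response.
    return {
        "principalId": "user",
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```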
Although we usually want to avoid throttling client requests, in many cases it is important to have throttling rules in place. They can help prevent abuse or restrict access depending on billing plans, for example. API Gateway supports throttling out of the box, which can save time and ensure compliance with business requirements.
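For instance, a usage plan with throttling limits for a REST API could be sketched with boto3 like this (the rate limits, API ID, and stage name are illustrative assumptions):

```python
# Sketch: out-of-the-box throttling via an API Gateway usage plan
# (REST APIs). All limits and identifiers below are placeholders.
import boto3

apigw = boto3.client("apigateway")

apigw.create_usage_plan(
    name="basic-tier",
    throttle={"rateLimit": 100.0, "burstLimit": 200},  # steady rate / burst
    quota={"limit": 10000, "period": "MONTH"},         # monthly request cap
    apiStages=[{"apiId": "abc123", "stage": "prod"}],  # placeholder API/stage
)
```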
If you don’t need the features provided by API Gateway, consider using an ALB instead, since it can be a lot cheaper in many cases. A direct comparison is difficult, though: API Gateway pricing depends on the number of requests, while ALB pricing depends on several factors, such as running hours and the number of new and active connections. Usually, APIs with light traffic are more cost-effective on API Gateway, while high-traffic ones can find cost savings by adopting an ALB instead.
Technically speaking, the main limitation of API Gateway is its 29-second timeout. If a request takes longer than that to be processed by the backend resource (e.g. a Lambda function), the API will respond early to the client with an error. And although there is a limit on the number of requests per second, as mentioned above, it can be increased according to demand. A load balancer, though, can scale to hundreds of thousands, or even millions, of requests per second without any issues.
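One common workaround for the timeout is to accept the request right away and run the heavy lifting asynchronously. Here is a sketch, with a hypothetical worker function name:

```python
# Sketch: sidestepping the 29-second gateway timeout by accepting the
# request immediately and handing long-running work to a worker Lambda
# invoked asynchronously. "long-running-worker" is a placeholder name.
import json
import boto3

lambda_client = boto3.client("lambda")


def handler(event, context):
    # Fire-and-forget: InvocationType="Event" returns without waiting.
    lambda_client.invoke(
        FunctionName="long-running-worker",
        InvocationType="Event",
        Payload=json.dumps({"job": event.get("body")}),
    )
    # Respond well within the gateway's window; the client can poll for results.
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}
```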
For that reason, an ALB is more suitable for low-cost/undifferentiated applications, long-running processes, and/or ultra-high-throughput applications. API Gateway is suitable for small teams that want to reduce time-to-market, as well as sophisticated use cases that require complex security measures and access control logic.
Both API Gateway and Application Load Balancer can be very useful. The latter is simpler and cheaper, which makes it a good option for internal APIs connecting microservices based on AWS Lambda, for example. API Gateway is especially suitable for APIs that require fine-grained access control and other features not available in ALB.
In case you would like to learn more about cloud architecture and serverless, you might want to check this Cloud Knowledge Base.
Further reading:
How we built a Serverless Stonks checker API for Wall Street Bets
AWS API Gateway vs Application Load Balancer
Using API Gateway to run Database Queries
Using API Gateway to Decouple and Scale Serverless Architectures