Microservices and Serverless: Winning Strategies and Challenges

The concept of a microservice perfectly fits the structure of a serverless function, which easily enables deployment and runtime isolation for different services. On the storage side, services such as DynamoDB also make it easier to have independent databases for each microservice and scale them independently (when required or desirable).

Before we dive into details, please consider whether the benefits of Microservices clearly outweigh their disadvantages for your particular project and team. Please, please don’t pick them because “it’s the trend”. More often than you may think, a Monolith is better and can be a “majestic” choice.

Advantages of Microservices in Serverless

Selective scalability and concurrency levels

Serverless functions make it easy to manage concurrency and scalability. In a microservices architecture, we take maximum advantage of this. Each (micro)service can have its own concurrency/scalability settings according to its needs.

This is valuable from several perspectives: it helps mitigate DDoS attacks, reduces the financial risk of cloud bills spiraling out of control, allows better allocation of resources, and so on.
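
As a minimal sketch of how this looks in practice, the boto3 snippet below reserves different concurrency levels for two Lambda functions. The function names and numbers are made up for illustration:

```python
import boto3

lambda_client = boto3.client("lambda")

# Cap an internal, low-priority service at a small slice of account concurrency.
lambda_client.put_function_concurrency(
    FunctionName="reports-service",      # hypothetical function name
    ReservedConcurrentExecutions=5,
)

# Give a customer-facing service a much larger share.
lambda_client.put_function_concurrency(
    FunctionName="checkout-service",     # hypothetical function name
    ReservedConcurrentExecutions=200,
)
```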

Fine-grained resource allocation

With selective scalability and concurrency come the benefits of detailed control over the resource allocation priorities.

In Lambda functions, each (micro)service can have a different level of memory allocation, according to its needs and purposes. Customer-facing services can have more memory allocated, since that contributes to faster execution times. Internal services that are not latency-sensitive can be deployed with lower, cost-optimized memory settings.

The same applies to storage mechanisms. A DynamoDB table or an Aurora Serverless database can have different capacity allocations according to the needs of the particular (micro)service it serves.
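
Sticking with hypothetical service names, the boto3 sketch below sets per-service memory for two functions and per-table capacity for one service’s DynamoDB table (the table is assumed to use provisioned capacity mode):

```python
import boto3

lambda_client = boto3.client("lambda")
dynamodb = boto3.client("dynamodb")

# Latency-sensitive, customer-facing service: more memory (which also buys CPU).
lambda_client.update_function_configuration(
    FunctionName="checkout-service",   # hypothetical function name
    MemorySize=1024,                   # in MB
)

# Internal, latency-tolerant service: keep memory (and cost) low.
lambda_client.update_function_configuration(
    FunctionName="reports-service",
    MemorySize=256,
)

# Each microservice's table can get its own capacity settings as well.
dynamodb.update_table(
    TableName="checkout-orders",       # hypothetical table in provisioned mode
    ProvisionedThroughput={
        "ReadCapacityUnits": 100,
        "WriteCapacityUnits": 50,
    },
)
```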

Loose coupling

Loose coupling is a property of Microservices in general rather than of Serverless specifically: splitting a system into services makes it easier to decouple components that serve different purposes.

Multi-runtime environments

The ease of configuring, deploying, and executing serverless functions opens up possibilities for systems that are based on multiple runtimes.

While Node.js (JavaScript runtime) is one of the most popular technologies for backend web applications, it’s unlikely to be the best tool for every single task.

For data-intensive tasks, predictive analytics, and any sort of machine learning, it’s likely that Python will be your programming language of choice. Dedicated platforms – e.g. SageMaker – are better suited for very large projects, but you can still run some of the most popular data and AI libraries – scikit-learn, spaCy, NumPy, and pandas – in a serverless function.

With a serverless infrastructure, there is no additional operational effort in picking Node.js for your regular backend APIs and Python for your data-intensive workloads. Obviously, this adds some effort in terms of code maintenance and the range of skills your team must cover.
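
To make this concrete, here is a minimal AWS CDK (Python) sketch declaring one Node.js function and one Python function side by side in the same stack; the handler names and asset directories are illustrative:

```python
from aws_cdk import Stack, aws_lambda as _lambda
from constructs import Construct

class MultiRuntimeStack(Stack):
    """One stack, two runtimes: Node.js for the API, Python for analytics."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Regular backend API, written in JavaScript.
        _lambda.Function(
            self, "ApiHandler",
            runtime=_lambda.Runtime.NODEJS_18_X,
            handler="index.handler",
            code=_lambda.Code.from_asset("services/api"),        # illustrative path
        )

        # Data-intensive workload, written in Python.
        _lambda.Function(
            self, "AnalyticsHandler",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="app.handler",
            code=_lambda.Code.from_asset("services/analytics"),  # illustrative path
        )
```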

Independence for development teams

This one is also more associated with Microservices than with Serverless itself. Different developers or teams can work on, fix bugs in, and extend features of (micro)services without stepping on each other’s toes.

Tools such as AWS SAM also give teams more independence on the operations side. AWS CDK constructs enable even greater independence without sacrificing higher-level quality and operational standards.
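
One way such a construct might look: a small, shared CDK (Python) class that bakes organization-wide defaults (tracing, timeouts, log retention) into every function a team ships. The specific defaults here are illustrative:

```python
from aws_cdk import Duration, aws_lambda as _lambda, aws_logs as logs
from constructs import Construct

class TeamFunction(Construct):
    """A shared construct baking in organization-wide defaults, so teams
    can ship functions independently while staying consistent."""

    def __init__(self, scope: Construct, construct_id: str, *,
                 code_path: str, handler: str) -> None:
        super().__init__(scope, construct_id)

        self.function = _lambda.Function(
            self, "Fn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler=handler,
            code=_lambda.Code.from_asset(code_path),
            timeout=Duration.seconds(10),                # sane default timeout
            tracing=_lambda.Tracing.ACTIVE,              # X-Ray tracing everywhere
            log_retention=logs.RetentionDays.ONE_MONTH,  # no unbounded log costs
        )
```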

Disadvantages of Microservices in Serverless

Harder to monitor and debug

Among the many challenges introduced by Serverless, monitoring and debugging are some of the most problematic. Microservices introduce additional challenges, since compute and storage systems are scattered across many different functions and databases, not to mention other services for queueing, caching, etc.

There are professional platforms that solve all of these issues, though. Teams running serverless in production should certainly consider whether they’re worth investing in.

Monitoring and observability tools like Dashbird give you a quick and easy understanding of what’s going on in your application. Whenever something breaks, or is about to break, they send an immediate alert via Slack, email, webhooks, or SNS to your configured endpoint, so that you can jump in and fix it before it affects your system’s performance and, ultimately, your end customers.

For example, we could set Dashbird to raise an incident whenever memory consumption is above 90%, on average, over a period of 15 minutes.
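
Dashbird’s alerting is configured through its UI; for reference, a roughly equivalent rule expressed as a plain CloudWatch alarm might look like the sketch below. This assumes the Lambda Insights extension is enabled on the function (standard Lambda metrics don’t report memory utilization), and the function name is illustrative:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Raise an alarm when average memory utilization exceeds 90% over 15 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="checkout-service-memory-high",
    Namespace="LambdaInsights",        # published by the Lambda Insights extension
    MetricName="memory_utilization",   # percentage of allocated memory in use
    Dimensions=[{"Name": "function_name", "Value": "checkout-service"}],
    Statistic="Average",
    Period=900,                        # 15 minutes, in seconds
    EvaluationPeriods=1,
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
)
```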

Dashbird will also give you Well-Architected insights based on your infrastructure data, allowing you to improve the architecture, making it more reliable, cost-efficient, less likely to break, and able to take on added complexity over time.

[Image: Dashbird AWS insights]

Possible to experience more cold starts

Cold starts happen when the FaaS platform (e.g. Lambda) needs to spin up a new virtual machine to run your function code. They may be problematic if your function’s workload is latency-sensitive, since a cold start adds anywhere from a few hundred milliseconds up to a few seconds to the total startup time.

After a request is finished, FaaS platforms will usually keep the microVM idle for some time, waiting for the next request, before shutting it down after 10-60 minutes (yes, it varies a lot). The result is: the more frequently your function is executed, the more likely it is that a microVM will be up and running for incoming requests (avoiding cold starts).

When we scatter our application across hundreds or thousands of (micro)services, we may also spread invocations out in time per service, leading to a lower invocation frequency per function. Notice the “may”: depending on the business logic and how your system behaves, this negative impact might well be small or negligible.
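
If a low-traffic function does turn out to be latency-sensitive, one common mitigation is provisioned concurrency, which keeps a number of execution environments warm. A minimal boto3 sketch, with an illustrative function name and alias:

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 5 execution environments initialized for the "live" alias, so requests
# routed to it skip cold starts entirely (at the cost of paying for idle time).
lambda_client.put_provisioned_concurrency_config(
    FunctionName="checkout-service",   # hypothetical function name
    Qualifier="live",                  # must target an alias or published version
    ProvisionedConcurrentExecutions=5,
)
```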

Other disadvantages

There are other disadvantages inherent to the Microservices concept itself, rather than to Serverless. Nonetheless, every team adopting this type of architecture should be careful to mitigate the potential risks and costs:

  • Not trivial to decide on service boundaries, which may lead to architectural issues
  • More extensive attack surfaces to secure
  • Services orchestration overhead
  • Syncing compute and storage (when required) is not easy to do in a performant and scalable way

Microservices challenges and best practices in Serverless

How small or big a serverless microservice should be

It’s relatively easy to confuse the concept of a “function-as-a-service” with a function statement (or, more generally speaking, a subroutine) in your programming language of choice.

We are entering an area with no way to draw a perfect line, but anecdotal experience shows that going for very small serverless functions is not a good idea.

One thing you should keep in mind is that, when deciding to spin out a (micro)service into a separate function, you will have to deal with the Serverless trilemma. Whenever possible, there are many benefits to keeping correlated logic in a single function.

The decision-making process should take into account the advantages of having a separate microservice. “If I spin out this microservice…

  • …will it enable different teams to work independently?
  • …can I benefit from a fine-grained resource allocation or selective scalability capability?

If not, it’s worth considering keeping this service bundled with another one that requires similar resources, is contextually linked, and performs correlated workloads.

Loosely coupling your architecture

There are many ways to orchestrate microservices by composing serverless functions.

When synchronous communication is required, good ol’ direct invocations (i.e. the AWS Lambda RequestResponse invocation method) will do the job, but they lead to a highly coupled architecture. Better alternatives are Lambda Layers or an HTTP API, which make it possible to later modify or migrate services without disrupting clients.

For workloads that accept an asynchronous communication model, we have several possibilities, such as queues (SQS), topic notifications (SNS), EventBridge, or even DynamoDB Streams.
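
The boto3 sketch below contrasts the two styles: a blocking RequestResponse invocation versus dropping a message on an SQS queue. The function name and queue URL are illustrative:

```python
import json
import boto3

lambda_client = boto3.client("lambda")
sqs = boto3.client("sqs")

payload = {"order_id": "123"}

# Synchronous: the caller blocks until billing-service responds (tight coupling).
response = lambda_client.invoke(
    FunctionName="billing-service",    # hypothetical function name
    InvocationType="RequestResponse",
    Payload=json.dumps(payload),
)
result = json.loads(response["Payload"].read())

# Asynchronous: drop a message on a queue and move on (loose coupling).
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/billing-queue",
    MessageBody=json.dumps(payload),
)
```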

Isolating implementation details across components

Ideally, a microservice should not leak implementation details to its consumers. A serverless platform such as Lambda already provides an API to isolate functions. But that in itself is an implementation-detail leakage, and ideally we would add an agnostic HTTP API layer on top of our functions to make them truly isolated.
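
To illustrate from the consumer’s side: behind an HTTP layer, a client depends only on a URL and a response contract, not on Lambda or any particular runtime (the endpoint below is made up):

```python
import json
import urllib.request

# The consumer depends only on a URL and a response contract -- it cannot tell
# (and should not care) that Lambda, or any particular runtime, sits behind it.
req = urllib.request.Request(
    "https://api.example.com/orders/123",   # illustrative endpoint
    headers={"Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    order = json.loads(resp.read())
```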

This strategy has its drawbacks as well and there are several factors you should consider to make a decision.

Importance of using concurrency limits and throttling policies

To mitigate DDoS attacks, make sure to set individual concurrency limits and throttling policies for each public-facing endpoint when using services such as AWS API Gateway. Such services have global concurrency quotas for an entire region of the cloud platform. If you don’t have endpoint-based limits, an attacker only needs to target a single endpoint to exhaust your quota and bring your entire system down in that region.
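
As a sketch of what per-endpoint limits look like for an API Gateway REST API, the boto3 snippet below throttles a single method on a stage. The API id, stage name, and resource path are illustrative:

```python
import boto3

apigateway = boto3.client("apigateway")

# Throttle a single public-facing method on a REST API stage.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",   # illustrative REST API id
    stageName="prod",
    patchOperations=[
        # "~1" is an escaped "/" in the resource path, so this targets POST /orders.
        {"op": "replace", "path": "/~1orders/POST/throttling/rateLimit", "value": "50"},
        {"op": "replace", "path": "/~1orders/POST/throttling/burstLimit", "value": "100"},
    ],
)
```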

Wrapping up

Whether you are migrating legacy systems or building something from scratch, making sure everything runs smoothly as expected is a constant challenge. Dashbird features a monitoring platform and a Well-Architected insights engine tailored specifically for serverless applications that implement distributed and microservices architectures.

If you run (or are about to run) serverless microservices in production, we recommend you give it a try, free for two weeks. The service implements an asynchronous monitoring approach with a simple two-minute integration process that doesn’t require any code changes (nor a credit card).
