What Vending-Machines Have to Do With Your Microservices Architecture

Orchestrating and composing multiple services in a distributed architecture is not easy.

Desired Serverless Composition Properties

Before we move on to the great solution vending-machines offer to our distributed architectures, we need to understand which properties and values we’re looking for.

In a serverless environment, there are at least three desired properties of any distributed services implementation.

Below is a short intro. We’ve covered these in detail in the Serverless Composition Strategies article, in case you’d like to dive deeper.

Isolation

Each component must be isolated from the others, working as a black box. Services should not need to be aware of each other’s implementation details in order to interact. The deployment process is ideally independent for each individual component.

Substitution

It must be relatively easy to replace (or extend) an existing component with minimal to no risk of causing a disruption to other services.

Zero-waste

The pay-per-use model is one of the main benefits of serverless, as opposed to renting a time allocation, as with a virtual server instance. The composition of services should not increase costs by creating idle time and double billing, for example while waiting on IO-bound tasks.

Challenges

Meeting all three requirements is usually not a trivial task. Sometimes an implementation meets two of them but breaks the third, and a lot depends on the context and use case.

Although there’s no one-size-fits-all when it comes to distributed cloud architecture, there are some software patterns that can be applied to a variety of use cases. By following these conceptual implementation designs, it is usually possible to meet the desired properties and also achieve a robust and scalable architecture.

One Possible Solution

This is where Vending-Machines come in.

These little automated soda and snack dispensers usually run on top of a mature and robust software design pattern called the Finite-State Machine (FSM).

In essence, an FSM controls a workflow composed of multiple steps, in which the system can be in one of a finite number of states and transitions between these states need to be handled. Read more about it here in case you’re not familiar with the concept.
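To make this concrete, here’s a minimal FSM sketch in Python, staying with the vending machine theme. The state and event names are made up for illustration; real vending machine firmware is of course more involved.

```python
# A tiny finite-state machine for a vending machine (illustrative only).
# The machine is always in exactly one state; events drive the transitions.

TRANSITIONS = {
    ("idle", "coin_inserted"): "awaiting_selection",
    ("awaiting_selection", "item_selected"): "dispensing",
    ("awaiting_selection", "refund_requested"): "idle",
    ("dispensing", "item_delivered"): "idle",
}

def next_state(state: str, event: str) -> str:
    """Look up the next state; reject events that aren't valid in this state."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event '{event}' is not allowed in state '{state}'")

state = "idle"
for event in ("coin_inserted", "item_selected", "item_delivered"):
    state = next_state(state, event)
    print(f"{event} -> {state}")
```

The whole behavior is captured in one small transition table: given the current state and an event, the next state is unambiguous.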

Why Finite-State Machines?

The orchestration of distributed microservices is about coordinating multiple components in a coherent and organized way to achieve a higher-level task.

This very often involves some sort of workflow with multiple steps, transitions and different states along the way. Sounds like an FSM!

By example, we learn

Take an e-commerce checkout process as an illustrative example. After the customer submits an order, the system kicks off a process with several steps:

  1. Verify stock availability;
  2. Reserve the products in stock;
  3. Process the payment;
  4. Issue an invoice;
  5. Send a confirmation message to the customer’s e-mail;
  6. Decrement inventory availability for the ordered products;
  7. Notify the logistics and fulfillment department.

When transitioning from one step to the next, some checks might be necessary. For example: did the customer ask for any products to be delivered as a gift? If yes, a separate process needs to be triggered within the fulfillment center. If the payment fails, the rest of the process should be suspended and someone must be notified: either the customer directly, a support agent, or perhaps both.
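One way to picture this is to write the checkout flow down as a transition table. The sketch below uses hypothetical state and event names, not a prescription for how to model your checkout; a real implementation would also carry order data alongside the current state.

```python
# Sketch of the checkout workflow as FSM transitions: (state, event) -> next state.
# Note the branches: a gift request and a failed payment lead to different paths.
CHECKOUT_TRANSITIONS = {
    ("checking_stock", "stock_available"): "reserving_products",
    ("checking_stock", "out_of_stock"): "order_rejected",
    ("reserving_products", "products_reserved"): "processing_payment",
    ("processing_payment", "payment_succeeded"): "issuing_invoice",
    ("processing_payment", "payment_failed"): "handling_failed_payment",
    ("issuing_invoice", "invoice_issued"): "sending_confirmation",
    ("sending_confirmation", "email_sent"): "updating_inventory",
    ("updating_inventory", "inventory_updated"): "notifying_fulfillment",
    ("notifying_fulfillment", "gift_requested"): "preparing_gift",
    ("notifying_fulfillment", "fulfillment_notified"): "completed",
    ("preparing_gift", "gift_prepared"): "completed",
}
```

Each service only needs to report an event back; which step runs next is decided by the table, not by the services themselves.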

Coding and handling all this logic manually can be difficult, and developers risk ending up with tangled spaghetti code in which multiple services are coupled to each other.

Meeting the three desired properties is not trivial, as we said before. If the payment fails, should the inventory reservation be released for others to buy the products? In that case, should the inventory service stay running idle, waiting for the payment status, only then to react accordingly?

We risk having two services (payment and inventory management) contributing to double billing by being tied to an IO-bound process with a third-party credit card processing provider, for example. This would be a subpar implementation from a serverless best-practices standpoint.

Solution in practice

The Finite-State Machine will allow us to handle not only the task execution, but also the logic behind state transitions and rules to manage exceptional cases (such as when payment fails).

There would be no need to keep services in a double-billing state, because the FSM knows what to do when the payment fails and will ensure the inventory management component is triggered appropriately.
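As a rough sketch of what that looks like in code (the function names below, such as release_reservation and notify_support, are hypothetical placeholders, not a real API): when the payment provider’s result finally arrives, the orchestrator looks up the failure transition and invokes the inventory service once, instead of keeping it running while it waits.

```python
# Hypothetical event handler for the payment step of the checkout FSM.
# Nothing sits idle waiting for the payment provider: the handler only runs
# when the result event arrives, then triggers the appropriate next step.

def release_reservation(order_id: str) -> None:
    print(f"releasing reserved stock for order {order_id}")  # inventory service call

def notify_support(order_id: str, reason: str) -> None:
    print(f"alerting support about order {order_id}: {reason}")

def issue_invoice(order_id: str) -> None:
    print(f"issuing invoice for order {order_id}")

def handle_payment_result(order_id: str, event: str) -> None:
    """Dispatch the next step based on the payment outcome event."""
    if event == "payment_succeeded":
        issue_invoice(order_id)
    elif event == "payment_failed":
        release_reservation(order_id)
        notify_support(order_id, "payment declined")
    else:
        raise ValueError(f"unexpected payment event: {event}")

handle_payment_result("order-42", "payment_failed")
```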

Hands-on Finite-State Machines

There are basically three ways of implementing an FSM in your projects:

  1. Implementing on top of open-source packages;
  2. Leveraging managed cloud services;
  3. Creating your own from scratch (not recommended).

Open-source

There are multiple open-source FSM projects available in many different programming languages. We won’t endorse any particular one here; do your own research before deciding which to use.

Managed Cloud Services

The leading cloud providers offer managed services for implementing custom FSM logic. They are usually very flexible and easy to get started with, and they often employ a pay-per-use model, making them cost-effective for small to large applications.

AWS

Amazon Web Services offers Step Functions, which allows developers to “build distributed applications using visual workflows”. It integrates very well with a variety of other AWS services, making it ideal for coordinating services in the leading cloud provider.
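For example, once a Step Functions state machine is deployed, starting an execution for a new order is a short boto3 call. The ARN and payload below are placeholders; this is just a sketch of the wiring, not a complete checkout implementation.

```python
# Sketch: kick off a Step Functions execution for a new order using boto3.
import json
import boto3

sfn = boto3.client("stepfunctions")

response = sfn.start_execution(
    # Placeholder ARN: point this at your own deployed state machine.
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:checkout",
    name="order-42",                       # optional, unique per execution
    input=json.dumps({"orderId": "42"}),   # initial input passed to the first state
)
print(response["executionArn"])
```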

Azure

Microsoft offers Logic Apps to “quickly build powerful integration solutions”, as well as the Power Automate service.

Wrapping up

We learnt that orchestrating distributed services in a serverless architecture is not a trivial task, but there are software design patterns to help us with that.

The Finite-State Machine is a robust and mature pattern that can be used in a variety of cases.

In case you’d like to learn more, read more about serverless architectural patterns in our Cloud Knowledge Base in which we cover other patterns that might be more suitable to different scenarios.
