In this article, we’re covering all the latest updates from AWS in 2021 that serverless builders should be aware of.
Before we start, let’s recall a few significant updates in serverless announced at re:Invent 2020. One thing we keep seeing is that agility is one of the primary drivers for running workloads in the cloud, and serverless is a good example of this.
But the discussion often starts with cost.
At re:Invent 2020, AWS announced that Lambda billing granularity was reduced to one millisecond. What that means for you is that if your Lambda function runs for 30 milliseconds, you now pay for 30 milliseconds rather than for 100 milliseconds, as before. Depending on the workload, this can translate to something like 70% savings from function duration alone. No action was required on your part to enable this: the change automatically applied to all of your Lambda functions, including Lambda@Edge.
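To see where those savings come from, here is a back-of-the-envelope calculation. The per-GB-second price below is an assumption based on public us-east-1 pricing at the time; treat it as illustrative, not a quote:

```javascript
// Illustrative Lambda duration-cost calculation. The per-GB-second
// price is an assumption (us-east-1 list price), not an official quote.
const PRICE_PER_GB_SECOND = 0.0000166667;

// Old model: duration was rounded up to the next 100 ms.
function billedMsOld(durationMs) {
  return Math.ceil(durationMs / 100) * 100;
}

// New model: duration is rounded up to the next 1 ms.
function billedMsNew(durationMs) {
  return Math.ceil(durationMs);
}

// Duration cost for one invocation of a function with the given memory.
function durationCost(billedMs, memoryGb) {
  return (billedMs / 1000) * memoryGb * PRICE_PER_GB_SECOND;
}

// A 30 ms invocation of a 1 GB function is now billed for 30 ms, not 100 ms:
const oldCost = durationCost(billedMsOld(30), 1);
const newCost = durationCost(billedMsNew(30), 1);
// newCost / oldCost === 0.3, i.e. a 70% reduction for this invocation
```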
Learn how to cut AWS Lambda costs in this article about 6 AWS Lambda cost optimization strategies that work.
We’re not done with Lambda just yet.
To simplify packaging and deployment, you can now ship Lambda functions as container images, which makes it much easier to package dependencies with your function and to use the container tooling you’re already familiar with.
This does not change how Lambda works. Functions are still event-driven; it’s just that in addition to the ZIP archives you used to work with, you can now package Lambda functions as Docker v2 or OCI container images, up to 10 gigabytes in size.
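As a sketch, a minimal image for a Node.js function built on an AWS-provided base image might look like this (the base image tag, file names, and handler name are illustrative):

```dockerfile
# Illustrative container-image Lambda. The AWS base image already
# includes the Lambda runtime interface client.
FROM public.ecr.aws/lambda/nodejs:14

# Copy function code and pre-installed dependencies into the task root.
COPY app.js package.json ${LAMBDA_TASK_ROOT}/
COPY node_modules/ ${LAMBDA_TASK_ROOT}/node_modules/

# Handler in the form "file.exportedFunction".
CMD ["app.handler"]
```

You build and push this image to Amazon ECR like any other container image, then point the Lambda function at it.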
One of our personal favorites from late last year, a little bit before re:Invent, is the ability to synchronously execute Step Functions Express Workflows.
Serverless builders often find themselves in a situation where they have multiple microservices that need some form of orchestration. For example, you might be taking data from one microservice, passing it on to another, maybe taking the decision on its output, and executing some more functions along the way. And you can orchestrate all of this really easily with Step Functions, but if you wanted to hide all of that processing behind a single RESTful API call, you either had to set up a Lambda function to wait for the completion of the workflow, or you had to build a more complex polling client to handle the asynchronous nature of workflow executions.
Now, with synchronous execution support, you can trigger a Step Functions Express Workflow directly from API Gateway and let API Gateway wait until the workflow is complete. All of this is possible without building another layer of indirection just to wait for something to happen.
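Besides the API Gateway integration, the synchronous API can also be called directly. A sketch using the AWS SDK for JavaScript v3 (the state machine ARN is a placeholder; substitute your own Express state machine):

```javascript
// Pure helper: build the request for a synchronous Express Workflow
// execution. Separated out so it can be reused and tested on its own.
function buildSyncExecutionInput(stateMachineArn, payload) {
  return { stateMachineArn, input: JSON.stringify(payload) };
}

// Sketch: StartSyncExecution blocks until the workflow finishes, so no
// polling client is needed. Requires the @aws-sdk/client-sfn package,
// loaded lazily here so the helper above has no SDK dependency.
async function runWorkflowSync(payload) {
  const { SFNClient, StartSyncExecutionCommand } = require("@aws-sdk/client-sfn");
  const client = new SFNClient({});
  const result = await client.send(
    new StartSyncExecutionCommand(
      buildSyncExecutionInput(
        // Placeholder ARN for illustration only.
        "arn:aws:states:us-east-1:123456789012:stateMachine:OrderPipeline",
        payload
      )
    )
  );
  return JSON.parse(result.output); // Express Workflows return output inline
}
```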
One more service that AWS released in preview is Aurora Serverless v2. This version aims to make relational databases even more scalable, first of all by scaling instantly from hundreds to hundreds of thousands of transactions in a fraction of a second. Scaling happens in fine-grained increments: you no longer have to coarsely specify how many ACUs (Aurora Capacity Units) you need; instead, Aurora finds the right capacity for you.
You still get the full breadth of Aurora capabilities in Serverless v2, including Multi-AZ deployments and global databases, and overall it’s estimated that you can save up to 90% in costs compared to the traditional approach of provisioning for peak load.
Some honorable mentions from re:Invent include larger Lambda functions: you can now use up to 10 gigabytes of memory with a maximum of 6 vCPUs. AVX2 (Advanced Vector Extensions 2) is now also supported on Lambda functions.
Recently, AWS also released AWS Proton, a fully managed application deployment service targeted mainly at container and serverless applications. With Proton, you can connect and coordinate all the different tools you need for infrastructure provisioning, code deployment, monitoring, and updates, and you can do that through reusable environment and service templates.
Additionally, AWS announced event replay and archive support for EventBridge. This allows you to replay a specific set of events that occurred in your system so that you can, for example, debug what happened when a problem struck.
And last but certainly not least, S3 is now strongly consistent, which means that a read after a write immediately returns the latest version of the object. This is a game-changer for all sorts of data-driven workloads that make heavy use of object storage, as you no longer need to write your code around eventual consistency.
Now let’s move on to post-re:Invent AWS news and updates.
The first one is S3 Object Lambda. With S3 Object Lambda, you can add code to S3 GET requests to process and even modify data as it’s returned to an application. The way it works is that you first create the Lambda function you want to use to modify or analyze the data, and then attach it to a supporting S3 access point.
And you do that through what’s called an S3 Object Lambda access point.
So you have an S3 access point, a Lambda function, and an S3 Object Lambda access point that executes the Lambda function on the data received from the supporting S3 access point. Every time someone gets an object through the Object Lambda access point, your Lambda function is called and is provided the request information, the user’s identity, and the object content to work on. Your Lambda can access the original object content through a presigned URL, write a new version of that object using the new WriteGetObjectResponse API, and the client then receives the modified content.
The maximum duration of an S3 Object Lambda is 60 seconds, so keep your objects small enough to be processed within that timeframe. Potential use cases for Object Lambda include redacting personally identifiable information on the fly, augmenting data, converting data formats, compressing or decompressing files, implementing custom authorization, and resizing or watermarking images. And remember that AVX2 (Advanced Vector Extensions 2) is now supported on Lambda, so Lambdas can be made much more efficient at tasks like image manipulation.
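The redaction case, for instance, boils down to a pure transformation that your Object Lambda function applies to the object body before sending it back with WriteGetObjectResponse. A minimal sketch, where the regexes are simplistic examples rather than a complete PII detector:

```javascript
// Illustrative redaction step for an S3 Object Lambda function.
// These patterns are simplistic examples, not a complete PII detector.
const EMAIL_RE = /[\w.+-]+@[\w-]+\.[\w.]+/g;
const SSN_RE = /\b\d{3}-\d{2}-\d{4}\b/g;

function redactPII(text) {
  return text
    .replace(EMAIL_RE, "[REDACTED EMAIL]")
    .replace(SSN_RE, "[REDACTED SSN]");
}

// Inside the actual handler you would fetch the original object via the
// presigned URL the event provides, run redactPII on the body, and return
// the result with the WriteGetObjectResponse API.
```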
With Amazon CloudFront, you can securely deliver content like videos, images, data, and applications to your customers with low latency and high transfer speeds. For a while now, you’ve been able to use Lambda@Edge to run your code close to your customers to customize their content experience. But this has mostly been targeted at more compute-heavy use cases, especially when the objects you’re working with are not cached regionally.
But sometimes you don’t need all that; you just need a quick way to manipulate the request with as little latency as possible. You perhaps don’t even care what’s in the response, only what the headers and request parameters are. For this, AWS introduced CloudFront Functions: lightweight, serverless JavaScript functions for high-scale, latency-sensitive customizations that run very close to the customer. You can customize requests and responses there to a certain extent, perform simple authentication tasks, and generate HTTP responses.
In terms of scale, they’re designed to handle 10 million or more requests per second, but this comes at the cost of a strict limit on how long a single function may execute.
Your function duration has to stay under one millisecond. A typical use case for a CloudFront Function is cache key normalization: say cache keys come in with requests in different formats, but your backend or origin expects them in one specific format, then you can do that formatting at the edge. You can also do header manipulation and verification, for example checking that security headers are present, handle redirects and rewrites, or do simple request authorization.
Keep in mind that you can’t access the network or the file system from a CloudFront Function, and you really only have one millisecond to work with, so stay lightweight when running at the edge.
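As an illustration, a cache key normalization function in the CloudFront Functions style might look like this. The handler signature follows the CloudFront Functions event structure; the specific normalization rules are made up for the example:

```javascript
// CloudFront Functions run a restricted ES5-style JavaScript, hence `var`.
// Normalizes the cache key: lowercases the URI and drops a tracking query
// parameter so equivalent requests collapse into one cache entry.
function handler(event) {
  var request = event.request;
  request.uri = request.uri.toLowerCase();
  // "utm_source" is an example tracking parameter that should not
  // fragment the cache.
  delete request.querystring["utm_source"];
  return request;
}
```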
Amazon EventBridge lets you integrate different event-driven systems in a simple, loosely coupled manner. It can now also propagate X-Ray trace context to make your event pipelines easier to understand and debug.
For example, if you need to trace transactions through a very distributed microservices architecture, or understand latency in different parts of your architecture, then X-Ray traces can prove to be really valuable.
EventBridge has been growing a lot in features. We already mentioned that events can now be archived and replayed, and that event pipelines are now more observable through X-Ray traces. But there is now also support for sending events directly to HTTP APIs, whether your own internal APIs or external ones like Slack, Zendesk, or PagerDuty, without writing any code. It takes away some of the typical heavy lifting, such as authentication, retries, and working with downstream rate limits, things that aren’t particularly interesting to reinvent every time.
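For example, a rule that forwards payment failures to an external API destination only needs an event pattern like this (the source and detail-type values are made up for the example; the API destination itself, with its endpoint and auth, is configured separately):

```json
{
  "source": ["myapp.payments"],
  "detail-type": ["PaymentFailed"]
}
```

EventBridge then handles delivery, authentication, and retries to the configured HTTP endpoint on your behalf.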
Cross-Region event bus targets are also supported in EventBridge, so you can collect all of your events in a central Region.
Lambda management actions are now visible through last accessed information in IAM. You can go to the console, look at which Lambda management actions were executed by which roles, and then tighten your IAM policies and permissions accordingly.
On the database side, there is a new integration from PostgreSQL to Lambda. You already knew you could access PostgreSQL from Lambda, but now you can also do it the other way around: invoke a Lambda function from stored procedures or user-defined functions, and see what you can come up with.
Finally, there is a new three-day intermediate instructor-led course available on AWS training on developing serverless solutions on AWS. And there are, of course, many other courses available in AWS training.
This article was written based on our co-hosted webinar with AWS on 25 May 2021.