Serverless security hazards and trends to consider

Fourteen billion dollars – that’s the projected global market size for serverless, which is expected to grow by about 26 percent annually over the next few years, according to the recent Global Serverless Architecture Market report.

The fast pace of serverless adoption is hardly surprising, because the technology can save companies significant costs. It enables them to build and deploy software and digital products without provisioning or maintaining any virtual or physical servers. That task falls to the cloud provider, which is responsible for dynamically allocating and provisioning servers.

As a result, companies using serverless architecture eliminate complex tasks such as maintaining security by continuously applying patches and other fixes, and can therefore focus more on the functionality of their digital product.

Surprisingly, IT professionals are taking a cautious approach to adopting serverless architecture right now. For example, according to this ZDNet article citing the latest research, adoption slowed down at the beginning of 2019. A study that interviewed more than 500 IT professionals found that only 15 percent were using the technology, and the share of those evaluating it fell from 42 percent to 36 percent.

These numbers suggest that companies need more time to understand serverless architectures and learn how to deploy them correctly across their organizations. On top of that, plans for serverless implementations can be slowed down by the need to evaluate the security risks associated with the new technology.

Indeed, quite a few risks should be considered when securing serverless applications and using the technology itself. As more organizations adopt serverless, these risks will affect more of them, so let’s review the most significant hazards and trends in this area that any adopter of the technology should be aware of.

Major hazards in Serverless

Let’s see what risks lie ahead for serverless technology and what factors will drive its development in the coming years.

So let’s begin with the risks, shall we? Given the relative newness of the technology, serverless-based architectures carry several security risks that application owners should consider, including the following.

1. Complicated security scanning

The newness of the technology is the main reason for this. Compared to standard applications, security testing for serverless architectures is more complex because almost everything has to be done manually. Since nearly all currently available automated scanning tools lack adaptability and compatibility with serverless applications, it’s reasonable to assume the problem will persist until adequate security testing solutions are developed.

For example, SAST (Static Application Security Testing) works well with standard applications but struggles with serverless. SAST tools rely heavily on control-flow and data-flow analysis to identify security issues in software. But since serverless apps consist of many small functions wired together by cloud services and event triggers, this kind of analysis is prone to false positives.
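
To make the false-positive problem concrete, here is a minimal, hypothetical Python sketch of a Lambda-style handler. The event shapes loosely mimic API Gateway and S3 triggers; all names are illustrative, not from the article:

```python
import json

# A minimal AWS Lambda-style handler (hypothetical example). The "event"
# dict can originate from many triggers -- an API Gateway request, an S3
# upload, an SQS message -- and each trigger produces a different shape.
def handler(event, context=None):
    # For an API Gateway-style trigger, attacker-controlled input arrives
    # in the request body...
    if "body" in event:
        payload = json.loads(event["body"])
        return {"source": "api", "name": payload.get("name")}
    # ...while for an S3-style trigger it arrives as an object key instead.
    if "Records" in event:
        key = event["Records"][0]["s3"]["object"]["key"]
        return {"source": "s3", "key": key}
    return {"source": "unknown"}

# A static analyzer sees no single entry point to treat as "user input":
# whether a given field is attacker-controlled depends on cloud-side
# trigger wiring the tool cannot see, so its taint tracking tends to
# over- or under-approximate.
print(handler({"body": json.dumps({"name": "alice"})}))
print(handler({"Records": [{"s3": {"object": {"key": "report.csv"}}}]}))
```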

Users of serverless architectures may also run into problems deploying standard security protections such as firewalls, intrusion detection systems (IDS), and web application firewalls (WAF). The technology restricts or eliminates access to the underlying virtual and physical servers, which means users can’t take advantage of these conventional protections, endpoint protection, or host-based intrusion detection systems.

2. Expanded potential attack surface

Serverless architectures limit the use of application firewalls and other standard protections, which means that they’re prone to more threats. These architectures use data from an expanded range of event sources (APIs, cloud storage, HTTP, IoT devices, and others), which increases the attack surface.

Understanding all the involved risks can be challenging, as the technology is still relatively new to many teams. And unlike routine work that can be delegated to outside tools and services, security risks must be considered and monitored at all times.

3. Dependencies on third-party software packages

Serverless functions still need code to complete their tasks, and that code typically relies heavily on software developed by third parties: packages, open-source libraries, and online services. This means the risk of importing something dangerous from a vulnerable third-party dependency is high.

There’s also the risk of bit-rot when using third-party libraries. Once you deploy a serverless app and move on, it begins to age and needs regular security updates to keep protecting its data adequately. If you don’t apply them, the risk of being hacked grows by the day; that’s reportedly what happened to Capital One several years ago, when the company lost the data of millions of customers in a breach involving a vulnerable, outdated third-party component.

Avoiding this security risk requires serverless users to apply several practices, including:

  • Removing unnecessary dependencies
  • Regularly upgrading outdated package versions
  • Scanning software for known vulnerable dependencies (typically via a security platform)
  • Keeping an inventory of the dependencies in use and their versions
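
As a minimal sketch of the last practice, the snippet below builds an inventory of the packages installed in the current Python environment using only the standard library. A real deployment pipeline would then diff this inventory against a vulnerability database, typically via a scanning platform:

```python
# Build a {package: version} inventory of the active Python environment
# using only the standard library (Python 3.8+).
from importlib import metadata

def dependency_inventory():
    """Return a {package_name: version} map for installed distributions."""
    return {
        dist.metadata["Name"]: dist.version
        for dist in metadata.distributions()
        if dist.metadata["Name"]  # skip entries with malformed metadata
    }

# Print a pinned requirements-style listing that can be stored alongside
# the deployment and compared against known-vulnerable versions later.
for name, version in sorted(dependency_inventory().items()):
    print(f"{name}=={version}")
```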

4. Unsafe storage of access and configuration files

Storing and maintaining configuration settings, access files, API keys, database credentials, encryption keys, and other “app secrets” becomes a concern for users of serverless architectures as their apps grow in complexity. In conventional applications, these secrets are typically stored in one centralized, encrypted file or database that developers cannot freely access, because role-based access control (RBAC) limits their rights to what their role (deployment, etc.) requires.

In serverless, on the other hand, one can’t store the secrets in one centralized file because each function is packaged separately, so functions typically rely on environment variables to access them. Storing a secret as an environment variable, i.e., in plain text, means that anyone who can deploy or inspect the app can likely access sensitive data.
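
A short, hypothetical sketch of why plain-text environment variables are risky: everything running inside the function process, including a compromised third-party dependency, can read them. The variable name and value below are made up for illustration:

```python
import os

# Simulated deployment step: the platform injects a secret as a plain-text
# environment variable (a common pattern in serverless configuration).
# In a real deployment this would be set outside the code.
os.environ["DB_PASSWORD"] = "s3cr3t"

def curious_dependency():
    # Any code in the process -- including a compromised third-party
    # library -- can enumerate the environment and harvest secrets.
    return {
        name: value
        for name, value in os.environ.items()
        if "PASSWORD" in name or "SECRET" in name
    }

leaked = curious_dependency()
print(leaked)  # the "secret" is exposed to the entire process
```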

The best way to mitigate this risk is to ask your vendor about the possibility of storing sensitive data in an encrypted environment. They should provide you with secure APIs that are fully compatible with serverless.
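
The usual pattern behind such vendor APIs is to fetch the secret from an encrypted store at cold start, cache it in memory for the container’s lifetime, and never place it in a plain-text environment variable. The sketch below illustrates the pattern with a stand-in store; `fetch_from_secret_store` is hypothetical, roughly playing the role of a real call such as AWS SSM’s `get_parameter(..., WithDecryption=True)`:

```python
from functools import lru_cache

def fetch_from_secret_store(name: str) -> str:
    # Stand-in for a network call to an encrypted secrets manager;
    # a real implementation would decrypt server-side and use IAM-style
    # permissions to restrict who may read the value.
    fake_store = {"db_password": "s3cr3t"}
    return fake_store[name]

@lru_cache(maxsize=None)
def get_secret(name: str) -> str:
    """Fetched once per container lifetime, then served from memory."""
    return fetch_from_secret_store(name)

def handler(event, context=None):
    password = get_secret("db_password")  # no plain-text env var involved
    return {"connected": password == "s3cr3t"}

print(handler({}))
```

Caching matters here because serverless containers are reused across invocations: the secret is fetched once per cold start instead of on every request.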

Now that we know the potential risks of migrating to serverless architectures, let’s talk about the hottest trends that will define the future of the technology.

1. Adopting and securing Serverless apps will become easier

Serverless architectures come with some crucial security considerations, so many people are working on eliminating them. As more and more companies adopt serverless, the need to address them will become even more urgent. That’s why big companies like Google and IBM are working to make sure the technology is secure and ready to use.

Another significant trend is the development of tools that help with adopting serverless. Since adopting the technology essentially means rethinking almost everything users are used to doing with traditional architectures, they will need helpful tools for tasks like monitoring.

For example, adopters of serverless architectures can use Dashbird for monitoring and troubleshooting serverless applications. The tool is specifically designed to help users detect potential failures in their apps, including crashes, timeouts, configuration errors, and early exits. To make serverless adoption even more accessible, the tool supports all runtime languages available on AWS and doesn’t require users to change their existing code.

Learn more about securing serverless architectures in these infographics.

2. Standardized Serverless development

“There’s little standardization across serverless vendors, which means that the online community can’t quite realize the full potential of the technology,” says Ashley Stockwell, a developer from Trust My Paper. “Currently, many players push for standardization measures that can make a single experience for deploying functions across all providers of serverless.”

The Cloud Native Computing Foundation (CNCF) is at the forefront of the standardization effort with Oracle. According to this SDX report, the two organizations are working on the first draft specification “targeted at interoperability for generating a serverless function.” Reportedly, the project’s primary purpose is to provide tools for building, testing, and managing the lifecycle of serverless architectures.

“One thing is quite clear – as a new technology, there is a lack of standardization and interoperability between cloud providers that may lead to vendor lock-in,” SDX quoted CNCF’s recent document on the progress of the standardization project. “There is a need for quality documentation, best practices, and more importantly, tools and utilities. Mostly, there is a need to bring different players together under the same roof to drive innovation through collaboration.”

In addition to this collaboration, there’s another open-source framework project pushing for standardization: Knative. Building on experience with Kubernetes, Knative seeks to make building and deploying serverless applications more manageable. The project was started by Google – with the latest version released not long ago – but many others are already on board, including IBM.

3. Serverless will also become hybrid

It’s safe to suggest that many enterprise users will have requirements that can only be met with hybrid serverless solutions; for example, some apps will run in on-premises data centers while others run on AWS Lambda and other public-cloud services. By making that possible, serverless technology will become even more popular among enterprise users and gain more valuable integrations and features.

Reputable sources support this trend. For example, this Deloitte report claims the following:

“Not all applications or services can be delivered in a serverless model – we believe the future enterprise IT landscape will be a hybrid landscape.”

The report also suggests that serverless computing is a lot more attractive for enterprise users than traditional infrastructure models, but taking full advantage of the technology would be impossible without “the right software architecture.”

Becoming mainstream

Even though serverless comes with some security considerations, big players are working to make the technology as secure as possible. Like any other emerging technology, serverless is still maturing, and all challenges, including those described in this article, are surmountable.

It’s safe to assume that serverless will become mainstream in the next several years, as more and more companies become interested in this new way of building and architecting applications that eliminates the need to manage servers.

