Serverless is a hot technology right now, but what does it actually mean? The simplest definition is that it takes the “where” out of software deployment. To see what that means in practice, let’s look at the ways software has been deployed over the years.
In the past, systems administrators prepared physical servers before software could be deployed: installing the operating system and device drivers, making sure enough memory, disk, and processor capacity was available, installing any prerequisites, and so on. They also handled hardware upgrades. This is known as a “bare metal” environment: the physical hardware and the deployed software are strongly coupled, since one depends directly on the other. Here, the unit of deployment was an actual server.
The next deployment model to emerge was the virtual machine. Instead of deploying onto a specific piece of hardware, developers could target a simulated server. This made deployments far more repeatable and flexible, removed worries about small hardware variations, and let system administrators begin decoupling software from hardware: if the hardware failed, a virtual machine could simply be migrated to a different physical server, and several virtual machines could share a single physical server. However, virtual machines still had limitations and overhead. For better or for worse, they pretend to be full servers, and that isn’t always needed. Here, the unit of deployment is the virtual machine.
The follow-up to virtual machines was containerized deployment, built on technologies such as Docker, OpenVZ, LXC, FreeBSD jails, and Solaris zones. These let a system administrator “section off” an operating system so that different applications run on the same system without interfering with each other. They also gave developers a lightweight environment that closely matches production, leading to more consistent behavior between environments, and a rich ecosystem of tooling has grown up around creating and maintaining containers. Many companies use containers to improve their development and deployment cadence. Here, the unit of deployment is a container.

You will notice that in each of these paradigms there is a concept of “where” the software executes, whether on a physical server on-prem or on a VM at a cloud host. Serverless, on the other hand, promises another level of abstraction: the code itself. With this new level of abstraction, we no longer need to be concerned with “where” our code is hosted.
Google released Google App Engine in 2008. For the first time, a developer could write a program and launch it on Google’s cloud without being concerned with details like how the server was provisioned or whether the operating system needed patches. Amazon followed with a similar service, AWS Lambda, in late 2014. Now developers could create software without having to worry about hardware, operating system maintenance, scalability, or locality.
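The programming model these platforms expose is deliberately small: the developer supplies a function, and the platform handles provisioning, scaling, and teardown. As a rough sketch, an AWS-Lambda-style handler in Python looks something like this (the event shape here is a simplified assumption for illustration, not a specific AWS event format):

```python
import json


def handler(event, context):
    """Entry point the serverless platform invokes once per event.

    The platform decides where and when this runs; the developer
    only writes the function body.
    """
    # Pull a value out of the incoming event, with a default.
    name = event.get("name", "world")

    # Return an HTTP-style response the platform can pass back to the caller.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is just a function, it can also be called directly in local tests, which is one of the practical conveniences of the model.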
Although these are two of the bigger providers, there are a great number of serverless providers today, supporting various technologies at different price points. There are also self-hostable serverless platforms, such as Apache OpenWhisk and OpenFaaS, for companies that don’t want to move to the cloud but still want some of the added flexibility.
Although the various serverless platforms differ in their details, they typically follow a workflow along these lines: the developer writes a function, deploys it to the platform, and configures a trigger (such as an HTTP request, a queue message, or a schedule); when the trigger fires, the platform provisions a runtime, executes the function, scales instances up or down with demand, and bills per invocation.
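That lifecycle can be sketched as a toy in-process “platform.” The names below (`FaaSPlatform`, `deploy`, `trigger`) are hypothetical, invented purely to illustrate the shape of the workflow; a real provider does the same steps across a fleet of machines:

```python
from typing import Callable, Dict


class FaaSPlatform:
    """Toy model of a serverless platform: it stores deployed
    functions and runs them on demand in response to events."""

    def __init__(self) -> None:
        self._functions: Dict[str, Callable[[dict], dict]] = {}

    def deploy(self, name: str, func: Callable[[dict], dict]) -> None:
        # Step 1: the developer uploads code; the platform stores it.
        self._functions[name] = func

    def trigger(self, name: str, event: dict) -> dict:
        # Step 2: an event (HTTP request, queue message, timer) arrives.
        # Step 3: the platform invokes the function in a managed runtime.
        # Step 4: the result is returned; the runtime may be reclaimed.
        return self._functions[name](event)


# Deploy and invoke a function without caring "where" it runs.
platform = FaaSPlatform()
platform.deploy("greet", lambda event: {"message": f"Hello, {event['user']}!"})
print(platform.trigger("greet", {"user": "Ada"}))  # → {'message': 'Hello, Ada!'}
```

The point of the sketch is the division of labor: the developer’s code touches only `deploy` and the function body, while everything inside `trigger` is the provider’s problem.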
The way software is delivered has continued to evolve, especially since the advent of affordable and reliable cloud hosting. Each new methodology brings new advantages along with some disadvantages, so it pays to be familiar with all of them in order to select the most appropriate deployment method for a given project. Large deployments may need a combination of these approaches; for example, it’s common for GPU compute to run on bare metal rather than in a container or virtual machine. It is best to view these approaches as augmenting, rather than supplanting, each other. As always, stay aware of project requirements; they will guide you toward the most efficient way to launch and maintain your software platform.