DynamoDB is a key-value and document database with single-digit millisecond response times at any scale. It’s a fully managed durable database with built-in security, backup, and restore capabilities.
A term you’ll often hear alongside DynamoDB is NoSQL, which simply means it doesn’t use the traditional SQL query language of relational databases. Its design reduces complexity between tables by consolidating objects into a common, “schemaless” collection. These objects are then grouped around common themes that match the queries your application runs most often.
Table: a collection that can hold a virtually unlimited number of items; it may also have secondary indexes associated with it (see the sketch after this list for how the pieces fit together)
Secondary Index: duplicates table items using a different primary key and sort key
Primary Key: a special attribute used to reference items, similar to an item ID
Sort Key: another special attribute used to organize items in a different sort order
Item: the most basic unit in DynamoDB; it holds the data attributes, structured as JSON
Attribute: a key-value pair that contains an informational data point about an item in the table
Streams: a constant stream of the state-changing operations executed against a table
Query: an operation to retrieve a particular item (or set of items)
Scan: an operation to read the entire table or a section of it
Filter: rules applied after a query or scan has executed, but before results are returned to the requester
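To make these terms concrete, here is a minimal sketch using boto3 (the AWS SDK for Python). The table name `Orders`, its key names, and the item attributes are hypothetical, not taken from any particular application:

```python
import boto3

# Hypothetical table "Orders" with a composite primary key:
# "customer_id" is the partition (primary) key, "order_date" is the sort key.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")

# An item is simply a collection of attributes (key-value pairs).
table.put_item(
    Item={
        "customer_id": "cust-42",    # primary (partition) key
        "order_date": "2021-06-01",  # sort key
        "status": "shipped",         # regular attribute
        "total": 129,                # attributes can differ between items
    }
)
```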
A relational database still has its place, such as when flexibility is needed, but as computing costs take up a growing share of a business’s budget, applications need faster performance to match scaling demands. With an RDBMS, query performance can also become unpredictable and highly variable, and it tends to degrade as more data is loaded in.
With NoSQL design, data can be queried efficiently, but only in a finite number of ways. It’s therefore important to understand the specific questions the database will be answering before any design takes place. Also note that, unlike with an RDBMS, your aim should be to have as few tables as possible.
Although there is some rigidity within the service, DynamoDB supports two data models allowing for slightly different needs and some flexibility.
The first is the key-value store, essentially a scaled-up distributed hash table. Items in the table are uniquely identified by their key attributes, which are used to GET, SET, UPDATE, and DELETE them. There are two kinds of key attribute: the Primary Key, which works similarly to an item ID, and the Sort Key, which allows the items to be ordered.
As we know, hash tables are reliable, consistent, and fast whatever their size; their drawback, however, is that only one record can be retrieved at a time.
To address this, DynamoDB can also be used as a wide-column store, meaning each row can have any number of columns at any time. The underlying B-tree structure and secondary indexes make it possible to find an item while also allowing range queries, and they can be used to reference and order items by different primary keys and sort keys. It’s also important to remember that DynamoDB is a schemaless database, in which items can have different sets of attributes.
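As an illustration of the two access patterns, the sketch below does a direct key lookup and then a range query against a hypothetical global secondary index named `status-date-index`; the table and index names are assumptions, not part of any real setup:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table from earlier

# Key-value access: fetch exactly one item by its full primary key.
resp = table.get_item(Key={"customer_id": "cust-42", "order_date": "2021-06-01"})
item = resp.get("Item")

# Wide-column style access: a range query over a hypothetical global secondary
# index keyed on status (partition) and order_date (sort).
resp = table.query(
    IndexName="status-date-index",
    KeyConditionExpression=Key("status").eq("shipped")
    & Key("order_date").begins_with("2021-06"),
)
shipped_in_june = resp["Items"]
```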
One of the biggest benefits of serverless computing is the hands-off approach developers can take to its most time- and budget-consuming aspects, one of which is ensuring that capacity is neither exhausted nor underused. DynamoDB automatically scales to manage surges in demand without throttling or slow responses, and then scales back down so resources aren’t wasted.
The service does this using AWS Application Auto Scaling, which lets tables increase read and write capacity as needed according to your own scaling policy. The policy specifies which capacities you want to scale and the minimum and maximum provisioned capacity unit settings. It also includes a target utilization (the percentage of provisioned throughput actually consumed), which, together with a target-tracking algorithm, keeps throughput as close to the set target as possible, whatever the workload.
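A rough sketch of how such a policy could be wired up with boto3 is shown below; the table name, capacity bounds, and 70% target are illustrative values only:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's read capacity as a scalable target (bounds are illustrative).
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",                        # hypothetical table
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
)

# Target-tracking policy: keep consumed read throughput near 70% of provisioned.
autoscaling.put_scaling_policy(
    PolicyName="orders-read-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:ReadCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
)
```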
DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second, making it one of AWS’s top services for applications needing low-latency data access at any scale.
The service was born out of Amazon’s own need for a more advanced database as its Black Friday sales took off. RDBMS features such as joins were slow at scale, and Amazon found that other features, notably strict consistency, weren’t always necessary and could be relaxed. In both cases the RDBMS features were adding cost, and with NoSQL the CPU and memory requirements could be vastly reduced.
Since speed is of the essence and, from a technical standpoint, we need shorter initialization steps while still maintaining high security, DynamoDB is a good fit: access is over HTTP, and the service uses AWS IAM. These two elements mean your database is always protected and requests are authenticated and validated quickly, without the need for complex network configuration such as network partitioning.
There are endless use cases for DynamoDB, as it’s a service that is needed in a huge number of situations. Growing companies such as Lyft and Airbnb, as well as long-standing enterprises such as Capital One and Nike, use the service within their architecture.
Most of the systems that make Dashbird work run on DynamoDB. We use it to store customers’ data, such as integration settings and delegations, and because we trust DynamoDB to handle high-throughput loads, we also use it for storing detected errors and insights. To operate with confidence, we use our own system to monitor read and write table capacities and to detect possible throttling, high-latency periods, or tables that are nearing their capacity limits.
DynamoDB charges for reading, writing, and storing data in your DynamoDB tables, plus any additional features you choose to add. It has two capacity modes, each with its own billing options for processing reads and writes on your tables:
The first is On-Demand mode. As you’d expect, there is no need to specify read and write throughput, as the table scales up and down as needed. This is a good option if there is an element of unpredictability in your tables or traffic volumes.
Conversely, with Provisioned capacity, you specify the number of reads and writes per second that you expect to need. Auto scaling is available here to adjust capacity automatically based on the specified utilization rate, ensuring both high performance and budget control.
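A minimal sketch of switching a hypothetical table between the two modes with boto3 might look like this (AWS limits how often a table can change billing modes, so this isn’t something you’d do frequently):

```python
import boto3

client = boto3.client("dynamodb")

# Switch an existing (hypothetical) table to On-Demand billing...
client.update_table(TableName="Orders", BillingMode="PAY_PER_REQUEST")

# ...or back to Provisioned mode with explicit read/write capacity units.
client.update_table(
    TableName="Orders",
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 5},
)
```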
For both options, costs vary depending on the region you’re using, so it’s worth using the calculator to give an estimate.
The following DynamoDB benefits are included as part of the AWS Free Tier. Each benefit is calculated monthly on a per-region, per-payer account basis.
DynamoDB charges one write request unit for each write of up to 1 KB and two write request units for each transactional write. For reads, DynamoDB charges one read request unit for each strongly consistent read of up to 4 KB, two read request units for each transactional read, and one-half read request unit for each eventually consistent read. Larger items consume additional units in proportion to their size.
In On-Demand mode these are billed per million request units, while in Provisioned mode capacity units are billed per unit, per hour.
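As a rough illustration of how those rules translate into request units, the helper below rounds item sizes up to the nearest billing increment; the item sizes in the examples are arbitrary:

```python
import math

def write_units(item_kb: float, transactional: bool = False) -> float:
    """Write request units: 1 per 1 KB, doubled for transactional writes."""
    units = math.ceil(item_kb / 1)
    return units * 2 if transactional else units

def read_units(item_kb: float, consistency: str = "eventual") -> float:
    """Read request units: 1 per 4 KB strongly consistent,
    0.5 per 4 KB eventually consistent, 2 per 4 KB transactional."""
    base = math.ceil(item_kb / 4)
    factor = {"strong": 1, "eventual": 0.5, "transactional": 2}[consistency]
    return base * factor

# Arbitrary examples:
print(write_units(3.5))         # 4   -> a 3.5 KB write costs 4 write units
print(read_units(9, "strong"))  # 3   -> a 9 KB strongly consistent read
print(read_units(9))            # 1.5 -> the same read, eventually consistent
```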
Within the Provisioned mode there is the additional option of Reserved Capacity. This can help save costs: you pay a one-time upfront fee and commit to an hourly rate for a minimum throughput level for the duration of the reserved capacity term. It’s worth noting, however, that any unused capacity will not roll over into the following month.
Storage doesn’t need to be provisioned in either capacity mode, as the service constantly monitors your tables to determine charges. As with most storage solutions on AWS, there is a free tier here too, which includes 25 GB per month; anything beyond that is charged monthly on a per-GB basis.
Additional features available are charged on top of these basics. Once again, pricing for these can vary depending on the capacity mode so the calculator is a worthwhile use of time.
Other features include backup and recovery, global storage, DynamoDB Accelerator (DAX), DynamoDB Streams, and data transfer.
While On-Demand mode offers a more hands-off approach, it can end up being up to seven times more costly than Provisioned mode. Provisioned mode, meanwhile, places comparatively more burden on developers to predict capacity and can have some drawbacks for scalability. Here are three quick tips to save and optimize your tables, and to choose the right features for your application.
You can find out more about how to run cost-effective DynamoDB tables in this article.
Setting up DynamoDB is incredibly simple. In this example, we are going to use parameters and features available in the AWS Free Tier so you’re able to replicate a similar version if you’re just starting out.
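A minimal table-creation sketch along those lines, using boto3 and the hypothetical `Orders` table with modest provisioned throughput, could look like this:

```python
import boto3

client = boto3.client("dynamodb")

# Hypothetical table with modest provisioned throughput that stays
# well within the Free Tier's provisioned capacity allowance.
client.create_table(
    TableName="Orders",
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_date", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "customer_id", "KeyType": "HASH"},  # partition key
        {"AttributeName": "order_date", "KeyType": "RANGE"},  # sort key
    ],
    BillingMode="PROVISIONED",
    ProvisionedThroughput={"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
)

# Wait until the table is ready before using it.
client.get_waiter("table_exists").wait(TableName="Orders")
```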
DynamoDB Streams is a great feature that captures changes to a table at the point in time they happen, storing them in a log that is kept for 24 hours.
Each stream record is written with the primary key attributes of the item that was changed, ensuring that all modifications are recorded in the same sequence in which they occurred. The stream can also be configured to capture additional data, such as the item’s state before and after the change.
DynamoDB Streams writes records in near real-time, allowing other applications to consume and act on them. Good examples of use cases include triggering an AWS Lambda function whenever an item changes and replicating data to other data stores or regions.
It’s important to remember that AWS uses different endpoints for DynamoDB and DynamoDB Streams; however, both must remain in the same region.
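A hedged sketch of enabling a stream on the hypothetical table and consuming its records from a Lambda function (assuming an event source mapping connects the two) might look like this:

```python
import boto3

# Enable a stream on a hypothetical table, capturing both the old and
# new item images for every change.
boto3.client("dynamodb").update_table(
    TableName="Orders",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",
    },
)

# A Lambda handler subscribed to the stream receives batches of records
# describing each change, in the order the changes occurred.
def handler(event, context):
    for record in event["Records"]:
        if record["eventName"] == "MODIFY":
            old = record["dynamodb"].get("OldImage", {})
            new = record["dynamodb"].get("NewImage", {})
            print("item changed:", old, "->", new)
```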
You’re able to query an item or a set of items in your tables based on the Primary Key and retrieve them quickly.
Querying The Table using Query Operations
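For example, a query against the hypothetical `Orders` table used earlier might look like the sketch below; the key names and values are assumptions:

```python
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table

# Retrieve every order for one customer placed in June 2021,
# newest first (ScanIndexForward=False reverses the sort key order).
resp = table.query(
    KeyConditionExpression=Key("customer_id").eq("cust-42")
    & Key("order_date").begins_with("2021-06"),
    ScanIndexForward=False,
)
for item in resp["Items"]:
    print(item["order_date"], item.get("status"))
```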
A DynamoDB Scan reads every item in the table or secondary index and returns a set of results. Unlike a Query, whose results can be ordered by the Sort Key, a Scan reads items in no particular order. Additionally, a filter expression can be used to determine which items are returned from the scan.
Running a scan can be expensive, so avoid scans where possible and use queries instead, unless it’s absolutely necessary to read through every individual item.
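A sketch of a filtered, paginated scan on the same hypothetical table could look like this:

```python
import boto3
from boto3.dynamodb.conditions import Attr

table = boto3.resource("dynamodb").Table("Orders")  # hypothetical table

# Scan the whole table, keeping only unshipped orders. The filter is applied
# after items are read, so the scan still consumes capacity for every item.
resp = table.scan(FilterExpression=Attr("status").ne("shipped"))
items = resp["Items"]

# Scans are paginated; follow LastEvaluatedKey to read the remaining items.
while "LastEvaluatedKey" in resp:
    resp = table.scan(
        FilterExpression=Attr("status").ne("shipped"),
        ExclusiveStartKey=resp["LastEvaluatedKey"],
    )
    items.extend(resp["Items"])
```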
Monitoring your serverless application is key for performance, managing costs, and debugging, particularly as your application scales. You can use tools like Dashbird to easily keep track of your DynamoDB tables, Lambda functions, SQS queues, and more. Dashbird also cross-references your architecture against industry best practices to suggest performance improvements and cost reduction opportunities, and to anticipate potential points of failure. Get your free account now and see the performance of your entire serverless environment at a glance.