6 Common DynamoDB Issues

DynamoDB, the primary NoSQL database service offered by AWS, is a versatile tool. It’s fast, scales without much effort, and, best of all, it’s billed on-demand. These qualities make DynamoDB a popular first choice when a datastore is needed for a new project.

But as with all technology, it’s not all roses. If you’re coming from years of working with relational databases, you can feel a little lost: your SQL and normalization know-how won’t get you very far here.

Most developers run into the same handful of issues when starting their NoSQL journey with DynamoDB. This article should clear a few of them up.

1. Query Condition Missed Key Schema Element

You’re trying to run a query on a field that wasn’t indexed for querying. Add a secondary index for the field to your table to fix it.

DynamoDB is a document store, which means each table can hold documents with an arbitrary number of fields, in contrast to relational databases, where every table has a fixed set of columns. This often leads developers to think DynamoDB is a completely flexible database, which is not 100% true.

To efficiently query a table, you have to create the proper indexes right from the beginning or add global secondary indexes later. Otherwise, you will need to use the scan command to get your data, which will check every item in your table. The query command uses the indexes you created and, in turn, is much quicker than using scan.
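
Here’s a rough sketch of that difference with the AWS SDK for JavaScript, assuming a hypothetical orders table that has a global secondary index called customerId-index on the customerId field:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// inside an async function

// Fast: query uses the GSI and only reads the matching items
const queryResult = await docClient.query({
  TableName: 'orders',               // hypothetical table name
  IndexName: 'customerId-index',     // GSI created for this access pattern
  KeyConditionExpression: 'customerId = :c',
  ExpressionAttributeValues: { ':c': 'customer-123' },
}).promise();

// Slow: scan reads every item in the table and filters afterwards
const scanResult = await docClient.scan({
  TableName: 'orders',
  FilterExpression: 'customerId = :c',
  ExpressionAttributeValues: { ':c': 'customer-123' },
}).promise();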

2. Resource Not Found Exception

The AWS SDK can’t find your table. The most common reasons for this are a typo in the table name, using the wrong account credentials, or operating in the wrong region.

If you’re using CloudFormation or Terraform, it can also be that these tools are still deploying your table and you’re trying to access it before the deployment has finished.

Companies that use AWS across many different teams tend to have multiple AWS accounts, and if they operate globally, chances are good they deploy tables close to their customers. If you have many different combinations of accounts and regions, things can get confusing.

That’s why you should always double-check your AWS SDK configurations and credentials. 
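
One way to make the intended target explicit is to set the region (and, if needed, the credentials profile) when constructing the client instead of relying on whatever defaults happen to be in your environment. A minimal sketch, assuming a hypothetical orders table in eu-west-1 and a profile called my-team-prod:

const AWS = require('aws-sdk');

// Be explicit about which account and region the table is supposed to live in
const credentials = new AWS.SharedIniFileCredentials({ profile: 'my-team-prod' });
const docClient = new AWS.DynamoDB.DocumentClient({
  region: 'eu-west-1',
  credentials,
});

// inside an async function; table names are case-sensitive, so check the spelling too
const result = await docClient.get({
  TableName: 'orders',
  Key: { id: 'order-1' },
}).promise();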

3. Cannot do Operations on a Non-Existent Table

You have errors in your credentials for DynamoDB Local or the AWS SDK. Use the same account and region in your credentials, or configure DynamoDB Local with the -sharedDb flag so that every credentials and region combination sees the same tables.

DynamoDB Local is a tool that lets you run DynamoDB on your local machine for development and testing. It has to be configured just like the regular DynamoDB service so that your local setup behaves like your production deployment.

This makes DynamoDB Local susceptible to the same errors we saw above in point two, but it also means you catch those errors at development time.
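
A minimal sketch of that setup: start DynamoDB Local with the -sharedDb flag so every credentials and region combination sees the same tables, and point the SDK at the local endpoint:

// Start DynamoDB Local with one shared database file, for example:
//   java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb

const AWS = require('aws-sdk');

// Point the SDK at the local endpoint; region and credentials still need a value,
// but with -sharedDb they no longer have to match across tools
const docClient = new AWS.DynamoDB.DocumentClient({
  endpoint: 'http://localhost:8000',
  region: 'eu-west-1',
});

// inside an async function
const result = await docClient.scan({ TableName: 'orders' }).promise();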

4. The Provided Key Element Does Not Match the Schema

Your table configuration for hash and sort keys is different from the one you use in GetItem. Make sure you use the same key config at definition and access time to fix this.

Again, this is about the difference between the perceived and actual flexibility of DynamoDB. You need to correctly set up your key schema for a table when creating it and later keep on using the same keys when loading data from the table.
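
For example, if a hypothetical orders table was created with pk as the partition key and sk as the sort key, a GetItem call has to supply exactly those two attributes, with the same names and types:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// inside an async function
const result = await docClient.get({
  TableName: 'orders',
  Key: {
    pk: 'customer-123',  // must match the partition key name and type from the table definition
    sk: 'order-2021-01', // must match the sort key; omitting it or adding extra attributes triggers the error
  },
}).promise();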

Check out our knowledge base if you want to learn more about setting up DynamoDB key schemas correctly.

5. User is Not Authorized to Perform: DynamoDB PutItem on Resource

This is an IAM policy issue: the user doesn’t have permission to call the PutItem command. To fix it, either use the managed policy AmazonDynamoDBFullAccess or, if you don’t want to give the user access to all commands, explicitly add the PutItem action to their IAM role.

AWS tries to keep you safe from security problems; that’s why services deployed on AWS are locked down as tightly as possible by default. The downside is that granting access to services deployed in a production environment can become a boilerplate-heavy task.

To get around this, you can try the AWS CDK. It’s an infrastructure as code tool built around CloudFormation that comes with constructs for all AWS services. These constructs help with all that permission management that is easy to get wrong when using AWS.
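
As a sketch of how that looks, assuming a CDK stack in JavaScript that defines both the table and a Lambda function (myLambdaFunction) that needs to write to it, a single grant call attaches the PutItem permission for you:

const { aws_dynamodb: dynamodb } = require('aws-cdk-lib');

// inside a CDK stack's constructor
const table = new dynamodb.Table(this, 'OrdersTable', {
  partitionKey: { name: 'pk', type: dynamodb.AttributeType.STRING },
});

// Grants PutItem, UpdateItem, DeleteItem, and the other write actions to the
// function's role, so you don't have to hand-write the IAM policy statement
table.grantWriteData(myLambdaFunction); // myLambdaFunction is a hypothetical function defined elsewhere in the stack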

6. Expected params.Item to be a Structure

You’re using the low-level DynamoDB client of the AWS SDK, which expects you to specify runtime types for your fields. Instead of using pid: '<YOUR_PID>', you have to use an object that includes the type: pid: {S: '<YOUR_PID>'}.

The AWS SDK for JavaScript includes two clients: a low-level client that requires you to set all types manually, and a higher-level client, called the DynamoDB Document Client, that infers the types from the values you pass as arguments.

I would recommend using the DynamoDB Document Client if you don’t have good reasons to use the low-level one instead.
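
Here is a rough side-by-side sketch of the same write with both clients, assuming a hypothetical users table:

const AWS = require('aws-sdk');

// inside an async function

// Low-level client: every attribute needs an explicit type descriptor (S, N, BOOL, ...)
const lowLevel = new AWS.DynamoDB();
await lowLevel.putItem({
  TableName: 'users',
  Item: {
    pid: { S: 'user-123' },
    age: { N: '42' },   // numbers are passed as strings with the N type
  },
}).promise();

// Document Client: plain JavaScript values, the types are inferred for you
const docClient = new AWS.DynamoDB.DocumentClient();
await docClient.put({
  TableName: 'users',
  Item: {
    pid: 'user-123',
    age: 42,
  },
}).promise();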

Conclusion

Using DynamoDB for your projects can save you from many of the issues that come with relational databases, most importantly the costs associated with them. It scales well in terms of developer experience, from single developers to big teams, and in terms of performance, from single deployments to replicas around the world.

But using DynamoDB, or NoSQL databases in general, requires you to relearn much of what you know from relational databases. If you jump in just following the hype, without understanding what you’re getting into, you might find yourself at a dead end later because DynamoDB won’t deliver on the promises you read about.

Let Dashbird Help

If you’re new to serverless or DynamoDB, Dashbird can help you get up to speed. Once you add Dashbird to your AWS account, it automatically monitors all deployed DynamoDB tables, so no problem that occurs goes unnoticed.

And best of all, Dashbird comes with AWS Well-Architected best practices out of the box. This way, you can make sure you’re using DynamoDB in a safe and performant way without learning everything about it up-front. 
You can try Dashbird now; no credit card is required. The first one million Lambda invocations are on us! See our product tour here.

