10 Simple AWS Hacks That Will Make You Super Productive

Useful AWS hacks and tricks that will save you time and money.

If you work a lot with AWS, you have probably realized that virtually everything on AWS is an API call, which means everything can be automated. This article discusses several tricks that will save you time when performing everyday tasks in the AWS cloud. Make sure to read till the end. The most interesting one is listed at the very end 😉

1. Use the --dryrun flag in the AWS CLI before performing any task on production resources

If you ever have to perform some S3 migration tasks or want to make changes to other existing AWS resources via the command line, it’s useful to leverage the --dryrun flag to ensure that your CLI operation does what you expect. For instance, before uploading a bunch of CSV files to S3, we may want to check that our command only operates on CSV files and doesn’t move other files to S3:

aws s3 cp /path/to/local/files/ s3://demo-datasets/path/ --recursive --exclude "*" --include "*.csv" --dryrun

Thanks to the --dryrun flag, the command-line output shows which files would be copied to S3, which helps validate that the command works as expected.

2. Use --dryrun to check if two S3 buckets are in sync

Imagine a scenario where you have to migrate hundreds of files from a region in Ireland to one in Frankfurt. After moving the objects to a new S3 bucket, you may want to check whether all files have been properly copied over to the new region with no errors. Using the aws s3 sync command, you can check whether two S3 buckets (or specific paths) are in sync:

aws s3 sync s3://bucket1 s3://bucket2 --dryrun

If there are any differences, the --dryrun flag shows them in the console without transferring anything. Using this trick, you can easily determine the difference between the contents of two buckets without laboriously comparing the files yourself. And you don’t even have to download those files to perform the check.
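Because --dryrun prints one line per pending transfer, this check is also easy to script. A minimal sketch wrapping it in a reusable shell function (assumes the AWS CLI is installed and configured; bucket names are placeholders):

```shell
# Returns success (exit 0) if the two S3 locations are already in sync.
buckets_in_sync() {
  # `aws s3 sync --dryrun` prints one line per object it *would* transfer,
  # so empty output means the two locations already match.
  [ -z "$(aws s3 sync "$1" "$2" --dryrun)" ]
}

# Usage:
#   buckets_in_sync s3://bucket1 s3://bucket2 && echo "in sync"
```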

3. Use --dryrun to check if an S3 bucket and a local folder are in sync

Similar to the previous scenario, we can use the same command when comparing files from S3 to those from a local computer or a remote host.

aws s3 sync /Users/…/path/ s3://mybucket --dryrun

This command is particularly useful if you have to sync only specific file types and folders with S3.

4. Quickly download many files from S3

The last S3 command on our list is aws s3 cp, which, combined with the --recursive flag, downloads all files from a specified S3 path:

aws s3 cp s3://mybucket /Users/anna/path/ --recursive

In the same way, we can upload a bunch of files while specifying the S3 storage class that is most suitable for those objects:

aws s3 cp /Users/…/path/ s3://mybucket --recursive --storage-class STANDARD

5. Use the --profile flag in the AWS CLI to manage multiple accounts

Imagine that you have different IAM users for the dev and prod environments. Switching between the two can be painful unless you use the --profile flag. If you have configured your CLI, you should find a text file with your credentials at:

a) ~/.aws/credentials on Linux and Mac 

b) %USERPROFILE%\.aws\credentials on Windows. 

A useful setup is to set your dev credentials as your default profile; prod then has to be specified explicitly. Here is what the credentials file could look like:
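A minimal credentials file with a default (dev) profile and a separate prod profile might look like this (the key values below are placeholders, not real credentials):

```ini
[default]
aws_access_key_id = <dev-access-key-id>
aws_secret_access_key = <dev-secret-access-key>

[prod]
aws_access_key_id = <prod-access-key-id>
aws_secret_access_key = <prod-secret-access-key>
```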
You can create any new profile by using the command:

aws configure --profile yourProfileName

By running the command below, we will get results for the default (dev) profile:

aws s3 ls

In contrast, when adding --profile prod, the result will show only production resources:

aws s3 ls --profile prod
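Alternatively, instead of repeating --profile on every command, you can export the AWS_PROFILE environment variable once for the current shell session (standard AWS CLI behavior):

```shell
# Make all subsequent CLI calls in this shell use the prod profile
export AWS_PROFILE=prod

# From now on, plain commands target prod, e.g.:
#   aws s3 ls
```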

6. Pin your most commonly used services

If you use only a few services on a regular basis, you can mark them as favorites. Given the ever-growing number of AWS services, you may find this useful.

The Recently visited section (shown below favorites) provides a similar overview.

However, since AWS implemented the excellent search bar, you may consider the favorites bar a bit redundant.

7. Billing Federation

After creating your AWS account, you are signed in as the account owner, i.e., the root user. AWS recommends never using this account for everyday activities. Instead, we should create our first IAM user and use it for working with AWS, signing in as the root user only to perform account-management tasks such as changing account or payment details.

If you follow this best practice, you may still end up signing in as a root user every now and then to check your AWS bill because, by default, billing information is only available to a root user. However, there is a better way. You can grant the Billing and Cost Management console access to your IAM user. Once you’ve done this, you will be able to access the Billing console from your non-root user. Here is how you can “federate” billing access to an IAM user.

First, you have to Activate IAM access in the account settings:

Then, you can attach a Billing policy to your IAM user:

You can choose between full access or read-only access:

For a more detailed description of the billing federation, see AWS docs.

8. Monitoring your resources

If you work with AWS Lambda, SQS queues, SNS topics, DynamoDB tables, Kinesis Data Streams, AWS Step Functions, or ECS services, you may have realized that monitoring of serverless resources can be challenging. While CloudWatch centralizes logging into a single service, switching between tens of log groups to analyze the performance of a single application can be difficult and time-consuming. The same is true about error notifications. To improve your development experience and save time, you can start using Dashbird.

Try Dashbird for free

After a two-minute setup, you gain access to an observability platform providing you with:

  • visualizations about job execution, duration, cold starts, retries, and incidents,
  • beautifully formatted logs (rather than pure JSON),
  • project views allowing you to group resources by projects rather than by AWS service,
  • highly customizable alerts,
  • a score of how well your serverless resources adhere to the AWS Well-Architected Framework.
Image courtesy of Dashbird.io

9. Learn CloudFormation or Terraform

Infrastructure as Code has gained a lot of momentum in recent years, and for very good reasons. Once you have learned how to programmatically deploy or modify your resources, you can become much more productive with AWS. For instance, imagine that you built all your resources for a development environment with a CloudFormation template or a Terraform configuration file. To build the same resources for production, you only need to change specific values in the declarative file, and it will replicate the setup for the new environment. Additionally, you can apply the same programmatically defined infrastructure setup to a new project or to recovery scenarios.
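For illustration, here is a minimal Terraform sketch of that idea (the variable and resource names are hypothetical): a single variable switches the whole setup between environments.

```hcl
# One variable parameterizes the environment for every resource below
variable "environment" {
  type    = string
  default = "dev"
}

resource "aws_s3_bucket" "data" {
  bucket = "my-project-data-${var.environment}"
}
```

Running `terraform apply -var="environment=prod"` would then replicate the same setup for production without touching the dev resources.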

10. Use SQL in AWS Config to query metadata about your AWS resources

AWS Config allows you to view all your cloud resources at a glance, track how their configuration changes over time, and establish configuration rules that automatically check whether your services match the desired configuration settings. Any violation of the rules you defined will trigger an alert informing you about non-compliant resources.

But AWS Config is not only a great resource for enforcing compliance. It also gives you an overview of all resources in your AWS account. One of the most impressive features I’ve recently encountered on AWS is the SQL query editor within AWS Config. It allows you to easily group your resources by service or filter for resources from a specific region.

For instance, in the query below (also available in this GitHub gist), we retrieve all resources with their corresponding ID, region, time of creation, tags, and current state, while filtering out all network and security group resources.
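Since the query itself appears only as a screenshot, here is a rough sketch of what it could look like in AWS Config’s query syntax (a SELECT without a FROM clause; the exact field names and resource-type filters are assumptions and may need adjusting):

```sql
SELECT
  resourceId,
  resourceType,
  awsRegion,
  resourceCreationTime,
  tags,
  configuration.state
WHERE
  resourceType NOT LIKE '%Network%'
  AND resourceType NOT LIKE '%SecurityGroup%'
```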

AWS Config query — image by the author

Here is how it looks in the AWS Management Console:

AWS Config query — image by the author (2)

The output shows all resources that match the query conditions:

AWS Config query results  — image by the author

One of the coolest things about this SQL editor is that you can query for resources with a specific tag. Typically, we use tags as key-value pairs to associate resources with a specific project or organizational unit, which is beneficial for cost allocation. But tags can also be leveraged to ensure that you terminate all resources once a specific project is finished. As an example, here is how we can find all resources associated with the tag “medium”:

AWS Config: query resources with a specific tag — image by the author
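A sketch of such a tag query (assuming “medium” is the tag key; AWS Config’s query syntax also supports matching on the tag value or the combined key=value pair):

```sql
SELECT
  resourceId,
  resourceType,
  awsRegion
WHERE
  tags.key = 'medium'
```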

For more SQL examples, check out the AWS Config documentation.


In this article, we looked at ten useful tricks to save time when using AWS resources. The --dryrun flag is useful to test any CLI operation before performing it on live resources. Similarly, the --profile flag can be beneficial if we regularly need to switch between several AWS accounts (for instance, dev and prod). Billing federation allows you to see your AWS costs even as a non-root user.

Dashbird allows you to visualize, monitor, and observe the state of your serverless resources with just a one-off two-minute setup. AWS CloudFormation and Terraform equip you with building blocks to automate provisioning and modification of resources by using Infrastructure as Code. Finally, AWS Config allows you to query all your AWS resources and their state using simple SQL queries so that you can efficiently keep track of your resources.

Further reading:

AWS Lambda metrics that you should be monitoring

4 tips for tuning AWS Lambda for production

6 quick ways to cut cost on your AWS Lambda

Grouping AWS Lambda functions into project views

How to secure your data with Serverless access points
