From Amazon Aurora DSQL pricing:
Aurora DSQL measures all request-based activity, such as query processing, reads, and writes, using a single normalized billing unit called Distributed Processing Unit (DPU). Storage is billed based on the total size of your database, measured in GB-month. Aurora DSQL ensures your data is available and strongly consistent across three Availability Zones in an AWS Region. You are only charged for one logical copy of your data.
| Resource | Price |
| --- | --- |
| DPU | $8.00 per 1M units |
| Storage | $0.33 per GB-month |
Let’s try it out!
Now, open the CloudWatch Metrics page (here’s a link in us-west-2) and click on “AuroraDSQL” and then “ClusterId”. Find the cluster you just created, then click “Add to search”. It should look something like this:

If you don’t see your cluster yet, wait a minute and try again. Metrics should start appearing as soon as your cluster transitions to Active.
The first metric you should see is ClusterStorageSize. If you graph it, you should see nothing. That’s because we just made a new cluster, and of course new clusters are empty!
As a tip: add the LAST label to this metric as an easy way to see the current size of the cluster.

Now, let’s connect:
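The original connection command didn’t survive into this page, so here’s a sketch of how I’d connect with `psql`. The endpoint format and the `aws dsql generate-db-connect-admin-auth-token` CLI command are assumptions based on how DSQL’s IAM authentication works; substitute your own cluster ID and region:

```shell
# Hypothetical endpoint; fill in your own cluster ID.
export PGHOST="<cluster-id>.dsql.us-west-2.on.aws"

# DSQL uses IAM auth: generate a short-lived token and use it as the password.
export PGPASSWORD="$(aws dsql generate-db-connect-admin-auth-token \
  --hostname "$PGHOST" --region us-west-2)"

PGSSLMODE=require psql --host "$PGHOST" --username admin --dbname postgres
```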
Now, let’s create a test table:
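The original `CREATE TABLE` statement isn’t shown, so here’s a minimal stand-in (the table and column names are my own, not from the original post):

```sql
-- Hypothetical test table; any small table will do.
-- A UUID primary key avoids sequences, which a distributed
-- database may not support.
CREATE TABLE test_rows (
    id    UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    value TEXT
);
```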
Congratulations! You’ve just inserted some data into the system. Creating a table inserts rows into the catalog, and therefore we should expect our cluster size to increase.
Also, we’ve just spent some DPUs. Let’s wait a minute, then hit refresh on that CloudWatch search. Keep the size metric, then also select all the DPU metrics:

I set my graph up so that size was on the right showing the LAST datapoint as the label, while all the DPU metrics were on the left using AVG as the label. Note that I’m using Sum as the statistic for the DPU metrics. It looks something like this:

So, creating a table consumed some DPUs:
| Metric | DPUs |
| --- | --- |
| WriteDPU | 0.53 |
| ReadDPU | 0.039 |
| ComputeDPU | 0.38 |
| TotalDPU | 0.95 |
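At $8.00 per million DPUs, that table creation cost a tiny fraction of a cent. A quick sanity check on the numbers above:

```python
# Price from the pricing table: $8.00 per 1M DPUs.
PRICE_PER_DPU = 8.00 / 1_000_000

dpus = {"WriteDPU": 0.53, "ReadDPU": 0.039, "ComputeDPU": 0.38}
total_dpu = sum(dpus.values())    # ~0.95, matching TotalDPU above
cost = total_dpu * PRICE_PER_DPU  # dollars

print(f"{total_dpu:.3f} DPUs -> ${cost:.8f}")  # well under a thousandth of a cent
```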
For those of you familiar with other AWS services that price per-request, this should feel somewhat familiar. For example, DynamoDB On-Demand (pricing link) charges for reads and writes:
| Request type | Description | Billing unit |
| --- | --- | --- |
| Write request | Writes data to your table | Write request unit |
| Read request | Reads data from your table | Read request unit |
Like DynamoDB, DSQL bills for reads and writes. Unlike DynamoDB, DSQL also bills for compute. ComputeDPU is how DSQL accounts for running SQL functions, joins, aggregations, and so on.
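For illustration, an aggregation is the kind of query where you’d expect ComputeDPU to show up (the table name here is a hypothetical stand-in, since the original table definition isn’t shown):

```sql
-- Scanning rows is read work; computing the count and average
-- is the kind of work accounted for as ComputeDPU.
SELECT count(*), avg(length(value)) FROM test_rows;
```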
Here’s a little script to help pull usage metrics for a cluster for the current month: fetch-dpus.sh. Feel free to adapt it to your needs. If I run it, I’ll see the same values as in the screenshot:
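If you’d rather do this from Python, the same data can be pulled with boto3. The namespace and dimension names below are assumptions based on the CloudWatch search we set up earlier; the summation helper is kept separate from the AWS call so it can be exercised on its own:

```python
from datetime import datetime, timezone

def sum_datapoints(datapoints):
    """Sum the 'Sum' statistic across a list of CloudWatch datapoints."""
    return sum(dp["Sum"] for dp in datapoints)

def month_to_date_dpus(cluster_id, metric="TotalDPU", region="us-west-2"):
    """Fetch this month's DPU usage for one cluster.

    Assumes the metrics live in an 'AWS/AuroraDSQL' namespace with a
    'ClusterId' dimension, matching the console search above.
    """
    import boto3  # imported here so sum_datapoints stays dependency-free

    now = datetime.now(timezone.utc)
    start = now.replace(day=1, hour=0, minute=0, second=0, microsecond=0)
    resp = boto3.client("cloudwatch", region_name=region).get_metric_statistics(
        Namespace="AWS/AuroraDSQL",
        MetricName=metric,
        Dimensions=[{"Name": "ClusterId", "Value": cluster_id}],
        StartTime=start,
        EndTime=now,
        Period=3600,            # hourly buckets keep us under the datapoint cap
        Statistics=["Sum"],     # Sum, matching the graph statistic above
    )
    return sum_datapoints(resp["Datapoints"])
```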
Ok! Time to insert some data! The following query inserts 1000 tiny rows in a single transaction:
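The query itself didn’t make it into this page; a `generate_series` version matching the description (1,000 tiny rows, one statement, so one implicit transaction) might look like this, again using my hypothetical `test_rows` table as a stand-in:

```sql
-- 1000 small rows in a single implicit transaction.
INSERT INTO test_rows (value)
SELECT 'row-' || n FROM generate_series(1, 1000) AS n;
```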
Let’s wait a minute for the metrics to come through…
Looks like we need to insert some more rows before we get that DPU cost above $0!
We’re going to be running this query a bunch of times, so I created a tool to help. You can find it on GitHub, and you run it like this:
This will run --batches 9 more transactions, each inserting --rows 1000, all on a single connection (--concurrency). Because 9 × 1,000 = 9,000, running with these parameters should get us to 10,000 rows in total. Once it’s done, let’s check what happened.
We’re at $0.0005 on DPUs and about half that on storage. It’s a good thing that I built a tool, because we’re going to need to put in about 1,250 times as many rows:
That took about 13 minutes for me, averaging about 1.3 MiB/sec.
Cool. Twelve and a half million rows. What did that cost?
Or, on the console:

Would you look at that! $0.99 — so close. Of course, I’m now storing data in this cluster at $0.33 per GB-month. If I wait about a day, I’ll be charged another penny.
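That penny checks out against the load numbers above: 13 minutes at about 1.3 MiB/s is roughly 1 GB, and at $0.33 per GB-month that works out to about a cent a day. Back-of-envelope (treating GiB ≈ GB and a month as 30 days):

```python
MIB = 1024 * 1024

loaded_bytes = 1.3 * MIB * 13 * 60          # ~13 minutes at ~1.3 MiB/s
loaded_gb = loaded_bytes / 1e9              # roughly 1 GB
daily_storage_cost = loaded_gb * 0.33 / 30  # $/day at $0.33 per GB-month

print(f"{loaded_gb:.2f} GB -> ${daily_storage_cost:.4f}/day")  # about a penny
```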
Too bad. I’ll have to spend a dollar another day!