Modern applications require databases that can handle myriad tasks while ensuring data remains consistent and accurate. This is where Amazon’s DynamoDB enters the fray. DynamoDB offers lightning-fast performance and scalability while ensuring data consistency. Among its many capabilities, DynamoDB Transactions is key to data integrity in multi-item transactions. This blog post delves into the ins and outs of these transactions, examining their advantages and exploring the technicalities behind their implementation.

DynamoDB Transactions: What and Why?

DynamoDB Transactions offer a safe, efficient means of executing multi-item transactions while ensuring the desired data consistency level is upheld. Taking advantage of these transactions confers several benefits, including:

  1. Maintaining Data Integrity: Transactions meticulously ensure that all involved operations succeed or fail as a single, coherent unit. This prevents situations where only some operations are completed, thereby eliminating data anomalies and inconsistencies.

  2. Enabling Concurrency Control: Transactions can handle multiple concurrent requests while correctly updating the involved data.

  3. Simplifying Application Logic: Without transactions, developers must painstakingly program intricate error-handling mechanisms to account for potential failures or conflicts. DynamoDB Transactions effectively achieve this automatically, allowing developers to focus on their core application logic. If someone is curious, they can explore the repository to understand how transactions were accomplished prior to their implementation in DynamoDB.

Now, let’s see how we can use DynamoDB Transactions.

DynamoDB Transactions - How?

DynamoDB Transactions are nonconversational in nature: the entire transaction is submitted in a single request, rather than being opened, operated on, and committed interactively. Two key elements form the crux of DynamoDB Transactions – the TransactWriteItems and TransactGetItems APIs.

TransactWriteItems API

The TransactWriteItems API enables developers to perform batch write operations, including Put, Update, and Delete actions. This API ensures that all or none of these operations are executed, maintaining data consistency throughout the transaction.

Suppose we are building a simple banking application, which has an account table in DynamoDB to store account holder information and a transactions table to store money transfers across the accounts. The table schema might look like this:


Table: Account

account_id (pk) | account_holder_name | balance
1001 | A | 1000
1002 | B | 700


Table: Transactions

transaction_id (pk) | from_account_id | to_account_id | amount | status
1 | 1001 | 1002 | 250 | COMPLETE
2 | 1002 | 1001 | 100 | PENDING

Now, let’s say we want to transfer funds from account A to account B. This involves three operations:

  1. Check whether the sender’s current balance is higher than the amount A is trying to send
  2. Decrease the balance of the account sending the money
  3. Increase the balance of the account receiving the money

We need to ensure that if we subtract an amount from account A, we add the same amount to account B. This operation must be atomic; if any part of the process fails, the entire transaction should be rolled back.

DynamoDB’s TransactWriteItems lets us do exactly this. It groups multiple changes (puts, updates, and deletes) into a single request, so either all the changes happen together, or none of them do.
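The three steps above can be sketched as a single TransactWriteItems payload. This is a minimal illustration, assuming the Account table schema shown earlier (the helper name is mine); the returned list would be passed as TransactItems to boto3's transact_write_items call.

```python
# Illustrative helper: builds the TransactItems payload for an atomic
# A -> B transfer. The balance check (step 1) is expressed as a condition
# on the debit (step 2), so DynamoDB cancels the whole transaction if A's
# balance cannot cover the amount.
def build_transfer_items(from_account_id, to_account_id, amount):
    amt = {'N': str(amount)}
    return [
        {'Update': {
            'TableName': 'Account',
            'Key': {'account_id': {'S': from_account_id}},
            'UpdateExpression': 'SET balance = balance - :amt',
            'ConditionExpression': 'balance >= :amt',  # step 1: sufficient funds
            'ExpressionAttributeValues': {':amt': amt}
        }},
        {'Update': {
            'TableName': 'Account',
            'Key': {'account_id': {'S': to_account_id}},
            'UpdateExpression': 'SET balance = balance + :amt',  # step 3: credit
            'ExpressionAttributeValues': {':amt': amt}
        }}
    ]

items = build_transfer_items('1001', '1002', 250)
print(len(items))  # 2 actions, applied all-or-nothing
```

Because the balance check is a ConditionExpression on the debit action, a failed check cancels both updates; nothing is ever half-applied.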

TransactGetItems API

This API allows developers to execute batch read operations – i.e., GetItem – on up to 100 items in a single transaction. The TransactGetItems API returns a consistent and isolated snapshot of all involved items.

In the above example, let’s say we want to answer this question: what is the latest balance of both account A and account B, and what is the status of the last transaction made between these accounts?

To answer this, we would need to perform two read operations on the Account table to get the balance fields and one read operation on the Transactions table to get the last transaction’s status. This is where DynamoDB Transactions come in handy. With a transactional read operation (TransactGetItems), we can read all the necessary data in a single, consistent, and atomic operation. This guarantees that the data we read is up to date and that no other write transaction was interleaved between the reads.
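A sketch of such a read, assuming the Account and Transactions tables above (the helper name is mine); the returned list would be passed as TransactItems to boto3's transact_get_items call.

```python
# Illustrative helper: one TransactGetItems payload that snapshots both
# balances and the latest transaction's status in a single isolated read.
def build_snapshot_gets(account_ids, last_transaction_id):
    gets = [
        {'Get': {
            'TableName': 'Account',
            'Key': {'account_id': {'S': acc_id}},
            'ProjectionExpression': 'balance'
        }}
        for acc_id in account_ids
    ]
    # 'status' is a DynamoDB reserved word, so alias it in the projection
    gets.append({'Get': {
        'TableName': 'Transactions',
        'Key': {'transaction_id': {'S': last_transaction_id}},
        'ProjectionExpression': '#s',
        'ExpressionAttributeNames': {'#s': 'status'}
    }})
    return gets

print(len(build_snapshot_gets(['1001', '1002'], '2')))  # 3 reads in one snapshot
```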

Isolation Levels For DynamoDB Transactions

In this section, we are going to delve into the isolation levels within DynamoDB transactions. Isolation levels are pivotal for understanding how transactions interact with each other and with other operations. By comprehending these isolation levels, we can grasp the caveats and the extent of data integrity offered by transactions.


Serializable Isolation

In serializable isolation, the system ensures that if multiple operations are executed concurrently, the outcome is the same as if the operations had been executed serially, one after the other, without any overlap.

Consider a scenario where GetItem requests for item A and item B are executed concurrently with a TransactWriteItems request that modifies both item A and item B. There are four possible outcomes:

  1. Both GetItem requests are executed before the TransactWriteItems request.

    • The GetItem operations read the original state of items A and B.
  2. Both GetItem requests are executed after the TransactWriteItems request.

    • The GetItem operations read the modified state of items A and B.
  3. GetItem request for item A is executed before the TransactWriteItems request, and GetItem for item B is executed after it.

    • GetItem for item A reads the original state of item A.
    • GetItem for item B reads the modified state of item B.
  4. GetItem request for item B is executed before the TransactWriteItems request, and GetItem for item A is executed after it.

    • GetItem for item B reads the original state of item B.
    • GetItem for item A reads the modified state of item A.

There are no other possible combinations or orderings of these operations that would be considered valid under serializable isolation because any other combination or ordering would not be equivalent to a serial execution, and would therefore not meet the serializability criterion.


Read-Committed Isolation

In read-committed isolation, read operations return only item values that have been committed. This ensures that a read never observes an item state from a transactional write that did not succeed. However, it does not prevent the item from being modified immediately after the read operation.

Key Characteristics of Isolation Levels

Operations involved | Isolation level | Details
Transactional operation and standard write (Put, Update, Delete) | Serializable | Ensures consistent order
Transactional operation and GetItem | Serializable | Ensures consistent order
TransactWriteItems and TransactGetItems | Serializable | Ensures consistent order
Transactional operation and BatchWriteItem as a unit | Not serializable | BatchWriteItem as a whole doesn’t follow serializable isolation
Transactional operation and individual writes in BatchWriteItem | Serializable | Individual write operations in BatchWriteItem are serializable with respect to the transaction
Transactional operation and BatchGetItem as a unit | Read-committed | BatchGetItem as a whole doesn’t follow serializable isolation
Transactional operation and individual reads in BatchGetItem | Serializable | Individual read operations in BatchGetItem are serializable with respect to the transaction
Transactional operation and Query or Scan | Read-committed | Ensures reading committed values

Note: If you need serializable isolation for multiple GetItem requests, it’s advisable to use TransactGetItems.

Transactional Conflicts

Transactional conflicts may occur when multiple operations try to access the same item in a database concurrently, and at least one of these operations is a mutation (update or delete).

Example 1: Conflicting Write Operations

Alice, an admin, is tasked with managing the inventory. She is currently updating the inventory to increase the number of items in the warehouse. At the same time, Bob, a customer, is purchasing an item from the platform.

Here, a conflict can arise:

  • Alice’s operation attempts to update the quantity of the item in the warehouse.
  • Bob’s operation tries to decrease the same item’s quantity, as he is buying it.

If both operations try to update the inventory simultaneously, they might not account for each other’s modifications. This scenario is a conflicting write operation.

Conflict Response

In such a case, DynamoDB’s transactional capabilities can come into play to avoid inconsistencies. If Alice’s update and Bob’s purchase are both part of a transaction, one of the following can happen:

  • Alice’s operation gets completed first, and then Bob’s operation is executed. In this case, Bob’s operation will see the updated inventory and proceed.
  • Bob’s operation gets completed first, and then Alice’s operation is executed, updating the inventory after the purchase.

If both conflict with each other, the second operation to be executed might be rejected, and DynamoDB would throw a TransactionConflictException.

Example 2: Conflicting Read and Write Operations

In another scenario, let’s say Bob is now browsing the platform and has multiple items in his cart. Before proceeding to checkout, the system reads the inventory status of the items in his cart to verify their availability. Meanwhile, Alice is processing an order that involves one of the items in Bob’s cart.

Here, a conflict can arise:

  • Bob’s read operation attempts to verify the availability of items.
  • Alice’s write operation is trying to update the quantity of one of those items due to another customer’s purchase.

Conflict Response

If Bob’s verification process and Alice’s order processing are part of transactions, DynamoDB ensures that Bob’s read operation only sees the committed state of the inventory. This is an example of read-committed isolation. If Alice’s transaction completes before Bob’s read operation, Bob will see the updated inventory status.

However, if the read operation is performed using TransactGetItems, and it conflicts with Alice’s ongoing transaction, then DynamoDB might reject Bob’s read operation with a TransactionCanceledException.

Exploring Use Cases for DynamoDB Transactions

Maintaining uniqueness on multiple attributes

We maintain uniqueness in databases to ensure data integrity and accuracy. It prevents duplicate entries and conflicting information, making the data more reliable and efficient to query. This is crucial for tasks like identifying users, processing transactions, and generating reports, where accuracy is non-negotiable.

In DynamoDB, uniqueness is generally enforced by using a partition key or a combination of a partition key and sort key (together known as the primary key). DynamoDB doesn’t have built-in support for enforcing uniqueness constraints on non-key attributes.

Let’s consider a case where a bank wants to create a database for its customers using AWS DynamoDB. The bank wants to store account details such as account_id, account_holder_name, email, and balance. To maintain data integrity, the bank must ensure that both the account_id and email are unique across all records.

Can we achieve it with secondary indexes?


Let’s try to solve it with a secondary index. The bank creates an Account table with account_id as the primary key. Additionally, the bank creates a secondary index named AccountEmailIndex with email as the partition key.


Table: Account

account_id (pk) | account_holder_name | email | balance
1001 | Alice | [email protected] | 1000
1002 | Bob | [email protected] | 700

Secondary Index: AccountEmailIndex

email (pk) | account_id
[email protected] | 1001
[email protected] | 1002

Whenever the bank wants to create a new account, it must first query the AccountEmailIndex to check if an account with the given email already exists. If no account exists with the given email, the bank can then insert the new record into the Account table. The secondary index AccountEmailIndex will automatically get updated.

import boto3
from boto3.dynamodb.conditions import Key

# Initialize a DynamoDB resource
dynamodb = boto3.resource('dynamodb')

# Reference the Account table
account_table = dynamodb.Table('Account')

# Details of the new account to be inserted
new_account = {
    'account_id': '1003',
    'account_holder_name': 'Charlie',
    'email': '[email protected]',
    'balance': 1200
}

# Query the AccountEmailIndex to check if the email already exists
response = account_table.query(
    IndexName='AccountEmailIndex',
    KeyConditionExpression=Key('email').eq(new_account['email'])
)

# If no account with the email exists, insert the new account
if not response['Items']:
    try:
        account_table.put_item(Item=new_account)
        print("New account inserted successfully.")
    except Exception as e:
        print(f"Failed to insert new account: {e}")
else:
    print("An account with this email already exists.")

However, using a secondary index for the existence check and then performing the write is not concurrency safe, as there is a race condition between the query and the write operation.

Can we achieve it by using pk and sk as a composite key?

Let’s create a table with a composite primary key (pk and sk), where pk stands for partition key and sk for sort key.

Let’s redesign the schema:

pk | sk | account_id | account_holder_name | balance | email
ACCOUNT#1001 | [email protected] | 1001 | A | 1000 | [email protected]

Within the context of a single account, there is nothing preventing the registration of multiple emails because the account_id can be the same with different emails. Conversely, the same email can be associated with multiple accounts if the account_id is different each time. Essentially, the schema does not enforce uniqueness on individual attributes like email or account_id, but only on their combination. This approach can be used to enforce uniqueness on multiple attributes when they are combined through an ‘and’ condition.

Correct Approach - Uniqueness Records

Since DynamoDB does not inherently support unique constraints on arbitrary attributes, this solution makes use of key overloading and transactional writes to enforce the desired constraints. To accomplish this, we will create two items for each account in the Account table. The first item will store the account details (account_id, account_holder_name, balance, and email), while the second item will only store the email.

The first item will have the account_id as the partition key and the second item will have the email as the partition key.

Table: Account

pk | account_id | account_holder_name | balance | email
ACCOUNT#1001 | 1001 | A | 1000 | [email protected]
EMAIL#[email protected] | | | |

import boto3
import botocore.exceptions

# Initialize a DynamoDB client
dynamodb = boto3.client('dynamodb', region_name='us-west-2')

# account_id, account_holder_name, balance, and email hold the new
# account's details (defined elsewhere, e.g. from the request input)

# Use transactions to maintain uniqueness on account_id and email
try:
    response = dynamodb.transact_write_items(
        TransactItems=[
            {
                'Put': {
                    'TableName': 'Account',
                    'Item': {
                        'pk': {'S': f'ACCOUNT#{account_id}'},
                        'sk': {'S': 'ACCOUNT_INFO'},
                        'account_id': {'S': account_id},
                        'account_holder_name': {'S': account_holder_name},
                        'balance': {'N': str(balance)},
                        'email': {'S': email}
                    },
                    'ConditionExpression': 'attribute_not_exists(pk)'
                }
            },
            {
                'Put': {
                    'TableName': 'Account',
                    'Item': {
                        'pk': {'S': f'EMAIL#{email}'},
                        'sk': {'S': 'EMAIL_INFO'}
                    },
                    'ConditionExpression': 'attribute_not_exists(pk)'
                }
            }
        ]
    )
    print("Account created successfully")
except botocore.exceptions.ClientError as e:
    # Handle error (e.g. item with the same account_id or email already exists)
    print("Error creating account:", e)

This approach maintains uniqueness on both the account_id and email. If an account_id or email already exists, the transaction will fail, and neither item will be written to the table.
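The same pattern matters on deletion: the account item and its email marker must be removed in one transaction, or a leftover marker would block that email from ever being reused. A sketch under the same schema assumptions (the helper name is mine):

```python
# Illustrative helper: delete an account and its email uniqueness marker
# together. Removing only one of the two items would leave the table
# inconsistent, so both Deletes go into one TransactWriteItems call.
def build_account_delete(account_id, email):
    return [
        {'Delete': {
            'TableName': 'Account',
            'Key': {'pk': {'S': f'ACCOUNT#{account_id}'},
                    'sk': {'S': 'ACCOUNT_INFO'}}
        }},
        {'Delete': {
            'TableName': 'Account',
            'Key': {'pk': {'S': f'EMAIL#{email}'},
                    'sk': {'S': 'EMAIL_INFO'}}
        }}
    ]
```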

Atomic Aggregations

Let’s consider a use case where we have an e-commerce application and we need to keep track of the inventory of products. We want to make sure that if a product is purchased, the inventory count is updated, and if the product is already in the customer’s cart, it should not be added again (to prevent duplicates).


Table: Products

product_id (pk) | product_name | inventory_count
1 | Laptop | 50
2 | Headphones | 200
3 | Smartphone | 100


Table: Carts

cart_id (pk) | user_id (sk) | items
cart1 | user123 | [1, 3]
cart2 | user456 | [2]

When a user adds an item to their cart and completes the checkout, we need to perform two operations atomically:

  1. Deduct the inventory count of the product
  2. Add the product to the user’s cart, but only if it’s not already there.

import boto3

# Initialize a DynamoDB client
dynamodb = boto3.client('dynamodb', region_name='us-west-2')

# product_id, cart_id, and user_id identify the purchase (defined elsewhere)

try:
    # Start the transaction
    response = dynamodb.transact_write_items(
        TransactItems=[
            {
                # Update the inventory count
                'Update': {
                    'TableName': 'Products',
                    'Key': {
                        'product_id': {'S': product_id}
                    },
                    'UpdateExpression': "SET inventory_count = inventory_count - :val",
                    'ConditionExpression': "attribute_exists(product_id) AND inventory_count > :zero",
                    'ExpressionAttributeValues': {
                        ':val': {'N': '1'},
                        ':zero': {'N': '0'}
                    }
                }
            },
            {
                # Add item to the cart
                'Update': {
                    'TableName': 'Carts',
                    'Key': {
                        'cart_id': {'S': cart_id},
                        'user_id': {'S': user_id}
                    },
                    'UpdateExpression': "ADD items :item",
                    'ConditionExpression': "NOT contains(items, :item_val)",
                    'ExpressionAttributeValues': {
                        ':item': {'SS': [product_id]},
                        ':item_val': {'S': product_id}
                    }
                }
            }
        ]
    )
    # Transaction was successful
    print("Transaction succeeded:", response)
except Exception as e:
    # Handle exception (e.g. item already in cart, not enough inventory, etc.)
    print("Transaction failed:", str(e))

Can we achieve the same with DynamoDB Streams?

Using DynamoDB Streams can be an effective way to decouple the inventory management and cart management logic, and make the system more resilient and scalable. DynamoDB Streams captures a time-ordered sequence of item-level modifications in a DynamoDB table and stores this data for 24 hours.

Here is how you can use DynamoDB Streams to achieve the same goal:

  1. Enable DynamoDB Streams: Enable DynamoDB Streams for your Carts table. This will capture every change made to the items in the table (like adding an item to a cart).

  2. Create a Lambda Function: Create an AWS Lambda function that is triggered by the DynamoDB Stream. This function will process the stream records which contain the information about what items are added to the carts.

  3. Implement Inventory Deduction Logic: Inside the Lambda function, write logic to deduct the inventory count for each product added to a cart. Since this function is triggered by the stream, it will execute in near real-time whenever an item is added to a cart.

  4. Error Handling and Retries: Implement error handling in the Lambda function. For example, if the inventory count is not sufficient, you can take appropriate action like notifying the user. Also, Lambda functions inherently retry on failure, providing some level of resiliency.

Here’s an example using Python for the Lambda function:

import boto3

# Initialize a DynamoDB client
dynamodb = boto3.client('dynamodb')

def lambda_handler(event, context):
    # Process each record in the stream
    for record in event['Records']:
        # Check if the record is an INSERT (item added to cart)
        if record['eventName'] == 'INSERT':
            # Get the product_id and quantity from the record
            new_image = record['dynamodb']['NewImage']
            product_id = new_image['product_id']['S']
            quantity = new_image['quantity']['N']
            try:
                # Deduct inventory count
                response = dynamodb.update_item(
                    TableName='Products',
                    Key={'product_id': {'S': product_id}},
                    UpdateExpression="SET inventory_count = inventory_count - :val",
                    ConditionExpression="attribute_exists(product_id) AND inventory_count >= :val",
                    ExpressionAttributeValues={
                        ':val': {'N': str(quantity)}
                    }
                )
                print("Inventory updated successfully:", response)
            except Exception as e:
                # Handle exception (e.g. not enough inventory)
                print("Failed to update inventory:", str(e))

This Lambda function is triggered by the DynamoDB Stream from the Carts table. For each new item added to a cart, it attempts to update the inventory count in the Products table. This way, you decouple cart management from inventory management, allowing for a more scalable and resilient architecture. Note that with DynamoDB Streams the inventory update is eventually consistent rather than atomic with the cart write, but it can handle high throughput efficiently.

Version Control

Version control involves maintaining different versions of an item or document and ensuring that updates are consistent and conflict-free. Below is an example of how you might implement a simple version control system using DynamoDB transactions:

  1. Version Numbering: Each item or document should have a version number attribute. This number should be incremented with each update. For example, when you first create a document, it could have version number 1. When it is updated, the version number is incremented to 2, and so on.

  2. Update with Condition: When updating an item, you can use a transaction to make sure that the version number in the database matches the version number you expect. If they don’t match, it might mean that someone else has updated the document since you last read it, and you could take appropriate action, like informing the user of the conflict.

  3. Storing History: Each time an item is updated, you can use a transaction to atomically add a new item with the updated content and the incremented version number, and mark the previous item as historical. This way, you build up a history of all versions of an item. You might want to include attributes such as the timestamp, and which user made the change.

  4. Rollback Capability: Given that you are storing historical versions of each item, you can use transactions to perform rollbacks. For example, if you want to revert to a previous version of a document, you could use a transaction to update the current item with the contents of the historical item you wish to revert to, and increment the version number.

  5. Consistency Across Multiple Items: If your version control system involves relationships between different items (e.g., a document and its metadata), you can use a transaction to ensure that updates to related items are consistent. For instance, if you update a document, you might also need to update a metadata item that tracks the document’s history. By including both updates in a single transaction, you ensure either both succeed or both fail, maintaining consistency.
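Steps 1 and 2 above amount to a conditional, version-checked write. A minimal sketch (the Documents table and its attribute names are hypothetical; the returned list would be one element of a transact_write_items call):

```python
# Illustrative helper: optimistic concurrency control. The write succeeds
# only if the stored version still matches the one we read; otherwise
# DynamoDB cancels the transaction and we can report the conflict.
def build_versioned_update(doc_id, new_content, expected_version):
    return [{
        'Update': {
            'TableName': 'Documents',  # hypothetical table name
            'Key': {'doc_id': {'S': doc_id}},
            'UpdateExpression': 'SET content = :c, #v = :next',
            'ConditionExpression': '#v = :expected',
            'ExpressionAttributeNames': {'#v': 'version'},
            'ExpressionAttributeValues': {
                ':c': {'S': new_content},
                ':next': {'N': str(expected_version + 1)},
                ':expected': {'N': str(expected_version)}
            }
        }
    }]

items = build_versioned_update('DOC1', 'updated text', 2)
print(items[0]['Update']['ExpressionAttributeValues'][':next'])  # {'N': '3'}
```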

Read more about version control in another blog post: Versioning in DynamoDB

Synchronized Updates across Multiple Entities

Let’s assume we are building an e-commerce system. In this example, we have three tables:

  1. Inventory
  2. Orders
  3. EmailNotifications

Here are the schemas for each table:


Table: Inventory

sku (pk) | productName | quantity
SKU123 | Phone | 100
SKU124 | Laptop | 50


Table: Orders

orderId (pk) | customerEmail | sku | quantity | status
ORDER1 | [email protected] | SKU123 | 1 | PENDING
ORDER2 | [email protected] | SKU124 | 1 | PENDING


Table: EmailNotifications

notificationId (pk) | customerEmail | subject | message | status
NOTIFICATION1 | [email protected] | Order Confirmation | Your order has been placed successfully! | PENDING
NOTIFICATION2 | [email protected] | Order Confirmation | Your order has been placed successfully! | PENDING

Now, let’s assume a customer wants to place an order. This operation involves updating the Inventory table to decrease the stock quantity, updating the Orders table to create a new order, and creating a new record in the EmailNotifications table.

We use AWS DynamoDB Transactions to make sure that all updates occur atomically. Here’s how you could do it:

import boto3

dynamodb = boto3.client('dynamodb')

def place_order(order_id, customer_email, sku, order_quantity):
    try:
        # Transactional write
        response = dynamodb.transact_write_items(
            TransactItems=[
                {
                    # Deduct item from Inventory
                    'Update': {
                        'TableName': 'Inventory',
                        'Key': {'sku': {'S': sku}},
                        'UpdateExpression': 'SET quantity = quantity - :quantity',
                        'ConditionExpression': 'quantity >= :quantity',
                        'ExpressionAttributeValues': {':quantity': {'N': str(order_quantity)}}
                    }
                },
                {
                    # Create new order in Orders table
                    'Put': {
                        'TableName': 'Orders',
                        'Item': {
                            'orderId': {'S': order_id},
                            'customerEmail': {'S': customer_email},
                            'sku': {'S': sku},
                            'quantity': {'N': str(order_quantity)},
                            'status': {'S': 'PENDING'}
                        }
                    }
                },
                {
                    # Create new email notification in EmailNotifications table
                    'Put': {
                        'TableName': 'EmailNotifications',
                        'Item': {
                            'notificationId': {'S': 'NOTIFICATION' + order_id},
                            'customerEmail': {'S': customer_email},
                            'subject': {'S': 'Order Confirmation'},
                            'message': {'S': 'Your order has been placed successfully!'},
                            'status': {'S': 'PENDING'}
                        }
                    }
                }
            ]
        )
        print("Transaction succeeded.")
    except Exception as e:
        print("Transaction failed: ", e)

Next, to send the email notifications, we need to set up a DynamoDB Stream on the EmailNotifications table and use AWS Lambda to process the records and send emails using SES:

import boto3

ses = boto3.client('ses')

def lambda_handler(event, context):
    # Process records from DynamoDB Stream
    for record in event['Records']:
        if record['eventName'] == 'INSERT':
            new_image = record['dynamodb']['NewImage']
            customer_email = new_image['customerEmail']['S']
            subject = new_image['subject']['S']
            message = new_image['message']['S']
            # Send email using SES
            ses.send_email(
                Source='[email protected]',
                Destination={'ToAddresses': [customer_email]},
                Message={
                    'Subject': {'Data': subject},
                    'Body': {'Text': {'Data': message}}
                }
            )
            print(f"Email sent to {customer_email}")

This setup ensures that when an order is placed, the inventory is updated, a new order is created, and an email notification is queued up for sending, all as part of a single, atomic transaction. The email is then sent asynchronously through the processing of the DynamoDB Stream.


Idempotency

Imagine you’re trying to save some information in a database. If a connection time-out or any other connectivity issue arises, the same operation might be sent multiple times unintentionally. This can lead to inconsistencies and unintended duplications. This is where idempotency comes into play; it ensures that irrespective of how many times a request is sent, the outcome remains the same.

When you make a TransactWriteItems call, you can opt to include a client token. This token guarantees that if the original request was successful, any subsequent calls with the same client token will also return successfully but won’t cause any additional changes to the data.

Consider a scenario where we need to move some funds from account A to account B. Ideally, the transaction completes swiftly and the records are updated accordingly. In this ideal case, TransactWriteItems will report the amount of write capacity used to execute the changes. If the server attempts to commit the transaction again due to network issues, TransactWriteItems avoids duplicating the changes. Instead, it reports the number of read capacity units that were used just to fetch the item. It’s like the system is saying, “I’ve checked and no changes are needed.”
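As a sketch, the client token is just one extra parameter on the request. The helper below is illustrative: generate the token once per logical request, then resend the identical request dict on every retry.

```python
import uuid

def with_client_token(transact_items):
    # Attach a token generated once per logical request; reusing the same
    # request (same token) on retries lets DynamoDB recognize duplicates.
    return {
        'TransactItems': transact_items,
        'ClientRequestToken': str(uuid.uuid4())
    }

request = with_client_token([
    {'Put': {'TableName': 'Account', 'Item': {'pk': {'S': 'ACCOUNT#1003'}}}}
])
# A retry loop would resend the exact same request:
#   dynamodb.transact_write_items(**request)
print('ClientRequestToken' in request)  # True
```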

Key Points

  1. Validity Period: A client token is valid for 10 minutes after the request using it has been completed. After this window, any request with the same client token is treated as a brand-new request.

  2. Caution Against Parameter Changes: If you resend a request with the same client token within the 10-minute window but alter another request parameter, DynamoDB will return an IdempotentParameterMismatch exception. This means you need to be cautious not to change any parameters if you are using the same client token.

Best Practices

To fully harness the potential of DynamoDB Transactions, developers should adhere to the following best practices:

  1. Choose the Right Partition Strategy: Concurrent transactions that update the same items can result in conflicts leading to transaction cancellation. Exercise caution in the choice of primary key to sidestep such issues. Furthermore, if a group of attributes is frequently updated across different items within a single transaction, consider consolidating those attributes into a single item to narrow the transaction’s scope.

  2. Properly Set the Read and Write Capacity: Be thoughtful when setting up read and write capacities; you want to avoid the dreaded ProvisionedThroughputExceededException. Get familiar with how your transactions flow by keeping tabs on performance metrics such as consumed read and write capacity. This helps pinpoint the sweet spot for capacity settings. And remember, for unpredictable and spiky loads, on-demand pricing could be your best friend.

  3. Implement Retries and Error Handling: Should a transaction experience conflicts or timeouts, it is advisable for developers to institute solid retry and error management policies. The implementation of exponential backoff is highly recommended as it allows for the application to recover in an orderly manner.

  4. Break Big Transactions into Smaller Ones: Avoid lumping operations into a transaction unless absolutely required. For instance, if a transaction comprising 10 operations can be divided into several transactions without undermining application integrity, it is advisable to partition the transaction. Simpler transactions bolster throughput and have a higher likelihood of success.
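The retry policy from point 3 can be sketched as a small generic helper. This is illustrative: with boto3 you would catch botocore.exceptions.ClientError and inspect the error code (e.g. TransactionCanceledException) rather than a bare Exception.

```python
import random
import time

def retry_with_backoff(operation, max_attempts=5, base_delay=0.1):
    # Run `operation`; on failure wait base_delay * 2^attempt (with jitter)
    # before trying again, re-raising once the attempts are exhausted.
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:  # with boto3, catch TransactionCanceledException here
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.0))

# Usage: retry_with_backoff(lambda: dynamodb.transact_write_items(...))
```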


Things to Keep in Mind

  1. Transactions are confined to the region they’re executed in, and uphold the ACID (Atomicity, Consistency, Isolation, Durability) principles only within that region. Global tables don’t support inter-region transactions. Take, for example, a global table with replicas in US East and US West. If a TransactWriteItems operation is executed in US East, US West may witness incomplete transactions while changes are being replicated. These modifications will only synchronize across regions once fully committed in the originating region.

  2. Transactional APIs should not be mistaken for a substitute to batch APIs. Transactional APIs are best-suited for circumstances where consistency and integrity are paramount. Conversely, batch APIs are the preferred choice when there is a need to expeditiously process large data volumes without an all-or-nothing execution requirement.

DynamoDB Transactions - Limits

While utilizing the transactional API operations of DynamoDB, be mindful of the following limitations:

Constraint | Limit
Number of unique items per transaction | 100
Data size per transaction | 4 MB
Actions against the same item in the same table | No two actions in a transaction can work against the same item


Pricing

DynamoDB’s pricing model is based on a combination of Write Request Units (WRUs) and Read Request Units (RRUs). Transactions in DynamoDB consume more WRUs and RRUs compared to regular read or write operations.

Transaction type | Pricing
Read | Two underlying reads are performed: one to prepare the transaction and one to commit it. You are charged for both. For example, a transactional read request of a 4 KB item requires two read request units.
Write | Two underlying writes are performed: one to prepare the transaction and one to commit it. You are charged for both. For example, a transactional write request of a 1 KB item requires two write request units.
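The table above implies a simple capacity calculation: round the item size up to the billing unit (4 KB for reads, 1 KB for writes), then double it for the two underlying operations. A quick sketch of the arithmetic:

```python
import math

def transactional_read_units(item_size_kb):
    # Reads are billed per 4 KB unit; a transaction performs two underlying reads
    return math.ceil(item_size_kb / 4) * 2

def transactional_write_units(item_size_kb):
    # Writes are billed per 1 KB unit; a transaction performs two underlying writes
    return math.ceil(item_size_kb) * 2

print(transactional_read_units(4))   # 2
print(transactional_write_units(1))  # 2
print(transactional_write_units(3))  # 6
```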


In conclusion, DynamoDB Transactions offer a powerful means of managing complex, multi-item transactions while preserving data consistency and concurrency. By employing this functionality effectively, organizations can ensure smooth data management operations and create more robust applications, meeting the ever-increasing demands of the modern digital world.

Be sure to check out more such insightful blogs in my Master Dynamodb: Demystifying AWS's Powerful NoSQL Database Service, One Byte at a Time series, for a deeper dive into DynamoDB's best practices and advanced features. Stay tuned and keep learning!