DynamoDB is a popular choice for a wide range of applications, but it struggles with large objects: items are capped at 400 KB, and since throughput costs are based on the amount of data read or written per second, large objects strain both the size limit and the bill. Here’s a 1-minute rundown of strategies to tackle this:

  1. Compress Large Objects: Shrink your objects with an algorithm like GZIP before storing them in DynamoDB. This keeps items under the limit and cuts throughput costs, but it adds complexity and limits querying, since DynamoDB can’t filter or index on a compressed attribute. (See the first sketch after this list.)

  2. Vertical Sharding: Split an object into multiple parts, or shards, and store them as separate items. This works well for large items whose attributes have different access patterns: group related attributes together and create a shard key that combines the primary key with a shard identifier. Though effective, it adds complexity, especially in reassembling shards and keeping them consistent. (See the second sketch after this list.)

  3. Using Amazon S3 for Storage: For very large or infrequently accessed objects, store the object in Amazon S3 and keep a reference to it in DynamoDB. This offloads the storage burden to S3, which is cost-effective and scalable. However, it can add latency and complexity, since every read now spans two services. (See the third sketch after this list.)
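Here’s a minimal sketch of the compression pattern in Python with boto3, assuming a hypothetical table named documents with a string partition key pk and the payload in a Binary attribute:

```python
import gzip

import boto3

# Hypothetical table: partition key "pk" (string); payload stored as Binary.
table = boto3.resource("dynamodb").Table("documents")

def put_compressed(pk: str, text: str) -> None:
    """GZIP the payload and store it as a single Binary attribute."""
    blob = gzip.compress(text.encode("utf-8"))
    # The 400 KB item limit includes attribute names, so this check is approximate.
    if len(blob) > 400_000:
        raise ValueError("Still over the 400 KB item limit; consider S3 instead.")
    table.put_item(Item={"pk": pk, "body": blob})

def get_compressed(pk: str) -> str:
    """Fetch and decompress; the compressed attribute is opaque to queries."""
    item = table.get_item(Key={"pk": pk})["Item"]
    return gzip.decompress(item["body"].value).decode("utf-8")
```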
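And a sketch of vertical sharding, assuming a hypothetical table named profiles keyed by pk (partition) and shard (sort); the attribute groupings are illustrative, chosen by access pattern:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table: partition key "pk", sort key "shard".
table = boto3.resource("dynamodb").Table("profiles")

# Group attributes by how they are accessed (illustrative grouping).
SHARDS = {
    "core":    ["name", "email"],         # read on every request
    "prefs":   ["theme", "locale"],       # read at login
    "history": ["orders", "page_views"],  # read rarely, but large
}

def put_sharded(pk: str, obj: dict) -> None:
    """Write one item per attribute group; each stays well under 400 KB."""
    with table.batch_writer() as batch:
        for shard, attrs in SHARDS.items():
            item = {"pk": pk, "shard": shard}
            item.update({a: obj[a] for a in attrs if a in obj})
            batch.put_item(Item=item)

def get_full(pk: str) -> dict:
    """Reassemble the object by querying all shards under the partition key."""
    resp = table.query(KeyConditionExpression=Key("pk").eq(pk))
    merged = {}
    for item in resp["Items"]:
        item.pop("pk")
        item.pop("shard")
        merged.update(item)
    return merged
```

The upside is that a caller who only needs preferences reads one small item instead of the whole object; the cost is the reassembly logic above and keeping multi-item writes consistent.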
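Finally, a sketch of the S3 pointer pattern, assuming a hypothetical bucket my-large-objects and a table media keyed by pk; only a reference and queryable metadata live in DynamoDB:

```python
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("media")
BUCKET = "my-large-objects"  # hypothetical bucket name

def put_large(pk: str, payload: bytes, content_type: str) -> None:
    """Store the payload in S3; keep only a pointer and metadata in DynamoDB."""
    key = f"objects/{pk}"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload, ContentType=content_type)
    table.put_item(Item={
        "pk": pk,
        "s3_bucket": BUCKET,
        "s3_key": key,
        "size": len(payload),  # metadata stays queryable in DynamoDB
    })

def get_large(pk: str) -> bytes:
    """Two round trips: read the pointer from DynamoDB, then fetch from S3."""
    ref = table.get_item(Key={"pk": pk})["Item"]
    obj = s3.get_object(Bucket=ref["s3_bucket"], Key=ref["s3_key"])
    return obj["Body"].read()
```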

Real-life applications include storing compressed text documents in DynamoDB, using S3 for high-resolution images and video files, and employing vertical sharding for complex JSON objects.