
Node Reference - DynamoDB


Prerequisites

This article builds on the prior article: Node Reference - Authentication.

Add DynamoDB

Before we can start building out our product endpoints, we need a place to store them. For that, we are turning to DynamoDB. In DynamoDB, the table is the top-level resource (there is no separate "database" to create), so we only need a single table. Add the following to your cloudformation.template.yml:

ProductsTable:
    Type: "AWS::DynamoDB::Table"
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: "HASH"
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1

Notice that we are not specifying every field in our product structure. DynamoDB only requires that we declare the attributes we are going to index (the primary key as well as any global secondary indexes); all other attributes can vary from item to item.
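To see that flexibility in action once the table exists (the PRODUCTS_TABLE_NAME variable is exported in the Deploy section below), you can write an item that carries attributes the table definition never mentions. This is only a sketch, and the extra attribute names are made up for the example:

# Only "id" (the hash key) must be present; "name" and "price" are arbitrary extra attributes.
aws dynamodb put-item \
    --table-name "$PRODUCTS_TABLE_NAME" \
    --item '{"id": {"S": "example-1"}, "name": {"S": "Example Product"}, "price": {"N": "9.99"}}'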

In order to grant your application access to this table, add the following to the Statement array inside the TaskPolicy:

...
    - Effect: "Allow"
      Action:
        - dynamodb:*
      Resource: !GetAtt ProductsTable.Arn
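The dynamodb:* action keeps this reference application simple; in a production system you would likely scope it down to the specific actions the service needs (for example dynamodb:GetItem, dynamodb:PutItem, dynamodb:Query). If you want to confirm the permission actually landed on the task role after a deploy, one option is the IAM policy simulator; this is only a sketch, and <task-role-arn> is a placeholder for your task role's ARN:

# Ask IAM whether the task role would be allowed these DynamoDB actions.
aws iam simulate-principal-policy \
    --policy-source-arn <task-role-arn> \
    --action-names dynamodb:PutItem dynamodb:Query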

Now all we have to do is tell our task the name of this table (a !Ref to a DynamoDB table resolves to the table name). Add the following to the Environment section under the TaskDefinition ContainerDefinitions:

      - Name: PRODUCTS_TABLE_NAME
        Value: !Ref ProductsTable
      - Name: AWS_REGION
        Value: !Ref "AWS::Region"

And add the following to your Outputs section:

  ProductsTable:
    Value: !Ref ProductsTable
    Export:
      Name: !Sub "${AWS::StackName}:ProductsTable::Id"
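The Export makes the table name available to other stacks through Fn::ImportValue. You can also read the exported value directly from the CLI after a deploy; this sketch assumes the stack is named ProductService-DEV, as in the Deploy section below:

# Look up the exported table name by its export name.
aws cloudformation list-exports \
    --query "Exports[?Name=='ProductService-DEV:ProductsTable::Id'].Value" \
    --output text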

Configure DynamoDB AutoScaling

In 2017, AWS introduced Auto Scaling for DynamoDB to help automate capacity management for your tables. Previously, you were required to estimate the amount of read and write capacity required by your applications and provision your tables based on that estimate. If your actual usage exceeded your estimate, your DynamoDB reads and writes would be throttled. The AWS SDKs would detect throttled reads or writes and retry them after a delay.

"With auto scaling, you define a range (upper and lower limits) for read and write capacity units. You also define a target utilization percentage within that range. DynamoDB auto scaling seeks to maintain your target utilization, even as your application workload increases or decreases.

With DynamoDB auto scaling, a table or a global secondary index can increase its provisioned read and write capacity to handle sudden increases in traffic, without request throttling. When the workload decreases, DynamoDB auto scaling can decrease the throughput so that you don't pay for unused provisioned capacity." - AWS

To enable Auto Scaling of both read and write capacity (between 1 and 5 units) for your table, add the following YAML to the end of the Resources section in your cloudformation.template.yml. The scaling policies below are configured so that when consumed capacity exceeds 50 percent of provisioned capacity for a sustained period of time, Application Auto Scaling notifies DynamoDB to adjust the throughput of ProductsTable upward (and back downward as traffic subsides) so that the 50 percent target utilization can be maintained. You can verify the resulting configuration with the CLI commands shown after the template snippet:

WriteCapacityScalableTarget:
  Type: 'AWS::ApplicationAutoScaling::ScalableTarget'
  Properties:
    MaxCapacity: 5
    MinCapacity: 1
    ResourceId: !Sub 'table/${ProductsTable}'
    RoleARN: !GetAtt ScalingRole.Arn
    ScalableDimension: dynamodb:table:WriteCapacityUnits
    ServiceNamespace: dynamodb
ReadCapacityScalableTarget:
  Type: 'AWS::ApplicationAutoScaling::ScalableTarget'
  Properties:
    MaxCapacity: 5
    MinCapacity: 1
    ResourceId: !Sub 'table/${ProductsTable}'
    RoleARN: !GetAtt ScalingRole.Arn
    ScalableDimension: dynamodb:table:ReadCapacityUnits
    ServiceNamespace: dynamodb
ScalingRole:
  Type: 'AWS::IAM::Role'
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: 'Allow'
          Principal:
            Service:
              - application-autoscaling.amazonaws.com
          Action:
            - 'sts:AssumeRole'
    Policies:
      - PolicyName: 'root'
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: 'Allow'
              Action:
                - 'dynamodb:DescribeTable'
                - 'dynamodb:UpdateTable'
                - 'cloudwatch:PutMetricAlarm'
                - 'cloudwatch:DescribeAlarms'
                - 'cloudwatch:GetMetricStatistics'
                - 'cloudwatch:SetAlarmState'
                - 'cloudwatch:DeleteAlarms'
              Resource: '*'
WriteScalingPolicy:
  Type: 'AWS::ApplicationAutoScaling::ScalingPolicy'
  Properties:
    PolicyName: WriteAutoScalingPolicy
    PolicyType: TargetTrackingScaling
    ScalingTargetId: !Ref WriteCapacityScalableTarget
    TargetTrackingScalingPolicyConfiguration:
      TargetValue: 50.0
      ScaleInCooldown: 60
      ScaleOutCooldown: 60
      PredefinedMetricSpecification:
        PredefinedMetricType: DynamoDBWriteCapacityUtilization
ReadScalingPolicy:
  Type: 'AWS::ApplicationAutoScaling::ScalingPolicy'
  Properties:
    PolicyName: ReadAutoScalingPolicy
    PolicyType: TargetTrackingScaling
    ScalingTargetId: !Ref ReadCapacityScalableTarget
    TargetTrackingScalingPolicyConfiguration:
      TargetValue: 50.0
      ScaleInCooldown: 60
      ScaleOutCooldown: 60
      PredefinedMetricSpecification:
        PredefinedMetricType: DynamoDBReadCapacityUtilization
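Once this deploys, you can confirm that the scalable targets and their target-tracking policies were registered. This is just a sanity-check sketch using the Application Auto Scaling CLI:

# List the scalable targets registered for DynamoDB in this account and region.
aws application-autoscaling describe-scalable-targets \
    --service-namespace dynamodb

# List the target-tracking policies attached to those targets.
aws application-autoscaling describe-scaling-policies \
    --service-namespace dynamodb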

Deploy

We can now commit and push our changes so the updated template deploys and our new DynamoDB table is created.

Grab the name of this table from the stack's ProductsTable output and set it as the PRODUCTS_TABLE_NAME environment variable locally so we can continue development:

export PRODUCTS_TABLE_NAME=$(aws cloudformation describe-stacks \
    --stack-name ProductService-DEV \
    --query 'Stacks[0].Outputs[?OutputKey==`ProductsTable`].OutputValue' \
    --output text)
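To sanity-check the variable, you can ask DynamoDB to describe the table it points at; this is just a quick verification sketch:

# Confirm the variable resolves to a real table and show its key schema and throughput.
aws dynamodb describe-table --table-name "$PRODUCTS_TABLE_NAME"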

See the changes we made here.

Table of Contents

If you have questions or feedback on this series, contact the authors at nodereference@sourceallies.com.