29 Dec 2020 · Automate Cloud Operations
Automate DynamoDB Throughput
Scale maximum read & write capacity to meet demand
Select which DynamoDB tables to scale
Target one table or many at once using your existing tags. Update Read Capacity and Write Capacity to avoid throttled requests around known peaks in throughput requirements, and quickly back off during troughs to save money.
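The tag-targeted update described above can be sketched directly against the AWS API. A minimal sketch assuming boto3 and valid AWS credentials; the tag key, capacity units and region below are illustrative placeholders, not part of the product:

```python
def throughput_params(read_units, write_units):
    """Build the ProvisionedThroughput payload used by UpdateTable."""
    return {"ReadCapacityUnits": read_units, "WriteCapacityUnits": write_units}

def scale_tagged_tables(tag_key, tag_value, read_units, write_units,
                        region="eu-west-1"):
    """Update provisioned throughput on every table carrying the given tag."""
    import boto3  # imported lazily so the pure helper above stays testable offline
    ddb = boto3.client("dynamodb", region_name=region)
    for page in ddb.get_paginator("list_tables").paginate():
        for name in page["TableNames"]:
            arn = ddb.describe_table(TableName=name)["Table"]["TableArn"]
            tags = ddb.list_tags_of_resource(ResourceArn=arn)["Tags"]
            if any(t["Key"] == tag_key and t["Value"] == tag_value for t in tags):
                ddb.update_table(
                    TableName=name,
                    ProvisionedThroughput=throughput_params(read_units,
                                                            write_units),
                )

# Example (hypothetical tag): scale_tagged_tables("env", "prod", 100, 50)
```

Note that this only applies to tables in provisioned-capacity mode; on-demand tables have no read/write capacity units to set.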
Create a rule
Determine how you wish to trigger changes to your DynamoDB throughput settings. A common use case is a schedule that scales up ahead of known peaks to avoid throttling and scales down during predictable troughs to capture savings.
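As one illustration, the peak/trough decision such a schedule makes can be reduced to a small function. The window bounds and capacity figures here are assumptions for the sketch, not product defaults:

```python
from datetime import time

# Assumed peak window (UTC) for the sketch; adjust to your own traffic pattern.
PEAK_START, PEAK_END = time(8, 0), time(20, 0)

def capacity_for(now):
    """Return (read_units, write_units) for the given time of day."""
    if PEAK_START <= now < PEAK_END:
        return 200, 100   # scale up ahead of known peaks to avoid throttling
    return 20, 10         # back off during troughs to save money

capacity_for(time(12, 0))  # inside the peak window -> (200, 100)
```

A scheduler would evaluate this at each trigger time and apply the result to every targeted table.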
Focus on more important work
When it’s time for the DynamoDB throughput settings to change, you’ll first receive a notification via email, Slack or Teams (listing all targeted tables), from which the update can be snoozed or cancelled.
Scale Throughput Up & Down
Anticipate known peaks and troughs in DynamoDB usage
Target Multiple Tables by Tag
Set rules that persist for DynamoDB for your whole AWS environment
Use Any Trigger
Trigger by schedule, cost threshold, webhook or many other events
Target by Region
Alongside tags & accounts, you can target based on where the DynamoDB table is located.
Intervene
Cancel or snooze a schedule by interacting with rich notifications from Slack or email.
Target by Account
Alongside tags & regions, target throughput based on which accounts the tables are in.
Try it in action. Sign up for our 14-day free trial today.
Related Use Cases
- Delete elastic IPs that are no longer in use for cost optimization benefits.
- Update desired task instantiation counts across any number of ECS services. Cache existing settings & restore them with an accompanying rule.