Cloud cost overruns are one of the most common problems engineering teams face after moving to AWS. The pay-as-you-go model is powerful but easy to abuse — idle resources, oversized instances, and forgotten experiments can silently accumulate into thousands of dollars per month. The good news: most AWS cost problems are solvable without sacrificing performance. Here are 10 practical tips to significantly reduce your AWS bill.
1. Right-Size Your EC2 Instances
The most common source of wasted AWS spend is oversized instances. If your m5.2xlarge is running at 10% CPU and 15% memory utilization, you are paying for resources you don't use.
```shell
# Use AWS Compute Optimizer to get right-sizing recommendations
aws compute-optimizer get-ec2-instance-recommendations \
  --query "instanceRecommendations[*].[instanceArn,finding,recommendationOptions[0].instanceType]" \
  --output table
```
AWS Compute Optimizer analyzes 14 days of CloudWatch metrics and recommends the optimal instance type for your actual workload. Typical savings from right-sizing: 20–40%.
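The savings from a single downsizing add up quickly. A rough sketch, using illustrative on-demand hourly rates (not current quotes; check pricing for your region), of dropping that underutilized m5.2xlarge to an m5.large:

```shell
# Illustrative hourly on-demand rates (assumptions, not current quotes)
CURRENT=0.384    # m5.2xlarge
TARGET=0.096     # m5.large
HOURS=730        # average hours per month

awk -v c="$CURRENT" -v t="$TARGET" -v h="$HOURS" 'BEGIN {
  printf "Monthly cost now:   $%.2f\n", c * h          # $280.32
  printf "Monthly cost after: $%.2f\n", t * h          # $70.08
  printf "Monthly savings:    $%.2f\n", (c - t) * h    # $210.24
}'
```

One downsized instance at these example rates frees up roughly $2,500 per year.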
2. Use Reserved Instances or Savings Plans
On-demand pricing is the most expensive way to run EC2. For stable, predictable workloads, commit to 1 or 3 years and save up to 72%:
- Reserved Instances (RIs) — Commit to a specific instance type in a specific region. Up to 72% off on-demand pricing.
- Compute Savings Plans — More flexible than RIs. Commit to a dollar amount of compute usage per hour. Applies automatically to EC2, Fargate, and Lambda. Up to 66% off.
- EC2 Instance Savings Plans — Commit to a specific instance family in a region. Up to 72% off, slightly less flexible than Compute Savings Plans.
```shell
# Get Savings Plans purchase recommendations from Cost Explorer
aws ce get-savings-plans-purchase-recommendation \
  --savings-plans-type COMPUTE_SP \
  --term-in-years ONE_YEAR \
  --payment-option NO_UPFRONT \
  --lookback-period-in-days SIXTY_DAYS
```
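Commit to your steady baseline, not your peak: usage above the commitment simply bills at on-demand rates. A minimal sketch of what a commitment looks like, assuming an illustrative $10/hour baseline and a 66% discount (actual discounts depend on term and payment option):

```shell
BASELINE=10.00     # steady on-demand-equivalent compute spend, $/hour (assumption)
DISCOUNT=0.66      # illustrative 3-year, all-upfront Compute Savings Plans discount
HOURS=730          # average hours per month

awk -v b="$BASELINE" -v d="$DISCOUNT" -v h="$HOURS" 'BEGIN {
  commit = b * (1 - d)                                   # hourly commitment covering the baseline
  printf "Hourly commitment:  $%.2f\n", commit           # $3.40
  printf "Monthly cost:       $%.2f\n", commit * h       # $2482.00
  printf "On-demand monthly:  $%.2f\n", b * h            # $7300.00
}'
```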
3. Use Spot Instances for Fault-Tolerant Workloads
Spot Instances use unused EC2 capacity at discounts of up to 90% off on-demand pricing. The catch: AWS can reclaim them with a 2-minute warning. Ideal for:
- Batch processing and data analytics jobs
- CI/CD build agents
- Stateless web servers behind an Auto Scaling Group (mix Spot and on-demand)
- Machine learning training jobs
```shell
# Launch a Spot Instance via the CLI
# (replace the AMI ID with one valid in your region)
aws ec2 run-instances \
  --image-id ami-0c94855ba95c71c99 \
  --instance-type m5.large \
  --instance-market-options MarketType=spot \
  --count 1
```
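Fault tolerance means reacting to the 2-minute warning. The interruption notice is exposed through the instance metadata service; the IMDSv2 endpoints below are AWS's documented API, while the parsing helper is our own illustrative sketch:

```shell
# Extract the "action" field from a Spot interruption notice JSON body,
# e.g. {"action":"terminate","time":"2025-01-01T12:00:00Z"} -> terminate
parse_spot_action() {
  sed -n 's/.*"action"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# On the instance: fetch an IMDSv2 token, then poll for a pending interruption.
# A 404 means nothing is pending; a JSON body means you have ~2 minutes
# to checkpoint work and drain connections.
check_spot_interruption() {
  token=$(curl -sf -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
  curl -sf -H "X-aws-ec2-metadata-token: $token" \
    "http://169.254.169.254/latest/meta-data/spot/instance-action"
}
```

Run the check every few seconds from a supervisor script and trigger a graceful shutdown when `parse_spot_action` reports `terminate` or `stop`.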
4. Delete Idle and Orphaned Resources
Every AWS account accumulates forgotten resources over time. Run a monthly cleanup audit:
```shell
# Find unattached EBS volumes (paying for storage you're not using)
aws ec2 describe-volumes \
  --filters Name=status,Values=available \
  --query "Volumes[*].[VolumeId,Size,CreateTime]" \
  --output table

# Find unassociated Elastic IPs
aws ec2 describe-addresses \
  --query "Addresses[?AssociationId==null].[PublicIp,AllocationId]" \
  --output table

# Find old, unused AMIs
aws ec2 describe-images --owners self \
  --query "Images[*].[ImageId,Name,CreationDate]" \
  --output table
```
5. Use S3 Intelligent-Tiering for Unpredictable Access Patterns
If you can't predict how frequently your S3 objects will be accessed, S3 Intelligent-Tiering automatically moves objects between frequent- and infrequent-access tiers based on actual usage, with no retrieval fees and no operational overhead. The only extra cost is a small per-object monitoring charge, so it typically pays for itself on any bucket with mixed or unpredictable access patterns.
```shell
# Configure optional archive tiers for Intelligent-Tiering
# (objects must already be in the INTELLIGENT_TIERING storage class,
#  e.g. uploaded with --storage-class INTELLIGENT_TIERING or moved
#  there by a lifecycle transition)
aws s3api put-bucket-intelligent-tiering-configuration \
  --bucket my-data-bucket \
  --id all-objects \
  --intelligent-tiering-configuration '{
    "Id": "all-objects",
    "Status": "Enabled",
    "Tierings": [
      {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
      {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"}
    ]
  }'
```
6. Enable S3 Lifecycle Policies for Log and Backup Data
Application logs, CloudTrail logs, and database backups grow without bound. Set lifecycle rules to transition old data to cheaper storage classes and delete what you no longer need:
```shell
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-logs-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "log-rotation",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"}
      ],
      "Expiration": {"Days": 365}
    }]
  }'
```
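To see why the transitions matter, compare the monthly cost of a terabyte of logs across storage classes, using illustrative us-east-1 per-GB rates (assumptions; check current S3 pricing):

```shell
SIZE_GB=1024     # 1 TB of logs

# Illustrative per-GB-month rates (assumptions, not current quotes)
awk -v gb="$SIZE_GB" 'BEGIN {
  printf "S3 Standard:  $%.2f\n", gb * 0.023     # $23.55
  printf "Standard-IA:  $%.2f\n", gb * 0.0125    # $12.80
  printf "Glacier:      $%.2f\n", gb * 0.0036    # $3.69
}'
```

At these example rates, log data older than 90 days costs roughly 6x less than it would sitting in Standard.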
7. Schedule Non-Production Environments to Stop at Night
Development and staging environments don't need to run 24/7. Use AWS Instance Scheduler or EventBridge to stop instances during evenings and weekends — saving up to 65% on non-production compute costs:
```shell
# Ad-hoc version: stop all running instances tagged Environment=dev
# (for the nightly 7 PM UTC schedule, wire this logic into a Lambda
#  function triggered by an EventBridge rule)
aws ec2 describe-instances \
  --filters "Name=tag:Environment,Values=dev" "Name=instance-state-name,Values=running" \
  --query "Reservations[*].Instances[*].InstanceId" \
  --output text | xargs aws ec2 stop-instances --instance-ids
```
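That savings figure falls straight out of the schedule. Assuming a dev environment that runs 12 hours a day on weekdays only instead of 24/7:

```shell
TOTAL_HOURS=168                  # hours in a week
RUNNING_HOURS=$((12 * 5))        # 12 h/day, weekdays only (assumed schedule)

awk -v t="$TOTAL_HOURS" -v r="$RUNNING_HOURS" 'BEGIN {
  printf "Compute hours saved: %.0f%%\n", (1 - r / t) * 100   # 64%
}'
```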
8. Use CloudFront to Reduce Data Transfer Costs
Data transfer out of AWS to the internet is one of the top billing line items. CloudFront caches content at edge locations worldwide, reducing the number of requests that hit your origin (EC2, ALB, or S3). CloudFront's internet data transfer rates are generally lower than EC2's, and data transfer from AWS origins to CloudFront edge locations is free.
9. Use AWS Cost Anomaly Detection
Cost Anomaly Detection uses machine learning to automatically detect unusual spending patterns and alert you via email or SNS. Set it up once and forget it — it will notify you if a runaway resource starts accruing unexpected costs:
```shell
# Create a cost monitor for all AWS services
aws ce create-anomaly-monitor \
  --anomaly-monitor '{
    "MonitorName": "AllServicesMonitor",
    "MonitorType": "DIMENSIONAL",
    "MonitorDimension": "SERVICE"
  }'
```
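A monitor alone notifies no one; it needs a subscription. A configuration sketch, where the monitor ARN, email address, and $100 alert threshold are all placeholders (substitute the ARN returned when you created your monitor):

```shell
# Subscribe an email address to anomalies from an existing monitor
# (ARN, address, and threshold below are placeholders)
aws ce create-anomaly-subscription \
  --anomaly-subscription '{
    "SubscriptionName": "DailyAnomalyAlerts",
    "MonitorArnList": ["arn:aws:ce::123456789012:anomalymonitor/EXAMPLE-UUID"],
    "Subscribers": [{"Address": "team@example.com", "Type": "EMAIL"}],
    "Frequency": "DAILY",
    "Threshold": 100
  }'
```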
10. Review AWS Trusted Advisor and Cost Explorer Weekly
AWS Trusted Advisor (available for Business and Enterprise support plans) automatically identifies cost optimization opportunities: underutilized EC2 instances, idle RDS instances, unattached EBS volumes, and more. The free tier includes a subset of checks. Pair Trusted Advisor with Cost Explorer's daily cost breakdown by service and usage type to catch spending spikes before they become invoices.
Putting It Together: A Monthly Cost Review Process
- Review Cost Explorer for month-over-month changes — investigate anything that increased >10%
- Check Compute Optimizer recommendations — right-size instances flagged as over-provisioned
- Audit unattached EBS volumes and Elastic IPs — delete or release them
- Review S3 storage by bucket — ensure lifecycle policies are in place
- Check for any Trusted Advisor cost optimization findings
- Verify your Savings Plans and RIs are covering your committed usage
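The first step of the checklist is easy to script. A minimal sketch of the >10% month-over-month check, with illustrative totals (in practice, pull them from Cost Explorer):

```shell
LAST_MONTH=4200.00    # illustrative monthly totals (assumptions)
THIS_MONTH=4850.00
THRESHOLD=10          # flag increases above 10%

CHANGE=$(awk -v a="$LAST_MONTH" -v b="$THIS_MONTH" \
  'BEGIN { printf "%.1f", (b - a) / a * 100 }')
echo "Month-over-month change: ${CHANGE}%"

# Exit status 0 (true) when the change exceeds the threshold
awk -v c="$CHANGE" -v t="$THRESHOLD" 'BEGIN { exit !(c > t) }' && echo "Investigate!"
```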
Summary
AWS cost optimization is not a one-time project — it is a discipline. The biggest wins usually come from right-sizing instances, purchasing Savings Plans for stable workloads, using Spot for batch jobs, and eliminating idle resources. Start with AWS Cost Explorer and Compute Optimizer to understand where your money is going, then systematically apply these tips. Most AWS customers can reduce their bill by 20–40% without any performance impact.