How is Amazon S3 Priced?
Amazon Simple Storage Service (Amazon S3) is an object storage solution that offers data availability, scalability, and security. Amazon S3 provides management features that let you configure, organize, and optimize how your data is accessed, to meet your specific organizational, business, and compliance requirements.
Amazon Web Services (AWS) operates in multiple geographical Regions, each of which is divided into several Availability Zones (AZs). Amazon S3 pricing differs according to the Region, the type of storage (there are several tiers, from S3 Standard down to S3 Glacier Deep Archive), the volume of storage, and the operations performed on the data.
This is part of our series of articles about S3 Storage.
In this article:
- Amazon S3 Pricing Components
- 3 Tips for Amazon S3 Cost Optimization
- Optimizing AWS Storage with NetApp Cloud Volumes ONTAP
Amazon S3 Pricing Components
S3 pricing is broken up into several components: a free tier that lets you try S3 at no cost, storage costs priced by GB-month, and special charges for requests, data retrieval, analytics, replication, and the S3 Object Lambda feature.
Learn more about AWS storage options and costs in our guide to AWS storage costs
S3 Free Tier
As a feature of the AWS Free Tier, you can begin using Amazon S3 at no charge. Once you sign up, new AWS customers get 5GB of Amazon S3 storage within the S3 Standard storage class. For one year, the monthly limits are:
- 2,000 PUT, POST, COPY, or LIST requests
- 20,000 GET requests
- 15GB of data to transfer out
Your free tier usage is calculated each month across all AWS Regions, apart from the AWS GovCloud Regions, and is automatically applied to your bill; unused monthly usage does not roll over.
Pricing examples in this section are for the US East (Ohio) Region and are subject to change; for up-to-date prices, see the official pricing page.
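As a quick sanity check, the caps above can be compared against a month's usage in a few lines of Python (the usage figures below are made up for illustration):

```python
# Free-tier monthly caps listed above (S3 Standard, first 12 months).
FREE_TIER = {
    "storage_gb": 5,
    "put_post_copy_list_requests": 2_000,
    "get_requests": 20_000,
    "transfer_out_gb": 15,
}

def exceeds_free_tier(usage: dict) -> list:
    """Return the names of any free-tier caps that a month's usage exceeds."""
    return [k for k, cap in FREE_TIER.items() if usage.get(k, 0) > cap]

# Hypothetical month: slightly over on GET requests only.
usage = {"storage_gb": 4.2, "put_post_copy_list_requests": 1_500,
         "get_requests": 25_000, "transfer_out_gb": 10}
print(exceeds_free_tier(usage))  # ['get_requests']
```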
S3 Storage Cost
You are charged for storing objects in your S3 buckets. The amount you are billed varies according to the objects' size, how long you stored them during the month, and their storage class.
The S3 Intelligent-Tiering storage class moves data between frequent access and infrequent access tiers to save costs. You are charged a monthly monitoring and automation fee for each object stored in S3 Intelligent-Tiering, which covers tracking access patterns and moving objects from one access tier to another.
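To see how the monitoring fee interacts with storage charges, here is a rough monthly estimate; the per-GB and per-object rates below are illustrative assumptions, not current AWS quotes:

```python
# Illustrative figures only -- actual prices vary by Region and over time.
MONITORING_FEE_PER_1000_OBJECTS = 0.0025  # assumed monitoring/automation fee
FREQUENT_TIER_RATE = 0.023                # assumed $/GB-month, frequent access
INFREQUENT_TIER_RATE = 0.0125             # assumed $/GB-month, infrequent access

def intelligent_tiering_cost(objects: int, frequent_gb: float,
                             infrequent_gb: float) -> float:
    """Monthly cost: per-object monitoring fee plus per-tier storage."""
    monitoring = objects / 1000 * MONITORING_FEE_PER_1000_OBJECTS
    storage = (frequent_gb * FREQUENT_TIER_RATE
               + infrequent_gb * INFREQUENT_TIER_RATE)
    return round(monitoring + storage, 2)

# 1 million objects, 500 GB frequently accessed and 500 GB infrequently accessed:
print(intelligent_tiering_cost(1_000_000, 500, 500))  # 20.25
```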
Costs for Requests and Data Retrieval Within Amazon
Although you are given a specific number of GET and PUT requests as part of the free usage tier, you will be charged for other requests, as well as any GET and PUT requests that exceed the free tier monthly cap.
Outside the free tier, requests are priced as follows (using prices from US East Region as an example):
- 1,000 PUT, COPY, or POST requests cost $0.005 in the Standard, Reduced Redundancy, and Glacier storage tiers. They cost $0.01 in the Infrequent Access tiers (these tiers provide lower storage costs but charge extra for data requests).
- Data retrieval is free in the Standard storage tiers, but costs $0.01 per GB in the Infrequent Access tiers and $0.03 per GB in the Glacier tiers.
- In S3 Glacier, which normally requires a minimum of 90 or 180 days of storage, you can pay extra for expedited access.
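Using the rates above, a month's request and retrieval bill can be sketched as follows (the rates are hard-coded from this article's US East examples, and the workload is made up):

```python
# Per-request and per-GB rates quoted above for US East (illustrative).
PUT_PER_1000 = {"standard": 0.005, "infrequent": 0.01}
RETRIEVAL_PER_GB = {"standard": 0.0, "infrequent": 0.01, "glacier": 0.03}

def request_cost(tier: str, put_requests: int, retrieved_gb: float) -> float:
    """Cost of PUT/COPY/POST requests plus data retrieval for one tier."""
    puts = put_requests / 1000 * PUT_PER_1000.get(tier, PUT_PER_1000["standard"])
    retrieval = retrieved_gb * RETRIEVAL_PER_GB[tier]
    return round(puts + retrieval, 4)

# 100,000 PUTs plus 200 GB retrieved from an Infrequent Access tier:
print(request_cost("infrequent", 100_000, 200))  # 3.0
```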
Costs for Data Retrieval Outside Amazon
Amazon charges per GB for data transfer from Amazon S3 to the Internet. External data transfer is free up to 1 GB per month.
Above this, pricing starts from $0.09 per GB for the first 10 TB transferred. Amazon provides volume discounts for increasing data transfer amounts, down to $0.05 per GB for over 150 TB per month.
You can pay a premium for faster data transfers via S3 Transfer Acceleration: the charge for accelerated transfer is $0.04 per GB, or $0.08 per GB outside the US, Europe, and Japan.
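The tiered egress schedule can be approximated with the two rates quoted above; note that the real schedule has additional intermediate volume tiers that this sketch ignores:

```python
# Simplified sketch using only the endpoint rates quoted in this article;
# AWS's actual schedule includes intermediate volume tiers between them.
def transfer_out_cost(gb: float) -> float:
    """Estimate monthly S3-to-Internet transfer cost in USD."""
    FREE_GB = 1                       # first GB per month is free
    billable = max(gb - FREE_GB, 0)
    if billable <= 10_240:            # first 10 TB at $0.09/GB
        return round(billable * 0.09, 2)
    # Everything past 10 TB at the article's bulk rate of $0.05/GB (simplified).
    return round(10_240 * 0.09 + (billable - 10_240) * 0.05, 2)

print(transfer_out_cost(0.5))   # within the free 1 GB, so 0.0
print(transfer_out_cost(101))   # 100 billable GB at $0.09 = 9.0
```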
Management and Analytics Costs
Amazon charges extra for specific features, which you can activate on any of your S3 buckets.
The table below shows pricing for selected services.
| Special S3 Feature | Price |
| --- | --- |
|  | $0.0025 per million objects |
| S3 Object Tagging | $0.01 per 10,000 tags |
| S3 Storage Lens advanced metrics | $0.20 per million objects |
| S3 Batch Operations | $0.25 per job, plus $1 per million operations |
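For example, the Batch Operations pricing above translates into simple arithmetic (the job size here is made up):

```python
# Batch Operations pricing from the table: $0.25 per job plus $1 per
# million operations performed.
def batch_ops_cost(jobs: int, operations: int) -> float:
    return round(jobs * 0.25 + operations / 1_000_000 * 1.00, 2)

# One job that touches 5 million objects:
print(batch_ops_cost(1, 5_000_000))  # 5.25
```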
You might want to replicate data across multiple buckets in the same region, or across multiple regions, to improve availability. In this case, you will pay for:
- S3 storage in the original bucket
- Extra S3 storage in the replicated bucket
- Replication PUT requests
- Infrequent-access storage retrieval fees (if you are replicating data from a bucket using an infrequent-access tier)
- For cross-region replication, inter-region Data Transfer charges
- Special charges for using S3 Replication Time Control
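A rough way to add up these components is sketched below; all of the rates are illustrative assumptions, not AWS quotes, and the Replication Time Control surcharge is omitted:

```python
# Sum of the replication cost components listed above. All rates are
# assumptions for illustration -- check the pricing page for real figures.
def replication_cost(gb: float, put_requests: int, cross_region: bool,
                     storage_rate: float = 0.023,      # $/GB-month, each bucket
                     put_rate: float = 0.005,          # $ per 1,000 PUT requests
                     inter_region_rate: float = 0.02   # $/GB transferred
                     ) -> float:
    cost = 2 * gb * storage_rate                # source + replica storage
    cost += put_requests / 1000 * put_rate      # replication PUT requests
    if cross_region:
        cost += gb * inter_region_rate          # inter-region data transfer
    return round(cost, 2)

# 100 GB replicated across Regions with 10,000 replication PUTs:
print(replication_cost(100, 10_000, cross_region=True))  # 6.65
```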
S3 Object Lambda Costs
S3 Object Lambda is a feature that lets you write your own code and attach it to GET requests in S3. The code then runs in a serverless model on AWS Lambda whenever a GET request is processed.
Costs for S3 Object Lambda are as follows:
- $0.0000167 per GB-second for the duration the Lambda function runs*
- $0.20 per 1 million Lambda requests*
- $0.0004 per 1,000 requests for S3 GET requests invoked by Lambda functions
- $0.005 per GB for data returned to your applications via the Lambda functions
(*) The price for these components may vary depending on memory allocated to your Lambda functions.
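Putting these rates together for a hypothetical workload of one million requests:

```python
# S3 Object Lambda cost using the four rates listed above (US East example).
def object_lambda_cost(requests: int, avg_duration_s: float,
                       memory_gb: float, returned_gb: float) -> float:
    compute = requests * avg_duration_s * memory_gb * 0.0000167  # GB-seconds
    lambda_requests = requests / 1_000_000 * 0.20
    s3_gets = requests / 1_000 * 0.0004
    data = returned_gb * 0.005
    return round(compute + lambda_requests + s3_gets + data, 2)

# 1 million requests, 200 ms each at 512 MB, returning 100 GB in total:
print(object_lambda_cost(1_000_000, 0.2, 0.5, 100))
```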
3 Tips for Amazon S3 Cost Optimization
Use the following practices to optimize your Amazon S3 costs.
Organize Your Data
Once you have defined your requirements, take time to organize your information. Use prefixes, resource tags, and bucket names to clearly delineate your large data sets, which will help you select the correct storage classes and tiers later.
For example, your finance department may require fast access to customer information, but only periodic access to last month's sales or asset figures. You could use different storage classes for these distinct requirements if you accurately track and define your data and organize it effectively with tags. You could also use these details for S3 lifecycle management and to inform data transfers between classes.
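In practice, cost-allocation tags can be applied with boto3's put_object_tagging call; the helper below only builds the TagSet payload it accepts, and the tag names and bucket/key are invented for illustration:

```python
# Build the TagSet payload accepted by boto3's put_object_tagging.
# The department/access-pattern tag names are made up for this example.
def build_tag_set(tags: dict) -> dict:
    return {"TagSet": [{"Key": k, "Value": v} for k, v in tags.items()]}

tagging = build_tag_set({"department": "finance", "access-pattern": "periodic"})
# s3.put_object_tagging(Bucket="my-bucket", Key="sales/2023-09.csv",
#                       Tagging=tagging)
print(tagging["TagSet"][0])
```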
Leverage S3 Intelligent Tiering
Your organization must manage all the information it collects. AWS provides S3 Intelligent-Tiering to oversee tiering transparently, automating the movement of data from tier to tier. Note that you must select the Intelligent-Tiering storage class from the outset.
Intelligent-Tiering is only effective for storage objects larger than 128 KB, and it does not include the lower-cost S3 Glacier archive levels by default. If you decide to forgo Intelligent-Tiering, you still need an effective procedure for removing unneeded objects from S3, to avoid paying for excess resources.
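One way to opt in from the outset is a bucket lifecycle rule that transitions new objects into S3 Intelligent-Tiering; the payload below is the shape accepted by boto3's put_bucket_lifecycle_configuration, with the prefix and expiration period assumed for illustration:

```python
# Lifecycle rule payload for boto3's put_bucket_lifecycle_configuration that
# moves objects under an assumed "logs/" prefix into S3 Intelligent-Tiering.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}
            ],
            # Also remove objects nobody needs after a year (example value).
            "Expiration": {"Days": 365},
        }
    ]
}
# s3.put_bucket_lifecycle_configuration(Bucket="my-bucket",
#                                       LifecycleConfiguration=lifecycle_config)
```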
Monitor and Analyze Your Spending and Access Patterns
As your processes evolve, you will need to analyze and monitor your service usage and information access patterns for cost optimization. AWS services (including Amazon CloudWatch Metrics, Storage Class Analysis, and S3 Server Access Logging) can provide insight into access patterns.
Storage Class Analysis lets you examine and understand your object access patterns, and then deduce how best to define lifecycle policies with expiration or transition actions for your S3 objects. Use it to set the time frames for moving objects to the storage classes designed for frequent or infrequent access, and to set an expiration time after which objects are deleted.
Alternatively, you could use Amazon CloudWatch Metrics to interpret daily storage data across your buckets and identify the growth patterns of objects.
Like Amazon CloudWatch metrics, S3 Server Access Logging lets you examine the requests made against your buckets and understand current data access patterns. It also makes it easier to analyze large data sets across different applications, and lets you monitor and track your bucket activity systematically. You can use the metrics you obtain to decide which storage classes to use.
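For example, daily bucket size can be pulled from the free S3 storage metrics in CloudWatch; the parameters below match CloudWatch's get_metric_statistics API in boto3, with the bucket name and date range assumed:

```python
import datetime

# Parameters for CloudWatch's get_metric_statistics call that returns daily
# bucket size from the S3 storage metrics (bucket name and dates assumed).
params = {
    "Namespace": "AWS/S3",
    "MetricName": "BucketSizeBytes",
    "Dimensions": [
        {"Name": "BucketName", "Value": "my-bucket"},
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    "StartTime": datetime.datetime(2024, 1, 1),
    "EndTime": datetime.datetime(2024, 1, 31),
    "Period": 86400,          # one data point per day
    "Statistics": ["Average"],
}
# cloudwatch.get_metric_statistics(**params)
```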
Related content: Read our guide to AWS cost optimization
Optimizing AWS Storage with NetApp Cloud Volumes ONTAP
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP capacity can scale into the petabytes, and it supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, Kubernetes integration, and more.
Cloud Volumes ONTAP provides storage efficiency features, including thin provisioning, data compression, and deduplication, reducing the storage footprint and costs by up to 70%.
Learn more with these Cloud Volumes ONTAP Storage Efficiency Case Studies.
Cloud Volumes ONTAP’s data tiering feature automatically and seamlessly moves infrequently-used data from block storage to object storage and back.
Learn more about how Cloud Volumes ONTAP helps cost savings with these Cloud Volumes ONTAP Data Tiering Case Studies.