Amazon Simple Storage Service (Amazon S3) is an object storage solution that provides data availability, scalability, and security. Amazon S3 offers management features that let you configure, organize, and optimize how your data is accessed, so you can meet your specific organizational, business, and compliance requirements.
Amazon Web Services (AWS) operates in multiple geographical Regions, each of which is divided into several Availability Zones (AZs). Amazon S3 pricing differs according to the Region, the type of storage (there are several tiers, from S3 Standard to S3 Glacier Deep Archive), the volume of storage, and the operations performed on the data.
This is part of our series of articles about S3 Storage.
S3 pricing is broken up into several components: a free tier that lets you try S3 at no cost, storage costs priced by GB-month, and special charges for requests, data retrieval, analytics, replication, and the S3 Object Lambda feature.
Learn more about AWS storage options and costs in our guide to AWS storage costs
As a feature of the AWS Free Tier, you can begin using Amazon S3 at no charge. Once you sign up, new AWS customers get 5 GB of Amazon S3 storage within the S3 Standard storage class. For one year, the monthly limits are:
- 5 GB of S3 Standard storage
- 20,000 GET requests
- 2,000 PUT, COPY, POST, or LIST requests
Your free tier usage is measured each month across every AWS Region, apart from the AWS GovCloud Region, and automatically applied to your bill; unused monthly usage does not roll over.
Pricing examples in this section are for the US East (Ohio) Region and are subject to change; for up-to-date prices, see the official pricing page.
You are charged for retaining objects in your S3 buckets. The amount you are billed varies according to an object's size, how long you stored the object during the month, and its storage class.
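To make the GB-month math concrete, here is a minimal Python sketch that prorates the storage charge for a single object. The $0.023 per GB-month rate is the published S3 Standard rate for the first 50 TB in US East at the time of writing; treat it as an illustrative assumption and check the pricing page for current figures.

```python
# Approximate monthly storage cost for one object, prorated by time stored.
STANDARD_RATE_PER_GB_MONTH = 0.023  # USD; assumed S3 Standard rate, first 50 TB tier

def storage_cost(object_size_gb: float, days_stored: float, days_in_month: int = 30) -> float:
    """Prorate the GB-month charge by the fraction of the month the object existed."""
    gb_months = object_size_gb * (days_stored / days_in_month)
    return gb_months * STANDARD_RATE_PER_GB_MONTH

# A 100 GB object stored for 15 days of a 30-day month costs about $1.15:
print(f"${storage_cost(100, 15):.2f}")
```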
The S3 Intelligent-Tiering storage class moves data between frequently accessed and infrequently accessed tiers to save costs. You are charged a monthly monitoring and automation fee for each object retained in S3 Intelligent-Tiering, which covers tracking access patterns and transferring objects from one access tier to another.
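Opting in is done per object by selecting the storage class at upload time. A minimal boto3 sketch, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Upload an object directly into the Intelligent-Tiering storage class.
# "example-bucket" and the key are placeholders.
s3.put_object(
    Bucket="example-bucket",
    Key="reports/usage.csv",
    Body=b"...",
    StorageClass="INTELLIGENT_TIERING",
)
```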
Although you are given a specific number of GET and PUT requests as part of the free usage tier, you will be charged for other requests, as well as any GET and PUT requests that exceed the free tier monthly cap.
Outside the free tier, requests are priced as follows (using US East Region prices as an example):
- PUT, COPY, POST, and LIST requests: $0.005 per 1,000 requests
- GET and SELECT requests: $0.0004 per 1,000 requests
Amazon charges per GB for data transfer from Amazon S3 to the Internet. External data transfer is free up to 1 GB per month.
Above this, pricing starts from $0.09 per GB for the first 10 TB transferred. Amazon provides volume discounts for increasing data transfer amounts, down to $0.05 per GB for over 150 TB per month.
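Because the per-GB rate steps down with volume, estimating a monthly bill means walking the tiers. The sketch below uses the example rates above plus the intermediate US East tiers ($0.085 per GB for the next 40 TB and $0.07 per GB for the next 100 TB); those intermediate figures are assumptions based on rates published at the time of writing.

```python
# Tiered data-transfer-out cost estimate (rates in USD per GB).
TIERS = [
    (1, 0.0),              # first 1 GB each month is free
    (10 * 1024, 0.09),     # up to 10 TB
    (50 * 1024, 0.085),    # next 40 TB (assumed published rate)
    (150 * 1024, 0.07),    # next 100 TB (assumed published rate)
    (float("inf"), 0.05),  # over 150 TB
]

def transfer_cost(gb: float) -> float:
    cost, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        if gb <= prev_cap:
            break
        cost += (min(gb, cap) - prev_cap) * rate
        prev_cap = cap
    return cost

print(f"${transfer_cost(20 * 1024):,.2f}")  # cost for 20 TB transferred out
```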
You can pay a premium for faster data transfers with S3 Transfer Acceleration: the charge is $0.04 per GB, or $0.08 per GB outside the US, Europe, and Japan.
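Transfer Acceleration is enabled per bucket, after which clients route through the accelerate endpoint. A short boto3 sketch with a placeholder bucket name:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Enable Transfer Acceleration on the bucket ("example-bucket" is a placeholder).
s3.put_bucket_accelerate_configuration(
    Bucket="example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Clients then use the accelerate endpoint for faster transfers.
s3_accelerated = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
```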
Amazon charges extra for specific features, which you can activate on any of your S3 buckets.
The table below shows pricing for selected features.

| Special S3 Feature | Pricing |
| --- | --- |
| S3 Inventory | $0.0025 per million objects |
| S3 Object Tagging | $0.01 per 10,000 tags |
| S3 Storage Lens advanced metrics | $0.20 per million objects |
| S3 Batch Operations | $0.25 per job, $1 per million operations |
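As an example of one of these billed features, object tags are applied through the S3 API, and each tag counts toward the per-10,000-tags charge. A boto3 sketch with placeholder bucket, key, and tag values:

```python
import boto3

s3 = boto3.client("s3")

# Tag an existing object; tags are billed at $0.01 per 10,000 tags.
s3.put_object_tagging(
    Bucket="example-bucket",
    Key="invoices/2023-06.pdf",
    Tagging={"TagSet": [
        {"Key": "department", "Value": "finance"},
        {"Key": "retention", "Value": "7y"},
    ]},
)
```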
You might want to replicate data across multiple buckets in the same Region, or across Regions, to improve availability. In this case, you will pay for the following (a configuration sketch follows the list):
- storage in the destination bucket(s), at the destination storage class rate
- PUT requests for each object replicated
- inter-Region data transfer, when replicating across Regions
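Replication is configured with a rule on the source bucket. A minimal boto3 sketch; the bucket names, role ARN, and account ID are placeholders, and versioning must already be enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

# Replicate all new objects to a bucket in another Region.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-s3-replication-role",
        "Rules": [{
            "ID": "replicate-all",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {
                "Bucket": "arn:aws:s3:::example-destination-bucket",
                # Replicas can land in a cheaper class to cut destination storage costs.
                "StorageClass": "STANDARD_IA",
            },
        }],
    },
)
```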
S3 Object Lambda is a feature that lets you add your own code to S3 GET requests. The code runs in a serverless model on AWS Lambda whenever a GET request is processed; a handler sketch appears after the pricing list below.
Costs for S3 Object Lambda are as follows:
- AWS Lambda compute charges for the time your function runs (*)
- AWS Lambda request charges
- a per-GB charge for the data S3 Object Lambda returns to your application ($0.005 per GB in US East)
(*) The price for these components may vary depending on memory allocated to your Lambda functions.
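The code you attach is a regular Lambda function that fetches the original object and writes a transformed response back to S3. A sketch following the documented Object Lambda handler pattern; redact() is a hypothetical transformation:

```python
import boto3
import urllib3

http = urllib3.PoolManager()
s3 = boto3.client("s3")

def redact(text: str) -> str:
    # Hypothetical transformation applied before the object is returned.
    return text.replace("SECRET", "*****")

def handler(event, context):
    ctx = event["getObjectContext"]
    # Fetch the original object via the presigned URL S3 supplies in the event.
    original = http.request("GET", ctx["inputS3Url"]).data.decode("utf-8")
    # Return the transformed object to the client that issued the GET.
    s3.write_get_object_response(
        RequestRoute=ctx["outputRoute"],
        RequestToken=ctx["outputToken"],
        Body=redact(original),
    )
    return {"statusCode": 200}
```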
Use the following practices to optimize your Amazon S3 costs.
Once you have defined your requirements, take time to organize your information. Use prefixes, resource tags, and bucket names to clearly delineate your large data sets; this will help you select the correct storage classes and tiers afterwards.
For example, your finance department may require fast access to current customer information but only periodic access to last month's sales records. If you accurately track, define, and tag your data, you can assign a different storage class to each of these requirements. You can also use these details for S3 lifecycle management and to drive transitions between classes, as the sketch below shows.
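A lifecycle rule can key off those tags to move data to a cheaper class after a set period. A boto3 sketch; the bucket name, tag values, and 30-day threshold are placeholder assumptions:

```python
import boto3

s3 = boto3.client("s3")

# Move objects tagged for periodic access to Standard-IA after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "tier-periodic-access-data",
        "Status": "Enabled",
        "Filter": {"Tag": {"Key": "access", "Value": "periodic"}},
        "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
    }]},
)
```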
Your organization must manage all the information it collects. AWS provides S3 Intelligent-Tiering to handle tiering transparently, automating the movement of data from tier to tier. Note that you must select the Intelligent-Tiering storage class from the outset.
Intelligent-Tiering is only effective for objects larger than 128 KB, and it does not extend to the lower-cost S3 Glacier archive tiers by default. If you decide to forgo Intelligent-Tiering, you still need an effective procedure for removing unneeded objects from S3, to avoid paying for excess resources; one option is a lifecycle expiration rule, sketched below.
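A minimal expiration rule that deletes temporary objects after 90 days, so they stop accruing storage charges. The bucket name, prefix, and retention period are placeholders; note that put_bucket_lifecycle_configuration replaces the bucket's entire lifecycle configuration, so in practice you would combine this rule with any transition rules in a single call:

```python
import boto3

s3 = boto3.client("s3")

# Expire (delete) objects under a temporary prefix after 90 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "purge-temp-objects",
        "Status": "Enabled",
        "Filter": {"Prefix": "tmp/"},
        "Expiration": {"Days": 90},
    }]},
)
```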
As your processes evolve, you will need to analyze and monitor your service usage and information access patterns for cost optimization. AWS services (including Amazon CloudWatch Metrics, Storage Class Analysis, and S3 Server Access Logging) can provide insight into access patterns.
Storage Class Analysis lets you examine your object access patterns and deduce how best to define lifecycle policies for expiration or transition actions on your S3 objects. It helps you choose the time frames for moving objects to the storage classes designed for frequent or infrequent access, and you can also specify an expiration time after which objects are deleted.
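Storage Class Analysis is enabled per bucket, optionally filtered to a prefix, with daily results exported as CSV to another bucket. A boto3 sketch; the bucket names, configuration ID, and prefix are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Analyze access patterns under a prefix and export daily CSV results.
s3.put_bucket_analytics_configuration(
    Bucket="example-bucket",
    Id="sales-prefix-analysis",
    AnalyticsConfiguration={
        "Id": "sales-prefix-analysis",
        "Filter": {"Prefix": "sales/"},
        "StorageClassAnalysis": {"DataExport": {
            "OutputSchemaVersion": "V_1",
            "Destination": {"S3BucketDestination": {
                "Bucket": "arn:aws:s3:::example-analytics-results",
                "Format": "CSV",
            }},
        }},
    },
)
```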
Alternatively, you could use Amazon CloudWatch Metrics to interpret daily storage data across your buckets and identify the growth patterns of objects.
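S3 publishes daily storage metrics to CloudWatch. A boto3 sketch that pulls two weeks of BucketSizeBytes datapoints for a placeholder bucket; the StorageType dimension must match the storage class being measured:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Fetch the daily BucketSizeBytes metric that S3 publishes to CloudWatch.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="BucketSizeBytes",
    Dimensions=[
        {"Name": "BucketName", "Value": "example-bucket"},
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=86400,  # one datapoint per day
    Statistics=["Average"],
)
for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"].date(), f"{point['Average'] / 1e9:.1f} GB")
```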
Like CloudWatch metrics, S3 Server Access Logging lets you examine the requests made to your buckets and understand current data access patterns. It also makes it easier to analyze large data sets across different applications, and lets you monitor and track activity on your buckets in a systematic way. You can use the metrics you obtain to decide which storage classes to use.
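Server access logging is switched on per bucket, with request records delivered as log objects to a target bucket you own. A boto3 sketch with placeholder bucket names and prefix:

```python
import boto3

s3 = boto3.client("s3")

# Enable server access logging; request records are delivered as objects
# under the given prefix in the target bucket.
s3.put_bucket_logging(
    Bucket="example-bucket",
    BucketLoggingStatus={"LoggingEnabled": {
        "TargetBucket": "example-log-bucket",
        "TargetPrefix": "access-logs/example-bucket/",
    }},
)
```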
Related content: Read our guide to AWS cost optimization
NetApp Cloud Volumes ONTAP, the leading enterprise-grade storage management solution, delivers secure, proven storage management services on AWS, Azure and Google Cloud. Cloud Volumes ONTAP capacity can scale into the petabytes, and it supports various use cases such as file services, databases, DevOps or any other enterprise workload, with a strong set of features including high availability, data protection, Kubernetes integration, and more.
Cloud Volumes ONTAP provides storage efficiency features, including thin provisioning, data compression, and deduplication, reducing the storage footprint and costs by up to 70%.
Learn more with these Cloud Volumes ONTAP Storage Efficiency Case Studies.
Cloud Volumes ONTAP’s data tiering feature automatically and seamlessly moves infrequently-used data from block storage to object storage and back.
Learn more about how Cloud Volumes ONTAP helps cost savings with these Cloud Volumes ONTAP Data Tiering Case Studies.