For many IT managers today, controlling costs is a bigger concern than solving technical problems, and cost efficiency is one of the main reasons companies migrate to the cloud.
But for many companies that make the move to the cloud, the monthly bill that shows up can be much higher than anticipated. What went wrong? Wasn’t the whole point of moving to the cloud to save money?
No matter which cloud you're using (Google Cloud, Azure, or AWS), data transfer costs can be a big part of that bill shock. While this is true in every cloud, AWS data transfer costs provide a good example of how these charges accrue and what you can do about them.
In this post, we'll look at what causes high AWS data transfer costs and three straightforward ways you can reduce them.
Data transfer charges can add up quickly. But what exactly is driving them?
Data traffic moves two ways: in and out.
Since traffic going into AWS is free, it’s the traffic moving out of AWS and between AWS services that you want to track. Make sure you carefully examine the official AWS price list for transferring data out of AWS if you want to keep these costs under control.
Data transfers within the AWS cloud are trickier: they aren't as easy to track, and their pricing varies. The price depends not only on which region the data is coming from but also on the region it is being transferred to.
Even within a single region, prices differ depending on whether data moves between Availability Zones or stays within the same one.
The AWS data transfer cost across regions is $0.02/GB; however, remember that provisioning all of the services you’ll need to connect the two regions, which can rack up costs outside of your AWS bill, is also a factor in your budget.
Transferring within a region in the same Availability Zone is free if the transfer uses private IP addresses, but transfers over public IP addresses cost $0.01/GB. Data that moves between Availability Zones in one region also costs $0.01/GB. And if you use Amazon S3 for storage in conjunction with the Amazon S3 Transfer Acceleration service, be aware that it carries additional fees.
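To see how these rates interact, here's a minimal back-of-the-envelope sketch in Python. The per-GB figures are the illustrative rates quoted above, not authoritative prices; always confirm them against the current AWS price list for your regions.

```python
# Illustrative per-GB transfer rates, taken from the figures quoted above.
# Verify against the official AWS price list before budgeting with them.
RATES_PER_GB = {
    "cross_region": 0.02,        # between two AWS regions
    "cross_az": 0.01,            # between AZs within one region
    "same_az_public_ip": 0.01,   # within one AZ, over public IPs
    "same_az_private_ip": 0.00,  # within one AZ, over private IPs
}

def monthly_transfer_cost(gb_by_path: dict[str, float]) -> float:
    """Sum the cost of each transfer path, in USD."""
    return sum(RATES_PER_GB[path] * gb for path, gb in gb_by_path.items())

# Example: 500 GB cross-region, 2 TB cross-AZ, 5 TB same-AZ over private IPs
print(monthly_transfer_cost({
    "cross_region": 500,
    "cross_az": 2048,
    "same_az_private_ip": 5120,
}))  # -> 30.48
```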
The answer to high transfer costs is optimization: making sure that the ways you store, measure, and move your data all keep your TCO within acceptable limits.
The following section will present some recommendations for making that happen, but keep in mind that you don’t have to create solutions for all of these problems on your own.
There are ready-to-use services out there that can make it easy to manage transfer costs. NetApp’s Cloud Volumes ONTAP (formerly ONTAP Cloud) and Cloud Sync combine to offer powerful aids for keeping your AWS costs under control.
AWS offers a number of tools that you can use to measure your costs. The Billing & Cost Management Dashboard gives a general overview of what you are paying. Cost Explorer offers a useful graphical display of your usage rates and associated costs. And the AWS Simple Monthly Calculator is a good utility for estimating the costs of upcoming projects.
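The data behind Cost Explorer is also available programmatically. Here's a minimal sketch using boto3's Cost Explorer client; the date range is a placeholder, and the USAGE_TYPE_GROUP value is an assumption you should verify against your own account (for example, with get_dimension_values) before relying on it.

```python
import boto3

# Cost Explorer is served from us-east-1 regardless of where you run.
ce = boto3.client("ce", region_name="us-east-1")

# Pull one month of internet-egress spend. The usage type group below is
# an assumed example value; list the groups valid for your account first.
response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={
        "Dimensions": {
            "Key": "USAGE_TYPE_GROUP",
            "Values": ["EC2: Data Transfer - Internet (Out)"],
        }
    },
)

for period in response["ResultsByTime"]:
    cost = period["Total"]["UnblendedCost"]
    print(period["TimePeriod"]["Start"], cost["Amount"], cost["Unit"])
```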
One other native AWS utility that's wise to set up is Billing Alarms. Billing Alarms come in handy when you have a set budget that you don’t want to exceed within a certain time period. Free Tier users in particular should take note of this tool, since they are automatically charged if they go over the free-usage limit.
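Under the hood, a Billing Alarm is an ordinary CloudWatch alarm on the EstimatedCharges metric, so it can be scripted. Here's a hedged sketch, assuming billing metric alerts are already enabled in the Billing console (the metric lives only in us-east-1) and that the SNS topic ARN is replaced with your own:

```python
import boto3

# Billing metrics are published only to us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="monthly-bill-over-100-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                    # AWS updates this metric roughly every 6 hours
    EvaluationPeriods=1,
    Threshold=100.0,                 # alarm once estimated charges pass $100
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder ARN: point this at your own SNS topic for notifications.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```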
If you are using Cloud Volumes ONTAP on AWS, you can also turn to the Cloud Volumes ONTAP Calculator to calculate and adjust your TCO.
Data compression and data deduplication are both useful ways to cut down on the costs of data transfers, and both are features of Cloud Volumes ONTAP. Data compression frees up disk space by packing your data into a smaller footprint.
Compression also makes transfers less expensive: when it comes time to move files, you won’t burn through as much bandwidth as you would with uncompressed files.
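As a rough illustration of the principle (not how Cloud Volumes ONTAP implements it), here's a minimal sketch that gzips a file before uploading it to Amazon S3 with boto3, so fewer billable bytes cross the wire. The bucket and key names are placeholders.

```python
import gzip
import shutil

import boto3

def compress_and_upload(local_path: str, bucket: str, key: str) -> None:
    compressed_path = local_path + ".gz"
    # Pack the file into a smaller payload before it crosses a billable link.
    with open(local_path, "rb") as src, gzip.open(compressed_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    boto3.client("s3").upload_file(compressed_path, bucket, key)

# Placeholder names for illustration only.
compress_and_upload("app.log", "my-archive-bucket", "logs/app.log.gz")
```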
Data deduplication is a data reduction technique dedicated to removing duplicate data from storage. By analyzing the data for repeating patterns, deduplication identifies duplicates.
Once identified, these duplicates are replaced with a link to the original version on which the pattern was based. Deduplication makes storage a lot more efficient, and by lowering the total amount of data stored, it keeps transfer costs down.
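Here's a toy, file-level illustration of that hash-and-link idea in Python. Real deduplication (including Cloud Volumes ONTAP's) works at the block level, but the principle is the same: keep one physical copy and point duplicates back at it.

```python
import hashlib
from pathlib import Path

def deduplicate(paths: list[str]) -> dict[str, str]:
    """Map each duplicate path to the first path seen with identical content."""
    seen: dict[str, str] = {}   # content hash -> original path
    links: dict[str, str] = {}  # duplicate path -> original path
    for path in paths:
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        if digest in seen:
            links[path] = seen[digest]  # duplicate: replace with a "link"
        else:
            seen[digest] = path         # first copy: keep as the original
    return links
```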
Copying all the files over from scratch every time you need to sync data is extremely wasteful. Incremental synchronization saves on data transfer costs by limiting the amount of data that actually needs to be transferred.
Once a full synchronization takes place, future syncs only transfer data that differs from that baseline. Usually, this means only files that are new or that have changed since the previous sync. For NetApp users, Cloud Sync offers this bandwidth-saving feature.
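To make the technique concrete, here's a bare-bones local sketch in Python that skips any file whose size and modification time match the previous run's baseline. It illustrates the general principle, not how Cloud Sync is implemented.

```python
import shutil
from pathlib import Path

def incremental_sync(src_dir: str, dst_dir: str) -> None:
    for src in Path(src_dir).rglob("*"):
        if not src.is_file():
            continue
        dst = Path(dst_dir) / src.relative_to(src_dir)
        # Skip files that match the baseline from the previous sync.
        if dst.exists():
            s, d = src.stat(), dst.stat()
            if s.st_size == d.st_size and s.st_mtime <= d.st_mtime:
                continue
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)  # copy2 preserves mtime for the next run's check
```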
Cloud migration is a smart move for companies to make for lots of reasons, but sometimes costs can wind up being higher than expected. It takes careful planning to manage all of the services used by an account, and one of the areas that needs special attention is data transfer.
By keeping a close eye on usage rates and utilizing a number of data management tools and techniques, it is still possible to get the full benefit of a cloud deployment without overpaying for it.
Data compression, data deduplication, and incremental syncing can all be used to efficiently store and transfer your data. These methods are easy to use when you take advantage of products such as NetApp Cloud Volumes ONTAP and Cloud Sync.