BlueXP Blog

Cloud Tiering Policies: Which One Is Best For You?

Written by Oded Berman, Product Evangelist | Aug 27, 2019 9:23:18 AM

NetApp’s new Cloud Tiering service, built on ONTAP’s powerful FabricPool technology, provides a quick path to adopting a cloud strategy by automatically tiering data from high-performance on-prem AFF or SSD-backed FAS systems to a cloud-based object store on AWS or Azure, with Google Cloud Platform and IBM Cloud Object Storage support coming soon.

What kinds of data can be tiered? The answer depends on the type of business your organization conducts. Cloud Tiering offers three policies that cater to the specific needs of different business models. In this article we’ll look at each of these tiering policies to help you determine which one is best suited to your workloads.

Cloud Tiering Policies

When You Have Lots of Data that Won’t Be Used Much: Auto-Tiering

It’s not uncommon for AFF and FAS storage users to have data pile up in their systems. That data could be associated with productivity software, completed projects, or old datasets, and it forces those companies to keep investing in expensive new SSD disk shelves. Consider the storage requirements that a media company has on its hands.

Media companies need fast access to their data storage so they can actively use and modify videos and photos. Typically, that data is in active use for about a week. But after the initial processing work is over, the data sits in the storage system taking up valuable space. It can’t be deleted, because it must remain accessible in case it’s needed in the future. Historically, the solution these media companies used was to add a less-expensive storage rack next to the fast AFF and manually copy all the old, unused files to the more affordable storage. That takes a lot of time and overhead resources that could be spent more productively on other projects.

For media companies like this, or any other company dealing with large datasets that are rarely accessed anymore, if at all, the best Cloud Tiering policy is the Auto-tiering option. Auto-tiering is the default tiering policy: Cloud Tiering automatically scans your data, identifies which data is cold, and moves it to cloud storage. By default, Auto-tiering considers any data, including snapshot data, that has not been accessed in over 31 days to be cold. Users can also set a different minimum cooling period, based on their specific needs, by modifying volume attributes at the advanced CLI privilege level.
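To give a concrete picture, changing the cooling period means modifying a volume attribute from the ONTAP CLI at the advanced privilege level. The following is an illustrative sketch only: the SVM and volume names (svm1, vol1) are placeholders, and the supported range of cooling days depends on your ONTAP release.

```
cluster1::> set -privilege advanced
cluster1::*> volume modify -vserver svm1 -volume vol1 -tiering-minimum-cooling-days 45
cluster1::*> volume show -vserver svm1 -volume vol1 -fields tiering-policy,tiering-minimum-cooling-days
```

The final command simply displays the volume’s current tiering settings so you can confirm the change.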

When You Need to Keep a Lot of Snapshots On-Prem for a Long Time: Snapshot-Only Tiering

Many users rely on NetApp Snapshot™ technology to create point-in-time backups of the data stored on their AFF and FAS systems. This is especially true for developers, who, while developing and testing solutions, have to make sure their work is backed up in case something goes wrong with a test or a build. As space-efficient as snapshots are, they can easily take up more than 10% of a system’s total storage space. Since these point-in-time copies are rarely used, keeping them on high-performance SSDs is inefficient, and there is a clear case for tiering.

If developers need to restore data from a snapshot, the most recent snapshot is usually the version they’ll use, not one taken ten days earlier. So why bother storing more than the most recent snapshot? Because there is always a chance that the ten-day-old snapshot is exactly the one that needs to be restored. Many companies have backup policies that require snapshots to be kept on hand, on primary systems, for precisely this reason.

In all these cases, Cloud Tiering’s Snapshot-only policy provides clear benefits: it frees up around 10% of your storage space while leveraging a storage tier that requires no additional overhead investment.

The Snapshot-only tiering policy moves only cold snapshot data blocks, on volumes that have snapshots stored on them. The default cooling period before cold snapshot blocks are tiered is two days, but that period can be extended to as long as 63 days.
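For example, assigning the Snapshot-only policy to a volume and lengthening its cooling period might look like the following in the ONTAP CLI. This is a sketch with placeholder names (svm1, vol1); note that the cooling-days option requires the advanced privilege level.

```
cluster1::> volume modify -vserver svm1 -volume vol1 -tiering-policy snapshot-only
cluster1::> set -privilege advanced
cluster1::*> volume modify -vserver svm1 -volume vol1 -tiering-minimum-cooling-days 14
```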

Note: The Snapshot-only policy is not a substitute for creating secondary data copies for disaster recovery or backup and archive purposes.

When You Have Whole Volumes to Upload: The All Tiering Policy

But what if entire volumes are taking up valuable AFF or SSD-backed FAS storage space that you could use for more important workloads? This is the situation architectural firms often find themselves in.

Architectural firms that design buildings, roads, or other construction projects create enormous amounts of data for every project they take on. These projects usually span many years and produce a lot of large files that need to be available quickly during the working process. After a project is completed, that data must be retained and remain accessible for potential later use, but it is unlikely to be read frequently.

And what if you are replicating volumes to another AFF or SSD-backed FAS system for disaster recovery or for backup and long-retention archive? Data replicated to secondary systems for disaster recovery typically shares a 1:1 ratio with the primary data set. When the secondary copy is used for backup and long-retention archive, the ratio can be significantly greater, making the secondary storage space you require prohibitively expensive. That forces you through a complex decision-making process to determine which data will be protected, how, and which data will not.

In cases such as these, and whenever dealing with legacy reports or historical data, the All tiering policy is the most suitable. This policy immediately marks all the data within a volume as cold, tiering the entire volume to the object storage tier at once. It is beneficial both when you have a finished project and when protecting a volume by replicating it to a secondary system, because it allows complete data sets to be kept on less expensive cloud storage indefinitely while remaining fully accessible if needed.

As the name suggests, the All tiering policy moves all the data on a volume from on-premises primary or secondary storage to the cloud storage tier. This policy is coming soon.
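Once available, applying the All policy should follow the same pattern as the other policies. A hedged sketch with placeholder names (svm1, dr_vol):

```
cluster1::> volume modify -vserver svm1 -volume dr_vol -tiering-policy all
```

For a replicated secondary volume, the tiering policy can also be set when the destination volume is created, so the entire data set is tiered as soon as replication completes.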

Note: The All tiering policy is not a substitute for creating secondary data copies for disaster recovery or backup and archive purposes.

More Cloud Tiering Benefits

Besides the ability to choose tiering options that cater best to your deployment, Cloud Tiering offers many other benefits.

Transforming CAPEX into OPEX Spending

Many customers struggle with high CAPEX and want to adopt new technologies and strategies. The quickest and most widespread solution to this problem is to adopt the OPEX spending model that cloud technologies make possible. For NetApp All-Flash FAS and SSD-backed FAS storage users, the easiest way to do that is to take advantage of the new Cloud Tiering service.

A First Step into the Cloud

Because Cloud Tiering can be set up with only a few clicks on NetApp’s Cloud Central portal, this service can put a company’s cloud strategy into action almost immediately. Cloud Tiering creates a hybrid cloud architecture, which is the first stop many organizations make as they begin their cloud journeys.

Lifting—but Not Shifting—to the Cloud

One of the biggest obstacles to moving to the cloud is the need to refactor legacy applications to work on cloud infrastructure. Cloud Tiering eliminates that need by making it possible to use the cloud with no changes to the application layer or to current workflows and processes.

More Space for High Demand Workloads

Because Cloud Tiering frees up space on your AFF or FAS storage system, it allows data growth for applications that require high-performance infrastructure, without further investment in additional data center capacity or the infrastructure to house it (power, cooling, space).

Getting More for Your Investment

Powerful storage systems are high-cost items, and companies want to get the most out of them given the price tag. Cloud Tiering lets these machines do more for longer, optimizing ROI.

Conclusion

The primary role of all Cloud Tiering policies is to move cold data to cloud-based object storage, which can help any company optimize both primary and secondary storage usage. Currently, Auto-tiering and Snapshot-only are the only policies available; the All tiering policy is coming soon, expanding the possibilities for using this powerful new service.