So, you have a great NetApp appliance, but its storage capacity is being used up faster than expected due to rapid data growth and expanding regulatory requirements. This has a significant effect on your total cost of ownership (TCO) and may require additional CAPEX investments to buy expensive new storage systems and cover increased data center operational costs.
One solution to handle these challenges is to consider offloading some of the data on your on-prem system to the cloud. If you’re using a NetApp AFF or an SSD-backed FAS system, the Cloud Tiering service can be used to integrate your on-prem systems with cloud-based object storage on AWS, Azure, or Google Cloud, allowing you to efficiently manage your rarely accessed data.
But which data should you tier to the cloud? If you’ve never used the cloud before, you may still have reservations about moving your data at all. What data is critical for your operation? Where is it in the data lifecycle?
In this blog we’ll look at three types of data that are sufficiently low-risk and can easily be moved to the cloud, benefitting you and your high-performance NetApp storage appliances.
Data is a critical element in business, whether in the form of documents, spreadsheets, or application storage. Even after this data stops being used on a regular basis, it still must be kept secure, sometimes for an indefinite period of time. Such data can include older company documentation, historical sales reports, healthcare data retained for regulatory reasons, and data analytics source data, which, once cleaned, must be kept for validation.
The typical method of protection is to archive the data regularly to another storage medium, or another NetApp system, which results in multiple copies of the data residing on expensive, high-performance storage.
With Cloud Tiering, the archive destination volume is tiered to the cloud: the archived data is migrated to cloud object storage, yet it can still be accessed as if it were stored locally.
Without making any infrastructure changes, you have recovered substantial high-performance storage capacity, increased your archive limits with your cloud object storage capacity, and converted any additional storage requirements to OPEX at cloud object storage prices.
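If you're curious what this looks like under the hood, here is a minimal Python sketch that sets a volume's tiering policy to "all" through the ONTAP REST API (available in ONTAP 9.6 and later). The cluster address, credentials, and the volume name vault_dest are hypothetical placeholders used purely for illustration; Cloud Tiering can also manage this for you without any scripting.

```python
# Minimal sketch: set an archive destination volume's tiering policy to "all"
# via the ONTAP REST API (9.6+), so all of its cold blocks move to the cloud tier.
# The cluster address, credentials, and volume name below are placeholders.
import requests

CLUSTER = "https://cluster-mgmt.example.com"   # hypothetical cluster management address
VOLUME_NAME = "vault_dest"                     # hypothetical archive destination volume

session = requests.Session()
session.auth = ("admin", "password")           # use a least-privilege account in practice
session.verify = False                         # only if the cluster uses a self-signed cert

# Look up the volume's UUID by name.
vol = session.get(
    f"{CLUSTER}/api/storage/volumes",
    params={"name": VOLUME_NAME, "fields": "uuid,tiering.policy"},
).json()["records"][0]

# Apply the "all" tiering policy to the archive volume.
resp = session.patch(
    f"{CLUSTER}/api/storage/volumes/{vol['uuid']}",
    json={"tiering": {"policy": "all"}},
)
resp.raise_for_status()
print(f"{VOLUME_NAME}: tiering policy set to 'all'")
```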
Documentation and data change over time, and occasionally files are lost or corrupted. Snapshot copies, taken according to your volumes' snapshot policies, are point-in-time images of your data that let you jump back and recover easily, and they are useful in many other situations as well.
The best example of how snapshot copies help is during a ransomware infection. In this scenario, an infected machine can be quickly isolated and its files recovered from a snapshot copy taken just a couple of hours before the attack, rather than from the previous night's backup.
These snapshots are essential for short-term recovery and cloning, and although highly space-efficient, over time they consume valuable storage capacity (often up to 10% of a volume). The data in these snapshots is crucial but doesn't necessarily need to be stored on the original data volume, which makes it an ideal low-risk use case for moving data to the cloud.
With an appropriate data tiering policy, volume snapshot data would be migrated to cloud object storage, where the cost of storage is considerably less. This frees up hot-tier capacity to be used where high performance is required, and leaves the snapshot data accessible at any time for immediate use should something impact the original files.
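To put rough numbers on that, here is a simple back-of-the-envelope calculation in Python. The volume size and per-GB prices are hypothetical assumptions used only for illustration, not actual NetApp or cloud provider pricing; plug in your own figures.

```python
# Back-of-the-envelope sketch of the capacity and cost effect of tiering snapshot
# blocks to object storage. All numbers are hypothetical assumptions, not quotes.
volume_capacity_gb = 10_000       # assumed provisioned volume capacity
snapshot_share = 0.10             # snapshots often consume up to ~10% of a volume
onprem_cost_per_gb = 0.30         # assumed monthly cost of high-performance SSD capacity
object_cost_per_gb = 0.02         # assumed monthly cost of cloud object storage

snapshot_gb = volume_capacity_gb * snapshot_share
monthly_savings = snapshot_gb * (onprem_cost_per_gb - object_cost_per_gb)

print(f"Hot-tier capacity reclaimed: {snapshot_gb:,.0f} GB")
print(f"Estimated monthly saving:    ${monthly_savings:,.2f}")
```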
A catastrophic loss of storage occurs when you lose access to your local storage appliance. This could be due to a loss of power, a loss of network access to the appliance, or even a loss of physical access to your office.
Companies should always have a plan for this type of emergency. Usually, that plan involves mirroring crucial volumes to an equivalent NetApp system located at another office or in a data center. Of course, the additional capacity comes at additional cost, and the DR copy sits idle unless a disaster actually takes place, which is hopefully never.
This makes tiering the secondary volumes in your DR NetApp system to the cloud a great solution. It reduces the on-prem capacity requirements significantly, which results in substantially lower CAPEX spending and a much lower total cost for your DR solution.
Now that you’ve been introduced to three easy, low-risk data types that you can tier to the cloud, let’s take a closer look at how your AFF or SSD-backed FAS system can do that with Cloud Tiering.
Cloud Tiering can automatically transfer your data to the cloud, where it will be stored in an object storage service on Amazon S3, Azure Blob storage, Google Cloud Storage or any combination of the three.
Based on FabricPool technology, Cloud Tiering automatically detects infrequently used data based on three different tiering policies, and moves those cold data blocks to the cloud without any user intervention. If reads are performed on the tiered data, Cloud Tiering automatically moves the data back to your on-prem storage system for immediate use in performance-intensive workloads.
The metadata, including the directory structure and the details of each file, remains on-prem. Clients connecting to a share are unaware of where the data physically resides, and implementation requires no changes to your infrastructure or application flows.
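If you'd like to see where each of your volumes currently stands, the sketch below queries the ONTAP REST API (9.6 and later) and prints every volume's tiering policy and used capacity. As before, the cluster address and credentials are placeholders for illustration.

```python
# Sketch: list each volume's current tiering policy and used capacity, to spot
# candidates for tiering. Cluster address and credentials are placeholders.
import requests

CLUSTER = "https://cluster-mgmt.example.com"
session = requests.Session()
session.auth = ("admin", "password")
session.verify = False                # only for self-signed certificates

volumes = session.get(
    f"{CLUSTER}/api/storage/volumes",
    params={"fields": "name,tiering.policy,space.used"},
).json()["records"]

for vol in volumes:
    policy = vol.get("tiering", {}).get("policy", "none")
    used_gib = vol.get("space", {}).get("used", 0) / 1024**3
    print(f"{vol['name']:<24} policy={policy:<14} used={used_gib:,.1f} GiB")
```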
With Cloud Tiering, your volumes can effectively offer far more capacity than your on-prem system alone. Since Cloud Tiering automatically migrates cold data to the cloud and your on-prem system keeps only the metadata and hot data (based on the policy selected), total capacity can be expanded by up to 50x.
These simple examples show how easy it can be to move data to the cloud and recover space on your high-performance storage.
To start moving your low-risk data to the cloud, try out Cloud Tiering now.