All data needs to be protected. For some companies that means keeping a local backup copy in the data center. However, that can pose a number of difficulties, including clogging up an expensive high-performance storage tier just for storing backups.
As an alternative to keeping backup data in the same location as the primary data, off-site backups can be used, and they can form a central part of a disaster recovery strategy. The underlying premise is simple: protect your essential data, stored on local file servers or in a data center, with a backup copy kept in a location geographically separate from the primary site. In the event of a disaster that makes your data unusable or unreachable, you can recover it from that copy.
How can the cloud be leveraged for off-site backup if your storage system is a NetApp AFF or SSD-backed FAS appliance in the data center?
In this article, we will show how the NetApp Cloud Tiering service and its All tiering policy can be used to shift backups stored in volumes to the cloud as an off-site location, allowing you to reclaim space on your secondary NetApp appliance.
Current Off-Site Backup Methods
Until recently, the primary method to create an off-site backup was to back up data to some form of removable media, such as tape, hard disk, or DVD, and load those backups into a vehicle for delivery to another office, data center, or secure storage facility. In some cases, such as backups of large volumes of data or secured offline storage, this method is still in use.
But for the most part, backup methods have changed. Over time, the cost of high-bandwidth data links to remote sites or the internet has fallen, making it cost-effective to perform off-site backups over a private network link between geographic locations, or across the internet over a virtual private network. While this mechanism is valid, it requires infrastructure installed and maintained at another location, or a contract with an online secure storage provider, both of which have an associated cost.
The cloud is taking the cost reductions of off-site backup even further. With the advent of relatively cheap cloud storage options, using a public cloud provider's storage as the off-site backup location has become very popular. Backups can be uploaded to cloud storage over the internet or over a private network to the cloud vendor. This approach can require new software or hardware to manage the movement of backups to and from the cloud, which can be expensive and demand time-consuming infrastructure design. However, if you have a NetApp system, you can use cloud tiering to fulfill this requirement instead.
Moving Backups to the Cloud with Cloud Tiering’s All Tiering Policy
The performance tier of an on-premises storage system that is used entirely to house a backup copy contributes little to everyday operations. Mostly, it gathers dust while doing the important job of waiting for an emergency that may never happen. The Cloud Tiering service solves this inefficient use of performant storage by moving entire backup copies to cloud object storage, reclaiming valuable storage capacity in the data center while keeping the backup data immediately available if needed.
Cloud Tiering links the performance tier on NetApp AFF or SSD-backed FAS storage to cloud object storage on AWS, Azure, or Google Cloud.
To move these dormant datasets to the cloud, Cloud Tiering users can apply the All tiering policy. This significantly reduces the on-prem storage used by those volumes, freeing it up for more performance-demanding workloads.
The All tiering policy marks every 4 KB user data block as cold as soon as the tiering scan runs. Each set of 1,024 cold blocks that Cloud Tiering discovers on a volume is packed into a single 4 MB object that is immediately moved to the cloud tier, and this repeats until all of the volume’s data (excluding data ineligible for tiering) resides in the cloud tier.
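To make the packing arithmetic concrete, the figures above can be sketched in a few lines of Python. The 4 KB block size and 1,024-blocks-per-object values come from the description above; the 1 TiB volume size is purely a hypothetical example:

```python
# Illustrative arithmetic behind the All tiering policy's object packing.
# Block size and blocks-per-object come from the article; the volume size
# is a hypothetical example, not a product limit.
BLOCK_SIZE_KB = 4          # each user data block is 4 KB
BLOCKS_PER_OBJECT = 1024   # cold blocks packed into one cloud object

object_size_mb = BLOCK_SIZE_KB * BLOCKS_PER_OBJECT / 1024
print(object_size_mb)      # 4.0 -> each object sent to the cloud tier is 4 MB

# For a hypothetical 1 TiB volume of tierable user data:
volume_kib = 1 * 1024 * 1024 * 1024          # 1 TiB expressed in KiB
total_blocks = volume_kib // BLOCK_SIZE_KB   # number of 4 KB blocks
objects_needed = total_blocks // BLOCKS_PER_OBJECT
print(objects_needed)      # 262144 objects of 4 MB each
```

In other words, a fully tiered 1 TiB volume would translate into roughly a quarter of a million 4 MB objects in the cloud bucket.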
Unlike with Cloud Tiering’s other tiering policies, once the All policy sends a volume’s data to the cloud, accessed data blocks are read directly from cloud storage instead of being moved back on-prem, which is ideal for backups. Other good candidates for the All tiering policy are volumes containing completed projects, historical reports, or SnapMirror® data protection/secondary volumes.
An example scenario would be to create a secondary volume from a SnapMirror backup relationship with a primary volume, apply the All tiering policy to that secondary volume so it is tiered to cloud storage, and schedule a once-a-day SnapMirror update. All user data backed up to the secondary volume (excluding ACLs, directory information, and file system metadata) is seamlessly moved to the cloud tier on arrival, without being written to the performance tier.
The result is a lower total cost of ownership: using the cloud for off-site backup requires no geographically separate facility, no secure storage contract, and no third-party backup software or hardware. CAPEX plus OPEX becomes just OPEX for the cloud storage capacity used.
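As a back-of-the-envelope illustration of that OPEX-only model, the monthly cost reduces to capacity times price. Both inputs in this sketch are hypothetical assumptions for illustration, not vendor quotes:

```python
# Hypothetical cost sketch: with cloud tiering, the ongoing cost of the
# off-site copy reduces to the object-storage capacity actually consumed.
# Both inputs below are illustrative assumptions, not real prices.
backup_size_gb = 10_000        # assumed size of the tiered backup data (10 TB)
price_per_gb_month = 0.02      # assumed object-storage price, USD per GB-month

monthly_opex = backup_size_gb * price_per_gb_month
print(f"${monthly_opex:,.2f} per month")
```

There is no hardware to depreciate and no remote facility to staff; if the tiered dataset shrinks, the bill shrinks with it.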
Though some of the underlying technology has changed, backup remains one of the most important parts of any IT deployment. In this article we saw how a backup solution can be implemented quite easily with the Cloud Tiering service for AFF and SSD-backed FAS appliances, in more than one way: either tiering a volume of cold data or tiering a SnapMirror secondary volume to cloud storage with the All tiering policy.
Compared to other off-site backup solutions that rely on a physical data center alone, the total cost of ownership is lower, as the cost is simply the price of the cloud storage being used. If you’ve never kept data outside the data center before, backup is one of the safest ways to get started in the cloud.
Start paying less for your backup solution and try Cloud Tiering service today.