July 3, 2019
Not all data is created equal. In the IT world we typically differentiate between hot and cold data: hot data is accessed frequently and requires low-latency performance, while cold data is accessed only rarely, such as in compliance scenarios, where latency is not a major concern. In addition, sensitive data, such as personally identifiable information (PII) or business-critical records, requires stricter levels of protection during storage than general data.
In order to meet the needs of different data types, data storage—whether it’s on-prem or in the cloud—is available in performance and capacity tiers. Performance tiers for hot data use cutting-edge, high-performance storage technologies and are more expensive. The less expensive capacity tiers use legacy technologies or cloud storage infrastructures to retain cold data over time.
In this blog post we explore the concept of moving data among storage tiers, its benefits, and how NetApp Cloud Tiering provides a seamless hybrid Data Fabric for NetApp AFF users.
Data Tiering 101
Data tiering is the process whereby data is shifted from one storage tier to another as its state changes dynamically from hot to cold and vice versa.
The concept of data tiering is not new. Computer architectures, for example, have at least three tiers (CPU data caches, secondary disk storage, and tertiary backup storage) to meet data needs as they change, from real-time processing to persistent storage and archiving. However, data tiering has been taken to a new level with the advent of storage virtualization in general, and with public cloud storage in particular.
Today’s software-defined data storage platforms automatically detect a data set’s current state and seamlessly shift the data among tiers and across infrastructures so that performance SLAs are met cost-effectively. Broadly, data falls into three main tiers: a performance tier for hot, latency-sensitive data; a capacity tier for cold, infrequently accessed data; and an archive tier for long-term retention and compliance.
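To make the tiering decision concrete, here is a minimal Python sketch of the general idea, not NetApp's implementation; the tier names and the 7-day/30-day thresholds are assumptions chosen purely for illustration:

```python
from datetime import datetime, timedelta

# Illustrative thresholds; real platforms expose these as tunable policies.
HOT_WINDOW = timedelta(days=7)     # accessed within a week  -> performance tier
COLD_WINDOW = timedelta(days=30)   # untouched for a month   -> capacity tier

def choose_tier(last_accessed: datetime) -> str:
    """Classify a data set by last-access age and return a target tier name."""
    age = datetime.utcnow() - last_accessed
    if age <= HOT_WINDOW:
        return "performance"   # low-latency all-flash storage
    if age <= COLD_WINDOW:
        return "capacity"      # lower-cost object or HDD storage
    return "archive"           # long-term retention

# Example: a volume last touched 45 days ago would land in the archive tier.
print(choose_tier(datetime.utcnow() - timedelta(days=45)))
```

A real tiering engine runs this kind of classification continuously and policy-driven, rather than as a one-off script, which is exactly why automation matters as data sets grow.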
Data Tiering Benefits
With data sets and storage environments growing exponentially, there’s no way that IT teams can implement data tiering manually. Automated workflows and processes are necessary in order to reap the benefits of data tiering, whose main value propositions can be summarized as follows:
- CAPEX savings by extending the capacity of on-prem high-performance storage platforms. Instead of provisioning additional expensive storage arrays, offload less critical data to capacity tiers.
- OPEX-model spending using scalable, consumption-based cloud storage resources. Cloud service providers offer tiered object storage options whose pricing varies according to performance SLAs.
- Many data tiering platforms implement storage optimization methodologies such as data compression, deduplication, and thin provisioning, which considerably lower data storage footprints and costs. These storage efficiency features can be particularly impactful when it comes to tiering backup or archive volumes (see the deduplication sketch after this list).
- Moving data automatically among tiers ensures that performance tiers are freed up to meet their SLAs for the data that needs the fastest read/write response times. As a result, end users and apps experience significantly better performance.
- Automated, policy-based data tiering provides a high level of flexibility so that an organization can meet its ever-changing business requirements.
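As a rough illustration of the storage efficiency point above, the toy Python sketch below (purely illustrative, not how any particular storage engine works) hashes fixed-size blocks and estimates how much a deduplicated footprint shrinks:

```python
import hashlib

BLOCK_SIZE = 4096  # bytes; real systems use a variety of block/chunk sizes

def dedup_ratio(data: bytes) -> float:
    """Return unique-blocks / total-blocks for a byte stream (1.0 = no duplicates)."""
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    unique = {hashlib.sha256(b).hexdigest() for b in blocks}
    return len(unique) / len(blocks) if blocks else 1.0

# Example: a payload made of many repeated blocks dedupes very well.
payload = (b"A" * BLOCK_SIZE) * 90 + (b"B" * BLOCK_SIZE) * 10
ratio = dedup_ratio(payload)
print(f"Physical footprint is roughly {ratio:.0%} of the logical size")  # ~2%
```

Backup and archive volumes full of near-identical copies behave much like this toy payload, which is why they benefit so strongly from deduplication when tiered to capacity storage.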
NetApp Cloud Tiering
NetApp is an industry leader in enterprise-grade, high-performance software-defined storage. Leveraging NetApp’s tried and true FabricPool technology, the NetApp Cloud Tiering service automatically shifts cold data from performant on-prem All-Flash FAS (AFF) SSD storage arrays to low-cost object storage tiers on Amazon S3, Azure Blob, or IBM Cloud Object Storage.
With Cloud Tiering there is no need to make any change in the application layer, since the tiering occurs exclusively in the data layer. Cloud Tiering works invisibly in the background, with no disruption of processes. The service is available in two license models: a consumption-based pay-as-you-go (PAYGO) model and an upfront, term-based bring-your-own-license (BYOL) model.
For maximum flexibility, Cloud Tiering offers different data tiering policies:
- Tiering cold data: Data that has not been accessed for 30 days (or any user-defined period) is automatically sent to cloud object storage until it is needed again, at which point it is automatically moved back to the on-prem performance storage tier (see the configuration sketch after this list).
- Tiering snapshots only: NetApp’s incremental, point-in-time snapshots are shifted to cloud object storage after a short delay and retrieved to the performance storage tier as needed.
- Backup tiering (coming soon): With this option, users will be able to tier entire backup copies to the cloud, where storage costs are significantly lower.
- All (coming soon): This policy will make it possible to tier an entire AFF storage system's data to the cloud.
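For readers who work with ONTAP directly, FabricPool tiering policies are set per volume. The Python sketch below assumes the ONTAP REST API with a `tiering.policy` field and a `min_cooling_days` setting; the host, credentials, and exact field names are assumptions to verify against NetApp's documentation, not a definitive Cloud Tiering interface:

```python
import requests

ONTAP_HOST = "https://cluster.example.com"   # hypothetical cluster address
AUTH = ("admin", "password")                  # use proper credential handling in practice

def set_tiering_policy(volume_name: str, policy: str = "auto", cooling_days: int = 30):
    """Apply a FabricPool tiering policy to a volume via the ONTAP REST API (assumed fields)."""
    # Look up the volume UUID by name.
    r = requests.get(
        f"{ONTAP_HOST}/api/storage/volumes",
        params={"name": volume_name},
        auth=AUTH,
        verify=False,  # demo only; validate certificates in production
    )
    r.raise_for_status()
    uuid = r.json()["records"][0]["uuid"]

    # Patch the tiering policy; "auto" corresponds to the "tiering cold data" policy above.
    r = requests.patch(
        f"{ONTAP_HOST}/api/storage/volumes/{uuid}",
        json={"tiering": {"policy": policy, "min_cooling_days": cooling_days}},
        auth=AUTH,
        verify=False,
    )
    r.raise_for_status()

# Example: tier blocks that have been cold for 30 days (the default period described above).
# set_tiering_policy("project_data", policy="auto", cooling_days=30)
```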
In short, the NetApp Cloud Tiering service lets you extend your data center storage capabilities to the cloud with zero effort: no changes to the application layer, no changes to existing workflows and processes, and the same familiar ONTAP tools and interfaces. In addition, Cloud Tiering extends effective AFF storage capacity by up to 20X, letting you support more workloads with less on-prem infrastructure. Last but not least, on average 80% of data is cold and can be shifted to low-cost capacity tiers in the public cloud. The result: significant reductions in data center footprints as well as storage costs.