
Myth Busting: Cloud vs. On-Premises - Which is Faster?

October 2, 2019

Topics: Azure NetApp Files

Even though the cloud storage market has continued to grow into a multi-billion-dollar industry that’s expected to reach $207.05 billion by 2026, misconceptions about the technology still abound. In the “cloud vs. on-premise” debate, a common misperception has been that cloud storage performance is inferior to that of on-premises. Yet with the right services and optimal configuration, cloud storage can provide performance levels on par with—and in many cases exceeding—on-premises storage.

Moving applications to the Azure cloud involves careful planning and assessment, including identifying the cloud services most suited to your organization’s needs. For applications with high-performance requirements, storage performance is often a deciding factor. For applications that leverage file shares in the cloud (for example, databases, analytics, and 3D modeling), specific performance demands must often be met before deployment in or migration to the cloud.

Azure NetApp Files (ANF), the latest Azure cloud-native shared file service powered by the trusted NetApp® ONTAP storage OS, enables enterprises to meet the demands of their file share-based workloads.

Azure Cloud Storage: Performance Considerations

Until now, organizations opting for a self-hosted approach to deploying virtual file servers in Azure for NFS-based file shares have had to choose among Premium SSD, Standard SSD, and Standard HDD disk SKUs to meet the varying performance demands of their applications.

Premium SSD: Backed by premium high-performance SSD storage, these disk SKUs provide high throughput and low latency disk performance. These disks are best suited for performance-sensitive line-of-business applications or mission-critical workloads. This SKU can provide maximum throughput of up to 900MiB/s per disk for a 32TB disk SKU. But this high performance comes with a price tag of roughly $3,600 per month.

Standard SSD: Standard SSD also uses SSD in the backend, but offers lower performance throughput relative to Premium SSD SKU. It targets entry-level workloads in the cloud, such as dev/test environments and web servers, which have lower IOPS requirements. The maximum throughput per disk available with the largest SKU is up to 750MiB/s, at a monthly cost of around $2,500.

Standard HDD: With this SKU, data is stored in HDDs, thus targeting workloads with minimal performance requirements, such as home directories, backup, and non-critical data storage. The SKU can offer a maximum throughput per disk of up to 500MiB/s at a cost of over $1,300 per month.

Deployment of do-it-yourself file server clusters in Azure to provide NFS file services is often counterproductive and expensive. It also becomes difficult to meet the performance demands at scale, especially while storing terabytes of data. For example, the largest Azure disk is 32TB, and multiple disks of this type must be added to a storage pool to accommodate what could be 100TB of data.

In addition to disk charges and the higher cost of VMs that support multiple disks, the maximum storage throughput supported by the VM itself must also be taken into account, since it directly caps performance. All of these factors add to the complexity of the configuration.
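To make the do-it-yourself math concrete, here is a minimal sizing sketch in Python. It uses the rough figures quoted above for the largest Premium SSD disk (32TB, up to 900MiB/s, roughly $3,600 per month); actual limits and prices vary by region and SKU generation, so treat the numbers as illustrative assumptions only.

import math

# Rough per-disk figures quoted above for the largest Premium SSD SKU.
# Illustrative assumptions only; actual limits and prices vary by region
# and SKU generation.
DISK_SIZE_TB = 32
DISK_THROUGHPUT_MIB_S = 900
DISK_MONTHLY_COST_USD = 3600

def diy_pool_estimate(capacity_tb, required_throughput_mib_s):
    """Estimate how many Premium SSD disks a self-managed NFS server would
    need to meet both a capacity target and a throughput target."""
    disks_for_capacity = math.ceil(capacity_tb / DISK_SIZE_TB)
    disks_for_throughput = math.ceil(required_throughput_mib_s / DISK_THROUGHPUT_MIB_S)
    disks = max(disks_for_capacity, disks_for_throughput)
    return {
        "disks": disks,
        "raw_capacity_tb": disks * DISK_SIZE_TB,
        "aggregate_throughput_mib_s": disks * DISK_THROUGHPUT_MIB_S,
        "monthly_disk_cost_usd": disks * DISK_MONTHLY_COST_USD,
    }

# Example: 100TB of data with a 2,000MiB/s throughput target.
# Note that the VM's own storage throughput cap may still be the bottleneck.
print(diy_pool_estimate(100, 2000))
# {'disks': 4, 'raw_capacity_tb': 128, 'aggregate_throughput_mib_s': 3600,
#  'monthly_disk_cost_usd': 14400}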

Azure NetApp Files: The Cloud-Native File Service

Azure NetApp Files addresses the pain points of self-hosted file share services by offering a fully managed native shared file service to meet the scale and performance demands of enterprise file workloads. ANF can be provisioned directly from the Azure portal in a matter of minutes with near bare-metal data performance capabilities. With ANF, you can easily meet the demands of performance-intensive workloads such as SAP, Oracle, SQL, and VDI. Opting for a native file share service also eliminates the need to rearchitect applications in order to facilitate migration and deployment.

Next, we’ll explore common scenarios in which ANF’s capabilities and performance could be of benefit.

Lift and shift workloads to the cloud: ANF offers file services for both NFS and SMB protocols, thereby supporting lift-and-shift migration of both Windows and Linux applications with file share requirements. It also supports all the necessary features of an enterprise-class file share service. Those features include client access control (read-only and read-write) and AD integration, along with NetApp’s data replication capabilities using Cloud Sync for easy migration of data to the cloud.

Capex or opex? ANF is ideal for organizations that don’t wish to reinvest in hardware to support their growing file storage capacity needs, because it can be leveraged to move data to the cloud without compromising application performance. It offers advanced data management capabilities, including rapid clones, snapshot copies, and encryption, all of which add value over traditional on-premises storage solutions.

Quick deployment for DevOps: ANF enables quick cloning of data to deploy dev/test environments in the cloud without compromising performance. This capability is useful in CI/CD scenarios for quick provisioning and testing of applications.

Database performance management: ANF helps you optimize storage costs with multiple service tiers to meet the performance demands of hot and cold data handled by databases. The service is resilient and highly available by default, allowing quick deployment of highly available databases in the cloud.

ANF for Performance-Intensive Workloads

Azure NetApp Files offers three service levels—Ultra, Premium, and Standard. The performance offered by each of these service levels is measured for every 1TB of volume quota assigned:

Ultra: 128MiB/s per 1TB of quota
Premium: 64MiB/s per 1TB of quota
Standard: 16MiB/s per 1TB of quota


ANF’s storage hierarchy consists of capacity pools, which range from as little as 4TB to as much as 500TB, and the volumes they contain. Multiple volumes can be created in a capacity pool, each ranging from 100GB to 100TB. The throughput of a volume is determined by the service level of the capacity pool from which it is provisioned and the quota assigned to the volume.

A 10TB volume quota provisioned from a capacity pool in the Ultra service tier will have a gross throughput of 1,280MiB/s (10TB * 128MiB/s). A volume with the same quota provisioned from a Standard tier pool, on the other hand, will deliver 160MiB/s (10TB * 16MiB/s). The Ultra tier is thus best suited to performance-sensitive applications working on active (hot) data, while the Standard tier should be employed when capacity is the priority and the data is largely cold. Users reap the best value by knowing their data well before spinning up volumes in ANF.
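That quota-times-service-level calculation can be expressed as a minimal Python sketch, assuming the per-TB figures from the table above:

# Throughput per 1TB of volume quota for each ANF service level,
# as listed in the table above.
SERVICE_LEVEL_MIB_S_PER_TB = {
    "Ultra": 128,
    "Premium": 64,
    "Standard": 16,
}

def volume_throughput_mib_s(quota_tb, service_level):
    """Gross throughput available to an ANF volume: quota * per-TB rate."""
    return quota_tb * SERVICE_LEVEL_MIB_S_PER_TB[service_level]

print(volume_throughput_mib_s(10, "Ultra"))     # 1280
print(volume_throughput_mib_s(10, "Standard"))  # 160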

Performance benchmark tests were conducted using the Vdbench utility for a hypothetical Linux application that used ANF for NFS volumes. The tests produced the following results for throughput-intensive and I/O-intensive access patterns at the Premium service level:

Throughput-Intensive:

Using 12 D32s_v3 virtual machines, a throughput of 4,523MiB/s was achieved during the test.

64KiB Throughput Test

I/O-intensive workloads:

The same configuration can provide read IOPS in excess of 300k:

8KiB I/O Test

As these test results show, throughput of 4.5GiB/s and more than 300k IOPS were achieved using ANF. Comparing cloud to on-premises, these figures are at least on par with on-premises storage performance, and in some cases exceed it. The service also offers sub-millisecond latency for NFS-based workloads and databases running in Azure.
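For context, here is a back-of-the-envelope conversion (a rough sanity check, not part of the published benchmark) relating the 8KiB IOPS figure to the 64KiB throughput figure:

def iops_to_mib_s(iops, io_size_kib):
    """Throughput implied by a given IOPS rate at a fixed I/O size."""
    return iops * io_size_kib / 1024

# 300,000 read IOPS at an 8KiB I/O size is roughly 2.3GiB/s of small-block
# traffic, while the 64KiB throughput test sustained 4,523MiB/s.
print(iops_to_mib_s(300_000, 8))  # 2343.75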

Want to Bust the Cloud vs. On-Premises Myth at Your Company?

Discover how to boost your cloud storage performance with Azure NetApp Files.

 
