September 23, 2018
Topics: Cloud Storage, AWS | 4 minute read
High-performance I/O is a fundamental requirement for many enterprise applications and services, including database platforms, High Performance Computing (HPC) clusters, multimedia processing, and much more. Databases are repositories of structured data that are constantly read and updated, and so depend heavily on strong I/O performance. HPC workloads that read large volumes of data from disk likewise benefit from very fast storage to feed that data into computation.
In this article, we will look at the performance of NetApp’s Cloud Volumes Service, a new fully managed shared file service for AWS. We will also examine how Cloud Volumes Service can be used to drive greater performance for applications that are typically I/O bound, and answer the question: is Cloud Volumes Service the fastest storage option for AWS customers?
Cloud Volumes Service I/O Performance
Cloud Volumes Service is a dynamically scalable, fully managed PaaS (Platform as a Service) solution for NFS and SMB file services that runs alongside AWS. That means you can access and deploy Cloud Volumes Service just like Amazon EBS or Amazon S3; it simply extends the set of cloud storage options available to you. As well as delivering exceptional I/O performance and throughput, Cloud Volumes Service includes several sophisticated data management features, such as snapshots, volume cloning, and data synchronization.
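To make this concrete, here is a minimal sketch of consuming a Cloud Volume from a Linux EC2 instance: mount the NFS export and run a quick read/write smoke test. The server IP, export path, and mount options below are placeholders rather than values from NetApp's tests; take the real ones from the Cloud Volumes Service console and documentation for your volume.

```python
"""Minimal sketch: mount a Cloud Volumes Service NFS export on a Linux
EC2 host and run a quick read/write smoke test. Server IP, export path,
and mount options are illustrative placeholders."""
import os
import subprocess
import time

NFS_SERVER = "172.25.0.10"          # placeholder: volume's mount target IP
EXPORT_PATH = "/my-cloud-volume"    # placeholder: volume's export path
MOUNT_POINT = "/mnt/cloudvolume"

def mount_volume():
    os.makedirs(MOUNT_POINT, exist_ok=True)
    # NFSv3 with large transfer sizes; adjust to the mount options
    # recommended in the Cloud Volumes Service documentation.
    subprocess.run(
        ["mount", "-t", "nfs", "-o", "vers=3,rsize=65536,wsize=65536,hard",
         f"{NFS_SERVER}:{EXPORT_PATH}", MOUNT_POINT],
        check=True,
    )

def smoke_test(size_mb: int = 256) -> None:
    """Write and read back a test file, printing rough throughput.
    Note: the read pass may be served from the client page cache, so
    treat these numbers as a sanity check, not a benchmark."""
    path = os.path.join(MOUNT_POINT, "io_smoke_test.bin")
    payload = os.urandom(1024 * 1024)

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())
    write_s = time.perf_counter() - start

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    read_s = time.perf_counter() - start

    print(f"write: {size_mb / write_s:.0f} MB/s, read: {size_mb / read_s:.0f} MB/s")
    os.remove(path)

if __name__ == "__main__":
    mount_volume()   # requires root privileges
    smoke_test()
```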
But how fast is it? In the following two sections we will examine Cloud Volumes Service I/O performance for a specific use case: database workloads in MySQL and Oracle.
MySQL Database I/O Performance
In a MySQL performance test conducted by NetApp using a TPC-C workload, which is the industry-standard benchmark for online transaction processing (OLTP) database systems, Cloud Volumes Service was able to almost max out the bandwidth per connection allowed by AWS for inter-VPC traffic, with just a single storage volume. This test was performed using only a single instance of MySQL.
When the same test was repeated with multiple MySQL instances, the same single Cloud Volume produced about four times the I/O throughput, reaching approximately 16 Gbps. For full details of how this performance test was carried out, including the Amazon EC2 instance types used and the MySQL configuration, see NetApp’s MySQL benchmark report.
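The multi-instance result comes down to parallelism: each MySQL instance opens its own network flows to the volume, so aggregate throughput is no longer capped by a single connection's bandwidth. The sketch below illustrates that principle with plain parallel readers; it is not NetApp's benchmark harness, and the mount path is a placeholder.

```python
"""Illustrative sketch (not NetApp's benchmark harness): read an
NFS-backed directory from several worker processes and report aggregate
throughput, mirroring how multiple clients can push a single Cloud Volume
past any per-connection bandwidth limit. Paths are placeholders."""
import multiprocessing as mp
import os
import time

MOUNT_POINT = "/mnt/cloudvolume"   # placeholder: where the volume is mounted
FILE_SIZE_MB = 512
WORKERS = 4

def prepare(worker_id: int) -> str:
    """Create a per-worker data file so each process has its own stream."""
    path = os.path.join(MOUNT_POINT, f"stream_{worker_id}.bin")
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE_MB):
            f.write(os.urandom(1024 * 1024))
        os.fsync(f.fileno())
    return path

def read_stream(path: str) -> int:
    """Sequentially read a file and return the number of bytes read."""
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(1024 * 1024):
            total += len(chunk)
    return total

if __name__ == "__main__":
    # For realistic numbers, drop the page cache first or run the readers
    # on separate hosts; freshly written data may be cached on this client.
    paths = [prepare(i) for i in range(WORKERS)]
    start = time.perf_counter()
    with mp.Pool(WORKERS) as pool:
        total_bytes = sum(pool.map(read_stream, paths))
    elapsed = time.perf_counter() - start
    gbps = total_bytes * 8 / elapsed / 1e9
    print(f"{WORKERS} parallel readers: {gbps:.2f} Gbps aggregate")
```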
Oracle Direct NFS I/O Performance
NetApp has also performed I/O performance tests with Cloud Volumes and Oracle using the database platform’s Direct NFS feature. With Direct NFS, the Oracle database opens a very large number of parallel client connections to an NFS file service, which massively increases the I/O throughput the database can achieve. This makes Direct NFS a natural fit for the scalability of Cloud Volumes Service.
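If you want to verify that an Oracle database is actually using Direct NFS against a Cloud Volume, the database exposes dNFS activity through the v$dnfs_servers and v$dnfs_channels views. The sketch below queries them with the python-oracledb driver; the connection details are placeholders.

```python
"""Minimal sketch: confirm that an Oracle database is using Direct NFS
and see how many parallel NFS channels it has open. Connection details
are placeholders; assumes the python-oracledb driver and a user with
access to the V$ views."""
import oracledb

conn = oracledb.connect(
    user="system",                        # placeholder credentials
    password="your_password",
    dsn="db-host.example.com/ORCLPDB1",   # placeholder DSN
)

with conn.cursor() as cur:
    # NFS servers the database is talking to via Direct NFS
    cur.execute("SELECT svrname, dirname FROM v$dnfs_servers")
    for server, export in cur:
        print(f"dNFS server: {server}, export: {export}")

    # Number of parallel dNFS channels currently open
    cur.execute("SELECT COUNT(*) FROM v$dnfs_channels")
    (channels,) = cur.fetchone()
    print(f"open Direct NFS channels: {channels}")

conn.close()
```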
Using the SLOB workload generator across a variety of I/O test types, including pure read and mixed read/write, NetApp found that Cloud Volumes Service and Oracle continued to scale until all available resources of the Amazon EC2 instance were consumed, even on the largest EC2 instance types. These tests were all performed with a single Oracle database instance.
In most of the tests, the performance of a single Cloud Volume was on par with two Cloud Volumes; however, using more than one volume did improve application latency. In a 100% read workload, a single Cloud Volume achieved almost 300,000 IOPS. For full details of the Oracle workload tests, see NetApp’s Oracle benchmark report.
Solutions Using Cloud Volumes Service
As the performance results above show, Cloud Volumes Service is an extremely effective storage solution for building database environments in AWS. The I/O performance of each Cloud Volume is determined by its service level setting, which can be set to Standard, Premium, or Extreme. This allows database administrators to create separate storage tiers based on the performance requirements of the data to be stored.
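As a simple illustration of tiering, the sketch below picks the lowest service level that satisfies a volume's throughput requirement. The Standard/Premium/Extreme names come from the service, but the throughput-per-TB figures in the code are assumptions for illustration only; check the current Cloud Volumes Service documentation for the actual numbers.

```python
"""Illustrative sketch: choose a Cloud Volumes Service service level for a
database volume based on its required throughput. The MB/s-per-TB figures
below are assumptions for illustration, not documented guarantees."""

# Assumed throughput each service level provides per allocated TB (MB/s).
SERVICE_LEVELS = {
    "Standard": 16,
    "Premium": 64,
    "Extreme": 128,
}

def pick_service_level(required_mbps: float, volume_tb: float) -> str:
    """Return the lowest service level that meets the throughput target."""
    for level, mbps_per_tb in SERVICE_LEVELS.items():  # ordered lowest first
        if mbps_per_tb * volume_tb >= required_mbps:
            return level
    raise ValueError(
        f"No service level meets {required_mbps} MB/s with a {volume_tb} TB "
        "volume; increase the allocated capacity."
    )

# Example: a database needing 500 MB/s on a 4 TB volume
print(pick_service_level(500, 4))   # -> "Extreme" (128 MB/s per TB * 4 TB = 512 MB/s)
```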
Cloud Volumes Service enables users to deploy new NFS and SMB Cloud Volumes within seconds, with the ability to immediately scale these volumes up or down in size at any time. Even the service level can be changed instantly after a Cloud Volume has been created and contains live data. These features make it fast and easy to administer Cloud Volumes Service when supporting a large and growing database environment.
Robust I/O performance is also crucial for big data analytics workloads that operate over very large datasets. Storing data centrally on NFS makes it easier to scale compute clusters such as Apache Hadoop, but the NFS file service must then deliver exceptional I/O performance and scale easily to hundreds of individual compute nodes. As shown earlier, Cloud Volumes Service can meet these requirements with just a single storage volume.
Conclusion
NetApp’s recent performance tests show that Cloud Volumes Service delivers unprecedented levels of I/O performance for Oracle and MySQL databases. But there are plenty of other application domains that can directly benefit from very fast I/O, including big data analytics, HPC, and multimedia processing.
To get the fastest storage for your workloads, sign up to try Cloud Volumes Service on AWS today.