
Putting Windows Workload Performance on CVS to the Test

August 30, 2019

Topics: AWS | 5 minute read

Out of all the critical aspects of cloud-based file services—security, compliance, access control, flexibility, availability—high performance may be the most demanding.

In this blog, we’ll examine the results of a File Server Capacity Tool (FSCT) benchmark test that rates the performance of NetApp® Cloud Volumes Service for AWS. Because the product caters to multiple enterprise cloud-based file share use cases, the results should help customers select the right solution based on their individual performance requirements.

Cloud Volumes Service for AWS

Cloud Volumes Service uses the industry-leading NetApp® ONTAP technology to deliver a secure, flexible, and high-performance cloud-based NAS file service in AWS. Some of the features and advantages of the service include:

  • Multiprotocol access over NFS and SMB.
  • High-performance cloud file share service using all-flash storage, with three performance tiers and a maximum of more than 450K IOPS.
  • NetApp Volume Encryption (NVE) technology encrypts data at rest with AES 256-bit encryption, while data in transit is encrypted using client/server SMB protocol capabilities.
  • Flexible authentication for SMB volumes: customers can choose between a standalone Active Directory and AWS Managed Microsoft AD.
  • Supports NetApp Snapshot™ technology to create point-in-time backup copies of cloud volumes, which can be used for DR purposes or for fast cloning of environments.
  • Offered as a fully managed service, with NetApp Cloud Central acting as a single pane of management for all Cloud Volumes Service instances across multiple cloud platforms.
  • Cloud Volumes Service secures infrastructure and tenant data paths using proven multi-tenant isolation technologies such as VLAN, VRF, and BGP.

The Benchmark Test

To benchmark the performance of Cloud Volumes Service for AWS, we measured how a home directory would perform at peak usage. To that end, we used the File Server Capacity Tool (FSCT).

Home directories have traditionally been a quintessential component of every organization’s file- and folder-sharing structure. In large organizations, the data in these file shares is often accessed simultaneously by thousands of users for their day-to-day work. The underlying storage system or service that hosts the home directories and file shares should be capable of handling storage read/write requests from these users—especially during peak office hours—without any performance degradation.

The File Server Capacity Tool (FSCT) can simulate a peak usage scenario for a home directory: multiple SMB requests from clients are generated to put a load on the storage system that hosts the users’ home directories (the “home folder workload” in FSCT terminology), and its performance is measured. The FSCT controller is configured to initiate the test and collect performance data using perfmon counters, while a client computer generates the SMB requests for the stress test. The Windows Server perfmon counters give insight into storage performance and throughput while the test is in progress, and this data is also stored in the FSCT controller once the test is completed.

We conducted the benchmark test using the FSCT home folder workload with target home directories hosted on Cloud Volumes Service for AWS. The test simulated a load of 12,000+ concurrent users accessing the home directory, representative of a typical enterprise-class deployment. The user load was systematically increased to test the maximum capabilities of Cloud Volumes Service for AWS.

Workload metadata operations are as crucial as data read/write operations, and metadata retrieval speeds therefore have a great impact on performance. In FSCT, performance is measured as the number of users supported without “overload,” that is, the point at which the input to the system exceeds its processing capability. The tool also considers the following:

  • The ability to schedule/complete one user scenario in 900 seconds.
  • 1%-2% overload for a range of users. 
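The acceptance criteria above can be expressed as a small check: given per-load overload measurements, the supported user count is the largest load whose overload stays within the 1%–2% band. The sketch below is illustrative only (it is not part of FSCT, and the sample result pairs are hypothetical, not benchmark output):

```python
# Illustrative sketch (not FSCT itself): given (user count, overload %)
# pairs from a series of FSCT runs, find the largest user load whose
# overload stays at or below the acceptance threshold.

def max_supported_users(results, max_overload=2.0):
    """Return the highest user count whose overload is <= max_overload percent."""
    supported = [users for users, overload in results if overload <= max_overload]
    return max(supported) if supported else 0

# Hypothetical sample data for illustration:
sample = [(10_000, 0.4), (11_000, 0.9), (12_000, 1.6), (12_410, 1.9), (13_000, 3.5)]
print(max_supported_users(sample))  # -> 12410
```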

The following chart shows the FSCT user/throughput of NetApp Cloud Volume Service in our test:

[Chart: FSCT users vs. throughput for NetApp Cloud Volumes Service]

The Results

The results of the FSCT benchmark test can lead to the following conclusions:

  • Cloud Volumes Service supports 12,000+ users.
  • Cloud Volumes Service was able to achieve 1,086 scenarios per second over a 15-minute period (12,410 users).
  • Cloud Volumes Service costs only $0.0455 per user per day on weekdays and $0.023 per user per day on weekends. This is possible by switching between the Standard performance tier on weekends and the Extreme performance tier on weekdays.

Weekend/weekday cost calculation:

  • Total capacity consumed by 12,400 users:
    • Weekend: (12,400 users × 80 MiB) / 1024 = 968.75 GB
    • Weekday: (12,400 users × 155 MiB) / 1024 ≈ 1,880 GB
  • Cost of the consumption on weekdays:
    • 1,880 GB × $0.30 = $564
  • Per-day user cost on weekdays:
    • $564 / 12,400 = $0.0455
  • Cost of the consumption on weekends:
    • 968.75 GB × $0.30 = $290.63
  • Per-day user cost on weekends:
    • $290.63 / 12,400 = $0.023

Total weekday and weekend cost calculation:

  • 260 weekdays per year: $0.0455 per user × 5 days per week
  • 105 weekend days per year: $0.023 per user × 2 days per week
  • Average cost per user = $0.034
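The per-user cost arithmetic above can be reproduced in a few lines. This is a sketch of the article’s own calculation; the $0.30/GB rate, the user count, and the 80 MiB / 155 MiB per-user footprints are taken from the figures above (small differences from the article’s numbers come from its intermediate rounding):

```python
# Sketch of the article's weekday/weekend cost arithmetic.
# Inputs (from the article): 12,400 users, $0.30 per GB per day,
# 155 MiB per user on weekdays (Extreme tier), 80 MiB on weekends (Standard).

USERS = 12_400
RATE_PER_GB = 0.30  # dollars per GB per day

def daily_cost_per_user(mib_per_user: float) -> float:
    """Total capacity in GB times the rate, spread across all users."""
    capacity_gb = USERS * mib_per_user / 1024
    return capacity_gb * RATE_PER_GB / USERS

weekday = daily_cost_per_user(155)  # ~$0.045 (article rounds up to $0.0455)
weekend = daily_cost_per_user(80)   # ~$0.023
average = (weekday + weekend) / 2   # ~$0.034, the article's blended rate
print(f"weekday ${weekday:.4f}, weekend ${weekend:.4f}, average ${average:.3f}")
```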

[Table: CVS on weekdays and weekends — days, average cost per user, and average yearly cost per user]


Our FSCT benchmark test results show that NetApp Cloud Volumes Service for AWS performs strongly during peak utilization hours for a home directory workload. Additionally, Cloud Volumes Service provides flexibility for customers in terms of data protection capabilities, security, pricing, and more.

For instance, Cloud Volumes Service offers multiple options for DR for your mission critical data. Cloud Volumes Service provides volume-level restore using snapshot backups and file-level restore through the Cloud Backup Service (in beta release).

When it comes to security, Cloud Volumes Service can be integrated with AWS Managed Microsoft AD or a standalone Microsoft AD configured by the user.

Cloud Volumes Service also has a flexible pricing model. Pricing is based on the performance tier selected—Standard, Premium, or Extreme—each of which can be changed on the fly based on target use cases. Also, all the data-management features of Cloud Volumes Service are accessible from a single UI, with minimal setup and configuration overhead required.

From these findings, Cloud Volumes Service for AWS emerges as a best-fit solution not just for home directories, but for enterprise-grade cloud file share use cases such as data analytics, application migration, content management, and more.

Ready to Try Out Cloud Volumes Service?

Learn more about Cloud Volumes Service for AWS or take it for a spin.

Solution Architect