
Windows Applications on Google Cloud Platform: A Cloud Volumes Service for Google Cloud SMB Story

March 26, 2019

10 minute read

If you’re a Google Cloud Platform customer, you no longer need to wait for SMB3.0 support: NetApp® Cloud Volumes Service for Google Cloud Platform is here. Because a considerable number of enterprise applications and business use cases depend on SMB, I wrote this blog post.

SMB support matters to so many applications that we in the NetApp Cloud Data Services business unit are always in the market for more use cases and stories. So feel free to drop us a line about yours. The following list gives a sense of the range:

  • Windows web servers
  • Financial applications, such as for day traders
  • Geographic information system (GIS) applications
  • Hospital and medical-imaging applications
  • Multimedia content development
  • Oil and gas applications, such as the Schlumberger Petrel platform
  • Adobe content management
  • Windows home directories

Whether or not your use case fits this list, if you’re considering moving your Windows application to the cloud, you have a list of needs. In this blog post, I explore the capabilities of Cloud Volumes Service as they relate to SMB. From there, I establish the architectural limits of the environment. If you have read any of my papers or attended any seminar at which I have presented, then you know that I value the identification of limits. By identifying limits, we can set proper expectations and all but ensure positive outcomes.

This blog post exposes the read and write throughput and I/O limits—there’s that word again—of the Google Cloud Platform environment as they relate to Cloud Volumes Service. If you are very familiar with the SMB protocol, then you know its reputation for chattiness. To that end, you can expect a forthcoming paper that dives deeply into metadata performance.  

To begin, let’s look at an overview of the capabilities of Cloud Volumes Service, then we will look at the performance overview. So, let’s go swimming (rather than deep diving).

Capabilities of Cloud Volumes Service for Google Cloud Platform

With NetApp Cloud Volumes Service, you get Active Directory integration, data protection, and the ability to create volumes from NetApp Snapshot™ copies of volumes.

Active Directory Integration   

In Cloud Volumes Service for Google Cloud Platform, you can easily establish Active Directory connections from the dashboard. The service supports one Active Directory connection per region (for example, one for us-east4 and one for us-central1) per project. You set up an Active Directory connection by using an account that has privileges to create a computer object within Active Directory.

After you create your first SMB volume by using this connection information, Cloud Volumes Service contacts the domain controller and creates the computer account and the associated DNS records. You go through this setup process only once; subsequent SMB volumes automatically use the same credentials.
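After the volume is created, Windows clients connect to it like any other SMB share. As a minimal sketch, assuming a hypothetical mount path of \\cvs-demo.cvsdemo.local\smbvol1 (use the actual mount path from your volume’s details page), you could map the volume from a Windows GCE instance like this:

    rem Map the cloud volume to drive Z: (the UNC path is a hypothetical
    rem example; substitute the mount path shown for your volume)
    net use Z: \\cvs-demo.cvsdemo.local\smbvol1 /persistent:yes

    rem Confirm the mapping
    net use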

Note: The following screenshot shows part of the Active Directory integration process. NetApp recommends that you use the DNS name in the Domain field and that you use the name of the computer account as the NetBIOS value.

[Screenshot: Active Directory connection settings in the Cloud Volumes Service dashboard]

Data Protection  

Data Encryption

NetApp Cloud Volumes Service uses storage encryption to give you full disk encryption without compromising storage application performance. With this single-source solution, you can meet industry and government regulations without negatively affecting the user experience. Each customer’s data is encrypted with its own unique key.

Data Durability

Your data in Cloud Volumes Service is protected against multiple drive failures. And it’s also protected against numerous types of disk errors that could otherwise affect not just your data durability, but also your data integrity. You no longer have to worry that your data is going to disappear.

Logical Backups

Snapshot copies act as logical backups: they are point-in-time representations of your data. From the Cloud Volumes Service user interface in your Google Cloud project, you can schedule the creation of Snapshot copies, or you can create them manually. Cloud Volumes Service is evolving rapidly, and I can tell you that API support for Snapshot creation is coming soon.

You have several choices when you use Cloud Volumes Service to schedule Snapshot creation. You can schedule hourly, daily, weekly, and monthly Snapshot copies, each independently.

[Screenshot: Snapshot schedule options in the Cloud Volumes Service dashboard]

Or if you need to create a Snapshot copy manually, it’s easy from the Cloud Volumes Service dashboard. Just go to Snapshots > Create Snapshot, name the Snapshot copy, select the volume, and click Save.

[Screenshot: creating a manual Snapshot copy from the dashboard]

How Cloud Volumes Service Snapshot Copies Work  

NetApp Cloud Volumes Service Snapshot copies are fast, plentiful, and nondisruptive to your operations. NetApp Snapshot technology simply manipulates block pointers, creating a “frozen,” read-only view of a volume that enables applications to access older versions of files and directory hierarchies without special programming. Because the actual data blocks aren’t copied, Snapshot copies are extremely efficient both in the time that it takes to create them and in storage space. Saving on both helps improve your efficiency and your bottom line. A Cloud Volumes Service Snapshot copy takes only seconds to create, typically less than 1 second, regardless of the size of your volume or the level of activity in your environment.

Following is a visual representation of the Snapshot process. A Snapshot copy is created in 1a. In 1b, changed data is written to a new block, and the pointer is updated. But the Snapshot pointer still points to the old block, giving you a live view of the data and a historical view. Another Snapshot copy is created in 1c. You now have access to three generations of your data without taking up the disk space that three unique copies would require. In order of age, from most recent to oldest, those generations are live, Snapshot 2, and Snapshot 1.

[Figure: how Snapshot copies preserve old blocks across three generations of data]
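To make the pointer mechanics concrete, here is a small illustrative Python sketch of the copy-on-write idea. It is a conceptual model only, not NetApp’s actual on-disk format:

    # Conceptual model: a volume is a list of pointers to immutable
    # blocks; a Snapshot copy is just a frozen copy of that pointer list.
    class Volume:
        def __init__(self):
            self.blocks = {}     # block_id -> data, never overwritten
            self.live = []       # pointers for the live file system
            self.snapshots = {}  # snapshot name -> frozen pointer list
            self._next_id = 0

        def write(self, index, data):
            # Changed data goes to a NEW block, and only the live pointer
            # moves; snapshots keep pointing at the old block.
            self.blocks[self._next_id] = data
            if index < len(self.live):
                self.live[index] = self._next_id
            else:
                self.live.append(self._next_id)
            self._next_id += 1

        def snapshot(self, name):
            # A snapshot copies pointers, not data blocks, which is why
            # creation is fast and space-efficient.
            self.snapshots[name] = list(self.live)

        def read(self, view, index):
            ptrs = self.live if view == "live" else self.snapshots[view]
            return self.blocks[ptrs[index]]

    vol = Volume()
    vol.write(0, "v1")        # original data
    vol.snapshot("snap1")     # 1a: first Snapshot copy
    vol.write(0, "v2")        # 1b: new block; snap1 still sees "v1"
    vol.snapshot("snap2")     # 1c: second Snapshot copy
    vol.write(0, "v3")        # the live view moves on again
    print(vol.read("live", 0), vol.read("snap2", 0), vol.read("snap1", 0))
    # Prints: v3 v2 v1  (three generations, one stored copy of each block)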

Because a NetApp Cloud Volumes Service Snapshot copy incurs no performance overhead, you can comfortably store up to 255 Snapshot copies per volume. All of them are accessible: for read-only versions of the data, use the /.snapshot directory at the root of the file system; for online, writable versions, use cloned volumes. In-place restore is coming in the future.

Create Volumes from Volume Snapshot Copies

Another advantage that you get from Cloud Volumes Service is the ability to easily create volumes from volume Snapshot copies. You can use these volumes as point-in-time testing or development file systems. With this powerful capability, you can make a copy of a dataset in minutes to accelerate application development, or you can create a copy for analytics to accelerate insights.

As the following screenshot shows, to create a volume from a volume Snapshot copy, go to Volumes > Create Volume and simply select that Snapshot copy in the Snapshot field.

[Screenshot: selecting a Snapshot copy in the Create Volume dialog]

Performance Overview

Now let’s look at a performance overview for NetApp Cloud Volumes Service. Our survey of applications that benefit from SMB showed such diversity that we deemed it prudent to focus on generic workloads. Therefore, this introductory blog post focuses on the general category of enterprise applications that require Windows File Service. Let’s call it Enterprise App X.

Enterprise App X is a scale-out custom application that relies heavily on SMB to give each of the many compute instances access to one or more shared file system resources. As the application architect, you’re not quite sure about the I/O needs, except you know that they are large and distributed. To understand the I/O needs, let’s explore what the NetApp Cloud Volumes Service can do by answering the following questions:

  • How many IOPS can Enterprise App X generate against a single cloud volume?
  • How much bandwidth can Enterprise App X consume against the same volume?
  • How much bandwidth can Enterprise App X consume in total in the Google Cloud Platform project?
  • What response time can Enterprise App X expect?

The following results come from Vdbench summary files. Vdbench is a command-line utility that was created to help engineers and customers generate disk I/O workloads for validating storage performance. We used the tool in a client-server configuration, with a single combined master/client and 15 dedicated client Google Compute Engine (GCE) instances, which makes the workload genuinely scale-out.

The tests were designed to identify the limits that the hypothetical Enterprise App X might experience and to expose the response-time curves up to those limits. Therefore, we ran the following scenarios (a sample Vdbench parameter file follows the list):

  • 100% 8KiB random read
  • 100% 8KiB random write
  • 100% 64KiB sequential read
  • 100% 64KiB sequential write
  • 50% 64KiB sequential read, 50% 64KiB sequential write
  • 50% 8KiB random read, 50% 8KiB random write
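To give a sense of the shape of these tests, here is a minimal sketch of a Vdbench parameter file for the 100% 8KiB random read scenario. The anchor path, file layout, and thread count are illustrative assumptions, not our exact test configuration:

    * Hypothetical Vdbench file-system workload: 100% 8KiB random reads
    * against an SMB volume mapped to drive Z: (layout values are examples)
    fsd=fsd1,anchor=Z:\vdbench,depth=1,width=1,files=16,size=1g
    fwd=fwd1,fsd=fsd1,operation=read,fileio=random,fileselect=random,xfersize=8k,threads=16
    rd=rd1,fwd=fwd1,fwdrate=max,format=yes,elapsed=300,interval=5

On each client, a run like this is launched with vdbench -f followed by the parameter file name; the summary files that Vdbench produces are the source of the results that follow.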

Before I go any further, let’s talk about the environment that we used in our tests.  

The Region

We conducted all our tests in the us-central1 Google Cloud Platform region.

Project-Level Bandwidth

At the project level, GCE instances are currently provided with approximately 13Gb of redundant bandwidth (26Gb total usable) for access to Cloud Volumes Service. This limit might be raised in the future.

GCE Instance Bandwidth

To understand the bandwidth that’s available to a GCE instance, you must understand that inbound and outbound (read and write) rates are different.   

Limits per GCE Instance

The following limits apply per GCE instance:

  • Writes to Cloud Volumes Service are rate-limited by Google Cloud Platform at 3Gb per second (Gbps).
  • Reads from Cloud Volumes Service are unrestricted. Although you can anticipate rates of up to 3Gbps, testing has shown that up to 6Gbps can be achieved.

Each GCE instance must have enough bandwidth in and of itself to achieve these numbers. Why should you trust the documentation? Find out for yourself by using iPerf3, a tool to actively measure the maximum achievable bandwidth on IP networks.

We used GCE instances of type n1-highcpu-16 for most of the testing that I describe in this blog post. By running iPerf3 between two n1-highcpu-16 instances, we discovered that this machine type has about 5Gb of bandwidth.
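If you want to repeat the exercise, a minimal sketch looks like this (the IP address is a placeholder for your server instance, and the stream count and duration are arbitrary choices):

    # On the first GCE instance, run iPerf3 in server mode
    iperf3 -s

    # On the second instance, run the client with 4 parallel streams
    # for 30 seconds (replace 10.128.0.2 with the server instance's IP)
    iperf3 -c 10.128.0.2 -P 4 -t 30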

Volume-Level Bandwidth

Volume network bandwidth is based on a combination of service level and allocated capacity. However, the total bandwidth that’s available to all volumes is potentially constrained by the bandwidth that is made available to the project. Bandwidth is calculated as shown in the following table.

Service Level    Bandwidth Available per TiB of Capacity    Bandwidth at 10TiB Capacity    Bandwidth at 20TiB Capacity
Standard         16MiBps                                    160MiBps (1.25Gibps)           320MiBps (2.5Gibps)
Premium          64MiBps                                    640MiBps (5Gibps)              1280MiBps (10Gibps)
Extreme          128MiBps                                   1280MiBps (10Gibps)            2560MiBps (20Gibps)


For example, although two volumes that are allocated 10Gbps of bandwidth each can both operate unconstrained, three volumes that are allocated 10Gbps each are constrained by the project and must share the total bandwidth.
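As a quick sanity check of that example, here is a small illustrative Python helper that applies the service-level rates from the table and the project-level cap described earlier. The numbers come from this post; the function itself is just a sketch:

    # Per-TiB bandwidth rates from the table above, in MiBps
    MIBPS_PER_TIB = {"standard": 16, "premium": 64, "extreme": 128}
    PROJECT_CAP_MIBPS = 3300  # ~26Gb usable per project, per this post

    def volume_bandwidth_mibps(service_level, capacity_tib):
        """Bandwidth allocated to a single volume, before any project cap."""
        return MIBPS_PER_TIB[service_level] * capacity_tib

    # Two Extreme volumes at 10TiB each fit under the project cap...
    print(2 * volume_bandwidth_mibps("extreme", 10))  # 2560 MiBps < 3300
    # ...but three such volumes oversubscribe it and must share the cap.
    print(3 * volume_bandwidth_mibps("extreme", 10))  # 3840 MiBps > 3300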

Test Results

SMB3.0 Workloads: IOPS

The following graph demonstrates the amount of random I/O that you can expect to achieve with multiple clients against a single Google Cloud Platform cloud volume. Our test revealed that the maximum I/O in that scenario is ~306,000 8KiB IOPS.

[Graph: 8KiB random IOPS scaling against a single cloud volume, peaking at ~306,000 IOPS]

SMB3.0 Workloads: Throughput

The previous graph documents the random I/O potential of a single cloud volume, and the next graph does the same for sequential workload. We ran the tests in a similar manner by using one to many Windows Server 2012 Datacenter Edition GCE instances as the Vdbench workers. In this case, the throughput consumed for 100% reads—3100MiB per second (MiBps)—reached nearly the full bandwidth that was available to the project, as the following graph shows. As I mentioned in the “Project-Level Bandwidth” section, a project presently has ~26Gb of total usable bandwidth, which equates to 3300MiBps.

[Graph: single-volume sequential throughput]

Up to ~120,000 IOPS at < 2ms Latency

Your applications can benefit from the excellent network latency that we saw across the board in Google Cloud Platform. As the following graph shows, in our tests, run from the us-central1 region, our system achieved ~120,000 8KiB random read IOPS at less than 2ms, and it achieved ~150,000 IOPS just past the 2ms point.

[Graph: response-time curve for 8KiB random reads]

A Solution for a Vast Range of Use Cases

Whether your use case is in the oil and gas industry or an IIS web server farm, if you need shared file services, NetApp Cloud Volumes Service for Google Cloud Platform can meet your needs. With Cloud Volumes Service, you get:

  • Simple Active Directory integration
  • Flexible service-level-based performance
  • Fast, space-efficient scheduled and manual Snapshot copies
  • Self-service file restores through the client-accessible /.snapshot directory

And then there’s the exceptional performance that your system can achieve. Collectively, the GCE instances in a Google Cloud Platform project have roughly 26Gbps of aggregate bandwidth to Cloud Volumes Service, and that limit might be raised in the future. Individually, each GCE instance can read at between 3Gbps and 6Gbps; writes are constrained to 3Gbps. As the application architect, you allocate the volume bandwidth that meets the needs of your application, and you can scale up or scale out within the constraints of the environment.

Request a Demonstration

See NetApp Cloud Volumes Service for Google Cloud Platform in action. Sign up now to schedule your personal Cloud Volumes Service for Google Cloud Platform demonstration.

Senior Cloud Solutions Architect, NetApp Cloud Data Services
