
Programming with NetApp Cloud Volumes Service APIs To Optimize AWS Experience

[Note: This blog has been updated to reflect the newest information.]

One of the most amazing things about the cloud is that cloud storage can be provisioned and managed entirely via software using Application Programming Interfaces (APIs). This ability allows DevOps and Site Reliability Engineers (SREs) to create compute instances, configure networking, and expand or contract resources to meet business and application requirements. What has been missing from the equation is the same flexibility for persistent storage: the ability to programmatically provision it, increase performance when needed, and dial performance back to lower costs when workloads are lighter.

This is one of many areas where NetApp Cloud Volumes Service for AWS can help. It is a fully managed file service for AWS customers that provides high-performance shared storage over the NFSv3, NFSv4.1, and SMB 2.1/3.0/3.1.1 protocols, including dual-protocol file services, to support both Linux and Windows Amazon EC2 instances. The service is available via pay-as-you-go and contract-based subscriptions on AWS Marketplace.

Use the NetApp Cloud Volumes Service RESTful APIs to control every feature via an API call, including provisioning cloud volumes, creating snapshots and instant copies, and changing performance levels non-disruptively. You can even change performance while clients are mounted and actively reading and writing, which can greatly reduce total storage cost. This freedom is a unique feature of the service.

In this blog, we’ll consider examples where the ability to change service levels can reduce storage costs by over 75%.

Before we get into some API programming examples, I recommend that you take a look at the online documentation about how to use the APIs, as well as how to find your Access and Secret API keys. There, you’ll also find the Cloud Volumes Service for AWS Getting Started guide. The guide provides several examples for listing volumes; creating volumes, snapshots, and clones; and changing performance. The documentation also links to simple bash scripts—useful examples to help you quickly write code specific to your needs. Modern programming languages all fully support calling RESTful APIs, including Python, Go, Java, C, C#, and C++.
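For example, listing your existing cloud volumes is a single GET against the same endpoint used throughout this blog. Here is a minimal sketch following the header pattern of the examples below (piping through jq is optional, for readable output):

# List all cloud volumes visible to your API keys
curl -s -H accept:application/json -H api-key:<api_key> -H secret-key:<secret_key> -X GET <api_url>/v1/FileSystems | jq .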

Creating Cloud Volumes

You can create a cloud volume in seconds via APIs, and fully define the protocols, export policies, performance, and data protection you need. Use the service levels in Cloud Volumes to set the appropriate throughput and allocate capacity to the cloud volume; the combination of the two defines performance and cost.

The following table summarizes the three service levels:

Service level    Cost (per GB/month)    Throughput (per allocated TB)
Standard         $0.10                  up to 16 MB/s
Premium          $0.20                  up to 64 MB/s
Extreme          $0.30                  up to 128 MB/s

You can choose from three service levels and change a service level on the fly without having to re-provision volumes.

Choose your service level based on workload needs:

  • Standard service level – provides economical cloud storage at $0.10 per GB per month, with throughput of up to 16 MB/s for each TB allocated. This is a low-cost solution that is ideal for infrequently accessed data. If you need more performance, you can increase the allocation (e.g., 10 TB provides 160 MB/s; see the sketch after this list) and/or choose a higher-performance service level.
  • Premium service level – balances cost and performance at $0.20 per GB per month. Premium provides four times the throughput of the standard level, with up to 64 MB/s for each TB allocated. This is a good fit for many applications where data capacity and performance needs are balanced.
  • Extreme service level – provides the highest level of performance. At $0.30 per GB per month, it enables up to 128 MB/s for each TB allocated, and you can scale cloud volumes to deliver several GB/s for reads and writes. Extreme is ideal for high-performance workloads.
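To make the throughput and cost arithmetic concrete, here is a small illustrative helper (cv-math.sh is not part of the service, and the 1 TB = 1000 GB billing convention is an assumption based on the prices above) that prints the throughput ceiling and monthly cost for a given level and allocation:

#! /bin/bash
# cv-math.sh (illustrative): throughput ceiling and monthly cost for an allocation.
# Usage: ./cv-math.sh <standard|premium|extreme> <allocated TB>
level=$1; tb=$2
case $level in
  standard) mbps_per_tb=16;  cost_per_gb=0.10 ;;
  premium)  mbps_per_tb=64;  cost_per_gb=0.20 ;;
  extreme)  mbps_per_tb=128; cost_per_gb=0.30 ;;
  *) echo "unknown service level: $level"; exit 1 ;;
esac
echo "Throughput: $(( tb * mbps_per_tb )) MB/s"
echo "Monthly cost: \$$(echo "$tb * 1000 * $cost_per_gb" | bc)"

For example, ./cv-math.sh standard 10 reports 160 MB/s at $1000.00 per month, matching the numbers above.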

When creating a cloud volume, you provide the name, export path, protocol(s), policies, service level, and the capacity you want to allocate. The following example uses a POST call to create a cloud volume with the standard service level and an allocated capacity of 100 GB, exported over NFSv3:


curl -s -H accept:application/json -H "Content-type: application/json" -H api-key:<api_key> -H secret-key:<secret_key> -X POST <api_url>/v1/FileSystems -d '
{
        "name": "Test",
        "creationToken": "grahams-test-volume3",
        "region": "us-west",
        "serviceLevel": "standard",
        "quotaInBytes": 100000000000,
        "exportPolicy": {"rules": [{"ruleIndex": 1,"allowedClients": "0.0.0.0/0","unixReadOnly": false,"unixReadWrite": true,"cifs": false,"nfsv3": true,"nfsv4": false}]},
        "labels": ["test"]
}'
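The response to this POST describes the new volume, including the ID that later PUT and DELETE calls reference in the URL (the cdef5090-... value in the examples below). A quick way to capture it, assuming the ID is returned in a fileSystemId field (verify the field name against the actual response body):

# Create the volume (body saved as create-volume.json, the JSON shown above)
# and capture its ID for later service-level changes.
volume_id=$(curl -s -H accept:application/json -H "Content-type: application/json" \
  -H api-key:<api_key> -H secret-key:<secret_key> \
  -X POST <api_url>/v1/FileSystems -d @create-volume.json | jq -r '.fileSystemId')
echo "Created volume $volume_id"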

Changing Cloud Volume Performance on the Fly

If you find that you need more performance, you can increase the service level of the cloud volume via an API call. The change happens in seconds and is non-disruptive to clients.

The following example uses a PUT call to change the service level to extreme and increase the allocated capacity to 500 GB:


curl -s -H accept:application/json -H "Content-type: application/json" -H api-key:<api_key> -H secret-key:<secret_key> -X PUT <api_url>/v1/FileSystems/cdef5090-aa5e-c2cf-6bba-f77d259a37f8 -d '
{
        "creationToken": "grahams-test-volume3",
        "region": "us-west",
        "serviceLevel": "extreme",
        "quotaInBytes": 500000000000
}'

It’s just as easy to lower performance, and therefore storage costs, via APIs. The following example adjusts the service level back to standard:


curl -s -H accept:application/json -H "Content-type: application/json" -H api-key:<api_key> -H secret-key:<secret_key> -X PUT <api_url>/v1/FileSystems/cdef5090-aa5e-c2cf-6bba-f77d259a37f8 -d '
{
        "creationToken": "grahams-test-volume3",
        "region": "us-west",
        "serviceLevel": "standard",
        "quotaInBytes": 500000000000
}'

You’ve just learned how to change performance. You can include these API calls in scripts—for example, to raise performance when a task starts, lower it when the task finishes, or schedule changes to file service performance to reduce cloud storage costs over time.

Scripting Performance Changes

Using the example script update-cv.sh, you can quickly script an increase in performance before running an intensive task such as machine learning, and then lower performance to reduce storage costs when the task finishes.


#! /bin/bash
# script to increase cloud volume performance for a machine learning app and then lower costs when finished.
 
./update-cv.sh -m arcadian-pedantic-shaw -l extreme -a 30000 -c us-west-1.conf
./machine-algorithm.py
./update-cv.sh -m arcadian-pedantic-shaw -l standard -a 10000 -c us-west-1.conf
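The real update-cv.sh is linked from the documentation; purely as an illustration, a wrapper with these flags might look like the following sketch (the volume-ID lookup, response field names, and config-file variables are all assumptions):

#! /bin/bash
# update-cv.sh (illustrative sketch, not the documentation version):
# change a cloud volume's service level and allocated capacity.
# -m creationToken, -l service level, -a allocation in GB, -c config file
while getopts "m:l:a:c:" opt; do
  case $opt in
    m) name=$OPTARG ;;
    l) level=$OPTARG ;;
    a) gb=$OPTARG ;;
    c) conf=$OPTARG ;;
  esac
done
. "$conf"   # assumed to define api_url, api_key, secret_key, region

# Look up the volume ID from its creationToken (field names assumed)
volume_id=$(curl -s -H accept:application/json -H api-key:$api_key -H secret-key:$secret_key \
  -X GET $api_url/v1/FileSystems | \
  jq -r ".[] | select(.creationToken==\"$name\") | .fileSystemId")

# Apply the new service level and quota via the PUT call shown earlier
curl -s -H accept:application/json -H "Content-type: application/json" \
  -H api-key:$api_key -H secret-key:$secret_key \
  -X PUT $api_url/v1/FileSystems/$volume_id -d "
{
  \"creationToken\": \"$name\",
  \"region\": \"$region\",
  \"serviceLevel\": \"$level\",
  \"quotaInBytes\": $(( gb * 1000000000 ))
}"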

This script will increase the performance to 3.8 GB/s (128 MB/s * 30 TB) to accelerate the machine learning task.

Note: This performance level would cost $9000 per month (30000 GB @ $0.30/GB) if we ran it all the time.

Once the task finishes, you can drop the performance to 160 MB/s (16 MB/s * 10 TB), which meets the I/O needs for reviewing the resulting data at a significantly lower cost.

Note: This performance level would cost $1000 per month (10000 GB @ $0.10/GB).

The cost savings will vary, but if you run the machine learning application 20% of the time and adjust the Cloud Volumes service level accordingly, you could save:
$9000 – (($9000 * 0.2) + ($1000 * 0.8)) = $6400 per month, which is more than a 70% saving.

By using a higher Cloud Volumes Service level only when needed, you can realize additional savings: the extra throughput lets you run fewer Amazon EC2 instances for a shorter time to complete the machine learning task.

Scheduling Changes

Using a scheduler such as cron in Linux, you can define when you want to increase and decrease performance to control costs while meeting business needs. This can be very useful for applications such as databases that need fast performance for a few hours to process weekly reports, or for user home directories, where performance can be lowered during evenings and weekends to cut costs.

Example crontab file:


# Runs at extreme performance for 14 hrs every week to accelerate order processing
# Increase the performance of cloud volume 'arcadian-pedantic-shaw' every Thursday at 8 am
0 8 * * Thu /opt/cvs-api/update-cv.sh -m arcadian-pedantic-shaw -l extreme -a 20000 -c us-west-1.conf

# Decrease the performance of cloud volume 'arcadian-pedantic-shaw' every Thursday at 10 pm
0 22 * * Thu /opt/cvs-api/update-cv.sh -m arcadian-pedantic-shaw -l standard -a 10000 -c us-west-1.conf
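The same pattern covers the home-directory scenario mentioned above. For example, these entries (using the same update-cv.sh wrapper; the volume name home-dirs and the sizes are illustrative) drop the volume to the standard level for the weekend and restore premium before the work week:

# Lower home-directory volume performance for the weekend, Friday at 7 pm
0 19 * * Fri /opt/cvs-api/update-cv.sh -m home-dirs -l standard -a 10000 -c us-west-1.conf

# Restore performance before the work week starts, Monday at 7 am
0 7 * * Mon /opt/cvs-api/update-cv.sh -m home-dirs -l premium -a 10000 -c us-west-1.conf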

The Thursday schedule above would save over 75% compared with always running at the extreme service level ($6000 per month for 20000 GB @ $0.30/GB). Extreme runs 14 of 168 hours per week (8.33% of the time), with standard (10000 GB @ $0.10/GB, $1000 per month) the rest:
($6000 * 0.0833) + ($1000 * 0.9167) ≈ $1416 per month

Automate Cloud Data Protection

You can also programmatically create snapshots of cloud volumes. NetApp Cloud Volumes Service provides snapshot policies that let you define when snapshots are taken and how many to retain (via the UI or API). It can also be useful to take a snapshot of a data set before tasks like updating applications or running a new algorithm; doing so gives you a point-in-time recovery point in case you hit an issue or want to run different algorithms against the original data set.
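For example, a snapshot can be created with a single POST. The sketch below assumes a Snapshots sub-resource under the volume and a name field in the body; confirm the exact path and fields against the API reference:

# Create a snapshot of the volume before starting work
curl -s -H accept:application/json -H "Content-type: application/json" \
  -H api-key:<api_key> -H secret-key:<secret_key> \
  -X POST <api_url>/v1/FileSystems/<volume_id>/Snapshots -d '
{
        "name": "pre-ml-run"
}'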

Using the example script snap-cv.sh, you can quickly create a snapshot before starting a job, or create a consistency point from which to make a backup.


#! /bin/bash
# script to create a snapshot of cloud volume ‘arcadian-pedantic-shaw’ before running ML job
./snap-cv.sh -m arcadian-pedantic-shaw -c us-west-1.conf
./machine-algorithm.py

You can also restore volumes from a point-in-time snapshot via APIs.

Using the example script revert-snap.sh, we can revert volume ‘vol3’ to the latest snapshot.


./revert-snap.sh -m vol3 -s last -c us-west-2.conf

We can also revert to older snapshots by selecting their unique IDs.


./revert-snap.sh -m vol3 -s a6518730-eaff-cc24-d020-52e25ea91c1b -c us-west-2.conf
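To find those IDs, you can list a volume's snapshots. Again, this sketch assumes a Snapshots sub-resource under the volume; adapt it to the actual API paths:

# List snapshots (names, IDs, creation times) for a volume
curl -s -H accept:application/json -H api-key:<api_key> -H secret-key:<secret_key> \
  -X GET <api_url>/v1/FileSystems/<volume_id>/Snapshots | jq .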

Deleting Cloud Volumes

It takes only a few seconds to create or delete a cloud volume, making it practical to use cloud volumes as high-performance shared scratch space for ephemeral workloads.

For example, using AWS CloudFormation, we can call AWS APIs to create hundreds of Amazon EC2 instances and Cloud Volumes APIs to create high-performance shared volumes that all of the instances can mount. This approach can be used to run compute- and storage-intensive jobs against a new data set. Once the job finishes, you can automatically terminate the instances and delete the cloud volumes to save costs.
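Sketched end to end, that lifecycle might look like the following (create-cv.sh and run-batch-job.sh are hypothetical stand-ins; delete-cv.sh is the example script described below):

#! /bin/bash
# Ephemeral scratch-space workflow (sketch): provision, compute, tear down.
./create-cv.sh -m scratch-job42 -l extreme -a 20000 -c us-west-1.conf
# ... CloudFormation launches the EC2 instances, which mount the volume ...
./run-batch-job.sh
# ... terminate the instances, then remove the scratch volume ...
./delete-cv.sh -m scratch-job42 -c us-west-1.conf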

The example script delete-cv.sh can be used to delete volumes. The following example deletes the cloud volume ‘test’ in region eu-west-1:


./delete-cv.sh -m test -c eu-west-1.conf
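Under the covers, this maps to a DELETE on the volume resource, following the same URL pattern as the earlier PUT calls (a sketch; confirm against the API reference):

# Delete the cloud volume by ID (destructive; see the caution below)
curl -s -H accept:application/json -H api-key:<api_key> -H secret-key:<secret_key> \
  -X DELETE <api_url>/v1/FileSystems/<volume_id>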

Of course, deleting a volume or reverting to a snapshot is destructive, so use these scripts with caution.

Note that Cloud Volumes API keys are unique to each user and only available to privileged AWS IAM users.

Next Steps

Every feature of NetApp Cloud Volumes Service for AWS is available through a web user interface and through RESTful APIs for programming tasks such as creating volumes, cloning, taking snapshots, and changing performance levels. RESTful APIs can be called from any modern programming language, making it easy to include cloud volumes in custom scripts and allowing partners to integrate the service into their applications.

To learn more about using service levels and RESTful APIs, watch this video (5:29 duration).

Graham Smith, Senior Product Manager
