June 12, 2019
7 minute read
Imagine a cloud infrastructure that’s ready to support your needs as soon as you deploy your file-level applications. At the click of a button, your system’s performance is elastic, effective, and less costly. On top of that, there’s absolutely no need for you to manage your storage stack.
That’s the dream for any IT professional or engineer. And that’s exactly the reality that NetApp® Cloud Volumes Service for AWS provides.
With Cloud Volumes Service, you can deploy all kinds of applications, including high-performance computing and databases, without having to manage the underlying storage stack. Best of all, you can use the Cloud Volumes GUI or its API to do this deployment.
Creating an NFS Volume in Cloud Volumes Service
For this walk-through, we’re going to assume that you already have an Amazon Web Services (AWS) account connected to your NetApp account. The first time you create a cloud volume in an AWS region, you need to enter the AWS network information for that region. After you’re signed in, you’re ready to create a volume.
In just four steps, your cloud volume will be ready to go.
Step 1
Go to the Cloud Central page for Cloud Volumes Service for AWS and log in with your user account credentials. Then select Create New Volume.
Step 2
We are going to set up this volume for NFS protocol use; however, Cloud Volumes Service supports SMB 2.1, 3.0, and 3.1.1, as well as NFSv3 and NFSv4. We’ll look at an example of how to set up an SMB volume and a dual-protocol volume later in this article.
After you select the file-level protocol, it’s time to enter all of the volume details. These details include the volume’s name, region (make sure it matches the region assigned to your Amazon EC2 instance in AWS), the volume path of your choice, service level, allocated capacity, and security style. When you know the capacity you need for the volume and the bandwidth needed for data access, you can choose the optimal calibration of service level and allocated capacity. You can also change the service level on demand.
Step 3
Again, if this is your first time creating a cloud volume for AWS, the Cloud Orchestrator site prompts you to enter all of your AWS network details.
Review the information in the warning message to make sure you’ve entered everything correctly.
Step 4
To control access permissions, add an export policy for the NFS volume. At this stage, that means entering IP ranges or single IP addresses from the AWS instances that need access to the NFS volume. You can add more than one rule. When you’re done, select Create Volume.
You’ll then be redirected to the main screen, where the cloud volume will appear. And you’re all set. In just four steps, your volume is ready to go, with a clear export path.
Mounting the Volume from Your Amazon EC2 Instance
At this point, you need shell access to your Amazon EC2 instance. AWS has a very clear set of instructions on how to get SSH access for your instance. The Cloud Orchestrator site makes it easy for you to mount your volume, because it provides all the instructions, along with the exact commands you need to use. You can simply copy and paste that information into your shell command line. Just click the appropriate cloud volume and select the question mark icon next to the export path, as shown here.
You’ll then see the instructions window:
In this example, we mounted this volume from a Red Hat 7.6 instance. To do that, you first need to install the NFS client on your Red Hat Linux instance.
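On Red Hat Enterprise Linux 7, that typically means installing the nfs-utils package (a minimal sketch; adjust the command if your image uses a different package manager):
sudo yum install -y nfs-utils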
Next, you need to create the directory for the mount point:
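For example, to create the example1 mount point used in the mount command that follows:
sudo mkdir example1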
Now apply the mount command:
sudo mount -t nfs -o rw,hard,rsize=65536,wsize=65536,vers=3,tcp 172.17.52.20:/NFS-volume example1
As the footnote in the mounting instructions says, you’ll want to check which mounting options are best for your scenario. You’ll need to consider attributes such as read-and-write access to the file system, maximum read size, and maximum write size.
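If you want to double-check which options actually took effect after mounting, the nfsstat utility that ships with nfs-utils can list them for each NFS mount:
nfsstat -m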
And that’s it!
You can use the touch command to create a file and make sure the mounting was successful.
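For example, using the example1 mount point from this walk-through:
sudo touch example1/testfile
ls -l example1/testfile
If the file appears without errors, the volume is mounted and writable.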
Creating an SMB Volume in Cloud Volumes Service
If you need Windows SMB file-level access, you’ll follow the first two steps for creating an NFS volume, as shown earlier. But instead of choosing an NFS protocol, you’ll choose an SMB one:
SMB volumes allow you to select additional in-transit SMB3 protocol encryption:
Note: Do not enable SMB3 encryption if SMB 2.1 clients need to mount the volume.
The process for setting up SMB volume access differs from that of an NFS volume: instead of defining export policies, you integrate the volume with Active Directory, so you’ll enter different information for the volume access permissions:
In the previous image, you can see how to integrate the volume with Windows Active Directory or with AWS Managed Windows Active Directory. When using AWS Managed Microsoft Active Directory, enter the following information for the Domain and Organizational Unit boxes:
- Domain: Use the value from the Directory DNS Name field in AWS.
- Organizational Unit: Enter the organizational unit in the format OU=<NetBIOS_name>. An example is OU=AWSmanagedAD. For detailed information about integration with AWS Managed Microsoft Active Directory, check AWS Directory Service Setup with NetApp Cloud Volumes Service for AWS.
The final step is to mount the SMB volume. Just follow the mount instructions for Windows after the volume is created. The process is even simpler than mounting NFS volumes.
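As a rough illustration, mapping the share from a Windows command prompt looks something like the following sketch; the server name and share name are placeholders, so copy the exact UNC path from your volume’s mount instructions:
rem placeholder UNC path; use the one shown in your mount instructions
net use Z: \\example-smbserver.cvsdemo.local\SMB-volume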
Creating Dual-Protocol Volumes in Cloud Volumes Service
Finally, you can create dual-protocol volumes in Cloud Volumes Service. Because access will be enabled for both NFS and SMB clients, you need to enter both the export policy details and the Active Directory information when you create these volumes.
As the message states, make sure the users root and pcuser exist in your Active Directory before mounting the volume from NFS clients.
Creating a Cloud Volume by Using REST APIs
All the NetApp cloud orchestration capabilities are also supported through REST APIs. That level of flexibility offers additional management and cost advantages. For example, you can schedule an increase in a volume’s performance level when you’re going to deploy a more demanding workload, and then return the volume to its normal service level when the workload is no longer running.
In this section, we’ll show you how to create a volume by using the supported API call. Before getting there, we want to point you to the resources you’ll need for API management.
You’ll first need your API public key, API private key, and API URL. You’ll need the keys for each API call you intend to make to the server; otherwise, the authentication will fail. Log in to the NetApp Cloud Orchestrator site and select the API Access option:
Copy your URL from the navigation address bar. The URL you need to enter in your API calls should be in the format https://user.region.netapp.com:8080. In this example, the URL was https://cv.us-west-1.netapp.com:8080.
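If you plan to script several calls, it can be convenient to keep these values in shell variables (a small convenience sketch; the key values are the same placeholders used in the curl example later in this article):
# placeholder values; substitute your own keys and URL from the API Access page
export CV_URL="https://cv.us-west-1.netapp.com:8080"
export CV_API_KEY="YkjhgTgyuuiiHGDddd"
export CV_SECRET_KEY="NBHgttyyuuiDD2hZandaMUliQ2NkS3RqcUU4"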
After you get your API keys and URL, you need access to the supported API calls and some example guides. You can get these from two main sources:
- You can find some examples and command syntax guides from the Cloud Volumes APIs section of the NetApp Cloud Volumes Service for AWS documentation.
- You can access the Cloud Volumes as a Service API webpage, which is also accessible from your main Cloud Orchestrator UI. This page contains the supported API operations and the syntax for every HTTP or HTTPS method selected. Review the following section carefully to familiarize yourself with the supported operations and command syntax:
Now that the API keys and URL are set, we are going to create an NFS volume from our Red Hat instance by using the curl command-line tool from Bash. Curl is also available on Windows.
Creating the NFS Volume by Using an API Call Through Curl
The command we use in Bash to create our volume is:
curl -s \
  -H accept:application/json \
  -H "Content-type: application/json" \
  -H api-key:YkjhgTgyuuiiHGDddd \
  -H secret-key:NBHgttyyuuiDD2hZandaMUliQ2NkS3RqcUU4 \
  -X POST https://cv.us-west-1.netapp.com:8080/v1/FileSystems \
  -d '{
  "name": "TestAPIVOL",
  "creationToken": "random-token",
  "region": "us-west-1",
  "serviceLevel": "basic",
  "quotaInBytes": 100000000000,
  "exportPolicy": {
    "rules": [
      {
        "ruleIndex": 1,
        "allowedClients": "0.0.0.0/0",
        "unixReadOnly": false,
        "unixReadWrite": true,
        "cifs": false,
        "nfsv3": true,
        "nfsv4": false
      }
    ]
  },
  "labels": ["test"]
}'
In this code, the parameter creationToken is the same as the Volume Path field in the Cloud Orchestrator web interface. The text or path you enter here will be part of your volume export path and is required for mounting the volume. Also, if you want to enable SMB access to the volume, you need to set the cifs parameter to true.
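For example, a rule that grants the same clients both NFS and SMB access might look like this (a variation on the exportPolicy rule shown above, not a complete request body):
"exportPolicy": {"rules": [{"ruleIndex": 1, "allowedClients": "0.0.0.0/0", "unixReadOnly": false, "unixReadWrite": true, "cifs": true, "nfsv3": true, "nfsv4": false}]}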
Note: There is a separate API call you can use to enter all of your Active Directory details:
-X POST "http://nfs.netapp.com/v1/Storage/ActiveDirectory"
With the volume already created, follow the instructions outlined earlier in order to mount it from your instance.
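If you’d like to confirm that the volume exists before mounting it, you can list your volumes with a GET request against the same FileSystems endpoint (a sketch that reuses the placeholder keys and URL from the create example; see the API page mentioned earlier for the full response details):
curl -s -H accept:application/json -H api-key:YkjhgTgyuuiiHGDddd -H secret-key:NBHgttyyuuiDD2hZandaMUliQ2NkS3RqcUU4 -X GET https://cv.us-west-1.netapp.com:8080/v1/FileSystems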
More Tools, Less Hassle
In this article, we’ve shown you how to create and mount NFS, SMB, and dual-protocol volumes for Cloud Volumes Service for AWS by using either the NetApp Cloud Orchestrator site or API calls. These utilities make it easy to take advantage of Cloud Volumes Service’s capabilities, so provisioning and mounting volumes takes only a small part of your administrative effort.
Ready to Get Started?
To try Cloud Volumes Service for AWS today, sign up for your free demo.