When an organization is considering moving workloads to the cloud, the inevitable questions that arise are how easy—or difficult—the process will be and how it can be executed with minimal disturbance to users. Setting up a cloud environment can be done with just a few clicks, but moving data to the environment can pose a real challenge because data is constantly in use, and even changing, while applications are running.
The best way to migrate workloads to the cloud is the “lift-and-shift” approach. With the lift-and-shift migration method, an application and its data are migrated to the cloud with minimal or no changes in the application. The term “lift and shift” is fairly straightforward—an application is simply lifted and shifted to the cloud, much like one would lift cargo and move it into a different storage space. There’s still legwork, but it’s simpler than rearchitecting from scratch.
When migrating to the cloud, it is important not only to determine whether there is enough room in the storage space for the “goods,” but also to consider the resources the application will require in the new environment.
In most on-premises environments, more resources are provisioned than the application actually needs. Rather than rushing into migration, take the time to revisit the application’s requirements when moving it to the cloud. Doing so can lead to significant cost savings.
Almost all cloud providers can upgrade a resource while an application is running, so it is possible to start with a smaller configuration and scale up the server in the cloud environment as needed.
Before an application can be moved, the environment and architecture must be prepared. The assessment phase requires assessing all of the resources the application needs. Compute, network, and storage must all be taken into account, and a clear plan should be laid out.
Once a plan is in place, the next step is to allocate resources in the cloud. This involves first setting up compute and storage resources and then determining how to connect to these allocated resources. Securing the cloud environment is another critical step when provisioning the network.
Once the environment has been provisioned, the data can be moved to the cloud. The appropriate method for migrating data to the cloud depends on the data storage type, which can vary. There are a number of ways to move data to NetApp Cloud Volumes Service.
Securely connecting the on-premises data center to the Cloud Volumes Service environment is perhaps the most challenging part of migrating data to the service. First, a secure VPN connection between the data center and the cloud provider must be established. Once this has been done, volumes can be mounted on the servers that have the migration tool installed.
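For example, once the VPN is up, a Linux migration server can mount a Cloud Volumes Service NFS export in the usual way. The sketch below assumes NFSv3; the endpoint IP and export path are placeholders for the values shown in the volume’s mount instructions:

    # Mount the CVS NFS export over the VPN connection
    sudo mkdir -p /mnt/cvs
    sudo mount -t nfs -o rw,hard,vers=3,tcp 10.0.1.5:/migration-vol /mnt/cvs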
NetApp Cloud Volumes Service for AWS is a fully managed cloud service that makes it easy for customers to migrate to and manage mission-critical workloads and applications in the cloud.
CVS can be provisioned very quickly—in a matter of minutes. Migrating the data to CVS can also be done with ease, provided the right tools are used. Cloud Volumes Service supports both CIFS/SMB and NFSv3 protocols. There are a number of third-party tools on the market that can be employed to move data to Cloud Volumes Service.
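For a CIFS/SMB volume, for instance, a Windows migration server can map the share like any other; the server and share names below are placeholders:

    :: Map the CVS SMB share to a drive letter for the migration tool to use
    net use Z: \\cvs-smb-endpoint\migration-share /persistent:no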
Quest Secure Copy can be used to migrate data to CIFS volumes on CVS with zero impact on users. It can migrate data, attributes, and NTFS permissions, and it has the benefit of shortening the migration process while making it more secure. Configuring the migration is simple: all that is needed is to select the source and the destination.
The only step required prior to configuring Quest Secure Copy is granting the migration server access to both the source on-premises and the destination in CVS. This allows the migration process to be configured and tested before running the actual process. It is also possible to schedule a job in advance of maintenance window approval.
PeerSync from Peer Software is another useful tool for migrating data from a file server or an on-premises CIFS server. The tool offers real-time replication: files can be replicated to other servers as file events (such as adds, deletes, and modifications) occur.
It also integrates with popular vendors, including NetApp Data ONTAP. In addition, PeerSync features WAN optimization, making it ideal for data migration to the cloud. PeerSync can be used as a stand-alone product or as part of Peer Global File Service (PeerGFS).
Rsync and Robocopy are two tools that are well known to system administrators. Rsync is widely used in the Linux world and is useful for migrating data between NFS volumes on premises and NFS volumes in Cloud Volumes Service. Rsync copies data from one mount point to another.
In order to configure replication successfully, a server running a Linux distribution is needed, and both volumes must be mounted on this server. Rsync is then used to synchronize one volume to the other. It examines the modification time and size of each file to determine which files differ and need to be copied, and then copies only those files to the destination. For users who do not want to rely on this quick check alone, checksum comparison can also be turned on, though this will extend the time to completion.
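A minimal sketch of that setup follows (host names, export paths, and mount points are placeholders):

    # Mount the on-premises export and the CVS volume on the same Linux server
    sudo mkdir -p /mnt/source /mnt/target
    sudo mount -t nfs -o vers=3 onprem-filer:/vol/data /mnt/source
    sudo mount -t nfs -o vers=3 10.0.1.5:/migration-vol /mnt/target

    # Archive mode (-a) preserves permissions, ownership, timestamps, and
    # symlinks; -H also preserves hard links; --delete mirrors deletions
    sudo rsync -aH --delete /mnt/source/ /mnt/target/

    # Optional stricter pass: compare file contents by checksum (slower)
    sudo rsync -aH --checksum /mnt/source/ /mnt/target/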
Robocopy, Xcopy’s replacement, offers more options and was originally distributed with the Windows Resource Kit before becoming a standard feature in Windows Vista and Windows Server 2008. Among Robocopy’s advantages is its ability to resume a copy after network interruptions. It also copies file data and attributes correctly, preserving original timestamps as well as NTFS ACLs, owner, and audit information.
On a Windows file server, read access is frequently denied on many files, even to administrators; this can be bypassed by including Robocopy’s backup mode switch (/B). Anyone who has ever migrated files from a source to a destination has likely run into errors caused by paths that were too long. With a limit of roughly 32,000 characters, as opposed to 259 with a regular copy, Robocopy eliminates this problem.
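A sketch of a typical invocation from the migration server (share names and the log path are placeholders):

    :: Mirror the share, copying data, attributes, timestamps, NTFS ACLs,
    :: owner, and auditing information (/COPYALL); /B uses backup mode to
    :: bypass denied reads; /R and /W limit retries after network hiccups
    robocopy \\onprem-filer\share \\cvs-smb-endpoint\share /MIR /COPYALL /B /R:3 /W:10 /MT:16 /LOG:C:\logs\cvs-migration.log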
NetApp’s free XCP Migration Tool is popular among NetApp storage system administrators. The tool can also be used to migrate data from on-premises to cloud volumes on CVS. XCP is capable of fast CIFS and NFS migrations and offers a number of advanced features. The tool can handle the migration of volumes with millions of files.
XCP is able to scan volumes and recursively read all subdirectories, producing easy-to-read lists and reports for further processing. The copy process transfers files to the destination so that they match the source exactly, including hard links, symlinks, special file types, permissions, ownership, and NTFS ACLs.
After the copy is complete, XCP can verify that the files are in fact identical on both ends. One good way to minimize downtime is to re-sync only the files that changed during the copy and then cut over, which makes the data accessible in the cloud in the shortest possible time. Network interruptions can occur while moving data to the cloud, but they do not mean the migration needs to be re-executed from the beginning; the tool can resume from where it stopped.
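As a sketch of that workflow, assuming the Linux NFS build of XCP (hosts, export paths, and the job ID are placeholders):

    # Scan the source export to size up the job and generate reports
    xcp scan -stats onprem-filer:/vol/data

    # Baseline copy into the CVS volume, tagged with an ID for later re-syncs
    xcp copy -newid cvs-migration onprem-filer:/vol/data 10.0.1.5:/migration-vol

    # If the network drops, resume the interrupted job instead of starting over
    xcp resume -id cvs-migration

    # At cutover time, copy only what changed since the baseline
    xcp sync -id cvs-migration

    # Confirm that source and destination match
    xcp verify onprem-filer:/vol/data 10.0.1.5:/migration-vol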
Discussed above are just some of the tools that can be used to migrate data, whether NFS or CIFS, to Cloud Volumes Service using a lift-and-shift method; any tool on the market that supports NFS or CIFS migration can be used.
The migration process can be daunting, but it doesn’t have to be. With virtually any migration tool that supports NFS or CIFS, moving to Cloud Volumes Service can be just as simple as migrating between two physical servers in the on-premises world.
Learn more about the benefits of moving to Cloud Volumes Service for AWS today. Learn about the 5 phases of enterprise migration to the cloud with our ebook!