When it comes to cloud workload mobility, the solution goes beyond replicating a VMware stack to the cloud. Mobility is not about the constrained migration of a static environment; it's about complete services that can be provisioned quickly, without freezing or shutting down a cloud workload.
The flexibility required to move services, and the challenges that come with it, are not new in the world of enterprise IT.
However, with the rise of public clouds such as AWS and Azure, IT teams and leaders have another major incentive to open the doors of their on-premises infrastructure in a way that lets them maintain control and decide when and where a cloud workload runs.
If done right, live migration and complete cloud workload mobility hold great benefits.
Just think about the option to leverage AWS spot instances when they are offered at their lowest, region-based prices (up to a 90% discount, according to Amazon).
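To make the spot-pricing idea concrete, here is a minimal Python sketch that picks the cheapest region from a set of current spot prices. All prices and the on-demand rate below are hypothetical; in practice you would pull live figures from the AWS API (for example, boto3's `describe_spot_price_history`) rather than hard-code them:

```python
# Illustrative sketch: choosing the cheapest region for a spot workload.
# Prices are hypothetical; fetch real ones from the AWS spot price API.

ON_DEMAND_PRICE = 0.17  # hypothetical on-demand $/hour for the instance type

spot_prices = {  # hypothetical region -> current spot $/hour
    "us-east-1": 0.052,
    "eu-west-1": 0.048,
    "ap-south-1": 0.017,
}

def cheapest_spot_region(prices):
    """Return the (region, price) pair with the lowest spot price."""
    return min(prices.items(), key=lambda item: item[1])

region, price = cheapest_spot_region(spot_prices)
discount = 1 - price / ON_DEMAND_PRICE
print(f"Run in {region} at ${price}/h ({discount:.0%} below on-demand)")
```

With full workload mobility in place, a scheduler could run this kind of comparison continuously and move the workload whenever the savings justify it.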
If you are an IT leader in your organization, this article will help you understand the concepts and fundamental considerations for building a dynamic multi-environment cloud that can enable full cloud workload mobility.
Workload mobility means that at any point in time a service or application can be taken down, adjusted, and then restored at the target location with little or no disruption to end users and the business.
However, the advantages offered by workload mobility do not remove its main challenge: moving the data while ensuring its integrity, for example database consistency.
The notion of Data Gravity, proposed by Dave McCrory, describes the dependency of applications and services on their data. As the data grows, so does the number of services and applications using it, which in turn increases the complexity of the environment.
In addition, McCrory discusses latency and throughput in relation to the data's location: the closer an application is to the data, the better the throughput and the lower the latency.
Considering all of the above, data mobility is a significant factor when assessing whether a workload can be migrated in a fast and predictable way.
Workload mobility is a key component in controlling the balance of power between you and your vendors. The fact that you can switch vendors at any point puts you in control and reduces vendor lock-in. It gives you the option to benefit from the advantages of each vendor, balancing your infrastructure footprint between price and performance as you see fit, and to change course at any time with minimal risk.
To achieve this, your systems need to be interoperable, meaning that each environment must have the capabilities required to run the workload. Interoperability also refers to your ability to use the same management tools, server images, and other software with a variety of cloud and hosting providers.
Interoperability ensures your workloads, processes, and procedures will work the same way consistently, no matter where they run.
To take advantage of several different on-premises or off-premises infrastructures, you need a common denominator. For instance, an organization may standardize on plain VMs and avoid higher-level platform capabilities, such as AWS Elastic Beanstalk, that would make workload mobility almost infeasible.
To identify that least common denominator, IT leaders should treat the on-premises infrastructure as the flagship standard. This also means, however, looking for third-party vendors who provide out-of-the-box compatible management solutions.
You should check both the AWS and Azure marketplaces to see what’s available from existing vendors, specifically the products that you already use to run your on-premises storage and workloads.
When looking at traditional migration, IT veterans know that great effort is involved, especially where manual activities are concerned; for example, setting up new physical machines and network addresses, both of which are subject to human error.
In contrast, when dealing with migration in or to the cloud, transition processes may appear simple and linear, especially if an environment is small, but this can be misleading. IT teams may rush to migrate manually without implementing any workload mobility automation.
The key to success is to automate as much of the manual processes as possible.
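As one concrete example, the network-address assignment mentioned earlier can be scripted so that the same inputs always produce the same plan, removing the transcription errors a manual process invites. A minimal Python sketch, using a hypothetical subnet and host names:

```python
# Minimal sketch: deterministic, automated address assignment for migrated
# hosts, replacing an error-prone manual step. The subnet and host names
# below are hypothetical examples.
import ipaddress

def assign_addresses(subnet: str, hosts: list) -> dict:
    """Map each host to the next free address in the target subnet."""
    net = ipaddress.ip_network(subnet)
    usable = net.hosts()  # iterator over usable addresses in the subnet
    return {host: str(next(usable)) for host in sorted(hosts)}

plan = assign_addresses("10.20.30.0/28", ["db-01", "app-01", "app-02"])
print(plan)  # identical inputs always yield the identical plan
```

The same pattern applies to any repeatable migration step: encode it once, review it once, and let automation carry it out on every subsequent move.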
One of the main challenges in keeping an "always-on" infrastructure as you grow in the cloud is the scale and flexibility of your storage repositories, in terms of both capacity and performance.
You must be able to expand your storage and move data seamlessly; for example, when disaster strikes and you need your secondary cloud site to be up to date. For that, you need to maintain synchronous mirroring with clustering at the storage level.
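A toy sketch of the synchronous-mirror semantics described above: a write is acknowledged only after both sites have persisted it, so the secondary is never behind the primary. Real storage-level mirroring is far more involved; local file paths stand in for the two sites here:

```python
# Toy sketch of a synchronous-mirror write: acknowledge only after both
# the primary and the secondary site have persisted the data. Local files
# are stand-ins for the two storage sites.
import os
import tempfile

def mirrored_write(primary_path: str, secondary_path: str, data: bytes) -> bool:
    """Write data to both sites; acknowledge (True) only once both persist it."""
    for path in (primary_path, secondary_path):
        with open(path, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the write to stable storage
    # Synchronous semantics: both copies must match before we acknowledge.
    with open(primary_path, "rb") as a, open(secondary_path, "rb") as b:
        return a.read() == b.read()

site_dir = tempfile.mkdtemp()
primary = os.path.join(site_dir, "primary-site")
secondary = os.path.join(site_dir, "secondary-site")
acked = mirrored_write(primary, secondary, b"journal record 42")
```

The cost of this guarantee is that every write pays the round-trip to the secondary site, which is why synchronous mirroring is usually paired with the efficiency techniques discussed next.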
You also need to employ compression and deduplication to optimize both data transfer and the storage repositories at each site. These techniques make data operations efficient and cost-effective, saving both time and money.
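The interplay of the two techniques can be sketched in a few lines: only chunks the target site has not already seen are compressed and shipped. The fixed chunk size and the hash-based dedup index below are illustrative assumptions, not how any particular storage product implements it:

```python
# Illustrative sketch of chunk-level deduplication plus compression before
# transfer: only chunks the remote site lacks are compressed and sent.
import hashlib
import zlib

CHUNK = 4096  # illustrative fixed chunk size (real systems vary)

def plan_transfer(data: bytes, remote_hashes: set) -> list:
    """Compress only the chunks the remote site does not already hold."""
    payloads = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in remote_hashes:            # dedup: skip known chunks
            payloads.append(zlib.compress(chunk))  # compress what we do send
            remote_hashes.add(digest)
    return payloads

data = b"A" * 8192 + b"B" * 4096  # two identical chunks plus one unique chunk
sent = plan_transfer(data, set())
```

Of the three chunks in `data`, the duplicate is skipped entirely and the remaining two travel in compressed form, which is exactly the saving that makes repeated cross-site synchronization affordable.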
The benefits of workload and data mobility are substantial: optimized operational and hosting costs, and the ability to bring workloads closer to your users to avoid latency issues. Mobility also supports migration for dev/test purposes, allowing newly developed features to return "back home" whenever required.
However, these benefits come with significant challenges that require you to rethink your architecture and re-qualify your existing vendors, making sure the services you purchase from them enable your applications to run anywhere, at any time, in the cloud.
If you are a NetApp ONTAP user, you can optimize and automate your data mobility to the cloud using ONTAP Cloud.