With hybrid and multicloud storage strategies now widely adopted, it's important for both business leaders and engineers to understand what these strategies and deployment models actually entail and the impact they'll have on technology decisions.
In this article, we are going to address the typical mistakes and misconceptions associated with hybrid and multicloud deployments, and how Cloud Volumes ONTAP can help.
The first reaction business leaders often have to hybrid and multicloud strategies is to think of them as technology-agnostic environments. That assumption is wrong. To understand the importance of hybrid and multicloud to an organization, it's necessary to look beneath the surface and understand what it actually means at both the micro and macro levels.
In the context of a single system or solution (i.e., at the micro level), certain requirements can warrant designing a hybrid or multicloud architecture. Even so, such architectural decisions should never be taken lightly.
In such architectural scenarios, there is a natural tendency to opt for the lowest common denominator of cloud capabilities, such as using only virtual machines instead of managed services, because each provider's managed offerings differ. Virtual machines are available in any cloud environment, letting customers install their own platform services, while an equivalent managed service may or may not exist on a given provider. Forgoing managed services in cloud-native applications significantly increases operational overhead and prevents you from truly taking advantage of the cloud's benefits.
The unique or more advanced capabilities of a given cloud provider in a certain area are one case where a hybrid or multicloud architecture can be justified. The rapid pace of innovation among the major cloud providers makes it very appealing to select best-of-breed services and take advantage of specific capabilities.
A typical example of a hybrid and multicloud deployment is an on-premises system that can burst to the cloud for data processing, analytics, and machine learning training. These models are also beneficial for systems that must process and store some of their data in a specific geographic location (e.g., Europe or China) for compliance and legal reasons. In that scenario, development teams might deploy part of the system to a cloud provider that offers resources in the required region.
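The burst-to-cloud pattern above can be sketched as a simple scheduling decision. This is an illustrative toy, not any real scheduler: the `ON_PREM_CAPACITY` value and the `placement` function are made-up assumptions.

```python
# Toy sketch of a burst-to-cloud placement decision. The capacity figure
# and function names are hypothetical examples, not a real system.

ON_PREM_CAPACITY = 8  # assumed number of jobs the on-premises cluster can run at once

def placement(pending_jobs: int) -> dict[str, int]:
    """Split pending work: fill on-prem capacity first, burst the overflow to the cloud."""
    on_prem = min(pending_jobs, ON_PREM_CAPACITY)
    return {"on_prem": on_prem, "cloud_burst": pending_jobs - on_prem}

print(placement(5))   # → {'on_prem': 5, 'cloud_burst': 0}
print(placement(20))  # → {'on_prem': 8, 'cloud_burst': 12}
```

Real implementations trigger on richer signals (queue depth over time, deadlines, data locality), but the core idea is the same: on-premises capacity is the baseline and the cloud absorbs the peaks.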
When looking from an organizational strategy perspective (i.e., at the macro level), it is truly important to understand the role of hybrid and multicloud. Even when using only a single cloud provider, it is good to remember that any company is only one merger or acquisition away from having to govern resources across multiple cloud providers.
A company that today only uses AWS might decide to acquire another company that only uses Microsoft Azure, and find itself in a challenging situation overnight if its governance and operating model does not take hybrid and multicloud into account.
In large organizations it's fairly common to have more than one cloud provider in use. Moreover, with the hefty investments made in on-premises data centers over the past years, many organizations are still gradually migrating their workloads to the public cloud while managing a hybrid cloud environment.
While M&amp;A and cloud migration are two obvious drivers of hybrid and multicloud strategies, plenty of organizations have made a conscious choice to go down that path. While all cloud providers follow a pay-as-you-go cost structure, there can be special pricing advantages for customers willing to commit to a certain spending level. A reserved capacity commitment can lead to discounts of over 70% off on-demand pricing. With the big cloud providers fighting for market share, customers can often unlock good volume discounts by selecting a second cloud provider to fulfill part of their computing needs. While that might add complexity to the organization, combined with a solid cloud management operating model it can bring massive cost efficiencies.
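To make the discount math concrete, here is a minimal sketch of on-demand versus reserved-capacity cost. The rates and the 70% discount are made-up illustrative assumptions, not any provider's real pricing.

```python
# Hypothetical cost comparison; every number here is an assumption for
# illustration, not real cloud pricing.

ON_DEMAND_RATE = 0.10     # assumed $/hour for one instance
RESERVED_DISCOUNT = 0.70  # assumed 70% discount for a capacity commitment
HOURS_PER_YEAR = 24 * 365

def annual_cost(instances: int, utilization: float, reserved: bool) -> float:
    """Annual compute cost for a fleet at a given average utilization."""
    rate = ON_DEMAND_RATE * (1 - RESERVED_DISCOUNT) if reserved else ON_DEMAND_RATE
    # Reserved capacity is paid for whether used or not; on-demand is billed
    # only for the hours instances actually run.
    hours = HOURS_PER_YEAR if reserved else HOURS_PER_YEAR * utilization
    return instances * rate * hours

on_demand = annual_cost(100, utilization=0.9, reserved=False)
reserved = annual_cost(100, utilization=0.9, reserved=True)
print(f"on-demand: ${on_demand:,.0f}  reserved: ${reserved:,.0f}")
```

Note the shape of the tradeoff: at high, steady utilization the commitment wins by a wide margin, while at low utilization the always-on reserved capacity can actually cost more than pure pay-as-you-go.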
It all boils down to cloud and data governance. That is the real “secret” to making it work day to day. It does not matter whether you are working with a single cloud provider, multiple providers, or only on-premises data centers: every organization needs a governance and operating model that can bridge the gap between engineering and executive leadership.
Cloud and data governance is not a one-time effort, nor is it something done in a siloed part of the organization. It is a continuous process and discussion that connects the dots and becomes the fabric of the organization. Discussions and decision making can’t happen in a vacuum; they require multiple stakeholders (technical leads, architects, procurement, cybersecurity, product and service owners) to come together at the same virtual table to decide on tooling, processes, and guidelines.
When engineering teams are designing a solution architecture, the baseline tooling and guidelines play a huge role in shaping the implementation plan. When it comes to the cloud, it is generally preferable to use a single provider to maximize the use of out-of-the-box managed services, while still considering a multicloud architecture when there is a good use case for it.
The natural vendor lock-in that comes with using a single cloud provider is a common argument for opting for hybrid and multicloud architectures. The fallacy in that reasoning is that it forces the organization to use the lowest common denominator in computing, networking, and storage services, significantly increasing operational overhead and preventing teams from reaping the benefits of a cloud-native design.
It is worth remembering that every technology choice carries lock-in of one kind or another, whether in programming language, database, containers, functions, CI/CD, etc. Weighing the tradeoffs of those choices, both technical and business, is essential when selecting the technology stack.
As part of their risk assessment, architects can weigh the pros and cons of each option and consider their exit strategy. For example, a certain cloud-native solution on provider X could be built five times faster using managed services (and operated with minimal effort) compared with building on top of virtual machines with the same provider. Suppose that if and when the solution needs to be migrated to another cloud provider (i.e., the exit strategy), the migration would take 12 months to complete. Is that delay tolerable for the business stakeholders? This mindset generates a far more constructive dialog between engineers and business stakeholders about vendor lock-in concerns than treating hybrid and multicloud architectures as silver bullets for it.
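The build-speed versus exit-cost reasoning above can be framed as expected effort. This is a hedged back-of-the-envelope sketch: the effort figures and the migration probability are invented for illustration, not project data.

```python
# Illustrative expected-effort comparison for the exit-strategy discussion.
# All numbers below are assumptions, not real estimates.

def total_effort_months(build_months: float, exit_months: float,
                        exit_probability: float) -> float:
    """Expected effort: initial build plus the exit cost weighted by how
    likely a migration to another provider actually is."""
    return build_months + exit_probability * exit_months

# Managed services: builds roughly 5x faster, but a later migration is expensive.
managed = total_effort_months(build_months=3, exit_months=12, exit_probability=0.1)
# Virtual machines only: much slower to build, cheaper to move later.
vms_only = total_effort_months(build_months=15, exit_months=3, exit_probability=0.1)

print(managed, vms_only)  # expected effort in months for each option
```

With these assumptions the managed-services route wins comfortably; the point of the exercise is that the exit probability becomes an explicit number business stakeholders can debate, instead of an unquantified fear.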
The ability to store data in the cloud with nearly unlimited capacity was a game changer for organizations. The pay-as-you-go model was a perfect fit compared with the blind, constant capacity increases of on-premises storage procurement. Built-in features such as easy encryption at rest and data tiering made it attractive for engineering teams to adopt cloud storage in their day-to-day software development. The case for storage is even more important in hybrid and multicloud.
Storage and data persistence play critical roles in shaping the way we think about hybrid and multicloud. Where and how data is stored defines what the potential exit strategy might be, the effort it requires, and what types of integrations are possible.
Each cloud provider has its own services for storing data, such as object and file storage, and they don't interoperate. Migrating a system from cloud provider A to cloud provider B therefore typically involves copying data from A to B, which consumes a lot of time and engineering resources. Moreover, when it comes to data governance, certain controls may need to be implemented in slightly different ways to accommodate the specifics of each storage service and cloud provider.
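The A-to-B copy problem can be sketched with a small provider-agnostic interface. The `ObjectStore` protocol and the `InMemoryStore` stand-in below are hypothetical, not any provider's SDK; in reality each side would wrap a provider-specific client, and the copy loop is where the time and egress cost accumulate.

```python
# Minimal sketch of cross-provider data migration behind one tiny interface.
# ObjectStore and InMemoryStore are hypothetical stand-ins, not a real SDK.
from typing import Iterator, Protocol

class ObjectStore(Protocol):
    def keys(self) -> Iterator[str]: ...
    def get(self, key: str) -> bytes: ...
    def put(self, key: str, data: bytes) -> None: ...

def migrate(source: ObjectStore, target: ObjectStore) -> int:
    """Copy every object from source to target; returns the number copied."""
    count = 0
    for key in source.keys():
        target.put(key, source.get(key))
        count += 1
    return count

class InMemoryStore:
    """Toy in-memory store standing in for a provider-specific client."""
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}
    def keys(self) -> Iterator[str]:
        return iter(list(self._objects))
    def get(self, key: str) -> bytes:
        return self._objects[key]
    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

provider_a, provider_b = InMemoryStore(), InMemoryStore()
provider_a.put("reports/2023.csv", b"q1,q2\n1,2\n")
copied = migrate(provider_a, provider_b)
print(copied)  # → 1
```

A storage layer that already spans providers, as described next, removes the need for teams to build and operate this kind of adapter layer themselves.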
With Cloud Volumes ONTAP, a managed storage platform that works natively with on-premises environments and the main cloud providers, you can greatly improve your cloud and data governance processes and tooling in a hybrid or multicloud deployment across AWS, Azure, or GCP.
With Cloud Volumes ONTAP you gain much-needed, enterprise-grade capabilities such as data tiering, cost-cutting storage efficiencies, and data protection at scale that span all environments and bring consistency to how data is managed. Plus, with extremely high performance and data cloning features, it opens the door to new possibilities when designing hybrid and multicloud system architectures and can significantly decrease the time required to migrate between different environments, both on-premises and in the cloud.
Learn more about enterprises that turned to multicloud and hybrid cloud architectures with Cloud Volumes ONTAP.