December 21, 2017
Topics: Cloud Volumes ONTAP, High Availability, Data Migration, AWS, Disaster Recovery
Amazon Web Services (AWS) makes it easier than ever to deploy workloads in the cloud, but having your workload on AWS doesn’t mean that you can stop worrying about it. What if something goes wrong? Can you ensure that your AWS high availability requirements are met?
Take one look at the Shared Responsibility Model with AWS and you’ll see that you are still responsible for a good portion of the maintenance of your applications and data.
When a single point of failure causes a system-wide problem, AWS may not come to the rescue: the Shared Responsibility Model clearly states that while AWS is responsible for the underlying infrastructure your cloud deployment depends on, such as the durability of the Amazon EBS storage itself, it plays no part in maintaining your data.
You have to make sure that your secondary sites store up-to-date, redundant data in case of a failover. This can be tricky when the data comes from both cloud and on-premises sources, which is exactly how Cloud Volumes ONTAP users manage their data with AWS storage.
How can you use Cloud Volumes ONTAP and OnCommand Cloud Manager to build an infrastructure that can be ready to bounce back in the event of a failure? This post will look at creating data redundant sites — how AWS resources and Cloud Volumes ONTAP combine to ensure you operate secondary sites that have high availability through data redundancy.
Redundant Environment Basics
How can you use AWS tools to ramp up your AWS redundancy and create a data-redundant site? Start by figuring out what matters most to you: What are your data integrity objectives? Your job is to create the framework to set up a cloud-based secondary site for backup that will ensure high availability by increasing data redundancy.
Compute, networking, and database services form the foundation of application hosting on AWS. Three core AWS services have to be set up to cover them:
- Amazon EC2 instances for compute.
- Amazon Route 53 or Amazon Elastic Load Balancing for networking and availability.
- Amazon Relational Database Service (Amazon RDS) for the database service.
There are challenges to creating data redundant sites in the cloud. To meet them, it will be necessary to do these three things:
1. Build an identically-functioning environment in the AWS cloud.
2. Find a way to continuously replicate your primary site’s data over to the redundant cloud site.
3. Make sure the data is secure both in transit and at rest on the new site.
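A basic building block behind steps 2 and 3 is verifying that the replica actually matches the primary. Below is a minimal, hypothetical sketch (not Cloud Volumes ONTAP code) of how per-block checksums can confirm a secondary site's copy is intact and pinpoint what still needs to be re-sent:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a block of data."""
    return hashlib.sha256(data).hexdigest()

def verify_replica(primary_blocks, replica_blocks):
    """Compare per-block checksums between a primary and its replica.

    Returns the block IDs whose contents differ or are missing on the
    replica, so a sync job knows exactly what to re-send.
    """
    mismatched = []
    for block_id, data in primary_blocks.items():
        replica_data = replica_blocks.get(block_id)
        if replica_data is None or sha256_of(data) != sha256_of(replica_data):
            mismatched.append(block_id)
    return mismatched

# Toy data: one block diverged, one is missing on the replica.
primary = {"b1": b"orders-2017", "b2": b"customers", "b3": b"invoices"}
replica = {"b1": b"orders-2017", "b2": b"customers-stale"}
print(verify_replica(primary, replica))  # ['b2', 'b3']
```

Comparing digests rather than raw blocks keeps the check cheap enough to run after every transfer.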
Cloud Volumes ONTAP helps address all three of these challenges.
Achieving AWS Redundancy with Cloud Volumes ONTAP
Cloud Volumes ONTAP offers both reliable backup transfers to cloud instances and the ability to encrypt and secure that data.
AWS storage services provide some redundancy automatically: Amazon S3 replicates objects across Availability Zones, and Amazon EBS replicates volumes within an Availability Zone. But that only takes care of data already stored on AWS. Your on-premises data has no protection if you're not taking care of it yourself.
Cloud Volumes ONTAP does two important things:
- Allows you to easily create secondary copies of your on-premises deployments.
- Ensures that if one site fails, you can fail over and fail back between copies with no loss of data.
The backup-and-restore architecture is one of the easiest and most dependable patterns to use: it lets you simply ship data backups to the cloud on an ongoing basis. This is possible with Cloud Volumes ONTAP through the use of SnapMirror®.
SnapMirror is an easy-to-use, efficient data replication tool that keeps on-premises data synced with cloud storage. The cloud copy is initially created from a snapshot of the original data; from that point onwards it syncs incrementally, sending over only the data that has changed.
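The snapshot-based incremental pattern can be sketched in a few lines. This is a toy model, not SnapMirror's actual wire protocol: the first sync ships every block, and each subsequent sync compares two snapshots and ships only the delta:

```python
def baseline_transfer(source_snapshot):
    """Initial sync: copy every block of the source snapshot."""
    return dict(source_snapshot)

def incremental_transfer(previous_snapshot, current_snapshot):
    """Return only the blocks that changed or appeared since the last snapshot."""
    return {
        block_id: data
        for block_id, data in current_snapshot.items()
        if previous_snapshot.get(block_id) != data
    }

# First sync ships the whole dataset...
snap1 = {"b1": b"aaaa", "b2": b"bbbb", "b3": b"cccc"}
replica = baseline_transfer(snap1)

# ...the next sync ships only the delta: one changed block, one new block.
snap2 = {"b1": b"aaaa", "b2": b"BBBB", "b3": b"cccc", "b4": b"dddd"}
delta = incremental_transfer(snap1, snap2)
replica.update(delta)

print(sorted(delta))      # ['b2', 'b4']
print(replica == snap2)   # True: replica converged to the new snapshot
```

Because the delta is computed snapshot-to-snapshot, unchanged blocks never cross the network again after the baseline.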
Now, this would otherwise drive up AWS storage costs considerably, but Cloud Volumes ONTAP features data compression, deduplication and thin provisioning capabilities, which can considerably reduce the amount of data you send to the cloud, thus saving you both time and money.
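To see why deduplication and compression cut transfer volume, consider this illustrative sketch (assumed chunking and zlib compression, not ONTAP's actual storage-efficiency engine): identical blocks are sent once, and each unique block is compressed before it goes over the wire.

```python
import hashlib
import zlib

def bytes_to_send(blocks):
    """Estimate transfer size after deduplication (skip repeated
    content) and compression (zlib) of each unique block."""
    seen = set()
    total = 0
    for block in blocks:
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            continue  # deduplicated: identical block already sent
        seen.add(digest)
        total += len(zlib.compress(block))
    return total

# Four 4 KiB blocks, three of them identical.
blocks = [b"A" * 4096, b"A" * 4096, b"B" * 4096, b"A" * 4096]
raw_size = sum(len(b) for b in blocks)
print(raw_size)                        # 16384 bytes before optimization
print(bytes_to_send(blocks) < raw_size)  # True: far fewer bytes actually sent
```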
When designing a hybrid cloud redundancy solution, OnCommand Cloud Manager makes the cloud onboarding process as simple as possible. All it takes is a quick drag-and-drop between on-premises and cloud-based environments: Cloud Manager discovers the selected on-premises environment and automatically replicates the chosen data to the new cloud-based environment, making your migration quick and easy.
SnapMirror is used here as well, and it will keep the source and target environments synchronized as per your defined schedules. The same mechanism can also be used for AWS disaster recovery, backup creation or any other secondary copy of your dataset that may be useful in the cloud.
Failing over operations can otherwise require a slow, downtime-intensive process of rebuilding environments from cloud backup copies. With Cloud Volumes ONTAP HA, data is available to both nodes of the HA cluster, providing you with data high availability in AWS.
That means you avoid downtime in the event of an unforeseen disruption: data is synchronously written to both nodes of the Cloud Volumes ONTAP cluster, so it is always available. This keeps your RPO at zero and your operations running without pause when things go wrong.
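The RPO=0 property comes from acknowledging a write only after both nodes hold it. Here is a minimal toy model of that idea (an illustration of synchronous mirroring in general, not the ONTAP HA implementation):

```python
class HANode:
    """One node of the HA pair; holds its own copy of every block."""
    def __init__(self):
        self.blocks = {}

class SyncMirroredPair:
    """Two-node pair with synchronous writes: a write is acknowledged
    only after BOTH nodes have committed it, so the surviving node is
    always complete after a failure (RPO = 0)."""
    def __init__(self):
        self.node_a = HANode()
        self.node_b = HANode()

    def write(self, block_id, data):
        # Commit to both nodes before acknowledging the client.
        self.node_a.blocks[block_id] = data
        self.node_b.blocks[block_id] = data
        return "ack"

    def failover_read(self, block_id):
        # Even if node_a is lost, node_b has every acknowledged write.
        return self.node_b.blocks[block_id]

pair = SyncMirroredPair()
pair.write("vol1/db", b"committed-transaction")
print(pair.failover_read("vol1/db"))             # b'committed-transaction'
print(pair.node_a.blocks == pair.node_b.blocks)  # True: no acknowledged write can be lost
```

The design trade-off is write latency: every write waits for the second node, which is why the two nodes sit in the same or nearby Availability Zones.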
With so many complicating factors involved with maintaining AWS redundancy and high availability, you need solutions that can simplify the process of protecting your data.
Redundancy is just one part of the larger picture. Disaster recovery is a prime example: the health of any company depends on having a solid disaster recovery strategy.
Replicating and synchronizing your data is a critical part of these processes. AWS doesn't do it all: keeping your operations backed up with dependable secondary copies means finding a way to easily replicate your data to the cloud and to make sure it's highly available in failure scenarios. Cloud Volumes ONTAP gives you both.