Data migration is the process of copying or moving data from one device or system to another. Migration must be designed in such a way that it doesn’t interrupt or disable business operations. After the data is transferred, the business process transitions to the new device or system.
For any company, data migration is essential to making better business decisions. It allows organizations to expand their data storage, data management and analytics capabilities by introducing new systems and processes. Data migration is highly strategic and a large part of digital transformation efforts: according to an IDC study, 60% of the workload in large enterprise projects can be attributed to data migration.
An effective data migration strategy aims to move data without disrupting business operations, preserve data integrity throughout the transfer, and complete the project on schedule and within budget.
There are three main types of data migration: “big bang,” in which systems are migrated in one go; “trickle,” in which data is transferred gradually; and synchronization, in which the source and destination systems continue to run side by side.
In a big bang migration, a complete data transfer is performed within a limited time window. As soon as the data is transferred to the new database or storage system, the old system goes down. This process is a one-time event and, if properly planned, can be completed relatively quickly.
Big bang migration has an obvious advantage—it is completed in the shortest possible time—but it carries significant risk.
When you migrate from one system to another, critical business functions are inevitably interrupted. Few companies can operate for long while core systems are down, so the migration happens under tremendous pressure, with very little room for error.
To succeed in a big bang migration, companies should perform at least one pilot run of the migration process and develop a detailed emergency response plan before the live cutover.
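To make the cutover concrete, here is a minimal Python sketch of a big bang transfer, using sqlite3 as a stand-in for the real source and target databases; the `orders` table and its columns are hypothetical. It copies everything in one pass and verifies row counts before the old system is decommissioned.

```python
import sqlite3

BATCH = 1_000  # rows copied per round trip

def big_bang_migrate(source_db: str, target_db: str) -> None:
    src = sqlite3.connect(source_db)
    dst = sqlite3.connect(target_db)
    try:
        # Hypothetical schema standing in for the real target system.
        dst.execute("CREATE TABLE IF NOT EXISTS orders "
                    "(id INTEGER PRIMARY KEY, payload TEXT)")
        cur = src.execute("SELECT id, payload FROM orders ORDER BY id")
        # One-shot copy: stream every row from source to target.
        while rows := cur.fetchmany(BATCH):
            dst.executemany("INSERT INTO orders (id, payload) VALUES (?, ?)", rows)
        dst.commit()
        # Verify before decommissioning the old system.
        src_count = src.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        dst_count = dst.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        assert src_count == dst_count, f"row count mismatch: {src_count} != {dst_count}"
    finally:
        src.close()
        dst.close()
```

A real cutover would freeze writes to the source before the copy starts; the count check here is the minimum verification you would want before the old system goes down.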
A trickle migration is completed in smaller steps over a longer period of time. The old and new systems run in parallel during the data transfer, eliminating downtime and reducing the risk of the big bang approach.
However, there are still risks: keeping track of which data has already been migrated is complex, and users have to switch between the two systems, which may lead to inconsistencies.
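One common way to keep that tracking manageable is a persisted high-water mark. The sketch below assumes each source row carries an `updated_at` timestamp; the `migration_state` table, the `orders` table and the 500-row batch size are illustrative choices, not a prescribed design.

```python
import sqlite3

def trickle_migrate(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
    dst.execute("CREATE TABLE IF NOT EXISTS orders "
                "(id INTEGER PRIMARY KEY, payload TEXT, updated_at TEXT)")
    dst.execute("CREATE TABLE IF NOT EXISTS migration_state "
                "(name TEXT PRIMARY KEY, watermark TEXT)")
    row = dst.execute(
        "SELECT watermark FROM migration_state WHERE name = 'orders'"
    ).fetchone()
    watermark = row[0] if row else "1970-01-01T00:00:00"

    # Move only rows changed since the last run, oldest first.
    rows = src.execute(
        "SELECT id, payload, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at LIMIT 500",
        (watermark,),
    ).fetchall()
    for rid, payload, updated_at in rows:
        # Upsert so re-running a batch never duplicates data.
        dst.execute(
            "INSERT INTO orders (id, payload, updated_at) VALUES (?, ?, ?) "
            "ON CONFLICT(id) DO UPDATE SET payload = excluded.payload, "
            "updated_at = excluded.updated_at",
            (rid, payload, updated_at),
        )
        watermark = updated_at
    # Persist progress so the next run picks up where this one stopped.
    dst.execute(
        "INSERT INTO migration_state (name, watermark) VALUES ('orders', ?) "
        "ON CONFLICT(name) DO UPDATE SET watermark = excluded.watermark",
        (watermark,),
    )
    dst.commit()
```

Each run of `trickle_migrate` moves one small batch, so the job can be scheduled repeatedly while both systems stay online.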
In many modern migration projects, organizations use a synchronization architecture to ensure source and target systems contain the same data. This can mitigate most of the risks of big bang or trickle migration.
There are two types of synchronization: one-way, in which changes flow only from the source system to the target, and two-way, in which changes made on either system are replicated to the other.
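As an illustration, here is a minimal one-way synchronization pass in Python: it compares a per-row hash on both sides and pushes only the rows that differ. Two-way sync would run the same comparison in both directions plus a conflict-resolution rule. The `orders` schema is an assumption for the example.

```python
import hashlib
import sqlite3

def row_hashes(conn: sqlite3.Connection) -> dict[int, str]:
    # Fingerprint each row so changed rows can be found cheaply.
    return {
        rid: hashlib.sha256(payload.encode()).hexdigest()
        for rid, payload in conn.execute("SELECT id, payload FROM orders")
    }

def sync_one_way(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
    src_hashes, dst_hashes = row_hashes(src), row_hashes(dst)
    # Rows missing from the target, or present with different content.
    stale = [rid for rid, h in src_hashes.items() if dst_hashes.get(rid) != h]
    for rid in stale:
        payload = src.execute(
            "SELECT payload FROM orders WHERE id = ?", (rid,)
        ).fetchone()[0]
        dst.execute(
            "INSERT INTO orders (id, payload) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET payload = excluded.payload",
            (rid, payload),
        )
    dst.commit()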
Most data migration projects will include the following steps: assessing source data, designing the migration, building a migration solution, and monitoring the migration once in progress.
Before starting the migration project, it is important to understand what data you are migrating, where it resides, how much of it there is, and what format and quality it is in.
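A simple profiling pass can answer most of these questions up front. The sketch below assumes a SQLite source purely for illustration; it records row counts and per-column null rates for every table, a rough proxy for volume and quality.

```python
import sqlite3

def profile_source(conn: sqlite3.Connection) -> dict:
    report = {}
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        total = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        # PRAGMA table_info returns one row per column; index 1 is the name.
        columns = [r[1] for r in conn.execute(f"PRAGMA table_info({table})")]
        null_rates = {}
        for col in columns:
            nulls = conn.execute(
                f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL"
            ).fetchone()[0]
            null_rates[col] = nulls / total if total else 0.0
        report[table] = {"rows": total, "null_rates": null_rates}
    return report
```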
In the design phase, you choose the type of migration (see the migration types above) and define the exact migration process. Think about how data will be pulled from the source and transferred to the target system, and define timelines, risks and dependencies. Clearly document your migration plan.
It is important to consider your data security plan as part of migration design. Identify data that needs to be protected, and ensure you adhere to security policies throughout the migration process.
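For example, protected fields can be masked in flight so the raw values never land in the target. The sketch below assumes the design phase flagged `email` and `ssn` as protected columns; `SECRET` stands in for a key managed by your secrets store, and all names here are hypothetical.

```python
import hashlib
import hmac

SECRET = b"rotate-me"          # assumption: fetched from a secrets store
PROTECTED = {"email", "ssn"}   # columns flagged in the security plan

def mask_record(record: dict) -> dict:
    """Return a copy of the record with protected fields pseudonymized."""
    masked = dict(record)
    for field in PROTECTED & record.keys():
        # Keyed hash: stable for joins, but not reversible without SECRET.
        digest = hmac.new(SECRET, str(record[field]).encode(), hashlib.sha256)
        masked[field] = digest.hexdigest()[:16]
    return masked

print(mask_record({"id": 1, "email": "a@example.com", "plan": "pro"}))
```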
Because migration is typically a one-time project that is critical to the business, it is important to implement it correctly. A common strategy is to subdivide the data, create a technical process to transfer one category at a time, and then test it. Building the migration solution incrementally can dramatically reduce risk and help you catch problems early on.
To reduce risk, use the trickle approach: deploy the migration solution for each category of data as it is completed, rather than deploying the entire solution at the end.
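A per-category pipeline with a verification step after each transfer is one way to structure this. In the sketch below, the `customers`, `orders` and `invoices` tables and the row-count check are hypothetical; a real project would add richer validation per category.

```python
import sqlite3

CATEGORIES = ["customers", "orders", "invoices"]  # hypothetical tables

def migrate_category(src, dst, table: str) -> None:
    rows = src.execute(f"SELECT * FROM {table}").fetchall()
    if rows:
        placeholders = ", ".join("?" * len(rows[0]))
        dst.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    dst.commit()

def verify_category(src, dst, table: str) -> None:
    s = src.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    d = dst.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    if s != d:
        raise RuntimeError(f"{table}: {d} rows migrated, expected {s}")

def run(src: sqlite3.Connection, dst: sqlite3.Connection) -> None:
    for table in CATEGORIES:
        migrate_category(src, dst, table)
        verify_category(src, dst, table)   # fail fast, per category
        print(f"{table}: ok")
```

Because each category is transferred and tested independently, a failure in one stops the run before it can affect the next.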
Both source and destination environments must be carefully monitored before, during and after migration. Key considerations include the performance and capacity of both environments, the progress of the transfer itself, and the integrity of the data arriving in the target system.
A solid monitoring strategy will ensure you can catch any migration problems early and remediate them before they cause damage to critical business processes.
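As a starting point, even a simple polling loop that logs how far the target lags the source will surface a stalled or diverging transfer quickly. The table name, polling interval and lag logic below are illustrative, not a complete monitoring solution.

```python
import logging
import sqlite3
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def watch(src: sqlite3.Connection, dst: sqlite3.Connection,
          table: str = "orders", interval_s: int = 30) -> None:
    while True:
        s = src.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        d = dst.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        lag = s - d
        logging.info("%s: source=%d target=%d lag=%d", table, s, d, lag)
        if lag < 0:
            # Target ahead of source usually means writes hit the wrong side.
            logging.warning("%s: target ahead of source, investigate", table)
        if lag == 0:
            break  # fully caught up
        time.sleep(interval_s)
```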
NetApp Cloud Data Sense automatically discovers, maps, and classifies your data wherever it may be. Data availability, ownership and quality are crucial for business efficiency and cost optimization. With Cloud Data Sense, you can automatically label and act on information stored in files and database entries, on-premises and in the cloud. Make smart data decisions and automate your data optimization and compliance plans.