In the first post in this three-part series about microservices and legacy applications, we compared a legacy monolith with microservices. In the second post, we outlined the path of an actual migration from a monolith to a microservices platform. This third post will be helpful if, after reading the previous articles, you don’t feel comfortable making such drastic changes to a legacy application (yet), but you still want to use cloud resources to improve performance and scalability, and to reduce costs.
Usually, a legacy system not designed for the cloud consists of a single application with a large codebase plus several supporting services, such as databases, caching, storage, and a reverse proxy. Today, we call this kind of system a monolith.
Cloud computing itself can trace its roots back to one of the key shortcomings of monolithic applications: scalability. Consider Amazon’s situation: Black Friday is its most demanding day of the year, and its infrastructure needed to be able to handle it. Throughout the rest of the year, however, there were huge amounts of unused resources. At some point, Amazon realized it wasn’t the only company with this particular challenge, and that it could sell these idle resources to other companies. This was the beginning of AWS, and of cloud computing as we know it.
The migration of each part of the monolith should be carefully analyzed. First, analyze the monolith itself. Since this is the most customized part of the environment, it’s also the most prone to issues during migration. The monolith won’t be changed from its original design, so it doesn’t necessarily need to be in scope for testing, though for other reasons, such as code quality and regression, testing it may still be a good idea.
The most important part of the cloud migration is to identify each component that has to move and create a migration plan for it. In a typical monolith, these components include the monolith’s own codebase, its databases, its data storage, its reverse proxy, and supporting services such as message brokers.
Also, it’s essential to track application behavior by using a monitoring tool, such as Cloud Insights, that can identify the application’s resource utilization and peak hours.
Before migration, these metrics will help you estimate costs and right-size the resources you provision in the cloud. They will also shape the migration plan itself: knowing the usage patterns of your customers and machines helps you predict the best time, and the most efficient way, to move from on-premises to the cloud.
After the migration, these metrics will enable you to manage service level objectives consistently, determine whether the migration was a success, and control costs on an ongoing basis. Right-sizing is not a one-time task performed before migration; it should continue throughout the application’s life cycle.
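As an illustration, once the workload is running in AWS you can pull utilization metrics programmatically and compare average use against what you’ve provisioned. Here’s a minimal sketch using boto3 and CloudWatch; the instance ID is a placeholder:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

# Two weeks of hourly CPU statistics for the migrated monolith instance.
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder ID
    StartTime=now - timedelta(days=14),
    EndTime=now,
    Period=3600,
    Statistics=["Average", "Maximum"],
)

datapoints = resp["Datapoints"]
if datapoints:
    avg = sum(d["Average"] for d in datapoints) / len(datapoints)
    peak = max(d["Maximum"] for d in datapoints)
    print(f"14-day average CPU: {avg:.1f}%, peak: {peak:.1f}%")
```

If the two-week average sits far below the provisioned capacity, a smaller instance size, perhaps combined with scheduled scaling for known peaks, may be cheaper than a straight like-for-like replacement.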
There are many types of services that must be migrated along with the monolith. Databases, for example, should be both logically and physically near the monolith due to network latency. The database is often one of the most critical software components of an application, holding all of the data produced by an application’s users. Because of that, databases should be updated regularly and have well-defined and tested backup and disaster recovery policies.
Databases are a vital service for the majority of enterprise applications, and their migration must be carefully considered. Since databases have persistent volumes, they are usually difficult to scale or distribute. Moreover, changes to their physical location are slow, because all of the data must be transferred to another place. Backup/restore and data access policies are also essential and must be reviewed before migrating to the cloud.
With databases, you can take advantage of the cloud without making any changes to the application by simply creating a virtual machine instance and running the database on it, just as you would on any bare-metal host on-premises. Alternatively, cloud providers offer managed database services that ease administration by handling backup/restore, scalability, and availability. It’s possible to reduce costs here, since these services can be scaled up and down more easily. The drawback is that you are limited to the database services the cloud provider makes available. If your application uses standard SQL, you can likely move it to one of these services seamlessly; in other cases, some redevelopment will be required.
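In the best case, pointing the monolith at a managed database is a configuration change rather than a code change. A minimal sketch, assuming a PostgreSQL monolith that reads its connection settings from environment variables (the variable names here are illustrative):

```python
import os
import psycopg2  # assumes a PostgreSQL monolith; adjust for your engine

# Before: DB_HOST pointed at the on-premises server, e.g. "db01.corp.local".
# After:  it points at the managed service endpoint, e.g. an RDS hostname.
# The application code itself is unchanged.
conn = psycopg2.connect(
    host=os.environ["DB_HOST"],
    port=int(os.environ.get("DB_PORT", "5432")),
    dbname=os.environ["DB_NAME"],
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")  # sanity check against the new endpoint
    print(cur.fetchone()[0])
```

Swapping DB_HOST from the on-premises server to the managed endpoint is the entire migration from the application’s point of view; the heavy lifting is in moving the data itself.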
You can also take advantage of the cloud for plain data storage, which offers easily scalable and virtually unlimited capacity. Since the intention is to keep the original application unchanged, the storage needs to be exposed as a volume that can be network mapped.
To create network-mapped storage beyond the virtual machine’s own disks, you can use services such as Amazon EBS, Azure Files and Azure Disks, or Google Persistent Disk. However, these services do not guarantee consistent disk performance. You may find that you need to provision well in excess of your required capacity in order to achieve acceptable performance for high-throughput applications, which increases costs significantly. NetApp Cloud Volumes ONTAP, on the other hand, is easy to scale and provides a higher, more consistent level of performance whilst keeping costs minimized.
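Because the volume appears to the operating system as an ordinary mounted path, the legacy code doesn’t need to know its storage has moved. A sketch of what this looks like from the application side; /mnt/shared is a hypothetical mount point for the cloud volume (for example, an NFS export from Cloud Volumes ONTAP):

```python
from pathlib import Path

# Hypothetical mount point: operations mounted the cloud volume at
# /mnt/shared (e.g. via NFS or SMB). The legacy code keeps reading and
# writing ordinary file paths; only the storage behind that path has
# moved to the cloud.
REPORTS_DIR = Path("/mnt/shared/reports")

def save_report(name: str, content: bytes) -> Path:
    REPORTS_DIR.mkdir(parents=True, exist_ok=True)
    target = REPORTS_DIR / name
    target.write_bytes(content)
    return target

print(save_report("daily.csv", b"total,42\n"))
```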
A reverse proxy can also help you benefit from a cloud migration. Web applications use reverse proxies to direct traffic and, sometimes, to terminate SSL connections. It’s possible to create a virtual machine and replicate the on-premises solution. However, costs and management can be simplified by using managed services such as AWS Application Load Balancer, Azure Application Gateway, or the load balancers in Google Cloud.
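For instance, replacing an on-premises proxy with an Application Load Balancer in front of the lifted-and-shifted monolith takes only a handful of API calls. A minimal sketch using boto3; the subnet, VPC, and instance IDs are placeholders:

```python
import boto3

# Hypothetical IDs: substitute your own subnets, VPC, and monolith instance.
elbv2 = boto3.client("elbv2", region_name="us-east-1")

lb = elbv2.create_load_balancer(
    Name="monolith-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    Scheme="internet-facing",
    Type="application",
)["LoadBalancers"][0]

tg = elbv2.create_target_group(
    Name="monolith-web",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-cccc3333",
    TargetType="instance",
)["TargetGroups"][0]

# Send all traffic to the lifted-and-shifted monolith instance.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroupArn"],
    Targets=[{"Id": "i-0123456789abcdef0"}],
)
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancerArn"],
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
)
print(lb["DNSName"])  # point your DNS here instead of at the old proxy
```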
Generally, most services can be migrated like-for-like onto simple virtual machine instances in the cloud. Some, like databases, storage, and reverse proxies, are easy to migrate without making any changes to the monolith’s code. Others, such as message brokers, may require changes to the application’s code in order to benefit from the cloud. Replicating an individual virtual machine to the cloud can be very simple, as you can directly upload the virtual disk to your cloud provider (AWS, Azure, and Google Cloud all support this), though at scale this can become cumbersome. For greater ease in migrating multiple virtual machines and services, Cloud Volumes ONTAP provides a consistent data management layer with your on-premises NetApp storage to enable seamless data movement at scale.
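On AWS, for example, the upload-a-disk path is the VM Import/Export service. A rough sketch of the flow with boto3; the bucket and file names are placeholders, and the prerequisite vmimport IAM role is assumed to be configured already:

```python
import boto3

# Hypothetical bucket/key: the exported virtual disk must first land in S3.
s3 = boto3.client("s3")
s3.upload_file("monolith.vmdk", "my-migration-bucket", "images/monolith.vmdk")

# Ask EC2's VM Import/Export service to turn the disk into an AMI.
ec2 = boto3.client("ec2", region_name="us-east-1")
task = ec2.import_image(
    Description="Lifted-and-shifted monolith VM",
    DiskContainers=[{
        "Description": "monolith root disk",
        "Format": "vmdk",
        "UserBucket": {"S3Bucket": "my-migration-bucket", "S3Key": "images/monolith.vmdk"},
    }],
)
print(task["ImportTaskId"])  # poll with describe_import_image_tasks until complete
```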
You may be inclined to use a hybrid solution, with some parts of the monolith in the cloud and others in the on-premises data center. This approach is valid, but you should be aware of data egress charges and network latency. Cloud providers generally don’t bill for uploaded data, but it can get expensive if your application is designed to regularly pull large amounts of data from the cloud back to your data center.
Regarding latency, some services may have been designed assuming near-zero latency, and even 30ms of additional network latency can result in significant performance degradation. For example, imagine a serial batch update in which each record takes 30ms on average to process, and each record waits for the previous one to finish before starting. For 10,000 records, the batch may take 5 minutes locally and 10 minutes in the cloud, regardless of the performance of the resources you’ve provisioned.
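The arithmetic is worth spelling out, because it is independent of how powerful the provisioned resources are:

```python
per_record_s = 0.030     # average processing time per record (from the example)
added_latency_s = 0.030  # assumed extra round trip to the cloud per record
records = 10_000

local_min = records * per_record_s / 60
cloud_min = records * (per_record_s + added_latency_s) / 60
print(f"on-premises: {local_min:.0f} min, hybrid: {cloud_min:.0f} min")
# on-premises: 5 min, hybrid: 10 min
```

Serial, chatty workloads like this are the ones to keep co-located; batching or parallelizing the requests changes the equation entirely.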
And whereas your local network is entirely within your control, the internet is not. Network latency over the internet and long distances is not likely to be as stable as your local network, leading to unpredictable performance.
So far, we’ve discussed how to migrate a legacy app without changing anything in the core application, making it possible to take advantage of some of the cloud’s capabilities and to reduce costs without touching the legacy code. You will miss out on many of the cloud’s advantages, but you can perform the migration quickly in the first instance and then consider options for modernization from there.
Many legacy applications are continually evolving and adding new features, so whilst it’s reasonable to keep the legacy code untouched, what about those new features? If you own the codebase, then once you’ve migrated your monolith to the cloud you should consider leveraging cloud capabilities to improve your application’s productivity, cost, and performance.
Every improvement can lead to a better system that will be even easier to improve or scale the next time around. Take, for example, an email-sending feature embedded in the monolith. The next time that code has to change, why not extract it into a separate service built on a serverless architecture? Initially, this can be as simple as separating out the service and a few small configuration steps, but it can result in reduced costs and increased productivity and flexibility.
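To make that concrete, here’s a minimal sketch of what the extracted feature might look like as an AWS Lambda handler that sends mail through SES. Everything here is illustrative: the function name, the event shape, and the sender address (which would have to be a verified SES identity):

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# Hypothetical Lambda handler: the monolith stops sending email itself
# and instead invokes this function (or publishes an event that triggers it).
# "noreply@example.com" stands in for a verified SES identity.
def handler(event, context):
    ses.send_email(
        Source="noreply@example.com",
        Destination={"ToAddresses": [event["to"]]},
        Message={
            "Subject": {"Data": event["subject"]},
            "Body": {"Text": {"Data": event["body"]}},
        },
    )
    return {"status": "sent", "to": event["to"]}
```

The monolith then delegates email to this function instead of talking to an SMTP server itself, and the email feature scales and bills independently of the rest of the application.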
Not every change stands on its own. Some simply pave the way for new features, building up the solutions and experience that make more complex capabilities easier to deliver. You don’t need to refactor everything at the same time, or refactor for the sake of refactoring; just improve the system when you can and you’ll be good to go.
Moving legacy code to the cloud can be a difficult task. People often try to refactor the legacy code to adhere to a cloud-native philosophy first, but this is the wrong approach. Agile development teaches us that small improvements lead to significant changes over time. First, migrate the legacy application as-is, taking advantage of cloud-provided services. After that, when adding new features, remember that the code now lives in the cloud, and use its qualities to your benefit.