
Refactor Monoliths Into Microservices in the Cloud

December 16, 2019

Topics: Cloud Insights | 8 minute read

In the first article in this three-part series about microservices, we compared monolithic and microservices-based architectures. In this post, we’ll delve deeper into what’s involved in making the move. As you may know, migrating from a monolith to a microservice architecture is far from easy, especially since legacy systems aren’t always as well documented as we would like. Read on to explore what you should know before diving into microservices. 

What to Consider

Migrating from a monolith to microservices alters your application significantly. After you refactor, you need to guarantee that everything that worked before still works. The only way to do this is to test, test, and test again. For this reason, it's crucial to have a large and comprehensive suite of test scenarios at your disposal. Integration tests are preferred in this case because you are testing new services on your infrastructure, but unit tests, and even manual tests, will also be helpful.
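As a rough illustration, here is a minimal integration test written with pytest and the requests library. The service URL and the /orders endpoint are hypothetical stand-ins for whatever API your refactored service exposes; the point is simply to verify that behavior the monolith guaranteed still holds after the split.

```python
# Minimal integration-test sketch (pytest + requests).
# The ORDERS_SERVICE_URL variable and the /orders endpoint are
# hypothetical examples, not part of any specific application.
import os

import requests

BASE_URL = os.environ.get("ORDERS_SERVICE_URL", "http://localhost:8080")


def test_create_and_fetch_order():
    # Create an order through the new service's public API.
    created = requests.post(
        f"{BASE_URL}/orders",
        json={"item": "book", "quantity": 1},
        timeout=5,
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    # The behavior the monolith guaranteed must still hold:
    # the order we just created can be read back.
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["item"] == "book"
```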

Also keep in mind that the "micro" in microservice doesn't necessarily mean tiny; it means smaller than the current state of the application. You may have come across a variety of different definitions for microservice, but for the purposes of this post, we'll characterize a microservice as 'designed to address some cohesive domain rules' and 'restricted in scope'. Keep those attributes in mind: a service must make sense as a separately deployable unit. A straightforward way to apply this philosophy is with domain-driven design (DDD).

Understanding What You'll Need

A microservice architecture is a design pattern that distributes all the application logic across several services. As a result, the platform running those services has to be more sophisticated, because it must handle how the services are deployed and how they communicate. In a monolithic system, all parts run in the same process; consequently, they share memory and communicate through function calls. With services, there are myriad ways for the parts to communicate.

Stateless and Stateful Services

There are two clear types of services: stateless and stateful. Stateless services are applications that don’t rely on storage persistence and won’t lose any vital information if they are stopped or crash. For this reason, these services are much easier to handle in a distributed way. They can be allocated anywhere and are easily scalable, but do need a load balancer or service bus to distribute the load across every instance.
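To make the distinction concrete, a stateless service might look like the following sketch. Flask is just one convenient choice here, and the /price endpoint is invented for illustration: everything the service needs arrives with the request, so any instance behind the load balancer can answer it.

```python
# Minimal sketch of a stateless service (Flask is one convenient choice;
# the /price endpoint and its payload are illustrative).
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/price", methods=["POST"])
def price():
    # No session, no local files, no in-memory state survives between
    # requests, so this instance can be killed, replaced, or scaled out
    # at any time without losing anything.
    payload = request.get_json()
    total = payload["quantity"] * payload["unit_price"]
    return jsonify({"total": total})


if __name__ == "__main__":
    app.run(port=5000)
```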

Stateful services, on the other hand, hold data. Databases are the most well-known type of stateful services, but there are others, such as messaging services, authentication services, or user directories. These services must have policies to mitigate data loss in the event of a crash. Because data needs to be consistent, stateful services can be difficult to scale in more than one instance, and merely using a load balancer can result in more problems than solutions.

One of the first things an architect needs to do when designing a microservice migration is identify the type of each platform service. The architect should reduce the number of stateful services to a bare minimum to create a more scalable platform.

Deployment Procedures

Another critical difference between monoliths and microservices is the deployment procedure. Since a monolith is delivered as one artifact, it is easy to deploy, even manually. In a microservices architecture, however, there are many artifacts to deploy and manage. For that reason, microservice deployment should be automated. Microservices need a CI/CD environment in order to be viable, so if you don't have one already, hold off until you've created one.

Service Discovery

Monoliths usually have three parts: a frontend, a backend, and a database. It's easy for the frontend to know the backend's location on the infrastructure, and for the backend to know the database's location: each connection needs just one address that an operator can type in manually. In a microservice-based architecture, on the other hand, hundreds of services may scale up and down every second, and new microservices can be deployed continuously without issues.

Microservices need to discover other services automatically and without manual intervention. There are several service discovery applications that simplify how one service finds another; Consul and Eureka are two of the most popular. These tools work as central points of lookup: each service asks them for the location of the others. Another approach is to use a service mesh like Istio to discover and secure service communication. Unlike a central registry, a service mesh creates a network layer around each service, so the service does not need to know anything about a central location.
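To make the registry idea concrete, here is a rough sketch of registering and looking up a service through a local Consul agent's HTTP API. The agent address, service names, and ports are assumptions for the example, not a prescribed setup.

```python
# Sketch of service discovery against a local Consul agent's HTTP API.
# Assumes the agent listens on its default address (http://localhost:8500);
# service names, addresses, and ports are illustrative.
import requests

CONSUL = "http://localhost:8500"


def register(name: str, address: str, port: int) -> None:
    # Each instance announces itself to the local agent on startup.
    requests.put(
        f"{CONSUL}/v1/agent/service/register",
        json={"Name": name, "Address": address, "Port": port},
        timeout=5,
    ).raise_for_status()


def locate(name: str) -> str:
    # Ask Consul for healthy instances of another service instead of
    # hard-coding its address. (A real client would handle the case
    # where no healthy instance is returned.)
    entries = requests.get(
        f"{CONSUL}/v1/health/service/{name}",
        params={"passing": "true"},
        timeout=5,
    ).json()
    service = entries[0]["Service"]
    return f"http://{service['Address']}:{service['Port']}"


register("billing", "10.0.0.12", 8080)
print(locate("billing"))
```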

Service Communication: REST and Messages

Monoliths communicate with their parts by using in-memory system calls because all parts are running the same process. Microservices are distributed by nature, so they need a communication protocol to ask for other service resources. There are two clear ways that services communicate with each other: REST and messages.

REST communication is the most common and relies on HTTP verbs to transfer data, but the calling service needs to know the exact location of the service it calls. Because of this, REST is generally used for non-transactional communication: if the called service is down, the request and its data may be lost.
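A bare-bones REST call between two services might look like the sketch below. The inventory-service hostname and endpoint are invented; the point is that the caller must know the callee's address and must itself deal with the case where the callee is unavailable.

```python
# Minimal sketch of synchronous REST communication between two services.
# The inventory-service hostname and /stock/reserve path are hypothetical.
import requests


def reserve_stock(item_id: str, quantity: int) -> bool:
    try:
        # The caller must know exactly where the other service lives.
        resp = requests.post(
            "http://inventory-service:8080/stock/reserve",
            json={"item_id": item_id, "quantity": quantity},
            timeout=2,
        )
        return resp.status_code == 200
    except requests.RequestException:
        # If the other service is down, nothing buffers or retries the
        # call for us: the request (and its data) is lost unless the
        # caller handles it explicitly.
        return False
```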

Messages, on the other hand, are not as ubiquitous as REST, and there are many specialized messaging implementations. Messaging also requires a third-party broker that centralizes all incoming and outgoing messages.

That broker buys you useful guarantees: each consumer instance takes a message it can process, while the rest stay queued for other instances. A message producer also does not need to know the location of each consumer: it just sends a message, and someone, somewhere, will consume it. The broker is responsible for safeguarding messages until they are delivered to their destination. These features make messaging a good option for transactional communication.
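As a sketch of that flow, the snippet below publishes and consumes messages through a RabbitMQ broker using the pika client. The broker address and the "orders" queue are example assumptions; in practice the producer and consumer would live in different services.

```python
# Producer/consumer sketch using a RabbitMQ broker via the pika client.
# Broker address and the "orders" queue are illustrative assumptions;
# producer and consumer are shown together only for brevity.
import json

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="orders", durable=True)  # broker keeps messages until delivered

# Producer side: it does not know (or care) which consumer instance will
# pick the message up, only that the broker will hold it until one does.
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=json.dumps({"order_id": 42, "item": "book"}).encode("utf-8"),
    properties=pika.BasicProperties(delivery_mode=2),  # persist to disk
)


# Consumer side: each instance takes the next message it can process;
# the rest stay queued for other instances.
def handle_order(ch, method, properties, body):
    print("processing", json.loads(body))
    ch.basic_ack(delivery_tag=method.delivery_tag)  # only ack after success


channel.basic_consume(queue="orders", on_message_callback=handle_order)
channel.start_consuming()
```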

Monitoring and Logging

When a request is made to a monolith, you can trace all events of that request by examining the log output. If there’s a problem, it’s generally pretty easy to follow the events and discover what caused it. With microservices, on the other hand, there is a whole new level of complexity. Since each service can have its own logging output, an operator would need to read through each service output and manually correlate events to understand the sequence.

Centralized logging services allow all services to write output to a single place. Generally, these are text-indexing tools that allow near real-time querying and trend detection. Every request should carry a unique identifier that is passed to all the services involved and attached as metadata to each log entry for that request. That way, a query on the identifier returns an understandable event sequence for the request.
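One lightweight way to do this, sketched below with Flask and Python's standard logging, is to read an incoming X-Request-ID header (or generate one at the edge), log it with every message, and forward it on each downstream call. The header name and service URLs here are conventions assumed for the example, not requirements.

```python
# Sketch of propagating a per-request identifier so a centralized logging
# tool can reconstruct the whole event sequence. The X-Request-ID header
# and the billing-service URL are assumptions for illustration.
import logging
import uuid

import requests
from flask import Flask, g, request

app = Flask(__name__)
logging.basicConfig(format="%(asctime)s %(message)s", level=logging.INFO)
log = logging.getLogger("orders")


@app.before_request
def assign_request_id():
    # Reuse the incoming identifier, or create one at the edge.
    g.request_id = request.headers.get("X-Request-ID", str(uuid.uuid4()))


@app.route("/orders/<order_id>")
def get_order(order_id):
    log.info("request_id=%s fetching order %s", g.request_id, order_id)
    # Pass the same identifier to every downstream service we call,
    # so its log lines carry the same metadata.
    requests.get(
        f"http://billing-service:8080/invoices/{order_id}",
        headers={"X-Request-ID": g.request_id},
        timeout=2,
    )
    return {"id": order_id}
```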

Handling Exceptions

Finally, there are many moving parts when working with microservices because each service relies on networking and other services to be fully functional. With so many components, a successful microservice architecture should be designed to handle failure.

An unresponsive service can lead to a cascading failure, where other services also slow down due to resource exhaustion. A circuit-breaker pattern avoids this scenario by adding a proxy in front of the real endpoint: if the proxied service starts returning too many errors, the circuit breaker immediately replies with an error to all subsequent requests instead of forwarding them. After some time, the proxy lets a few test requests through to check whether the service has recovered. Most service-mesh products have circuit breakers built in.
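To show the idea rather than a production implementation, here is a toy circuit breaker wrapped around an HTTP call. The failure threshold, cool-down period, and protected endpoint are arbitrary example values; in a real deployment this usually comes from a library or the service mesh.

```python
# Toy circuit-breaker sketch around an HTTP call. Thresholds, timing, and
# the protected endpoint are illustrative assumptions.
import time

import requests


class CircuitBreaker:
    def __init__(self, max_failures=5, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, url):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                # Circuit is open: fail fast instead of piling more
                # requests onto a service that is already struggling.
                raise RuntimeError("circuit open")
            # Cool-down elapsed: allow a test request through (half-open).
            self.opened_at = None
        try:
            resp = requests.get(url, timeout=2)
            resp.raise_for_status()
        except requests.RequestException:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # healthy response resets the failure count
        return resp


breaker = CircuitBreaker()
# breaker.call("http://billing-service:8080/health")
```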

Make a Plan

After considering all the elements you need to create a microservice architecture, it’s time to make a plan to start the migration. Keep in mind that you don’t need to migrate everything at once. Select the most critical parts, prepare the infrastructure, and start dividing the monolith one step at a time. Each new step will improve the architecture and facilitate what comes next. Don’t be afraid to make mistakes, but do go slowly, because it will be cheaper to repair them if you catch them early.

In terms of services, try to identify the most heavily used domain resources and separate them first. Doing so lets you enable scaling and a better development cycle right away. Moreover, try to scope each deliverable so it can be started and finished within a single sprint; that helps management follow the project's evolution.

Also, remember that your first service already has an obvious scope: the authentication service. Without it, other services cannot verify whether the user or application trying to access a resource has the necessary permissions, and it will be much harder to trace requests between services.
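As one possible sketch, a downstream service could validate tokens issued by the authentication service like this, using the PyJWT library with a shared HS256 secret. The secret, claim layout, and permission names are assumptions for illustration; real setups often use asymmetric keys or token introspection instead.

```python
# Sketch of how a downstream service might verify a token issued by the
# central authentication service. The shared secret and the "permissions"
# claim layout are hypothetical.
import jwt  # pip install PyJWT

SHARED_SECRET = "replace-me"  # hypothetical key shared with the auth service


def caller_can(token: str, required_permission: str) -> bool:
    try:
        claims = jwt.decode(token, SHARED_SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False  # unauthenticated or tampered-with caller
    # The permissions claim is an example; the auth service defines it.
    return required_permission in claims.get("permissions", [])
```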

Moving Monoliths

No one said that microservices are easy, especially if you're talking about a migration from a legacy monolithic application. On the other hand, the benefits are evident: teams can work in parallel without conflicts, and systems can scale without having to replicate a large backend.

A legacy monolith usually carries a good deal of technical debt that needs to be addressed before beginning the migration process. Issues like a lack of testing or documentation, or tight coupling between components, may block the first migration attempts.

Finally, the team planning your migration should adopt an evolutionary model, which provides faster feedback, a better sense of progress, and room to adapt to changes in company schedules. Keep in mind that each step makes the next one easier.

For information on how NetApp can help you with your migration, watch the webinar.

In the next post in this series, we'll discuss the other side of the story: Can you lift and shift your monolithic applications to the cloud and still take advantage of the benefits the cloud has to offer? 

