
Refactoring Applications to Kubernetes in Cloud Migrations

Among the various cloud migration options, the refactoring strategy helps you break down monolithic workloads into multiple container-based microservices to achieve a highly scalable and agile cloud environment. But how does it work when a cloud migration moves workloads to Kubernetes?

In this article, we discuss the benefits and challenges of refactoring applications to Kubernetes in cloud migrations. We also cover the main approaches to refactoring applications, along with the use cases and benefits of each.

Refactoring Applications to Kubernetes-Based Cloud Frameworks

Kubernetes is an incredibly powerful tool for managing and orchestrating containerized workloads. It can help you automate many of the effort-intensive, error-prone tasks associated with managing workloads, such as scaling resources to meet traffic spikes or rolling out new application versions faster. And because it's open source, a thriving community of developers is constantly improving Kubernetes and adding new features and functionality.

Organizations embracing a Kubernetes-based cloud framework typically aim to future-proof their legacy stack or monolith workloads while taking advantage of a container-native ecosystem for enhanced agility and portability.

A typical approach is also to refactor workloads to a Kubernetes orchestrated container infrastructure, which is designed to be highly extensible and customizable to support different use cases. With that in mind, it is important to note that refactoring alone cannot solve inherent architectural flaws. Additionally, without diligent planning and clear objectives, refactoring legacy apps may often lead to higher expenses and reduced application performance.

Benefits of Refactoring Applications to Kubernetes in Cloud Migrations

The merits of refactoring applications to Kubernetes include:

  • Self-healing for workload availability and reliability
    Refactoring applications to Kubernetes helps make them more resilient to failures. To ensure uninterrupted service, containerized workloads are kept available through a number of techniques, including:
    • Replicating pods across multiple nodes
    • Using health checks to identify and recover from failed pods
    • Performing rolling updates to prevent downtime during deployments

Although self-healing capabilities can’t prevent all outages or failures, they can help reduce the impact of those that do occur by replacing, rescheduling, or restarting failed containers. This helps maintain optimum uptime without disrupting user experience.
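The techniques above can be sketched in a single Deployment manifest. This is an illustrative example rather than a production configuration; the workload name, image, port, and probe paths are hypothetical.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend              # hypothetical workload name
spec:
  replicas: 3                     # pods replicated across nodes for availability
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1           # keep most replicas serving during a rollout
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web-frontend
          image: registry.example.com/web-frontend:1.0.0  # hypothetical image
          ports:
            - containerPort: 8080
          livenessProbe:          # health check: restart the container if it fails
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:         # stop routing traffic to pods that aren't ready
            httpGet:
              path: /ready
              port: 8080
            periodSeconds: 5
```

Applying a manifest like this with `kubectl apply -f deployment.yaml` lets Kubernetes reconcile the declared state continuously: failed containers are restarted automatically, and rollouts replace pods incrementally.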

  • Minimal performance overhead
    As container images encapsulate all dependencies of an application runtime, containerized workloads are lean and incur minimal performance overhead compared to VMs.

    Besides the benefits of utilizing containers, refactoring apps to Kubernetes is often much simpler and less time-consuming than migrating to other platforms. Dynamic scheduling, horizontal scaling, automated bin packing, and load balancing are some key features of Kubernetes that enable refactored apps to reduce operational effort while requiring fewer resources than legacy models.

    Additionally, the declarative workflow of Kubernetes eliminates redundant processes, consequently reducing administration bottlenecks for easier cluster management.
  • Enhanced workload portability
    By decoupling application code from infrastructure, Kubernetes simplifies the migration of workloads between on-premises data centers and cloud services. Being platform-agnostic, Kubernetes' declarative configuration model also streamlines the replication of an application's desired state across different environments.

    As the orchestration workflow is the same regardless of the programming language used for the workload’s source code, Kubernetes can be used to orchestrate applications built using any development framework.
  • Robust workload security
    Kubernetes supports role-based access control (RBAC), making it easy to control who can access cluster resources and the Kubernetes API. With pod- and container-level security contexts, operators can restrict the privileges containers run with, such as blocking root execution or privilege escalation. Network policies can further limit traffic between pods; note, however, that Kubernetes does not encrypt inter-node traffic by default, so securing the network fabric from eavesdropping requires additional tooling such as a service mesh or an encrypting CNI plugin.
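As a sketch of the pod-level and RBAC controls described above, the snippet below combines a restrictive security context with a minimal read-only Role; all names and the image are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app              # hypothetical pod name
spec:
  securityContext:
    runAsNonRoot: true            # refuse to start containers running as root
    runAsUser: 1000
  containers:
    - name: app
      image: registry.example.com/app:1.0.0   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]           # drop all Linux capabilities
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]        # read-only access to pods in this namespace
```

Binding such a Role to a user or service account with a RoleBinding limits API access to exactly the verbs and resources listed, following the principle of least privilege.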

Challenges of Refactoring with Kubernetes

While Kubernetes offers the flexibility and scalability to manage containerized workloads, managing a distributed ecosystem of clusters also introduces a number of complexities. These include:

  • Overprovisioning resources
    When refactoring legacy applications to Kubernetes, a common tendency is to prioritize performance over other factors that define an application’s sustainability. Because the platform does not offer clear guidelines to control or predict how many resources an individual application will require, it’s common to overprovision.

    Such overprovisioning can become a serious concern: inappropriate provisioning of resources eventually leads to poor workload performance and higher operating costs. The platform’s hidden infrastructure complexity can also trigger unplanned cluster autoscaling and dynamic provisioning of resources, inflating costs further.
  • Complex cost tracking across immutable infrastructure
    Through refactoring, organizations can choose to break down monolithic workloads into multiple services for flexibility and performance optimization. These services can be hosted on a single cloud platform or distributed across different managed service providers to meet security and compliance benchmarks. As every cloud platform has its own pricing models and billing cycles, aggregating cost data and billing calculations to track resource usage is highly effort-intensive.
  • Persistent data storage
    Containers work seamlessly with stateless applications where stored data can be discarded as soon as the container shuts down. However, stateful applications introduce complexities since they rely on a storage framework where the data must persist across container restarts.

    As such, refactoring stateful applications also requires dedicated storage management services or plugins to connect workloads with Kubernetes persistent volumes. Data security and protection are other key challenges with persistent storage, as Kubernetes does not natively encrypt data at rest.
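The two challenges above, right-sizing resources and persisting data, surface directly in workload manifests. The sketch below sets explicit requests and limits (so the scheduler can bin-pack accurately instead of relying on overprovisioned guesses) and mounts a PersistentVolumeClaim; the storage class, sizes, and names are hypothetical.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard          # hypothetical storage class
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful-app                  # hypothetical workload name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: stateful-app
  template:
    metadata:
      labels:
        app: stateful-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0.0   # hypothetical image
          resources:
            requests:                 # what the scheduler reserves per pod
              cpu: "250m"
              memory: "256Mi"
            limits:                   # hard ceiling before throttling/OOM kill
              cpu: "500m"
              memory: "512Mi"
          volumeMounts:
            - name: data
              mountPath: /var/lib/app # data survives container restarts
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data
```

Starting from measured usage and tightening requests over time keeps cluster autoscaling predictable and avoids paying for idle headroom.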

Approaches to Refactor Workloads in a Kubernetes Ecosystem

Refactoring legacy applications requires the restructuring of various workload components to support containerization. There are a number of approaches that can be taken to refactor workloads in a Kubernetes ecosystem, and the most appropriate approach often depends on the workload type, dependencies, and organizational use case.

Source Code Refactoring

Source code refactoring involves building a container image that packages the existing application code, its libraries and dependencies, and a minimal operating system layer so that it can run entirely on Kubernetes.

This approach changes the structure of the application and adds new cloud-native features while retaining its functionality and run-time behavior. Refactoring source code is typically used for legacy applications that are incompatible with modern computing practices, that have minimal or missing documentation, or that lack an ongoing support roadmap.
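As a sketch of the packaging step, a minimal Dockerfile for a hypothetical Python service might look like the following; the base image, file names, and entry point are illustrative assumptions, not a prescribed layout.

```dockerfile
# Minimal image packaging a hypothetical Python service.
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the existing application source largely unchanged
COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```

Once built and pushed to a registry, the image can be referenced from a Kubernetes Deployment like any other containerized workload.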


The benefits of source code refactoring include:

  • Simplifies code complexity
  • Eases detection and remediation of bugs
  • Integrates seamlessly with a cloud-native framework
  • Prevents configuration drift
  • Retains original application functionality

Database Refactoring

Database refactoring involves the alteration of database schemas and their backend storage for improved data access. Depending on the type of database being used, database refactoring involves the transformation of key-value entries, batch data updates or table denormalization to improve the design and performance of the database.

Database refactoring is mostly used for applications:

  • that require performance optimization
  • that require transformation from a stateless to a stateful design
  • whose legacy design schema needs to be updated with a modern one


The benefits of database refactoring include:

  • Acts as the first step of refactoring an entire application
  • Helps fix foundational design issues
  • Helps with database normalization and performance optimization

User Interface (UI) Refactoring

UI refactoring involves making changes to the graphical interfaces of the presentation tier to maintain consistency, without making functional or logical changes. While a UI refactoring approach requires minimal changes, it may introduce scope creep when undocumented dependencies and application functionalities have to be included.

Refactoring the user interface is typically used in phased migration strategies, where the user experience stays consistent while cluster administrators migrate other components of the workload to Kubernetes clusters.


The benefits of UI refactoring include:

  • Delivers results faster than other refactoring approaches
  • Improves application efficiency without substantial changes
  • Can be repeated across various phases of the application lifecycle
  • Does not impact core application logic

Complete Application Refactoring

Complete refactoring involves changing the entire codebase, databases, and user interface to take full advantage of a cloud-native feature set. As an exhaustive app modernization approach, the components and dependencies of all application tiers are refactored to operate in a containerized ecosystem for enhanced agility and scalability.

A full application refactoring is considered suitable when:

  • the organization has the budget and skill set to migrate and operate workloads on a cloud-native ecosystem
  • the organization intends to leverage the full range of cloud capabilities
  • the legacy application requires dynamic scalability and resource allocation to meet growing user demands


The benefits of complete application refactoring include:

  • Completely decouples application code from infrastructure
  • Modernizes the application to be highly adaptive to changing environments
  • Can be more cost-effective in the long run than repeated partial refactoring

How BlueXP Supports Workload Migration to the Cloud

Enterprises embrace the cloud to become more agile, efficient, and resilient, but migration comes with its own challenges. Most often, organizations lack a clear understanding of the right migration approach and of whether workloads will operate optimally in the post-migration framework. NetApp BlueXP fits perfectly in such instances, bringing the best out of a cloud migration regardless of which cloud migration strategy you choose.

BlueXP combines all of NetApp’s data services to provide one interface for users to control all aspects of their data estates. At its center is BlueXP Cloud Volumes ONTAP, the leading storage management platform that abstracts the implementation of storage services for workloads running in the cloud or in hybrid setups. Cloud Volumes ONTAP offers seamless scaling, portability, and data protection for persistent volumes, enabling production-ready storage for Kubernetes applications.

By leveraging NetApp Astra Trident as the Container Storage Interface (CSI) plugin for integrating with Kubernetes, Cloud Volumes ONTAP serves as backend storage management for persistent workloads, whether on AWS EBS, Azure managed disks, or Google Persistent Disk. Trident provides features beyond the standard CSI specification and can be used to orchestrate storage with Docker, Red Hat OpenShift, and other popular container platforms.
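As an illustrative sketch only (the class name and backend parameter here are assumptions, not a verified configuration), a StorageClass that delegates dynamic provisioning to Trident's CSI driver might look like this:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-storage                # hypothetical class name
provisioner: csi.trident.netapp.io   # Trident's CSI driver
parameters:
  backendType: ontap-nas             # assumed Trident backend type
allowVolumeExpansion: true
```

PersistentVolumeClaims that reference this StorageClass would then be provisioned by Trident against the configured ONTAP backend.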

To find out more about how the storage platform can help manage data and scale workloads while minimizing cloud expenses, check out these Cloud Volumes ONTAP and Kubernetes success stories.


Conclusion

As organizations move to adopt Kubernetes, they often need to refactor their workloads to take advantage of the benefits that Kubernetes can offer. Irrespective of the refactoring approach, it is important to consider the impact on existing processes and dependencies, as well as the training and support requirements for teams that will be working with Kubernetes.

To take full advantage of Kubernetes, it is important to understand how to properly refactor source code into modular components that can run in parallel. This process can be complex and time-consuming, but the benefits of a well-designed Kubernetes infrastructure are numerous. For enterprise-scale deployments, it’s easier to do that with NetApp BlueXP.

BlueXP can help you build, protect, and govern your Kubernetes workloads in your hybrid and multicloud data estate, whether you’re performing a refactoring, a more traditional lift and shift, or any other migration. To find out more, read these migration success stories with BlueXP and Cloud Volumes ONTAP.



Refactoring FAQs

  • What is refactoring an application?
    Refactoring an application is the process of restructuring existing code to improve its organization, readability, or performance. It can also be used to upgrade an application to use new features or technologies.

  • When should we do refactoring?
    In many cases, refactoring an application is a strategy to avoid rewriting an entire application from scratch and retrofitting it to operate on a new framework. The process involves improving the design of existing code without changing its functionality.

  • Why is refactoring important?
    When done properly, refactoring can lead to significant improvements in an application's quality and maintainability. Refactoring legacy applications can also help you stay ahead of technical debt and keep your codebase healthy and efficient.
Sudip Sengupta, Technical Consultant
