More about Kubernetes Storage
- How to Provision Persistent Volumes for Kubernetes with the NetApp BlueXP Console
- Fundamentals of Securing Kubernetes Clusters in the Cloud
- Kubernetes Storage Master Class: A Free Webinar Series by NetApp
- Kubernetes StorageClass: Concepts and Common Operations
- Kubernetes Data Mobility with Cloud Volumes ONTAP
- Scaling Kubernetes Persistent Volumes with Cloud Volumes ONTAP
- What's New in K8S 1.23?
- Kubernetes Topology-Aware Volumes and How to Set Them Up
- Kubernetes vs. Nomad: Understanding the Tradeoffs
- How to Set Up MySQL Kubernetes Deployments with Cloud Volumes ONTAP
- Kubernetes Volume Cloning with Cloud Volumes ONTAP
- Container Storage Interface: The Foundation of K8s Storage
- Kubernetes Deployment vs StatefulSet: Which is Right for You?
- Kubernetes for Developers: Overview, Insights, and Tips
- Kubernetes StatefulSet: A Practical Guide
- Kubernetes CSI: Basics of CSI Volumes and How to Build a CSI Driver
- Kubernetes Management and Orchestration Services: An Interview with Michael Shaul
- Kubernetes Database: How to Deploy and Manage Databases on Kubernetes
- Kubernetes and Persistent Apps: An Interview with Michael Shaul
- Kubernetes: Dynamic Provisioning with Cloud Volumes ONTAP and Astra Trident
- Kubernetes Cloud Storage Efficiency with Cloud Volumes ONTAP
- Data Protection for Persistent Data Storage in Kubernetes Workloads
- Managing Stateful Applications in Kubernetes
- Kubernetes: Provisioning Persistent Volumes
- An Introduction to Kubernetes
- Google Kubernetes Engine: Ultimate Quick Start Guide
- Azure Kubernetes Service Tutorial: How to Integrate AKS with Azure Container Instances
- Kubernetes Workloads with Cloud Volumes ONTAP: Success Stories
- Container Management in the Cloud Age: New Insights from 451 Research
- Kubernetes Storage: An In-Depth Look
- Monolith vs. Microservices: How Are You Running Your Applications?
- Kubernetes Shared Storage: The Basics and a Quick Tutorial
- Kubernetes NFS Provisioning with Cloud Volumes ONTAP and Trident
- Azure Kubernetes Service How-To: Configure Persistent Volumes for Containers in AKS
- Kubernetes NFS: Quick Tutorials
- NetApp Trident and Docker Volume Tutorial
October 13, 2020
Topics: Cloud Volumes ONTAP, DevOps, Kubernetes | Advanced | 6 minute read
The way software is developed has changed tremendously in the last two decades. The evolution of computing, network, and storage resources has made it possible to build applications that are more distributed and scalable. In the early days, applications were designed as black boxes. Those applications, which we now refer to as monoliths, were planned as all-in-one systems that bundled all the required features and tried to maximize server capacity usage. Luckily, microservices and containerized deployment platforms such as Kubernetes have changed all that.
In recent years, the microservices architecture has emerged as a way to break apart the monolith pattern. Leveraging technology advancements such as virtualization, containerization, and the cloud, this paradigm shift splits a system into smaller, modular, and flexible components that work together to satisfy business requirements.
In this blog we’ll take a look at some of the background for this shift from monoliths to microservices, and see how NetApp Cloud Volumes ONTAP can be used in your development cycle.
Why Move Towards a Microservices Architecture?
A distributed architecture based on microservices makes software development more agile and systems easier to maintain, resulting in faster time to market and lower costs. The modular, assemble-from-parts mindset also makes it easier to reuse readily available components across different systems and to leverage existing open-source projects and third-party services.
From an operational point of view, organizations move towards a microservices architecture because it is more reliable, uses fewer resources, and scales far better than a legacy monolithic architecture. A system built from multiple small, independent pieces (microservices) can also handle failures better and grow beyond the limits of individual physical (or virtual) machines.
For anyone designing modern software systems, it is important to truly understand the differences and tradeoffs of monolithic vs. microservices architectures. While the benefits of microservices largely outweigh the downsides, there are challenges that need to be recognized and tackled from the get-go. In this article we cover the most important ones.
Stateless vs. Stateful Services
The best place to start designing your microservices architecture is to plan which services are truly needed and to understand the different service types. Services can be divided into two categories: stateless and stateful.
Stateless services are applications that process requests and data but do not actually store them. A web service is a good example: it processes and responds to client requests, but retrieves and stores data through a stateful application (e.g., an SQL database). A stateful application has data persistence, which in our web application example could mean that each client session has some individual data that is stored and reused the next time the client makes a request.
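To make the distinction concrete, here is a minimal sketch of a stateless service backed by a stateful store. It assumes Flask and SQLite purely for illustration; the /sessions endpoint and table schema are made up for this example.

```python
# Illustrative sketch: a stateless HTTP service that keeps no data itself and
# delegates all persistence to a stateful store (SQLite stands in for an SQL
# database here; the endpoint and schema are assumptions).
import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)
DB_PATH = "sessions.db"  # in a real deployment this would be an external database

def get_db():
    conn = sqlite3.connect(DB_PATH)
    conn.execute("CREATE TABLE IF NOT EXISTS sessions (client_id TEXT PRIMARY KEY, data TEXT)")
    return conn

@app.route("/sessions/<client_id>", methods=["GET"])
def read_session(client_id):
    # The service holds no state between requests; it only reads from the
    # stateful backend and returns the result.
    row = get_db().execute(
        "SELECT data FROM sessions WHERE client_id = ?", (client_id,)
    ).fetchone()
    return jsonify({"client_id": client_id, "data": row[0] if row else None})

@app.route("/sessions/<client_id>", methods=["PUT"])
def write_session(client_id):
    # Persistence happens in the backend store, not in the service process.
    conn = get_db()
    conn.execute(
        "INSERT OR REPLACE INTO sessions (client_id, data) VALUES (?, ?)",
        (client_id, request.get_json().get("data")),
    )
    conn.commit()
    return jsonify({"status": "stored"})

if __name__ == "__main__":
    app.run(port=8080)
```

Because the service process itself stores nothing, any replica can answer any request, which is what makes stateless services easy to scale horizontally.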
Understanding the Deployment Differences
While the benefits of a microservices architecture are easy to understand, deploying one involves significantly more complexity than deploying a monolith. In a big monolithic application, we usually have one artifact that is deployed to a specific target environment. Any application change, even a minimal one, has to go through the same long deployment process. This is, of course, quite time consuming and often leads to errors.
A microservices deployment is significantly different. Each service has its own artifact and can be changed and deployed independently, which makes application changes much faster and less error prone. With multiple deployment pipelines and artifacts to manage, a microservices deployment benefits from a proper CI/CD environment and automation.
Despite its higher complexity, a microservices deployment also opens the door to release models such as A/B testing and canary deployments, which, used correctly, can truly enable business experimentation and innovation.
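The mechanics behind a canary rollout can be as simple as weighted routing. The sketch below is purely conceptual; the backend names and the 5% weight are assumptions, and real deployments would typically delegate this to a load balancer or service mesh.

```python
# Conceptual sketch of canary routing: send a small, configurable share of
# traffic to the new version while most requests hit the stable release.
import random

CANARY_WEIGHT = 0.05  # assumed: 5% of traffic goes to the canary release

def pick_backend() -> str:
    return "service-v2-canary" if random.random() < CANARY_WEIGHT else "service-v1-stable"
```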
Service Discovery and Communication
A microservices architecture is distributed by nature. With each individual service focused on a specific task, it's important to have a way for those separate parts to communicate and exchange information. The most common non-transactional communication method between microservices is REST. For transactional communication, message queues are usually the preferred option.
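As a small illustration of REST-based communication between services, one microservice might call another like this (the service hostname and endpoint are assumptions for the sake of the example):

```python
# Minimal sketch of non-transactional, REST-based communication between two
# microservices, using the "requests" library.
import requests

def fetch_order(order_id: str) -> dict:
    # "orders" would typically resolve through service discovery (e.g., DNS).
    response = requests.get(f"http://orders:8080/orders/{order_id}", timeout=2)
    response.raise_for_status()
    return response.json()
```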
But communication of any type is only possible if each microservice is aware of the others. With near-unlimited growth potential, where hundreds of services scale up and down based on demand, it is vital to have tools and services that provide service discovery, allowing microservices to dynamically find and communicate with each other. This is one of the most important aspects of the microservices model, and one that greatly contrasts with the monolithic architecture, where there are usually only a few components that are static and rarely change.
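In Kubernetes, for example, service discovery is typically DNS-based: a service is reached by a stable name that the platform resolves to whatever healthy endpoints currently exist. A minimal sketch, with an assumed service name:

```python
# Sketch of DNS-based service discovery: callers use a stable service name and
# the platform resolves it to the current set of endpoints.
import socket

addresses = socket.getaddrinfo("orders.default.svc.cluster.local", 8080, proto=socket.IPPROTO_TCP)
for family, _, _, _, sockaddr in addresses:
    print(sockaddr)  # the resolved addresses change as the service scales up or down
```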
The Importance of Monitoring and Logging
In a monolithic application, troubleshooting and understanding what is happening with the system is often as easy as opening a handful of log files. Logging for microservices is a lot more challenging: with events generated separately by each service and dispersed across the system, it is hard to follow and correlate what happened and why.
A key piece of technology that makes microservices monitoring and event management more efficient is a centralized logging service. Centralized logging lets all microservices write their output and metrics to a single place, providing observability across the whole distributed microservices system.
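A common pattern is for each service to emit structured (JSON) log lines to stdout and let a collector ship them to the central logging backend. The sketch below assumes Python's standard logging module; the field names and service name are illustrative:

```python
# Sketch of the usual centralized-logging pattern: each microservice writes
# structured JSON log lines to stdout, and a log collector (e.g., Fluentd or
# Fluent Bit) forwards them to a central backend for correlation.
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "service": "orders",          # identifies which microservice emitted the event
            "level": record.levelname,
            "message": record.getMessage(),
            "timestamp": self.formatTime(record),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.info("order created")  # correlated centrally with events from other services
```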
More for Microservices with Cloud Volumes ONTAP
When planning a microservices architecture, it's important to keep these different aspects in mind. There are no silver bullets: while microservices offer numerous technical and business benefits, they bring their own challenges.
The good news is that the challenges associated with microservices are becoming much easier to tackle. Great tooling and services have emerged, from managed cloud resources to container orchestration platforms (e.g., Kubernetes), making day-to-day operational work less painful.
If you are planning a microservices architecture, a tool you should consider is NetApp Cloud Volumes ONTAP, the enterprise-grade storage management solution for data in the cloud. Cloud Volumes ONTAP is particularly useful for stateful workloads that require data persistence, providing features such as instant data cloning, multicloud and hybrid cloud operability, data protection, and storage efficiencies that help reduce the cloud data storage footprint and costs. Plus, if you plan to use Kubernetes, Cloud Volumes ONTAP leverages the NetApp Trident provisioner to automatically and dynamically provision persistent volumes on AWS EBS, Azure disk, or Google Persistent Disk. That gives your Kubernetes workloads access to all of Cloud Volumes ONTAP's capabilities, making running microservices much easier and more cost effective.
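To give a rough idea of what dynamic provisioning looks like from the workload side, a stateful service simply claims storage against a Trident-backed StorageClass and lets the provisioner do the rest. The sketch below uses the Kubernetes Python client; the StorageClass name, namespace, and size are assumptions:

```python
# Hedged sketch: request a persistent volume for a stateful microservice by
# creating a PVC against an assumed Trident-backed StorageClass; the
# provisioner then creates the volume dynamically on the backing storage.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside the cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ontap-nas",  # assumed name of a Trident StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "20Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```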
Would you like to hear more about running microservices in the cloud?
Watch our From Legacy to Fully Containerized Applications in the Cloud webinar on demand, and get in-depth knowledge on building and orchestrating a container-based microservices architecture with the free NetApp Guide to Kubernetes: Persistent Volumes, Dynamic Provisioning, Cloud Storage.