In recent years, software developers and DevOps engineers have benefited from encapsulating applications into lightweight, independent units called containers. Kubernetes takes container deployment to a whole new level by providing a robust solution for managing and scaling containerized applications and workloads across a cluster of machines.
In this introduction to Kubernetes, the first part of our series on the subject, we’ll help you learn Kubernetes starting with the basics: the history of containers and container orchestration, the problems that Kubernetes solves, and some of the related high-level terminology. We’ll also see how storage for a Kubernetes cluster can be dynamically allocated using Cloud Volumes ONTAP and Trident, NetApp’s dynamic persistent volume provisioner for Kubernetes.
This is part of an extensive series of guides about microservices.
On deployment, a container provides process and file system separation in much the same way as a virtual machine, but with considerable improvements in server efficiency. That efficiency allows a much greater density of containers to be co-located on the same host.
While container technology has been a part of Unix-like operating systems since the turn of the century, it was only with the advent of Docker that containers really came into the mainstream.
Docker succeeded both by bringing standardization to container runtimes, for example through the Open Container Initiative, and by building a complete container management system around the raw technology, simplifying the process of creating and deploying containers for end users. Docker on its own, however, only executes containers on a single host machine. That’s where Kubernetes steps in.
Kubernetes makes it possible to execute multiple instances of a container across a number of machines and achieve fault tolerance and horizontal scale-out at the same time. Kubernetes was created by Google after over a decade of using container orchestration internally to operate their public services. Google had been using containers for a long time and developed their own proprietary solutions for data center container deployment and scaling. Kubernetes builds on those solutions as an open source project, enabling the world-wide community of software developers to grow the platform.
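To make this concrete, here is a minimal sketch of a Kubernetes Deployment manifest that asks the cluster to keep several identical copies of a container running; the application name and image are hypothetical placeholders, not part of the original article.

```yaml
# Sketch: a Deployment that keeps three replicas of a container running.
# If a node fails, Kubernetes reschedules the affected pods elsewhere.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend          # hypothetical application name
spec:
  replicas: 3                 # desired number of identical pod instances
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative container image
          ports:
            - containerPort: 80
```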
And Kubernetes adoption is growing. A recent bi-annual survey of over 2,000 IT professionals from North America and Europe by the Cloud Native Computing Foundation found that 75% of respondents were using containers in production, with the rest planning to use them in the future. Kubernetes usage remained very strong: 83% of respondents were using the platform, up from 77% in the previous survey, and 58% were using it in production.
The ability to manage applications independently of infrastructure holds great value for cloud deployments. We can build out a cluster of machines in the cloud that provides the compute and storage resources for all of our applications, and then let Kubernetes ensure we get the best resource utilization. Kubernetes can also be configured to automatically scale the cluster up and down in response to changes in demand.
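Node-level cluster scaling is typically handled by a separate cluster autoscaler component supplied by the cloud platform, but scaling in response to demand can also be expressed declaratively at the pod level. The sketch below shows a HorizontalPodAutoscaler for the hypothetical Deployment above; names and thresholds are illustrative assumptions.

```yaml
# Sketch: scale the hypothetical "web-frontend" Deployment between 3 and
# 10 replicas based on average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```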
For deployed applications, Kubernetes offers many benefits, such as service discovery, load balancing, rolling updates, and much more. Kubernetes acts as an application server that is used to run all of the services, message queues, batch processes, database systems, caching services, etc. that make up an enterprise application deployment.
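As a sketch of the service discovery and load balancing mentioned above, the manifest below defines a Service that gives the hypothetical web-frontend pods a stable in-cluster DNS name and spreads requests across all of their replicas.

```yaml
# Sketch: a Service that load-balances traffic across every pod carrying
# the "app: web-frontend" label and exposes them under one stable name.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  selector:
    app: web-frontend     # pods selected by label
  ports:
    - port: 80            # port exposed by the Service
      targetPort: 80      # container port that receives the traffic
```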
The flexibility of this service has driven Kubernetes adoption across the cloud, with all major cloud vendors offering a native Kubernetes service, for example Amazon EKS, Google Kubernetes Engine, and Azure Kubernetes Service. Kubernetes is also the foundation of other container orchestration platforms, such as Red Hat OpenShift.
There is a lot to understand when it comes to Kubernetes. In this section, we’ll give you an introduction to Kubernetes terminology that describes the main moving parts that make up the service.
The following terms relate to storage provisioning in a Kubernetes cluster:
Cloud Volumes ONTAP brings the power and versatility of NetApp’s ONTAP storage systems to the cloud, using AWS, Google Cloud, or Azure compute and storage resources to create a virtual storage appliance. This provides a level of storage management that goes far beyond anything else currently available, featuring:
NetApp Trident connects the capabilities of Cloud Volumes ONTAP to Kubernetes by acting as a dynamic storage provisioner that allocates and makes available cloud storage in response to persistent volume claims. This removes the need for administrators to manually provision and manage storage, while also providing a very sophisticated feature set.
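From the cluster side, this pattern looks roughly like the sketch below: a StorageClass served by Trident’s CSI provisioner and a PersistentVolumeClaim against it. The class name, backend type, and sizes are assumptions that depend on how Trident and Cloud Volumes ONTAP have actually been configured.

```yaml
# Sketch: a StorageClass backed by Trident and a claim against it.
# Class name and backendType are illustrative, not a fixed requirement.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-nas
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-nas
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ontap-nas
  resources:
    requests:
      storage: 10Gi       # Trident provisions a matching volume on demand
```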
For example, using NetApp Trident, a persistent volume claim can be used to dynamically clone an existing persistent volume, as is typically required for DevOps CI/CD pipelines, as opposed to allocating a new storage volume. When the pod using the clone is destroyed, the cloned storage is also cleaned up automatically.
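One way to express such a clone with a CSI-capable provisioner like Trident is the standard Kubernetes volume-clone request sketched below, where a new claim names an existing claim as its dataSource. The claim names refer to the hypothetical example above and are illustrative only.

```yaml
# Sketch: a claim that asks the provisioner to clone an existing volume
# instead of allocating a fresh one. "app-data" is the hypothetical
# source claim from the previous sketch.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data-clone
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ontap-nas
  dataSource:
    kind: PersistentVolumeClaim
    name: app-data          # existing claim to clone
  resources:
    requests:
      storage: 10Gi         # must be at least the size of the source
```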
NetApp Trident and Cloud Volumes ONTAP can also be used to manage the storage for stateful sets, ensuring that pods are always bound to the same storage volumes. This is critical for deploying stateful applications, such as MySQL, to Kubernetes.
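A hedged sketch of that pattern: a StatefulSet whose volumeClaimTemplates cause the provisioner to create a dedicated persistent volume per replica, so each database pod is always re-attached to its own data. Names, image, and storage class are illustrative assumptions.

```yaml
# Sketch: a StatefulSet in which every replica gets its own persistent
# volume via volumeClaimTemplates and is rebound to it after rescheduling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql            # headless Service assumed to exist
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0      # illustrative image
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ontap-nas   # hypothetical Trident-backed class
        resources:
          requests:
            storage: 20Gi
```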
Kubernetes is today’s most widely-used platform for container and microservices orchestration and provides the scalability and flexibility required for deploying enterprise applications and services. Managing storage in a Kubernetes cluster with dynamic storage provisioning massively reduces the manual administration required for allocating cloud storage to pods and containers.
Using NetApp Trident, Kubernetes storage requests are dynamically fulfilled by Cloud Volumes ONTAP, which, in effect, does for storage what Kubernetes does for containers. Cloud Volumes ONTAP can also be used to allocate volumes for Docker containers directly using the nDVP plugin.
Learn more about how Cloud Volumes ONTAP supports the Kubernetes persistent volume provisioning and management requirements of containerized workloads, and how it helps address the challenges of containerized applications, in these Kubernetes Workloads with Cloud Volumes ONTAP Case Studies.
Together with our content partners, we have authored in-depth guides on several other topics that can also be useful as you explore the world of microservices.