More about Kubernetes Storage
- Fundamentals of Securing Kubernetes Clusters in the Cloud
- Kubernetes Storage Master Class: A Free Webinar Series by NetApp
- Kubernetes StorageClass: Concepts and Common Operations
- Kubernetes Data Mobility with Cloud Volumes ONTAP
- Scaling Kubernetes Persistent Volumes with Cloud Volumes ONTAP
- What's New in K8S 1.23?
- Kubernetes Topology-Aware Volumes and How to Set Them Up
- How to Use NetApp Cloud Manager for Provisioning Persistent Volumes in Kubernetes
- Kubernetes vs. Nomad: Understanding the Tradeoffs
- How to Set Up MySQL Kubernetes Deployments with Cloud Volumes ONTAP
- Kubernetes Volume Cloning with Cloud Volumes ONTAP
- Container Storage Interface: The Foundation of K8s Storage
- Kubernetes Deployment vs StatefulSet: Which is Right for You?
- Kubernetes for Developers: Overview, Insights, and Tips
- Kubernetes StatefulSet: A Practical Guide
- Kubernetes CSI: Basics of CSI Volumes and How to Build a CSI Driver
- Kubernetes Management and Orchestration Services: An Interview with Michael Shaul
- Kubernetes Database: How to Deploy and Manage Databases on Kubernetes
- Kubernetes and Persistent Apps: An Interview with Michael Shaul
- Kubernetes: Dynamic Provisioning with Cloud Volumes ONTAP and Astra Trident
- Kubernetes Cloud Storage Efficiency with Cloud Volumes ONTAP
- Data Protection for Persistent Data Storage in Kubernetes Workloads
- Managing Stateful Applications in Kubernetes
- Kubernetes: Provisioning Persistent Volumes
- An Introduction to Kubernetes
- Google Kubernetes Engine: Ultimate Quick Start Guide
- Azure Kubernetes Service Tutorial: How to Integrate AKS with Azure Container Instances
- Kubernetes Workloads with Cloud Volumes ONTAP: Success Stories
- Container Management in the Cloud Age: New Insights from 451 Research
- Kubernetes Storage: An In-Depth Look
- Monolith vs. Microservices: How Are You Running Your Applications?
- Kubernetes Shared Storage: The Basics and a Quick Tutorial
- Kubernetes NFS Provisioning with Cloud Volumes ONTAP and Trident
- Azure Kubernetes Service How-To: Configure Persistent Volumes for Containers in AKS
- Kubernetes NFS: Quick Tutorials
- NetApp Trident and Docker Volume Tutorial
For stateful workloads that need data persistence beyond the lifecycle of a container, a scalable and robust storage management solution is a must. But native cloud storage solutions may not be able to scale to the level many enterprises require for Kubernetes storage.
Cloud Volumes ONTAP—the data management platform from NetApp—provides a solution, offering a robust and petabyte-scale storage solution for Kubernetes deployments in the cloud. In this blog we’ll explore some of the container scalability challenges for stateful Kubernetes workloads and see how Cloud Volumes ONTAP can help solve them.
Read on below as we cover:
- Container Storage Scaling Challenges
- Native Cloud Provider Container Storage Scaling Considerations
- Addressing Container Storage Scaling Challenges with Cloud Volumes ONTAP
- Get More for Kubernetes with Cloud Volumes ONTAP
Container Storage Scaling Challenges
As is the case with any other deployment, the scalability of an application running in a container depends on the scalability of its storage layer. Here are some of the challenges that users may run into when trying to scale out Kubernetes storage.
Cloud provider size limits
While the cloud has generally been billed as offering limitless storage, that may not always be the case. When using native cloud storage for containers, the quota and limitations of the specific services can become a bottleneck. There is a maximum size limit to the amount of persistent storage you can allocate using a native file share service on AWS, Azure, or Google Cloud.
Volume resizing is also a challenge, since it is subject to the scalability limits mentioned above, and provider-specific restrictions can come into play as well. For example, in Azure Kubernetes Service, volume resizing is not supported by the built-in storage classes that use Azure disks in the backend; customers need to create a custom storage class to overcome this limitation.
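For illustration, a custom storage class on AKS that allows resizing might look like the following sketch. The class name is hypothetical; the provisioner shown assumes the Azure Disk CSI driver is installed on the cluster.

```yaml
# Sketch of a custom AKS StorageClass that permits expansion.
# The name and parameters are illustrative, not a built-in class.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium-expandable   # hypothetical name
provisioner: disk.csi.azure.com      # Azure Disk CSI driver
parameters:
  skuName: Premium_LRS
allowVolumeExpansion: true           # the key difference from the built-in class
reclaimPolicy: Delete
```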
Storage capacity management
Storage capacity management can also become an overhead, as you need to account for the limitations of both the native cloud storage services and the specific cloud service provider. If multiple storage types are in use (for example, both disks and file shares), each has to be evaluated separately when planning your containerized applications' storage capacity.
This could also impact the speed and agility at which these configurations can be managed. You might need a different automation approach for different types of storage depending on the cloud service provider.
Last but not least is the added complexity users face when containerized workloads are deployed in multicloud or hybrid cloud environments. Such architectures can leave cloud administrators constantly switching between multiple cloud consoles and automation tools to manage the container storage layer. This process quickly becomes cumbersome, with no unified approach for managing persistent storage for your containerized workloads across such complex architectures.
Native Cloud Provider Container Storage Scaling Considerations
While there are multiple options for using cloud native storage services for Kubernetes persistent volumes, there are certain inherent limitations that you should take into account when using them.
Disk scaling limitations
Disk-based block storage such as Azure disks, AWS EBS, and Google Persistent Disk can be configured as persistent volumes for containerized workloads. However, there is a limit on the number of disks that can be attached to specific VM SKUs, and each cloud provider's service has its own limit on the maximum size of an individual disk.
The maximum supported disk size, combined with the maximum number of disks supported by different VM SKUs, caps the total storage capacity a single VM can support. This cap is currently 257 TB for Google Cloud, 336 TB for AWS, and 256 TB for Azure. That could become a bottleneck if your applications require storage to scale beyond these limits and into the petabytes.
Using storage other than local storage for containers is not a straightforward process. To begin with, you will have to deal with multiple storage classes and PV configuration files, and there may be cloud-specific configurations to take into account as well. For example, mounting an Amazon FSx for Windows or FSx for Lustre file share on an EKS cluster involves multiple manual configuration steps, including installing additional CSI drivers before the persistent volume can be configured.
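To give a sense of the moving parts involved, a storage class and a claim that consumes it might look like the following sketch. The provisioner, class, and claim names here are all placeholders rather than any specific vendor's driver.

```yaml
# Illustrative StorageClass/PVC pair; all names and the provisioner
# are placeholders standing in for whatever CSI driver is installed.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-file-storage
provisioner: example.csi.vendor.com   # hypothetical CSI driver
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: example-file-storage
  resources:
    requests:
      storage: 100Gi
```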
File share scaling limitations
AWS, Azure, and GCP provide options for using native managed file share services as persistent volumes for their respective managed Kubernetes services. However, the scalability limits of these services apply equally to the persistent volumes created with them.
The limits for some of the commonly used file share services are as follows: 64 TB for Amazon FSx, 5 TB for Azure Files (standard tier), and 63.9 TB for GCP Filestore (Basic SSD). The premium/high-scale tiers for file share service in Azure and GCP can scale to a maximum size of 100 TB, but would incur additional cost.
Data lifecycle and cost limitations
Some data residing on disks attached as persistent volumes may be rarely accessed by applications, yet still be required at some point in time, say for audit, compliance, or reference purposes. Since this data is not in active use, keeping it in block storage is wasteful given what that storage costs. Yet cloud service providers offer no native solutions for efficient lifecycle management of data in container local storage.
Native cloud block storage services charge based on the size of the provisioned disk, so infrequently accessed data residing on local storage adds to storage costs irrespective of the access pattern. Customers end up paying for storage they are not using on a day-to-day basis, which reduces the overall ROI of cloud storage.
Now that we’ve seen what some of the scaling constraints are when using native cloud provider services, let’s see how Cloud Volumes ONTAP can overcome these storage limitations to scale Kubernetes storage up to the petabyte scale.
Addressing Container Storage Scaling Challenges with Cloud Volumes ONTAP
Cloud Volumes ONTAP combines the capabilities of the trusted NetApp ONTAP data management platform with the cloud-based block storage offered by AWS, Azure, and Google Cloud. It provides access to storage volumes over the iSCSI, NFS, and SMB protocols and can be configured as persistent storage for your containerized workloads.
Though Cloud Volumes ONTAP uses native cloud storage to create a virtual storage appliance, it provides the following additional benefits when used as persistent volumes for containers in the cloud.
Dynamic provisioning
Container storage capacity requirements can change on the fly. With Cloud Volumes ONTAP as the storage layer for your persistent volumes, there is no need to predict capacity requirements and pre-provision volumes. Cloud Volumes ONTAP uses NetApp Astra Trident as a CSI-based storage provisioner to dynamically provision storage for your persistent volume claims.
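A minimal sketch of dynamic provisioning with Trident, assuming an ontap-nas backend has already been configured; the class and claim names are illustrative.

```yaml
# Trident-backed StorageClass (sketch); assumes an ontap-nas backend exists.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-nas                     # illustrative name
provisioner: csi.trident.netapp.io    # Astra Trident CSI provisioner
parameters:
  backendType: "ontap-nas"
---
# A claim against the class; Trident creates the volume on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-dynamic
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ontap-nas
  resources:
    requests:
      storage: 50Gi
```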
Petabyte-scale capacity
Cloud Volumes ONTAP provides an option to bypass the block storage size limitations of the cloud service providers through license stacking and storage tiering, letting you reach storage capacity in the petabyte scale.
Block storage is freed up when data is tiered to object storage, which is virtually limitless. By stacking multiple ONTAP BYOL licenses, you also increase the overall available block storage capacity into the petabytes. For example, a single Cloud Volumes ONTAP license can support up to 368 TB due to the single-VM block storage limitation, but adding three more licenses takes this up to 1.4 PB, giving you a petabyte-scale storage pool for containerized workloads.
Storage tiering
Cloud Volumes ONTAP helps overcome the storage lifecycle management challenges of native cloud storage services by providing an option to tier infrequently accessed data to low-cost object storage. This storage tiering feature is transparent to the application and does not impact its performance: the data remains accessible whenever required. At the same time, it drastically brings down storage costs by moving rarely accessed data to a cost-effective cloud storage tier.
Volume expansion
With Cloud Volumes ONTAP as persistent storage, volumes can be expanded directly from the Kubernetes layer. The configuration required is as simple as setting the allowVolumeExpansion flag to true in the storage class definition. This overcomes the rigid limitations of persistent volumes based on native cloud storage that cannot be resized once provisioned: you can start small based on the application's requirements and expand the volume later as the data size increases.
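As a sketch, a Trident-backed storage class with expansion enabled might look like this (the class name is illustrative, and an ontap-nas backend is assumed):

```yaml
# StorageClass permitting in-place volume expansion (sketch).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-expandable              # illustrative name
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
allowVolumeExpansion: true            # enables resizing existing PVCs
```

With such a class in place, growing a volume is just a matter of editing the PVC's spec.resources.requests.storage value to the new size.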
Centralized management
The NetApp Cloud Manager SaaS provides a centralized control plane to manage your Cloud Volumes ONTAP volumes across hybrid and multicloud environments. No matter which cloud platform you use to deploy your containerized workloads, the storage layers for all of them can be controlled from this single dashboard.
You can also enable replication/data copy between the persistent volumes through SnapMirror® data replication directly from the Cloud Manager interface. This gives you a way to eliminate the hassle of switching between tools to manage different persistent storage layers. Plus, all of these capabilities can be carried out programmatically with RESTful API calls, with no need to use the GUI at all.
Multiprotocol support
Persistent volumes provisioned using Cloud Volumes ONTAP can be mounted over a protocol of your choice: iSCSI, NFS, or SMB. That means you aren't limited to local storage options, and both shared and non-shared storage requirements are covered. iSCSI driver-based volumes can be used where storage is not shared, while NAS driver-based volumes serve shared storage requirements where multiple pods need access to the same volume.
Optimized file caching
Cloud Volumes ONTAP uses FlexCache® technology to enable faster reads of data. The proprietary caching technology used by FlexCache ensures that the reads are cached to the nearest client. This offers scalability in that you can cache data without having to replicate your entire data set.
Zero-capacity cloning
Instant, zero-capacity data cloning via NetApp FlexClone® allows Cloud Volumes ONTAP users to scale their dev/test operations without worrying about drastically increasing the storage footprint and storage costs. All clone copies are based on Snapshot images, and only delta data requires additional storage space.
Get More for Kubernetes with Cloud Volumes ONTAP
Scalability is just one of the benefits offered by Cloud Volumes ONTAP. It comes packed with additional features that deliver more value for your cloud storage investment.
Features like thin provisioning, deduplication, and compression help bring down the cost of persistent volume storage by up to 70%. Cloud Volumes ONTAP also ensures a higher level of data protection for persistent volumes through its dual-node high availability configuration and point-in-time backup snapshot copies.
No matter how large the scale, Cloud Volumes ONTAP delivers a great value proposition for your persistent storage requirements for Kubernetes workloads in the cloud.