
How to Use Google Filestore with Microservices

Microservices-based architectures have become the norm in cloud computing, especially for born-in-the-cloud organizations. But while microservices are essentially stateless, some applications might still need access to persistent Google Cloud storage.

Google Filestore is a fully managed file share that can be leveraged as storage for containers hosted in Google Kubernetes Engine (GKE). This blog explores how to attach Filestore-based persistent volumes to containers, along with the benefits, limitations, and configuration steps.


Container Deployment in GCP

Kubernetes was originally developed by Google as an in-house solution for hosting containerized workloads and was released as open source in 2014. It is undoubtedly the most popular container orchestration platform today and has fast become the preferred container hosting solution in the cloud as well.

Today, all the cloud service providers offer managed Kubernetes service solutions that can help accelerate the deployment process. With managed Kubernetes services, the cloud service provider will manage the control plane of Kubernetes so that customers can focus on the application development, packaging, and deployment.

Google Kubernetes Engine (GKE) is the managed Kubernetes service from GCP, with single-click cluster deployment and scalability of up to 1500 nodes capable of delivering truly cloud-scale applications. The service has built-in high availability to protect from regional as well as zonal downtime.

GKE also has an ecosystem of products and services to support your containerized workloads: container registry, Google Anthos, container image scanning, and binary authorization, to name a few. Unlike other managed Kubernetes services, GKE offers an Autopilot deployment option, which provides a fully managed, hands-off Kubernetes platform with per-pod billing.
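As a quick illustration (a sketch only; the cluster name and region are placeholder values), an Autopilot cluster can be created with a single command:

     gcloud container clusters create-auto autopilot-cluster1 --region=us-west1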

Using Google Cloud Filestore as Persistent Volumes for Containers

The storage associated with a container is ephemeral, which means that all of its data is lost when the container shuts down, crashes, or restarts. This can pose an issue when running enterprise applications at scale on containers. To solve this, Kubernetes provides an option to attach persistent volumes to containers. The data in persistent volumes is independent of the container lifecycle, meaning it remains available even if the container crashes or restarts.

The main concepts associated with this fundamental part of Kubernetes deployment are explained below, in brief.

  • PersistentVolumes (PV): A PV is the storage volume created as a Kubernetes cluster resource that can be attached to pods. The volumes can either be statically or dynamically provisioned.
  • PersistentVolumeClaim (PVC): This is a request for storage with a defined storage size and access mode, i.e. ReadWriteOnce, ReadWriteMany, or ReadOnlyMany.

GCP customers can use Google Filestore as a persistent volume for pods deployed in GKE clusters. Filestore is a fully managed Network Attached Storage (NAS) service available in GCP. The storage can be attached to Compute Engine instances or GKE clusters. The size of the provisioned volumes can be scaled up or down on demand from the GCP console, command-line interface, or APIs, with scalability in the range of hundreds of terabytes.

With 720K IOPS and 2Gb/s speeds, Filestore can meet the demands of the most performance-intensive enterprise applications. It can be used to statically provision volumes for pods in GKE. Dynamic provisioning is possible through the GCP Filestore CSI driver, but it is not officially supported by Google.
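For reference, dynamic provisioning with the Filestore CSI driver is configured through a Kubernetes StorageClass. The example below is a minimal sketch only, assuming the CSI driver is enabled on the cluster; the class name is a placeholder, and the tier and network parameters depend on your environment:

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: filestore-sc
     provisioner: filestore.csi.storage.gke.io
     parameters:
       tier: standard
       network: default
     allowVolumeExpansion: true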

How to Configure Filestore with GKE

Let’s look at a sample configuration for attaching a Filestore instance as a persistent volume in a GKE cluster.

Prerequisites

  1. The persistent volume configuration requires an existing GKE cluster.

For greenfield deployments, you can easily create a new cluster using GCP Cloud Shell. Ensure that the default project, zone, and region are all set in Cloud Shell before creating the cluster.

  2. Run this command to create the GKE cluster:

     gcloud container clusters create cluster1 --num-nodes=1


Replace cluster1 with a cluster name of your choice. For this example, we are creating a single-node cluster connected to the default network. You can add more nodes per your requirements.
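As an optional check, you can confirm the cluster was created by listing the clusters in the project:

     gcloud container clusters list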

  3. Create the Filestore instance using the following command:

     gcloud beta filestore instances create gke-nfs --zone=us-west1-a --tier=BASIC_HDD --file-share=name="vol1",capacity=1TB --network=name="default"

This command will create a Filestore instance named gke-nfs in the Basic HDD tier. For this example, the file share name is vol1 with a capacity of 1 TB, connected to the default network. You can change the settings per your specific application requirements.


Note: The GKE cluster and the Filestore instance should be in the same project and connected to the same VPC network, unless the Filestore instance is connected to a shared VPC.

Configuring a Persistent Volume

Now we’ll see how to configure a persistent volume for our container to leverage using Filestore.

  1. Run the following command to list the Filestore instance you created in the preceding step:

     gcloud filestore instances list


Make sure to copy down the FILE_SHARE_NAME and IP_ADDRESS values in the output. These will be used when creating the persistent volume specification.

  2. Create a pv.yaml file in Cloud Shell with the following content to define the persistent volume:

     apiVersion: v1
     kind: PersistentVolume
     metadata:
       name: gkefile
     spec:
       capacity:
         storage: 128M
       accessModes:
         - ReadWriteMany
       nfs:
         path: /vol1
         server: 10.81.100.202

Note that the FILE_SHARE_NAME and IP_ADDRESS values from the earlier step are used here as inputs for the path and server parameters, respectively. The value of storage is configured as 128 MB, but it can be any value per the persistent volume requirements of the application.

  3. Get the cluster credentials using the following command:

     gcloud container clusters get-credentials cluster1 --zone=us-west1-a


  4. Create the persistent volume using the following command:

     kubectl create -f ./pv.yaml

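As an optional check, you can list the new volume; since no claim is bound to it yet, its STATUS should show as Available:

     kubectl get pv gkefile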

  5. Create a pvc.yaml file in Cloud Shell with the following content to define the persistent volume claim specification:

     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: gkefile-claim
     spec:
       accessModes:
         - ReadWriteMany
       storageClassName: ""
       volumeName: gkefile
       resources:
         requests:
           storage: 128M

Note that the value of the storage parameter should be less than or equal to the amount of storage specified in step 2.

  6. Deploy the PVC specification:

     kubectl create -f ./pvc.yaml

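As an optional check, you can confirm that the claim is bound to the volume; the STATUS column should show Bound:

     kubectl get pvc gkefile-claim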

  7. The persistent volume claim can now be used in a pod definition. A sample pod specification file that uses the persistent volume claim created in the earlier step is shown below:

     apiVersion: v1
     kind: Pod
     metadata:
       name: my-pod
     spec:
       containers:
       - name: test-gke
         image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:1.0
         volumeMounts:
         - mountPath: /mnt/filestore
           name: gkepvc
       volumes:
       - name: gkepvc
         persistentVolumeClaim:
           claimName: gkefile-claim
           readOnly: false

Create the testapp.yaml file in Cloud Shell with the above content and replace the following parameters as required:

  • image: Provide the tag of the container image to be used
  • mountPath: Provide the path to mount the persistent volume
  • claimName: Use the same name as the PVC created in step 5
  8. Deploy the test application using the following command:

     kubectl create -f ./testapp.yaml


  9. Check the status of the pod using the following command:

     kubectl get pods


  10. Describe the pod using the following command:

      kubectl describe pod my-pod

You can see in the output that the persistent volume details are now attached to the pod.

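As a final optional check, assuming the container image includes basic shell utilities (minimal images may not), you can verify the NFS mount from inside the pod:

     kubectl exec my-pod -- df -h /mnt/filestore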

Get More for Your Containers with Cloud Volumes ONTAP

With more and more production workloads adopting microservices-based architectures, choosing the right persistent storage for these containerized workloads is important for a consistent user experience. NetApp Cloud Volumes ONTAP is an enterprise-grade data management solution that can help you here.

Cloud Volumes ONTAP delivers trusted ONTAP data management capabilities with single-pane monitoring and improved storage efficiency across multiple cloud environments. It can be used in GCP to complement the native storage service capabilities, especially for containerized workloads.

Cloud Volumes ONTAP works well with Anthos, the hybrid and multicloud focused Kubernetes solution from Google Cloud. Anthos focuses on delivering a consistent experience for organizations leveraging Kubernetes for their workloads, irrespective of where the cluster is located. Anthos customers can leverage Cloud Volumes ONTAP in their Kubernetes clusters using the Astra Trident provisioner for creating dynamic persistent volumes. Built on a strategic partnership between NetApp and Google, Cloud Volumes ONTAP delivers a fully validated enterprise-class storage solution for containerized workloads. Both GKE on-prem and GKE in the cloud can use Cloud Volumes ONTAP as the storage provider.
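As a rough illustration (a sketch only; the class name is a placeholder and the backendType value depends on how the Trident backend is configured), a StorageClass backed by the Trident provisioner might look like this:

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: ontap-nas-gold
     provisioner: csi.trident.netapp.io
     parameters:
       backendType: ontap-nas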

Cloud Volumes ONTAP delivers a highly-available and resilient storage layer for containers managed through Anthos with built-in storage efficiency and data protection features for persistent volumes. It also enables smooth transfer of data between different Kubernetes clusters in cloud platforms or on-prem through SnapMirror® data replication. These features also help in disaster recovery scenarios through seamless failover and failback between environments so that storage availability for your containerized workloads is not impacted. These capabilities go beyond the average Google Cloud backup.

These features make Cloud Volumes ONTAP a must-have component in your containerized workloads architecture in Google Cloud.

Yifat Perry, Technical Content Manager
