Background
For a variety of reasons, users may wish to mount a persistent volume on two or more pods spanning multiple availability zones. One such use case is to make data stored outside of IRIS available to both mirror members in case of failover.
Unfortunately, the built-in storage classes in most Kubernetes implementations (whether cloud or on-premises) do not provide this capability (a short illustration follows the list below). Typically they:
- Do not support access mode "ReadWriteMany"
- Do not support being mounted on more than one pod at a time
- Do not support access across availability zones
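For example, a claim like the following, which requests ReadWriteMany from a typical persistent-disk-backed default class (the class and claim names here are illustrative, not part of this walkthrough), will generally fail to provision because the underlying disk supports only a single writer:
# Illustrative only: RWX against a pd-backed class such as GKE's default "standard-rwo"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwx-demo-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 10Gi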
However, some Kubernetes add-ons (both provider and third-party) do provide this capability. The one we'll be looking at in this article is Google Filestore.
Overview
In this article we will:
- Create a Kubernetes cluster on GKE (Google Kubernetes Engine)
- Use Google Filestore to create a persistent volume of type ReadWriteMany
- Use IKO to deploy an IRIS failover mirror spanning two availability zones
- Mount the persistent volume on both mirror members
- Demonstrate that both mirror members have read/write access to the volume
Steps
The following steps were all carried out using Google Cloud Shell. Please note that InterSystems is not responsible for any costs incurred in the following examples.
We will be using region "us-east1" and availability zones "us-east1-b" and "us-east1-c".
Create a Network
Google Filestore requires creation of a custom network and subnet:
gcloud compute networks create vpc-multi-region --subnet-mode=custom
Create a Subnet
gcloud compute networks subnets create subnet-us-east1 \
--project=$GOOGLE_CLOUD_PROJECT \
--region=us-east1 \
--network=vpc-multi-region \
--range=172.16.1.0/24 \
--secondary-range=pods=10.0.0.0/16,services=192.168.1.0/24
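Optionally, you can confirm the subnet and its secondary ranges before proceeding (this check is not required for the rest of the walkthrough):
gcloud compute networks subnets describe subnet-us-east1 --region=us-east1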
Enable Google Services
gcloud services enable file.googleapis.com servicenetworking.googleapis.com
Create Addresses for VPC Peering
gcloud compute addresses create google-service-range \
--project=$GOOGLE_CLOUD_PROJECT \
--global \
--purpose=VPC_PEERING \
--prefix-length=20 \
--description="Peering range for Google managed services" \
--network=vpc-multi-region
Create VPC Peering
gcloud services vpc-peerings connect \
--project=$GOOGLE_CLOUD_PROJECT \
--service=servicenetworking.googleapis.com \
--ranges=google-service-range \
--network=vpc-multi-region
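If you wish, verify that the peering connection was established (purely a sanity check):
gcloud services vpc-peerings list --network=vpc-multi-region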
Create Kubernetes Cluster
gcloud container clusters create sample-cluster \
--project=$GOOGLE_CLOUD_PROJECT \
--region us-east1 \
--node-locations us-east1-b,us-east1-c \
--machine-type t2d-standard-2 \
--num-nodes=3 \
--release-channel rapid \
--enable-ip-alias \
--network=vpc-multi-region \
--subnetwork=subnet-us-east1 \
--cluster-secondary-range-name=pods \
--services-secondary-range-name=services \
--shielded-secure-boot \
--shielded-integrity-monitoring \
--addons=GcpFilestoreCsiDriver
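Once the cluster is up, a quick way to confirm that its nodes span both availability zones is to list them with the standard Kubernetes topology label (the label name is standard, not specific to this walkthrough):
kubectl get nodes -L topology.kubernetes.io/zone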
Create a StorageClass
Google provides a number of storage classes, but none of them will work with our custom network, so we must create our own. Add the following to a file named gke-filestore-sc.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gke-filestore-sc
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: ENTERPRISE
  network: vpc-multi-region
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-east1
Note that the value under key "topology.gke.io/zone" is a region, not a zone.
Now create the storage class:
kubectl apply -f gke-filestore-sc.yaml
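You can confirm that the class was registered (optional):
kubectl get storageclass gke-filestore-sc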
Create a PersistentVolumeClaim
Add the following to a file named gke-filestore-pvc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gke-filestore-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: gke-filestore-sc
  resources:
    requests:
      storage: 1Ti
Note that 1Ti is the minimum size allowed.
Now create the persistent volume claim:
kubectl apply -f gke-filestore-pvc.yaml
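Because the storage class uses Immediate binding, Filestore provisioning starts right away; you can watch the claim while it binds (it may remain Pending for several minutes, as discussed below):
kubectl get pvc gke-filestore-pvc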
Install IKO
Download IKO. In the Helm chart, edit the file values.yaml and enable "useIrisFsGroup". This ensures that the volume will be writable by "irisowner":
useIrisFsGroup: true
Install and run IKO:
helm install sample iris_operator_amd-3.8.42.100/chart/iris-operator
See IKO documentation for additional information on how to download and configure IKO.
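Before moving on, you can optionally check that the operator pod is running; the exact pod name depends on the Helm release name (here "sample") and the chart version, but it typically contains "iris-operator":
kubectl get pods | grep iris-operator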
Create an IrisCluster
Add the following to a file named iris-filestore-demo.yaml:
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: sample
spec:
  storageClassName: iris-ssd-storageclass
  licenseKeySecret:
    name: iris-key-secret
  imagePullSecrets:
    - name: dockerhub-secret
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: gke-filestore-pvc
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2025.2
      preferredZones: ["us-east1-b","us-east1-c"]
      mirrored: true
      volumeMounts:
        - name: nfs-volume
          mountPath: "/mnt/nfs"
Notes:
- If we hadn't enabled useIrisFsGroup, we would have to add an init container to make the volume writable by "irisowner" (a sketch of such a container follows these notes)
- The mirror spans both availability zones in our cluster
- See IKO documentation for information on how to configure an IrisCluster
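For reference, here is a minimal sketch of what such an init container could look like, expressed as a generic pod-spec fragment. The container name, the busybox image, and the assumption that "irisowner" maps to UID/GID 51773 are all illustrative, and where exactly the fragment attaches to an IrisCluster depends on how you template the pod:
# Hypothetical fragment: runs as root and hands ownership of the shared mount to irisowner
initContainers:
  - name: fix-nfs-ownership          # illustrative name
    image: busybox:1.36
    command: ["sh", "-c", "chown -R 51773:51773 /mnt/nfs"]   # 51773 assumed to be irisowner's UID/GID
    securityContext:
      runAsUser: 0
    volumeMounts:
      - name: nfs-volume
        mountPath: /mnt/nfs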
Now create the IrisCluster:
kubectl apply -f iris-filestore-demo.yaml
Note that it may take some time for the shared volume to be allocated, and you may see errors such as the following in the events log:
15s Warning ProvisioningFailed PersistentVolumeClaim/nfs-pvc failed to provision volume with StorageClass "gke-filestore-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded
A better way to confirm that allocation is underway is to look directly at the filestore instance:
$ gcloud filestore instances list
INSTANCE_NAME: pvc-6d59f5ce-7d4f-42d0-8a20-2ee3602f8d32
LOCATION: us-east1
TIER: ENTERPRISE
CAPACITY_GB: 1024
FILE_SHARE_NAME: vol1
IP_ADDRESS: 10.110.51.194
STATE: CREATING
CREATE_TIME: 2025-09-09T03:13:07
Note the state of "CREATING"; deployment won't proceed until the state reaches "READY". This may take between five and ten minutes. Soon after that you should see that the IrisCluster is up and running:
$ kubectl get pod,pv,pvc
NAME                  READY   STATUS    RESTARTS   AGE
pod/sample-data-0-0   1/1     Running   0          9m34s
pod/sample-data-0-1   1/1     Running   0          91s

NAME               CAPACITY   ACCESS MODES   STATUS   CLAIM                       STORAGECLASS
pvc-bbdb986fba54   1Ti        RWX            Bound    gke-filestore-pvc           gke-filestore-sc
pvc-9f5cce1010a3   4Gi        RWO            Bound    iris-data-sample-data-0-0   iris-ssd-storageclass
pvc-5e27165fbe5b   4Gi        RWO            Bound    iris-data-sample-data-0-1   iris-ssd-storageclass

NAME                        STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS
gke-filestore-pvc           Bound    pvc-bbdb986fba54   1Ti        RWX            gke-filestore-sc
iris-data-sample-data-0-0   Bound    pvc-9f5cce1010a3   4Gi        RWO            iris-ssd-storageclass
iris-data-sample-data-0-1   Bound    pvc-5e27165fbe5b   4Gi        RWO            iris-ssd-storageclass
We can also (by joining the output of "kubectl get pod" with "kubectl get node") see that the mirror members reside in different availability zones:
sample-data-0-0   gke-sample-cluster-default-pool-a3b1683a-7g77   us-east1-c
sample-data-0-1   gke-sample-cluster-default-pool-010e420a-dw1h   us-east1-b
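One way to produce a listing like the one above is to take the NODE column from "kubectl get pod -o wide" and match it against the zone label on each node (topology.kubernetes.io/zone is the standard Kubernetes label; the join itself is left to your tool of choice):
kubectl get pod -o wide
kubectl get node -L topology.kubernetes.io/zone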
Test the shared volume
We can create files on the shared volume from each pod:
kubectl exec sample-data-0-0 -- touch /mnt/nfs/primary.txt
kubectl exec sample-data-0-1 -- touch /mnt/nfs/backup.txt
And then observe that both files are visible from both pods:
$ kubectl exec sample-data-0-0 -- ls /mnt/nfs
primary.txt
backup.txt
$ kubectl exec sample-data-0-1 -- ls /mnt/nfs
primary.txt
backup.txt
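To show that file contents (not just directory listings) are shared, you can also write from one mirror member and read from the other; the file name here is arbitrary:
kubectl exec sample-data-0-0 -- sh -c 'echo "written by sample-data-0-0" > /mnt/nfs/shared.txt'
kubectl exec sample-data-0-1 -- cat /mnt/nfs/shared.txt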
Cleanup
Delete IrisCluster deployment
kubectl delete -f iris-filestore-demo.yaml --ignore-not-found
helm uninstall sample --ignore-not-found
Delete Persistent Volumes
kubectl delete pvc gke-filestore-pvc iris-data-sample-data-0-0 iris-data-sample-data-0-1 --ignore-not-found
Note that deleting a PersistentVolumeClaim triggers deletion of the corresponding PersistentVolume.
Delete Kubernetes Cluster
gcloud container clusters delete sample-cluster --region us-east1 --quiet
Delete Google Filestore resources
gcloud services vpc-peerings delete --network=vpc-multi-region --quiet
gcloud compute addresses delete google-service-range --global --quiet
gcloud services disable servicenetworking.googleapis.com file.googleapis.com --quiet
gcloud compute networks subnets delete subnet-us-east1 --region=us-east1 --quiet
gcloud compute networks delete vpc-multi-region --quiet
Conclusion
We demonstrated how Google Filestore can be used to mount read/write volumes on pods residing in different availability zones. Several other solutions are available, both for GKE and for other cloud providers. As you can see, their configuration can be highly esoteric and vendor-specific, but once working they can be reliable and effective.
