Background
For a variety of reasons, users may wish to mount a persistent volume on two or more pods spanning multiple availability zones. One such use case is to make data stored outside of IRIS available to both mirror members in case of failover.
Unfortunately, the built-in storage classes in most Kubernetes implementations (whether cloud or on-prem) do not provide this capability. Volumes provisioned from them typically:
- Do not support the "ReadWriteMany" access mode
- Cannot be mounted on more than one pod at a time
- Cannot be accessed across availability zones
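You can check what your cluster offers before reaching for an add-on; a quick way (assuming kubectl access to a cluster) is to list the storage classes together with their provisioners:

```shell
# List all storage classes with their provisioners. Disk-based
# provisioners (e.g. disk.csi.azure.com) generally cannot back
# ReadWriteMany volumes, while file/NFS-based ones often can.
kubectl get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
```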
However, some Kubernetes add-ons (both provider-supplied and third-party) do provide this capability. The one we'll be looking at in this article is Azure Blob Storage.
Overview
In this article we will:
- Create a Kubernetes cluster on AKS (Azure Kubernetes Service)
- Use Azure Blob Storage to create a persistent volume with the "ReadWriteMany" access mode
- Use IKO to deploy an IRIS failover mirror spanning two availability zones
- Mount the persistent volume on both mirror members
- Demonstrate that both mirror members have read/write access to the volume
Steps
The following steps were all carried out using Azure Cloud Shell. Please note that InterSystems is not responsible for any costs incurred in the following examples.
We will be using region "eastus" and availability zones "eastus-2" and "eastus-3".
Create Resource Group
az group create \
--name samplerg \
--location eastus
Create Service Principal
Create a service principal and extract its App ID and client secret for the next step:
SP=$(az ad sp create-for-rbac -o tsv)
APP_ID="$(echo $SP | cut -d' ' -f1)"
CLIENT_SECRET="$(echo $SP | cut -d' ' -f3)"
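For reference, `az ad sp create-for-rbac -o tsv` emits one line containing the app ID, display name, password, and tenant; the unquoted `echo` collapses the tab separators to single spaces, so `cut` picks out fields 1 and 3. A quick way to convince yourself of the parsing, using a made-up sample line:

```shell
# Hypothetical sample line in the same shape as the tsv output:
# appId, displayName, password, tenant
SP='11111111-2222-3333-4444-555555555555 azure-cli-2024 s3cr3tV@lue tenant-id'
APP_ID="$(echo $SP | cut -d' ' -f1)"        # field 1: app id
CLIENT_SECRET="$(echo $SP | cut -d' ' -f3)" # field 3: client secret
echo "$APP_ID"
echo "$CLIENT_SECRET"
```

Note that this parsing assumes the display name contains no spaces, which holds for the default generated name.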
Create Kubernetes Cluster
az aks create \
    --resource-group samplerg \
    --name sample \
    --node-count 6 \
    --zones 2 3 \
    --generate-ssh-keys \
    --service-principal $APP_ID \
    --client-secret $CLIENT_SECRET \
    --kubernetes-version 1.33.2 \
    --enable-blob-driver
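One step worth noting: kubectl needs credentials for the new cluster before any of the commands below will work. With the az CLI that's:

```shell
# Merge the new cluster's credentials into your kubeconfig
az aks get-credentials --resource-group samplerg --name sample

# Sanity check: all six nodes should report Ready
kubectl get nodes
```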
Create a PersistentVolumeClaim
Add the following to a file named azure-blob-pvc.yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-blob-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azureblob-nfs-premium
  resources:
    requests:
      storage: 5Gi
Now create the persistent volume claim:
kubectl apply -f azure-blob-pvc.yaml
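Dynamic provisioning takes a moment; you can block until the claim binds (a sketch, assuming kubectl 1.23+ for jsonpath-based waits):

```shell
# Wait up to two minutes for the blob CSI driver to provision
# a volume and bind it to the claim
kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/azure-blob-pvc --timeout=120s
```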
Install IKO
Install and run IKO:
helm install sample iris_operator_amd-3.8.42.100/chart/iris-operator
See IKO documentation for additional information on how to download and configure IKO.
Create an IrisCluster
Add the following to a file named iris-azureblob-demo.yaml:
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: sample
spec:
  storageClassName: iris-ssd-storageclass
  licenseKeySecret:
    name: iris-key-secret
  imagePullSecrets:
    - name: dockerhub-secret
  volumes:
    - name: nfs-volume
      persistentVolumeClaim:
        claimName: azure-blob-pvc
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2025.2
      preferredZones: ["eastus-2","eastus-3"]
      mirrored: true
      volumeMounts:
        - name: nfs-volume
          mountPath: "/mnt/nfs"
Notes:
- The mirror spans both availability zones in our cluster
- See IKO documentation for information on how to configure an IrisCluster
Now create the IrisCluster:
kubectl apply -f iris-azureblob-demo.yaml
Soon afterward you should see that the IrisCluster is up and running:
$ kubectl get pod,pv,pvc
NAME                  READY   STATUS    RESTARTS   AGE
pod/sample-data-0-0   1/1     Running   0          9m34s
pod/sample-data-0-1   1/1     Running   0          91s

NAME               CAPACITY   ACCESS MODES   STATUS   CLAIM                       STORAGECLASS
pvc-bbdb986fba54   5Gi        RWX            Bound    azure-blob-pvc              azureblob-nfs-premium
pvc-9f5cce1010a3   4Gi        RWO            Bound    iris-data-sample-data-0-0   iris-ssd-storageclass
pvc-5e27165fbe5b   4Gi        RWO            Bound    iris-data-sample-data-0-1   iris-ssd-storageclass

NAME                        STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS
azure-blob-pvc              Bound    pvc-bbdb986fba54   5Gi        RWX            azureblob-nfs-premium
iris-data-sample-data-0-0   Bound    pvc-9f5cce1010a3   4Gi        RWO            iris-ssd-storageclass
iris-data-sample-data-0-1   Bound    pvc-5e27165fbe5b   4Gi        RWO            iris-ssd-storageclass
We can also (by joining the output of "kubectl get pod" with "kubectl get node") see that the mirror members reside in different availability zones:
sample-data-0-0 aks-nodepool1-10664034-vmss000001 eastus-2
sample-data-0-1 aks-nodepool1-10664034-vmss000002 eastus-3
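The mapping above can be produced from two commands whose output you join on the node name (the zone label shown is the standard well-known topology label):

```shell
# Which node is each pod scheduled on?
kubectl get pod -o wide

# Which availability zone is each node in?
kubectl get node -L topology.kubernetes.io/zone
```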
Test the shared volume
We can create files on the shared volume on each pod:
kubectl exec sample-data-0-0 -- touch /mnt/nfs/primary.txt
kubectl exec sample-data-0-1 -- touch /mnt/nfs/backup.txt
And then observe that files are visible from both pods:
$ kubectl exec sample-data-0-0 -- ls /mnt/nfs
backup.txt
primary.txt
$ kubectl exec sample-data-0-1 -- ls /mnt/nfs
backup.txt
primary.txt
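Beyond filename visibility, we can also verify that file contents written by one mirror member are readable from the other:

```shell
# Write on the first member, read back from the second
kubectl exec sample-data-0-0 -- sh -c 'echo hello > /mnt/nfs/note.txt'
kubectl exec sample-data-0-1 -- cat /mnt/nfs/note.txt
```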
Cleanup
Delete IrisCluster deployment
kubectl delete -f iris-azureblob-demo.yaml --ignore-not-found
helm uninstall sample --ignore-not-found
Delete Persistent Volume Claims
kubectl delete pvc azure-blob-pvc iris-data-sample-data-0-0 iris-data-sample-data-0-1 --ignore-not-found
Note that deleting a PersistentVolumeClaim triggers deletion of the corresponding PersistentVolume when the storage class's reclaim policy is Delete (the default for dynamically provisioned volumes).
Delete Kubernetes Cluster
az aks delete --resource-group samplerg --name sample --yes
Delete Resource Group
az group delete --name samplerg --no-wait --yes
Conclusion
We demonstrated how Azure Blob Storage can be used to mount read/write volumes on pods residing in different availability zones. Several other solutions are available, both for AKS and for other cloud providers. As you can see, their configuration can be esoteric and vendor-specific, but once working they can be reliable and effective.