
Article · Sep 21, 2025 · 5 min read

Share volumes across pods and zones on AKS

Background

For a variety of reasons, users may wish to mount a persistent volume on two or more pods spanning multiple availability zones. One such use case is to make data stored outside of IRIS available to both mirror members in case of failover.

Unfortunately, the built-in storage classes in most Kubernetes implementations (whether cloud or on-premises) do not provide this capability. A typical built-in storage class:

  • Does not support the "ReadWriteMany" access mode
  • Cannot be mounted on more than one pod at a time
  • Does not support access across availability zones

However, some Kubernetes add-ons (both provider-supplied and third-party) do provide this capability. The one we'll be looking at in this article is Azure Blob Storage.

Overview

In this article we will:

  • Create a Kubernetes cluster on AKS (Azure Kubernetes Service)
  • Use Azure Blob Storage to create a persistent volume of type ReadWriteMany
  • Use IKO to deploy an IRIS failover mirror spanning two availability zones
  • Mount the persistent volume on both mirror members
  • Demonstrate that both mirror members have read/write access to the volume

Steps

The following steps were all carried out using Azure Cloud Shell. Please note that InterSystems is not responsible for any costs incurred in the following examples.

We will be using region "eastus" and availability zones "eastus-2" and "eastus-3".

Create Resource Group

az group create \
   --name samplerg \
   --location eastus

Create Service Principal

We extract the App ID and client secret for the next call. The TSV output of "az ad sp create-for-rbac" contains, in order: appId, displayName, password, and tenant; the unquoted echo collapses the tab separators to spaces, so cut -d' ' works:

SP=$(az ad sp create-for-rbac -o tsv)
APP_ID="$(echo $SP | cut -d' ' -f1)"        # appId
CLIENT_SECRET="$(echo $SP | cut -d' ' -f3)" # password

Create Kubernetes Cluster

az aks create \
   --resource-group samplerg \
   --name sample \
   --node-count 6 \
   --zones 2 3 \
   --generate-ssh-keys \
   --service-principal $APP_ID \
   --client-secret $CLIENT_SECRET \
   --kubernetes-version 1.33.2 \
   --enable-blob-driver
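Before kubectl can talk to the new cluster, merge its credentials into your kubeconfig (the resource group and cluster names match the ones created above):

```shell
# Merge credentials for the "sample" cluster into ~/.kube/config
az aks get-credentials --resource-group samplerg --name sample

# Sanity check: the six nodes should be spread across zones 2 and 3
kubectl get nodes -L topology.kubernetes.io/zone
```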

Create a PersistentVolumeClaim

Add the following to a file named azure-blob-pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-blob-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azureblob-nfs-premium
  resources:
    requests:
      storage: 5Gi

Now create the persistent volume claim:

kubectl apply -f azure-blob-pvc.yaml
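Optionally, verify the claim before moving on; once bound it should report the RWX (ReadWriteMany) access mode and the "azureblob-nfs-premium" storage class:

```shell
kubectl get pvc azure-blob-pvc
```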

Install IKO

Install and run IKO:

helm install sample iris_operator_amd-3.8.42.100/chart/iris-operator

See IKO documentation for additional information on how to download and configure IKO.

Create an IrisCluster

Add the following to a file named iris-azureblob-demo.yaml:

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: sample
spec:
  storageClassName: iris-ssd-storageclass
  licenseKeySecret:
    name: iris-key-secret
  imagePullSecrets:
    - name: dockerhub-secret
  volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: azure-blob-pvc
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2025.2
      preferredZones: ["eastus-2","eastus-3"]
      mirrored: true
      volumeMounts:
      - name: nfs-volume
        mountPath: "/mnt/nfs"

Notes:

  • The mirror spans both availability zones in our cluster
  • See IKO documentation for information on how to configure an IrisCluster

Now create the IrisCluster:

kubectl apply -f iris-azureblob-demo.yaml
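While the mirror members start up, you can watch progress. A sketch; "iriscluster" works as a resource name because IKO registers the IrisCluster custom resource definition:

```shell
# High-level status of the deployment
kubectl get iriscluster sample

# Watch the pods come up; Ctrl-C to stop
kubectl get pods -w
```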

Soon after, you should see that the IrisCluster is up and running:

$ kubectl get pod,pv,pvc
NAME                  READY   STATUS    RESTARTS   AGE
pod/sample-data-0-0   1/1     Running   0          9m34s
pod/sample-data-0-1   1/1     Running   0          91s

NAME                                CAPACITY   ACCESS MODES   STATUS   CLAIM                       STORAGECLASS
persistentvolume/pvc-bbdb986fba54   5Gi        RWX            Bound    azure-blob-pvc              azureblob-nfs-premium
persistentvolume/pvc-9f5cce1010a3   4Gi        RWO            Bound    iris-data-sample-data-0-0   iris-ssd-storageclass
persistentvolume/pvc-5e27165fbe5b   4Gi        RWO            Bound    iris-data-sample-data-0-1   iris-ssd-storageclass

NAME                                              STATUS   VOLUME             CAPACITY   ACCESS MODES   STORAGECLASS
persistentvolumeclaim/azure-blob-pvc              Bound    pvc-bbdb986fba54   5Gi        RWX            azureblob-nfs-premium
persistentvolumeclaim/iris-data-sample-data-0-0   Bound    pvc-9f5cce1010a3   4Gi        RWO            iris-ssd-storageclass
persistentvolumeclaim/iris-data-sample-data-0-1   Bound    pvc-5e27165fbe5b   4Gi        RWO            iris-ssd-storageclass

We can also (by joining the output of "kubectl get pod" with "kubectl get node") see that the mirror members reside in different availability zones:

sample-data-0-0 aks-nodepool1-10664034-vmss000001 eastus-2
sample-data-0-1 aks-nodepool1-10664034-vmss000002 eastus-3
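One way to produce a joined view like the one above is a short loop over the data pods, relying on the standard topology.kubernetes.io/zone node label:

```shell
# For each data pod: print pod name, node name, and the node's availability zone
for p in $(kubectl get pods -o name | grep sample-data); do
  node=$(kubectl get "$p" -o jsonpath='{.spec.nodeName}')
  zone=$(kubectl get node "$node" -o jsonpath='{.metadata.labels.topology\.kubernetes\.io/zone}')
  echo "${p#pod/} $node $zone"
done
```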

Test the shared volume

We can create files on the shared volume on each pod:

kubectl exec sample-data-0-0 -- touch /mnt/nfs/primary.txt
kubectl exec sample-data-0-1 -- touch /mnt/nfs/backup.txt

And then observe that the files are visible from both pods:

$ kubectl exec sample-data-0-0 -- ls /mnt/nfs
backup.txt
primary.txt
$ kubectl exec sample-data-0-1 -- ls /mnt/nfs
backup.txt
primary.txt
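To confirm the members really share one volume (rather than each seeing a private copy), write data on one member and read it back on the other; "shared.txt" here is just an illustrative file name:

```shell
kubectl exec sample-data-0-0 -- sh -c 'echo "written by primary" > /mnt/nfs/shared.txt'
kubectl exec sample-data-0-1 -- cat /mnt/nfs/shared.txt   # prints: written by primary
```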

Cleanup

Delete IrisCluster deployment

kubectl delete -f iris-azureblob-demo.yaml --ignore-not-found
helm uninstall sample --ignore-not-found

Delete Persistent Volumes

kubectl delete pvc azure-blob-pvc iris-data-sample-data-0-0 iris-data-sample-data-0-1 --ignore-not-found

Note that deleting a PersistentVolumeClaim also deletes the corresponding PersistentVolume when the storage class's reclaim policy is Delete (the default for the classes used here).

Delete Kubernetes Cluster

az aks delete --resource-group samplerg --name sample --yes

Delete Resource Group

az group delete --name samplerg --no-wait --yes

Conclusion

We demonstrated how Azure Blob Storage can be used to mount read/write volumes on pods residing in different availability zones. Several other solutions are available, both for AKS and for other cloud providers. As you can see, their configuration can be highly esoteric and vendor-specific, but once working they can be reliable and effective.
