
Article · Sep 23 · 2 min read

How to keep your InterSystems IRIS Mirror configurations in sync

Following on from JediSoft’s announcement of the general availability of JediSoft IRISsync®, I wanted to show how it can help prevent configuration drift and ensure your failover is always ready. 

When managing InterSystems IRIS production servers, even a minor configuration change can cause significant issues if it’s not replicated in your mirror environments. Often, these differences go unnoticed until your failover environment breaks.

This common, but critical, problem can lead to unexpected downtime at a vital moment and impact your business continuity.

IRISsync eliminates that risk by ensuring your primary and failover IRIS environments are fully aligned. It compares configuration settings across multiple servers, pinpointing differences right down to the parameter level. IRISsync is an active system configuration tool that saves time, eliminates tedious manual checking, and ensures your failover environments are always ready.
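
For a sense of what this automates: without a dedicated tool, a parameter-level check typically means pulling the iris.cpf configuration file from each mirror member and comparing the copies by hand, then repeating the exercise after every change. A minimal sketch of that manual approach (host names and install paths are illustrative; this is not IRISsync functionality):

# Copy the active configuration file from each mirror member (paths are examples)
scp primary:/usr/irissys/iris.cpf ./primary-iris.cpf
scp failover:/usr/irissys/iris.cpf ./failover-iris.cpf

# Compare them parameter by parameter
diff ./primary-iris.cpf ./failover-iris.cpf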

Key benefits of JediSoft IRISsync®:

  • Instantly detect configuration drift before it causes downtime – IRISsync alerts you to any deviations between your production and failover servers, helping you catch issues early and avoid outages.
  • Verify synchronization before a planned failover – before testing your failover environment, use IRISsync to confirm all settings are aligned, ensuring a smooth switchover.
  • Run regularly for ongoing assurance – incorporate IRISsync into your recurring maintenance tasks to provide confidence that your failover will work at critical moments.

Watch this short clip to see how IRISsync works:

 

Confidence in every IRIS failover

Manual configuration checks are time-consuming and error-prone, and mistakes can cause issues in the event of an unplanned failover. IRISsync prevents configuration drift by providing a clear and detailed comparison of configuration settings, ensuring your failover is always up to date and ready to run.

Schedule a demo

To find out how IRISsync can keep your InterSystems IRIS mirror configurations aligned, contact us to schedule a demo.

George James Software is an authorized reseller of JediSoft IRISsync®. For more information, including pricing, please visit our website.

Article · Sep 23 · 2 min read

How to get InterSystems IRIS Community Edition

Hi everyone! I only recently joined InterSystems, but I've noticed that even though we offer an excellent, completely free Community Edition, it isn't always clear how to get it. So I decided to write a guide covering all the different ways to obtain InterSystems IRIS Community Edition:

Getting InterSystems IRIS Community Edition as a container

If you are new to InterSystems IRIS development, I recommend a containerized instance of the Community Edition; in my view, it is the simplest and most direct option. InterSystems IRIS Community Edition is available on DockerHub and, if you have an InterSystems SSO account, also from the InterSystems Container Registry.

In either case, pull the required image with the docker CLI:

docker pull intersystems/iris-community:latest-em
# or
docker pull containers.intersystems.com/intersystems/iris-community:latest-em

Next, start the container. To interact with IRIS from outside the container (for example, via the Management Portal), you need to publish some ports. The following command runs the IRIS Community Edition container and publishes the superserver and web server ports; note that no other program using port 1972 or 52773 can be running at the same time!

docker run --name iris -d --publish 1972:1972 --publish 52773:52773 intersystems/iris-community:latest-em
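
Once the container is running, you can open an interactive IRIS session inside it or use the Management Portal through the published web server port. For example (assuming the container name and port mapping from the command above):

# Open an IRIS terminal session inside the running container
docker exec -it iris iris session IRIS

# The Management Portal is then available in a browser at:
# http://localhost:52773/csp/sys/UtilHome.csp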

Article · Sep 23 · 8 min read

How free memory is represented on a Linux database server

My customers regularly contact me about memory sizing when they receive alerts that free memory has fallen below a threshold, or when they notice that free memory has suddenly dropped. Is there a problem? Will their application stop working because it lacks the memory to run system and application processes? The answer is almost always no, there is no cause for concern. But that simple answer is usually not enough. So what is going on?
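
A large part of the answer can be seen with the free command: Linux deliberately uses otherwise idle memory for the page cache, so the "free" column shrinks over time even though that memory can be reclaimed on demand, and the "available" column is the better indicator of headroom. A minimal illustration (the figures below are invented for the example):

$ free -h
               total        used        free      shared  buff/cache   available
Mem:            62Gi        20Gi       1.0Gi       2.0Gi        41Gi        39Gi
Swap:          8.0Gi          0B       8.0Gi

# "free" looks alarmingly small because the kernel keeps file data in buff/cache;
# "available" estimates how much memory can still be allocated without swapping.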

Announcement · Sep 23

Second French Technical Article Contest

Hi Community!

We are excited to announce the new French technical article writing contest!

✍️ Technical Article Contest ✍️

This is the perfect opportunity for all InterSystems technology enthusiasts to share their knowledge and showcase their writing skills. No matter your experience level, everyone is welcome to participate. Articles can cover a wide range of topics, from technical implementation to your impressions and feedback on using InterSystems products or services. So let your creativity and expertise run wild!

📅 Contest period: September 15 - November 2, 2025

🎁 Gifts for all: a special gift is prepared for each participant!

🏅 Prizes for the authors of the best articles


Prizes

1. Participation gift - authors of all articles will receive a surprise gift 🎁

2. Experts prize - InterSystems experts will select the 3 best articles, and their authors will receive one of the following:

🥇 1st place : Apple Watch SE / GOOGLE Nest Hub Max with Google Assistant

🥈 2nd place : SteelSeries Arctis Nova 7X / GARMIN DriveSmart 66 6” Sat Nav

🥉 3rd place : INSTAX mini 12 Instant Camera / DJI Osmo Mobile 7 Smartphone Gimbal

Alternatively, any winner can choose a prize from a lower tier than their own.

3. Developer Community prize - the author of the article with the most likes will receive one of the following:

🎁 INSTAX mini 12 Instant Camera / DJI Osmo Mobile 7 Smartphone Gimbal

Note: an author can be awarded only once per category (at most two prizes in total: one Experts prize and one Community prize).

Who can participate?

Any Developer Community member, including InterSystems employees. Create an account!

Contest period

📝  15 September - 26 October: Publication of articles. 

📝  27 October - 2 November: Voting period.

📝 3 November: Winners announcement.

Publish your article(s) at any time during this period. Developer Community members can vote for published articles with Likes – these votes count toward the Community prize.

Note: The sooner you publish the article(s), the more time you will have to collect Community votes.

What are the requirements? 

❗️ Any article written during the contest period and satisfying the requirements below will automatically enter the contest:

  • Article must be in French and published on the Communauté des developpeurs francophone.
  • The article must be 100% new (it can be a continuation of an existing article not in the contest).  
  • The article cannot be a translation of an article already published in other communities.  
  • Article size: 300 words minimum (links and code are not counted towards the minimum).
  • A single author may submit multiple articles if they deal with different topics.
  • Different authors may submit articles on the same topic.
  • Multiple authors may co-author an article.

🎯 EXTRA BONUSES 

Experts award 3 points to the article they consider the best, 2 points to the 2nd best, and 1 to the 3rd best. Additionally, articles can receive more points based on the following bonuses:

  • Topic bonus (3 points) – Write an article about one of the proposed topics
  • Video bonus – Besides publishing the article, make an explanatory video
  • Feedback bonus (2 points) – Present an implementation of IRIS in a project
  • Code examples bonus (1 point) – Illustrate your article with code examples
  • Code upload bonus (1 point) – Upload code examples to Open Exchange or another open-source platform

Proposed topics

Here is a list of suggested topics that will give your article an extra topic bonus (3 points):

✔️ Python
✔️ FHIR
✔️ IA (Vector Search, ML, GenAI, RAG, AgenticAI)
✔️ VSCode
✔️ CI/CD
✔️ Data (Warehouse / Lake / Time Series)
✔️ HealthCare
✔️ Supply Chain

We look forward to reading your contributions. Get typing! Happy writing & Good luck!


Important note: Delivery of prizes varies by country and may not be possible in some of them. A list of countries with restrictions can be requested from @Liubka Zelenskaia.

Judges reserve the right to refuse an article if it misuses LLMs.

Article · Sep 22 · 10 min read

Share volumes across pods and zones on EKS

Background

For a variety of reasons, users may wish to mount a persistent volume on two or more pods spanning multiple availability zones. One such use case is to make data stored outside of IRIS available to both mirror members in case of failover.

Unfortunately, the built-in storage classes in most Kubernetes implementations (whether cloud or on-prem) do not provide this capability:

  • They do not support the "ReadWriteMany" access mode
  • They do not support being mounted on more than one pod at a time
  • They do not support access across availability zones

However, some Kubernetes add-ons (both provider and third-party) do provide this capability. The one we'll be looking at in this article is Amazon Elastic File System (EFS).

Overview

In this article we will:

  • Create a Kubernetes cluster on Amazon EKS (Elastic Kubernetes Service)
  • Use EFS to create a persistent volume of type ReadWriteMany
  • Use IKO to deploy an IRIS failover mirror spanning two availability zones
  • Mount the persistent volume on both mirror members
  • Demonstrate that both mirror members have read/write access to the volume

Steps

The following steps were all carried out using AWS CloudShell. Please note that InterSystems is not responsible for any costs incurred in the following examples.

We will be using region "us-east-2" and availability zones "us-east-2b" and "us-east-2c".

Create Kubernetes Cluster

export AWS_REGION=us-east-2
export CLUSTER=sample

eksctl create cluster \
  --name $CLUSTER \
  --region $AWS_REGION \
  --zones us-east-2b,us-east-2c \
  --node-type m5.2xlarge \
  --nodes 3

Configure EBS and EFS

export AWS_ID=$(aws sts get-caller-identity --query Account --output text)
export EBS_ROLE=AmazonEKS_EBS_CSI_DriverRole_$CLUSTER
export EFS_ROLE=AmazonEKS_EFS_CSI_DriverRole_$CLUSTER

eksctl utils associate-iam-oidc-provider \
  --cluster $CLUSTER  \
  --region $AWS_REGION \
  --approve

aws eks create-addon \
  --addon-name aws-ebs-csi-driver \
  --cluster-name $CLUSTER \
  --region $AWS_REGION \
  --service-account-role-arn arn:aws:iam::${AWS_ID}:role/${EBS_ROLE} \
  --configuration-values '{"defaultStorageClass":{"enabled":true}}'

eksctl create addon \
  --name aws-efs-csi-driver \
  --cluster $CLUSTER \
  --region=$AWS_REGION \
  --service-account-role-arn arn:aws:iam::$AWS_ID:role/$EFS_ROLE \
  --force

eksctl create addon \
  --name=eks-pod-identity-agent \
  --cluster=$CLUSTER

export ADDONS=$(aws eks list-addons --cluster-name $CLUSTER --query addons[] --output text)
for ADDON in $ADDONS; do
  eksctl update addon \
    --name $ADDON \
    --cluster $CLUSTER \
    --region $AWS_REGION
  done

eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster $CLUSTER \
  --region $AWS_REGION \
  --role-name $EBS_ROLE \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --approve \
  --override-existing-serviceaccounts

eksctl create iamserviceaccount \
  --name efs-csi-controller-sa \
  --namespace kube-system \
  --cluster $CLUSTER \
  --region $AWS_REGION \
  --role-name $EFS_ROLE \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
  --approve \
  --override-existing-serviceaccounts

Configure Security and Ingress

We create a Security Group and configure ingress to EFS port 2049 (NFS):

export VPC_ID=$(aws eks describe-cluster --name $CLUSTER --query "cluster.resourcesVpcConfig.vpcId" --output text)
export SG=$(aws ec2 create-security-group \
              --description efs-sample-sg \
              --group-name efs-sg \
              --vpc-id $VPC_ID \
              --query "GroupId" \
              --output text)

export VPC_CIDR=$(aws ec2 describe-vpcs --vpc-ids $VPC_ID --query "Vpcs[].CidrBlock" --output text)

aws ec2 authorize-security-group-ingress \
  --group-id $SG \
  --protocol tcp \
  --port 2049 \
  --cidr $VPC_CIDR

Create a File System

The File System routes traffic from the PersistentVolume in each zone to the shared file store.

export FS_ID=$(aws efs create-file-system \
                  --region $AWS_REGION \
                  --performance-mode generalPurpose \
                  --query 'FileSystemId' \
                  --output text)

Each File System needs an Access Point.  We set user and group to 51773 ("irisowner") and provide access to the entire volume ("/").  Note that changing ownership requires root access by EFS ("Uid=0,Gid=0"):

export ACCESS_POINT=$(aws efs create-access-point \
                        --file-system-id $FS_ID \
                        --root-directory "Path=/,CreationInfo={OwnerUid=51773,OwnerGid=51773,Permissions=777}" \
                        --posix-user "Uid=0,Gid=0" \
                        --tags Key=Name,Value=east-users \
                        --query "AccessPointId" \
                        --output text)

Each File System also needs a Mount Target in the subnet of each availability zone.   Each Mount Target has an IP address that routes to the local PersistentVolume:

export SUBNET_IDS=$(aws eks describe-cluster --name $CLUSTER --query "cluster.resourcesVpcConfig.subnetIds" --output text)
for SUBNET_ID in $SUBNET_IDS; do
  aws efs create-mount-target \
    --file-system-id $FS_ID \
    --subnet-id $SUBNET_ID \
    --security-group $SG
done
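
Mount Targets take a little while to become available. If you want to confirm they are ready before continuing, one way is to check their lifecycle state:

aws efs describe-mount-targets \
  --file-system-id $FS_ID \
  --query "MountTargets[].LifeCycleState" \
  --output text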

Create a StorageClass

Add the following to a file named "efs-sc.yaml":

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com

Now create the storage class:

kubectl apply -f efs-sc.yaml

Create a PersistentVolume

Determine the Volume Handle for the File System:

echo $FS_ID::$ACCESS_POINT
fs-0e67f9ac9a3ba51cd::fsap-02c3ed5dc9233394f    // <-- example only, do not use

Add the following to a file named "efs-pv.yaml".  Replace the volumeHandle field below with your own:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-0e67f9ac9a3ba51cd::fsap-02c3ed5dc9233394f
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  volumeMode: Filesystem

Now create the persistent volume:

kubectl apply -f efs-pv.yaml

Create a PersistentVolumeClaim

Add the following to a file named "efs-pvc.yaml":

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

Now create the persistent volume claim:

kubectl apply -f efs-pvc.yaml

Install IKO

Install and run IKO:

helm install sample iris_operator_amd-3.8.42.100/chart/iris-operator

See IKO documentation for additional information on how to download and configure IKO.
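
Before moving on, it can be worth confirming that the operator pod is running and that the IrisCluster CRD has been registered (exact resource names may vary slightly by IKO version):

kubectl get pods | grep iris-operator
kubectl get crds | grep -i iriscluster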

Create an IrisCluster

Add the following to a file named iris-efs-demo.yaml:

apiVersion: intersystems.com/v1beta1
kind: IrisCluster
metadata:
  name: sample
spec:
  storageClassName: iris-ssd-storageclass
  licenseKeySecret:
    name: iris-key-secret
  imagePullSecrets:
    - name: dockerhub-secret
  volumes:
  - name: efs-volume
    persistentVolumeClaim:
      claimName: efs-pvc
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2025.2
      preferredZones: ["us-east-2b","us-east-2c"]
      mirrored: true
      volumeMounts:
      - name: efs-volume
        mountPath: "/mnt/nfs"

Notes:

  • The mirror spans both availability zones in our cluster
  • See IKO documentation for information on how to configure an IrisCluster
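
The manifest also assumes that the iris-key-secret and dockerhub-secret secrets already exist. A sketch of how they might be created, with the license file path and registry credentials as placeholders:

# License key secret referenced by licenseKeySecret (iris.key path is a placeholder)
kubectl create secret generic iris-key-secret --from-file=iris.key

# Image pull secret referenced by imagePullSecrets (substitute your own credentials)
kubectl create secret docker-registry dockerhub-secret \
  --docker-server=containers.intersystems.com \
  --docker-username=<username> \
  --docker-password=<password>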

Now create the IrisCluster:

kubectl apply -f iris-efs-demo.yaml

Soon after that you should see the IrisCluster is up and running:

$ kubectl get pod,pv,pvc
NAME                 READY  STATUS   RESTARTS  AGE
pod/sample-data-0-0  1/1    Running  0         9m34s
pod/sample-data-0-1  1/1    Running  0         91s
NAME              CAPACITY  ACCESS MODES  STATUS   CLAIM                      STORAGECLASS
pvc-bbdb986fba54   5Gi       RWX           Bound    efs-pvc                    efs-sc
pvc-9f5cce1010a3   4Gi       RWO           Bound    iris-data-sample-data-0-0  iris-ssd-storageclass
pvc-5e27165fbe5b   4Gi       RWO           Bound    iris-data-sample-data-0-1  iris-ssd-storageclass
NAME                      STATUS  VOLUME            CAPACITY  ACCESS MODES  STORAGECLASS            
efs-pvc                    Bound   pvc-bbdb986fba54  5Gi       RWX           efs-sc
iris-data-sample-data-0-0  Bound   pvc-9f5cce1010a3  4Gi       RWO           iris-ssd-storageclass
iris-data-sample-data-0-1  Bound   pvc-5e27165fbe5b  4Gi       RWO           iris-ssd-storageclass

We can also (by joining the output of "kubectl get pod" with "kubectl get node") see that the mirror members reside in different availability zones:

sample-data-0-0 ip-192-168-18-38.us-east-2.compute.internal   us-east-2b
sample-data-0-1 ip-192-168-52-17.us-east-2.compute.internal   us-east-2c
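
One way to produce that mapping yourself is to combine the node column from "kubectl get pod -o wide" with the standard topology zone label on each node:

kubectl get pod -o wide
kubectl get node -L topology.kubernetes.io/zone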

Test the shared volume

We can create files on the shared volume on each pod:

kubectl exec sample-data-0-0 -- touch /mnt/nfs/primary.txt
kubectl exec sample-data-0-1 -- touch /mnt/nfs/backup.txt

And then observe that files are visible from both pods:

$ kubectl exec sample-data-0-0 -- ls /mnt/nfs
primary.txt
backup.txt
$ kubectl exec sample-data-0-1 -- ls /mnt/nfs
primary.txt
backup.txt

Cleanup

Delete IrisCluster deployment

kubectl delete -f iris-efs-demo.yaml --ignore-not-found
helm uninstall sample --ignore-not-found

Delete Persistent Volumes

kubectl delete pvc efs-pvc iris-data-sample-data-0-0 iris-data-sample-data-0-1 --ignore-not-found

Note that deleting a PersistentVolumeClaim triggers deletion of the corresponding PersistentVolume only when the volume's reclaim policy is Delete, which is the usual default for dynamically provisioned volumes such as the iris-data ones. The statically created efs-pv uses a Retain policy, so it is left in the Released state rather than deleted.
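
To remove the retained volume explicitly:

kubectl delete pv efs-pv --ignore-not-found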

Delete EFS resources

export ACCESS_POINTS=$(aws efs describe-access-points --file-system-id $FS_ID --query "AccessPoints[].AccessPointId" --output text)
for ACCESS_POINT in $ACCESS_POINTS; do
    aws efs delete-access-point \
        --access-point-id $ACCESS_POINT
done

export MOUNT_TARGETS=$(aws efs describe-mount-targets --file-system-id $FS_ID --query "MountTargets[].MountTargetId" --output text)
for MOUNT_TARGET in $MOUNT_TARGETS; do
    aws efs delete-mount-target \
        --mount-target-id $MOUNT_TARGET
done

aws efs delete-file-system --file-system-id $FS_ID

Delete more resources

aws iam detach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --role-name $EBS_ROLE

aws iam delete-role \
    --role-name $EBS_ROLE


aws iam detach-role-policy \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
  --role-name $EFS_ROLE

aws iam delete-role \
    --role-name $EFS_ROLE

ADDONS=$(eksctl get addon --cluster $CLUSTER --region $AWS_REGION --output json | grep Name | cut -d '"' -f4 | xargs echo)
for ADDON in $ADDONS; do
  aws eks delete-addon \
    --addon-name $ADDON \
    --cluster-name $CLUSTER
  done

aws ec2 revoke-security-group-ingress \
  --group-id $SG \
  --protocol tcp \
  --port 2049 \
  --cidr $VPC_CIDR

aws ec2 delete-security-group \
  --group-id $SG

Delete Kubernetes Cluster

eksctl delete cluster --name $CLUSTER

Conclusion

We demonstrated how Amazon EFS can be used to mount read/write volumes on pods residing in different availability zones. Several other solutions are available, both for AWS and for other cloud providers. As you can see, their configuration can be highly esoteric and vendor-specific, but once working they can be reliable and effective.
