Question
· 15 hr ago

Creating a lookup table with Data from ADT message to reference later in business rule

I have a vendor that only wants results on patients who arrive at the ED via EMS/ambulance. The value I need for filtering is in PV2:38. Some of the requested results do not allow the PV2 segment to be added to the schema in the EMR. I was told that other orgs have used a lookup table that is populated with the PV1:19 value when an ADT message that meets the criteria comes in. This table is then referenced in the business rule for the results that do not have PV2:38, and if the encounter number from the result message exists in the table, the result is sent. Has anyone done this before?
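
Here is roughly what I had in mind, sketched out (the class name, lookup table name, and arrival-mode codes below are placeholders I made up for illustration):

Class ZCustom.EDArrival.Utils Extends Ens.Rule.FunctionSet
{

/// Called from the ADT route (e.g. a code action or DTL) to record the encounter
/// number (PV1:19) in the "EDArrivalEMS" lookup table when PV2:38 indicates an
/// EMS/ambulance arrival. Adjust the virtual-property paths to match your DocType.
ClassMethod RecordEMSArrival(pADT As EnsLib.HL7.Message) As %Status
{
    set tEncounter   = pADT.GetValueAt("PV1:19")
    set tArrivalMode = pADT.GetValueAt("PV2:38")
    quit:(tEncounter="") $$$OK
    // "EMS" / "AMB" are placeholder codes - use whatever your ADT feed actually sends
    if (tArrivalMode="EMS") || (tArrivalMode="AMB") {
        // Interoperability lookup tables are stored in ^Ens.LookupTable(table,key)=value
        set ^Ens.LookupTable("EDArrivalEMS",tEncounter) = tArrivalMode
    }
    quit $$$OK
}

/// Returns 1 if the encounter was previously recorded as an EMS arrival.
/// Because the class extends Ens.Rule.FunctionSet, this shows up as a custom
/// function in the routing-rule expression editor, e.g. IsEMSArrival(HL7.{PV1:19})
ClassMethod IsEMSArrival(pEncounter As %String) As %Boolean
{
    quit (##class(Ens.Util.FunctionSet).Lookup("EDArrivalEMS",pEncounter)'="")
}

}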

Documentation I found suggests copying EnsLib.HL7.SearchTable.  Is this the correct route to take?

Appreciate any feedback or instructions.  

Thanks!

Gigi La Course

Article
· 16 hr ago · 3 min read

How to use the FreeSpace query of the SYS.Database class to check the available space on the disk where the database resides

InterSystems FAQ rubric

You can check the available disk space at any time using the system utility class SYS.Database and its FreeSpace query.

Here is how to try it in the IRIS terminal (switch to the %SYS namespace and then run):

zn "%SYS"
set stmt=##class(%SQL.Statement).%New()
set st=stmt.%PrepareClassQuery("SYS.Database","FreeSpace")
set rset=stmt.%Execute()
do rset.%Display()

An example of the output is as follows:

*In this example run, all databases are located on the same disk, so the free disk space (DiskFreeSpace) returns the same value for each.

Dumping result #1
DatabaseName  Directory                                   MaxSize    Size   ExpansionSize   Available  Free   DiskFreeSpace  Status      SizeInt  AvailableNum  DiskFreeSpaceNum  ReadOnly
IRISSYS       c:\intersystems\irishealth3\mgr\            Unlimited  159MB  System Default  18MB       11.32  245.81GB       Mounted/RW  159      18            251705            0
ENSLIB        c:\intersystems\irishealth3\mgr\enslib\     Unlimited  226MB  System Default  19MB       8.4    245.81GB       Mounted/R   226      19            251705            1
  <some databases omitted>
IRISTEMP      c:\intersystems\irishealth3\mgr\iristemp\   Unlimited  51MB   System Default  49MB       96.07  245.81GB       Mounted/RW  51       49            251705            0
USER          c:\intersystems\irishealth3\mgr\user\       Unlimited  31MB   System Default  8.5MB      27.41  245.81GB       Mounted/RW  31       8.5           251705            0

If you want to specify the database directories to query, run the following:

// Use the $LISTBUILD() function to pass the full path of each database directory you want to view.
set dbdir=$LISTBUILD("c:\intersystems\irishealth3\mgr","c:\intersystems\irishealth3\mgr\user")
set rset=stmt.%Execute(dbdir)
do rset.%Display()

To get only the database name (DatabaseName), the current size (Size) in MB, the available space (Available) in MB, the free space (Free), and the free disk space (DiskFreeSpace) for a specified database directory, follow the steps below (create a routine/class in VS Code or Studio while connected to the %SYS namespace and write the code).

Class ZMyClass.Utils
{
ClassMethod GetDiskFreeSpace()
{
    set dbdir=$LISTBUILD("c:\intersystems\irishealth3\mgr","c:\intersystems\irishealth3\mgr\user")
    set stmt=##class(%SQL.Statement).%New()
    set st=stmt.%PrepareClassQuery("SYS.Database","FreeSpace")
    set rset=stmt.%Execute(dbdir)
    while (rset.%Next()) {
        // ObjectScript commands cannot span source lines, so use one write per line;
        // output continues on the same line until the "!" newline
        write rset.%Get("DatabaseName")," - "
        write rset.%Get("Size")," - ",rset.%Get("Available")," - "
        write rset.%Get("Free"),"% - ",rset.%Get("DiskFreeSpace"),!
    }
}
}

 

NOTE: If you place user-defined routines or classes in the %SYS namespace, creating them with names that start with Z ensures the user-defined source code remains available after an upgrade installation.

An example run is as follows.

USER>zn "%SYS"
%SYS>do ##class(ZMyClass.Utils).GetDiskFreeSpace()

IRISSYS - 159MB - 18MB - 11.32% - 245.81GB
USER - 31MB - 8.5MB - 27.41% - 245.81GB

%SYS>

Article
· 17 hr ago · 8 min read

Share volumes across pods and zones on GKE

Background

For a variety of reasons, users may wish to mount a persistent volume on two or more pods spanning multiple availability zones. One such use case is to make data stored outside of IRIS available to both mirror members in case of failover.

Unfortunately, the built-in storage classes in most Kubernetes implementations (whether cloud or on-prem) do not provide this capability. Typically they:

  • Do not support the "ReadWriteMany" access mode
  • Do not support being mounted on more than one pod at a time
  • Do not support access across availability zones

However, some Kubernetes add-ons (both provider and third-party) do provide this capability. The one we'll be looking at in this article is Google Filestore.

Overview

In this article we will:

  • Create a Kubernetes cluster on GKE (Google Kubernetes Engine)
  • Use Google Filestore to create a persistent volume of type ReadWriteMany
  • Use IKO to deploy an IRIS failover mirror spanning two availability zones
  • Mount the persistent volume on both mirror members
  • Demonstrate that both mirror members have read/write access to the volume

Steps

The following steps were all carried out using Google Cloud Shell. Please note that InterSystems is not responsible for any costs incurred in the following examples.

We will be using region "us-east1" and availability zones "us-east1-b" and "us-east1-c".  Replace project "gke-filestore-demo" with your own.

Create a Network

Google Filestore requires creation of a custom network and subnet:

gcloud compute networks create vpc-multi-region --subnet-mode=custom

Create a Subnet

gcloud compute networks subnets create subnet-us-east1 \
  --project=gke-filestore-demo \
  --region=us-east1 \
  --network=vpc-multi-region \
  --range=172.16.1.0/24 \
  --secondary-range=pods=10.0.0.0/16,services=192.168.1.0/24

Enable Google Services

gcloud services enable file.googleapis.com servicenetworking.googleapis.com

Create Addresses for VPC Peering

gcloud compute addresses create google-service-range \
    --project=gke-filestore-demo \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length=20 \
    --description="Peering range for Google managed services" \
    --network=vpc-multi-region

Create VPC Peering

gcloud services vpc-peerings connect \
    --project=gke-filestore-demo \
    --service=servicenetworking.googleapis.com \
    --ranges=google-service-range \
    --network=vpc-multi-region

Create Kubernetes Cluster

gcloud container clusters create sample-cluster \
    --project=gke-filestore-demo \
    --region us-east1 \
    --node-locations us-east1-b,us-east1-c \
    --machine-type t2d-standard-2 \
    --num-nodes=3 \
    --release-channel rapid \
    --enable-ip-alias \
    --network=vpc-multi-region \
    --subnetwork=subnet-us-east1 \
    --cluster-secondary-range-name=pods \
    --services-secondary-range-name=services \
    --shielded-secure-boot \
    --shielded-integrity-monitoring \
    --enable-autoscaling \
    --addons=GcpFilestoreCsiDriver

Create a StorageClass

Google provides a number of storage classes, but none of them will work with our custom network, so we must create our own.  Add the following to a file named gke-filestore-sc.yaml:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gke-filestore-sc
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
  tier: ENTERPRISE
  network: vpc-multi-region
allowedTopologies:
- matchLabelExpressions:
  - key: topology.gke.io/zone
    values:
    - us-east1

Note that the value under the key "topology.gke.io/zone" is a region, not a zone.

Now create the storage class:

kubectl apply -f gke-filestore-sc.yaml

Create a PersistentVolumeClaim

Add the following to a file named gke-filestore-pvc.yaml:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gke-filestore-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: gke-filestore-sc
  resources:
    requests:
      storage: 1Ti

Note that 1 TiB is the minimum size allowed.

Now create the persistent volume claim:

kubectl apply -f gke-filestore-pvc.yaml

Install IKO

See IKO documentation for information on how to download and configure IKO:

helm install sample iris_operator_amd-3.8.42.100/chart/iris-operator

Create an IrisCluster

Add the following to a file named iris-filestore-demo.yaml:

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: sample
spec:
  storageClassName: iris-ssd-storageclass
  licenseKeySecret:
    name: iris-key-secret
  imagePullSecrets:
    - name: dockerhub-secret
  volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: gke-filestore-pvc
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2025.2
      preferredZones: ["us-east1-b","us-east1-c"]
      mirrored: true
      volumeMounts:
      - name: nfs-volume
        mountPath: "/mnt/nfs"
      podTemplate:
        spec:
          initContainers:
          - name: nfs-init
            image: busybox
            command: ["sh","-c","/bin/chown -R 51773:51773 /mnt/nfs; /bin/chmod -R 777 /mnt/nfs"]
            securityContext:
              runAsUser: 0
              runAsGroup: 0
              runAsNonRoot: false
            volumeMounts:
            - name: nfs-volume
              mountPath: "/mnt/nfs"

Notes:

  • We use an init container to make the volume writable by user "irisowner"
  • The mirror spans both availability zones in our cluster
  • See IKO documentation for information on how to configure an IrisCluster

Now create the IrisCluster:

kubectl apply -f iris-filestore-demo.yaml

Note that it may take some time for the shared volume to be allocated, and you may see errors such as the following in the events log:

15s  Warning  ProvisioningFailed  PersistentVolumeClaim/nfs-pvc  failed to provision volume with StorageClass "gke-filestore-sc": rpc error: code = DeadlineExceeded desc = context deadline exceeded

A better way to confirm that allocation is underway is to look directly at the filestore instance:

$ gcloud filestore instances list
INSTANCE_NAME: pvc-6d59f5ce-7d4f-42d0-8a20-2ee3602f8d32
LOCATION: us-east1
TIER: ENTERPRISE
CAPACITY_GB: 1024
FILE_SHARE_NAME: vol1
IP_ADDRESS: 10.110.51.194
STATE: CREATING
CREATE_TIME: 2025-09-09T03:13:07

Note the status of "CREATING"; deployment won't proceed until the state reaches "READY". This may take five to ten minutes. Soon after that you should see that the IrisCluster is up and running:

$ kubectl get pod,pv,pvc
NAME                 READY  STATUS   RESTARTS  AGE
pod/sample-data-0-0  1/1    Running  0         9m34s
pod/sample-data-0-1  1/1    Running  0         91s
NAME              CAPACITY  ACCESS MODES  STATUS   CLAIM                      STORAGECLASS
pvc-bbdb986fba54   1Ti       RWX           Bound    gke-filestore-pvc          gke-filestore-sc
pvc-9f5cce1010a3   4Gi       RWO           Bound    iris-data-sample-data-0-0  iris-ssd-storageclass
pvc-5e27165fbe5b   4Gi       RWO           Bound    iris-data-sample-data-0-1  iris-ssd-storageclass
NAME                      STATUS  VOLUME            CAPACITY  ACCESS MODES  STORAGECLASS            
gke-filestore-pvc          Bound   pvc-bbdb986fba54  1Ti       RWX           gke-filestore-sc
iris-data-sample-data-0-0  Bound   pvc-9f5cce1010a3  4Gi       RWO           iris-ssd-storageclass
iris-data-sample-data-0-1  Bound   pvc-5e27165fbe5b  4Gi       RWO           iris-ssd-storageclass

We can also (by joining the output of "kubectl get pod" with "kubectl get node") see that the mirror members reside in different availability zones:

sample-data-0-0 gke-sample-cluster-default-pool-a3b1683a-7g77   us-east1-c
sample-data-0-1 gke-sample-cluster-default-pool-010e420a-dw1h   us-east1-b
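
For reference, one way to produce that view (assuming the nodes carry the standard topology.kubernetes.io/zone label, as GKE nodes normally do):

# Show which node each pod is scheduled on
kubectl get pod -o wide

# Show each node's availability zone as an extra column
kubectl get node -L topology.kubernetes.io/zone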

Test the shared volume

We can create files on the shared volume on each pod:

kubectl exec sample-data-0-0 -- touch /mnt/nfs/primary.txt
kubectl exec sample-data-0-1 -- touch /mnt/nfs/backup.txt

And then observe that files are visible from both pods:

$ kubectl exec sample-data-0-0 -- ls /mnt/nfs
primary.txt
backup.txt
$ kubectl exec sample-data-0-1 -- ls /mnt/nfs
primary.txt
backup.txt

Cleanup

Delete IrisCluster deployment

kubectl delete -f iris-filestore-demo.yaml --ignore-not-found
helm uninstall sample --ignore-not-found

Delete Persistent Volumes

kubectl delete pvc gke-filestore-pvc iris-data-sample-data-0-0 iris-data-sample-data-0-1 --ignore-not-found

Note that deleting a PersistentVolumeClaim triggers deletion of the corresponding PersistentVolume.

Delete Kubernetes Cluster

gcloud container clusters delete sample-cluster --region us-east1 --quiet

Delete Google Filestore resources

gcloud services vpc-peerings delete --network=vpc-multi-region --quiet
gcloud compute addresses delete google-service-range --global --quiet
gcloud services disable servicenetworking.googleapis.com file.googleapis.com --quiet
gcloud compute networks subnets delete subnet-us-east1 --quiet
gcloud compute networks delete vpc-multi-region --quiet

Conclusion

We demonstrated how Google Filestore can be used to mount read/write volumes on pods residing in different availability zones.  Several other solutions are available, both for GKE and for other cloud providers.  As you can see, their configuration can be intricate and vendor-specific, but once working they can be reliable and effective.

Announcement
· 22 hr ago

Topics for the September Article Bounty

You have probably already seen that the September Article Bounty is in full swing 🚀

You can submit an existing, updated article on one of the topics and earn 30 points,
or write a completely new article from scratch and receive a reward of 🏆 5,000 points once it is approved 🎉

Here is this month's list of topics:

  • Modernization and moving away from legacy systems
  • InterSystems API Manager - configuration and usage steps
  • Tips and tricks - non-obvious ObjectScript shortcuts
  • Development best practices
  • Docker/containers with IRIS for beginners / Using Docker with IRIS applications
  • Adding mappings to a namespace
  • Configuration guides for InterSystems IRIS / IRIS for Health
  • Best practices for storing and manipulating images and documents using interoperability
  • Common mistakes new developers make when using ObjectScript
  • InterSystems IRIS and Angular - The best frontend meets the fastest database / Best workflow for working with Angular and IRIS

Submit your article in the dedicated challenge on Global Masters 👉 Click here!

📌 Reminder: here are the rules you need to follow 👇

  • The article must follow the general >>Developer Community Guidelines<< and must not be written by AI.
  • Article size: at least 400 words (links and code do not count toward the word limit).
  • The article must be about InterSystems products and services.
  • The article must be in English (including code, screenshots, etc.).
  • The article must be 100% new.
  • The article cannot be a translation of an article already published in other communities.
  • The article must contain only correct and reliable information about InterSystems technology.
  • Articles on the same topic but with different examples from different authors are allowed.

