
Announcement
· Sep 22

VS Code August 2025 Release (version 1.104)

Visual Studio Code ships new features and bug fixes every month. The August 2025 release is now available.

This release brings smarter AI model selection, stronger safeguards for sensitive edits and terminal commands, and productivity improvements such as streamlined chat editing and customizable context with AGENTS.md.

Updates in version 1.104 include:

Model flexibility

Let VS Code select the best model
Contribute models via VS Code extensions

Security

Confirm edits to sensitive files
Let agents run terminal commands safely

Productivity

Remove distractions from chat file edits
Use AGENTS.md to add chat context

This release also includes contributions from our contributor @John Murray via pull requests that fix outstanding issues.

Check out the full release notes here > https://code.visualstudio.com/updates/v1_103

If you use VS Code, your environment should update automatically. You can check for updates manually via Help > Check for Updates on Linux and Windows, or Code > Check for Updates on macOS.

Announcement
· Sep 22

[Video] The evolution of AI: embracing agency

Hi Community,

Enjoy the new video on the InterSystems Developers YouTube channel:

The evolution of AI: embracing agency @ Ready 2025

This video explores the evolution and future of AI in healthcare, tracing the shift from traditional machine learning toward new approaches centered on agency and generative AI. It presents innovative work in predictive modeling using non-clinical data, such as timestamps of healthcare interactions and supermarket purchasing patterns, to detect early signs of disease, including ovarian cancer. It also covers the use of behavioral data collected through living labs to support the diagnosis of conditions such as Parkinson's. The overall goal is to develop "patient-ready AI" that closes gaps in observability and care, reduces the burden on healthcare professionals, and drives meaningful transformation in digital care delivery.

Presenter:
🗣 Aldo Faisal, Professor of AI & Neuroscience, School of Convergence Science in Human & Artificial Intelligence

Wondering what else is possible? Watch the video and subscribe for more examples.

Article
· Sep 21 · 5 min read

Share volumes across pods and zones on AKS

Background

For a variety of reasons, users may wish to mount a persistent volume on two or more pods spanning multiple availability zones. One such use case is to make data stored outside of IRIS available to both mirror members in case of failover.

Unfortunately, volumes provisioned by the built-in storage classes in most Kubernetes implementations (whether cloud or on-prem) do not provide this capability:

  • They do not support the "ReadWriteMany" access mode
  • They cannot be mounted on more than one pod at a time
  • They are not accessible across availability zones

However, some Kubernetes add-ons (both provider and third-party) do provide this capability. The one we'll be looking at in this article is Azure Blob Storage.

Overview

In this article we will:

  • Create a Kubernetes cluster on AKS (Azure Kubernetes Service)
  • Use Azure Blob Storage to create a persistent volume with the ReadWriteMany access mode
  • Use IKO to deploy an IRIS failover mirror spanning two availability zones
  • Mount the persistent volume on both mirror members
  • Demonstrate that both mirror members have read/write access to the volume

Steps

The following steps were all carried out using Azure Cloud Shell. Please note that InterSystems is not responsible for any costs incurred by running these examples.

We will be using region "eastus" and availability zones "eastus-2" and "eastus-3".

Create Resource Group

az group create \
   --name samplerg \
   --location eastus

Create Service Principal

We create a service principal and extract the App ID and client secret for the next call:

SP=$(az ad sp create-for-rbac -o tsv)
APP_ID="$(echo $SP | cut -d' ' -f1)"
CLIENT_SECRET="$(echo $SP | cut -d' ' -f3)"

Create Kubernetes Cluster

az aks create \
   --resource-group samplerg \
   --name sample \
   --node-count 6 \
   --zones 2 3 \
   --generate-ssh-keys \
   --service-principal $APP_ID \
   --client-secret $CLIENT_SECRET \
   --kubernetes-version 1.33.2 \
   --enable-blob-driver
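
The --enable-blob-driver flag enables the Azure Blob CSI driver, which should provide the blob-backed storage classes we use below. A quick sanity check (a sketch; the exact class names can vary by AKS version) is to fetch credentials and list the storage classes once the cluster is up:

az aks get-credentials --resource-group samplerg --name sample
kubectl get storageclass | grep blob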

Create a PersistentVolumeClaim

Add the following to a file named azure-blob-pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-blob-pvc
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azureblob-nfs-premium
  resources:
    requests:
      storage: 5Gi

Now create the persistent volume claim:

kubectl apply -f azure-blob-pvc.yaml
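
You can confirm the claim was created; depending on the storage class's volume binding mode it may show as Pending until the first pod mounts it:

kubectl get pvc azure-blob-pvc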

Install IKO

Install and run IKO:

helm install sample iris_operator_amd-3.8.42.100/chart/iris-operator

See IKO documentation for additional information on how to download and configure IKO.
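
Before creating the IrisCluster, one way to confirm the operator is up (a sketch; the exact pod name depends on the chart and release name):

helm status sample
kubectl get pods | grep iris-operator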

Create an IrisCluster

Add the following to a file named iris-azureblob-demo.yaml:

apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: sample
spec:
  storageClassName: iris-ssd-storageclass
  licenseKeySecret:
    name: iris-key-secret
  imagePullSecrets:
    - name: dockerhub-secret
  volumes:
  - name: nfs-volume
    persistentVolumeClaim:
      claimName: azure-blob-pvc
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris:2025.2
      preferredZones: ["eastus-2","eastus-3"]
      mirrored: true
      volumeMounts:
      - name: nfs-volume
        mountPath: "/mnt/nfs"

Notes:

  • The mirror spans both availability zones in our cluster
  • See IKO documentation for information on how to configure an IrisCluster

Now create the IrisCluster:

kubectl apply -f iris-azureblob-demo.yaml

Soon after that you should see that the IrisCluster is up and running:

$ kubectl get pod,pv,pvc
NAME                 READY  STATUS   RESTARTS  AGE
pod/sample-data-0-0  1/1    Running  0         9m34s
pod/sample-data-0-1  1/1    Running  0         91s
NAME              CAPACITY  ACCESS MODES  STATUS   CLAIM                      STORAGECLASS
pvc-bbdb986fba54   5Gi       RWX           Bound    azure-blob-pvc             azureblob-nfs-premium
pvc-9f5cce1010a3   4Gi       RWO           Bound    iris-data-sample-data-0-0  iris-ssd-storageclass
pvc-5e27165fbe5b   4Gi       RWO           Bound    iris-data-sample-data-0-1  iris-ssd-storageclass
NAME                      STATUS  VOLUME            CAPACITY  ACCESS MODES  STORAGECLASS            
azure-blob-pvc             Bound   pvc-bbdb986fba54  5Gi       RWX           azureblob-nfs-premium
iris-data-sample-data-0-0  Bound   pvc-9f5cce1010a3  4Gi       RWO           iris-ssd-storageclass
iris-data-sample-data-0-1  Bound   pvc-5e27165fbe5b  4Gi       RWO           iris-ssd-storageclass

We can also (by joining the output of "kubectl get pod" with "kubectl get node") see that the mirror members reside in different availability zones:

sample-data-0-0 aks-nodepool1-10664034-vmss000001 eastus-2
sample-data-0-1 aks-nodepool1-10664034-vmss000002 eastus-3
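
For reference, one way to produce a join like this, using standard kubectl options and the well-known zone label:

kubectl get pod -o wide                             # the NODE column shows where each pod runs
kubectl get node -L topology.kubernetes.io/zone     # the ZONE column shows each node's availability zone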

Test the shared volume

We can create files on the shared volume on each pod:

kubectl exec sample-data-0-0 -- touch /mnt/nfs/primary.txt
kubectl exec sample-data-0-1 -- touch /mnt/nfs/backup.txt

And then observe that files are visible from both pods:

$ kubectl exec sample-data-0-0 -- ls /mnt/nfs
primary.txt
backup.txt
$ kubectl exec sample-data-0-1 -- ls /mnt/nfs
primary.txt
backup.txt

Cleanup

Delete IrisCluster deployment

kubectl delete -f iris-azureblob-demo.yaml --ignore-not-found
helm uninstall sample --ignore-not-found

Delete Persistent Volumes

kubectl delete pvc azure-blob-pvc iris-data-sample-data-0-0 iris-data-sample-data-0-1 --ignore-not-found

Note that deleting a PersistentVolumeClaim triggers deletion of the corresponding PersistentVolume, since dynamically provisioned volumes default to the Delete reclaim policy.
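
Once the claims are gone, a quick check confirms the corresponding volumes were removed as well:

kubectl get pv,pvc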

Delete Kubernetes Cluster

az aks delete --resource-group samplerg --name sample --yes

Delete Resource Group

az group delete --name samplerg --no-wait --yes

Conclusion

We demonstrated how Azure Blob Storage can be used to mount read/write volumes on pods residing in different availability zones. Several other solutions are available, both for AKS and for other cloud providers. As you can see, their configuration can be highly esoteric and vendor-specific, but once working it can be reliable and effective.

Article
· Sep 20 · 2 min read

Snapshot DB free - Strategies

These are the strategic decisions behind my example for the External Languages Contest 2025:

  • Top priority: speed
    • Anything running in endless loops doesn't help
    • So each step happens in the environment best suited for it
    • Communication reduces speed, so less ping-pong of messages
    • Just exchange what can't be avoided
  • No artificial constructs. Keep it compact and simple to follow.

Python

I'm using the Native API for Python to keep the example independent of any particular IRIS instance.
Connection parameters are entered interactively, which allows targeting a wide range of IRIS instances.
Running it in a Docker container makes me independent of a locally installed Python.

Once connected, a helper class is triggered in IRIS to prepare the data by columns.
The four fetched columns fill a dataset and generate the bar chart.
The result is delivered outside the Docker container.
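
A minimal sketch of that flow, assuming the intersystems-irispython package (module iris), matplotlib for the chart, and a hypothetical server-side class dc.Snapshot with a GetColumns() ClassMethod that returns the 4 columns as one delimited string; the names and column format are illustrative, not the actual contest code:

import iris                       # Native API from the intersystems-irispython package
import matplotlib
matplotlib.use("Agg")             # render without a display, e.g. inside the Docker container
import matplotlib.pyplot as plt

# Connection parameters are entered interactively in the real example
conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")
native = iris.createIRIS(conn)

# Hypothetical helper ClassMethod that prepares the data by columns on the IRIS side
raw = native.classMethodValue("dc.Snapshot", "GetColumns")
labels, values, *_ = [col.split(",") for col in raw.split("|")]

# Build the bar chart and deliver the result outside the container via a mounted volume
plt.bar(labels, [float(v) for v in values])
plt.savefig("/external/tab.jpg")
conn.close()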

IRIS#1

I tried to create a class that is independent of the IRIS version.
In namespace %SYS it runs one embedded SQL query, resulting in a single string.
The string is split into 4 columns, ready for the Python dataset.
Synchronization is provided by the standard ClassMethod call.

IRIS#2

Display of the results is implemented as a very simple CSP page using the
results deposited outside the container. The biggest challenge was to force
the browser to ignore its cached graphic images.
The trick: instead of requesting just tab.jpg, I used tab.jpg?#($h)#.
The query string makes no logical sense, but it creates a different signature in the browser cache,
so what you see is really the last image generated, in sync with the displayed data.

Compact code

As both parts in IRIS result in only 4 ClassMethods, I decided to pack them together
into a single class, which made coding and maintenance very easy.
It is similar in Python:
the code is almost linear and easy to follow. The only exception is a mimic of ZWRITE for testing.

Summary

The exercise could have been implemented in IRIS alone, with or without Embedded Python.
The challenge, though, was to use external code that connects to IRIS.
