Article
· Oct 9 · 4 min read

Expand ObjectScript's ability to process YAML

The ObjectScript language has excellent JSON support through classes like %DynamicObject and %JSON.Adaptor. That support reflects JSON's immense popularity after the earlier dominance of XML: JSON made data representation less verbose and more readable for the humans who had to interpret it. The YAML format was created to reduce verbosity and improve readability even further, and thanks to those qualities it quickly became the most popular format for representing configurations and parameterizations. XML is now rarely used for parameterization and configuration, and with YAML's rise, JSON is gradually being confined to data exchange rather than configurations, parameterizations, and metadata representation; today, all of that is done with YAML. The primary language of InterSystems technologies therefore needs YAML processing support on par with its support for JSON and XML. For this reason, I've released a new package to make ObjectScript a powerful YAML processor. The package name is yaml-adaptor.

Let's start by installing the package

1. If you are using IPM, open the IRIS Terminal and run:

USER>zpm "install yaml-adaptor"

2. If you are using Docker, clone or pull the yaml-adaptor repo into a local directory:

$ git clone https://github.com/yurimarx/yaml-adaptor.git

3. Open the terminal in this directory and run:

$ docker-compose build

4. Run the IRIS container with your project:

$ docker-compose up -d

Why use this package?

With this package, you can read, write, and transform YAML to and from DynamicObjects, JSON, and XML bidirectionally. It lets you read and generate data, configurations, and parameterizations in the market's most popular formats dynamically, with little code, high performance, and in real time.

The package in action!

Using the package is very simple. Its features are:

1. Convert from YAML string to JSON string

ClassMethod TestYamlToJson() As %Status
{
    Set sc = $$$OK
    
    Set yamlContent = ""_$CHAR(10)_
        "user:"_$CHAR(10)_
        "    name: 'Jane Doe'"_$CHAR(10)_
        "    age: 30"_$CHAR(10)_
        "    roles:"_$CHAR(10)_
        "    - 'admin'"_$CHAR(10)_
        "    - 'editor'"_$CHAR(10)_
        "database:"_$CHAR(10)_
        "    host: 'localhost'"_$CHAR(10)_
        "    port: 5432"_$CHAR(10)_
        ""

    Do ##class(dc.yamladapter.YamlUtil).yamlToJson(yamlContent, .jsonContent)
    Set jsonObj = {}.%FromJSON(jsonContent)
    Write jsonObj.%ToJSON()

    Return sc
}

2. Generate a JSON file from a YAML file

ClassMethod TestYamlFileToJsonFile() As %Status
{
    Set sc = $$$OK
    Set yamlFile = "/tmp/samples/sample.yaml"
    Set jsonFile = "/tmp/samples/sample_result.json"
    Write ##class(dc.yamladapter.YamlUtil).yamlFileToJsonFile(yamlFile, jsonFile)
    Return sc
}

3. Convert from JSON string to YAML string

ClassMethod TestJsonToYaml() As %Status
{
    Set sc = $$$OK
    
    Set jsonContent = "{""user"":{""name"":""Jane Doe"",""age"":30,""roles"":[""admin"",""editor""]},""database"":{""host"":""localhost"",""port"":5432}}"

    Do ##class(dc.yamladapter.YamlUtil).jsonToYaml(jsonContent, .yamlContent)
    Write yamlContent

    Return sc
}

4. Generate a YAML file from a JSON file

ClassMethod TestJsonFileToYamlFile() As %Status
{
    Set sc = $$$OK
    Set jsonFile = "/tmp/samples/sample.json"
    Set yamlFile = "/tmp/samples/sample_result.yaml"
    Write ##class(dc.yamladapter.YamlUtil).jsonFileToYamlFile(jsonFile, yamlFile)
    Return sc
}

5. Load a dynamic object from YAML string or YAML files

ClassMethod TestYamlFileToDynamicObject() As %Status
{
    Set sc = $$$OK

    Set yamlFile = "/tmp/samples/sample.yaml"
    
    Set dynamicYaml = ##class(YamlAdaptor).CreateFromFile(yamlFile)

    Write "Title: "_dynamicYaml.title, !
    Write "Version: "_dynamicYaml.version, !

    Return sc
}

6. Generate YAML from dynamic objects

ClassMethod TestDynamicObjectToYaml() As %Status
{
    Set sc = $$$OK

    Set dynaObj = {}
    Set dynaObj.project = "Project A"
    Set dynaObj.version = "1.0"
    
    Set yamlContent = ##class(YamlAdaptor).CreateYamlFromDynamicObject(dynaObj)

    Write yamlContent

    Return sc
}

7. Generate a YAML file from an XML file

ClassMethod TestXmlFileToYamlFile() As %Status
{
    Set sc = $$$OK
    Set xmlFile = "/tmp/samples/sample.xml"
    Set yamlFile = "/tmp/samples/sample_xml_result.yaml"
    Write ##class(dc.yamladapter.YamlUtil).xmlFileToYamlFile(xmlFile, yamlFile)
    Return sc
}

8. Generate an XML file from a YAML file

ClassMethod TestYamlFileToXmlFile() As %Status
{
    Set sc = $$$OK
    Set yamlFile = "/tmp/samples/sample.yaml"
    Set xmlFile = "/tmp/samples/sample_result.xml"
    Write ##class(dc.yamladapter.YamlUtil).yamlFileToXmlFile(yamlFile, "sample", xmlFile)
    Return sc
}
Article
· Oct 9 · 1 min read

Tutorial: Build a RAG AI chatbot that uses IRIS vector search to give users up-to-date, accurate responses

Hello, developers!

This article introduces a new tutorial, RAG with InterSystems IRIS Vector Search, that has been added to the tutorials on the Developer Hub. (No setup is required; you can try it right in your browser!)

In this tutorial, you will experience how vector search and Retrieval-Augmented Generation (RAG) can improve the accuracy of generative AI applications.

Specifically, you will use the vector search capabilities of InterSystems IRIS, together with the provided sample code, to build a knowledge base for a generative AI chatbot.

You will also run a chatbot built with Streamlit and watch how the generative AI's answers change as you add information to the knowledge base.

No account creation or login is needed; just click the button to get started 👍

You can also reach the tutorial from the Developer Community Resources page!

Give it a try!

Announcement
· Oct 9

Security & AI Meetup for Developers and Startups

Join our next in-person Developer Meetup in Boston to explore Security & AI for Developers and Startups.

This event is hosted at CIC Venture Cafe.

Talk 1: When Prompts Become Payloads
Speaker: Mark-David McLaughlin, Director, Corporate Security, InterSystems

Talk 2: Serial Offenses: Common Vulnerability Types
Speaker: Jonathan Sue-Ho, Senior Security Engineer, InterSystems

>> Register here
 

⏱ Day and Time: October 21, 5:30 p.m. to 7:30 p.m.
📍CIC Venture Café in Cambridge, MA

Save your seat now!

Food, beverages, and networking opportunities will be provided as always.
Join our Discord channel to connect with developers from the InterSystems developer ecosystem.

Article
· Oct 9 · 6 min read

Enhancing FHIR Data Exploration with Local LLMs: Integrating IRIS and Ollama

Introduction

In my previous article, I introduced the FHIR Data Explorer, a proof-of-concept application that connects InterSystems IRIS, Python, and Ollama to enable semantic search and visualization over healthcare data in FHIR format, a project currently participating in the InterSystems External Language Contest.

In this follow-up, we’ll see how I integrated Ollama for generating patient history summaries directly from structured FHIR data stored in IRIS, using lightweight local language models (LLMs) such as Llama 3.2:1B or Gemma 2:2B.

The goal was to build a completely local AI pipeline that can extract, format, and narrate patient histories while keeping data private and under full control.

All patient data used in this demo comes from FHIR bundles, which were parsed and loaded into IRIS via the IRIStool module. This approach makes it straightforward to query, transform, and vectorize healthcare data using familiar pandas operations in Python. If you’re curious about how I built this integration, check out my previous article Building a FHIR Vector Repository with InterSystems IRIS and Python through the IRIStool module.

Both IRIStool and FHIR Data Explorer are available on the InterSystems Open Exchange — and part of my contest submissions. If you find them useful, please consider voting for them!

1. Setup with Docker Compose

To make the setup simple and reproducible, everything runs locally via Docker Compose.
A minimal configuration looks like this:

services:
  iris:
    container_name: iris-patient-search
    build:
      context: .
      dockerfile: Dockerfile
    image: iris-patient-search:latest  
    init: true
    restart: unless-stopped
    volumes:
      - ./storage:/durable
    ports:
      - "9092:52773"  # Management Portal / REST APIs
      - "9091:1972"   # SuperServer port
    environment:
      - ISC_DATA_DIRECTORY=/durable/iris
    entrypoint: ["/opt/irisapp/entrypoint.sh"]

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    ports:
      - 11424:11434
    volumes:
      - ./ollama_entrypoint.sh:/entrypoint.sh
    entrypoint: ["/entrypoint.sh"]

You can find all the configurations on the GitHub project page.

2. Integrating Ollama into the Workflow

Ollama provides a simple local REST API for running models efficiently on CPU, which makes it ideal for healthcare applications where privacy and performance matter.

To connect IRIS and Streamlit to Ollama, I implemented a lightweight Python class for streaming responses from the Ollama API:

import requests, json

class ollama_request:
    def __init__(self, api_url: str):
        self.api_url = api_url

    def get_response(self, content, model):
        payload = {
            "model": model,
            "messages": [
                {"role": "user", "content": content}
            ]
        }
        response = requests.post(self.api_url, json=payload, stream=True)

        if response.status_code == 200:
            for line in response.iter_lines(decode_unicode=True):
                if line:
                    try:
                        json_data = json.loads(line)
                        if "message" in json_data and "content" in json_data["message"]:
                            yield json_data["message"]["content"]
                    except json.JSONDecodeError:
                        yield f"Error decoding JSON line: {line}"
        else:
            yield f"Error: {response.status_code} - {response.text}"

This allows real-time streaming of model output, giving users the feeling of “watching” the AI write clinical summaries live in the Streamlit UI.
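Since the class above needs a running Ollama server, here is a minimal offline sketch of what it does with each streamed chunk. The sample lines are hard-coded stand-ins for the newline-delimited JSON that Ollama's /api/chat endpoint streams; no live API call is made:

```python
import json

# Sample NDJSON lines in the shape Ollama's /api/chat endpoint streams:
# one JSON object per line, each carrying a partial "message.content".
sample_lines = [
    '{"message": {"role": "assistant", "content": "The patient "}}',
    '{"message": {"role": "assistant", "content": "has stable vitals."}}',
    '{"done": true}',
]

def collect_stream(lines):
    """Reassemble the full response text from streamed chunks."""
    parts = []
    for line in lines:
        data = json.loads(line)
        msg = data.get("message", {})
        if "content" in msg:
            parts.append(msg["content"])
    return "".join(parts)

print(collect_stream(sample_lines))  # The patient has stable vitals.
```

In the Streamlit UI, each yielded chunk is appended to the display as it arrives, instead of being joined at the end as done here.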

3. Preparing Patient Data for the LLM

Before sending anything to Ollama, data must be compact, structured, and clinically relevant.
For this, I wrote a class that extracts and formats the patient’s most relevant data — demographics, conditions, observations, procedures, and so on — into YAML, which is both readable and LLM-friendly.

Here’s the simplified process:

  1. Select the patient row from IRIS via pandas
  2. Extract demographics and convert them into YAML
  3. Process each medical table (Conditions, Observations, etc.)
  4. Remove unnecessary or redundant fields
  5. Output a concise YAML document used as the LLM prompt context.

This string is then passed directly to the LLM prompt, forming the structured context from which the model generates the patient’s narrative summary.
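As a rough illustration of steps 2 through 5, nested patient data can be rendered as a compact YAML string with plain string building. The field names below are hypothetical, not the actual FHIR Data Explorer schema:

```python
# Hypothetical patient record; field names are illustrative only.
patient = {
    "demographics": {"name": "Jane Doe", "age": 47, "gender": "female"},
    "conditions": [
        {"code": "Hypertension", "onset": "2021-03-02"},
        {"code": "Type 2 diabetes", "onset": "2019-11-15"},
    ],
}

def to_yaml(data, indent=0):
    """Render nested dicts/lists of scalars as simple YAML lines."""
    pad = "  " * indent
    lines = []
    if isinstance(data, dict):
        for key, value in data.items():
            if isinstance(value, (dict, list)):
                lines.append(f"{pad}{key}:")
                lines.extend(to_yaml(value, indent + 1))
            else:
                lines.append(f"{pad}{key}: {value}")
    elif isinstance(data, list):
        for item in data:
            item_lines = to_yaml(item, indent + 1)
            # Prefix the first line of each list item with "- ".
            lines.append(f"{pad}- {item_lines[0].lstrip()}")
            lines.extend(item_lines[1:])
    return lines

context = "\n".join(to_yaml(patient))
print(context)
```

The resulting block ("demographics:", indented scalars, "- "-prefixed list items) is exactly the kind of compact, readable context that small LLMs handle well.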


4. Why Limit the Number of Records?

While building this feature, I noticed that passing all medical records often led small LLMs to become confused or biased toward older entries, losing focus on recent events.

To mitigate this, I decided to:

  • Include only a limited number of records per category, in reverse chronological order (most recent first)
  • Use concise YAML formatting instead of raw JSON
  • Normalize datatypes (timestamps, nulls, etc.) for consistency

This design helps small LLMs focus on the most clinically relevant data, avoiding "prompt overload".
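The first bullet can be sketched as a simple helper; the record fields and values below are illustrative:

```python
# Hypothetical observations for one patient; dates are illustrative.
observations = [
    {"code": "Blood pressure", "value": "150/95", "date": "2022-01-10"},
    {"code": "Blood pressure", "value": "138/88", "date": "2024-06-02"},
    {"code": "HbA1c", "value": "7.1%", "date": "2023-09-14"},
]

def most_recent(records, limit=2, key="date"):
    """Keep only the newest `limit` records, most recent first."""
    return sorted(records, key=lambda r: r[key], reverse=True)[:limit]

trimmed = most_recent(observations)
# trimmed now holds the 2024-06-02 reading first, then 2023-09-14;
# the oldest entry is dropped before the data reaches the prompt.
```

Applying this per category (conditions, observations, procedures) keeps the prompt short while biasing the model toward recent events.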


💬 5. Generating the Patient History Summary

Once the YAML-formatted data is ready, the Streamlit app sends it to Ollama with a simple prompt like:

“You are a clinical assistant. Given the following patient data, write a concise summary of their medical history, highlighting relevant conditions and recent trends.”

The output is streamed back to the UI line by line, allowing the user to watch the summary being written in real time.
Each model produces a slightly different result, even with the same prompt — revealing fascinating differences in reasoning and style.
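Putting the pieces together, the request sent to Ollama might be assembled like this; the YAML snippet and model name are illustrative placeholders:

```python
instruction = (
    "You are a clinical assistant. Given the following patient data, "
    "write a concise summary of their medical history, highlighting "
    "relevant conditions and recent trends."
)

# Illustrative YAML context; in the real app this comes from IRIS data.
yaml_context = "demographics:\n  name: Jane Doe\n  age: 47"

prompt = f"{instruction}\n\nPatient data (YAML):\n{yaml_context}"

# Payload in the shape expected by Ollama's /api/chat endpoint.
payload = {
    "model": "gemma2:2b",
    "messages": [{"role": "user", "content": prompt}],
}
```

This payload is what the `ollama_request` class from section 2 posts to the API, streaming the summary back chunk by chunk.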


🧠 6. Comparing Local LLMs

To evaluate the effectiveness of this approach, I tested three lightweight open models available through Ollama:

| Model | Parameters | Summary Style | Notes |
| --- | --- | --- | --- |
| Llama 3.2:1B | 1B | Structured, factual | Highly literal and schema-like output |
| Gemma 2:2B | 2B | Narrative, human-like | Most coherent and contextually aware |
| Gemma 3:1B | 1B | Concise, summarizing | Occasionally omits details but very readable |

You can find example outputs in this GitHub folder. Each patient summary highlights how model size and training style influence the structure, coherence, and level of detail of the narrative.

Here’s a comparative interpretation of their behavior:

  • Llama 3.2:1B tends to reproduce the data structure verbatim, almost as if performing a database export. Its summaries are technically accurate but lack narrative flow — resembling a structured clinical report rather than natural text.
  • Gemma 3:1B achieves better linguistic flow but still compresses or omits minor details. 
  • Gemma 2:2B strikes the best balance. It organizes information into meaningful sections (conditions, risk factors, care recommendations) while maintaining a fluent tone.

In short:

  • Llama 3.2:1B = factual precision
  • Gemma 3:1B = concise summaries
  • Gemma 2:2B = clinical storytelling

Even without fine-tuning, thoughtful data curation and prompt design make small, local LLMs capable of producing coherent, contextually relevant clinical narratives.


🔒 7. Why Local Models Matter

Using Ollama locally provides:

  • Full data control — no patient data ever leaves the environment
  • Deterministic performance — stable latency on CPU
  • Lightweight deployment — works even without GPU
  • Modular design — easy to switch between models or adjust prompts

This makes it an ideal setup for hospitals, research centers, or academic environments that want to experiment safely with AI-assisted documentation and summarization.


🧭 Conclusion

This integration demonstrates that even small local models, when properly guided by structured data and clear prompts, can yield useful, human-like summaries of patient histories.

With IRIS managing data, Python handling transformations, and Ollama generating text, we get a fully local, privacy-first AI pipeline for clinical insight generation.
