
Converting XML to HL7, FHIR, and V2

What is XML?

XML (Extensible Markup Language) is a flexible, text-based, platform-independent format for storing and transmitting data in a well-structured, human- and machine-readable way. XML lets users define custom tags that describe the meaning and organization of the data, for example: <book><title>The Hitchhiker's Guide</title></book>.

XML documents are self-describing, and their structure is a hierarchical tree of elements. Every document has a single root element that encloses everything else. Elements can contain text, child elements, and attributes (name-value pairs that provide supplementary information). These documents are typically stored in .xml files.

The integrity of this structure can be enforced with:

  • DTD (Document Type Definition): provides basic validation rules.
  • XSD (XML Schema Definition): provides advanced rules, including data types and constraints.

Converting XML Documents

This section covers the following:

  1. Parsing generic XML and transforming it to the HL7 standard.
  2. Parsing CCDA (Consolidated Clinical Document Architecture) documents (XML) and transforming them to HL7 format.

In both implementations, the documents are first converted to the InterSystems IRIS SDA (Summary Document Architecture) format. This is considered a standard, efficient, and less error-prone approach because it makes effective use of the platform's built-in classes. Once the data is in SDA format, it can be seamlessly converted to any target standard, such as HL7 v2, FHIR, or CCDA.

Parsing a Generic XML Document

A generic XML document is self-describing and uses custom tags such as <name>, <sex>, and <address>. This section shows how to parse such a document and use it to build an HL7 message via the intermediate SDA (Summary Document Architecture) format.

Before starting the conversion, you need to choose the appropriate workflow:

  1. Recommended approach: convert to SDA. This is the most efficient method. It involves reading the XML document as a stream and converting it directly to the SDA (Summary Document Architecture) format within an interoperability production. This approach is standard and works well for large-scale data processing.
  2. Alternative approach: manual conversion. You can read the XML file as an object and then perform the conversion programmatically. This offers finer-grained control, but it is usually more complex to implement and scales less well.

Reading XML Documents

InterSystems IRIS provides a complete set of classes for parsing XML streams smoothly. Two key options are:

  1. %XML.Reader: together with %XML.Adaptor, provides a programmatic way to read an XML stream and load its contents into objects.
  2. EnsLib.EDI.XML.Document: used in interoperability productions to represent and parse XML documents dynamically.

 

Using the %XML.Reader and %XML.Adaptor Classes

Combining the %XML.Adaptor and %XML.Reader classes is a powerful, straightforward technique for parsing an XML file or stream into in-memory objects.

Let's use the following XML file as an example:

<Patient>
    <PatientID>12345</PatientID>
    <PatientName>DOE^JOHN</PatientName>
    <DateOfBirth>19900101</DateOfBirth>
    <Sex>M</Sex>
    <PatientClass>I</PatientClass>
    <AssignedPatientLocation>GEN^A1</AssignedPatientLocation>
    <AttendingDoctor>1234^DOCTOR^JOHN</AttendingDoctor>
</Patient>

First, you must create a class definition that represents the structure of the XML document. The class must extend %XML.Adaptor. Once the XML is loaded, the data is available as properties of the object, making it easy to access and manipulate in subsequent code.

Class MyApp.Messages.PatientXML Extends (%Persistent, %XML.Adaptor)
{
Parameter XMLNAME = "Patient";
Property PatientID As %String;
Property PatientName As %String;
Property Age As %String;
Property DateOfBirth As %String;
Property Sex As %String;
Property PatientClass As %String;
Property AssignedPatientLocation As %String;
Property AttendingDoctor As %String;
ClassMethod XMLToObject(xmlStream As %Stream.Object = "", xmlString As %String = "", filename As %String = "C:\learn\hl7msg\test.xml")
{
	Set reader = ##class(%XML.Reader).%New() 

	// Begin processing of the XML input
	If filename'="" {
		Set sc=reader.OpenFile(filename) ; open the file directly
	}
	ElseIf $IsObject(xmlStream){
		Set sc=reader.OpenStream(xmlStream) ; parse from stream
	}
	ElseIf xmlString'="" {
		Set sc=reader.OpenString(xmlString) ; parse from a string
	}
	Else {
		Return $$$ERROR($$$GeneralError,"No file name, string, or stream found")
	}
  
	If $$$ISERR(sc) Do $system.OBJ.DisplayError(sc) Quit
	// Associate a class name with the XML element name
	;Do reader.Correlate(..#XMLNAME,$classname())
	Do reader.CorrelateRoot($classname())

	Do reader.Next(.patient,.sc) 
	If $$$ISERR(sc) Do $system.OBJ.DisplayError(sc) Quit
	ZWrite patient
}
}

The XMLToObject method parses XML data from a file, stream, or string and creates a class instance that can then be used for programmatic transformations or within an interoperability production.
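For instance, you could invoke it from embedded Python (a minimal sketch; the file path is the default used in the class definition above):

import iris

# Positional arguments map to xmlStream, xmlString, and filename
iris.cls("MyApp.Messages.PatientXML").XMLToObject("", "", "C:\\learn\\hl7msg\\test.xml")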

Using EnsLib.EDI.XML.Document

The EnsLib.EDI.XML.Document class provides runtime, XPath-based access to any XML content without a predefined schema or class. It is ideal when you need to extract values dynamically at runtime: simply load your XML into the class and use its methods to access elements quickly via XPath expressions.

By saving the object instance, this class can also persist XML documents directly to the EnsLib_EDI_XML.Document table.

ClassMethod ParseXML(xmlfile As %String="")
{
	Set ediXMLDoc = ##class(EnsLib.EDI.XML.Document).ImportFromFile(xmlfile,,.sc)
	If $$$ISERR(sc) {
	  Quit
	}
	; pass an XPath expression into the GetValueAt method
	Write ediXMLDoc.GetValueAt("/Patient/PatientID") ;returns the patient id
}

You can then assign the value returned by GetValueAt() to a class property, a local variable, or a JSON structure.

The XML file business service (EnsLib.EDI.XML.Service.FileService) uses this virtual document class for its parsing operations.

Note: in InterSystems IRIS, fetching an oversized string (more than 3,641,144 characters) via GetValueAt("XPath") typically results in a <MAXSTRING> error. Your code should handle this limit appropriately.
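One defensive pattern, sketched from embedded Python (assuming the ObjectScript error surfaces as a Python exception; the file path and XPath are the examples used above):

import iris

status = iris.ref()
doc = iris.cls("EnsLib.EDI.XML.Document").ImportFromFile("C:\\learn\\hl7msg\\test.xml", 1, status)
try:
    value = doc.GetValueAt("/Patient/PatientName")
except Exception as e:
    # an oversized value can raise <MAXSTRING>; handle or skip it here
    print("Could not fetch the value as a string:", e)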

Loading XSD Schemas in InterSystems IRIS

An XSD (XML Schema Definition) outlines the structure, elements, types, and validation rules of an XML document to ensure data consistency and validity.

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="Patient">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="PatientID" type="xs:string"/>
        <xs:element name="PatientName" type="xs:string"/>
        <xs:element name="DateOfBirth" type="xs:string"/>
        <xs:element name="Sex" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>

In IRIS, loading an XSD schema serves several main purposes:

  1. Validation: incoming XML documents can be validated against a predefined structure, ensuring data integrity.
  2. Generating ObjectScript classes: ObjectScript classes that mirror the XML structure can be generated automatically, simplifying programmatic access. (You can load an XSD file and generate class definitions via the Studio > Tools > Add-Ins > XML Schema Wizard; a programmatic sketch follows this list.)
  3. DTL transformations: it supports schema-based transformations in the Data Transformation Language (DTL), enabling seamless data mapping between formats.
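As that programmatic alternative, the %XML.Utils.SchemaReader class can generate the classes from an XSD. A minimal sketch from embedded Python, where the file path and the target package MyApp.Generated are hypothetical:

import iris

reader = iris.cls("%XML.Utils.SchemaReader")._New()
# Process(xsd location, target package) generates classes mirroring the schema
status = reader.Process("C:\\learn\\patient.xsd", "MyApp.Generated")
if iris.cls("%SYSTEM.Status").IsError(status):
    print(iris.cls("%SYSTEM.Status").GetErrorText(status))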

Importing an XML Schema

To import an XML schema (.xsd) file into InterSystems IRIS, follow these steps:

  1. Navigate to the Management Portal.
  2. Go to Interoperability > Interoperate > XML > XML Schema Structures.
  3. Click the Import button.
  4. Select the schema.xsd file in the dialog and click OK to complete the import.

All imported XSD schemas are stored in the ^EnsEDI.XML.Schema global within each namespace. The first subscript of the global is the schema name, the same name shown in the Management Portal.

The path to the source XSD file is stored in ^EnsEDI.XML.Schema(<schema name>, "src", 1).

Important: if the source file is deleted from its original location, any future validation attempt against the schema will result in an error.
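From embedded Python, you can inspect this with a global reference (a sketch; "Patient" is a hypothetical schema name):

import iris

schema = iris.gref("^EnsEDI.XML.Schema")
# The first subscript is the schema name; ("src", 1) holds the source file path
print(schema["Patient", "src", 1])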

Validating XML Against a Schema

Once a schema is loaded, you can validate XML documents against it with custom code such as the method below, passing the XML file name and the schema name as parameters:

/// XMLSchema – imported schema name, or the full path of the schema on disk, e.g., /xml/xsd/patient.xsd
ClassMethod XMLSchemaValidation(xmlFileName As %String, XMLSchema As %String = "")
{
	Set ediXMLdoc1 = ##class(EnsLib.EDI.XML.Document).ImportFromFile(xmlFileName,,.sc)
	If $$$ISERR(sc) Quit sc
	Set ediXMLdoc1.DocType=XMLSchema
	Return ediXMLdoc1.Validate()
}


Embedded Python example:

Class pySamples.XML Extends %RegisteredObject
{
ClassMethod GetError(er)
{
    Return $SYSTEM.Status.GetErrorText(er)
}

ClassMethod pyXMLSchemaValidation(xmlFileName = "C:\\hl7msg\\test.xml", XMLSchema = "Patient") [ Language = python ]
{
    import iris

    xml_status = iris.ref()
    ediXMLdoc = iris.cls("EnsLib.EDI.XML.Document").ImportFromFile(xmlFileName,1,xml_status)
    if xml_status.value!=1:
        print("XML Parsing error: ",iris.cls(__name__).GetError(xml_status))
    else:
        print(ediXMLdoc)
}
}

Getting the schema from an object:

Set object = ##class(MyApp.Messages.PatientXML).%New() ; replace with your own class
Set XMLSchema = object.XMLSchema() ; this will return the XML schema for this class

Before exploring SDA and other healthcare information standards, let's briefly look at CDA.

Clinical Document Architecture (CDA)

The Clinical Document Architecture (CDA) is a healthcare standard developed by HL7 for electronic clinical documents. It defines how these documents are structured, encoded, and exchanged to ensure readability by both humans and machines.

CDA is an XML-based standard used to represent clinical documents such as:

  • Discharge summaries
  • Progress notes
  • Referral letters
  • Imaging or laboratory reports

A CDA document is generally wrapped in a <ClinicalDocument> element and has two main parts: a header and a body.

1. Header (required): it sits between the <ClinicalDocument> and <structuredBody> elements and contains the document's metadata, describing what the document contains, who created it, when, why, and where.

Key elements in the header:

  • Patient demographics (recordTarget), authors (clinicians, systems), custodian (the responsible organization), document type and template IDs, encounter information, and the legal authenticator.

2. Body (required): it contains the clinical content, wrapped in the <structuredBody> element, and can be unstructured or composed of structured markup. It is typically divided further into recursively nested document sections:

  • Unstructured: free-form text, possibly with attachments (such as PDFs).
  • Structured: XML sections with coded entries for allergies, medications, problems, procedures, lab results, and so on.

<ClinicalDocument>
  ... CDA Header ...
  <structuredBody>
    <section>
      <text>...</text>
      <observation>...</observation>
      <substanceAdministration>
        <supply>...</supply>
      </substanceAdministration>
      <observation>
        <externalObservation>...</externalObservation>
      </observation>
    </section>
    <section>
      <section>...</section>
    </section>
  </structuredBody>
</ClinicalDocument>

Converting CCDA to HL7

In InterSystems IRIS, converting CCDA (Consolidated Clinical Document Architecture) documents to HL7 v2 messages is a common interoperability use case. While you can still use a direct, single-step DTL (Data Transformation Language) mapping, we recommend the intermediate data format called SDA (Summary Document Architecture) as the most robust approach.

Step 1: C-CDA to SDA (XSLT)

The first step is to transform the incoming C-CDA document into SDA objects. (SDA is a vendor-neutral clinical data model that simplifies the representation of clinical information.)

  • Why use SDA? C-CDA is a complex, hierarchical XML structure with many templates and sections. Trying to map it directly to the flat, segment-based layout of an HL7 v2 message is extremely difficult and often requires convoluted, brittle logic. SDA acts as a simplified intermediate model that extracts the essential clinical data from the C-CDA, avoiding the complexity of the XML structure.
  • How does it work? InterSystems IRIS ships with a library of prebuilt XSLT files (typically located in the install-dir\CSP\xslt\SDA3 directory) for transforming C-CDA to SDA. The transformation is usually invoked from a business process or business operation that calls the appropriate XSLT.

All InterSystems healthcare products include an XSLT library for transforming CDA documents to SDA and vice versa. You can browse the available root-level XSLTs in install-dir\CSP\xslt\.

For example, the CCDA-to-SDA transformations include the following (a rough invocation sketch follows the list):

  • Consolidated CDA 1.1: CCD-to-SDA and CCDAv21-to-SDA transformations.
  • Consolidated CDA 2.1: CCD-to-SDA and SDA-to-C32v25 transformations.
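A rough sketch of applying one of these packaged XSLTs with %XML.XSLT.Transformer from embedded Python; the file names, the XSLT path, and the exact transform file are illustrative and depend on your installation and product version:

import iris

source = "C:\\data\\ccd_input.xml"  # incoming C-CDA document
xslt = "C:\\InterSystems\\IRIS\\CSP\\xslt\\SDA3\\CCDA-to-SDA.xsl"  # illustrative path
output = "C:\\data\\sda_output.xml"  # resulting SDA XML
status = iris.cls("%XML.XSLT.Transformer").TransformFile(source, xslt, output)
if iris.cls("%SYSTEM.Status").IsError(status):
    print(iris.cls("%SYSTEM.Status").GetErrorText(status))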

Converting to and from SDA

Once the XML is loaded into a class object, it is ready for transformation. At this point, you should create a custom DTL that maps your data structure into HS.SDA3.Container or the specific HS.SDA3.* classes to build the SDA document.

FHIR

Use the data transformations built into IRIS to convert SDA. You can refer to the articles on FHIR transformations.

HL7 V2

  • You can convert an HL7 message to SDA programmatically with the class method HS.Gateway.HL7.HL7ToSDA3.GetSDA(), for example: ##class(HS.Gateway.HL7.HL7ToSDA3).GetSDA(pRequest,.tSDA).
  • Note: there is currently no programmatic method for converting directly from SDA back to HL7 v2.

Key Classes, Tables, Globals, and Links

  • %XML.*.cls: all XML-related classes can be found in this package.
  • Ens.Util.XML.Validator: contains utility methods for validating XML.
  • EnsLib.EDI.XML.Service.FileService: a business service host for XML files.
  • %XML.XSLT.Transformer: performs XSLT transformations.
  • %XML.Writer: writes XML documents programmatically.
  • %XML.Reader: reads XML documents into correlated objects.
  • %XML.Adaptor: lets a class be projected to and from XML.
  • %XML.Document: represents an XML document as a DOM.
  • %XML.Schema: represents XML schemas.
  • %XML.String: a datatype class for XML strings.
  • %XML.SAX.Parser: a SAX-based XML parser.
  • HS.SDA3.Container: the main SDA container class.
  • HS.SDA3.*.cls: the SDA classes.

Tables

  • EnsLib_EDI_XML.Document: stores EDI XML documents.

Globals

  • ^EnsEDI.XML.Schema: stores the XSD schemas.
  • ^EnsLib.EDI.XML.DocumentD: holds the data for the EnsLib_EDI_XML.Document table.

This article has provided an overview of XML fundamentals.


On Cloud Schuhe 2025: The Future of Comfort, Style, and Performance

In the global sneaker landscape, few brands have reshaped expectations quite like On Cloud Schuhe. What began as an innovative Swiss running shoe concept has grown into one of the most recognizable and respected footwear names in the world. In 2025, On has secured its place not only among professional athletes and runners but also among everyday consumers who value comfort, sleek design, and performance-driven engineering.

At the heart of this success lies the brand’s flagship category: On Schuhe. And within that, one segment in particular — On Schuhe Damen — has become a worldwide favorite. Women are increasingly choosing On shoes as their go-to footwear for commuting, exercising, traveling, and everyday styling. The combination of function and fashion makes them unique in today’s competitive shoe market.

This article explores why On Schuhe and especially On Schuhe Damen are dominating in 2025, how models like the On Cloud 5 shape the brand’s identity, and what continues to set On apart from the rest.


The Evolution of On Schuhe: Swiss Technology for Everyday Life

On Running, now widely referred to simply as “On,” was created to revolutionize running comfort. The brand’s founders introduced CloudTec®, a cushioning technology made from hollow pods (“clouds”) on the sole that compress under impact and expand as you push off. It’s this mechanism that gives On Schuhe their iconic “walking on clouds” sensation.

Why On Schuhe Are So Popular in 2025

  • Lightweight construction reduces fatigue during long days
  • Responsive performance suits both athletes and everyday users
  • Minimalist Swiss aesthetic works with modern lifestyle outfits
  • Breathable materials keep feet cool all day
  • Improved sustainability, with many models containing 40–50% recycled materials

In 2025, On shoes have become a top choice not only for running enthusiasts but also for commuters, students, travelers, and professionals who spend hours on their feet.


On Schuhe Damen: The Most In-Demand Category of the Brand

While On appeals to all genders, On Schuhe Damen consistently lead global sales. Women appreciate the thoughtful design, ergonomic comfort, and stylish versatility that On integrates into each model.

Why On Schuhe Damen Dominate in 2025

1. All-Day Comfort for Active Women

Women wear their shoes across a wide range of activities — from workouts to work, from travel to casual outings. On Schuhe Damen offer support for each step throughout the day thanks to CloudTec® cushioning and precise foot ergonomics.

2. Elegant, Modern Styles

On’s clean aesthetic has become synonymous with understated luxury. Popular women’s colors for 2025 include:

  • Rose and soft pink
  • Pearl white
  • Tide, glacier, and mint
  • Sand and beige neutrals
  • Classic black and white

These looks pair easily with jeans, leggings, dresses, and athleisure outfits, making On Schuhe Damen ideal for women who want one shoe that works for everything.

3. Wide Range of Fits

On’s women’s models offer various fits to accommodate narrow, standard, and slightly wider feet. This inclusive design philosophy plays a major role in the brand’s success among women who previously struggled with shoe comfort in other athletic brands.

4. Stylish Yet Functional

Every pair of On Schuhe Damen blends beauty with practicality. The shoes are lightweight, breathable, and durable — perfect for busy days, long walks, or light exercise.


On Cloud 5: The Icon of the On Collection

No discussion of On shoes would be complete without the legendary On Cloud 5, one of the best-selling models in the entire On lineup. In 2025, it remains a universal favorite for both women and men.

Why the On Cloud 5 Remains Popular

  • Enhanced CloudTec® system for smoother steps
  • Speedboard® technology for a more energetic toe-off
  • Upgraded recycled materials, pushing sustainability forward
  • Speed-lacing system for easy slip-on wear
  • A lightweight, breathable upper ideal for everyday performance
  • Stylish silhouette that complements any outfit

The On Schuhe Damen Cloud 5 is particularly beloved for its versatile design that transitions seamlessly from gym sessions to office commutes to weekend getaways.


On Schuhe Herren: Performance and Style for Modern Men

Although this article emphasizes women’s shoes, On Schuhe Herren also play a major role in the brand’s global presence. Men appreciate On for its durability, stability, and modern style.

Popular men’s models in 2025 include:

  • Cloud 5
  • Cloudrunner
  • Cloudswift
  • Cloudmonster
  • Cloudstratus

For men who want one pair of shoes for everything — running, daily wear, or travel — On Schuhe Herren deliver with consistency and reliability.


Why On Cloud Schuhe Work for Every Lifestyle

Part of the brand’s appeal is that On Cloud Schuhe are not limited to athletes. Their comfort and design make them perfect for everyday use. Many people wear them for hours without discomfort, even on long commutes or active days.

Benefits of On Cloud Schuhe for Daily Life

  • Significantly reduced foot fatigue
  • Breathable and lightweight, ideal for warm climates
  • Modern fashion-forward design
  • Supportive yet flexible construction
  • Easy to pair with casual and active outfits

From students and professionals to travelers and fitness enthusiasts, On shoes offer comfort that fits naturally into daily routines.


How to Choose the Perfect On Schuhe Damen

When selecting On Schuhe Damen, consider:

Primary Use

  • Running
  • Walking/commuting
  • Gym training
  • Everyday lifestyle

Cushioning Level

  • Cloud 5 – light, everyday comfort
  • Cloudrunner – stable, supportive
  • Cloudmonster – maximal cushioning
  • Cloudstratus – long-distance responsiveness

Fit

Most women find On true to size, but some prefer a half-size up for extra toe room.

Color Choices

Neutral tones offer versatility; bold colors add flair.


Conclusion: On Schuhe and On Schuhe Damen Define 2025 Footwear Trends

In 2025, On Cloud Schuhe represent the ideal combination of performance, innovation, and aesthetics. With advanced cushioning, lightweight construction, and stylish designs, On Schuhe continue to dominate the global sneaker market.


Scripting with .NET 10 and the IRIS SDK

One of the newest features of .NET 10 with C# 14 is file-based apps. This feature allows you to execute C# code in a simple .cs file without needing to create a solution, a project, or any of the related structure.

For example, you can create a script.cs file using Notepad with the content:

Console.WriteLine("This is a script in C#.");

Then in the command line or the terminal you execute the command:

dotnet run script.cs

There is plenty of information about this new feature of .NET 10. To work with IRIS, we can make use of the option to add NuGet package references directly inside the file. For example, to add the NuGet package for InterSystems IRIS, you include the following lines at the top:

#:package InterSystems.Data.IRISClient@2.5.0 

using InterSystems.Data.IRISClient;
using InterSystems.Data.IRISClient.ADO;

This allows the file to include the IRIS NuGet package and use the IRIS SDK. For example, below is a .cs file with a script that checks the status of an InterSystems interoperability production:

#:package InterSystems.Data.IRISClient@2.5.0

using InterSystems.Data.IRISClient;
using InterSystems.Data.IRISClient.ADO;

//This script expects the namespace to connect
string irisNamespace = string.Empty;
if (args.Length > 0)
{
    irisNamespace = args[0];
}

if (string.IsNullOrEmpty(irisNamespace))
{
    Console.WriteLine("Please indicate the namespace to connect");
    return;
}

//Open a connection to InterSystems IRIS
IRISConnection conn;
IRIS iris;

try
{
    conn = new IRISConnection();
    conn.ConnectionString = $"Server = 127.0.0.1;Port = 1972; Namespace = {irisNamespace.ToUpper()}; Password = SYS; User ID = _system;";
    conn.Open();
    iris = IRIS.CreateIRIS(conn);
}
catch (Exception ex)
{
    Console.WriteLine($"Cannot connect to the interoperability server. Error message: {ex.Message} ");
    return;
}

try
{

    bool? isInteroperabilityEnabledNamespace = iris.ClassMethodBool("%Library.EnsembleMgr", "IsEnsembleNamespace");
    if (isInteroperabilityEnabledNamespace ?? false)
    {
        //The valid values are specified in the documentation
        //https://docs.intersystems.com/irislatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=ENSLIB&CLASSNAME=Ens.Director#METHOD_GetProductionStatus
        //And the numeric values can be found in the include file EnsConstants.inc
        Func<decimal, string> GetStateDescription = state => state switch
        {
            0 => "Unknown",
            1 => "Running",
            2 => "Stopped",
            3 => "Suspended",
            4 => "Troubled",
            5 => "Network stopped",
            _ => "Unknown state"
        };
        string currentInteroperabilityProduction = string.Empty;
        decimal currentInteroperabilityProductionState = 0;

        IRISReference currentProductionName = new IRISReference("");
        IRISReference currentProductionState = new IRISReference("");

        var status = iris.ClassMethodObject("Ens.Director", "GetProductionStatus", currentProductionName, currentProductionState, 2, 1);

        if (status.ToString() == "1")
        {
            currentInteroperabilityProduction = currentProductionName.GetValue()?.ToString() ?? "";
            currentInteroperabilityProductionState = currentProductionState.GetDecimal() ?? 0;
        }
        if (string.IsNullOrEmpty(currentInteroperabilityProduction))
        {
            //In the case the production is stopped, the call to GetProductionStatus doesn't return the production name
            //in this case we try to get the active production name
            currentInteroperabilityProduction = iris.ClassMethodString("Ens.Director", "GetActiveProductionName");
        }

        Console.WriteLine($"Active production in this namespace: {currentInteroperabilityProduction}");

        Console.WriteLine($"Production State: {GetStateDescription(currentInteroperabilityProductionState)}");

    }
    else
    {
        Console.WriteLine("The namespace is not enabled for interoperability");
    }
}
catch (Exception ex)
{
    Console.WriteLine($"Error checking the state of the production in the namespace {irisNamespace}:{ex.Message}");
}
finally
{
    iris.Dispose();
    conn.Close();
    conn.Dispose();
}

Running this file with the parameter indicating the namespace in which you want to check the status of the production will verify if there is any production on the namespace and its current status:

PS C:\IrisScripting> dotnet run ScriptTest.cs INTBUS
Active production in this namespace: Integrations BUS
Production State: Running
PS C:\IrisScripting>

This new feature opens an interesting new way to run scripts or small programs that automate tasks from the command line.


Step-by-Step Guide: Setting Up RAG for Gen AI Agents Using IRIS Vector DB in Python

How to set up RAG for OpenAI agents using IRIS Vector DB in Python

In this article, I’ll walk you through an example of using InterSystems IRIS Vector DB to store embeddings and integrate them with an OpenAI agent.

To demonstrate this, we’ll create an OpenAI agent with knowledge of InterSystems technology. We’ll achieve this by storing embeddings of some InterSystems documentation in IRIS and then using IRIS vector search to retrieve relevant content—enabling a Retrieval-Augmented Generation (RAG) workflow.

Note: Section 1 details how to process text into embeddings. If you are only interested in IRIS vector search, you can skip ahead to Section 2.

 

Section 1: Embedding Data

Your embeddings are only as good as your data! To get the best results, you should prepare your data carefully. This may include:

  • Cleaning the text (removing special characters or excess whitespace)
  • Chunking the data into smaller pieces
  • Other preprocessing techniques

For this example, the documentation is stored in simple text files that require minimal cleaning. However, we will divide the text into chunks to enable more efficient and accurate RAG.

 

Step 1: Chunking Text Files

Chunking text into manageable pieces benefits RAG systems in two ways:

  1. More accurate retrieval – embeddings represent smaller, more specific sections of text.
  2. More efficient retrieval – less text per query reduces cost and improves performance.

For this example, we’ll store the chunked text in Parquet files before uploading to IRIS (though you can use any approach, including direct upload).

 

Chunking Function

We’ll use RecursiveCharacterTextSplitter from langchain_text_splitters to split text strategically based on paragraph, sentence, and word boundaries.

  • Chunk size: 300 tokens (larger chunks provide more context but increase retrieval cost)
  • Chunk overlap: 50 tokens (helps maintain context across chunks)

from langchain_text_splitters import RecursiveCharacterTextSplitter

def chunk_text_by_tokens(text: str, chunk_size: int, chunk_overlap: int) -> list[str]:
    """
    Chunk text prioritizing paragraph and sentence boundaries using
    RecursiveCharacterTextSplitter. Returns a list of chunk strings.
    """
    splitter = RecursiveCharacterTextSplitter(
        # Prioritize larger semantic units first, then fall back to smaller ones
        separators=["\n\n", "\n", ". ", " ", ""],
        chunk_size=chunk_size,
        chunk_overlap=chunk_overlap,
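        # NOTE: len measures characters, not tokens; for a strict token budget,
        # a tokenizer-based length function could be supplied instead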
        length_function=len,
        is_separator_regex=False,
    )
    return splitter.split_text(text)

Next, we’ll use the chunking function to process one text file at a time and apply a tiktoken encoder to calculate token counts and generate metadata. This metadata will be useful later when creating embeddings and storing them in IRIS.

from pathlib import Path
import tiktoken

def chunk_file(path: Path, chunk_size: int, chunk_overlap: int, encoding_name: str = "cl100k_base") -> list[dict]:
    """
    Read a file, split its contents into token-aware chunks, and return metadata for each chunk.
    Returns a list of dicts with keys:
    - filename
    - relative_path
    - absolute_path
    - chunk_index
    - chunk_text
    - token_count
    - modified_time
    - size_bytes
    """
    p = Path(path)
    if not p.exists() or not p.is_file():
        raise FileNotFoundError(f"File not found: {path}")
    try:
        text = p.read_text(encoding="utf-8", errors="replace")
    except Exception as e:
        raise RuntimeError(f"Failed to read file {p}: {e}")
    # Prepare tokenizer for accurate token counts
    try:
        encoding = tiktoken.get_encoding(encoding_name)
    except Exception as e:
        raise ValueError(f"Invalid encoding name '{encoding_name}': {e}")
    # Create chunks using provided chunker
    chunks = chunk_text_by_tokens(text, chunk_size, chunk_overlap)
    # File metadata
    stat = p.stat()
    from datetime import datetime, timezone
    modified_time = datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat()
    absolute_path = str(p.resolve())
    try:
        relative_path = str(p.resolve().relative_to(Path.cwd()))
    except Exception:
        relative_path = p.name
    # Build rows
    rows: list[dict] = []
    for idx, chunk in enumerate(chunks):
        token_count = len(encoding.encode(chunk))
        rows.append({
            "filename": p.name,
            "relative_path": relative_path,
            "absolute_path": absolute_path,
            "chunk_index": idx,
            "chunk_text": chunk,
            "token_count": token_count,
            "modified_time": modified_time,
            "size_bytes": stat.st_size,
        })
    return rows
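
Note that Step 3 below calls a load_documentation_to_parquet helper that isn't shown in the original listing. A minimal sketch, assuming it simply applies chunk_file to every .txt file under a directory and writes the combined rows to a single Parquet file:

import pandas as pd
from pathlib import Path

def load_documentation_to_parquet(input_dir: Path, output_file: Path,
                                  chunk_size: int, chunk_overlap: int,
                                  encoding_name: str = "cl100k_base") -> None:
    """Chunk every .txt file under input_dir and write all rows to one Parquet file."""
    rows: list[dict] = []
    for path in sorted(Path(input_dir).rglob("*.txt")):
        rows.extend(chunk_file(path, chunk_size, chunk_overlap, encoding_name))
    if not rows:
        raise ValueError(f"No .txt files found under {input_dir}")
    pd.DataFrame(rows).to_parquet(output_file, index=False)
    print(f"Wrote {len(rows)} chunks to {output_file}")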

Step 2: Creating embeddings

You can generate embeddings using cloud providers (e.g., OpenAI) or local models via Ollama (e.g., nomic-embed-text). In this example, we’ll use OpenAI’s text-embedding-3-small model to embed each chunk and save the results back to Parquet for later ingestion into IRIS Vector DB.

import os
import sys

from openai import OpenAI
import pandas as pd

def embed_and_save_parquet(input_parquet_path: str, output_parquet_path: str):
    """
    Loads a Parquet file, creates embeddings for the 'chunk_text' column using
    OpenAI's small embedding model, and saves the result to a new Parquet file.
    Args:
        input_parquet_path (str): Path to the input Parquet file containing 'chunk_text'.
        output_parquet_path (str): Path to save the new Parquet file with embeddings.
    """
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        print("ERROR: OPENAI_API_KEY environment variable is not set.", file=sys.stderr)
        sys.exit(1)
    try:
        # Load the Parquet file
        df = pd.read_parquet(input_parquet_path)
        # Initialize OpenAI client
        client = OpenAI(api_key=key)
        # Generate embeddings for each chunk_text
        embeddings = []
        for text in df['chunk_text']:
            response = client.embeddings.create(
                input=text,
                model="text-embedding-3-small"  # Using the small embedding model
            )
            embeddings.append(response.data[0].embedding)
        # Add embeddings to the DataFrame
        df['embedding'] = embeddings
        # Save the new DataFrame to a Parquet file
        df.to_parquet(output_parquet_path, index=False)
        print(f"Embeddings generated and saved to {output_parquet_path}")
    except FileNotFoundError:
        print(f"Error: Input file not found at {input_parquet_path}")
    except KeyError:
        print("Error: 'chunk_text' column not found in the input Parquet file.")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")

Step 3: Put the data processing together

Now it’s time to run the pipeline. In this example, we’ll load and chunk the Business Service documentation, generate embeddings, and write the results to Parquet for IRIS ingestion.

CHUNK_SIZE_TOKENS = 300
CHUNK_OVERLAP_TOKENS = 50
ENCODING_NAME = "cl100k_base"
current_file_path = Path(__file__).resolve()

load_documentation_to_parquet(input_dir=current_file_path.parent / "Documentation" / "BusinessService",
                              output_file=current_file_path.parent / "BusinessService.parquet",
                              chunk_size=CHUNK_SIZE_TOKENS,
                              chunk_overlap=CHUNK_OVERLAP_TOKENS,
                              encoding_name=ENCODING_NAME)
embed_and_save_parquet(input_parquet_path=current_file_path.parent / "BusinessService.parquet",
                       output_parquet_path=current_file_path.parent / "BusinessService_embedded.parquet")

A row in our final business service parquet file will look something like this:

{"filename":"FileInboundAdapters.txt","relative_path":"Documentation\\BusinessService\\Adapters\\FileInboundAdapters.txt","absolute_path":"C:\\Users\\…\\Documentation\\BusinessService\\Adapters\\FileInboundAdapters.txt","chunk_index":0,"chunk_text":"Settings for the File Inbound Adapter\nProvides reference information for settings of the file inbound adapter, EnsLib.File.InboundAdapterOpens in a new tab. You can configure these settings after you have added a business service that uses this adapter to your production.\nSummary","token_count":52,"modified_time":"2025-11-25T18:34:16.120336+00:00","size_bytes":13316,"embedding":[-0.02851865254342556,0.01860344596207142,…,0.0135544464207155]}

Section 2: Using IRIS Vector Search

 

Step 4: Upload Your Embeddings to IRIS

Choose the IRIS namespace and table name you’ll use to store embeddings. (The script below will create the table if it doesn’t already exist.) Then use the InterSystems IRIS Python DB-API driver to insert the chunks and their embeddings.

The function below reads a Parquet file containing chunk text and embeddings, normalizes the embedding column to a JSON-serializable list of floats, connects to IRIS, creates the destination table if it doesn’t exist (with a VECTOR(FLOAT, 1536) column, where 1536 is the number of dimensions in the embedding), and then inserts each row using TO_VECTOR(?) in a parameterized SQL statement. It commits the transaction on success, logs progress, and cleans up the connection, rolling back on database errors.

import iris  # The InterSystems IRIS Python DB-API driver 
import pandas as pd
import numpy as np
import json
from pathlib import Path


# --- Configuration ---
PARQUET_FILE_PATH = "your_embeddings.parquet"
IRIS_HOST = "localhost"
IRIS_PORT = 8881
IRIS_NAMESPACE = "VECTOR"
IRIS_USERNAME = "superuser"
IRIS_PASSWORD = "sys"
TABLE_NAME = "AIDemo.Embeddings" # Must match the table created in IRIS
EMBEDDING_DIMENSIONS = 1536 # Must match the dimensions for the embeddings you used
def upload_embeddings_to_iris(parquet_path: str):
    """
    Reads a Parquet file with 'chunk_text' and 'embedding' columns 
    and uploads them to an InterSystems IRIS vector database table.
    """
    # 1. Load data from the Parquet file using pandas
    try:
        df = pd.read_parquet(parquet_path)
        if 'chunk_text' not in df.columns or 'embedding' not in df.columns:
            print("Error: Parquet file must contain 'chunk_text' and 'embedding' columns.")
            return
    except FileNotFoundError:
        print(f"Error: The file at {parquet_path} was not found.")
        return
    # Ensure embeddings are in a format compatible with TO_VECTOR function (list of floats)
    # Parquet often saves numpy arrays as lists
    if isinstance(df['embedding'].iloc[0], np.ndarray):
        df['embedding'] = df['embedding'].apply(lambda x: x.tolist())
    print(f"Loaded {len(df)} records from {parquet_path}.")
    # 2. Establish connection to InterSystems IRIS
    connection = None
    try:
        conn_string = f"{IRIS_HOST}:{IRIS_PORT}/{IRIS_NAMESPACE}"
        connection = iris.connect(conn_string, IRIS_USERNAME, IRIS_PASSWORD)
        cursor = connection.cursor()
        print("Successfully connected to InterSystems IRIS.")
        # Create embedding table if it doesn't exist
        cursor.execute(f"""
            CREATE TABLE IF NOT EXISTS  {TABLE_NAME} (
            ID INTEGER IDENTITY PRIMARY KEY,
            chunk_text VARCHAR(2500), embedding VECTOR(FLOAT, {EMBEDDING_DIMENSIONS})
            )"""
        )
        # 3. Prepare the SQL INSERT statement
        # InterSystems IRIS uses the TO_VECTOR function for inserting vector data via SQL
        insert_sql = f"""
        INSERT INTO {TABLE_NAME} (chunk_text, embedding) 
        VALUES (?, TO_VECTOR(?))
        """
        # 4. Iterate and insert data
        count = 0
        for index, row in df.iterrows():
            text = row['chunk_text']
            # Convert the list of floats to a JSON string, which is required by TO_VECTOR when using DB-API
            vector_json_str = json.dumps(row['embedding']) 
            
            cursor.execute(insert_sql, (text, vector_json_str))
            count += 1
            if count % 100 == 0:
                print(f"Inserted {count} rows...")
        
        # Commit the transaction
        connection.commit()
        print(f"Data upload complete. Total rows inserted: {count}.")
    except iris.DBAPIError as e:
        print(f"A database error occurred: {e}")
        if connection:
            connection.rollback()
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
    finally:
        if connection:
            connection.close()
            print("Database connection closed.")

Example usage:

current_file_path = Path(__file__).resolve()
upload_embeddings_to_iris(current_file_path.parent / "BusinessService_embedded.parquet")

Step 5: Create your embedding search functionality

Next, we’ll create a search function that embeds the user’s query, runs a vector similarity search in IRIS via the Python DB-API, and returns the top-k matching chunks from our embeddings table.

The example function below embeds the user’s query with the same OpenAI model used for the stored chunks and then runs the similarity search in IRIS. After establishing a connection, it converts the query embedding into the comma-separated string form that TO_VECTOR expects, executes a SELECT TOP query ordered by VECTOR_DOT_PRODUCT between the stored embeddings and the query vector, and collects the matching rows. Finally, it handles database errors and cleans up the connection before returning the results.

import iris
from typing import List
import os
from openai import OpenAI

# --- Configuration ---
IRIS_HOST = "localhost"
IRIS_PORT = 8881
IRIS_NAMESPACE = "VECTOR"
IRIS_USERNAME = "superuser"
IRIS_PASSWORD = "sys"
TABLE_NAME = "AIDemo.Embeddings" # Must match the table created in IRIS
EMBEDDING_DIMENSIONS = 1536
MODEL = "text-embedding-3-small"
def get_embedding(text: str, model: str, client) -> List[float]:
    # Normalize newlines and coerce to str
    payload = [("" if text is None else str(text)).replace("\n", " ")]
    resp = client.embeddings.create(model=model, input=payload, encoding_format="float")
    return resp.data[0].embedding
def search_embeddings(search: str, top_k: int):
    print("-------RAG--------")
    print(f"Searching IRIS vector store for: {search}")
    key = os.getenv("OPENAI_API_KEY")
    client = OpenAI(api_key=key)
    results = []
    # Establish connection to InterSystems IRIS
    connection = None
    try:
        conn_string = f"{IRIS_HOST}:{IRIS_PORT}/{IRIS_NAMESPACE}"
        connection = iris.connect(conn_string, IRIS_USERNAME, IRIS_PASSWORD)
        cursor = connection.cursor()
        print("Successfully connected to InterSystems IRIS.")
        # Embed the query for searching
        emb_raw = str(get_embedding(search, model=MODEL, client=client))
        # Convert "[0.1, 0.2, ...]" into a comma-separated list of floats for TO_VECTOR
        emb_values = []
        for x in emb_raw.replace('[', '').replace(']', '').split(','):
            try:
                emb_values.append(str(float(x.strip())))
            except ValueError:
                continue
        emb_str = ", ".join(emb_values)
        # Prepare the SQL SELECT statement
        search_sql = f"""
        SELECT TOP {top_k} ID, chunk_text FROM {TABLE_NAME}
        ORDER BY VECTOR_DOT_PRODUCT((embedding), TO_VECTOR(('{emb_str}'), FLOAT)) DESC
        """
        cursor.execute(search_sql)
        row = cursor.fetchone()
        while row is not None:
            results.append(row[:])
            row = cursor.fetchone()
    except iris.DBAPIError as e:
        print(f"A database error occurred: {e}")
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
    finally:
        if connection:
            connection.close()
            print("Database connection closed.")
        print("------------RAG Finished-------------")
    return results
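
A quick smoke test once data has been uploaded (the query text is hypothetical):

for doc_id, chunk in search_embeddings("file inbound adapter settings", top_k=3):
    print(doc_id, chunk[:80])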

Step 6: Add RAG context to your agent

Now that you’ve:

  • Chunked and embedded your documentation,
  • Uploaded embeddings to IRIS and created a vector index,
  • Built a search function for IRIS vector queries,

it’s time to put it all together into an interactive Retrieval-Augmented Generation (RAG) chat using the OpenAI Responses API. For this example, we will give the agent access to the search function directly (for finer-grained control of the agent), but this could also be done with a library such as LangChain.

First, you will need to create your instructions for the agent, making sure to give it access to the search function:

 

import os
# ---------------------------- Configuration ----------------------------
MODEL = os.getenv("OPENAI_RESPONSES_MODEL", "gpt-5-nano")
SYSTEM_INSTRUCTIONS = (
    "You are a helpful assistant that answers questions about InterSystems "
    "business services and related integration capabilities. You have access "
    "to a vector database of documentation chunks about business services. "
    "\n\n"
    "Use the `search_business_docs` tool whenever the user asks about specific "
    "settings, configuration options, or how to perform tasks with business "
    "services. Ground your answers in the retrieved context, quoting or "
    "summarizing relevant chunks. If nothing relevant is found, say so "
    "clearly and answer from your general knowledge with a disclaimer."
)

# ---------------------------- Tool Definition ----------------------------
TOOLS = [
    {
        "type": "function",
        "name": "search_business_docs",
        "description": (
            "Searches a vector database of documentation chunks related to "
            "business services and returns the most relevant snippets."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": (
                        "Natural language search query describing what you want "
                        "to know about business services."
                    ),
                },
                "top_k": {
                    "type": "integer",
                    "description": (
                        "Maximum number of results to retrieve from the vector DB."
                    ),
                    "minimum": 1,
                    "maximum": 10,
                },
            },
            "required": ["query", "top_k"],
            "additionalProperties": False,
        },
        "strict": True,
    }
]

Now we need a small “router” method to let the model actually use our RAG tool.

call_rag_tool(name, args) receives a function call emitted by the OpenAI Responses API and routes it to our local implementation (the search_business_docs tool that wraps Search.search_embeddings). It takes the model’s query and top_k, runs the IRIS vector search, and returns a JSON-encoded payload of the top matches (IDs and text snippets). This stringified JSON is important because the Responses API expects tool outputs as strings; by formatting the results predictably, we make it easy for the model to ground its final answer in the retrieved documentation. If an unknown tool name is requested, the function returns an error payload so the model can handle it gracefully.

import json
from typing import Any, Dict, List

def call_rag_tool(name: str, args: Dict[str, Any]) -> str:
    """Route function calls from the model to our local Python implementations.
    Currently only supports the `search_business_docs` tool, which wraps
    `Search.search_embeddings`.
    The return value must be a string. We will JSON-encode a small structure
    so the model can consume the results reliably.
    """
    if name == "search_business_docs":
        query = args.get("query", "")
        top_k = args.get("top_k", "")
        results = search_embeddings(query, top_k)
        # Expecting each row to be something like (ID, chunk_text)
        formatted: List[Dict[str, Any]] = []
        for row in results:
            if not row:
                continue
            # Be defensive in case row length/structure changes
            doc_id = row[0] if len(row) > 0 else None
            text = row[1] if len(row) > 1 else None
            formatted.append({"id": doc_id, "text": text})
        payload = {"query": query, "results": formatted}
        return json.dumps(payload, ensure_ascii=False)
    # Unknown tool; return an error-style payload
    return json.dumps({"error": f"Unknown tool name: {name}"})

Now that we have our RAG tool, we can start work on the chat loop logic. First, we need a helper to reliably pull the model’s final answer and any tool outputs from the OpenAI Responses API. extract_answer_and_sources(response) walks the response.output items containing the model’s outputs and concatenates them into a single answer string. It also collects the function_call_output payloads (the JSON we returned from our RAG tool), parses them, and exposes them as tool_context for transparency and debugging. The function parses the model output into a compact structure: {"answer": ..., "tool_context": [...]}.

def extract_answer_and_sources(response: Any) -> Dict[str, Any]:
    """Extract a structured answer and optional sources from a Responses API object.
    We don't enforce a global JSON response schema here. Instead, we:
    - Prefer the SDK's `output_text` convenience when present
    - Fall back to concatenating any `output_text` content parts
    - Also surface any tool-call-output payloads we got back this turn as
      `tool_context` for debugging/inspection.
    """
    answer_text = ""
    # Preferred: SDK convenience
    if hasattr(response, "output_text") and response.output_text:
        answer_text = response.output_text
    else:
        # Fallback: walk output items
        parts: List[str] = []
        for item in getattr(response, "output", []) or []:
            if getattr(item, "type", None) == "message":
                for c in getattr(item, "content", []) or []:
                    if getattr(c, "type", None) == "output_text":
                        parts.append(getattr(c, "text", ""))
        answer_text = "".join(parts)
    # Collect any function_call_output items for visibility
    tool_context: List[Dict[str, Any]] = []
    for item in getattr(response, "output", []) or []:
        if getattr(item, "type", None) == "function_call_output":
            try:
                tool_context.append({
                    "call_id": getattr(item, "call_id", None),
                    "output": json.loads(getattr(item, "output", "")),
                })
            except Exception:
                tool_context.append({
                    "call_id": getattr(item, "call_id", None),
                    "output": getattr(item, "output", ""),
                })
    return {"answer": answer_text.strip(), "tool_context": tool_context}

With the help of extract_answer_and_sources we can build the whole chat loop to orchestrate a two-phase, tool-calling conversation with the OpenAI Responses API. The chat_loop() function runs an interactive CLI: it collects the user’s question, sends a first request with system instructions and the search_business_docs tool, and then inspects any function_call items the model emits. For each function call, it executes our local RAG tool (call_rag_tool, which wraps search_embeddings) and appends the result back to the conversation as a function_call_output. It then makes a second request asking the model to use those tool outputs to produce a grounded answer, parses that answer via extract_answer_and_sources, and prints it. The loop maintains running context in input_items so each turn can build on prior messages and tool results.

def chat_loop() -> None:
    """Run an interactive CLI chat loop using the OpenAI Responses API.
    The loop supports multi-step tool-calling:
    - First call may return one or more `function_call` items
    - We execute those locally (e.g., call search_embeddings)
    - We send the tool outputs back in a second `responses.create` call
    - Then we print the model's final, grounded answer
    """
    key = os.getenv("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set in the environment.")
    client = OpenAI(api_key=key)
    print("\nBusiness Service RAG Chat")
    print("Type 'exit' or 'quit' to stop.\n")
    # Running list of inputs (messages + tool calls + tool outputs) for context
    input_items: List[Dict[str, Any]] = []
    while True:
        user_input = input("You: ").strip()
        if not user_input:
            continue
        if user_input.lower() in {"exit", "quit"}:
            print("Goodbye.")
            break
        # Add user message
        input_items.append({"role": "user", "content": user_input})
        # 1) First call: let the model decide whether to call tools
        response = client.responses.create(
            model=MODEL,
            instructions=SYSTEM_INSTRUCTIONS,
            tools=TOOLS,
            input=input_items,
        )
        # Save model output items to our running conversation
        input_items += response.output
        # 2) Execute any function calls
        # The Responses API returns `function_call` items in `response.output`.
        for item in response.output:
            if getattr(item, "type", None) != "function_call":
                continue
            name = getattr(item, "name", None)
            raw_args = getattr(item, "arguments", "{}")
            try:
                args = json.loads(raw_args) if isinstance(raw_args, str) else raw_args
            except json.JSONDecodeError:
                args = {"query": user_input}
            result_str = call_rag_tool(name, args or {})
            # Append tool result back as function_call_output
            input_items.append(
                {
                    "type": "function_call_output",
                    "call_id": getattr(item, "call_id", None),
                    "output": result_str,
                }
            )
        # 3) Second call: ask the model to answer using tool outputs
        followup = client.responses.create(
            model=MODEL,
            instructions=(
                SYSTEM_INSTRUCTIONS
                + "\n\nYou have just received outputs from your tools. "
                + "Use them to give a concise, well-structured answer."
            ),
            tools=TOOLS,
            input=input_items,
        )
        structured = extract_answer_and_sources(followup)
        print("Agent:\n" + structured["answer"] + "\n")

That’s it! You’ve built a complete RAG pipeline powered by IRIS Vector Search. While this example focused on a simple use case, IRIS Vector Search opens the door to many more possibilities:

  • Knowledge store for more complex customer support agents
  • Conversational context storage for hyper-personalized agents 
  • Anomaly detection in textual data
  • Clustering analysis for textual data

I hope this walkthrough gave you a solid starting point for exploring vector search and building your own AI-driven applications with InterSystems IRIS!

The full codebase can be found here:


The Developer Community Turns 10!

Hello Community,

On December 7, 2025, the InterSystems Developer Community officially celebrated its 10th anniversary! 🥳🎉

And now we celebrate this decade of learning, collaborating, solving problems, and advancing InterSystems technologies. Whether you are a long-time member or a recent one, thank you for your contributions, questions, ideas, and support. This milestone belongs to all of you 💖 You built this community and made it what it is today, and we are truly grateful!

As part of the celebration, we invited you to take part in a special anniversary video. And you delivered! Thank you to everyone who took the time to share their messages, memories, and kind words.

Here's to the next 10 years of innovation and collaboration! 💙

PS: Leave a comment if you spotted yourself in the photos! 😉


Stay tuned: this is just the beginning. More anniversary highlights and surprises are coming soon.
