InterSystems Official
· Feb 11

What's New in InterSystems Language Server 2.7

First of all, let me wish the Developer Community a happy new year! We hope to bring you many good things this year, and today I would like to introduce the latest version of the InterSystems Language Server extension for VS Code. Most Language Server improvements are experienced through the ObjectScript extension's user interface, so you may not be aware of the many enhancements in areas such as IntelliSense and hovers that were released throughout 2024. Please take a quick look through the Language Server CHANGELOG and see what you've missed. Most recently, version 2.7.0 adds support for the Windows ARM platform, so if you have a device like the Surface Pro 11 (on which I am happily writing this article), you can now get a great ObjectScript development experience on your machine. Give it a try and let us know how it goes in the comments below.

Question
· Feb 11

Change %Response.ContentLength

Hey everyone,
Might be a stupid question, but I was trying to set the ContentLength of the %response object on my website, in different places (like OnPreHTTP / OnPostHTTP, etc.), but none of them seem to work.

The reason behind it is to send a more accurate representation of the actual data I send: instead of having the overhead of the broker, which adds more characters, I want the exact length of the response, accounting only for the data I actually returned.
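
For context, a minimal sketch of the kind of override being attempted (assuming a %CSP.Page subclass; the class name and payload are illustrative, and the gateway may still recalculate the header after this point):

Class Demo.Page Extends %CSP.Page
{

ClassMethod OnPreHTTP() As %Boolean
{
    // Try to pin Content-Length to the payload alone; the broker may
    // still add its own framing downstream.
    Set %response.ContentType = "application/json"
    Set %response.ContentLength = $Length("{""ok"":1}")
    Quit 1
}

ClassMethod OnPage() As %Status
{
    Write "{""ok"":1}"
    Quit $$$OK
}

}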

Article
· Feb 10 · 2 min read

Chapter 20: Terms Starting with P

passing by reference

System

A way of passing the address of an argument rather than its value. This allows access to the actual variable, so that the method, function, or routine it is passed to can change the variable's actual value.

passing by value

System

A way of passing the value of an argument. This provides a copy of the variable, so the method, function, or routine it is passed to cannot change the variable's actual value.
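
A minimal ObjectScript sketch contrasting the two (the class and variable names are illustrative):

Class Demo.Util Extends %RegisteredObject
{

ClassMethod Double(ByRef n As %Integer)
{
    Set n = n * 2
}

}

From a terminal:

Set x = 2
Do ##class(Demo.Util).Double(.x)  ; dot prefix: by reference, x is now 4
Do ##class(Demo.Util).Double(x)   ; no dot: by value, x is still 2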

pattern match table

System

An internal table used to indicate whether IRIS treats a character as a letter, punctuation, a number, or a control character.
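
For example, the character classes this table defines (such as U for uppercase letters and N for numbers) drive the ObjectScript pattern match operator:

Write "ABC123"?3U3N  ; 1: three uppercase letters followed by three digits
Write "ABC12."?3U3N  ; 0: the final character is punctuation, not a digit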

permission

System

A statement of the ability to perform some activity on a resource. For database resources, the available permissions are Read and Write. For a service, application, or administrative action, the available permission is Use.
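
A quick way to see permissions in action from ObjectScript (the resource name is illustrative):

; returns 1 if the current process holds the WRITE permission on the resource
Write $SYSTEM.Security.Check("%DB_USER", "WRITE")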

Article
· Feb 10 · 8 min read

Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems IRIS - Part 3 – REST and Interoperability

Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems IRIS

Part 3 – REST and Interoperability

Now that we have finished configuring the SQL Gateway, accessed the data from the external database via Python, and set up our vectorized base, we can run some queries. In this part of the article we will use an application developed with CSP, HTML, and JavaScript that accesses an integration in IRIS, which performs the similarity search on the data, sends it to the LLM, and finally returns the generated SQL. The CSP page calls an API in IRIS that receives the data to be used in the query and calls the integration. For more information about REST in IRIS, see the documentation available at https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

The following is the code of the REST API created:

Class Rest.Vector Extends %CSP.REST
{

XData UrlMap
{
<Routes>
    <Route Url="/buscar" Method="GET" Call="Buscar" Cors="true"/>
</Routes>
}

ClassMethod Buscar() As %Status
{
    // Read the query parameters sent by the CSP page
    Set arg1 = %request.Get("arg1")
    Set sessionID = %request.Get("sessionID")
    Set saida = {}
    // Build the request message for the integration
    Set obj = ##class(ws.rag.msg.Request).%New()
    Set obj.question = arg1
    Set obj.sessionId = sessionID
    // Invoke the business service that starts the production flow
    Set tSC = ##class(Ens.Director).CreateBusinessService("ws.rag.bs.Service", .tService)
    If tSC {
        Set resp = tService.entrada(obj)
    } Else {
        Set resp = ##class(ws.rag.msg.Response).%New()
        Set resp.resposta = $SYSTEM.Status.GetErrorText(tSC)
    }
    Set saida.resposta = resp.resposta
    Set saida.sessionId = sessionID // return the sessionId that came in
    Write saida.%ToJSON()
    Quit $$$OK
}

}

Once the REST API code is created, we need to create the web application in the Management Portal, under System Administration -> Security -> Applications -> Web Applications:
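
As a hedged alternative to clicking through the portal, the same web application can be created programmatically (run in the %SYS namespace; the authentication flag below is an assumption, adjust it to your security setup):

zn "%SYS"
Set props("DispatchClass") = "Rest.Vector"  // the REST class shown above
Set props("AutheEnabled") = 32              // 32 = password authentication (assumption)
Set tSC = ##class(Security.Applications).Create("/api/vector", .props)
If 'tSC Write $SYSTEM.Status.GetErrorText(tSC)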

In the CSP application, in JavaScript, we then have the API call:

...

async function chamaAPI(url, sessionID) {
    var div = document.getElementById('loader');
    div.style.opacity = 1;
    fetch(url)
        .then(response => {
            if (!response.ok) {
                throw new Error('Erro na resposta da API');
            }
            return response.json();
        })
        .then(data => {
            // the API returns "sessionId" (see the REST class above)
            incluiDIVRobot(data.resposta, data.sessionId);
        })
        .catch(error => {
            incluiDIVRobot('Erro na chamada da API: ' + error, sessionID);
        });
}

//

 const url = 'http://localhost/api/vector/buscar?arg1=' + encodeURIComponent(texto) + '&sessionID=' + sessionID;
 chamaAPI(url, sessionID);

...

The CSP application then receives the user's request (e.g., "what's the lowest temperature recorded?") and calls the REST API.

 

The REST API in turn calls an integration in IRIS composed of a Service, a Process, and an Operation. In the Operation layer we have the LLM call, which is made by a Python method of a class. By viewing the integration trace we can see the whole process taking place.

For more information on using productions in IRIS, see the documentation available at https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

Below is the code used in the BS, BP, and BO layers.

The following is the BS (Service) code:

Class ws.rag.bs.Service Extends Ens.BusinessService
{

Parameter SERVICENAME = "entrada";

Method entrada(pInput As ws.rag.msg.Request) As ws.rag.msg.Response [ WebMethod ]
{
    // Forward the request synchronously to the business process
    Set tSC = ..SendRequestSync("bpRag", pInput, .tResponse)
    Quit tResponse
}

}

And the BP (Process) code:

Class ws.rag.bp.Process Extends Ens.BusinessProcessBPL [ ClassType = persistent, ProcedureBlock ]
{

/// BPL Definition
XData BPL [ XMLNamespace = "http://www.intersystems.com/bpl" ]
{
<process language='objectscript' request='ws.rag.msg.Request' response='ws.rag.msg.Response' height='2000' width='2000' >
<sequence xend='200' yend='350' >
<call name='boRag' target='boRag' async='0' xpos='200' ypos='250' >
<request type='ws.rag.msg.Request' >
<assign property="callrequest" value="request" action="set" languageOverride="" />
</request>
<response type='ws.rag.msg.Response' >
<assign property="response" value="callresponse" action="set" languageOverride="" />
</response>
</call>
</sequence>
</process>
}

Storage Default
{
<Type>%Storage.Persistent</Type>
}

}

 

And the BO (Operation) code:

 

Class ws.rag.bo.Operation Extends Ens.BusinessOperation [ ProcedureBlock ]
{

Method retrieve(pRequest As ws.rag.msg.Request, Output pResponse As ws.rag.msg.Response) As %Library.Status
{
    Set pResponse = ##class(ws.rag.msg.Response).%New()
    Set pResponse.status = 1
    Set pResponse.mensagem = "OK"
    Set pResponse.sessionId = ..%SessionId
    // Call the Python method that performs the vector search and the LLM call
    Set st = ##class(Vector.Util).RetrieveRelacional("odbc_work", pRequest.question, pRequest.sessionId)
    Set pResponse.resposta = st
    Quit $$$OK
}

XData MessageMap
{
<MapItems>
    <MapItem MessageType="ws.rag.msg.Request">
        <Method>retrieve</Method>
    </MapItem>
</MapItems>
}

}

 

And the Request and Response classes:


Request:

 

Class ws.rag.msg.Request Extends Ens.Request
{

Property collectionName As %String;

Property question As %String(MAXLEN = "");

Property sessionId As %String;

}

 

Response:

 

Class ws.rag.msg.Response Extends Ens.Response
{

Property resposta As %String(MAXLEN = "");

Property status As %Boolean;

Property mensagem As %String(MAXLEN = "");

Property sessionId As %Integer;

}

 

And the Production class:

Class ws.rag.Production Extends Ens.Production
{

XData ProductionDefinition
{
<Production Name="ws.rag.Production" LogGeneralTraceEvents="false">
  <Description>Produção do Rag DEMO</Description>
  <ActorPoolSize>2</ActorPoolSize>
  <Item Name="ws.rag.bs.Service" Category="rag" ClassName="ws.rag.bs.Service" PoolSize="0" Enabled="true" Foreground="false" Comment="" LogTraceEvents="false" Schedule="">
  </Item>
  <Item Name="bpRag" Category="rag" ClassName="ws.rag.bp.Process" PoolSize="1" Enabled="true" Foreground="false" Comment="" LogTraceEvents="false" Schedule="">
  </Item>
  <Item Name="boRag" Category="rag" ClassName="ws.rag.bo.Operation" PoolSize="1" Enabled="true" Foreground="false" Comment="" LogTraceEvents="false" Schedule="">
  </Item>
</Production>
}

}
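
Before exercising the flow end to end, the production must be running; a minimal start from a terminal, using the production name from this article:

Set tSC = ##class(Ens.Director).StartProduction("ws.rag.Production")
If 'tSC Write $SYSTEM.Status.GetErrorText(tSC)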

When executed, the integration stores the Requests and Responses, allowing traceability on the bus, as we can see in the following trace:

 

We can see in the trace, for example, that the LLM call, from sending the data to returning the requested SQL, took approximately 10 seconds:

We can also see the BO's return to our question, after the Python code has processed the LLM's response:

Thus, through the traceability of the IRIS interoperability layer, we can see the entire flow of information exchanged, the elapsed times, and any failures, and we can reprocess any call if necessary.

The call from the BO to the Python method passes the request made by the user. Through vector search, the Python code finds the most similar records (in this case, the model of our table) and sends them to the LLM along with the user's request, the conversation history (if it exists), and the prompt containing the guidelines that steer the LLM's behavior.

The LLM then generates the SQL, which is returned to the Python method. The code then executes the SQL and formats the response into the expected pattern, producing lists, charts, or downloads according to the feedback the user expects.
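
The article's actual retrieval method, Vector.Util.RetrieveRelacional, is not listed here. Below is a hedged sketch of that step, assuming the same langchain_iris / langchain_openai stack used in Part 2; the method name and the LLM model are assumptions:

ClassMethod RetrieveSketch(question As %String) As %String [ Language = python ]
{
    # Hedged sketch: find the table definition most similar to the user's
    # question and ask the LLM to write SQL for it. Not the article's
    # actual implementation.
    import iris
    from langchain_iris import IRISVector
    from langchain_openai import OpenAIEmbeddings, ChatOpenAI

    apiKey = iris.cls("Vector.Util").apikey()
    store = IRISVector(
        embedding_function=OpenAIEmbeddings(openai_api_key=apiKey),
        dimension=1536,
        collection_name="odbc_work",
    )
    # Nearest table definition to the user's question
    docs = store.similarity_search(question, k=1)
    schema = "\n".join(d.page_content for d in docs)
    prompt = ("Given the schema:\n" + schema +
              "\nWrite a single SQL query that answers: " + question)
    llm = ChatOpenAI(openai_api_key=apiKey, model="gpt-4o-mini")  # model is an assumption
    return llm.invoke(prompt).content
}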

Thus, through the created CSP application, we can request various kinds of information, such as answer tables:

Or charts:

Or even a download of the information:

After downloading and opening the file, we have the requested data:

These examples show how to read data from an external table through the IRIS SQL Gateway and use it with code written in Python. This way we can use the full potential of the data, which does not need to be stored inside IRIS. Imagine being able to set up an analysis station that collects data from various systems and provides information for decision making.

We can, for example, have dashboards visualizing data from the various environments that make up a company's ecosystem, predictions based on ML algorithms, RAG to facilitate data collection, and much more.

IRIS can be responsible for accessing, processing, and making available the data of these various environments, with control, security, and traceability, thanks to its interoperability features. It can run code in COS and Python and be accessed from code in R, C, Java, and much more. And all of this within the same product, without the need to duplicate or move data between environments.

Article
· Feb 10 · 7 min read

Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems IRIS - Part 2 – Python and Vector Search

Using SQL Gateway with Python, Vector Search, and Interoperability in InterSystems IRIS

Part 2 – Python and Vector Search

 

Since we have access to the data from our external table, we can use everything IRIS has to offer with this data. Let's, for example, read the data from our external table and fit a polynomial regression to it.

For more information on using Python with IRIS, see the documentation available at https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_epython

Let's now consume the data from the external database to calculate a polynomial regression. To do this, we will use Python code to run an SQL query that reads our MySQL table and turns it into a pandas dataframe:

ClassMethod CalcularRegressaoPolinomialODBC() As %String [ Language = python ]
{
    import iris
    import json
    import pandas as pd
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    import numpy as np
    import matplotlib
    import matplotlib.pyplot as plt
    matplotlib.use("Agg")

    # Degree 2 for the regression
    grau = 2

    # Retrieve the remote table's data via ODBC (SQL Gateway)
    rs = iris.sql.exec("select venda as x, temperatura as y from estat.fabrica")
    df = rs.dataframe()

    # Reshape x into the 2D matrix required by scikit-learn
    X = df[['x']]
    y = df['y']

    # Transform to include polynomial terms
    poly = PolynomialFeatures(degree=grau)
    X_poly = poly.fit_transform(X)

    # Initialize and fit the polynomial regression model
    model = LinearRegression()
    model.fit(X_poly, y)

    # Extract the fitted model's parameters (cast numpy scalars to
    # plain floats so json.dumps can serialize them)
    coeficientes = model.coef_.tolist()          # polynomial coefficients
    intercepto = float(model.intercept_)         # intercept
    r_quadrado = float(model.score(X_poly, y))   # R squared

    # Prediction for the regression curve
    x_pred = np.linspace(df['x'].min(), df['x'].max(), 100).reshape(-1, 1)
    x_pred_poly = poly.transform(x_pred)
    y_pred = model.predict(x_pred_poly)

    # Predict Y for the original X
    Y_pred = model.predict(X_poly)

    # Mean absolute error
    MAE = float(mean_absolute_error(y, Y_pred))

    # Plot the regression
    plt.figure(figsize=(8, 6))
    plt.scatter(df['x'], df['y'], color='blue', label='Dados Originais')
    plt.plot(df['x'], df['y'], color='black', label='Linha dos Dados Originais')
    plt.scatter(df['x'], Y_pred, color='green', label='Dados Previstos')
    plt.plot(x_pred, y_pred, color='red', label='Curva da Regressão Polinomial')
    plt.title(f'Regressão Polinomial (Grau {grau})')
    plt.xlabel('X')
    plt.ylabel('Y')
    plt.legend()
    plt.grid(True)

    # Save the chart as an image
    caminho_arquivo = 'c:\\temp\\RegressaoPolinomialODBC.png'
    plt.savefig(caminho_arquivo, dpi=300, bbox_inches='tight')
    plt.close()

    resultado = {
        'coeficientes': coeficientes,
        'intercepto': intercepto,
        'r_quadrado': r_quadrado,
        'MAE': MAE
    }

    return json.dumps(resultado)
}

 

 

The first action we take in the code is to read the data from our external table via SQL and then turn it into a pandas dataframe. Remember that the data is physically stored in MySQL and is accessed via ODBC through the SQL Gateway configured in IRIS. With this we can use the Python libraries for calculation and charting, as we see in the code.
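
An illustrative invocation from an IRIS terminal (the article does not name the containing class, so Estat.Regressao below is an assumption):

Write ##class(Estat.Regressao).CalcularRegressaoPolinomialODBC()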

Executing our routine, we get the parameters of the generated model:

Our routine also generates a chart that gives visual support to the polynomial regression. Let's see how the chart turned out:

Another action we can take with the now-available data is to use Vector Search and RAG with an LLM. To do this, we will vectorize our table's model and then ask the LLM for some information.

For more information on using Vector Search in IRIS, see the text available at https://www.intersystems.com/vectorsearch/

First, let's vectorize our table model. Below is the code that carries out this task:

ClassMethod IngestRelacional() As %String [ Language = python ]
{
    import json
    from langchain_iris import IRISVector
    from langchain_openai import OpenAIEmbeddings
    from langchain_text_splitters import RecursiveCharacterTextSplitter
    import iris

    try:
        apiKey = iris.cls("Vector.Util").apikey()
        collectionName = "odbc_work"

        # The table's model (schema), not its contents
        metadados_tabelas = [
            "Tabela: estat.fabrica; Colunas: chave(INT), venda(INT), temperatura(INT)"
        ]

        text_splitter = RecursiveCharacterTextSplitter(chunk_size=2048, chunk_overlap=0)
        documents = text_splitter.create_documents(metadados_tabelas)

        # Vectorize the table definitions
        vectorstore = IRISVector.from_documents(
            documents=documents,
            embedding=OpenAIEmbeddings(openai_api_key=apiKey),
            dimension=1536,
            collection_name=collectionName
        )

        return json.dumps({"status": True})
    except Exception as err:
        return json.dumps({"error": str(err)})
}

 

Note that we don't pass the contents of the table to the ingest code, but its model. This way, upon receiving the columns and their properties, the LLM is able to produce SQL that matches our request.

This ingest code creates the odbc_work table that will be used to search for similarity against the table model, so we can then ask the LLM to return an SQL statement. For this we created an API key at OpenAI and used langchain_iris as the Python library. For more details about langchain_iris, see https://github.com/caretdev/langchain-iris

After ingesting our table's definition, we will have the odbc_work table generated:
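
A hedged way to check the ingest from the same stack (the method name is illustrative, and the IRISVector constructor arguments mirror those used above):

ClassMethod TestaSimilaridade(texto As %String) As %String [ Language = python ]
{
    # Hedged sketch: query the freshly ingested collection and return
    # the nearest table definition.
    import iris
    from langchain_iris import IRISVector
    from langchain_openai import OpenAIEmbeddings

    apiKey = iris.cls("Vector.Util").apikey()
    store = IRISVector(
        embedding_function=OpenAIEmbeddings(openai_api_key=apiKey),
        dimension=1536,
        collection_name="odbc_work",
    )
    docs = store.similarity_search(texto, k=1)
    return docs[0].page_content if docs else ""
}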

Now let's move on to our third part: a REST API that consumes the vectorized data to assemble a RAG.

See you later!
