Article
· Apr. 10 2 min read

Setting Up an ODBC/JDBC Linked Table Connection to MySQL from IRIS

Because MySQL's interpretation of SCHEMA differs from the common SQL understanding (as seen in IRIS, SQL Server, and Oracle), our automated Linked Table Wizard may encounter errors when trying to retrieve the metadata needed to build the Linked Table.

(This also applies to Linked Procedures and Views.)

When attempting to create a Linked Table through the Wizard, you will encounter an error similar to this:

ERROR #5535: SQL Gateway catalog table error in 'SQLPrimaryKeys'. Error: ' SQLState: (HY000) NativeError: [0] Message: [MySQL][ODBC 8.3(a) Driver][mysqld-5.5.5-10.4.18-MariaDB]Support for schemas is disabled by NO_SCHEMA option
 

To create a Linked Table to a MySQL database that uses a "schema-less" structure (the default behavior), please follow the instructions below.

  1. Create a SQL Gateway Connection
  • Configure the SQL Gateway connection as usual
  • Make sure the "Do not use delimited identifiers by default" checkbox is checked
  • Click "Test Connection" to confirm that the connection succeeds

  2. Use the Terminal-Based API to Create the Linked Table
  • Use the following API: $SYSTEM.SQL.Schema.CreateLinkedTable(). The CreateLinkedTable() method takes the following parameters:

CreateLinkedTable(dsn As %String, externalSchema As %String, externalTable As %String, primaryKeys As %String, localClass As %String = "User.LinkedClass", localTable As %String, ByRef columnMap As %String = "")

  • Example: in this example, we use the MySQL system table help_keyword with the name field as the primary key:

USER>do $SYSTEM.SQL.Schema.CreateLinkedTable("MyDSN", "", "help_keyword", "name", "User.LinkedClass", "LocalTable")

Please make sure that all parameters are specified correctly to avoid errors during the setup process.

Article
· Apr. 10 3 min read

<METHOD DOES NOT EXIST> errors for Record Map objects

The Complex Record Mapper can help you transform text-file data made up of different record types into persistent messages in IRIS. To understand the basics of the Complex Record Mapper and see an example production implementation, watch the Learning Services video.

This article will help you troubleshoot the <METHOD DOES NOT EXIST> message when working with record objects. For example, when purging messages in a production, errors may appear indicating problems deleting certain records:

<METHOD DOES NOT EXIST>zDeleteRecords+17^User.Test.Batch.1 *%ParentRemove,User.Test.Record

The following message may also appear in the Event Log when processing files that use a complex record map:
<METHOD DOES NOT EXIST>updateArrayReferences+40^EnsLib.RecordMap.ComplexParent.1 *%ParentAdd,User.Test.Record

If you check the class named in the message, you will find that the method listed after the asterisk is not included in the record class. Why are these methods missing from the class?

If you hit this error, open the Management Portal and go to Interoperability > List > Record Maps, then to the page for the record map in question ("User.Test.Record" in this case). Check whether the "Allow Complex Record Mapping" or "Allow Complex Batching" option is checked in the settings.

If you use a record type in a complex record map, you must configure the simple record map so that it can be used in the complex record map. The documentation states that "Allow Complex Batching" is an option specifying whether the record map can be used in a complex record map.

When you check the "Allow Complex Batching" option, the record class extends EnsLib.RecordMap.ComplexChild; otherwise, it extends only %Persistent. You must check this option for every record you use in the complex record map.

In the example above, the %ParentRemove method is reported as missing because it is defined by the EnsLib.RecordMap.ComplexChild class. If the record class does not extend EnsLib.RecordMap.ComplexChild, it does not contain the %ParentRemove method, so we hit the error when attempting the purge.

To fix this, check the "Allow Complex Record Mapping" box for the record and regenerate the class. If you inspect the regenerated class, you will see that it now extends the EnsLib.RecordMap.ComplexChild class. The purge should now work correctly.

Announcement
· Apr. 10

Watch again last Thursday's webinar: "Connecting sensors with InterSystems IRIS"

Hello!

Did you miss Jairo's webinar? No problem, you can watch it on our YouTube channel or on the original platform where it was broadcast.

 

YouTube link: https://youtu.be/Tv5UpDAYxFQ?feature=shared

Platform link: https://event.on24.com/wcc/r/4903467/717E138C41E142AEC2D1CB487D8FAA76

In this webinar we look at how to capture sensor data into InterSystems IRIS. This data collection opens up numerous possibilities, which we explore with Jairo Ruiz, one of our expert Sales Engineers in Colombia.

Enjoy!

Question
· Apr. 10

VS Code ObjectScript extension error

InterSystems ObjectScript extension for VS Code, version 3.0.1

I'm also asking this question on the extension's GitHub page: Request textDocument/documentSymbol failed. Error: name must not be falsy · intersystems-community/vscode-objectscript · Discussion #1530 - but I suspect more eyes will see it here, which might help gather additional information.

We tried to compile some legacy ObjectScript code via Import/Compile in VS Code using the vscode-objectscript extension. We get an error, and the content of the file is left changed on the filesystem simply by being compiled.

We think the error occurs when you have a commented-out Property line with no space after the '//' comment starter, but we've not completely isolated the situations where it occurs. A couple of screenshots are attached: the first is a before-compilation / after-compilation view of a simple test file, showing the state the extension leaves the file in after an attempt to compile; the second shows the extension's Output tab in VS Code detailing the error. We only stumbled into this yesterday, so we are still gathering information about when it does and doesn't occur. We will keep trying to narrow down exactly which combination of syntax triggers the problem and will post any clarifying updates, probably on the GitHub discussion in the first instance, since that's where the developers should see them.

Has anyone else ever seen this? Are there any known work-arounds or mitigations?

Discussion
· Apr. 10

Vector Embeddings Feedback

Background

Embeddings are a new IRIS feature powering the latest capability in AI semantic search.
This presents as a new kind of column on a table that holds vector data.
The embedding column provides search over another existing column of the same table.
As records are added to or updated in the table, the source column is passed through an AI model and the semantic signature is returned.
This signature information is stored as the vector for future search comparison.
Subsequently, when a search runs, the stored signatures are compared without any further AI-model processing overhead.

Embedding search is like having a future-proof categorization capability without manually adding new categories to existing data or labeling records.

ie: Show me others like this one.
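The stored-signature comparison can be pictured with a tiny sketch (plain Python with toy 3-dimensional vectors; real embeddings have hundreds of dimensions, and in IRIS the comparison runs in SQL rather than application code):

```python
import math

def cosine(a, b):
    # similarity of two stored vector signatures: 1.0 means same direction
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# toy "signatures" already stored in the embedding column
stored = {
    "rec1": [0.9, 0.1, 0.0],
    "rec2": [0.0, 1.0, 0.2],
}
query = [1.0, 0.2, 0.0]  # embedding of the new search text

# "show me others like this one": rank rows by similarity, no model call per row
ranked = sorted(stored, key=lambda k: cosine(query, stored[k]), reverse=True)
print(ranked[0])  # rec1
```

Only the incoming query text needs a model invocation; every stored row is compared using vectors computed once at insert/update time.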

The following ideas, questions, and statements are limited, but the hope is that they provide a starting point for discussion and future directions.

Class compilation deployment dependency

Before a class with an embedding column can be compiled, the named config it refers to must already have been deployed.
This could affect patching; for example, change-control records or other deployments would need a pre-insert-config step before loading new class versions.

Clarification: if the configuration changes for a given embedding config name, do the classes that use it need to be recompiled? ie: Is there any generator behavior (ObjectScript / Python) to be aware of? If there is no actual compilation dependency, a warning might be more flexible.
Also expecting that updating the config after the class is compiled is supported.

Docker build workaround

Ordinarily one would insert the embedding config via SQL, but the SQL dependency did not seem to be available at Docker build time, so an object insert was used instead:
File: iris.script

// Add Embedding before dependent classes can be compiled
Set embedConf=##class(%Embedding.Config).%New()
Set embedConf.Name="toot-v2-config"
Set embedConf.Configuration="{""modelName"": ""toot-v2-config"",""modelPath"":""/opt/hub/toot/"",""tokenizerPath"":""/opt/hub/toot/tokenizer_tune.json"",""HotStart"":1}"
Set embedConf.EmbeddingClass="TOOT.Data.Embedding2"
Set embedConf.VectorLength=384
Set embedConf.Description="an embedding model provided by Alex Woodhead"
Set tSC=embedConf.%Save()

where the docker build runs the instruction:

RUN --mount=type=bind,src=.,dst=. \
    iris start IRIS && \
    iris session IRIS < iris.script && \
    iris session IRIS -U %SYS "##class(SYS.Container).QuiesceForBundling()" && \
    iris stop IRIS quietly

Online updates

One feature used in production for the TrakCare product is online index updates.
This allows users to be safely let back onto a system while new indexes are being built.
It reduces or eliminates patching downtime for end users.
It can also allow application specialists to smoke-test a patched production early, to accelerate availability.
Capability: at a point in time, the application just transparently switches to using the newest version of an index.
Is there synergy for an online update capability for embeddings, as a competitive feature?
Consider that an IRIS customer will have to decide on a specific model to get embeddings from.
As a persistent data column, this is in the hands of the IRIS customer to manage.
One trend that seems relentless is that better, smaller, and more efficient embedding models keep arriving.

Challenge 1

An external API is used to generate embeddings.
The dependent service upgrades to a new version with different embeddings.
Is there timely planned application downtime to update all embeddings to use the new API version?
ie: The embeddings of new search queries need to resemble the embeddings already saved in the table.

Challenge 2

The embedding model is staged locally (from Hugging Face or a local directory).
Now the development project is responsible for a timely model-version choice.
Do they delay the choice, waiting for a better model later in the project?
Delaying can reduce early learnings in an innovative project cycle.

Challenge 3

Bespoke embeddings: a harder task, with higher reward.
There is a hard cutoff for completing embedding quality, and therefore no scope to upgrade production embedding data after go-live.
Can this dissuade an otherwise viable business option?

The ask:

Could an "Online Update"-equivalent IRIS feature allow a seamless, transparent switch-over to a newer embeddings version in production? Would this improve early adoption of the IRIS embeddings feature over the choice of additional hybrid services?
Could an online update also atomically wrap the update of an existing embedding config to a new version (same name)? ie: The named config in the property/index of the class definition would remain unaltered, so no recompilation would be necessary.
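What such a switch-over could look like from the application side, as a plain-Python sketch (all names hypothetical, not an IRIS API): two embedding versions live side by side, and searches read through one pointer that is flipped atomically once the background rebuild completes.

```python
# Hypothetical "online update" for embeddings: the table keeps two embedding
# columns (old and new model), and a single pointer decides which column
# searches use. Flipping the pointer is the atomic switch-over.
table = {
    "rec1": {"text": "hello", "emb_v1": [0.1], "emb_v2": None},
    "rec2": {"text": "world", "emb_v1": [0.3], "emb_v2": None},
}
active = "emb_v1"  # searches read this column

def rebuild(new_model):
    # background job: fill the shadow column while users stay online
    for row in table.values():
        row["emb_v2"] = new_model(row["text"])

def switch_over():
    # atomic flip, only once every row has a new embedding
    global active
    assert all(row["emb_v2"] is not None for row in table.values())
    active = "emb_v2"

rebuild(lambda text: [float(len(text))])  # stand-in for a real model
switch_over()
print(active)  # emb_v2
```

Until switch_over runs, queries keep embedding against the old model version, so stored and query vectors always come from the same model.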

Embedding Batching

The current interface generates a single embedding, one record at a time.
1) Hypothesis: for external embedding-API services, batching into fewer messages could be more efficient in latency, throughput, and processing cost of the remote API.
2) Hypothesis: where enabled by config and where the infrastructure is capable, batching embedding generation could be a more efficient use of locally hosted models and the corresponding GPU / CPU.
Where this might have synergy: in the area of table index updates, there is a batching context where multiple updates to the same bitmap chunk are deferred.
Wondering if a similar context could be used to schedule and then conclude a batch of embedding updates.
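The deferred-batch idea can be sketched as follows (plain Python, hypothetical names; not an existing IRIS interface): per-row updates queue up, and the model is invoked once per batch instead of once per row.

```python
class EmbeddingBatcher:
    """Defer per-row embedding calls and flush them as one batched model call."""
    def __init__(self, embed_batch, batch_size=3):
        self.embed_batch = embed_batch   # callable: list of texts -> list of vectors
        self.batch_size = batch_size
        self.pending = []                # (row_id, text) awaiting embedding
        self.store = {}                  # row_id -> vector (the embedding column)
        self.calls = 0                   # model round-trips, for comparison

    def update(self, row_id, text):
        self.pending.append((row_id, text))
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        # conclude the batch, like closing a bitmap-chunk batching context
        if not self.pending:
            return
        ids, texts = zip(*self.pending)
        self.calls += 1                  # one model round-trip per batch
        for row_id, vec in zip(ids, self.embed_batch(list(texts))):
            self.store[row_id] = vec
        self.pending = []

b = EmbeddingBatcher(lambda texts: [[float(len(t))] for t in texts])
for i, t in enumerate(["a", "bb", "ccc", "dddd"]):
    b.update(i, t)
b.flush()
print(b.calls)  # 2 batched calls instead of 4 single ones
```

For a remote API, fewer round-trips amortize network latency; for a local GPU model, larger batches keep the device busy.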

Customization

Customizing embeddings is straightforward:
Subclass %Embedding.Interface and implement two methods:
* Embedding
* IsValidConfig

A good reason to subclass, at least in a development environment, is to gain additional instrumentation points to log and catch errors (IRIS and Python), warnings, trace information, or specific issues loading and using a model.
Debugging this area is a bit different from conventional ObjectScript.
Some suggestions:

ClassMethod LoadModelPy(filepathIn As %String, status As Ens.Util.PyByRef) As %Boolean [ Language = python ]
{
import os
import traceback
if not os.path.exists(filepathIn):
	if status is not None:
		status.value=iris.cls("%SYSTEM.Status").Error(5001,"File not found at "+filepathIn)
	return 0
try:
	# load the model here
	pass
except Exception:
	print(traceback.format_exc())
	if status is not None:
		status.value=iris.cls("%SYSTEM.Status").Error(5001,"Error loading model from "+filepathIn+"::"+traceback.format_exc())
	return 0
return 1
}

Class not found error

Did you change the name of the class referred to by the embedding config, but not update the embedding config to the new value?

Controlling GPU usage

The Python method that loads the model is an opportunity to confirm whether a GPU is available.
Additionally, the config could guide whether any GPU should actually be used. For example, a web security context may always prefer CPU-only models.

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# device is a torch.device object, so compare its .type, not the object, to a string
if device.type != "cpu":
    model.value.to(device)

Accept that an environment variable may alternatively be the preferred control point for the solution.

Accept that the Apple M1 to M4 processors have a different query for GPU availability.
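The two control points mentioned above (config and environment variable) can be combined in one small decision helper. A sketch with hypothetical flag names ("useGPU", EMBEDDING_CPU_ONLY), independent of torch; the availability booleans stand in for torch.cuda.is_available() and the Apple-silicon equivalent:

```python
import os

def choose_device(config, cuda_available=False, mps_available=False):
    # Hypothetical controls: an env var or a config flag can force CPU-only
    # (e.g. a web security context), even when a GPU is present.
    if os.environ.get("EMBEDDING_CPU_ONLY") == "1":
        return "cpu"
    if not config.get("useGPU", True):
        return "cpu"
    if cuda_available:
        return "cuda"
    if mps_available:   # Apple M1-M4 report GPU availability differently
        return "mps"
    return "cpu"

print(choose_device({"useGPU": True}, cuda_available=True))   # cuda
print(choose_device({"useGPU": False}, cuda_available=True))  # cpu
```

Keeping the decision in one function makes it easy to log which device was chosen and why, which helps when debugging model-loading issues.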

Caching the model idea

How it works:
The first time an "Embedding" method is called for a new embedding, the AI model is loaded into a process-wide variable, for example %model:

Set modelV=##class(Ens.Util.PyByRef).%New()
Do ..LoadModelPy(config.%Get("modelPath"),config.%Get("tokenizerPath"),modelV)
Set %model("Toot","modelName")=config.%Get("modelName")
Set %model("Toot")=modelV.value

where, inside LoadModelPy, the Python side assigns model.value = SentenceTransformer(...).

Each subsequent time IRIS calls the embedding method, the same already-loaded model is reused.
This means the model is not reloaded for each embedding insert.
It can also be made to respond to config changes, for example reloading for a new config version.

The model can be cleared down by the application by removing the variable.

Reference

This feedback came from exploring the use of embeddings in an application task:

https://openexchange.intersystems.com/package/toot

Hope this helps
