
Question
· Sep 16

HL7 Pass-through interface

For historic reasons we've got a mix of ADT feeds coming out of our PAS (TrakCare) to a wide range of downstream systems. In particular, there are some that are direct from TrakCare to the downstream systems, and many more that pass through Ensemble as our integration engine.

This is complicating management of the integrations, and so we'd like everything to go through the integration engine. In other words move from the flow in the top of the diagram to the flow in the bottom of the diagram:

So we want to build a couple of pass-through interfaces in Ensemble that respond identically to the current direct feeds - no transformations, and all ACK behaviour is transparent so that the only change visible to the TrakCare PAS is a change of IP/port for each of the currently direct feeds.

Should be easy, right? An HL7 TCP Service, possibly a routing process that simply passes everything on, and an HL7 TCP Operation that is connected to the downstream...
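A minimal production definition for such a pass-through might look like the sketch below. All class, item, and host names, addresses, and ports are hypothetical, and the AckMode value and other settings should be checked against the HL7 service settings documentation:

```objectscript
/// Hypothetical pass-through production: one HL7 TCP Service feeding
/// one HL7 TCP Operation directly, with application-level ACKs relayed
/// from the downstream system back to the PAS.
Class Demo.PassThrough.Production Extends Ens.Production
{

XData ProductionDefinition
{
<Production Name="Demo.PassThrough.Production">
  <Item Name="FromPAS" ClassName="EnsLib.HL7.Service.TCPService">
    <!-- Port the PAS now points at instead of the downstream system -->
    <Setting Target="Adapter" Name="Port">5100</Setting>
    <!-- Send straight to the operation; no routing rules needed -->
    <Setting Target="Host" Name="TargetConfigNames">ToDownstream</Setting>
    <!-- "App": wait for the application ACK rather than ACKing locally -->
    <Setting Target="Host" Name="AckMode">App</Setting>
  </Item>
  <Item Name="ToDownstream" ClassName="EnsLib.HL7.Operation.TCPOperation">
    <Setting Target="Adapter" Name="IPAddress">downstream.example.org</Setting>
    <Setting Target="Adapter" Name="Port">6100</Setting>
  </Item>
</Production>
}

}
```

With this shape the service forwards synchronously to the operation and can relay the downstream reply; whether a router can sit in the middle transparently depends on its response-handling settings.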

But what is happening is that the Router Process is generating an ACK and sending it back to the Service before the ACK comes back from the downstream. (I've faked a downstream and manually populated the ACK it's sending back so I can definitely identify it. I can see it arriving in Ensemble - but only after an ACK has already been sent to the upstream via the Service.) See the annotated message trace below:

Any pointers to what to be looking at?

I'm aware of Important HL7 Scenarios | Routing HL7 Version 2 Messages in Productions | InterSystems IRIS for Health 2025.1 and the Settings for HL7 Services page linked from there, but I'm not seeing what we might have set wrong...

Thanks, Colin

6 Comments
Article
· Sep 16 · 3 min read

From "Oops!" to "Aha!" - Avoiding Beginner Mistakes in ObjectScript

Getting started with ObjectScript is genuinely exciting, but it can also feel a little strange if you're used to other languages. Many beginners stumble over the same obstacles, so here are a few "gotchas" you'll want to avoid. (Plus some friendly tips for getting around them.)


Naming Things Randomly

We've all been guilty of naming something Test1 or MyClass just to move on quickly. But when your project grows, those names become a nightmare.

➡ Choose clear, consistent names from the start. Think of it as leaving a trail for your future self and your teammates.


Confusing Globals and Variables

Globals (^GlobalName) can be confusing at first. They aren't just ordinary variables: they live in the database and persist even after your code stops running.

➡ Use them only when you genuinely need persistent data. For everything else, use local variables. (It saves storage, too.)
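To make the distinction concrete, a quick sketch (the global name ^PatientName is just an example):

```objectscript
// A local variable lives only in the current process's memory.
Set name = "Alice"

// A global is written straight to the database; it survives the
// process ending and even a system restart.
Set ^PatientName(1) = "Alice"

// Later - even from a brand-new session - the data is still there:
Write ^PatientName(1), !

// Remove it explicitly when the persistent data is no longer needed:
Kill ^PatientName(1)
```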


Forgetting Transactions

Imagine updating a patient record and your session crashing halfway through. Without a transaction, you're left with incomplete data.

➡ Wrap important updates in TSTART/TCOMMIT. It's like pressing "save" and "undo" at the same time.
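A minimal sketch, using hypothetical ^Patient global nodes:

```objectscript
// Group related writes so they become permanent together - or not at all.
TSTART
Try {
    Set ^Patient(42, "Name") = "Jane Doe"
    Set ^Patient(42, "Ward") = "B3"
    TCOMMIT              // both updates are committed as one unit
}
Catch ex {
    TROLLBACK            // undo everything written since TSTART
    Write "Update failed: ", ex.DisplayString(), !
}
```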


Building SQL in Strings

It's tempting to just put SQL into strings and execute it. But that quickly becomes messy and hard to debug.

➡ Use embedded SQL. It's cleaner, safer, and easier to maintain.

EXAMPLE:

❌ Building SQL in Strings

Set id=123
Set sql="SELECT Name, Age FROM Patient WHERE ID="_id
Set rs=##class(%SQL.Statement).%ExecDirect(,sql)

✅ Using Embedded SQL

&SQL(SELECT Name, Age INTO :name, :age FROM Patient WHERE ID=:id)
Write name_" "_age,!

Ignoring Error Handling

Nobody likes watching their application crash with a cryptic message. That usually happens when error handling is skipped.

➡ Wrap risky operations in TRY/CATCH and give yourself meaningful error messages.
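For example (the failing operation here is just an illustration):

```objectscript
Try {
    // Anything that can throw goes inside the Try block.
    Set result = 10 / 0                 // raises a <DIVIDE> error
    Write "Result: ", result, !
}
Catch ex {
    // ex is a %Exception.AbstractException; DisplayString() gives a
    // readable description you can log or show to the user.
    Write "Something went wrong: ", ex.DisplayString(), !
}
```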



Failing to Use Better Tools

Yes, the terminal works. But if you only code there, you're missing out on a lot.

➡ Use VS Code with the ObjectScript extension. Debugging, autocomplete, and syntax highlighting make life much easier.


Reinventing the Wheel

New developers often try to write their own utilities for logging or JSON handling, not realizing that ObjectScript already has built-in solutions.

➡ Explore the %Library and dynamic objects before writing your own.
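For instance, JSON needs no hand-rolled parser - dynamic objects are built in:

```objectscript
// Create a dynamic object with JSON literal syntax.
Set person = {"name": "Alice", "age": 30}
Write person.name, !                   // access properties directly

// Add a property and serialize back to JSON text.
Set person.city = "Lisbon"
Write person.%ToJSON(), !

// Parse JSON text coming from elsewhere.
Set parsed = ##class(%DynamicObject).%FromJSON("{""ok"": true}")
Write parsed.ok, !
```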


Writing "Mystery Code"

We've all thought: "I'll remember this later."

⚠️ SPOILER: YOU WON'T!

Add short, clear comments. Even a single line explaining why you did something helps a lot.


 

Final Thoughts : )

Learning ObjectScript is like learning any other new language. It takes a bit of patience, and you'll make mistakes along the way. The trick is to recognize these common pitfalls early and build good habits from the start. That way, instead of fighting the language, you'll actually enjoy what it can do. :)

Article
· Sep 16 · 1 min read

Reviews on Open Exchange - #55

If one of your packages on OEX receives a review, OEX notifies you only about YOUR own package.
The rating reflects the reviewer's experience with the status found at the time of review.
It is a kind of snapshot and might have changed meanwhile.
Reviews by other members of the community are marked by * in the last column.

I also placed a bunch of Pull Requests on GitHub when I found a problem I could fix.
Some were accepted and merged, and some were just ignored.
So if you made a major change and expect a changed review, just let me know.

 

#   Package                       Review                              Stars  IPM  Docker  *
1   potato-analytics              Pleasure to run                     5.5    y    y
2   Beyond-Server-Limits          hidden backdoor                     5.0         y       *
3   csp-fileview-download         nice, Docker available, IPM ready   5.0    y    y
4   Full-OBJ-Dump                 nice helper                         5.0    y    y       *
5   TaskScheduler                 OK, room for improvement            4.8    y    y
6   IRISFHIRServerLogs            builds OK                           4.6    y    y
7   customer-support-agent-demo   IRIS runs fine                      4.5         y
8   MessageLogViz                 listing filtered text               4.0    y    y
9   iris-mock-server              missing some parts                  3.5    y    y
5 Comments
Article
· Sep 16 · 14 min read

High Availability IAM

One of the recommendations when deploying InterSystems Technologies for production is to set up High Availability. The recommended API Manager for these InterSystems Technologies is the InterSystems API Manager (IAM). IAM (essentially Kong Gateway) has multiple deployment topologies.

If you are looking for high availability you could use:

a) Kong Traditional Mode: Multiple Node Clusters

b) Hybrid Mode

c) DB-less Mode

Before we break them down, let's first understand the out-of-the-box deployment provided by InterSystems: Installing IAM Version 3.10.

Kong Traditional Mode

This is the Kong Traditional Mode: Single Node Cluster. If you haven't yet, go over the great article by @Guillaume Rongier, IAM (InterSystems API Manager), Zero to Hero, which does a great job explaining how to get IAM up and working with InterSystems IRIS.

The Kong Traditional Mode Single Node Cluster is currently the only IAM deployment option supported via IKO; check out the docs for deploying it here.

Note that in the YAML of Guillaume's article (section 2.4.4 of the article, or 3.4.4 of the README in the attached Open Exchange application's GitHub repository), we have 3 containers:

  • iam-migrations (an initialization of the empty postgres database)
  • iam (IAM itself!)
  • db (the postgres database)

In Traditional Mode all configuration is done via the IAM container and all requests are sent to this container as well.

This can be scaled up so we are less reliant on a single IAM container: 

It is very simple: you just add another IAM section! So if you are working with just one YAML you could have:

  • iam-migrations
  • iam1
  • iam2
  • db

Note that if you are working on just one YAML you must make sure you give IAM1 different ports than IAM2.
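The layout above can be sketched as a compose file. This is a hypothetical outline only: the image names, credentials, and port mappings are placeholders, and the real container definitions should be taken from the installation docs or Guillaume's article.

```yaml
# Hypothetical docker-compose sketch: two IAM nodes sharing one Postgres DB.
services:
  db:
    image: postgres:12                # placeholder image/tag
    environment:
      POSTGRES_DB: iam
      POSTGRES_USER: iam
      POSTGRES_PASSWORD: change-me
  iam-migrations:
    image: iam-image:3.10             # placeholder; bootstraps the empty DB
    command: kong migrations bootstrap
    environment:
      KONG_PG_HOST: db
    depends_on: [db]
  iam1:
    image: iam-image:3.10
    environment:
      KONG_PG_HOST: db                # both nodes point at the same DB
    ports: ["8000:8000", "8001:8001"]
  iam2:
    image: iam-image:3.10
    environment:
      KONG_PG_HOST: db
    ports: ["8100:8000", "8101:8001"] # different host ports than iam1
```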

 
Spoiler (Traditional)

Ideally, you would like for the database (including the migrations bootstrap) and each Kong node to be running on independent servers. In this case the ports would not be a problem, but you would have to make sure to change the KONG_PG_HOST environment variable to point to the host the database is running on.

Now you have High Availability, though this is Active/Active as opposed to the IRIS/HealthConnect Active/Passive Mirroring. You can go ahead and set up any configuration on either IAM1 or IAM2 and since they share a database the other will be updated and store the same information. 

If you would like independent databases, you would have to run two individual single-node clusters. Note that Kong does not have an option to "mirror" databases the way IRIS does, so you would need to create a pipeline to automate the configuration (or configure everything twice, or get rid of the DB entirely - see DB-less Mode). In the diagram above provided by Kong, there would simply be 3 IAM nodes instead of our 2. You would also want to add a load balancer, as well as a health-check probe on each IAM node, enabled with the environment variable KONG_STATUS_LISTEN: 0.0.0.0:8100.

A significant advantage to the Traditional Mode is (per the Kong Docs):

Traditional mode is the only deployment topology that supports plugins that require a database, like rate limiting with the cluster strategy, or OAuth2.

But it comes with downsides as well (from the Kong Docs):

  • When running in traditional mode, every Kong Gateway node runs as both a Control Plane (CP) and Data Plane (DP). This means that if any of your nodes are compromised, the entire running gateway configuration is compromised.
  • If you’re running Kong Gateway Enterprise with Kong Manager, request throughput may be reduced on nodes running Kong Manager due to expensive calculations being run to render analytics data and graphs.

The following modes solve these issues.

Kong Hybrid Mode

Hybrid Mode splits what we previously referred to as the IAM container into two: the Control Plane (CP) and the Data Plane (DP).

The CP is where the administration of Kong is hosted, such as Kong Manager (the "management portal" of Kong, by default accessed on port 8002 or 8445). The control plane is connected to the database and pushes its configuration out to the DPs. The DPs have no responsibility for managing Kong, only for implementing the configuration. All client API requests via Kong are sent directly to the DPs without going through the CP (these are the routes, by default on port 8000 or 8443). This resolves the downsides introduced by Traditional Mode multi-node clusters. An imperfect InterSystems analogy to Hybrid Mode is ECP: the client interacts with the fast-running DP, whose only purpose is to serve the application, while everything else, such as management and the rendering of data and graphs, is left to the CP.
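In outline, the split between the two planes comes down to a handful of environment variables. A sketch per the Kong documentation, with placeholder hostnames and certificate paths:

```yaml
# Hybrid Mode in environment variables (hostnames/cert paths are placeholders).
control-plane:
  environment:
    KONG_ROLE: control_plane
    KONG_PG_HOST: db                        # only the CP talks to Postgres
    KONG_CLUSTER_CERT: /certs/cluster.crt   # mTLS between CP and DPs
    KONG_CLUSTER_CERT_KEY: /certs/cluster.key

data-plane:
  environment:
    KONG_ROLE: data_plane
    KONG_DATABASE: "off"                    # DPs hold no DB connection
    KONG_CLUSTER_CONTROL_PLANE: control-plane-host:8005
    KONG_CLUSTER_CERT: /certs/cluster.crt
    KONG_CLUSTER_CERT_KEY: /certs/cluster.key
```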

 

Have a look at what the Control and Data Plane YAMLs look like below. 

 
Spoiler (Control Plane)
Note that for the first time we have introduced some certificates and keys. For the sake of simplicity I will try to avoid them in this article and write about how to properly set up secure communication in an upcoming article. That being said, Hybrid Mode requires that the CP and DP communicate via mTLS hence it was necessary.
 
Spoiler (Data Plane)
 
As before, you would want to make sure to set up a load balancer. This is my preferred highly available deployment topology for IAM, as I am a fan of working with the GUI but enjoy the compartmentalization: administration happens on the control plane, while the client/application connects to the data plane. Have a look through the other benefits of this mode (from the Kong Docs):
 
  • Deployment flexibility: Users can deploy groups of Data Planes in different data centers, geographies, or zones without needing a local clustered database for each DP group.
  • Increased reliability: The availability of the database doesn’t affect the availability of the Data Planes. Each DP caches the latest configuration it received from the Control Plane on local disk storage, so if CP nodes are down, the DP nodes keep functioning.
    • While the CP is down, DP nodes constantly try to reestablish communication.
    • DP nodes can be restarted while the CP is down, and still proxy traffic normally.
  • Traffic reduction: Drastically reduces the amount of traffic to and from the database, since only CP nodes need a direct connection to the database.
  • Increased security: If one of the DP nodes is compromised, an attacker won’t be able to affect other nodes in the Kong Gateway cluster.
  • Ease of management: Admins only need to interact with the CP nodes to control and monitor the status of the entire Kong Gateway cluster.

DB-less mode

The third and final Kong Gateway deployment topology available is DB-less mode. There is no database because all of the configuration is done declaratively in a YAML file, and the application runs ephemerally (with no persistent storage).

This of course comes with the following downsides (from the Kong Docs):

  • The Admin API is read only.
  • Any plugin that stores information in the database, like rate limiting (cluster mode), doesn’t fully function.

But there are upsides as well (from the Kong Docs):

  • Reduced number of dependencies: no need to manage a database installation if the entire setup for your use case fits in memory.
  • Your configuration is always in a known state. There is no intermediate state between creating a Service and a Route using the Admin API.
  • DB-less mode is a good fit for automation in CI/CD scenarios. Configuration for entities can be kept in a single source of truth managed via a Git repository.

Note that this time the YAML has no database, and hence no iam-migrations either. 

 
Spoiler (DB-less)

And of course we would need the declarative configuration of Kong. See an example YAML of this below.

 
Spoiler (kong.yml)
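A minimal declarative configuration might look like the following sketch; the service name, upstream URL, and route path are placeholders for your own IRIS endpoint.

```yaml
# Hypothetical minimal kong.yml for DB-less mode.
_format_version: "3.0"

services:
  - name: iris-api                     # placeholder service name
    url: http://iris-host:52773/api    # placeholder upstream URL
    routes:
      - name: iris-route
        paths:
          - /iris
```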

To summarize, there are three deployment topologies you can use for IAM to achieve HA, each with its advantages and disadvantages. I encourage you to check out the attached Open Exchange Application which takes you through deploying these topologies one by one. 

2 Comments
Article
· Sep 16 · 4 min read

Run Your AI Agent with InterSystems IRIS and Local Models using Ollama

In the previous article, we saw how to build a customer service AI agent with smolagents and InterSystems IRIS, combining SQL, RAG with vector search, and interoperability.

In that case, we used cloud models (OpenAI) for the LLM and embeddings.

This time, we’ll take it one step further: running the same agent, but with local models thanks to Ollama.
