Article · Apr. 23 · 7 min read

Generating an ObjectScript Class from JSON Response Using ChatGPT

I have a new project to store information from REST responses into an IRIS database. I’ll need to sync information from at least two dozen separate REST endpoints, which means creating nearly that many ObjectScript classes to store the results from those endpoints.

Could I use ChatGPT to get a head start on creating these classes? The answer is “Yes”, which is great, since this is my first attempt at using generative AI for something useful. Generating pictures of giraffes eating soup was getting kind of old….

Here’s what I did:

  • Get some sample JSON REST call output by using “curl” to query the endpoint (a Python equivalent is sketched just after this list).
  • Visit ChatGPT (InterSystems provides a company-wide license for employees).
  • Have the following dialog with ChatGPT.
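
For reference, here is a minimal Python sketch of that first step; the endpoint URL and bearer token below are placeholders of my own, not values from the actual project:

import json

import requests  # third-party package: pip install requests

# Placeholder endpoint and token; substitute your real REST endpoint.
url = "https://api.example.com/email/campaigns/335818632"
headers = {"Authorization": "Bearer YOUR_API_TOKEN"}

response = requests.get(url, headers=headers, timeout=30)
response.raise_for_status()

# Pretty-print the JSON so it can be pasted into the ChatGPT prompt.
print(json.dumps(response.json(), indent=2))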

I put the following request in the “Message ChatGPT” prompt:

Me: Generate an ObjectScript class that will hold data from the following JSON:

I hit “Return” before entering the JSON so ChatGPT directed me as follows:

ChatGPT: Please provide the JSON structure you'd like me to generate the ObjectScript class for. If you have an example JSON object, paste it here, and I'll create the corresponding InterSystems ObjectScript class for you.

I entered the following in the prompt:

{
  "id": 335818632,
  "appId": 113,
  "appName": "Batch",
  "contentId": 182455380233,
  "subject": "Test Subject",
  "name": "Test email campaign name",
  "counters": {
    "processed": 3910,
    "deferred": 86,
    "unsubscribed": 17,
    "statuschange": 17,
    "bounce": 153,
    "mta_dropped": 8,
    "dropped": 299,
    "delivered": 3749,
    "sent": 3903,
    "click": 16,
    "open": 323
  },
  "lastProcessingFinishedAt": 1733232499000,
  "lastProcessingStartedAt": 1733232303000,
  "lastProcessingStateChangeAt": 1733232499000,
  "numIncluded": 4201,
  "processingState": "DONE",
  "scheduledAt": 1733232300000,
  "type": "BATCH_EMAIL"
}

ChatGPT’s response:

Here is the InterSystems ObjectScript class that maps to the provided JSON structure:

Class MyNamespace.MyJSONData Extends (%Persistent, %JSON.Adaptor)
{
Property id As %Integer;
Property appId As %Integer;
Property appName As %String;
Property contentId As %Integer;
Property subject As %String;
Property name As %String;
Property counters As MyNamespace.Counters;
Property lastProcessingFinishedAt As %BigInt;
Property lastProcessingStartedAt As %BigInt;
Property lastProcessingStateChangeAt As %BigInt;
Property numIncluded As %Integer;
Property processingState As %String;
Property scheduledAt As %BigInt;
Property type As %String;
ClassMethod %OnJSONImport(jsonObject As %Object, instance As MyNamespace.MyJSONData) As %Status
{
    Quit ..%JSONImport(jsonObject, .instance)
}
}

Class MyNamespace.Counters Extends (%SerialObject, %JSON.Adaptor)
{
Property processed As %Integer;
Property deferred As %Integer;
Property unsubscribed As %Integer;
Property statuschange As %Integer;
Property bounce As %Integer;
Property mta_dropped As %Integer;
Property dropped As %Integer;
Property delivered As %Integer;
Property sent As %Integer;
Property click As %Integer;
Property open As %Integer;
}

ChatGPT also provided an explanation of the generated code.

This code is a great start, but I would like it to fit into my package named “MyPkg” and to give the class the name “EmailCampaign”. Also, using a bit of NI (natural intelligence), I know the values that generated the “%BigInt” properties are actually Linux epoch times, so those properties should be “%Library.PosixTime”.

I make the following request (not sure why I'm saying “please” to a machine; I guess my parents did a good job):

Me: Can we recreate the class again please, renaming the package to "MyPkg" and the class to "EmailCampaign"? Also, please change the types of the properties from %BigInt to %Library.PosixTime.

ChatGPT’s response:

Here's the updated ObjectScript class with the specified package name and class name, and the timestamp properties updated to use %Library.PosixTime:

Class MyPkg.EmailCampaign Extends (%Persistent, %JSON.Adaptor)
{
Property id As %Integer;
Property appId As %Integer;
Property appName As %String;
Property contentId As %Integer;
Property subject As %String;
Property name As %String;
Property counters As MyPkg.Counters;
Property lastProcessingFinishedAt As %Library.PosixTime;
Property lastProcessingStartedAt As %Library.PosixTime;
Property lastProcessingStateChangeAt As %Library.PosixTime;
Property numIncluded As %Integer;
Property processingState As %String;
Property scheduledAt As %Library.PosixTime;
Property type As %String;
ClassMethod %OnJSONImport(jsonObject As %Object, instance As MyPkg.EmailCampaign) As %Status
{
    Quit ..%JSONImport(jsonObject, .instance)
}
}

Class MyPkg.Counters Extends (%SerialObject, %JSON.Adaptor)
{
Property processed As %Integer;
Property deferred As %Integer;
Property unsubscribed As %Integer;
Property statuschange As %Integer;
Property bounce As %Integer;
Property mta_dropped As %Integer;
Property dropped As %Integer;
Property delivered As %Integer;
Property sent As %Integer;
Property click As %Integer;
Property open As %Integer;
}

ChatGPT again provided an explanation of the changes.

After copying the code “as is” into VSCode, syntax issues with the curly braces were highlighted: ObjectScript does not allow one class to be defined inside another.

Asking ChatGPT to fix the “curly brackets” issue by making the nested MyPkg.Counters class a separate class is a “no-op”; a human still needs to move that class into its own file. ChatGPT did rename the class after the following request:

Me: The class as it is written does not compile. Can we move the internal class named MyPkg.Counters out of the main class into its own class and rename it to MyPkg.CampaignCounters?

ChatGPT’s response:

Here's the updated ObjectScript code with the MyPkg.CampaignCounters class moved out as its own class:

Class MyPkg.EmailCampaign Extends (%Persistent, %JSON.Adaptor)
{
Property id As %Integer;
Property appId As %Integer;
Property appName As %String;
Property contentId As %Integer;
Property subject As %String;
Property name As %String;
Property counters As MyPkg.CampaignCounters;
Property lastProcessingFinishedAt As %Library.PosixTime;
Property lastProcessingStartedAt As %Library.PosixTime;
Property lastProcessingStateChangeAt As %Library.PosixTime;
Property numIncluded As %Integer;
Property processingState As %String;
Property scheduledAt As %Library.PosixTime;
Property type As %String;
ClassMethod %OnJSONImport(jsonObject As %Object, instance As MyPkg.EmailCampaign) As %Status
{
    Quit ..%JSONImport(jsonObject, .instance)
}
}

Class MyPkg.CampaignCounters Extends (%SerialObject, %JSON.Adaptor)
{
Property processed As %Integer;
Property deferred As %Integer;
Property unsubscribed As %Integer;
Property statuschange As %Integer;
Property bounce As %Integer;
Property mta_dropped As %Integer;
Property dropped As %Integer;
Property delivered As %Integer;
Property sent As %Integer;
Property click As %Integer;
Property open As %Integer;
}

After pasting the above into VSCode and compiling it, the compiler reported that the type named “%Object” does not exist in ObjectScript. According to the IRIS documentation, the type of the first parameter of %JSON.Adaptor’s %JSONImport() method is flexible:

“The input argument is either JSON as a string or stream, or a subclass of %DynamicAbstractObject.”

After removing the “As %Object” type from %OnJSONImport()’s first parameter in the first class and separating the two classes into separate files, the MyPkg.EmailCampaign class compiled successfully. However, MyPkg.CampaignCounters still did not compile; ChatGPT doesn’t (yet?) know that the underscore is a special character in ObjectScript: the string concatenation operator.

Let’s try to teach it.

Me: The underscore character "_" is a special character in ObjectScript. Please change properties with an underscore in their names to camel case names and remove the underscore.

ChatGPT’s response:

Here is the updated code with properties that had underscores in their names converted to camel case:

Class MyPkg.EmailCampaign Extends (%Persistent, %JSON.Adaptor)
{
Property id As %Integer;
Property appId As %Integer;
Property appName As %String;
Property contentId As %Integer;
Property subject As %String;
Property name As %String;
Property counters As MyPkg.CampaignCounters;
Property lastProcessingFinishedAt As %Library.PosixTime;
Property lastProcessingStartedAt As %Library.PosixTime;
Property lastProcessingStateChangeAt As %Library.PosixTime;
Property numIncluded As %Integer;
Property processingState As %String;
Property scheduledAt As %Library.PosixTime;
Property type As %String;
ClassMethod %OnJSONImport(jsonObject As %Object, instance As MyPkg.EmailCampaign) As %Status
{
    Quit ..%JSONImport(jsonObject, .instance)
}
}

Class MyPkg.CampaignCounters Extends (%SerialObject, %JSON.Adaptor)
{
Property processed As %Integer;
Property deferred As %Integer;
Property unsubscribed As %Integer;
Property statusChange As %Integer;
Property bounce As %Integer;
Property mtaDropped As %Integer;
Property dropped As %Integer;
Property delivered As %Integer;
Property sent As %Integer;
Property click As %Integer;
Property open As %Integer;
}

ChatGPT included another explanation as well.

ChatGPT went above and beyond and also converted a property named with two separate words (statuschange) to camel case; I’m keeping that change since the similarly named properties in the main class are all camel case.

With the latest generated code in two separate files (and the “As %Object” type removed from the %OnJSONImport() method definition), all of the code compiles successfully.

These generated classes are an excellent starting point for what I need, and I will repeat the process for the other REST data sources in this project, saving myself a lot of unnecessary typing.

Announcement · Apr. 23

[Video] What is InterSystems OMOP?

Hello Community!

Looking for practical insights from your healthcare research? See how InterSystems OMOP can help:

👨‍🔬 What is InterSystems OMOP?

With InterSystems OMOP, a cloud-based software as a service, you can transform clinical data into the OMOP format and gain insights faster.

Benefits include:

  • Build research data repositories efficiently.
  • Ingest, transform, and store data with ease.

🎬 Watch the video to learn more!

Article · Apr. 23 · 6 min read

OMOP Odyssey - AWS HealthLake (Strait of Messina)

Nearline FHIR® Ingestion to InterSystems OMOP from AWS HealthLake

In this part of the OMOP journey, before attempting to challenge Scylla, we reflect on how fortunate we are that the InterSystems OMOP transform is built on Bulk FHIR Export as its source payload. This opens up hands-off interoperability with the InterSystems OMOP transform across several FHIR® vendors, including Amazon Web Services HealthLake.

HealthLake Bulk FHIR Export
 

HealthLake supports Bulk FHIR import/export from the CLI or the API. The premise is simple and the docs are exhaustive, so we'll spare a model the trouble of training on them again and just link them for anyone interested. The more valuable takeaway from this section's heading is that HealthLake implements the Bulk FHIR export standard itself.


Nearline?

Yeah, only "Nearline" ingestion: the HealthLake export covers the whole data store and has no incremental option. It also does not support a resource-based trigger, so the export has to be invoked at an interval, or by some other means not yet apparent to me at the resource-activity level. There are still a great number of ways to poke the export throughout AWS, and without incremental exports you only want it triggered within a tolerable processing window for the whole datastore anyway.

The Whole Datastore?

Yes, the job exports all the resources into a flat structure. Though re-importing the same data just to catch the incremental changes may not be the cleanest process, the InterSystems OMOP transform should handle it.
 

Walkthrough

To keep this short and to the point: the illustration below encapsulates how a scheduled Lambda can glue these two solutions together and automate your OMOP ingestion.



Step One, AWS: Create Bucket

Create a bucket with a few keys: one is shared with InterSystems OMOP for ingestion into the FHIR transformation, and the others support the automated ingestion.


Explanation of the keys:

  • export - landing area for the raw resource ndjson from the job
  • from-healthlake-to-intersystems-omop - landing area for the created .zip and integration point with InterSystems OMOP
  • output - job output

Step Two, InterSystems OMOP

Create the deployment, providing the ARN of the bucket and the keys from above, i.e., the `from-healthlake-to-intersystems-omop` key.

Snag the example policy from the post-configuration step as indicated and apply it to the bucket in AWS. There are exhaustive examples of this in a previous post, OMOP Odyssey - InterSystems OMOP Cloud Service (Troy).
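
If you prefer to script that step, here is a minimal sketch of applying a bucket policy with boto3; the principal ARN, actions, and resources below are illustrative placeholders, and the real statement should be copied from the example policy shown in the deployment's post-configuration step:

import json

import boto3

s3 = boto3.client('s3')

# Illustrative policy only: copy the actual statement from the example
# policy displayed by the InterSystems OMOP deployment.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowInterSystemsOMOPRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/intersystems-omop-reader"},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::intersystems-omop-fhir-bucket",
                "arn:aws:s3:::intersystems-omop-fhir-bucket/from-healthlake-to-intersystems-omop/*"
            ]
        }
    ]
}

s3.put_bucket_policy(
    Bucket='intersystems-omop-fhir-bucket',
    Policy=json.dumps(bucket_policy)
)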

Step Three, Schedule a HealthLake Export in the Expected InterSystems OMOP Format 💫

The flow is explained in the code itself as well, but I will also describe it in the form of a prompt, so you can land in the same spot with your own changes:

In Python, show me how to start a HealthLake export job, export it to a target location, and poll the status of the job until it is complete; then read all of the ndjson files it creates, zip them without the relative path included in the zip, and upload the zip to another location in the same bucket; once the upload is complete, remove the exported files from the export job.

The resulting function and code are the following:

import json
import boto3
import uuid
import zipfile
import io
import os
import time


def lambda_handler(event, context):
    # AWS clients
    s3 = boto3.client('s3')
    client = boto3.client('healthlake')

    # Job parameters
    small_guid = uuid.uuid4().hex[:8]
    bucket_name = 'intersystems-omop-fhir-bucket'
    prefix = 'export/'  # Make sure it ends with '/'
    output_zip_key = 'from-healthlake-to-intersystems-omop/healthlake_ndjson_' + small_guid + '.zip'
    datastore_id = '9ee0e51d987e#ai#8ca487e8e95b1d'

    # Step 1: Start the HealthLake export job
    response = client.start_fhir_export_job(
        JobName='FHIR2OMOPJob',
        OutputDataConfig={
            'S3Configuration': {
                'S3Uri': 's3://intersystems-omop-fhir-bucket/export/',
                'KmsKeyId': 'arn:aws:kms:us-east-2:12345:key/54918bec-#ai#-4710-9c18-1a65d0d4590b'
            }
        },
        DatastoreId=datastore_id,
        DataAccessRoleArn='arn:aws:iam::12345:role/service-role/AWSHealthLake-Export-2-OMOP',
        ClientToken=small_guid
    )

    job_id = response['JobId']
    print(f"Export job started: {job_id}")

    # Step 2: Poll until the job completes
    while True:
        status_response = client.describe_fhir_export_job(
            DatastoreId=datastore_id,
            JobId=job_id
        )

        status = status_response['ExportJobProperties']['JobStatus']
        print(f"Job status: {status}")

        if status in ['COMPLETED', 'FAILED', 'CANCELLED']:
            break
        time.sleep(10)  # wait before polling again

    # Step 3: On success, zip the ndjson output and hand it to InterSystems OMOP
    if status == 'COMPLETED':
        output_uri = status_response['ExportJobProperties']['OutputDataConfig']['S3Configuration']['S3Uri']
        print(f"Export completed. Data available at: {output_uri}")

        # Get list of all objects with .ndjson extension under the prefix
        ndjson_keys = []
        paginator = s3.get_paginator('list_objects_v2')
        for page in paginator.paginate(Bucket=bucket_name, Prefix=prefix):
            for obj in page.get('Contents', []):
                key = obj['Key']
                if key.endswith('.ndjson'):
                    ndjson_keys.append(key)

        # Create ZIP in memory, keeping only the base file name (no relative path)
        zip_buffer = io.BytesIO()
        with zipfile.ZipFile(zip_buffer, 'w', zipfile.ZIP_DEFLATED) as zf:
            for key in ndjson_keys:
                obj = s3.get_object(Bucket=bucket_name, Key=key)
                file_data = obj['Body'].read()
                arcname = os.path.basename(key)
                zf.writestr(arcname, file_data)

        zip_buffer.seek(0)

        # Upload ZIP back to S3, into the key watched by InterSystems OMOP
        s3.put_object(
            Bucket=bucket_name,
            Key=output_zip_key,
            Body=zip_buffer.getvalue()
        )
        print(f"Created ZIP with {len(ndjson_keys)} files at s3://{bucket_name}/{output_zip_key}")

        # Clean up the raw export files so the next run starts fresh
        for page in paginator.paginate(Bucket=bucket_name, Prefix=prefix):
            if 'Contents' in page:
                # Exclude the folder marker itself if it exists
                delete_keys = [
                    {'Key': obj['Key']}
                    for obj in page['Contents']
                    if obj['Key'] != prefix  # protect the folder key (e.g., 'export/')
                ]

                if delete_keys:
                    s3.delete_objects(Bucket=bucket_name, Delete={'Objects': delete_keys})
                    print(f"Deleted {len(delete_keys)} objects under {prefix}")
            else:
                print(f"No objects found under {prefix}")
    else:
        print(f"Export job did not complete successfully. Status: {status}")

    return {
        'statusCode': 200,
        'body': json.dumps(response)
    }


This function fires about every 10 minutes via an EventBridge schedule; the interval will have to be adjusted to match your workload characteristics.
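
For completeness, here is a minimal sketch of wiring up that schedule with boto3; the rule name and Lambda ARN are placeholders of mine, not values from this deployment, and the same can of course be done from the console:

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

# Placeholder ARN for the lambda function defined above.
function_arn = 'arn:aws:lambda:us-east-2:12345:function:healthlake-to-omop'
rule_name = 'healthlake-to-omop-every-10-min'

# Fire the export/zip/upload function every 10 minutes.
rule_arn = events.put_rule(
    Name=rule_name,
    ScheduleExpression='rate(10 minutes)',
    State='ENABLED'
)['RuleArn']

# Allow EventBridge to invoke the function.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId='allow-eventbridge-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule_arn
)

# Point the rule at the lambda.
events.put_targets(
    Rule=rule_name,
    Targets=[{'Id': 'healthlake-export', 'Arn': function_arn}]
)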
 

Step Four, Validate Ingestion ✔

LGTM! We can see the zips in the ingestion location being successfully picked up by the transform in InterSystems OMOP.

Step Five, Smoke Test the Data ✔

LGTM! FHIR Organization Resource = OMOPCDM54 care_site.

Question · Apr. 23

IntegratedML

Hi Guys,

I'm a newbie who doesn't know much about IntegratedML, and I'm looking for a first push into it. I've set up VSCode with IRIS 2024.3 running on Linux, and my understanding is that we can create models using SQL. So first, do I need to set up a specific environment where I can run the SQL commands to create and train models, or can I just use the SMP? And do I need to install or enable Python or anything else to set up the environment?

Also, are there any easy samples or training materials on how to create, train, and deploy a model?

 

Thanks

InterSystems Official · Apr. 23

Maintenance Releases 2024.1.4 and 2023.1.6 of InterSystems IRIS, IRIS for Health, and HealthShare HealthConnect Are Now Available

The 2024.1.4 and 2023.1.6 maintenance releases of the InterSystems IRIS® data platform, InterSystems IRIS® for Health™, and HealthShare® Health Connect are now generally available (GA). These releases include fixes for the following recently issued alert: Alert: SQL Queries Returning Incorrect Results | InterSystems. Please share your feedback through the Developer Community so that together we can build a better product.

Documentation

You can find detailed change lists and upgrade checklists on the following pages:

Early Access Programs (EAPs)

Many EAPs are currently available. Check out this page and register for the ones that interest you.

How do I get the software?

Full installation packages for InterSystems IRIS and InterSystems IRIS for Health are available from the WRC's "InterSystems IRIS Data Platform Full Kits" page. HealthShare Health Connect kits are available from the WRC's "HealthShare Full Kits" page. Container images are available from the InterSystems Container Registry.

Availability and package information

This release comes with classic installation packages for all supported platforms, as well as container images in Docker container format. For the complete list, see the "Supported Platforms" document. The build numbers for these maintenance releases are 2024.1.4.512.0 and 2023.1.6.809.0.
