I have repeatedly run into cases where I need to use a temporary file or folder and delete it later.
The most natural solution is to follow the recommendations in "Robust Error Handling and Cleanup in ObjectScript", using a try/catch/pseudo-finally block or a registered object that performs cleanup in its destructor. The %Stream.File* classes also have a RemoveOnClose property you can set, though with caution, since you could accidentally delete an important file. Moreover, this property is reset by calls to %Save(), so you need to set it back to 1 after each use.
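As a rough illustration of that caveat (the file contents and stream class here are just placeholders), the pattern looks like this, with the property set again after %Save():

```objectscript
Set stream = ##class(%Stream.FileCharacter).%New()
Set stream.Filename = ##class(%Library.File).TempFilename()
Set stream.RemoveOnClose = 1
Do stream.Write("scratch data")
Do stream.%Save()
// %Save() resets RemoveOnClose, so set it back if you still want cleanup
Set stream.RemoveOnClose = 1
// When the last reference to the stream is closed, the file is deleted
Kill stream
```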
There is one particular case, however: suppose you need the temporary file to survive further up the call stack. For example:
ClassMethod MethodA()
{
    Do ..MethodB(.filename)
    // Do something else with the filename
}

ClassMethod MethodB(Output filename)
{
    // Create a temp file and set filename to the file's name
    Set filename = ##class(%Library.File).TempFilename()
    // ... and probably do some other stuff
}
You could still juggle %Stream.File* objects with RemoveOnClose set to 1, but here we are interested only in temporary files.
This is where the "Singleton" concept comes in. IPM offers a base implementation in %IPM.General.Singleton, which you can extend to cover different use cases. The general behavior and usage pattern are as follows:
At a higher stack level, call %Get() on the class to obtain the single instance, which is also reachable through %Get() calls at lower stack levels.
When the object goes out of scope at the highest stack level using it, the cleanup code runs.
This approach beats a %-variable because there is no need to check whether it is defined, and it also survives argumentless NEW calls at lower stack levels thanks to a deeper bit of object trickery.
ClassMethod MethodA()
{
    Set tempFileManager = ##class(%IPM.Utils.TempFileManager).%Get()
    Do ..MethodB(.filename)
    // Do something else with the filename
    // The temp file is cleaned up automatically when tempFileManager goes out of scope
}

ClassMethod MethodB(Output filename)
{
    Set tempFileManager = ##class(%IPM.Utils.TempFileManager).%Get()
    // Create a temp file and set filename to the file's name
    Set filename = tempFileManager.GetTempFileName(".md")
    // ... and probably do some other stuff
}
Spinning up an InterSystems IRIS Cloud Document deployment
Taking a quick tour of the service via the service UI
Part II - Sample (Dockerized) Java App (this article)
Grabbing the connection details and TLS certificate
Reviewing a simple Java sample that creates a collection, inserts documents, and queries them
Setting up and running the Java (Dockerized) end‑to‑end sample
As mentioned, the goal is to give you a smooth “first run” experience.
Previously we created an IRIS Cloud Document deployment (and took a quick tour), now let's see how we can interact with it from a Java app.
Assuming you want to take this for a spin and go hands-on, you'll need Docker and Git; start by hopping over to the Open Exchange app and cloning the GitHub repo.
4. Note the connection details
Once the deployment is running, open it and look at the Overview page. There’s a table called Making External Connections that lists the values you'll need: host, port, namespace, and user credentials.
Keep those values handy; we’ll plug them into the Java demo.
Make sure external access is enabled: either for all IP addresses, or with your client IP (or IP range) allowed in the deployment firewall settings.
This is done in the Cloud Services Portal when creating the service, or in the section mentioned above.
Download the TLS certificate
Cloud Document requires TLS. From the deployment overview there’s a link to download a self‑signed X.509 certificate for your deployment. You’ll use this certificate on your client side to establish a trusted TLS connection. Save it as something like: certs/certificateSQLaaS.pem
That’s all we need from the portal: host, port, namespace, credentials, and the certificate file.
5. Review the Sample: Accessing Cloud Document from Java
In general, the pattern looks like this:
Make a secure connection (Connecting - Docs) - Configure DataSource (server, port, namespace, user, password, TLS).
Ingest some data (Using Document and Collections - Docs) - Get a Collection by name (created automatically the first time). And build JSONObject/JSONArray instances, insert them as Documents.
Query / fetch data back (Querying - Docs) - Query using a ShorthandQuery (string that behaves like a WHERE clause on the collection).
If you’ve used other document databases, this should feel pretty familiar.
The Java driver for Cloud Document lives in the package com.intersystems.document. It gives you three main pieces:
DataSource – a connection pool to the Cloud Document server.
Document – base class for JSON documents; usually you’ll use its subclasses:
JSONObject – JSON object with put() methods for key/value pairs.
JSONArray – JSON array with add() methods.
Collection – represents a named collection; you can insert, get, getAll, drop, and run queries.
The code and data used in this sample are based directly on the examples provided within our Documentation.
5.1 Making the connection
First, the bits we need for a basic connection:
Hostname, port, namespace, user, password – from the deployment’s “external connections” information.
The deployment’s X.509 certificate, imported into a Java keystore.
A small SSLConfig.properties file so the driver knows which keystore to use.
Building a TLS-enabled DataSource
Here’s a compact example that focuses on the connection itself:
Java Connection Code
import com.intersystems.document.DataSource;
import com.intersystems.document.Document;
// 1. Create and configure the DataSource (connection pool)
DataSource pool = DataSource.createDataSource();
pool.setServerName(serverName);
pool.setPortNumber(port);
pool.setDatabaseName(namespace);
pool.setUser(user);
pool.setPassword(password);
// Require TLS – connectionSecurityLevel 10 enables TLS.
pool.setConnectionSecurityLevel(10);
pool.preStart(5);
pool.getConnection(); // force pool creation
If SSLConfig.properties and keystore.jks are set up correctly, the getConnection() call should establish a TLS connection to your Cloud Document deployment.
The Cloud Document Java driver looks for this SSLConfig.properties file and uses it when you set connectionSecurityLevel to require TLS.
This is what this file would look like:
SSLConfig.properties file sample
# SSL/TLS configuration for InterSystems Java client
# This file MUST be named SSLConfig.properties and be in the application's working directory.
# The Docker image will create /app/keystore.jks at container startup.
debug=false
protocol=TLS
trustStore=keystore.jks
trustStorePassword=changeit
In the Docker sample I provided there is a script that takes care of this for you.
If you're running your own samples, you can use a line like this one:
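For example, a single keytool command (included with the JDK) can import the downloaded certificate into a Java trust store; the paths, alias, and password here are illustrative assumptions that match the SSLConfig.properties sample:

```shell
# Import the deployment certificate into a trust store
# (file path, alias, and password are placeholders - adjust to your setup)
keytool -importcert \
  -file certs/certificateSQLaaS.pem \
  -alias iris-cloud-document \
  -keystore keystore.jks \
  -storepass changeit \
  -noprompt
```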
5.2 Ingesting data
Once we have a DataSource, we work with collections and documents.
A Collection is the named container, like colors or demoPeople.
A document is a JSONObject or a JSONArray, both of which extend Document.
Here’s a small “ingest” example that mirrors the colors JSON file we imported in the UI earlier.
Java Ingest Code
// 2. Get (or create) the collection
Collection people = Collection.getCollection(pool, collectionName);
if (people.size() > 0) {
System.out.println("\nCollection '" + people.getName() + "' already has "
+ people.size() + " documents. Dropping them for a clean demo...");
people.drop();
}
System.out.println("Using collection: " + people.getName());
// 3. Insert a very simple array document
Document docOne = new JSONArray()
.add("Hello from Cloud Document (Docker demo)");
String id1 = people.insert(docOne);
System.out.println("\nInserted docOne (JSONArray) with id " + id1);
// 4. Insert a JSONObject document
Document docTwo = new JSONObject()
.put("name", "John Doe")
.put("age", 42)
.put("city", "Boston");
String id2 = people.insert(docTwo);
System.out.println("Inserted docTwo (JSONObject) with id " + id2);
// 5. Bulk insert of multiple JSONObject documents
List<Document> batch = new ArrayList<>();
batch.add(new JSONObject()
.put("name", "Jane Doe")
.put("age", 20)
.put("city", "Seattle"));
batch.add(new JSONObject()
.put("name", "Anne Elk")
.put("age", 38)
.put("city", "London"));
BulkResponse bulk = people.insert(batch);
System.out.println("Bulk insert completed. New ids: " + bulk.getIds());
A few notes:
Collection.getCollection(pool, name) will create the collection on first use if it doesn’t exist.
insert() returns the document ID assigned by Cloud Document.
insert(List<Document>) does a bulk write and returns all the IDs in a BulkResponse.
This is the same basic pattern you’d use in an application ingesting JSON from a file, a queue, or an API.
5.3 Querying and fetching data
On the Java side you have two main options:
Use the collection-centric APIs (getAll, createShorthandQuery, etc.).
Use regular SQL (for example with JDBC directly) and JSON_TABLE when you want rich SQL projections.
For a first experience, the collection APIs are usually enough.
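For the JDBC route, here is a minimal sketch. The jdbc:IRIS://host:port/namespace URL format and the presence of the InterSystems JDBC driver on the classpath are assumptions, and the query mirrors the JSON_TABLE pattern used elsewhere in this guide:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlQueryDemo {
    public static void main(String[] args) throws Exception {
        // Connection details taken from the same environment variables the
        // sample's .env file defines (IRIS_HOST, IRIS_PORT, etc.).
        String url = "jdbc:IRIS://" + System.getenv("IRIS_HOST") + ":"
                + System.getenv("IRIS_PORT") + "/" + System.getenv("IRIS_NAMESPACE");
        try (Connection conn = DriverManager.getConnection(
                 url, System.getenv("IRIS_USER"), System.getenv("IRIS_PASSWORD"));
             Statement stmt = conn.createStatement();
             // Project the document collection into rows via JSON_TABLE
             ResultSet rs = stmt.executeQuery(
                 "SELECT name, rgb, hex FROM JSON_TABLE(colors FORMAT COLLECTION)")) {
            while (rs.next()) {
                System.out.println(rs.getString("name") + " -> " + rs.getString("hex"));
            }
        }
    }
}
```

This is the same data you touched through the Collection API, just viewed through a relational lens.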
Listing all documents in a collection and searching for some of them
Java Fetch/Query Code
// 6. Retrieve and display all documents in the collection
System.out.println("\nAll documents in collection '" + collectionName + "':");
List<Document> allDocuments = people.getAll();
for (Document d : allDocuments) {
System.out.println(" " + d.getID() + ": " + d.toJSONString());
}
System.out.println("Collection size reported by server: " + people.size());
// 7. Run a shorthand query
String shorthand = "name > 'H' AND age >= 21";
System.out.println("\nRunning shorthand query: " + shorthand);
ShorthandQuery query = people.createShorthandQuery(shorthand);
Cursor results = query.execute();
System.out.println("Shorthand query returned " + results.count() + " result(s).");
while (results.hasNext()) {
Document d = results.next();
System.out.println(" " + d.toJSONString());
}
What’s happening here:
getAll() gives you every document in the collection as Document objects.
createShorthandQuery("name > 'H'") creates a query that’s conceptually similar to WHERE name > 'H' in SQL.
Cursor lets you iterate the results and also ask for a count.
If you later want to bring this into the SQL world, the same collections you touched here can be queried with JSON_TABLE in the SQL UI or via JDBC. That’s one of the nice aspects of Cloud Document: you don’t have to choose between “document API” and “SQL”; you get both.
6. Setting up and Running the Sample
As mentioned, I'm providing a Dockerized sample to make the experience as smooth as possible, without requiring you to manually download and install various components; but if you prefer, you can take the same sample and run it on your own.
The Open Exchange and related GitHub repository include detailed instructions for running it, but at a high level it comes down to:
6.1 Update .env file and place TLS certificate
This is what your environment variables file might look like after you edit it:
Environment Variables .env Edited File (example)
# Copy this file to .env and fill in your values
# Cloud Document connection settings
IRIS_HOST=k8s-e8c99d11-a90ppp7q-333333jj22-2222o11o111oo1o1.elb.us-east-1.amazonaws.com
IRIS_PORT=443
IRIS_NAMESPACE=USER
IRIS_USER=SQLAdmin
IRIS_PASSWORD=verySECRETpassword12345*
# Optional: collection name (override default)
COLLECTION_NAME=demoPeople
# Absolute path on your host to the Cloud Document X.509 certificate
CERT_FILE_HOST_PATH=./cert/certificateSQLaaS.pem
6.2 Run docker compose
Just run docker compose up --build and the sample will run.
Behind the scenes it will:
Stage 1: Use a Maven + JDK image to build a shaded JAR.
Stage 2: Use a slim JDK image, copy the JAR and SSLConfig.properties, create a keystore from your cert at container startup, then run the JAR.
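A hedged sketch of what such a two-stage Dockerfile can look like; the image tags, JAR name, and entrypoint script are illustrative, not the repo's exact contents:

```dockerfile
# Stage 1: build a shaded JAR with Maven
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn -q package

# Stage 2: slim runtime image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /build/target/app-shaded.jar app.jar
COPY SSLConfig.properties .
# The entrypoint script creates /app/keystore.jks from the mounted cert,
# then runs the JAR
COPY entrypoint.sh .
ENTRYPOINT ["./entrypoint.sh"]
```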
Here's a short video demonstrating this:
Wrapping up
If you’re new to InterSystems but not new to programming, the basic path to a good first experience with IRIS Cloud Document is:
Bring the service up: create a deployment, note host/port/namespace/credentials, download the certificate.
Kick the tires in the web portal: upload a JSON file, import into a collection, browse with the Collection Browser, and run a simple SQL query with JSON_TABLE.
Wire it into Java:
create a TLS-enabled DataSource (with SSLConfig.properties + keystore),
use Collection and Document to ingest data,
and query with getAll and shorthand queries.
From there you can iterate toward more interesting things: updates, deletes, richer queries, combining Cloud Document data with relational data, or using other drivers like .NET.
But if you’ve followed along to this point and seen your own JSON documents come back from the Java code, you’ve already taken the most important step: you’re up and running in the InterSystems ecosystem.
If you already know Java (or .NET), and perhaps have used other document databases (or are looking for one), but you are new to the InterSystems world, this post should help you.
InterSystems IRIS Cloud Document is a fully managed document database that lets you store JSON documents and query them with familiar SQL syntax, delivered as a cloud service managed by InterSystems.
In this article pair I’ll walk you through:
Part I - Intro and Quick Tour (this article)
What is it?
Spinning up an InterSystems IRIS Cloud Document deployment
Taking a quick tour of the service via the service UI
Grabbing the connection details and TLS certificate
Reviewing a simple Java sample that creates a collection, inserts documents, and queries them
Setting up and running the Java (Dockerized) end‑to‑end sample
The goal is to give you a smooth “first run” experience.
1. What is InterSystems IRIS Cloud Document?
Cloud Document is a document database service built on top of the InterSystems IRIS data platform, exposed as a managed cloud service. You work with JSON documents stored in Collections, then query them using SQL, or through language‑specific drivers (Java or .NET).
Conceptually:
A Document is a JSON object or array.
A Collection is a logical container for documents and gives you APIs for insert/get/update/delete/query.
Under the hood it’s the same engine that powers other IRIS data services, so you can use SQL to query document data if and when you need it.
[By the way, you need a subscription to the service - but this is outside the scope of this article, as it is more of a commercial topic. For more details, see our Services page in the Docs and the related AWS Marketplace service listing page.]
2. Spinning up a deployment
Create a new IRIS Cloud Document deployment.
Choose region, name, etc., and let the deployment finish provisioning.
For example:
Here's a short video demonstrating this:
3. A quick tour: upload JSON, import into a collection, browse and query
The Cloud Document web console gives you a nice “zero code” way to get familiar with the service. The flow looks like this: upload a JSON file → import it into a collection → browse with the Collection Browser → run some SQL.
You’ll find these pages under your deployment’s web UI; this page from the Docs walks through the same steps.
3.1 Upload a sample JSON file
Create a small colors.json file locally, for example:
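An illustrative version with three documents, using the name, rgb, and hex fields queried later on (the exact values are placeholders):

```json
[
  { "name": "red",   "rgb": "255,0,0", "hex": "#FF0000" },
  { "name": "green", "rgb": "0,128,0", "hex": "#008000" },
  { "name": "blue",  "rgb": "0,0,255", "hex": "#0000FF" }
]
```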
Use the Upload button and select your colors.json file.
The file must have an object or an array at the top level, which our example does.
3.2 Import the JSON into a collection
Now import the uploaded file into a Cloud Document collection:
Navigate to the Collection Import page in the deployment UI.
Choose your uploaded colors.json via Select file.
For Collection, either:
pick an existing collection, or
choose (Add new collection) and enter something like colors. (Here I clicked Preview, which shows the contents and summarizes that 3 documents will be added upon import.)
Click Import.
The service will parse the JSON and write each object into the colors collection. If your file is large, this may take a bit longer; for three tiny objects it’s almost instant.
You should see a green popup message reporting that 3 documents were imported:
[In case you get a red popup message, indicating there was some error, this might be because this is your first import, and the service "backend" is still "warming up". Looking at the network trace you might see something like this:
... https response error StatusCode: 409, RequestID: ... , api error CodeArtifactUserPendingException: ERROR: Lambda is initializing your function. It will be ready to invoke shortly.
Indeed, you can ignore this: wait a little and try again shortly after.]
3.3 Explore the data with the Collection Browser
Once you’ve imported, go to the Collection Browser page and select the colors collection. You should see each document displayed as JSON.
Things to try:
Click on individual documents (via the Previous and Next buttons) and inspect their JSON.
Confirm that all the objects from your file are present.
Notice that collections are just logical groupings; you can have multiple collections with very different shapes of documents.
This browser is a good way to sanity-check what’s in your deployment without writing any code.
3.4 Run a simple SQL query
Cloud Document documents live in collections, but you can query them via SQL using JSON_TABLE (see Docs) to project JSON data into a tabular shape. In the deployment UI, go to the SQL Query Tools page and run queries such as:
SELECT name, rgb, hex FROM JSON_TABLE(colors FORMAT COLLECTION)
Or for example using more functionality of JSON_TABLE:
SELECT c.name, c.hex
FROM JSON_TABLE( 'colors',
  '$[*]' COLUMNS (
    name VARCHAR(50) PATH '$.name',
    hex VARCHAR(10) PATH '$.hex'
  )
) AS c
ORDER BY c.name
That’s the core pattern: load JSON into a collection, browse it as documents, and query it via SQL when you want to slice or join it.
Here's a short video demonstrating this:
Now we can move on to the next article, where we'll review and explain running a Java app that connects to our cloud service and interacts with it.