Article · Dec. 15, 2024 · 6 min read

Composing an OpenAPI 2.0 specification.

A REST API (Representational State Transfer Application Programming Interface) is a standardized way for web applications to communicate with each other using HTTP methods such as GET, POST, PUT, and DELETE. It is designed around resources, which can be anything from a user to a file. Each resource is identified by a unique URL, and interactions with these resources are stateless, meaning each request from a client to a server must contain all the information needed to understand and process it. For example, `GET /clinic/appointment?id=42` would fetch appointment 42, while `DELETE /clinic/appointment?id=42` would remove it. This statelessness, along with the use of standard HTTP methods, makes REST APIs highly scalable, easy to understand, and straightforward to integrate with different systems. By following REST principles, developers can create APIs that are consistent, easy to use, and capable of handling a wide range of tasks.

InterSystems supports REST API development with a variety of tools and techniques. In this article series, I am going to go over the ones that I personally prefer. The articles are divided as listed below.

  • Composing an OpenAPI 2.0 specification.
  • Documenting and Developing REST APIs using OpenAPI 2.0 specification.
  • Developing an IRIS production pipeline to serve REST API calls.

As a prerequisite to following the steps and instructions laid out in this article series, you should have the setup below.

  1. InterSystems IRIS for Health
  2. InterSystems IAM with Dev Portal deployed and enabled.
  3. Postman or any other API testing software.

Let’s assume that we are developing a workflow for managing clinic appointments. We will focus on an appointment resource and how to develop GET, POST, and PUT methods for it. You can use the same steps to develop a DELETE method as well, but I am not going to cover it because I personally prefer not to.

First, you need to start by composing an OpenAPI 2.0 spec for your application's APIs. I use the Swagger editor for this task so that I can develop in YAML format and then convert the spec file to JSON, which I can then use with the IRIS API management tools. You can use the online Swagger editor for free, or you can download and deploy a container of it locally from here.
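
If you prefer to script the YAML-to-JSON conversion rather than exporting from the editor, a minimal sketch in Python (assuming the PyYAML package is installed; the file names are illustrative) could look like this:

import json
import yaml  # PyYAML

# parse the YAML spec into a Python dict
with open("clinic.yaml") as f:
    spec = yaml.safe_load(f)

# write the same spec back out as pretty-printed JSON
with open("clinic.json", "w") as f:
    json.dump(spec, f, indent=2)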

When I compose an OpenAPI spec, I think of the file as three sections. The intro section is where you lay out the descriptive information. It includes fields that describe the nature of the application, such as the description, the license, the tags that categorize the different endpoints, and the URL (formed by combining the host and the base path). It also includes the confines within which the application can operate, if any, such as the schemes and security definitions.

The code block below has a sample intro section that is going to be referenced from here on. If you paste it into an open Swagger editor, you will notice that the groundwork of the API documentation is taking shape.

swagger: "2.0"
info:
  description: "This is a sample server acting as an example Clinic server. You can find out more about Swagger at [http://swagger.io](http://swagger.io). For this sample, you can use the api key `special-key` or the Basic Auth user `Basic` with password `Basic` to test the authorization filters."
  version: "1.0.0"
  title: "Clinic Management"
  termsOfService: "http://swagger.io/terms/"
  contact:
    email: "raef.youssef@intersystems.com"
  license:
    name: "Apache 2.0"
    url: "http://www.apache.org/licenses/LICENSE-2.0.html"
host: "apigatewaydns"
basePath: "/clinic"
tags:
- name: "Scheduling"
  description: "Everything about your Pets"
  externalDocs:
    description: "Find out more"
    url: "http://intersystems.com"
schemes:
- "http"
- "https"
securityDefinitions:
  BasicAuth:
    type: "basic"
  api_key:
    type: "apiKey"
    name: "api_key"
    in: "header"

The second section of the spec file is the body. This is where you list the different paths, aka endpoints, and the methods available for each. For the purpose of this demo, we will just compose the spec for the “/appointment” endpoint. Below is the text for that.


paths:
  /appointment:
    get:
      tags:
        - Scheduling
      summary: "Fetches the details of an existing appointment"
      security:
      - BasicAuth: []
      description: "Fetches the details of an existing appointment by providing its ID"
      operationId: "getAppointmentByID"
      produces:
      - "application/json"
      parameters:
      - in: "query"
        name: "id"
        type: "integer"
        format: "int64"
        description: "ID of the appointment sought"
        required: true
      responses:
        '200':
          description: successful operation
#          schema:
#            $ref: '#/definitions/appointment'
    post:
      tags:
        - Scheduling
      summary: ""
      security:
      - api_key: []
      description: ""
      operationId: "postAppointment"
      consumes:
      - "application/json"
      - "application/xml"
      produces:
      - "application/json"
      - "application/xml"
      parameters:
      - in: "formData"
        name: "Date"
        type: "string"
        format: "date-time"
        description: ""
        required: true
      - in: "formData"
        name: "duration"
        type: "integer"
        format: "int64"
        description: "number of half hour slots of the appointment duration"
        required: true
      responses:
        '200':
          description: successful operation
        '405':
          description: Time not available
    put:
      tags:
        - Scheduling
      summary: ""
      security:
      - BasicAuth: []
      description: ""
      operationId: "updateAppointment"
      consumes:
      - "application/json"
      - "application/xml"
      produces:
      - "application/json"
      - "application/xml"
      parameters:
      - in: "query"
        name: "id"
        type: "integer"
        format: "int64"
        description: "ID of the appointment sought"
        required: true
      - in: "body"
        name: "body"
        description: ""
        required: true
#        schema:
#          $ref: "#/definitions/appointment"
      responses:
        '200':
          description: successful operation
        '405':
          description: Time not available
        '406':
          description: Appointment Not Found

A few things to note in the paths section. First, you can assign a different authentication mechanism to each method. There is an “operationId” field, which will be used to name the backend process that serves the respective method call. Furthermore, you can define a schema for the request body and the response; the schema references are commented out here so they won't cause an error when you paste the text, since we have not defined the schema yet. Finally, you can define custom response codes and messages.
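
Once the spec is wired up behind the gateway, you can smoke-test the GET method with Postman or a short script. Here is a minimal sketch in Python (assuming the `requests` package, the host and credentials from the sample spec, and a made-up appointment ID):

import requests

# GET /clinic/appointment?id=1 using the Basic Auth user from the sample spec
resp = requests.get(
    "http://apigatewaydns/clinic/appointment",
    params={"id": 1},         # required query parameter
    auth=("Basic", "Basic"),  # BasicAuth security definition
)
print(resp.status_code)  # expect 200 on success
print(resp.json())       # the appointment details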

The last section is where the different message schemas are defined. In this example, we will compose the appointment schema to include the appointment ID, the date, the duration, and the name of the service provider. Below is the text of the YAML definition. After pasting it into your editor, you can uncomment the lines where the schema definition is referenced.

definitions:
    appointment:
        type: "object"
        properties:
            id:
                type: "integer"
                format: "int64"
            Provider:
                type: "string"
            Date:
                type: "string"
                format: "date-time"
            Duration:
                type: "integer"
                format: "int64"

This concludes this part of the series. We are going to use this spec in the later parts, which are coming soon, so be on the lookout. For more information on developing an OpenAPI 2.0 specification, please refer to the documentation here.

Article · Dec. 15, 2024 · 3 min read

Setup OAuth2 Client for iris-http-calls to Epic on FHIR

I started working with Epic on FHIR about a month ago.

Creating a Public Private Key Pair

# create a directory to hold the key pair
mkdir /home/ec2-user/path_to_key
# generate a 2048-bit RSA private key
openssl genrsa -out ./path_to_key/privatekey.pem 2048

For backend apps, you can export the public key to a base64 encoded X.509 certificate named publickey509.pem using this command...

Question · Dec. 15, 2024

Caché evaluation version

As you can see, my version of Caché is no longer on the list of products.

Quite a while ago I asked some questions about an even older version. That version had 5 databases, and at the time I only installed the main one for my purposes on the 2010 version. I am now trying to migrate the other four and I am running into the following problem:

         - When mounting a database, it is mounted read-only, with no option to change this either through the System Management Portal or with ^DATABASE.

I have already identified myself as a dinosaur for these times; my conclusion at this point is that, since this is an evaluation license, it does not allow mounting more than one database as read/write.

Is my conclusion correct? If not, could someone point out to me where the problem lies?

Thanks

Question · Dec. 15, 2024

Is it possible to feed a question to the D.C. A.I. via URL parameter?

I want to provide a shortcut to the D.C. A.I. from a hotkey which will automatically populate the question field here - https://community.intersystems.com/ask-dc-ai.

Is there a way to construct this URL such that it will populate the "Ask a Programming Question" field (and better yet, execute the query)?

Thanks!

Article · Dec. 14, 2024 · 6 min read

Rivian GeoLocation Plotting with IRIS Cloud Document and Databricks

Plotting the gnssLocation data from my Rivian R1S across Michigan with InterSystems Cloud Document and Databricks

If you've been looking for a use case for a document database, I came to the realization that my favorite dead-simple one is the ability to query a pile of JSON right alongside my other data with SQL, without really doing much. That's the dream realized by the powerful multi-model InterSystems Data Platform, shown here in a simple notebook that visualizes the geolocation data my Rivian R1S is emitting for DeezWatts (A Rivian Data Adventure).

So here is the two-step approach: ingestion to, and visualization from, InterSystems Cloud Document, using the JDBC document driver.

InterSystems Cloud Document Deployment

For starters, I fired up a small Cloud Document deployment on the InterSystems Cloud Services Portal, with an enabled listener.

I downloaded the ssl certificate, and snagged the drivers for JDBC and accompanying document driver as well.

Ingestion

For ingestion, I wanted to get a grip on how to lift a JSON document from the file system and persist it as a collection in the document database over the listener; for this I wrote a standalone Java app, RivianDocDB.java. It was more of a utility, as the fun all happened in the notebook once the data was up there.

The app is quite close to Java trash, but it worked, and we can see the collection in the collection browser in the deployment.

Databricks

Now, this takes a little bit of Databricks setup, but it is well worth it to work with PySpark for the fun part.

I added the two InterSystems drivers to the cluster and put the certificate in the import_cloudsql_certficiate.sh cluster init script so it gets added to the keystore.

For completeness, the cluster is running Databricks 16, Spark 3.5.0, and Scala 2.12.

Visualization

So we should be set to run a PySpark job and plot where my whip has been in the subset of data I'll drag in.

We are using geopandas and geodatasets for a straightforward approach to plotting.

import geopandas as gpd
import geodatasets
from shapely.geometry import Polygon

Now, this takes a little getting used to, but here is the query to InterSystems Cloud Document using the JSON path syntax and JSON_TABLE.

dbtablequery = f"(SELECT TOP 1000 lat,longitude FROM JSON_TABLE(deezwatts2 FORMAT COLLECTION, '$' COLUMNS (lat VARCHAR(20) path '$.whip2.data.vehicleState.gnssLocation.latitude', longitude VARCHAR(20) path '$.whip2.data.vehicleState.gnssLocation.longitude' ))) AS temp_table;"

 

I did manage to find a site that made it dead simple to create the json path @ jsonpath.com.

Next we set up the connection to the IRIS Cloud Document deployment and read the query results into a dataframe.

# Read data from InterSystems Document Database via query above
df = (spark.read.format("jdbc") \
  .option("url", "jdbc:IRIS://k8s-05868f04-a88b7ecb-5c5e41660d-404345a22ba1370c.elb.us-east-1.amazonaws.com:443/USER") \
  .option("jars", "/Volumes/cloudsql/iris/irisvolume/intersystems-document-1.0.1.jar") \
  .option("driver", "com.intersystems.jdbc.IRISDriver") \
  .option("dbtable", dbtablequery) \
  .option("sql", "SELECT * FROM temp_table;") \
  .option("user", "SQLAdmin") \
  .option("password", "REDACTED") \
  .option("connection security level","10") \
  .option("sslConnection","true") \
  .load())
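
To sanity-check the load before plotting, you can peek at the dataframe with standard PySpark calls (an optional check, not part of the original notebook):

df.printSchema()   # expect lat and longitude as string columns
print(df.count())  # row count, capped at 1000 by the TOP clause in the query
df.show(5)         # first few geolocation rows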


Next we grab an available map from geodatasets; the sdoh one is great for generic use across the United States.
 

# sdoh map is fantastic with bounding boxes
michigan = gpd.read_file(geodatasets.get_path("geoda.us_sdoh"))

pdf = df.toPandas()  # pull the query results down to pandas once

gdf = gpd.GeoDataFrame(
    pdf,
    geometry=gpd.points_from_xy(pdf['longitude'].astype(float), pdf['lat'].astype(float)),
    crs=michigan.crs  # "EPSG:4326"
)

Now the cool part: we want to zoom in on the area containing the geolocation points where the R1S has driven, and for this we need a bounding box for the state of Michigan.

For this I used a really slick tool from Keene to draw the geo-fence bounding box, and it gives me the coordinates array!

Now that we have the coordinates array of the bounding box, we need to slap it into a Polygon object.

# bounding box drawn around Michigan, as (longitude, latitude) pairs
polygon = Polygon([
    (-87.286377, 45.9664245),
    (-81.6503906, 45.8134865),
    (-82.3864746, 42.1063737),
    (-84.7814941, 41.3520721),
    (-87.253418, 42.5045029),
    (-87.5610352, 45.8823607),
])

 

Now, let's plot the trail of the Rivian R1S! This will be for about 1,000 records (I used a TOP statement in the query above to limit the results).
 

ax = michigan.clip(polygon).plot(color="lightblue", alpha=0.5,linewidth=0.8, edgecolor='gray')
ax.axis('off')
ax.annotate("Data: Rivian R1S Telemetry Data via InterSystems Document Database", xy=(0.01, .085), xycoords='figure fraction', fontsize=14, color='#555555')

gdf.plot(ax=ax, color="red", markersize=1.50, alpha=0.5, figsize=(200,200))

 

And there we have it... Detroit, Traverse City, Silver Lake Sand Dunes, Holland, Mullet Lake, Interlochen... Pure Michigan, Rivian style.


 
