Article · Jan 23 · 3 min read

Lean Integration via OS Delegation

Senior engineering is defined not by the volume of code produced, but by the strategic avoidance of it. In complex integration environments, the tendency to utilize general-purpose libraries for every niche requirement introduces unnecessary overhead. True architectural maturity requires a commitment to "minimalist tooling"—prioritizing resilient, battle-tested system utilities over custom logic. This assessment examines our PGP encryption/decryption pipeline to demonstrate how shifting from application-level libraries to OS-native delegation enhances system durability.

Current State: The Heavyweight Implementation

Our current MPHP.HS.PGPUtil class represents a high-friction design. While functional, the existing InterSystems IRIS Business Process is burdened by a significant dependency footprint. By bridging into Embedded Python to utilize the pgpy library, we have introduced a "weighty" stack that necessitates the Python runtime, third-party library management, and specific cryptographic binaries.

The technical core currently forces the application to manage internal cryptographic states, manual file I/O, and memory heap allocation:

Python Implementation:

ClassMethod DecryptPGP(EncryptedFilePath As %String, TargetFilePath As %String, PrivateKeyPath As %String, Passphrase As %String) As %String [ Language = python ] {
    import pgpy
    try:
        # Read the entire encrypted payload into the application's memory
        with open(EncryptedFilePath, "rb") as enc_file:
            encrypted_content = enc_file.read()
        # Load the private key and parse the encrypted message
        pgp_key, _ = pgpy.PGPKey.from_file(PrivateKeyPath)
        encrypted_message = pgpy.PGPMessage.from_blob(encrypted_content)
        # Unlock the key with the passphrase, then decrypt
        with pgp_key.unlock(Passphrase) as unlocked_key:
            decrypted_message = unlocked_key.decrypt(encrypted_message)
        # pgpy may return str or bytes; normalize to bytes before writing
        payload = decrypted_message.message
        if isinstance(payload, str):
            payload = payload.encode()
        with open(TargetFilePath, "wb") as dec_file:
            dec_file.write(payload)
        return 1
    except Exception as e:
        return f"ERROR: {str(e)}"
}

 

Critical Friction Points

* Maintenance Debt: Every third-party library increases the security attack surface, requiring constant auditing and patching.

* Resource Inefficiency: Loading file data into the application’s memory heap for processing is less efficient than stream-based OS utilities.

* Tool Oversizing: We are utilizing a general-purpose language to solve a narrow, standardized cryptographic task.

Proposed State: Native Utility Delegation

To align with modern programming guidelines, we must move from "writing" to "delegating." In this context, the operating system provides a high-performance, battle-tested solution: GnuPG (GPG).

By replacing custom Python methods with direct OS-level calls, we eliminate the external dependency footprint entirely. Native utilities are optimized to stream data directly from the disk, significantly improving memory efficiency and reducing our codebase to a pure orchestrator.

Command Line Implementation:

ClassMethod DecryptPGP(EncryptedFilePath As %String, TargetFilePath As %String, PrivateKeyPath As %String, Passphrase As %String) As %String {
    // Build the command to delegate to the OS-native GPG utility.
    // Note: GPG 2.1+ requires --pinentry-mode loopback to accept a command-line
    // passphrase, and the private key must already be imported into the keyring
    // of the OS user running IRIS (see the import sketch below).
    Set cmd = "gpg --batch --yes --pinentry-mode loopback --passphrase " _ Passphrase _ " --output " _ TargetFilePath _ " --decrypt " _ EncryptedFilePath
    // Execute at the system level; $zf(-1) returns the OS exit code
    Set exitCode = $zf(-1, cmd)
    // ObjectScript has no ?: operator; use $select to return 1 on success
    Return $select(exitCode = 0: 1, 1: "ERROR: GPG exit code " _ exitCode)
}
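
Note that, unlike the Python version, the command-line implementation never reads PrivateKeyPath directly: GPG resolves the private key from its keyring. A one-time import is therefore required on the host. A minimal sketch, assuming the OS user that runs IRIS owns the keyring (the key file path is illustrative):

# One-time setup on the IRIS host: import the private key into the
# GPG keyring of the user that runs the IRIS process
gpg --batch --import /path/to/private.key

# Verify that the secret key is now visible to gpg --decrypt
gpg --list-secret-keys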

 

Architectural Conclusion

Shifting to an OS-delegation model reinforces two vital pillars of our design philosophy:

* Suitability: Purpose-built binaries outperform general-purpose libraries for standardized tasks.

* Orchestration: Our code should serve as a thin management layer for existing utilities rather than a re-invention of established tools.

The result is a leaner, more secure, and highly maintainable enterprise data pipeline, while freeing developer talent for new technical problems.

Article · Jan 23 · 6 min read

Connecting IRIS to Apache Kafka securely with SSL and SASL

1-) Introduction

This article reflects my recent experience connecting an IRIS Business Operation to a secure Kafka server, using SSL tunnels to encrypt the communications and SASL (Simple Authentication and Security Layer) with SCRAM-SHA-512 password hashing.

2-) Background Information

Kafka implementations can be made extremely secure by using encryption and ACLs (Access Control Lists) to control access to topics and other resources in the cluster.

2.1-) SSL Encryption

The first part, encryption, can be easily implemented by configuring SSL on the Kafka Listeners.

The instructions and details can be found at this link:  https://kafka.apache.org/41/security/encryption-and-authentication-using-ssl/

From the link, it is important to note that Kafka can implement client authentication through mutual authentication of the SSL tunnel, and that this is the default behavior of the Kafka Listener.

However, to implement SASL (Simple Authentication and Security Layer) Authentication with SCRAM-SHA-512 (or SCRAM-SHA-256) hashing, the SSL Client Authentication must be disabled on the Listener.

To disable the Client Authentication on the SSL Listener, it is necessary to change the Kafka Server configuration:

ssl.client.auth=requested

ssl.endpoint.identification.algorithm=

The default value for ssl.client.auth is required, which means a client connecting to Kafka must present a digital certificate to authenticate. Although extremely secure, SSL authentication of the clients connecting to Kafka requires the overhead of issuing digital certificates and managing their expiration for each individual client that is going to connect. Most customers prefer not to deal with this overhead and instead configure Kafka to use SASL.

The parameter ssl.endpoint.identification.algorithm must be set to an empty string.

2.2-) SASL Authentication

To enable SASL the required information can be found at this link: https://kafka.apache.org/41/security/authentication-using-sasl/

In my test environment I configured a Listener with the SASL_SSL security protocol on Kafka, using the SCRAM-SHA-512 mechanism for password hashing.

listeners=SASL_SSL://:9093

advertised.listeners=SASL_SSL://<kafka-hostname>:9093

sasl.enabled.mechanisms=SCRAM-SHA-512

In my test environment, with three Kafka Servers in the cluster, this configuration needs to be present on all servers.
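
If the brokers themselves authenticate over this listener (for example, for inter-broker traffic), each broker also needs SCRAM credentials of its own. A hedged sketch of the per-listener JAAS entry in the broker configuration (the username and password are illustrative):

listener.name.sasl_ssl.scram-sha-512.sasl.jaas.config=\
    org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="broker" password="<broker-passwd>";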

2.3-) Authorizer and ACLs

To enable ACLs an Authorizer needs to be configured on each Broker and Controller of the Kafka cluster.

The relevant information can be found at this link: https://kafka.apache.org/41/security/authorization-and-acls/

My tests were executed on a cluster in KRaft mode, so I had to add the following properties to the configuration files for the Kafka Brokers and Controller:

authorizer.class.name=org.apache.kafka.metadata.authorizer.StandardAuthorizer

allow.everyone.if.no.acl.found=true

super.users=User:broker;User:client

The authorizer.class.name points to the default Authorizer that is shipped with Kafka. 

The allow.everyone.if.no.acl.found=true setting is a permissive fallback that allows any authenticated user to access a resource that has no specific ACL attached to it. The default value of this parameter is false, which denies access to resources without a specific ACL.

The super.users=User:broker;User:client entry is a list of Principals (aka Users) that are considered Super Users, and are therefore allowed to access any resource.
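
With the Authorizer in place, ACLs can be granted per Principal using the Kafka CLI. A minimal sketch granting read access on a single topic (the user and topic names are illustrative):

$ kafka-acls.sh --bootstrap-server <Server URI> \
    --command-config <Config File for Connection> \
    --add --allow-principal User:client \
    --operation Read --operation Describe \
    --topic test-topic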

2.4-) Create SCRAM Credentials

It is also important to create the SASL credentials on the Kafka Brokers for each user that is going to authenticate on the Cluster, by using the Kafka CLI:

$ kafka-configs.sh --bootstrap-server <Server URI> \
    --alter --add-config 'SCRAM-SHA-512=[password=<passwd>]' \
    --entity-type users --entity-name <USER> \
    --command-config <Config File for Connection>

It is important to create the credentials with the same SCRAM mechanism that is configured on the Listener. In my tests, I configured the Listener with SCRAM-SHA-512, but it is possible to use SCRAM-SHA-256 instead.
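
For reference, the <Config File for Connection> used in these commands is a standard Kafka client properties file. A minimal sketch, assuming SASL_SSL with SCRAM-SHA-512 (paths and passwords are illustrative):

security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="admin" password="<passwd>";
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=<truststore-passwd>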

You can verify that the SCRAM Credentials are created on the cluster by running this Kafka CLI:

$ kafka-configs.sh --bootstrap-server <Server URI> \
    --describe \
    --entity-type users \
    --command-config <Config File for Connection>

Each User (aka Principal) with associated SASL credentials will be listed in the output, like the one below:

SCRAM credential configs for user-principal '<USER>' are SCRAM-SHA-512=iterations=4096

At this point, we have done all the configuration required on the Kafka side.

3-) Configuring IRIS

The tests were executed using an IRIS version soon to be GA, 2025.1.3 (Build 457U), which includes some enhancements necessary to use advanced Kafka security options like SASL SCRAM-SHA-512 or SCRAM-SHA-256.

To test this configuration, I created a simple Business Operation in a Production in IRIS.

3.1-) Create a Production

Create a new Interoperability Production in your Namespace; if necessary, create a Namespace that is capable of running Interoperability Productions:

Create a new Production, and select the options to enable Testing and Trace Events:

3.2-) Create a Simple Business Operation

Create a new Business Operation using the out-of-the-box class EnsLib.Kafka.Operation. I named mine “Kafka Producer” and selected the “Enable Now” option.

On the Business Operation Settings, make the necessary changes to the Kafka Settings section:

The above example reflects my test environment; make the necessary adjustments to reflect your environment.

  • On the Servers setting, change the broker hostname and port number to point to a Kafka Listener that has the SASL_SSL security protocol enabled;
  • Credentials points to the Interoperability Credential with the user ID and password created on the Kafka broker for the user that has SCRAM credentials (created in step 2.4); see details below;
  • SecurityProtocol has to be set to SASL_SSL;
  • SASLMechanism is set to SCRAM-SHA-512, or whichever mechanism is configured in your environment;
  • TrustStoreLocation and KeyStoreLocation are the files containing the SSL TrustStore and KeyStore that are going to be used to create the SSL tunnel. Those files are created on the Kafka Server and should be copied over in JKS or PKCS12 (p12) format; see the keytool sketch after this list;
  • TrustStoreCredentials, KeyStoreCredentials and KeyCredentials point to Interoperability Credentials that hold the passwords to access the TrustStore, KeyStore and Key files; see below.
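
If you need to convert a store between formats, keytool can do it. A hedged sketch converting a JKS truststore to PKCS12 (paths are illustrative):

$ keytool -importkeystore \
    -srckeystore kafka.truststore.jks -srcstoretype JKS \
    -destkeystore kafka.truststore.p12 -deststoretype PKCS12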

 

3.3-) Create the Credentials

On the Credentials page:

Create two credentials:

The first is the Credential used to authenticate on the Kafka Broker, with the SCRAM credentials created in step 2.4 above.

The second credential holds the password used to access the TrustStore, KeyStore and Key file.

In my case all files have the same password, so just one credential opens all files.

3.4-) Sending a Test Message

Now we can test the configuration settings and send a test message using the Test function of the Business Operation.

If everything is configured properly, you should see the Test Results almost instantly. If the Test Results progress bar takes a long time to progress, this is an indication that a misconfiguration has occurred, and you need to double-check your settings.
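
As an alternative to the portal, the same test can be driven from an IRIS terminal session. A minimal sketch, assuming the out-of-the-box EnsLib.Kafka.Message request class and the Business Operation name used above (the topic is illustrative):

Set msg = ##class(EnsLib.Kafka.Message).%New()
Set msg.Topic = "test-topic"
Set msg.Value = "Hello from IRIS"
Set sc = ##class(EnsLib.Testing.Service).SendTestRequest("Kafka Producer", msg, .resp, .sessionId)
Write $System.Status.GetErrorText(sc)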

Clicking on the Visual Trace shows the results:

 

And if you have a Kafka UI available you can see the message on the Topic, or you can view the messages using the Kafka CLI:

$ kafka-console-consumer.sh \
    --bootstrap-server <Server URI> \
    --topic <TOPIC> --from-beginning \
    --consumer.config <consumer.config.properties>

The above CLI shows all messages on the Topic <TOPIC> and runs until it is interrupted with Ctrl-C.

This concludes this quick test.

Best of luck!

Article · Jan 23 · 2 min read

2024.3 Audit Highlight - Finer-Grained SQL Statement Events

Auditing is a critical capability that helps keep your server secure, and for some time now we have included the ability to audit SQL statements executed on the server.

With v2024.3 now available, we are providing finer-grained options for defining which of these events to audit.

Previously, you could decide to audit SQL statements according to the access mechanism: for example, statements executed via JDBC/ODBC vs. Embedded SQL (e.g., using &sql in your code) vs. Dynamic SQL (e.g., using %SQL.Statement in code, using the SQL Query Execution page in the Management Portal, or from the SQL Shell in the terminal). Now, in addition, you can decide to audit only certain types of statements (per access mechanism, as was possible before). These types are:

  • DDL (Data Definition Language) - 

Statements that change database elements, settings, or other things that are not data. E.g., CREATE / ALTER TABLE

  • DML (Data Manipulation Language) - 

Statements that change data. E.g., INSERT / UPDATE / DELETE

  • Utility - 

Statements that change neither data nor metadata, but instead change process status or machine learning models.

  • Query - 

Statements that execute or define a query that can return data. E.g., SELECT
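
For instance, a Dynamic SQL call like the minimal sketch below (the table name is illustrative) would be classified as a Query-type statement executed through the Dynamic SQL mechanism:

Set stmt = ##class(%SQL.Statement).%New()
Set sc = stmt.%Prepare("SELECT Name, DOB FROM Sample.Person WHERE Age > ?")
Set rs = stmt.%Execute(21)
While rs.%Next() { Write rs.%Get("Name"), ! }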

This is the list of related events you can now find in the Audit Events configuration:

See more information in the related Documentation and the related Release Notes.

Article · Jan 23 · 5 min read

Finding your way on the API Manager roads

 

A question that quickly arises when configuring IAM (aka Kong Gateway) is how many routes should be created to reach all the business objects in an IRIS API.

A common mistake is to create one route per business object, unnecessarily multiplying the number of routes.

Let's take the example of the Supply Chain Orchestrator Data Model API:

/api/SC/scmodel/v1/objects
/api/SC/scmodel/v1/objects/BOM
/api/SC/scmodel/v1/objects/Carrier
/api/SC/scmodel/v1/objects/Customer
/api/SC/scmodel/v1/objects/DemandPlan
/api/SC/scmodel/v1/objects/InventoryThreshold
/api/SC/scmodel/v1/objects/Issue
/api/SC/scmodel/v1/objects/LeadtimeVariant
/api/SC/scmodel/v1/objects/Location
/api/SC/scmodel/v1/objects/MfgOrder
/api/SC/scmodel/v1/objects/Milestone
/api/SC/scmodel/v1/objects/Product
/api/SC/scmodel/v1/objects/ProductInventory
/api/SC/scmodel/v1/objects/ProductSupplier
/api/SC/scmodel/v1/objects/ProductionCapacity
/api/SC/scmodel/v1/objects/PurchaseOrder
/api/SC/scmodel/v1/objects/PurchaseOrderLine
/api/SC/scmodel/v1/objects/RouteLeg
/api/SC/scmodel/v1/objects/SCException
/api/SC/scmodel/v1/objects/SLA
/api/SC/scmodel/v1/objects/SalesOrder
/api/SC/scmodel/v1/objects/SalesOrderLine
/api/SC/scmodel/v1/objects/SalesShipment
/api/SC/scmodel/v1/objects/SalesShipmentLine
/api/SC/scmodel/v1/objects/ServiceSLA
/api/SC/scmodel/v1/objects/ShipmentMilestone
/api/SC/scmodel/v1/objects/ShipmentStop
/api/SC/scmodel/v1/objects/ShipmentTracking
/api/SC/scmodel/v1/objects/ShippingCost
/api/SC/scmodel/v1/objects/Supplier
/api/SC/scmodel/v1/objects/SupplyPlan
/api/SC/scmodel/v1/objects/SupplyShipment
/api/SC/scmodel/v1/objects/SupplyShipmentLine
/api/SC/scmodel/v1/objects/TrackingService

Creating a route in API Manager for each business object would be tedious and is strongly discouraged; it would also not protect you from changes during a version upgrade.

 

Example of a clean cut (recommended)

Route            Usage                        Plugins
/objects$        global list of all objects   Cache, rate limit
/objects/{type}  BOM / Carrier / Customer     Business Monitoring

Kong doesn't natively understand /objects/{type}.

👉 It needs to be translated using a regex:

{type} = [^/]+

We start by creating a service named scmodel in Kong:

### Create a new service in Kong for the SC Model application
curl -i -X POST \
--url http://localhost:8001/services/ \
--data 'name=scmodel' \
--data 'url=http://192.168.65.1:52880'

Then we add the route that retrieves the list of all objects, using $ to anchor the exact path:

### Create a route for the SC Model service Objects
curl -i -X POST \
--url http://localhost:8001/services/scmodel/routes \
--data 'name=scmodel-objects' \
--data 'paths=~/api/SC/scmodel/v1/objects$' \
--data 'strip_path=false'

Then we create a generic route for all object types, using [^/]+$ as the regex:

### Create a route for the SC Model for all object types
curl -i -X POST \
  http://localhost:8001/services/scmodel/routes \
  --data name=scmodel-object-type \
  --data 'paths[]=~/api/SC/scmodel/v1/objects/[^/]+$' \
  --data strip_path=false

If you want to monitor or manage one particular type of object with different plugins, different auth, rate limits, etc., you have to create a specific route for it, using regex_priority:

If multiple regex routes match, Kong chooses the one with the highest regex_priority; otherwise it falls back to an internal (non-deterministic) order.

Example below for the Customer and Product objects:

### Create a route for the SC Model service Customer
curl -i -X POST \
  http://localhost:8001/services/scmodel/routes \
  --data name=scmodel-object-customer \
  --data 'paths[]=~/api/SC/scmodel/v1/objects/Customer$' \
  --data 'regex_priority=10' \
  --data strip_path=false

### Create a route for the SC Model service Product
curl -i -X POST \
  http://localhost:8001/services/scmodel/routes \
  --data name=scmodel-object-product \
  --data 'paths[]=~/api/SC/scmodel/v1/objects/Product$' \
  --data 'regex_priority=10' \
  --data strip_path=false

You can also create a route for a limited list of object types by using a regex alternation (A|B|C).
Example below for all sales routes:

### Create a route for the SC Model Sales routes
curl -i -X POST \
  http://localhost:8001/services/scmodel/routes \
  --data name=scmodel-object-sales \
  --data 'paths[]=~/api/SC/scmodel/v1/objects/(SalesOrder|SalesOrderLine|SalesShipment|SalesShipmentLine)$' \
  --data 'regex_priority=10' \
  --data strip_path=false

 

You can then find the route configuration in the API Manager portal:
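
You can also verify the configuration from the command line through the Kong Admin API (assuming the Admin API is listening on port 8001, as in the calls above):

### List all routes attached to the scmodel service
curl -s http://localhost:8001/services/scmodel/routes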

To test it, just make some calls to different routes:

### Get Object BOM
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/BOM'

### Get Object Carrier
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/Carrier'

### Get Object Customer
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/Customer'

### Get Object DemandPlan
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/DemandPlan'

### Get Object InventoryThreshold
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/InventoryThreshold'

### Get Object Issue
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/Issue'

### Get Object Product
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/Product'

### Get Object SalesOrder
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/SalesOrder'

### Get Object SalesOrderLine
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/SalesOrderLine'

### Get Object SalesShipment
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/SalesShipment'

### Get Object SalesShipmentLine
curl --user _SYSTEM:SYS 'http://localhost:8000/api/SC/scmodel/v1/objects/SalesShipmentLine'

This allows you to manipulate your Customer business object with a finer level of detail than other object types:

Analytics for all sales objects:

Analytics for the other object types:

Job · Jan 23

From Classics to Trending Toys: Your Go-To Online Toy Store in Dubai

Finding the perfect toy for your child can be a delightful yet overwhelming experience. From timeless classics to the latest trending toys, Dubai parents now have more options than ever. Whether you’re searching for a toy shop near me, exploring reputable toy shops in Dubai, or browsing a trusted online toy store Dubai, knowing where and how to shop smart ensures your child enjoys quality, safe, and engaging toys.

Explore Classics and Trendy Toys

Every child loves a mix of the old and the new. Classic toys like building blocks, dolls, and board games foster creativity and cognitive skills, while trending toys like STEM kits, interactive gadgets, and collectible figures encourage learning and curiosity. Browsing the selection of trusted toy shops in Dubai online ensures you have access to both timeless favorites and modern innovations.

Convenience of Online Shopping

Searching for a toy shop near me is helpful, but shopping online offers convenience, variety, and quick delivery. A reputable online toy store Dubai lets parents compare prices, check safety information, and read customer reviews without leaving home. Platforms like ToyzINN combine the ease of online shopping with a wide range of toys for all ages and interests.

Safety and Quality Matter

Dubai parents prioritize safe and durable toys. Look for products that are certified and made from non-toxic materials. Avoid toys with small detachable parts for younger children. Trusted toy shops in Dubai often provide detailed safety information, helping parents make informed choices.

Tips for Choosing the Right Toy

  • Know your child’s age and developmental stage
  • Choose certified and non-toxic toys
  • Balance classics with trending options
  • Check reviews from other parents in Dubai
  • Search toy shop near me for local pick-up options

Benefits of Shopping Smart

Smart shopping means finding toys that are not only fun but also educational and developmentally appropriate. Combining classics and trending options provides children with diverse experiences that foster creativity, problem-solving, and social skills.

Conclusion

From classic favorites to the latest trends, choosing the right toys for your child in Dubai is easier than ever. Whether exploring a trusted toy shop near me or browsing popular toy shops in Dubai, parents can find quality, safe, and engaging toys that children will love. For a reliable selection and excellent service, ToyzINN is your go-to online toy store Dubai, offering both timeless classics and exciting new toys to delight every child.
