Deploy the application

You can install Adeptia Connect microservices using Helm Charts. A Helm Chart is a collection of files and templates that describe Kubernetes resources governing the deployment of microservices.

Follow the steps below, in order, to install Adeptia Connect.

Ensure that you have met all the prerequisites and system requirements before you begin to install the application.

Creating Service Accounts, ClusterRoles, and ClusterRoleBindings

Before you begin to install the application, you need to create Roles and RoleBindings. If you are using external secrets, you also need to create Service Accounts, ClusterRoles, and ClusterRoleBindings. This section covers the:

  • Permissions required by the user who is deploying the helm chart.
  • Roles and RoleBindings required by this user.
  • External secret CRD YAML.
  • External secret ClusterRole and ClusterRoleBindings.

Adeptia provides you with a roles zip file along with the Adeptia Connect Helm chart. This zip contains the YAML files required to create Roles, RoleBindings, ClusterRoles (to be used when implementing external secrets), etc. The following table helps you understand the purpose of each yaml file. 

For the user who will deploy the helm chart:

role-deployment-user.yaml
  Purpose: Creates the Role with the name "adeptia-user".
  Description: This Role contains the permissions required by the user who will deploy the Adeptia Connect helm chart. In this file, you need to update the namespace in which you want to deploy the Adeptia Connect helm chart.

rolebinding-deployment-user.yaml
  Purpose: Creates the RoleBinding with the name "adeptia-user".
  Description: In this file, you need to update the namespace in which you want to deploy the Adeptia Connect helm chart. You also need to update the name of the user who will deploy the Adeptia Connect helm chart.

Required for the external secret (needed only when you use the external secret to fetch secrets from the external vault):

serviceaccount-adeptia-es.yaml
  Purpose: Creates a Service Account with the name "adeptia-es".
  Description: This Service Account will be used by the external secret. You need to enter the namespace in this file.

clusterrole-adeptia-es.yaml
  Purpose: Creates a ClusterRole with the name "adeptia-connect-es".
  Description: This ClusterRole contains the permissions required by the above Service Account used by the external secret.

clusterrolebinding-adeptia-es.yaml
  Purpose: Creates a ClusterRoleBinding for the Service Account "adeptia-es" with the "adeptia-connect-es" ClusterRole.
  Description: In this file, you need to update the namespace in which you will deploy the Adeptia Connect helm chart.

es-crd.yaml
  Purpose: Creates the external secret component CRDs.
  Description: This is needed only when you use the external secret to fetch secrets from the external vault.

es-clusterrolebinding-deployment-user.yaml
  Purpose: Creates a ClusterRoleBinding to provide the "system:auth-delegator" permission to the user who will deploy the Adeptia Connect helm chart.
  Description: In this file, you need to update the name of the user who will deploy the Adeptia Connect helm chart.

To create Roles and RoleBindings, follow the steps given below.

  1. Download the roles.tgz file from the following link:
    https://adeptia.github.io/adeptia-connect-roles/charts/roles-4.2.0.tgz
  2. Extract this file.
  3. Update the YAML files as explained in the table above.
  4. Run the following command to deploy the yaml files. 

    kubectl apply -f adeptia_roles/

    If you want to use the external secret, run the following command to create the Service Accounts, ClusterRoles, ClusterRoleBindings, and the CRDs for the external secret.

    kubectl apply -f es/
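The two apply commands above can be wrapped in a small dry-run script that prints what would be applied, adding the external-secret manifests only when a flag is set. The USE_EXTERNAL_SECRETS variable is illustrative and not part of the chart; remove the echo wrappers to run the commands against a real cluster.

```shell
# Dry run of step 4: print the kubectl apply commands instead of executing
# them. USE_EXTERNAL_SECRETS is an illustrative flag, not a chart setting.
USE_EXTERNAL_SECRETS=true

echo "kubectl apply -f adeptia_roles/"
if [ "$USE_EXTERNAL_SECRETS" = "true" ]; then
  echo "kubectl apply -f es/"
fi
```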

Enabling OCI support in the Helm 3 client

Helm 3 supports the Open Container Initiative (OCI) format for package distribution. To enable OCI support in the Helm 3 client, set the HELM_EXPERIMENTAL_OCI environment variable by running the following command on the Helm CLI.

export HELM_EXPERIMENTAL_OCI=1
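OCI support graduated from experimental in Helm 3.8.0, so this variable is only needed on older Helm 3 clients; setting it on newer clients is harmless. A quick check that the variable took effect:

```shell
# Enable experimental OCI support (needed only for Helm 3 clients older
# than 3.8.0, where OCI support became enabled by default) and confirm it.
export HELM_EXPERIMENTAL_OCI=1
echo "HELM_EXPERIMENTAL_OCI=$HELM_EXPERIMENTAL_OCI"
```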

Installing Adeptia Connect

Follow the steps below to install Adeptia Connect.

  1. Go to https://artifacthub.io/packages/helm/adeptia-connect/adeptia-connect to view the details of the Adeptia Connect helm package.
  2. Add the Adeptia Connect repo using the following command:

    helm repo add adeptia-connect https://adeptia.github.io/adeptia-connect-helm-package/charts/
  3. Click DEFAULT VALUES to download the values.yaml file.


    This downloads the values.yaml file in which the resource configurations are set for a production environment.

    You can download a sample values.yaml file for a development environment by clicking values-dev.yaml and change the values based on your requirements.
  4. Update the values.yaml file as per the instructions given in this section.

    Important

    Ensure that you have the same tags for all the microservices in the values.yaml file.

  5. Run the following command to deploy the application.

    helm install adeptia-connect adeptia-connect/adeptia-connect --version <Version number> -f <PATH_OF_VALUES.YAML> --timeout 10m

    Important

    Use a specific version number in the version argument; otherwise, the latest version of Adeptia Connect will be installed.

    This command deploys Adeptia Connect on the Kubernetes cluster.

    Once you've completed the deployment, you need to configure your domain-specific SSL certificate by using either of the two options:

    • Use Ingress in front of the Gateway service and configure SSL on Ingress.
    • Configure SSL directly on the Gateway service.

    To know more about configuring SSL, click here.
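As a sketch of step 5, the install command can be wrapped in a small guard that refuses to run without a pinned chart version, matching the Important note above. The function name, version, and values path are illustrative, and the command is printed rather than executed; drop the echo to run it for real.

```shell
# Print the helm install command only when a chart version is pinned;
# otherwise fail, so the latest chart is never installed by accident.
# install_connect and its arguments are illustrative.
install_connect() {
  version="$1"
  values="$2"
  if [ -z "$version" ]; then
    echo "error: pass a chart version, or the latest chart will be installed" >&2
    return 1
  fi
  echo "helm install adeptia-connect adeptia-connect/adeptia-connect --version $version -f $values --timeout 10m"
}

install_connect "4.2.0" "./values.yaml"
```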

Properties in values.yaml

Update the following properties in values.yaml before you run the helm install command to install the application.

Defining the required database properties

Each property below lists its value for the Azure SQL, Oracle, and Azure MySQL databases, followed by a description.

BACKEND_DB_USERNAME
  Value (all databases): <User defined>

Username for your backend database.

If you're using external Secrets, you need not provide a value for this property.

BACKEND_DB_PASSWORD
  Value (all databases): <User defined>

Password for your backend database.

If you're using external Secrets, you need not provide a value for this property.

BACKEND_AUTHTYPE
  Azure SQL: Basic
  Oracle: Basic, KerberosWithKeyTab, or KerberosWithServiceAccount
  Azure MySQL: Basic

Authentication type for your backend database.

BACKEND_KERBEROS_DB_LOGIN_MODULE_NAME
  Azure SQL: Not applicable
  Oracle: <User defined>
  Azure MySQL: Not applicable

Name of the login module.

Set its value to kerberosServer if you want to use Kerberos authentication for the Oracle database.

BACKEND_KERBEROS_CONFIGURATION
  Azure SQL: Not applicable
  Oracle: <User defined>
  Azure MySQL: Not applicable

Kerberos login module parameters.

  • This is applicable only when you're using Kerberos as the authentication type for Oracle database.
  • Kerberos authentication is not supported for AIMap microservice.

Enter the required Kerberos parameters with their respective values in key-value pairs (comma separated) as shown in the example below:

"principal=HTTP/hostname.mydomain.com,useKeyTab=true,keytab=<Keytab_File_Path>,storeKey=true,isInitiator=true,debug=true,realm=MYDOMAIN.COM"

For the details of the Kerberos parameters, for example, principal, useKeytab, storeKey, and others, refer to this page.

Important

To make the Kerberos configuration work, ensure that you:

  • Set the storeKey and isInitiator parameters to true.
  • Add the property allow_weak_crypto under the [realms] and [libdefaults] tags in the krb5.conf file (located at /shared) and set its value to true.

AIMAP_BACKEND_URL
  Azure SQL: mssql+pyodbc://<host>:<port>/<database_name>?driver=ODBC+Driver+17+for+SQL+Server
  Oracle: oracle+cx_oracle://<host>:<port>/<database_name>
  Azure MySQL: mysql+mysqlconnector://<host>:<port>/<database_name>

The backend database URL for AIMAP.

If you need to add additional queries, use & followed by the query at the end of database URL, for example, &authentication=ActiveDirectoryPassword. You can add multiple queries using this approach.

LOG_DB_USERNAME
  Value (all databases): <User defined>

Username for your log database.

If you're using external Secrets, you need not provide a value for this property.

LOG_DB_PASSWORD
  Value (all databases): <User defined>

Password for your log database.

If you're using external Secrets, you need not provide a value for this property.

LOG_DB_AUTHTYPE
  Azure SQL: Basic
  Oracle: Basic, KerberosWithKeyTab, or KerberosWithServiceAccount
  Azure MySQL: Basic

Authentication type for your log database.

LOG_DB_KERBEROS_DB_LOGIN_MODULE_NAME
  Azure SQL: Not applicable
  Oracle: <User defined>
  Azure MySQL: Not applicable

Name of the login module.

Set its value to kerberosServer if you want to use Kerberos authentication for Oracle database.

LOG_DB_KERBEROS_CONFIGURATION
  Azure SQL: Not applicable
  Oracle: <User defined>
  Azure MySQL: Not applicable

Kerberos login module parameters.

  • This is applicable only when you're using Kerberos as the authentication type for Oracle database.
  • Kerberos authentication is not supported for AIMap microservice.

Enter the required Kerberos parameters with their respective values in key-value pairs (comma separated) as shown in the example below:

"principal=HTTP/hostname.mydomain.com,useKeyTab=true,keytab=<Keytab_File_Path>,storeKey=true,isInitiator=true,debug=true,realm=MYDOMAIN.COM"

For the details of the Kerberos parameters, for example, principal, useKeytab, storeKey, and others, refer to this page.

Important

To make the Kerberos configuration work, ensure that you:

  • Set the storeKey and isInitiator parameters to true.
  • Add the property allow_weak_crypto under the [realms] and [libdefaults] tags in the krb5.conf file (located at /shared) and set its value to true.

LOG_ARCHIVE_DB_PASSWORD
  Value (all databases): <User defined>

Password for your log archive database.

If you're using external Secrets, you need not provide a value for this property.

LOG_ARCHIVE_DB_USERNAME
  Value (all databases): <User defined>

Username for your log archive database.

If you're using external Secrets, you need not provide a value for this property.

LOG_ARCHIVE_DB_AUTHTYPE
  Azure SQL: Basic
  Oracle: Basic, KerberosWithKeyTab, or KerberosWithServiceAccount
  Azure MySQL: Basic

Authentication type for your log archive database.

LOG_ARCHIVE_DB_KERBEROS_DB_LOGIN_MODULE_NAME
  Azure SQL: Not applicable
  Oracle: <User defined>
  Azure MySQL: Not applicable

Name of the login module.

Set its value to kerberosServer if you want to use Kerberos authentication for Oracle database.

LOG_ARCHIVE_DB_KERBEROS_CONFIGURATION
  Azure SQL: Not applicable
  Oracle: <User defined>
  Azure MySQL: Not applicable

Kerberos login module parameters.

  • This is applicable only when you're using Kerberos as the authentication type for Oracle database.
  • Kerberos authentication is not supported for AIMap microservice.

Enter the required Kerberos parameters with their respective values in key-value pairs (comma separated) as shown in the example below:

"principal=HTTP/hostname.mydomain.com,useKeyTab=true,keytab=<Keytab_File_Path>,storeKey=true,isInitiator=true,debug=true,realm=MYDOMAIN.COM"

For the details of the Kerberos parameters, for example, principal, useKeytab, storeKey, and others, refer to this page.

Important

To make the Kerberos configuration work, ensure that you:

  • Set the storeKey and isInitiator parameters to true.
  • Add the property allow_weak_crypto under the [realms] and [libdefaults] tags in the krb5.conf file (located at /shared) and set its value to true.

QUARTZ_DB_USERNAME
  Value (all databases): <User defined>

Username for your quartz database. This value will be the same as that for the backend database.

If you're using external Secrets, you need not provide a value for this property.

QUARTZ_DB_PASSWORD
  Value (all databases): <User defined>

Password for your quartz database. This value will be the same as that for the backend database.

If you're using external Secrets, you need not provide a value for this property.

QUARTZ_DB_AUTHTYPE
  Azure SQL: Basic
  Oracle: Basic, KerberosWithKeyTab, or KerberosWithServiceAccount
  Azure MySQL: Basic

Authentication type for your quartz database.

QUARTZ_DB_KERBEROS_DB_LOGIN_MODULE_NAME
  Azure SQL: Not applicable
  Oracle: <User defined>
  Azure MySQL: Not applicable

Name of the login module.

Set its value to kerberosServer if you want to use Kerberos authentication for Oracle database.

QUARTZ_DB_KERBEROS_CONFIGURATION
  Azure SQL: Not applicable
  Oracle: <User defined>
  Azure MySQL: Not applicable

Kerberos login module parameters.

  • This is applicable only when you're using Kerberos as the authentication type for Oracle database.
  • Kerberos authentication is not supported for AIMap microservice.

Enter the required Kerberos parameters with their respective values in key-value pairs (comma separated) as shown in the example below:

"principal=HTTP/hostname.mydomain.com,useKeyTab=true,keytab=<Keytab_File_Path>,storeKey=true,isInitiator=true,debug=true,realm=MYDOMAIN.COM"

For the details of the Kerberos parameters, for example, principal, useKeytab, storeKey, and others, refer to this page.

Important

To make the Kerberos configuration work, ensure that you:

  • Set the storeKey and isInitiator parameters to true.
  • Add the property allow_weak_crypto under the [realms] and [libdefaults] tags in the krb5.conf file (located at /shared) and set its value to true.

BACKEND_DB_URL
  Azure SQL: jdbc:sqlserver://<DB Hostname>:<Port Number>;database=<Backend Database Name>
  Oracle: jdbc:oracle:thin:@<hostName>:<portNumber>:<SID/ServiceName>
  Azure MySQL: jdbc:mysql://<hostName>:<portNumber>/<DBName>?useSSL=true

Backend database name and its URL. Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified.

BACKEND_DB_DRIVER_CLASS
  Azure SQL: com.microsoft.sqlserver.jdbc.SQLServerDriver
  Oracle: oracle.jdbc.OracleDriver
  Azure MySQL: com.mysql.cj.jdbc.Driver

Driver class name based on the backend database. Do not change the value for this pre-defined property.

BACKEND_DB_TYPE
  Azure SQL: SQL-Server
  Oracle: Oracle
  Azure MySQL: MySQL

Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified. Do not change the value for this pre-defined property.

BACKEND_DB_DIALECT
  Azure SQL: org.hibernate.dialect.SQLServer2008Dialect
  Oracle: org.hibernate.dialect.Oracle12cDialect
  Azure MySQL: org.hibernate.dialect.MySQLDialect

Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified. Do not change the value for this pre-defined property.

BACKEND_DB_TRANSACTION_ISOLATION
  Azure SQL: 1
  Oracle: 8
  Azure MySQL: 1

Transaction isolation level in the database.

BACKEND_DB_VALIDATION_QUERY
  Azure SQL: SELECT 1
  Oracle: SELECT 1 from dual
  Azure MySQL: SELECT 1

Query that can be used by the pool to validate connections before they are returned to the application.
LOG_DB_DRIVER_CLASS
  Azure SQL: com.microsoft.sqlserver.jdbc.SQLServerDriver
  Oracle: oracle.jdbc.OracleDriver
  Azure MySQL: com.mysql.cj.jdbc.Driver

Driver class name based on the log database. Do not change the value for this pre-defined property.

LOG_DB_URL
  Azure SQL: jdbc:sqlserver://<DB Hostname>:<Port Number>;database=<Log Database Name>
  Oracle: jdbc:oracle:thin:@<hostName>:<portNumber>:<SID/ServiceName>
  Azure MySQL: jdbc:mysql://<hostName>:<portNumber>/<DBName>?useSSL=true

Log database name and its URL. Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified.

LOG_DB_TYPE
  Azure SQL: SQL-Server
  Oracle: Oracle
  Azure MySQL: MySQL

Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified. Do not change the value for this pre-defined property.

LOG_DB_DIALECT
  Azure SQL: org.hibernate.dialect.SQLServer2008Dialect
  Oracle: org.hibernate.dialect.Oracle12cDialect
  Azure MySQL: org.hibernate.dialect.MySQLDialect

Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified. Do not change the value for this pre-defined property.

LOG_DB_TRANSACTION_ISOLATION
  Azure SQL: 1
  Oracle: 8
  Azure MySQL: 1

Transaction isolation level in the database.

LOG_DB_VALIDATION_QUERY
  Azure SQL: SELECT 1
  Oracle: SELECT 1 from dual
  Azure MySQL: SELECT 1

Query that can be used by the pool to validate connections before they are returned to the application.
LOG_ARCHIVE_DB_URL
  Azure SQL: jdbc:sqlserver://<DB Hostname>:<Port Number>;database=<Log Archive Database Name>
  Oracle: jdbc:oracle:thin:@<hostName>:<portNumber>:<SID/ServiceName>
  Azure MySQL: jdbc:mysql://<hostName>:<portNumber>/<DBName>?useSSL=true

Log archive database name and its URL. Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified.

LOG_ARCHIVE_DB_DRIVER_CLASS
  Azure SQL: com.microsoft.sqlserver.jdbc.SQLServerDriver
  Oracle: oracle.jdbc.OracleDriver
  Azure MySQL: com.mysql.cj.jdbc.Driver

Driver class name based on the log archive database. Do not change the value for this pre-defined property.

LOG_ARCHIVE_DB_TYPE
  Azure SQL: SQL-Server
  Oracle: Oracle
  Azure MySQL: MySQL

Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified. Do not change the value for this pre-defined property.

LOG_ARCHIVE_DB_DIALECT
  Azure SQL: org.hibernate.dialect.SQLServer2008Dialect
  Oracle: org.hibernate.dialect.Oracle12cDialect
  Azure MySQL: org.hibernate.dialect.MySQLDialect

Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified. Do not change the value for this pre-defined property.

LOG_ARCHIVE_DB_TRANSACTION_ISOLATION
  Azure SQL: 1
  Oracle: 8
  Azure MySQL: 1

Transaction isolation level in the database.

LOG_ARCHIVE_DB_VALIDATION_QUERY
  Azure SQL: SELECT 1
  Oracle: SELECT 1 from dual
  Azure MySQL: SELECT 1

Query that can be used by the pool to validate connections before they are returned to the application.
LOG_ARCHIVE_THREAD_COREPOOLSIZE
  Value (all databases): <User defined>

Number of worker threads used for performing log archival and cleanup.

Each thread consumes one connection.
LOG_ARCHIVE_THREAD_MAXPOOLSIZE
  Value (all databases): <User defined>

Maximum number of worker threads that can be used to perform log archival and cleanup.

Ensure that this number is less than or equal to the number of connections in the connection pool.
LOG_ARCHIVE_THREAD_QUEUECAPACITY
  Value (all databases): <User defined>

The maximum number of log archival and cleanup tasks that can remain in a queued state after all the defined threads are exhausted.

LOG_ARCHIVE_DB_SEPARATE
  Value (all databases): false (default) or true

Property to define whether you want to use the log database for log cleanup and archival, or want to use a separate one (Log Archival database).

Setting the value for this variable to true mandates the use of a separate database for log cleanup and archival.

You need to define this variable in the environmentVariables as well as the webrunner sections in the values.yaml file.

QUARTZ_DB_URL
  Azure SQL: jdbc:sqlserver://<DB Hostname>:<Port Number>;database=<Backend Database Name>
  Oracle: jdbc:oracle:thin:@<hostName>:<portNumber>:<SID/ServiceName>
  Azure MySQL: jdbc:mysql://<hostName>:<portNumber>/<DBName>?useSSL=true

Quartz database name and its URL. Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified. This value will be the same as that for the backend database.

QUARTZ_DB_DRIVER_CLASS
  Azure SQL: com.microsoft.sqlserver.jdbc.SQLServerDriver
  Oracle: oracle.jdbc.OracleDriver
  Azure MySQL: com.mysql.cj.jdbc.Driver

Driver class name based on the quartz database. This value will be the same as that for the backend database.

QUARTZ_DB_TYPE
  Azure SQL: SQL-Server
  Oracle: Oracle
  Azure MySQL: MySQL

Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified. Do not change the value for this pre-defined property.

QUARTZ_DB_DIALECT
  Azure SQL: org.hibernate.dialect.SQLServer2008Dialect
  Oracle: org.hibernate.dialect.Oracle12cDialect
  Azure MySQL: org.hibernate.dialect.MySQLDialect

Currently, the MS Azure SQL, Oracle, and Azure MySQL databases are certified. Do not change the value for this pre-defined property. This value will be the same as that for the backend database.

QUARTZ_DB_TRANSACTION_ISOLATION
  Azure SQL: 1
  Oracle: 8
  Azure MySQL: 1

Transaction isolation level in the database.

QUARTZ_DB_VALIDATION_QUERY
  Azure SQL: SELECT 1
  Oracle: SELECT 1 from dual
  Azure MySQL: SELECT 1

Query that can be used by the pool to validate connections before they are returned to the application.
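Putting the backend rows together, a values.yaml fragment for Azure SQL might look like the sketch below. The environmentVariables section name follows the LOG_ARCHIVE_DB_SEPARATE description above, and the host, port, database name, and credentials are placeholders; verify key placement against the values.yaml shipped with your chart version.

```yaml
environmentVariables:
  BACKEND_DB_USERNAME: "connectadmin"              # placeholder
  BACKEND_DB_PASSWORD: "<your-password>"           # skip when using external Secrets
  BACKEND_AUTHTYPE: "Basic"
  BACKEND_DB_URL: "jdbc:sqlserver://db.example.com:1433;database=ConnectBackend"
  BACKEND_DB_DRIVER_CLASS: "com.microsoft.sqlserver.jdbc.SQLServerDriver"
  BACKEND_DB_TYPE: "SQL-Server"
  BACKEND_DB_DIALECT: "org.hibernate.dialect.SQLServer2008Dialect"
  BACKEND_DB_TRANSACTION_ISOLATION: "1"
  BACKEND_DB_VALIDATION_QUERY: "SELECT 1"
```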

Defining the properties for integration with an external Vault

You may want to store sensitive information such as imagePullSecrets, database credentials, and the credentials for the activities that you create and use in Adeptia Connect in an external Vault for added security.

Adeptia Connect enables you to fetch the imagePullSecrets and the database credentials from an external Vault. The following table contains the properties to be set if you're using an external Vault for managing imagePullSecrets and database Secrets. Refer to this page to know more about configuring these Secrets. 

config.image.pullSecret.enabled
  Value: true (default)

If set to false, external Secrets will be used.

infra.secret.enabled
  Value: false (default)

Set this value to true to work with external Secrets.

infra.secret.vaultMountPoint
  Authentication method that you have created in the external tool.

infra.secret.vaultRole
  Role that you've created in the tool.

infra.secret.dbDataFrom
  Path of the database Secret created in the tool.

infra.secret.imageDataFrom
  Path of the image Secret created in the tool.

infra.secret.env.VAULT_ADDR
  URL of the external tool, such as Vault.
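Read together, the properties above correspond to a values.yaml fragment along these lines; the nesting mirrors the dotted property names, and every value shown (mount point, role, paths, URL) is a placeholder for what you created in your vault tool.

```yaml
config:
  image:
    pullSecret:
      enabled: false                            # switch off to use external Secrets

infra:
  secret:
    enabled: true                               # work with external Secrets
    vaultMountPoint: "kubernetes"               # placeholder: your auth method
    vaultRole: "adeptia-role"                   # placeholder: your role
    dbDataFrom: "secret/data/adeptia/db"        # placeholder: database Secret path
    imageDataFrom: "secret/data/adeptia/image"  # placeholder: image Secret path
    env:
      VAULT_ADDR: "https://vault.example.com:8200"  # placeholder: Vault URL
```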

Adeptia Connect also allows you to fetch the credentials associated with an activity, such as an FTP Source, from HashiCorp Vault at runtime. To know which properties you need to set in the values.yaml to integrate with HashiCorp Vault, refer to this page.

Setting the property EXECUTE_STATIC_JOB

To ensure that the application starts running successfully after its deployment, you need to set the property EXECUTE_STATIC_JOB in the static section of the global values.yaml file. 

EXECUTE_STATIC_JOB
  Set the value for this property to true to ensure that the files required for running the application are copied to the PVC while deploying the application.

Enabling or disabling PDB creation

You can enable or disable the creation of a PodDisruptionBudget (PDB) for the rabbitmq, listener, aimap, and license microservices by going to the respective microservice section in the global values.yaml file and setting the variable CREATE_PDB to true or false. The default value for this variable is false for the rabbitmq and listener microservices, and true for the aimap and license microservices.
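For example, to turn PDB creation on for rabbitmq, the setting sits under the microservice's own section. The exact surrounding structure is an assumption, so match it to your values.yaml:

```yaml
rabbitmq:
  CREATE_PDB: true    # default is false for rabbitmq and listener
listener:
  CREATE_PDB: false   # explicit default, shown for completeness
```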

Removing the resources if installation fails

If the installation fails for any reason, you need to take the following steps before you reinstall the application.

  1. Check if there is an entry of the release (Name given to the release while installing) in the namespace. Use the following command to check the entry:

    helm list -n <Namespace>
  2. Remove the entry of the release. Use the following command to remove the release.

    helm delete <Name of the release> -n <Namespace>
  3. Remove the resources that were deployed during the failed installation. 
    Ensure that you have removed the following resources before you begin to install the application once again.
    1. Jobs (For example, Migration)
    2. Deployment of the microservices
    3. Secrets
    4. PVC
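Step 3 can be sketched as a dry run that prints one delete command per leftover resource type. The namespace and the instance label selector are assumptions, so adjust them to how your resources are actually labeled before removing the echo.

```shell
# Print (not run) a delete command for each resource type left over from a
# failed install. NS and the label selector are illustrative assumptions.
NS="adeptia"
for kind in job deployment secret pvc; do
  echo "kubectl delete $kind -n $NS -l app.kubernetes.io/instance=adeptia-connect"
done
```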

Uninstalling Adeptia Connect

If you wish to uninstall the application, run the following command.

helm uninstall adeptia-connect

Where,

adeptia-connect is the name of the deployment (release) given during installation.

When you uninstall the application, some resources need to be removed from the system manually. Delete the resources on the following list manually; their removal ensures a successful installation of the application in the future.

  • Service Account 
  • Secrets
  • PVC

If you've configured external Secrets, you need to manually delete the Secrets and the external secret deployment after you uninstall the application.
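The post-uninstall cleanup can likewise be printed as a checklist before running anything; the namespace and label selector are illustrative assumptions.

```shell
# Dry-run checklist: the uninstall command followed by the manual deletes
# for the resources listed above (Service Account, Secrets, PVC).
NS="adeptia"
echo "helm uninstall adeptia-connect -n $NS"
for r in serviceaccount secret pvc; do
  echo "kubectl delete $r -n $NS -l app.kubernetes.io/instance=adeptia-connect"
done
```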