Process Flows
RECOVERY
These properties control the recovery of a process flow that could not complete because of an unexpected shutdown of the Kernel. By default, every process flow you create is recoverable: implicit checkpoints are added before and after each activity of the process flow.
At each checkpoint, the state of the process flow (data and context variables) is written to a file in the recovery directory, one file per process flow. When the system restarts after a failure, it checks the recovery directory, identifies the recoverable process flows, and resumes each one from its last successfully saved checkpoint. The recovery information remains in the recovery folder until the process flow is recovered and completed, at which point it is deleted.
If the recovery property is set to no, the recovery information is still saved but no recovery is performed; if it is set to yes, failed process flows are recovered on restart.
Property Name | Description | Default Value | Possible Values |
---|---|---|---|
abpm.transaction.recovery.enable | Enable or disable recovery of a process flow after a system failure. | yes | yes or no |
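As a rough illustration, the recovery setting is a single key-value entry. The snippet below is a minimal sketch assuming a Java-style properties file; the file name and location vary by installation and are not specified here.

```properties
# Sketch only: the surrounding properties file is an assumption, not a documented path.
# yes = failed process flows are recovered from the last saved checkpoint on restart
# no  = checkpoint information is still written, but no recovery is performed
abpm.transaction.recovery.enable=yes
```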
REPROCESSING
Property Name | Description | Default Value | Possible Values |
---|---|---|---|
abpm.transaction.reprocessing.enable | Enable or disable recovery of a process flow after it aborts for any reason other than a system failure. | yes | yes or no |
Execution
Property Name | Description | Default Value | Possible Values |
---|---|---|---|
abpm.transaction.activities.executionTime.enable | Displays all executed services along with their execution times in the context file. | yes | yes or no |
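The reprocessing and execution-time properties follow the same yes/no pattern; again a hedged sketch rather than an excerpt of an actual configuration file.

```properties
# Recover process flows that abort for any reason other than a system failure
abpm.transaction.reprocessing.enable=yes

# Record each executed service and its execution time in the context file
abpm.transaction.activities.executionTime.enable=yes
```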
Archival
Property | Description |
---|---|
abpm.logs.archival.enable | Enables or disables the archival of the Process Flow logs and repository files. |
abpm.logs.archival.database | Specifies whether to use another database server for logs archival. |
abpm.logs.sendNotification.onArchivalFailure | Specifies whether to send an email notification if an error occurs during logs archival. |
abpm.archive.fetchsize | Number of records to be fetched at a time from the result set cursor. |
abpm.archive.pagesize | Total number of records to be fetched by the query in a single page. |
abpm.archive.chunksize | Number of records to be processed in each thread. All the threads run in parallel. |
abpm.logs.b2b.retainTime | Defines the retention time for EDI and non-EDI Transactions/Templates logs. |
abpm.logs.db.retryErrorCodes | Error codes for which a retry is performed if an error occurs while executing the query for any of the tables. You can enter multiple error codes separated by commas. |
abpm.logs.db.skipErrorCodes | Error codes for which no retry takes place if an error occurs while executing the query for any of the tables. You can enter multiple error codes separated by commas. |
abpm.logs.retainTime | Defines the retention time for the logs (other than EDI). |
abpm.archive.logRetainTime | Defines the retention time for the archived logs. |
abpm.logs.retryCount | Number of times the application retries running the query for log cleanup and archival if the query execution fails. |
abpm.logs.retryInterval | Time interval (in seconds) after which the query for the transaction log and transaction data tables is re-executed if the query execution fails. |
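Taken together, the archival properties might be tuned as in the sketch below. All values are illustrative assumptions, not documented defaults; the retain-time properties are omitted because their units are not stated in the table above, and abpm.logs.archival.database is covered in the next section.

```properties
abpm.logs.archival.enable=yes
abpm.logs.sendNotification.onArchivalFailure=yes
abpm.archive.fetchsize=1000             # records pulled per read from the result set cursor (illustrative)
abpm.archive.pagesize=10000             # records fetched by the query in a single page (illustrative)
abpm.archive.chunksize=2000             # records handled by each parallel thread (illustrative)
abpm.logs.db.retryErrorCodes=1205,2627  # illustrative SQL Server codes (deadlock, key violation)
abpm.logs.db.skipErrorCodes=
abpm.logs.retryCount=3                  # retries of the cleanup/archival query on failure (illustrative)
abpm.logs.retryInterval=60              # seconds between retries (illustrative)
```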
Using a separate database server for archived logs
If you use Adeptia Connect to process a large number of files every day, it is recommended that you use a separate database server for logs archival. Follow the steps below to create the tables for logs archival on a separate database.
Important
Before you follow the steps to create a separate database for archived logs, ensure that you have set the property abpm.logs.archival.database to 2.
- Create a database (for example, Adeptia_Logs_Archive on SQL Server) on the database server where you want to archive the logs.
- On this database, run the initialize-log-<database server name>.sql script located in the .../AdeptiaServer-x.x/ServerKernel/etc folder. This creates the tables in which the archived logs will be stored (for example, run the initialize-log-sqlserver.sql script for a database created on SQL Server and the initialize-log-oracle.sql script for a database created on an Oracle server).
- Run the create-indexes-<database server name>.sql script located in the .../AdeptiaServer-x.x/ServerKernel/etc folder. This applies the indexes to the tables created in the previous step (for example, run the create-indexes-sqlserver.sql script for a database created on SQL Server and the create-indexes-oracle.sql script for a database created on an Oracle server).
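Once the archive tables and indexes exist, the configuration that this setup relies on comes down to the two entries below (a properties-style sketch; the connection details for the archive database are configured separately and their property names are not listed in this section).

```properties
# Archival must be enabled, and the archive target must be set to the separate database server
abpm.logs.archival.enable=yes
abpm.logs.archival.database=2
```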