You need to configure the following to handle a large number of transactions.

  • Configure larger Kernel memory
  • Use high-speed multi-core processors
  • DB lookups - use the same connection rather than a new connection per record (see the connection-reuse sketch after this list)
  • DB lookups - implement database caching inside the Web Mapper
  • XML lookups - pre-fetch database records into XML and perform XML lookups instead of DB lookups for better performance
  • Layout - skip non-required fields in the source layout
  • Handling a large number of files in a file event - abpm.event.concurrency
  • Handling mail source activity errors - abpm.mailEvent.retry
  • Maintenance - enable frequent cleanup of logs and repository
  • Mapping - use splitting and parallel processing. You can enable splitting in the Web Mapper in the following scenarios:
      • Multiple concurrent jobs may be running, or the mapping rules are complex or use external DB lookups: enable splitting when the file size is more than 1% of the Kernel heap size. For example, if the Kernel heap size is 12GB, use splitting if the file size is more than 120MB.
      • Multiple concurrent jobs may be running, the mapping rules are simple, and there are no external DB lookups: enable splitting when the file size is more than 2% of the Kernel heap size. For example, if the Kernel heap size is 12GB, use splitting if the file size is more than 240MB.
      • Note that it may not be possible to enable splitting in the data mapping if the mapping rules use aggregate functions that require all records to be processed at the same time. In this case, use the solution design described below.
    For information on how to apply splitting, click here.
    Solution design - split the data and process it in parallel within the process (see the chunked-processing sketch after this list). If the file size is more than 8% of the Kernel heap size, carefully consider the approach used to process the file. For example, if the Kernel heap size is 12GB, use one of the design approaches below if the file size is more than 1GB.
     
  • Design the process flow to process data in smaller chunks from the beginning, for example by using more frequent triggers to get smaller chunks of source data, or by adding a pre-processing step that splits the source file before it reaches the layout and mapping stage.
  • Another approach is to avoid the Web Mapper and instead use a custom plugin or a database bulk loader to load the file into a staging database, then apply the processing rules in DB queries and stored procedures for more efficient handling of a very large volume of data.
  • Configure database connection pool (see the pool-sizing sketch after this list)
  • Configure Queue Processor
  • Configure clustering
  • Reduce the number of discrete steps in flow by using custom plugins
  • Enable JDO caching
  • Set correct logging level
  • Optimize TCP settings if you are using Windows OS
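
The DB-lookup items above suggest reusing one connection and caching results rather than opening a new connection per record. The following plain JDBC sketch only illustrates the principle; the JDBC URL, credentials, and the country_codes table are hypothetical, and in Adeptia Connect the equivalent behavior is configured through the Web Mapper's lookup and caching settings rather than hand-written code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LookupExample {

    public static void main(String[] args) throws SQLException {
        List<String> records = List.of("US", "DE", "US", "IN"); // stand-in for source records

        // One connection and one prepared statement reused for every record,
        // instead of opening a new connection per record.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/lookupdb", "user", "password"); // hypothetical URL/credentials
             PreparedStatement stmt = conn.prepareStatement(
                "SELECT name FROM country_codes WHERE code = ?")) {               // hypothetical lookup table

            // Simple in-memory cache so repeated keys never hit the database twice.
            Map<String, String> cache = new HashMap<>();

            for (String code : records) {
                String name = cache.get(code);
                if (name == null) {
                    stmt.setString(1, code);
                    try (ResultSet rs = stmt.executeQuery()) {
                        name = rs.next() ? rs.getString(1) : "UNKNOWN";
                    }
                    cache.put(code, name);
                }
                System.out.println(code + " -> " + name);
            }
        }
    }
}

The same idea underlies the XML-lookup recommendation: fetch the reference data once up front and resolve each record against the in-memory copy.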
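The splitting thresholds above (1%, 2%, and 8% of the Kernel heap size) amount to a size check before choosing a processing strategy. The sketch below is a generic Java illustration, not Adeptia's splitting feature: it compares the file size with the JVM's maximum heap and, when chunking is warranted, reads the file in fixed-size line chunks and hands each chunk to a small thread pool. The file name and chunk size are assumptions.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

public class SplitAndProcess {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path source = Path.of("large-input.csv");         // hypothetical source file
        long fileSize = Files.size(source);
        long maxHeap = Runtime.getRuntime().maxMemory();   // stands in for the Kernel heap size

        // Thresholds from the guidance above: 1% (complex rules / DB lookups),
        // 2% (simple rules, no DB lookups), 8% (reconsider the overall design).
        if (fileSize > maxHeap * 8 / 100) {
            System.out.println("File > 8% of heap: redesign the flow (pre-split, staging DB, etc.)");
        } else if (fileSize > maxHeap / 100) {
            System.out.println("File > 1% of heap: process in chunks");
            processInChunks(source, 10_000, Runtime.getRuntime().availableProcessors());
        } else {
            System.out.println("Small file: process in one pass");
        }
    }

    // Read the file in chunks of chunkSize lines and hand each chunk to a worker thread.
    static void processInChunks(Path source, int chunkSize, int threads)
            throws IOException, InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try (Stream<String> lines = Files.lines(source)) {
            List<String> chunk = new ArrayList<>(chunkSize);
            for (String line : (Iterable<String>) lines::iterator) {
                chunk.add(line);
                if (chunk.size() == chunkSize) {
                    List<String> batch = List.copyOf(chunk);
                    pool.submit(() -> mapRecords(batch));
                    chunk.clear();
                }
            }
            if (!chunk.isEmpty()) {
                List<String> batch = List.copyOf(chunk);
                pool.submit(() -> mapRecords(batch));
            }
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    // Placeholder for the per-record mapping work.
    static void mapRecords(List<String> batch) {
        // apply mapping rules to each record in the batch
    }
}

Note that, as stated above, this chunked approach only works when the mapping rules do not need all records at once (for example, no whole-file aggregate functions).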
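For the database connection pool item, the sketch below uses HikariCP purely to illustrate pool sizing; Adeptia Connect's pool is configured through its database connection settings rather than code, so the pool size and timeout values here are example assumptions, not recommendations.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class PooledLookup {

    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/lookupdb"); // hypothetical URL
        config.setUsername("user");
        config.setPassword("password");
        config.setMaximumPoolSize(20);       // size the pool for the expected number of concurrent jobs
        config.setMinimumIdle(5);            // keep a few warm connections ready for bursts
        config.setConnectionTimeout(30_000); // fail fast instead of queueing forever (ms)

        try (HikariDataSource ds = new HikariDataSource(config)) {
            // Each job borrows a connection from the pool and returns it on close,
            // so concurrent transactions avoid the cost of opening new connections.
            try (Connection conn = ds.getConnection();
                 PreparedStatement stmt = conn.prepareStatement("SELECT 1");
                 ResultSet rs = stmt.executeQuery()) {
                rs.next();
            }
        }
    }
}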

Tip
For information on how to deploy clustering, click here.
For information on how to enable JDO caching, click here.
For information on how to set correct logging level, click here.

If Adeptia Connect is installed on a Windows operating system, optimize the TCP settings to avoid port exhaustion. For more information, click here.