Figure 1: Clustering Architecture
 

Components

Clustering provides high availability, scalability, and manageability of resources and applications by grouping multiple servers that run Adeptia Suite. Several components work together to make this possible. The important components of the clustering service include the following:

...

WebRunner handles all user requests, such as creating, editing, and deleting activities through the GUI. When you enable clustering in Adeptia Suite, it is enabled only for the Kernel, not for WebRunner. If you also want to load balance GUI requests, you can use an external load balancer in front of the WebRunners (see Figure 1). Make sure that WebRunner runs on all the nodes. Secure Bridge and Secure Engine users must always use an external load balancer.
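For illustration, the following is a minimal sketch of such an external load balancer using nginx; the node host names and the WebRunner port (8080) are assumptions and must be replaced with the values used in your environment:

    # /etc/nginx/conf.d/webrunner.conf -- hypothetical example
    upstream webrunner_nodes {
        ip_hash;                          # keep a user's GUI session on the same node
        server node1.example.com:8080;    # assumed WebRunner host names and port
        server node2.example.com:8080;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://webrunner_nodes;    # forward GUI requests to the WebRunner pool
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

The ip_hash directive keeps each user's requests on the same node, which avoids losing the GUI session when requests alternate between nodes.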

Shared Location for all the Nodes

...

Databases

Back-end database

A back-end database is used to store the objects. All the objects created through the GUI (for example, process flows, activities, and users) are stored in the back-end database. By default, Adeptia Suite uses HSQLDB (an embedded database) as the back-end database.
While setting up a clustered environment:

...

    • Adeptia Suite does not provide a clustering or failover setup for the databases; however, you can set that up according to the database you use. For load sharing, it is recommended to configure master/slave or replication (refer to the related database documentation).

Log Database

Adeptia Suite maintains logs of all the design-time and run-time activities that you run within Adeptia Suite, for example, the process flow log and the event log. Adeptia Suite writes all these logs into the log database. All the nodes of the cluster should use the same log database. In addition, for load sharing, it is recommended to configure master/slave or replication (refer to the related database documentation).

Shared Location

To enable clustering, you must set up a shared location that has both read and write permissions and that can be accessed by all the nodes of the cluster. Depending on your operating system and needs, you can use any file sharing service. Some of the popular options are as follows (example /etc/fstab entries for each option are shown after this list):

  • NFS

    NFS is generally the easiest option and was originally developed for sharing files and folders between Linux/Unix systems. It allows you to mount remote file systems over a network so that remote hosts can interact with them as if they were mounted locally. Authentication can be done by IP address or with Kerberos tickets. Because NFS has kernel-mode support, it runs faster than SSHFS, and because no encryption is performed, it offers better throughput, which can make a difference on low-powered hardware such as a small Raspberry Pi ARM board. Setup is straightforward as long as you trust your network, and automounting is supported through /etc/fstab without putting sensitive data (such as usernames or passwords) in the file. If your user accounts are synchronized (the same /etc/passwd and /etc/group files on all nodes), you can use the usual POSIX permissions toolset (chown, chgrp, and chmod). For details, refer to the How to Setup NFS (Network File System) section.

  • Samba / CIFS

    Samba/CIFS allows both Windows and Unix machines to access the remote folder. It is easy to automount it at boot: add the appropriate entry to /etc/fstab, including username=<your-samba-username>,password=<your-samba-password> in the options column.

  • SSHFS

    Through FUSE, you can mount remote file systems over SSH. Note that mounting the file system automatically at boot requires some additional setup.
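As an illustration, the following /etc/fstab entries show how a shared location could be mounted on each node for the three options above; the server name (fileserver), export path, mount point, and credentials are assumptions and must be replaced with the values used in your environment:

    # NFS share exported by an assumed file server
    fileserver:/export/adeptia_shared  /mnt/adeptia_shared  nfs  defaults,_netdev  0 0

    # Samba / CIFS share (username and password go in the options column)
    //fileserver/adeptia_shared  /mnt/adeptia_shared  cifs  username=<your-samba-username>,password=<your-samba-password>,_netdev  0 0

    # SSHFS via FUSE (assumed SSH user and key; automounting needs this extra configuration)
    user@fileserver:/export/adeptia_shared  /mnt/adeptia_shared  fuse.sshfs  defaults,_netdev,allow_other,IdentityFile=/root/.ssh/id_rsa  0 0

After adding the entries, run mount -a on each node and confirm that the mount point is readable and writable from every node.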

Repository Folder

When a process flow is executed, data from the source is converted to an intermediate form and then dispatched to the target. The intermediate data is stored in a repository folder. This should be a shared folder on the network that can be accessed by all the nodes of the cluster. No username/password should be required to connect to this folder.
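A quick way to confirm that all nodes can use the repository folder is to write a file from one node and read it from another; the path below is an assumed example mount point:

    # On node 1: write a test file to the shared repository folder (assumed path)
    echo "cluster test" > /mnt/adeptia_shared/repository/cluster-test.txt

    # On node 2: confirm the same file is visible and readable without any credentials
    cat /mnt/adeptia_shared/repository/cluster-test.txt

    # Clean up after the check
    rm /mnt/adeptia_shared/repository/cluster-test.txt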
 

Recovery Folder

During the execution of a process flow, its current state is stored in a recovery file. These recovery files are stored in a recovery folder. Whenever a process flow aborts because of a Kernel shutdown, the Recovery feature handles it automatically with the help of the recovery files. These files remain in the recovery folder until the process flow execution is completed. This folder should be shared among all the nodes of the cluster.

...