Figure 1: Clustering Architecture
Components
Clustering provides high availability, scalability, and manageability of resources and applications by grouping multiple servers that are running Adeptia Suite. Several components make this possible. Important components of the clustering service include the following:
...
- Adeptia Suite does not provide a clustering or failover setup for the databases; however, you can set one up according to the database you use. For load-sharing purposes, it is recommended to configure a master/slave setup or replication (refer to the related database documentation).
Log Database:
Adeptia Suite maintains logs of all the design-time and run-time activities that you run within Adeptia Suite, for example, the process flow log, the event log, and so on. Adeptia Suite writes all these logs into the log database. All the nodes of the cluster must use the same log database. In addition, for load-sharing purposes, it is recommended to configure a master/slave setup or replication (refer to the related database documentation).
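As an illustration of the master/slave replication mentioned above, the following is a minimal sketch that assumes the log database runs on MySQL; the server IDs, database name (adeptia_log), host name, and replication user are placeholders only, and the exact steps depend on your database product and version, so refer to its documentation.

    # --- Source (master) server: my.cnf ---
    [mysqld]
    server-id     = 1
    log_bin       = mysql-bin
    binlog_do_db  = adeptia_log        # placeholder name for the log database

    # --- Replica (slave) server: my.cnf ---
    [mysqld]
    server-id     = 2
    relay_log     = mysql-relay-bin
    read_only     = ON

    # On the replica, point it at the source and start replication (MySQL shell):
    #   CHANGE MASTER TO MASTER_HOST='source-host', MASTER_USER='repl', MASTER_PASSWORD='***';
    #   START SLAVE;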
...
To enable a cluster, it is important that you set up a shared location that has both read and write permissions and that can be accessed by all the nodes of the cluster. Depending on your operating system and needs, you may use any of the file sharing services. Some of the popular options are as follows (sample configuration entries for each are given after this list):
- NFS
It is considered the easiest option and was developed primarily for sharing files and folders between Linux/Unix systems. It allows you to mount your local file systems over a network so that remote hosts can interact with them as if they were mounted locally on the same system. It lets you authenticate either by IP address or via Kerberos tickets. Because it has kernel-mode support, it runs faster than SSHFS, and because no encryption is performed, it gives better throughput, which can make a difference on low-powered hardware. It is also straightforward to set up as long as you trust your network. You have automount support through /etc/fstab, without having to put sensitive data (such as usernames or passwords) in it, and if your user accounts are synchronized (same /etc/passwd and /etc/group files), you can use the usual POSIX permissions toolset (chown, chgrp, and chmod). For details, refer to the How to Setup NFS (Network File System) section.
- Samba / CIFS
It allows both Windows and Unix machines to access the remote folder. It is easy to automount it at init: just add the appropriate values to /etc/fstab, including username=<your-samba-username>,password=<your-samba-password> in the options column.
- SSHFS
Through FUSE, you can mount remote filesystems over SSH. Note that mounting the folder automatically at boot requires a bit more work than with the other options.
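As a sketch of the NFS option, the following shows a possible export on the file server and a matching /etc/fstab entry on each cluster node; the host name (fileserver), paths, and subnet are example values only, so adjust them to your environment.

    # On the file server: /etc/exports
    /srv/adeptia/shared   192.168.1.0/24(rw,sync,no_subtree_check)

    # Re-read the exports after editing the file
    sudo exportfs -ra

    # On each cluster node: /etc/fstab entry to mount the share at boot
    fileserver:/srv/adeptia/shared   /mnt/adeptia/shared   nfs   defaults,_netdev   0   0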
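For the Samba / CIFS option, an /etc/fstab entry along the following lines mounts the share at boot; the server name, share name, and mount point are example values only.

    # On each cluster node: /etc/fstab
    //fileserver/adeptia_shared   /mnt/adeptia/shared   cifs   username=<your-samba-username>,password=<your-samba-password>,_netdev   0   0

    # To keep the password out of /etc/fstab, store it in a root-only credentials file instead:
    # //fileserver/adeptia_shared   /mnt/adeptia/shared   cifs   credentials=/etc/samba/adeptia.cred,_netdev   0   0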
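For the SSHFS option, a manual mount and a possible /etc/fstab entry are sketched below; the user, host, paths, and key file are example values, and automatic mounting at boot assumes key-based SSH authentication for the root user.

    # Manual mount from a cluster node
    sshfs adeptia@fileserver:/srv/adeptia/shared /mnt/adeptia/shared -o allow_other,reconnect

    # /etc/fstab entry for mounting at boot
    adeptia@fileserver:/srv/adeptia/shared   /mnt/adeptia/shared   fuse.sshfs   allow_other,reconnect,_netdev,IdentityFile=/root/.ssh/id_rsa   0   0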
Repository Folder
When a process flow is executed, data from the source is converted to an intermediate form and then dispatched to the target. This intermediate data is stored in a repository folder. This should be a shared folder on the network that can be accessed by all the nodes of the cluster. There should not be any username/password required to connect to this folder.
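Since the repository folder must be reachable by every node without a username/password, one way to meet that requirement is an NFS export mounted at the same path on all nodes (NFS authenticates by host/IP rather than credentials); the paths and subnet below are examples only.

    # /etc/exports on the file server
    /srv/adeptia/repository   192.168.1.0/24(rw,sync,no_root_squash,no_subtree_check)

    # /etc/fstab on every cluster node - mount at the same path everywhere
    fileserver:/srv/adeptia/repository   /srv/adeptia/repository   nfs   defaults,_netdev   0   0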
...