Prerequisites

Before configuring NFS storage in Adeptia Connect, ensure you have the following:

  • An NFS file share accessible from the cluster nodes

Steps to Configure NFS Storage

  1. Install NFS Feature on Server

    1. Open Server Manager.

    2. Click Manage on the toolbar.

    3. Select Add Roles and Features.

    4. Click Next until you reach the Select Features page.

    5. Select the Server for NFS option.

    6. Click Add Features.

    7. Click Next and then Install.

    8. Once the installation is complete, close Server Manager.

  2. Create and Share NFS Folder

    1. Create a folder and go to its properties.

    2. Select the NFS Sharing tab.

    3. Click Manage NFS Sharing.

    4. Click the Permissions button.

    5. Select the Read and Write option and click OK.

    6. Apply and save the changes.

  3. Mount NFS Folder on Cluster Node

    • To test folder access, open a shell on the cluster node and run the following command:

      Code Block
      shell

      mount -t nfs -o vers=3 10.0.1.4:/NFSShare_Test /mnt/adeptia

    • Here, 10.0.1.4 is the NFS server, NFSShare_Test is the exported NFS folder, and /mnt/adeptia is the mount point on the cluster node.

    • Note: To unmount, use the command:

      Code Block
      shell

      umount -lf /mnt/adeptia

  4. Backup Deployment

    • Before making changes, take a backup of each deployment listed under Take Backups in the NFS Mounting on Deployment section below.

Configuration Files

  • PVC YAML Reference

Code Block
yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    helm.sh/hook: pre-install
    helm.sh/hook-weight: "-19"
    helm.sh/resource-policy: keep
    volume.beta.kubernetes.io/storage-class: nfs
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    applicationid: adeptia-connect-01
    chart: adeptia-infra-0.1.0
    heritage: Helm
    release: adeptia-connect-acdemo-solution
  name: pvc-claim-adeptia-lan
  namespace: karmak-training
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeMode: Filesystem

  • PV YAML Reference

Code Block
yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  finalizers:
  - kubernetes.io/pv-protection  
spec:
  accessModes:
  - ReadWriteMany
  mountOptions:
    - soft
    - nfsvers=3
  capacity:
    ## enter the size of volume
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: pvc-claim-adeptia-lan
    ## enter the namespace
    namespace: karmak-training    
  ## enter the storage class name  
  storageClassName: nfs
  persistentVolumeReclaimPolicy: Retain
  nfs:
    ## provide the mount path for the volume
    path: /Karmak-Training
    ## provide the server IP/domain
    server: 10.0.3.4
    readOnly: false

Steps to Configure NFS Share in Adeptia Connect

  1. Create PV YAML

    • Create a pv.yaml file based on the PV YAML reference above, updating the following values as per your requirements:

      • PV name

      • Namespace

      • Storage

      • Path

      • Server IP

  2. Create PVC YAML

    • Create a pvc.yaml file based on the PVC YAML reference above, updating the following values as per your requirements:

      • PVC name

      • Namespace

      • Storage

Info

To add multiple mounts, create separate PV and PVC files with unique names. For example, if you name the first set "nfs-pv" and "nfs-pvc," name the next set "nfs-pv-another" and "nfs-pvc-another." Use these names consistently when adding mounts in deployments.

  3. Deploy PV and PVC

    • Deploy the PV/PVC YAML using kubectl:

      Code Block
      shell

      kubectl apply -f pv.yaml -n <namespace>
      kubectl apply -f pvc.yaml -n <namespace>

    • Check the status of PV and PVC:

      • Persistent Volumes (nfs-pv) should be listed.

      • Persistent Volume Claims (pvc-claim-adeptia-lan) should be listed.

NFS Mounting on Deployment

  1. Take Backups

    • Back up the following deployments:

      • adeptia-connect-ac-event

      • adeptia-connect-ac-runtime

      • adeptia-connect-ac-runtime-deployment-manager

      • adeptia-connect-ac-webrunner

      • adeptia-connect-ac-portal

      • adeptia-archival-and-cleanup

      • adeptia-connect-ac-listener

  2. Update Deployment Configurations

    • Open the event deployment (adeptia-connect-ac-event) and look for the volumes property.

    • Add the following entry after the existing claimName line:

      Code Block
      yaml

      - name: pvc-claim-adeptia-lan
        persistentVolumeClaim:
          claimName: pvc-claim-adeptia-lan

    • Maintain the same structure and spacing.

    • Scroll up to the container spec, look for the volumeMounts property, and add the following entry:

      Code Block
      yaml

      - mountPath: /mnt/adeptia
        name: pvc-claim-adeptia-lan

  3. Save Changes

    • Save the changes and wait for the pod to come back up with the new configuration.

    • Verify the files inside the /mnt/adeptia folder by running:

      Code Block
      shell

      cd /mnt/adeptia
      ls -l

  4. Repeat for Other Deployments

    • Repeat the same steps for the other deployments listed in the Take Backups step above.

    • For adeptia-connect-ac-runtime-deployment-manager, in addition to the above changes, set LAN_VOLUME_ENABLED to true.

Final Verification

  1. Verify Folder Access

    • Once all the pods are up and running, log in to the Connect application and create a FileSource/LAN Source.

    • Verify folder access.