
CPU and memory autoscaling is handled natively by Kubernetes; scaling of the runtime microservice on queue depth is configured as described below.


Runtime HPA:

The runtime microservice scales on the number of process flows (PFs) in the queued state. For the shared queue, the autoscaling settings are kept in the AUTOSCALING file in the /shared directory; for a dedicated queue, the option is available in the UI.

Each runtime pod processes up to 10 PFs in parallel (RabbitMQ_Concurrency: 10). With the autoscaling threshold set to 5:

12 PFs: 10 running, 2 queued (below the threshold, so no scale-up)
16 PFs: 10 running, 6 queued (above the threshold, so another runtime pod is added, up to max pod)
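
For reference, the two settings behind this behavior can be pictured as a small configuration sketch. The key names below are illustrative only; RabbitMQ_Concurrency belongs to the runtime pod's configuration, and the threshold (max queue) is set through the autoscaler settings described in the sections that follow.

runtime:                      # runtime microservice, shared queue
  RabbitMQ_Concurrency: 10    # PFs one runtime pod executes in parallel
  autoscaling:
    maxQueue: 5               # threshold: queued PFs that trigger a scale-up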




Configuring HPA for runtime microservice before deployment (shared queue)

Set min pod, max pod, and max queue under autoscaler > env:. Max queue is the threshold.
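
A minimal sketch of what that section might look like, assuming a YAML-based deployment configuration such as the chart's values file. The key names are illustrative and the values mirror the shared-queue example used elsewhere on this page; check the actual keys shipped with your Adeptia Connect version.

autoscaler:
  env:
    MIN_POD: "2"      # minimum number of runtime pods
    MAX_POD: "2"      # maximum number of runtime pods
    MAX_QUEUE: "5"    # threshold: queued PFs that trigger a scale-up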

Configuring HPA for runtime microservice after deployment (shared queue)

Go to the /shared directory.

Open the AUTOSCALING file in the /shared directory in edit mode. For the runtime shared queue, the entry contains the following values (a sample entry is shown after these steps):

2 = min pod

2 = max pod

5 = max queue (threshold)

performance = namespace

runtime = shared queue name

runtime = name of the deployment

Save the file.

Changes are reflected within 30 seconds.

Downscaling also happens automatically.
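
A sample of how such an entry might look, assuming a simple key-value layout. The field names are assumed and the exact syntax varies by installation, so follow the format already used by the existing entries in the AUTOSCALING file rather than copying this verbatim.

namespace: performance    # Kubernetes namespace
queue: runtime            # shared queue name
deployment: runtime       # name of the deployment to scale
minPod: 2                 # minimum runtime pods
maxPod: 2                 # maximum runtime pods
maxQueue: 5               # threshold of queued PFs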

Configuring HPA for runtime microservice (dedicated queue) - done only after deployment

This can be performed only by an admin user.

Refer to "Creating a queue - Adeptia Connect Help v4.0 - Adeptia Docs" for the steps to create a dedicated queue.

