Horizontal Pod Autoscaling (HPA) spins up additional pods when the existing resources (CPU and memory) of a microservice are exhausted or, for the runtime, when the message count threshold for the queue is exceeded. The additional pods are deleted when resource utilization and the message count fall below their threshold values.
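
For the CPU and memory thresholds, the scaling decision follows the standard Kubernetes HPA calculation, which derives the desired pod count from the ratio of current to target utilization:

    desiredReplicas = ceil( currentReplicas × currentMetricValue / targetMetricValue )

For example, with a CPU target of 400% and current utilization at 800% across 2 running pods, the HPA scales to ceil(2 × 800 / 400) = 4 pods, within the configured minimum and maximum pod counts.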

...

  • The autoscaling of runtime pods is based on the threshold values for Message Queue, CPU, and memory that you set in the global values.yaml file. For more details, refer to this section.

    Tip
    For a dedicated runtime (Deployment) pod, you need to set the threshold values for Message Queue, CPU, and memory while creating the Deployment. For more details, refer to this page.


  • The autoscaling of the other microservices' pods is based only on the threshold values for CPU and memory that you set in the global values.yaml file. For more details, refer to this section.

When you use Kubernetes' HPA,

  • The autoscaling of runtime pods is based only on the threshold values for CPU and memory that you set in the global values.yaml file. For more details, refer to this section. A sample manifest illustrating these settings is sketched after this list.

    Tip
    For a dedicated runtime (Deployment) pod, you need to set the threshold values for CPU and memory while creating the Deployment. For more details, refer to this page.


  • The autoscaling of the other microservices' pods is based only on the threshold values for CPU and memory that you set in the global values.yaml file. For more details, refer to this section.
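
Under the hood, this corresponds to a standard Kubernetes HorizontalPodAutoscaler (autoscaling/v2). The sketch below is illustrative only: the names runtime-hpa and runtime are hypothetical, and the numbers mirror the default threshold values documented in the parameter table below.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: runtime-hpa              # hypothetical name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: runtime                # hypothetical Deployment name
      minReplicas: 1                 # compare RUNTIME_MIN_POD
      maxReplicas: 5                 # compare RUNTIME_MAX_POD (default 1)
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 400    # compare RUNTIME_AUTOSCALING_TARGETCPUUTILIZATIONPERCENTAGE
        - type: Resource
          resource:
            name: memory
            target:
              type: Utilization
              averageUtilization: 400    # compare RUNTIME_AUTOSCALING_TARGETMEMORYUTILIZATIONPERCENTAGE
      behavior:
        scaleUp:
          stabilizationWindowSeconds: 300   # compare RUNTIME_SCALE_UP_STABILIZATION_WINDOW_SECONDS
          policies:
            - type: Pods
              value: 1                      # compare RUNTIME_MAX_POD_TO_SCALE_UP
              periodSeconds: 60             # compare RUNTIME_SCALE_UP_PERIOD_SECONDS
        scaleDown:
          stabilizationWindowSeconds: 300   # compare RUNTIME_SCALE_DOWN_STABILIZATION_WINDOW_SECONDS
          policies:
            - type: Pods
              value: 1                      # compare RUNTIME_MAX_POD_TO_SCALE_DOWN
              periodSeconds: 60             # compare RUNTIME_SCALE_DOWN_PERIOD_SECONDS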

...

Parameter | Description | Default value
RUNTIME_AUTOSCALING_ENABLED | Parameter to enable HPA by setting its value to true. | true
RUNTIME_MIN_POD | The minimum number of pods. | 1
RUNTIME_MAX_POD | The maximum number of pods the runtime microservice can scale up to. | 1
RUNTIME_AUTOSCALING_TYPE | Parameter to define whether autoscaling happens based on CPU, memory, or both. The possible values are cpu, memory, and cpu-memory. | cpu
RUNTIME_AUTOSCALING_TARGETCPUUTILIZATIONPERCENTAGE | The percentage of the CPU requests set in the global values.yaml for the runtime pods at which the HPA spins up a new pod. | 400
RUNTIME_AUTOSCALING_TARGETMEMORYUTILIZATIONPERCENTAGE | The percentage of the memory requests set in the global values.yaml for the runtime pods at which the HPA spins up a new pod. | 400
RUNTIME_SCALE_UP_STABILIZATION_WINDOW_SECONDS | The duration (in seconds) for which the application monitors spikes in resource utilization by the currently running pods, to determine whether scaling up is required. | 300
RUNTIME_MAX_POD_TO_SCALE_UP | The maximum number of pods the runtime microservice can add at a time. | 1
RUNTIME_SCALE_UP_PERIOD_SECONDS | The interval (in seconds) at which spikes in resource utilization by the currently running pods are tracked. | 60
RUNTIME_SCALE_DOWN_STABILIZATION_WINDOW_SECONDS | The duration (in seconds) for which the application monitors drops in resource utilization by the currently running pods, to determine whether scaling down is required. | 300
RUNTIME_MAX_POD_TO_SCALE_DOWN | The maximum number of pods the runtime microservice can remove at a time. | 1
RUNTIME_SCALE_DOWN_PERIOD_SECONDS | The interval (in seconds) at which drops in resource utilization by the currently running pods are tracked. | 60
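
As a sketch, a global values.yaml fragment that enables HPA for the runtime with these parameters could look as follows. Only the parameter names and defaults come from the table above; the exact nesting of these keys in your values.yaml may differ.

    RUNTIME_AUTOSCALING_ENABLED: true
    RUNTIME_MIN_POD: 1
    RUNTIME_MAX_POD: 5                       # default is 1; raise it to allow scale-out
    RUNTIME_AUTOSCALING_TYPE: cpu-memory     # cpu, memory, or cpu-memory
    RUNTIME_AUTOSCALING_TARGETCPUUTILIZATIONPERCENTAGE: 400
    RUNTIME_AUTOSCALING_TARGETMEMORYUTILIZATIONPERCENTAGE: 400
    RUNTIME_SCALE_UP_STABILIZATION_WINDOW_SECONDS: 300
    RUNTIME_MAX_POD_TO_SCALE_UP: 1
    RUNTIME_SCALE_UP_PERIOD_SECONDS: 60
    RUNTIME_SCALE_DOWN_STABILIZATION_WINDOW_SECONDS: 300
    RUNTIME_MAX_POD_TO_SCALE_DOWN: 1
    RUNTIME_SCALE_DOWN_PERIOD_SECONDS: 60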

...

Configuring KEDA for runtime microservice
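
With KEDA, the runtime is scaled through a ScaledObject rather than a plain HorizontalPodAutoscaler, which allows queue-based triggers alongside CPU and memory. The sketch below uses KEDA's standard cpu and memory scalers; the object and Deployment names are hypothetical, and the message-queue trigger is indicated only as a placeholder because its type depends on the broker in use.

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: runtime-scaledobject     # hypothetical name
    spec:
      scaleTargetRef:
        name: runtime                # hypothetical Deployment name
      minReplicaCount: 1
      maxReplicaCount: 5
      triggers:
        - type: cpu
          metricType: Utilization
          metadata:
            value: "400"
        - type: memory
          metricType: Utilization
          metadata:
            value: "400"
        # A message-queue trigger for the runtime would also be defined here;
        # its type and metadata depend on the message broker in use.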



...

Related topic

Creating a Deployment

...