...

To enable HPA, set the parameters described below for each microservice individually.

The following table describes the autoscaling parameters for the webrunner microservice. You can find these parameters in the respective section of each microservice in the global values.yaml file; a sketch of the resulting values appears after the table.

The following parameters are nested under the autoscaling: key of the microservice's section.

Parameter | Description | Sample value
enabled | Parameter to enable HPA by setting its value to true. | true
minReplicas | Minimum number of pods for a microservice. | 1
maxReplicas | The maximum number of pods a microservice can scale up to. | 1
targetCPUUtilizationPercentage | The CPU utilization (in percent) at which the autoscaler spins up a new pod. | 400
targetMemoryUtilizationPercentage | The memory utilization (in percent) at which the autoscaler spins up a new pod. | 400
behavior.scaleUp.stabilizationWindowSeconds | The duration (in seconds) for which spikes in resource utilization by the currently running pods are watched, to determine whether scaling up is required. | 300
behavior.scaleUp.maxPodToScaleUp | The maximum number of pods a microservice can scale up by at a time. | 2
behavior.scaleUp.periodSeconds | The interval (in seconds) at which spikes in resource utilization by the currently running pods are tracked. | 60
behavior.scaleDown.stabilizationWindowSeconds | The duration (in seconds) for which drops in resource utilization by the currently running pods are watched, to determine whether scaling down is required. | 300
behavior.scaleDown.maxPodToScaleDown | The maximum number of pods a microservice can scale down by at a time. | 1
behavior.scaleDown.periodSeconds | The interval (in seconds) at which drops in resource utilization by the currently running pods are tracked. | 60
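Putting the values above together, the autoscaling block for the webrunner microservice would look roughly like the sketch below. The webrunner: top-level key and the sample numbers are illustrative; use the actual section name and thresholds that apply to your deployment.

    webrunner:                                  # illustrative section name; repeat for each microservice you want to autoscale
      autoscaling:
        enabled: true                           # turn HPA on for this microservice
        minReplicas: 1                          # lower bound on pod count
        maxReplicas: 1                          # upper bound on pod count
        targetCPUUtilizationPercentage: 400     # CPU threshold that triggers a scale-up
        targetMemoryUtilizationPercentage: 400  # memory threshold that triggers a scale-up
        behavior:
          scaleUp:
            stabilizationWindowSeconds: 300     # watch window before scaling up
            maxPodToScaleUp: 2                  # pods added at most per scaling step
            periodSeconds: 60                   # how often utilization spikes are checked
          scaleDown:
            stabilizationWindowSeconds: 300     # watch window before scaling down
            maxPodToScaleDown: 1                # pods removed at most per scaling step
            periodSeconds: 60                   # how often utilization drops are checked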

Configuring HPA for the runtime microservice

...

The following table describes the autoscaling parameters for the runtime microservice. You can find these parameters in the runtimeImage: section of the global values.yaml file; a sketch of the resulting values appears after the table.

Parameter | Description | Sample value
RUNTIME_AUTOSCALING_ENABLED | Parameter to enable HPA by setting its value to true. | true
RUNTIME_MIN_POD | Minimum number of pods. | 1
RUNTIME_MAX_POD | The maximum number of pods the runtime microservice can scale up to. | 1
RUNTIME_AUTOSCALING_TARGETCPUUTILIZATIONPERCENTAGE | The CPU utilization (in percent) at which the autoscaler spins up a new pod. | 400
RUNTIME_AUTOSCALING_TARGETMEMORYUTILIZATIONPERCENTAGE | The memory utilization (in percent) at which the autoscaler spins up a new pod. | 400
RUNTIME_SCALE_UP_STABILIZATION_WINDOW_SECONDS | The duration (in seconds) for which spikes in resource utilization by the currently running pods are watched, to determine whether scaling up is required. | 300
RUNTIME_MAX_POD_TO_SCALE_UP | The maximum number of pods the runtime microservice can scale up by at a time. | 1
RUNTIME_SCALE_UP_PERIOD_SECONDS | The interval (in seconds) at which spikes in resource utilization by the currently running pods are tracked. | 60
RUNTIME_SCALE_DOWN_STABILIZATION_WINDOW_SECONDS | The duration (in seconds) for which drops in resource utilization by the currently running pods are watched, to determine whether scaling down is required. | 300
RUNTIME_MAX_POD_TO_SCALE_DOWN | The maximum number of pods the runtime microservice can scale down by at a time. | 1
RUNTIME_SCALE_DOWN_PERIOD_SECONDS | The interval (in seconds) at which drops in resource utilization by the currently running pods are tracked. | 60
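For reference, a minimal sketch of how these settings might appear in the runtimeImage: section of the global values.yaml file, using the sample values from the table above. Whether the chart expects them as plain keys (as shown) or maps them to container environment variables can vary, so confirm the exact layout against your values.yaml.

    runtimeImage:
      # HPA settings for the runtime microservice (layout illustrative; sample values from the table above)
      RUNTIME_AUTOSCALING_ENABLED: "true"
      RUNTIME_MIN_POD: 1
      RUNTIME_MAX_POD: 1
      RUNTIME_AUTOSCALING_TARGETCPUUTILIZATIONPERCENTAGE: 400
      RUNTIME_AUTOSCALING_TARGETMEMORYUTILIZATIONPERCENTAGE: 400
      RUNTIME_SCALE_UP_STABILIZATION_WINDOW_SECONDS: 300
      RUNTIME_MAX_POD_TO_SCALE_UP: 1
      RUNTIME_SCALE_UP_PERIOD_SECONDS: 60
      RUNTIME_SCALE_DOWN_STABILIZATION_WINDOW_SECONDS: 300
      RUNTIME_MAX_POD_TO_SCALE_DOWN: 1
      RUNTIME_SCALE_DOWN_PERIOD_SECONDS: 60

After upgrading the release with the new values, running kubectl get hpa in the application's namespace should list an autoscaler for each microservice that has HPA enabled.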