...

  • The autoscaling of runtime pods can be based on the threshold values for the Message Queue, CPU, or memory, or any combination of these three parameters. You can make these configurations in the global values.yaml file.

    To use KEDA, you first need to enable it by setting the value of the type variable to keda under the global > config > autoscaling section in the values.yaml file, as shown in the sketch after this list. To set the other relevant parameters, for example, the threshold number of messages in the Message Queue, refer to this section.

    Tip
    For a dedicated runtime (Deployment) pod, you need to set the threshold values for Message Queue, CPU, and memory while creating the Deployment. For more details, refer to this page.


  • The autoscaling of the other microservices' pods can be based on the threshold values for CPU or memory, or both. You can make these configurations in the global values.yaml file. For more details, refer to this section.
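
The following is a minimal values.yaml sketch of the KEDA configuration described above. Only the type variable under global > config > autoscaling is confirmed by this page; the threshold key names and values below are illustrative placeholders, so check your chart's values.yaml for the exact variable names.

  global:
    config:
      autoscaling:
        # Select KEDA as the autoscaler (the documented setting).
        type: keda
        # Hypothetical threshold keys, for illustration only:
        messageQueueThreshold: 100   # scale out when queued messages exceed this count
        cpuThreshold: 70             # target CPU utilization, in percent
        memoryThreshold: 80          # target memory utilization, in percent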

When you use the Kubernetes Horizontal Pod Autoscaler (HPA),

  • The autoscaling of runtime pods can be based on the threshold values for CPU or memory, or both. You can make these configurations in the global values.yaml file, as shown in the sketch after this list. To set the relevant parameters in the values.yaml file, refer to this section.

    Tip
    Ensure that the value of the type variable under the global > config > autoscaling section in the values.yaml file is set to hpa.


    Tip
    For a dedicated runtime (Deployment) pod, you need to set the threshold values for CPU and memory while creating the Deployment. For more details, refer to this page.


  • The autoscaling of the other microservices' pods can be based on the threshold values for CPU or memory, or both. You can make these configurations in the global values.yaml file. To set the relevant parameters in the values.yaml file, refer to this section.
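
The following is a minimal values.yaml sketch of the HPA configuration described above, under the same assumptions as the KEDA example: only the type variable is confirmed by this page, and the CPU and memory threshold keys are illustrative placeholders.

  global:
    config:
      autoscaling:
        # Select the Kubernetes Horizontal Pod Autoscaler (the documented setting).
        type: hpa
        # Hypothetical threshold keys, for illustration only:
        cpuThreshold: 70      # target CPU utilization, in percent
        memoryThreshold: 80   # target memory utilization, in percent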

...