Using MCP, you can adjust the number of pod replicas without using an external
orchestrator by enabling the horizontal pod autoscaling feature in your
MCP Kubernetes deployment. The feature is based on observed CPU and/or
memory utilization and can be enabled using the metrics-server
add-on.
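Once the feature is enabled, autoscaling is configured per workload through standard HorizontalPodAutoscaler objects. As a reference, a minimal CPU-based autoscaler for a hypothetical example-app deployment might look as follows; the name, replica bounds, and utilization target are illustrative:

   # Scales the example-app Deployment between 1 and 5 replicas,
   # targeting 70% average CPU utilization across its pods.
   apiVersion: autoscaling/v1
   kind: HorizontalPodAutoscaler
   metadata:
     name: example-app
   spec:
     scaleTargetRef:
       apiVersion: apps/v1
       kind: Deployment
       name: example-app
     minReplicas: 1
     maxReplicas: 5
     targetCPUUtilizationPercentage: 70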
To enable horizontal pod autoscaling:
1. While generating a deployment metadata model for your new MCP Kubernetes cluster as described in Create a deployment metadata model, select the Kubernetes metrics server enabled option in the Kubernetes Product parameters section of the Model Designer UI.
2. If you have already generated a deployment metadata model without the metrics-server parameter, or to enable this feature on an existing Kubernetes cluster:
   1. Open your Reclass model Git project repository on the cluster level.
   2. In /kubernetes/control.yml, add the metrics-server parameters:
      parameters:
        kubernetes:
          common:
            addons:
              ...
              metrics-server:
                enabled: true
3. Select from the following options:
   - If you are performing an initial deployment of your cluster, proceed with further configuration as required. Pod autoscaling will be enabled during your Kubernetes cluster deployment.
   - If you are making changes to an existing cluster:
     1. Log in to the Salt Master node.
     2. Refresh your Reclass storage data:

        salt-call state.sls reclass.storage
     3. Apply the kube-addons state:

        salt -C 'I@kubernetes:master' state.sls kubernetes.master.kube-addons
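     After the state is applied, you can check that the metrics-server pod is running and that node metrics are available. The label selector below assumes the default metrics-server add-on manifests:

        kubectl -n kube-system get pods -l k8s-app=metrics-server
        kubectl top nodes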
4. On a running Kubernetes cluster, verify that autoscaling works successfully using the official Kubernetes documentation.
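For a quick functional check, you can also autoscale a test workload directly with kubectl and watch the resulting HorizontalPodAutoscaler react to load. The deployment name and thresholds below are illustrative:

   kubectl autoscale deployment example-app --cpu-percent=50 --min=1 --max=10
   kubectl get hpa
   kubectl describe hpa example-app

Under sustained CPU load against example-app, the REPLICAS value reported by kubectl get hpa should grow toward the configured maximum and drop back after the load stops.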