Configure the parallel update of worker nodes

Available since 17.0.0, 16.0.0, and 14.1.0 as GA. Available since 14.0.1(0) and 15.0.1 as TechPreview.

Note

For MOSK clusters, you can start using the procedure below during cluster update from 23.1 to 23.2. For details, see MOSK documentation: Parallelizing node update operations.

By default, worker machines are updated sequentially: each node is drained, its software is upgraded, services are restarted, and so on. However, Container Cloud enables you to parallelize node update operations, significantly improving update efficiency, especially on large clusters.

For the upgrade workflow of the control plane, see Change the upgrade order of a machine or machine pool.

Configure the parallel update of worker nodes using web UI

Available since 17.0.0, 16.0.0, and 14.1.0

  1. Log in to the Container Cloud web UI with the m:kaas:namespace@operator or m:kaas:namespace@writer permissions.

  2. Switch to the required project using the Switch Project action icon located on top of the main left-side navigation panel.

  3. In the Clusters tab, click the More action icon in the last column of the required cluster and select Configure cluster.

  4. In General Settings of the Configure cluster window, define the following parameters:

    Parallel Upgrade Of Worker Machines

    The maximum number of worker nodes to update simultaneously. It serves as an upper limit on the number of machines drained at any given moment. Defaults to 1.

    You can change this option after deployment, before the next cluster update.

    Parallel Preparation For Upgrade Of Worker Machines

    The maximum number of worker nodes being prepared at any given moment, which includes downloading new artifacts. It limits the network load that can occur when downloading files to the nodes. Defaults to 50.

Configure the parallel update of worker nodes using CLI

Available since 15.0.1 and 14.0.1(0)

  1. Open the Cluster object for editing.

  2. Adjust the following parameters as required:

    Configuration of the parallel node update:

    spec.providerSpec.maxWorkerUpgradeCount

    The maximum number of worker nodes to update simultaneously. It serves as an upper limit on the number of machines drained at any given moment. Defaults to 1.

    spec.providerSpec.maxWorkerPrepareCount

    The maximum number of workers being prepared at any given moment, which includes downloading new artifacts. It limits the network load that can occur when downloading files to the nodes. Defaults to 50.

  3. Save the Cluster object to apply the change.
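For example, to allow up to three workers to be drained and updated in parallel, the relevant excerpt of the Cluster object might look as follows. This is a sketch only: the cluster name and the exact nesting of the providerSpec fields may differ depending on your provider and release, so verify the paths against your actual Cluster object.

```yaml
# Hypothetical excerpt of a Cluster object; field paths follow the
# parameters listed above and may vary by provider.
spec:
  providerSpec:
    maxWorkerUpgradeCount: 3   # drain and update up to 3 workers at once
    maxWorkerPrepareCount: 50  # prepare (download artifacts to) up to 50 workers at once
```

You can open the object for editing with, for example, `kubectl edit cluster <cluster-name> -n <project-namespace>` against the management cluster; saving the editor buffer applies the change.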