Mirantis Container Cloud (MCC) becomes part of Mirantis OpenStack for Kubernetes (MOSK)!

Now, the MOSK documentation set covers all product layers, including MOSK management (formerly Container Cloud). This means everything you need is in one place. Some legacy names may remain in the code and documentation and will be updated in future releases. The separate Container Cloud documentation site will be retired, so update your bookmarks to keep easy access to the latest content.

Configure the parallel update of worker nodes

By default, worker machines are updated sequentially: each node is drained, its software is upgraded, its services are restarted, and so on. However, MOSK enables you to parallelize node update operations, significantly improving update efficiency, especially on large clusters.

For the update workflow of the control plane nodes, see Change the upgrade order of a machine.

Configure the parallel update of worker nodes using web UI

  1. Log in to the MOSK management console with the m:kaas:namespace@operator or m:kaas:namespace@writer permissions.

  2. Switch to the required project using the Switch Project action icon located on top of the main left-side navigation panel.

  3. In the Clusters tab, locate the required cluster.

  4. Click the More action icon in the last column of the required cluster and select Configure cluster.

  5. In General Settings of the Configure cluster window, define the following parameters:

    Parallel Upgrade Of Worker Machines

    The maximum number of worker nodes to update simultaneously. It serves as an upper limit on the number of machines that are drained at a given moment. Defaults to 1.

    You can change this option after deployment, before the next cluster update.

    Parallel Preparation For Upgrade Of Worker Machines

    The maximum number of worker nodes that can be prepared simultaneously, which includes downloading new artifacts. It limits the network load that can occur while downloading files to the nodes. Defaults to 50.

Configure the parallel update of worker nodes using CLI

  1. Open the Cluster object for editing.

  2. Adjust the following parameters as required:

    Configuration of the parallel node update:

    spec.providerSpec.maxWorkerPrepareCount

      Default: 50

      The maximum number of worker nodes that can be prepared simultaneously, which includes downloading new artifacts. It limits the network load that can occur while downloading files to the nodes.

    spec.providerSpec.maxWorkerUpgradeCount (deprecated)

      Default: 1

      The maximum number of worker nodes to update simultaneously. It serves as an upper limit on the number of machines that are drained at a given moment.

      Caution

      This parameter is deprecated and will be removed in one of the following releases. Use the concurrentUpdates parameter in the UpdateGroup object instead. For details, see Create update groups for worker machines.

  3. Save the Cluster object to apply the change.
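As a sketch, the relevant part of the edited Cluster object might look as follows. Only the maxWorkerPrepareCount and maxWorkerUpgradeCount fields and their defaults come from the table above; the surrounding structure is illustrative and may differ in your deployment:

```yaml
spec:
  providerSpec:
    # Prepare (download artifacts to) up to 50 worker nodes at a time;
    # lower this value to reduce network load during updates.
    maxWorkerPrepareCount: 50
    # Drain and update up to 1 worker node at a time.
    # Deprecated: prefer the concurrentUpdates parameter
    # in the UpdateGroup object instead.
    maxWorkerUpgradeCount: 1
```

After saving the object, the new limits take effect on the next cluster update.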