Deploy a VMware vSphere-based regional cluster

Unsupported since 2.25.0


Regional clusters are unsupported since Container Cloud 2.25.0. Mirantis does not perform functional integration testing of the feature and intends to remove the related code in Container Cloud 2.26.0. If you still require this feature, contact Mirantis support for further information.

You can deploy an additional regional VMware vSphere-based cluster to create managed clusters of several provider types or with different configurations.

To deploy a vSphere-based regional cluster:

  1. Log in to the node where you bootstrapped a management cluster.

  2. Verify that the bootstrap directory is updated.

    Select from the following options:

    • For clusters deployed using Container Cloud 2.11.0 or later:

      ./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
      --target-dir <pathToBootstrapDirectory>
    • For clusters deployed using the Container Cloud release earlier than 2.11.0 or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:

      chmod 0755 get_container_cloud.sh
      ./get_container_cloud.sh
  3. Verify access to the target vSphere cluster from Docker. For example:

    docker run --rm alpine sh -c "apk add --no-cache curl; \
    curl -k https://<vCenterServerHostname>"

    The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.

  4. Prepare deployment templates:

    1. Configure MetalLB parameters:

      1. Open the required configuration file for editing:

        • templates/vsphere/metallbconfig.yaml.template. For a detailed MetalLBConfig object description, see API Reference: MetalLBConfig resource.

        • templates/vsphere/cluster.yaml.template.

      2. Add SET_VSPHERE_METALLB_RANGE, the MetalLB range of IP addresses to assign to load balancers for Kubernetes Services.

        To obtain the SET_VSPHERE_METALLB_RANGE value for the selected vSphere network, contact your vSphere administrator who provides you with the IP ranges dedicated to your environment.
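        For illustration, the placeholder substitution can be scripted. In the sketch below, the template content and the range value are assumptions; substitute the range your vSphere administrator provided.

        ```shell
        # Illustrative only: replace the SET_VSPHERE_METALLB_RANGE placeholder
        # in the MetalLB template. The stand-in file content and the range
        # value are assumptions for this example.
        template=metallbconfig.yaml.template
        printf 'ipRange: SET_VSPHERE_METALLB_RANGE\n' > "$template"   # stand-in template
        range=""                               # value from your vSphere admin
        sed "s|SET_VSPHERE_METALLB_RANGE|$range|" "$template" > metallbconfig.yaml
        cat metallbconfig.yaml
        # → ipRange:
        ```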

    2. Modify templates/vsphere/cluster.yaml.template:

      vSphere cluster network parameters

      1. Modify the following required network parameters:

        • SET_LB_HOST - IP address from the provided vSphere network for the Kubernetes API load balancer (Keepalived VIP).

        • Name of the vSphere datastore. You can use different datastores for vSphere Cluster API and vSphere Cloud Provider.

        • Path to a folder where the cluster machines metadata will be stored.

        • Path to a network for cluster machines.

        • Path to a resource pool in which VMs will be created.

        To obtain the SET_LB_HOST parameter for the selected vSphere network, contact your vSphere administrator who provides you with the IP ranges dedicated to your environment.

        Modify other parameters if required. For example, add the corresponding values for cidrBlocks in the spec::clusterNetwork::services section.

      2. For either DHCP or non-DHCP vSphere network:

        1. Determine the vSphere network parameters as described in VMware vSphere network objects and IPAM recommendations.

        2. Provide the following additional parameters for a proper network setup on machines using embedded IP address management (IPAM) in templates/vsphere/cluster.yaml.template:


          To obtain IPAM parameters for the selected vSphere network, contact your vSphere administrator who provides you with IP ranges dedicated to your environment only.

          vSphere configuration data

          • Enables IPAM. The recommended value is true for both DHCP and non-DHCP networks.

          • CIDR of the provided vSphere network.

          • Gateway of the provided vSphere network.

          • IP range for the cluster machines. Specify a range within the provided CIDR. If a DHCP network is used, this range must not intersect with the DHCP range of the network.

          • Optional. IP ranges to be excluded from being assigned to the cluster machines. The MetalLB range and SET_LB_HOST must not intersect with the addresses used for IPAM.

          • List of nameservers for the provided vSphere network.
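          As a sanity check, you can verify that the endpoints of the machine IP range fall inside the provided network CIDR before filling in the template. The sketch below uses shell prefix-mask arithmetic; all addresses are illustrative examples, not values from your environment.

          ```shell
          # Sanity check: do the include-range endpoints fall inside the
          # network CIDR? All addresses below are illustrative.
          ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24)|($2<<16)|($3<<8)|$4 )); }
          in_cidr() {  # in_cidr <ip> <network/prefix>
            local net=${2%/*} p=${2#*/} m
            m=$(( p == 0 ? 0 : (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
            [ $(( $(ip2int "$1") & m )) -eq $(( $(ip2int "$net") & m )) ]
          }
          in_cidr "" "" && in_cidr "" "" \
            && echo "include range fits the CIDR"
          # → include range fits the CIDR
          ```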

    3. For RHEL deployments, fill out templates/vsphere/rhellicenses.yaml.template.

      RHEL license configuration

      Use one of the following set of parameters for RHEL machines subscription:

      • The user name and password of your RedHat Customer Portal account associated with your RHEL license for Virtual Datacenters.

        Optionally, provide the subscription allocation pools to use for the RHEL subscription activation. If not needed, remove the poolIDs field for subscription-manager to automatically select the licenses for machines.

        For example:

          username: <username>
          password:
            value: <password>
          poolIDs:
          - <pool1>
          - <pool2>
      • The activation key and organization ID associated with your RedHat account with RHEL license for Virtual Datacenters. The activation key can be created by the organization administrator on the RedHat Customer Portal.

        If you use the RedHat Satellite server for management of your RHEL infrastructure, you can provide a pre-generated activation key from that server. In this case:

        • Provide the URL to the RedHat Satellite RPM for installation of the CA certificate that belongs to that server.

        • Configure squid-proxy on the management or regional cluster to allow access to your Satellite server. For details, see Configure squid-proxy.

        For example:

          activationKey:
            value: <activation key>
          orgID: "<organization ID>"
          rpmUrl: <rpm url>


        For RHEL 8.7, verify mirrors configuration for your activation key. For more details, see RHEL 8 mirrors configuration.


      Provide only one set of parameters. Mixing the parameters from different activation methods will cause deployment failure.


      The kubectl apply command automatically saves the applied data as plain text into the annotation of the corresponding object. This may result in revealing sensitive data in this annotation when creating or modifying the object.

      Therefore, do not use kubectl apply on this object. Use kubectl create, kubectl patch, or kubectl edit instead.

      If you used kubectl apply on this object, you can remove the annotation from the object using kubectl edit.

    4. For CentOS deployments, in templates/vsphere/rhellicenses.yaml.template, remove all lines under items:.
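    For illustration, that removal can be scripted. The awk sketch below assumes two-space YAML indentation, so every nested line starts with a space or a hyphen; the stand-in file content is an assumption.

    ```shell
    # Illustrative sketch: keep the "items:" key but drop everything nested
    # under it. The stand-in file content is an assumption for this example.
    f=rhellicenses.yaml.template
    printf 'items:\n- apiVersion: kaas.mirantis.com/v1alpha1\n  kind: RHELLicense\n' > "$f"
    awk '/^items:/ {print; skip=1; next} skip && /^[^ -]/ {skip=0} !skip' "$f" > cleaned.yaml
    cat cleaned.yaml
    # → items:
    ```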

  5. Configure NTP server.

    Before Container Cloud 2.23.0, optional if servers from the Ubuntu NTP pool (* are accessible from the node where the regional cluster is being provisioned. Otherwise, configure the regional NTP server parameters as described below.

    Since Container Cloud 2.23.0, optionally disable NTP that is enabled by default. This option disables the management of chrony configuration by Container Cloud to use your own system for chrony management. Otherwise, configure the regional NTP server parameters as described below.

    NTP configuration

    Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

    In templates/vsphere/cluster.yaml.template, add the ntp:servers section with the list of required server names:

        spec:
          providerSpec:
            value:
              ntpEnabled: true
              kaas:
                regional:
                  - helmReleases:
                      - name: <providerName>-provider
                        values:
                          config:
                            lcm:
                              ntp:
                                servers:
                                - <serverName>
                    provider: <providerName>

    To disable NTP:

        spec:
          providerSpec:
            value:
              ntpEnabled: false
  6. Prepare the VM template as described in Prepare the virtual machine template.

  7. In templates/vsphere/machines.yaml.template, define the following parameters:

    • rhelLicense

      RHEL license name defined in rhellicenses.yaml.template, defaults to kaas-mgmt-rhel-license. Remove or comment out this parameter for CentOS and Ubuntu deployments.

    • diskGiB

      Disk size in GiB for machines that must match the disk size of the VM template. You can leave this parameter commented to use the disk size of the VM template. The minimum requirement is 120 GiB.

    • template

      Path to the VM template prepared in the previous step.

    Sample template:

          kind: VsphereMachineProviderSpec
          rhelLicense: <rhelLicenseName>
          numCPUs: 8
          memoryMiB: 32768
          # diskGiB: 120
          template: <vSphereVMTemplatePath>

    Also, modify other parameters if required.

  8. Available since Container Cloud 2.24.0. Optional. Technology Preview. Enable custom host names for cluster machines. When enabled, any machine host name in a particular region matches the related Machine object name. For example, instead of the default kaas-node-<UID>, a machine host name will be master-0. The custom naming format is more convenient and easier to operate with.

    To enable the feature on the management and its future managed clusters:

    1. In templates/vsphere/cluster.yaml.template, find the spec.providerSpec.value.kaas.regional section of the required region.

    2. In this section, find the required provider name under helmReleases.

    3. Under values.config, add customHostnamesEnabled: true.

      For example, for the bare metal provider in region-one:

       regional:
         - helmReleases:
             - name: baremetal-provider
               values:
                 config:
                   allInOneAllowed: false
                   customHostnamesEnabled: true
                   internalLoadBalancers: false
           provider: baremetal

    Add the following environment variable:

    export CUSTOM_HOSTNAMES=true
  9. Optional. If you require all Internet access to go through a proxy server, in bootstrap.env, add the following environment variables to bootstrap the regional cluster using proxy:



    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    • PROXY_CA_CERTIFICATE_PATH (optional)

    Example snippet:

    export HTTP_PROXY=http://<proxyHost>:<proxyPort>
    export HTTPS_PROXY=http://<proxyHost>:<proxyPort>
    export NO_PROXY=<vCenterHost>,registry.internal.lan
    export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"

    The following formats of variables are accepted:

    Proxy configuration data

    • HTTP_PROXY and HTTPS_PROXY:

      • http://proxy.example.com:port - for anonymous access.

      • http://user:password@proxy.example.com:port - for restricted access.

    • NO_PROXY: comma-separated list of IP addresses or domain names. It is mandatory to add the host[:port] of the vCenter server.

    • PROXY_CA_CERTIFICATE_PATH: optional. Absolute path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.


    If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.

    For implementation details, see Proxy and cache support.

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for a VMware vSphere-based cluster.
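    Because the vCenter host must appear in NO_PROXY, a small check like the one below can catch a missing entry before bootstrapping. The helper and host names are illustrative; it only detects exact comma-separated entries.

    ```shell
    # Hedged helper: detect whether a host appears as an exact
    # comma-separated entry in NO_PROXY. Host names are examples.
    no_proxy_has() {
      case ",${NO_PROXY}," in
        *",$1,"*) return 0 ;;
        *)        return 1 ;;
      esac
    }
    NO_PROXY="registry.internal.lan,vcenter.example.lan"
    no_proxy_has "vcenter.example.lan" && echo "vCenter excluded from proxying"
    # → vCenter excluded from proxying
    ```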

  10. Export the following parameters:

    export KAAS_VSPHERE_ENABLED=true
    export KUBECONFIG=<pathToMgmtClusterKubeconfig>
    export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
    export REGION=<NewRegionName>

    Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.


    The REGION and REGIONAL_CLUSTER_NAME parameters values must contain only lowercase alphanumeric characters, hyphens, or periods.
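    This constraint can be expressed as a simple regular-expression check; the helper name and test values below are illustrative.

    ```shell
    # Names may contain only lowercase alphanumerics, hyphens, and periods.
    valid_name() { printf '%s' "$1" | grep -Eq '^[a-z0-9.-]+$'; }
    valid_name "region-one" && echo accepted      # → accepted
    valid_name "Region_One" || echo rejected      # → rejected
    ```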


    If the bootstrap node for the regional cluster deployment is not the same where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.

    export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
  11. Run the regional cluster bootstrap script:

    ./bootstrap.sh deploy_regional
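    Before running the script, a quick pre-flight check for unsubstituted placeholders can save a failed bootstrap. The sketch below creates a stand-in template to stay self-contained; the directory layout and file content are assumptions.

    ```shell
    # Illustrative pre-flight: any remaining SET_* token signals an
    # incomplete template. The stand-in file is an assumption.
    mkdir -p templates/vsphere
    printf 'lbHost: SET_LB_HOST\n' > templates/vsphere/cluster.yaml.template
    if grep -Rn 'SET_[A-Z_]\{1,\}' templates/vsphere/ >/dev/null; then
      echo "unsubstituted placeholders remain"
    fi
    # → unsubstituted placeholders remain
    ```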


    When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

    If the bootstrap node for the regional cluster deployment is not the same where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.

    The workflow of the regional cluster bootstrap script

    1. Prepare the bootstrap cluster for the new regional cluster.

    2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.

    3. Connect to each machine of the management cluster through SSH.

    4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.

    5. Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider.

    6. Forward the bootstrap cluster endpoint to helm-controller.

    7. Wait for all CRDs to be available and verify the objects created using these CRDs.

    8. Pivot the cluster API stack to the regional cluster.

    9. Switch the LCM Agent from the bootstrap cluster to the regional one.

    10. Wait for the Container Cloud components to start on the regional cluster.

  12. Verify that network addresses used on your clusters do not overlap with the following default MKE network addresses for Swarm and MCR:

    • is used for Swarm networks. IP addresses from this network are virtual.

    • is used for MCR networks. IP addresses from this network are allocated on hosts.
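    A quick way to test a candidate network against the MKE default pools is prefix-mask arithmetic: two IPv4 CIDRs overlap when they agree under the shorter of the two prefixes. The sketch below uses example values; check the candidate against each default pool of your deployment.

    ```shell
    # Sketch: two IPv4 CIDRs overlap when their network bits agree under
    # the shorter prefix. All CIDR values below are illustrative.
    ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24)|($2<<16)|($3<<8)|$4 )); }
    overlap() {  # overlap <cidrA> <cidrB>
      local n1=${1%/*} p1=${1#*/} n2=${2%/*} p2=${2#*/} p m
      p=$(( p1 < p2 ? p1 : p2 ))
      m=$(( p == 0 ? 0 : (0xFFFFFFFF << (32 - p)) & 0xFFFFFFFF ))
      [ $(( $(ip2int "$n1") & m )) -eq $(( $(ip2int "$n2") & m )) ]
    }
    overlap "" "" && echo "overlap detected"
    # → overlap detected
    ```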

    Verification of Swarm and MCR network addresses

    To verify Swarm and MCR network addresses, run on any master node:

    docker info

    Example of system response:

      Default Address Pool:
      SubnetSize: 24
     Default Address Pools:
       Base:, Size: 20

    Not all of the Swarm and MCR addresses are usually in use. One Swarm Ingress network is created by default and occupies the address block. Also, three MCR networks are created by default and occupy three address blocks:,, and

    To verify the actual networks state and addresses in use, run:

    docker network ls
    docker network inspect <networkName>

Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.