Deploy an Azure-based regional cluster

You can deploy an additional regional Azure-based cluster to create managed clusters of several provider types or with different configurations.

To deploy an Azure-based regional cluster:

  1. Log in to the node where you bootstrapped a management cluster.

  2. Prepare the Azure configuration for the new regional cluster:

    1. Create an Azure service principal. Skip this step if you already have an existing Azure service principal to use.

      1. Create a Microsoft Azure account.

      2. Install Azure CLI.

      3. Log in to the Azure CLI:

        az login
      4. List your Azure accounts:

        az account list -o table
      5. If more than one account exists, select the account dedicated for Container Cloud:

        az account set -s <subscriptionID>
      6. Create an Azure service principal:


        Note: The owner role is required for the creation of role assignments.

        az ad sp create-for-rbac --role contributor --scopes="/subscriptions/<subscriptionID>"

        Example of system response:

           "appId": "0c87aM5a-e172-182b-a91a-a9b8d39ddbcd",
           "displayName": "azure-cli-2021-08-04-15-25-16",
           "name": "1359ac72-5794-494d-b787-1d7309b7f8bc",
           "password": "Q1jB2-7Uz6Cka7xos6vL-Ddb4BQx2vgMl",
           "tenant": "6d498697-7anvd-4172-a7v0-4e5b2e25f280"
    2. Change the directory to kaas-bootstrap.

    3. Export the following parameter:

      export KAAS_AZURE_ENABLED=true
    4. In templates/azure/azure-config.yaml.template, modify the following parameters using credentials obtained in the previous steps or using credentials of an existing Azure service principal obtained from the subscription owner:

      • spec:subscriptionID is the subscription ID of your Azure account

      • spec:tenantID is the value of "tenant"

      • spec:clientID is the value of "appId"

      • spec:clientSecret:value is the value of "password"

      For example:

        subscriptionID: b8bea78f-zf7s-s7vk-s8f0-642a6v7a39c1
        tenantID: 6d498697-7anvd-4172-a7v0-4e5b2e25f280
        clientID: 0c87aM5a-e172-182b-a91a-a9b8d39ddbcd
        clientSecret:
          value: Q1jB2-7Uz6Cka7xos6vL-Ddb4BQx2vgMl
    5. In templates/azure/cluster.yaml.template, modify the default configuration of the Azure cluster location. The location must be an Azure region for which your subscription has quota.

      To obtain the list of available locations, run:

      az account list-locations -o=table

      For example:

          location: southcentralus

      Also, modify other parameters as required.
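    If you script this step, you can verify that the intended region short name appears in the locations list before setting it as location. A sketch using a canned sample of the table output (the real list comes from az account list-locations and is much longer):

    ```shell
    # Canned sample of `az account list-locations -o table` output;
    # only the short names in the Name column are valid `location:` values.
    locations='DisplayName         Name
South Central US    southcentralus
West Europe         westeurope'

    # Confirm the short name you plan to use is present.
    echo "$locations" | grep -w 'southcentralus'
    ```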

  3. If you require Internet access to go through a proxy server, in bootstrap.env, add the following environment variables to bootstrap the regional cluster using proxy:

    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    Example snippet:

    export HTTP_PROXY=<proxyAddress>
    export HTTPS_PROXY=<proxyAddress>
    export NO_PROXY=<listOfExcludedAddresses>,registry.internal.lan

    The following formats of variables are accepted:

    • HTTP_PROXY and HTTPS_PROXY:

      • http://proxy.example.com:port - for anonymous access

      • http://user:password@proxy.example.com:port - for restricted access

    • NO_PROXY: comma-separated list of IP addresses or domain names

    For implementation details, see Proxy and cache support.

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for an Azure-based cluster.
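    Before running the bootstrap, you can sanity-check the variable formats locally. A minimal sketch (the proxy address and NO_PROXY entries below are made-up example values, not defaults of the product):

    ```shell
    # Made-up example values; replace them with your own proxy settings.
    HTTP_PROXY="http://user:password@proxy.example.com:3128"
    HTTPS_PROXY="$HTTP_PROXY"
    NO_PROXY="10.0.0.0/8,registry.internal.lan"

    # HTTP_PROXY/HTTPS_PROXY are expected to be http:// URLs, optionally
    # carrying user:password@ for restricted access.
    case "$HTTP_PROXY" in
      http://*) echo "HTTP_PROXY format: ok" ;;
      *)        echo "HTTP_PROXY format: unexpected" ;;
    esac

    # NO_PROXY is a comma-separated list with no spaces.
    case "$NO_PROXY" in
      *' '*) echo "NO_PROXY contains spaces: fix it" ;;
      *)     echo "NO_PROXY format: ok" ;;
    esac
    ```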

  4. Optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the regional cluster is being provisioned. Otherwise, this step is mandatory.

    Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

    In templates/azure/cluster.yaml.template, add the ntp:servers section with the list of required server names to the regional provider release values:

      spec:
        providerSpec:
          value:
            kaas:
              regional:
                - helmReleases:
                  - name: azure-provider
                    values:
                      config:
                        lcm:
                          ntp:
                            servers:
                            - <ntpServerAddress1>
                            - <ntpServerAddress2>
                  provider: azure
  5. Export the following parameters:

    export KUBECONFIG=<pathToMgmtClusterKubeconfig>
    export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
    export REGION=<NewRegionName>

    Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.


    Note: The REGION and REGIONAL_CLUSTER_NAME parameter values must contain only lowercase alphanumeric characters, hyphens, or periods.


    If the bootstrap node for the regional cluster deployment is not the same where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.

    export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
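    You can check the naming constraint locally before exporting the variables. A small sketch (the sample names and the valid_name helper are made up for illustration):

    ```shell
    # REGION and REGIONAL_CLUSTER_NAME values may contain only lowercase
    # alphanumeric characters, hyphens, or periods.
    valid_name() {
      printf '%s' "$1" | grep -Eq '^[a-z0-9.-]+$'
    }

    valid_name 'region-one' && echo 'region-one: valid'
    valid_name 'Region_One' || echo 'Region_One: invalid'
    ```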
  6. Run the regional cluster bootstrap script:

    ./bootstrap.sh deploy_regional


    When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

    If the bootstrap node for the regional cluster deployment is not the same where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.
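    As a quick check before storing the kubeconfig, you can confirm the file is non-empty and note the API endpoint it points to. A sketch with a canned minimal kubeconfig standing in for the real kubeconfig-<regionalClusterName> file (region-one and the server address are made-up values):

    ```shell
    # Canned stand-in for the generated kubeconfig-<regionalClusterName> file.
    cat > kubeconfig-region-one <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: region-one
  cluster:
    server: https://10.0.0.1:6443
EOF

    # Verify that the file is non-empty and print the API endpoint.
    test -s kubeconfig-region-one && grep 'server:' kubeconfig-region-one
    ```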

    The workflow of the regional cluster bootstrap script:

    1. Prepare the bootstrap cluster for the new regional cluster.

    2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.

    3. Connect to each machine of the management cluster through SSH.

    4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.

    5. Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider.

    6. Forward the bootstrap cluster endpoint to helm-controller.

    7. Wait for all CRDs to be available and verify the objects created using these CRDs.

    8. Pivot the Cluster API stack to the regional cluster.

    9. Switch the LCM Agent from the bootstrap cluster to the regional one.

    10. Wait for the Container Cloud components to start on the regional cluster.

Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.