Deploy an OpenStack-based regional cluster

Unsupported since 2.25.0

Caution

Regional clusters are unsupported since Container Cloud 2.25.0. Mirantis does not perform functional integration testing of the feature and intends to remove the related code in Container Cloud 2.26.0. If you still require this feature, contact Mirantis support for further information.

You can deploy an additional regional OpenStack-based cluster to create managed clusters of several provider types or with different configurations.

To deploy an OpenStack-based regional cluster:

  1. Log in to the node where you bootstrapped a management cluster.

  2. Verify that the bootstrap directory is updated.

    Select from the following options:

    • For clusters deployed using Container Cloud 2.11.0 or later:

      ./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
      --target-dir <pathToBootstrapDirectory>
      
    • For clusters deployed using a Container Cloud release earlier than 2.11.0, or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:

      wget https://binary.mirantis.com/releases/get_container_cloud.sh
      
      chmod 0755 get_container_cloud.sh
      
      ./get_container_cloud.sh
      
  3. Prepare the OpenStack configuration for a new regional cluster:

    1. Log in to the OpenStack Horizon.

    2. In the Project section, select API Access.

    3. In the right-side drop-down menu Download OpenStack RC File, select OpenStack clouds.yaml File.

    4. Save the downloaded clouds.yaml file in the kaas-bootstrap folder created by the get_container_cloud.sh script.

    5. In clouds.yaml, add the password field with your OpenStack password under the clouds/openstack/auth section.

      Example:

      clouds:
        openstack:
          auth:
            auth_url: https://auth.openstack.example.com/v3
            username: your_username
            password: your_secret_password
            project_id: your_project_id
            user_domain_name: your_user_domain_name
          region_name: RegionOne
          interface: public
          identity_api_version: 3
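
      Optionally, verify that the credentials in clouds.yaml are valid before proceeding. The following check is a minimal sketch that assumes the python-openstackclient package is installed on the bootstrap node and that you run it from the kaas-bootstrap directory containing clouds.yaml; the --os-cloud value matches the cloud name openstack from the example above:

      # Request a token using the "openstack" cloud entry from clouds.yaml.
      # A token ID in the output confirms that the credentials are valid.
      openstack --os-cloud openstack token issue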
      
    6. If you deploy Container Cloud on top of MOSK Victoria with Tungsten Fabric and use the default security group for newly created load balancers, use the OpenStack CLI to add the following rules for the Kubernetes API server endpoint, the Container Cloud application endpoint, and the MKE web UI and API (see the example commands after this list):

      • direction='ingress'

      • ethertype='IPv4'

      • protocol='tcp'

      • remote_ip_prefix='0.0.0.0/0'

      • port_range_max and port_range_min:

        • '443' for Kubernetes API and Container Cloud application endpoints

        • '6443' for MKE web UI and API
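
      You can add the rules above with the openstack security group rule create command. The following commands are a sketch only; they assume that the python-openstackclient package is installed, that clouds.yaml from the previous step is available in the current directory, and that the default security group of your project is literally named default (substitute the actual group name or ID if it differs):

      # Allow traffic to the Kubernetes API and Container Cloud application endpoints
      openstack --os-cloud openstack security group rule create --ingress --ethertype IPv4 \
        --protocol tcp --remote-ip 0.0.0.0/0 --dst-port 443 default

      # Allow traffic to the MKE web UI and API
      openstack --os-cloud openstack security group rule create --ingress --ethertype IPv4 \
        --protocol tcp --remote-ip 0.0.0.0/0 --dst-port 6443 default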

    7. Verify access to the target cloud endpoint from Docker. For example:

      docker run --rm alpine sh -c "apk add --no-cache curl; \
      curl https://auth.openstack.example.com/v3"
      

      The system output must contain no error records.

    In case of issues, follow the steps provided in Troubleshooting.

  4. Configure the cluster and machines metadata:

    1. Adjust the templates/cluster.yaml.template parameters to suit your deployment:

      1. In the spec::providerSpec::value section, add the mandatory ExternalNetworkID parameter, which is the ID of an external OpenStack network. This network is required to provide public Internet access to the virtual machines. To obtain the ID, see the example command after this list.

      2. In the spec::clusterNetwork::services section, add the corresponding values for cidrBlocks.

      3. Configure other parameters as required.
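
      To obtain the ID of the external OpenStack network for the ExternalNetworkID parameter, you can use the OpenStack CLI. This is an example sketch that assumes the python-openstackclient package is installed and that the clouds.yaml file prepared earlier is available in the current directory:

      # List external (public) networks and copy the ID of the required network
      openstack --os-cloud openstack network list --external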

    2. In templates/machines.yaml.template, modify the spec:providerSpec:value section for the 3 control plane nodes marked with the cluster.sigs.k8s.io/control-plane label by substituting the flavor and image parameters with the corresponding values of the control plane nodes in the related OpenStack cluster (see the example lookup commands after this step). For example:

      spec: &cp_spec
        providerSpec:
          value:
            apiVersion: "openstackproviderconfig.k8s.io/v1alpha1"
            kind: "OpenstackMachineProviderSpec"
            flavor: kaas.minimal
            image: bionic-server-cloudimg-amd64-20190612
      

      Note

      The flavor parameter value provided in the example above is cloud-specific and must meet the Container Cloud requirements.

      Also, modify other parameters as required.
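
      To look up the flavor and image names available in the target OpenStack project, you can use the OpenStack CLI. The commands below are a sketch that assumes the python-openstackclient package is installed and that clouds.yaml is available in the current directory:

      # List available flavors; pick one that meets the Container Cloud requirements
      openstack --os-cloud openstack flavor list

      # List available images; pick the image to use for the control plane nodes
      openstack --os-cloud openstack image list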

  5. Available since Container Cloud 2.24.0. Optional. Technology Preview. Enable custom host names for cluster machines. When enabled, any machine host name in a particular region matches the related Machine object name. For example, instead of the default kaas-node-<UID>, a machine host name will be master-0. The custom naming format is more convenient and easier to operate with.

    To enable the feature on the regional cluster and its future managed clusters:

    1. In templates/cluster.yaml.template, find the spec.providerSpec.value.kaas.regional section of the required region.

    2. In this section, find the required provider name under helmReleases.

    3. Under values.config, add customHostnamesEnabled: true.

      For example, for the bare metal provider in region-one:

      regional:
       - helmReleases:
         - name: baremetal-provider
           values:
             config:
               allInOneAllowed: false
               customHostnamesEnabled: true
               internalLoadBalancers: false
         provider: baremetal-provider
      

    Add the following environment variable:

    export CUSTOM_HOSTNAMES=true
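
    After the regional cluster is deployed, you can optionally verify the naming. This is only an illustrative check that assumes kubectl is installed on the bootstrap node:

    # Node names should match the Machine object names (for example, master-0)
    # instead of the default kaas-node-<UID> format.
    kubectl --kubeconfig kubeconfig-<regionalClusterName> get nodes -o name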
    
  6. Optional. Available as TechPreview. To boot cluster machines from a block storage volume, define the following parameter in the spec:providerSpec section of templates/machines.yaml.template:

    bootFromVolume:
      enabled: true
      volumeSize: 120
    

    Note

    The minimal storage requirement is 120 GB per node. For details, see Requirements for an OpenStack-based cluster.

    To boot the Bastion node from a volume, add the same parameter to templates/cluster.yaml.template in the spec:providerSpec section for Bastion. The default amount of storage, 80 GB, is enough for the Bastion node.

  7. Optional. Available since Container Cloud 2.24.0 as Technology Preview. Create all load balancers of the cluster with a specific Octavia flavor by defining the following parameter in the spec:providerSpec section of templates/cluster.yaml.template:

    serviceAnnotations:
      loadbalancer.openstack.org/flavor-id: <octaviaFlavorID>
    

    For details, see OpenStack documentation: Octavia Flavors.
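
    To obtain <octaviaFlavorID>, you can list the Octavia flavors available in your cloud. This example assumes that the python-octaviaclient plugin for the OpenStack CLI is installed and that clouds.yaml is available in the current directory:

    # List Octavia load balancer flavors and copy the ID of the required flavor
    openstack --os-cloud openstack loadbalancer flavor list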

    Note

    This feature is not supported by OpenStack Queens.

  8. Configure NTP server.

    Before Container Cloud 2.23.0, this step is optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the regional cluster is being provisioned. Otherwise, configure the regional NTP server parameters as described below.

    Since Container Cloud 2.23.0, you can optionally disable NTP, which is enabled by default. This option disables the management of the chrony configuration by Container Cloud so that you can use your own system for chrony management. Otherwise, configure the regional NTP server parameters as described below.

    NTP configuration

    Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

    In templates/cluster.yaml.template, add the ntp:servers section with the list of required server names:

    spec:
      ...
      providerSpec:
        value:
          ...
          ntpEnabled: true
          kaas:
            ...
            regional:
            - helmReleases:
              - name: <providerName>-provider
                values:
                  config:
                    lcm:
                      ...
                      ntp:
                        servers:
                        - 0.pool.ntp.org
                        ...
              provider: <providerName>
              ...
    

    To disable NTP:

    spec:
      ...
      providerSpec:
        value:
          ...
          ntpEnabled: false
          ...
    
  9. Optional. If you require all Internet access to go through a proxy server, in bootstrap.env, add the following environment variables to bootstrap the regional cluster using proxy:

    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    • PROXY_CA_CERTIFICATE_PATH

    Example snippet:

    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
    export NO_PROXY=172.18.10.0,registry.internal.lan
    export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
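
    Optionally, verify that the proxy forwards HTTPS traffic to the Mirantis CDN used during bootstrap. This is a sketch that assumes curl is installed; for a MITM proxy, also pass --cacert "${PROXY_CA_CERTIFICATE_PATH}" so that curl trusts the proxy CA:

    # A zero exit code confirms that the proxy passes HTTPS requests through
    curl --proxy "${HTTPS_PROXY}" -sSf -o /dev/null \
      https://binary.mirantis.com/releases/get_container_cloud.sh && echo "Proxy OK"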
    

    The following formats of variables are accepted:

    Proxy configuration data:

    • HTTP_PROXY, HTTPS_PROXY

      • http://proxy.example.com:port - for anonymous access.

      • http://user:password@proxy.example.com:port - for restricted access.

    • NO_PROXY

      Comma-separated list of IP addresses or domain names.

    • PROXY_CA_CERTIFICATE_PATH

      Optional. Absolute path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.

    Warning

    If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.

    Note

    For MOSK-based deployments, the parameter is generally available since MOSK 22.4.

    For implementation details, see Proxy and cache support.

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for an OpenStack-based cluster.

  10. If you are deploying the regional cluster on top of a baremetal-based management cluster, unset the following parameters:

    unset KAAS_BM_ENABLED KAAS_BM_FULL_PREFLIGHT KAAS_BM_PXE_IP \
          KAAS_BM_PXE_MASK KAAS_BM_PXE_BRIDGE KAAS_BM_BM_DHCP_RANGE \
          TEMPLATES_DIR
    
  11. Export the following parameters:

    export KUBECONFIG=<pathToMgmtClusterKubeconfig>
    export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
    export REGION=<NewRegionName>
    

    Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.

    Caution

    The REGION and REGIONAL_CLUSTER_NAME parameter values must contain only lowercase alphanumeric characters, hyphens, or periods.

    Note

    If the bootstrap node for the regional cluster deployment is not the same where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.

    export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
    
  12. Run the regional cluster bootstrap script:

    ./bootstrap.sh deploy_regional
    

    Note

    When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

    If the bootstrap node for the regional cluster deployment is not the same where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.
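
    As an optional sanity check, assuming kubectl is installed on the bootstrap node, you can verify that the new regional cluster API responds using the obtained kubeconfig:

    # All nodes of the regional cluster should be listed and eventually become Ready
    kubectl --kubeconfig kubeconfig-<regionalClusterName> get nodes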

    The workflow of the regional cluster bootstrap script:

    1. Prepare the bootstrap cluster for the new regional cluster.

    2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.

    3. Connect to each machine of the management cluster through SSH.

    4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.

    5. Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider.

    6. Forward the bootstrap cluster endpoint to helm-controller.

    7. Wait for all CRDs to be available and verify the objects created using these CRDs.

    8. Pivot the Cluster API stack to the regional cluster.

    9. Switch the LCM Agent from the bootstrap cluster to the regional one.

    10. Wait for the Container Cloud components to start on the regional cluster.

  13. Verify that network addresses used on your clusters do not overlap with the following default MKE network addresses for Swarm and MCR:

    • 10.0.0.0/16 is used for Swarm networks. IP addresses from this network are virtual.

    • 10.99.0.0/16 is used for MCR networks. IP addresses from this network are allocated on hosts.

    Verification of Swarm and MCR network addresses

    To verify Swarm and MCR network addresses, run on any master node:

    docker info
    

    Example of system response:

    Server:
     ...
     Swarm:
      ...
      Default Address Pool: 10.0.0.0/16
      SubnetSize: 24
      ...
     Default Address Pools:
       Base: 10.99.0.0/16, Size: 20
     ...
    

    Typically, not all Swarm and MCR addresses are in use. One Swarm Ingress network is created by default and occupies the 10.0.0.0/24 address block. Also, three MCR networks are created by default and occupy three address blocks: 10.99.0.0/20, 10.99.16.0/20, and 10.99.32.0/20.

    To verify the actual networks state and addresses in use, run:

    docker network ls
    docker network inspect <networkName>
    

Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.