Prepare metadata and deploy the management cluster

Using the example procedure below, replace the addresses and credentials in the configuration YAML files with the data from your environment. Keep everything else as is, including the file names and YAML structure.

The following table describes the overall network mapping scheme with all L2/L3 parameters for an example single 10.0.0.0/24 network. The configuration of each parameter in this table is described in the steps below.

Network mapping overview

cluster.yaml

  • SET_LB_HOST=10.0.0.90

  • SET_METALLB_ADDR_POOL=10.0.0.61-10.0.0.80

ipam-objects.yaml

  • SET_IPAM_CIDR=10.0.0.0/24

  • SET_PXE_NW_GW=10.0.0.1

  • SET_PXE_NW_DNS=8.8.8.8

  • SET_IPAM_POOL_RANGE=10.0.0.100-10.0.0.252

  • SET_LB_HOST=10.0.0.90

  • SET_METALLB_ADDR_POOL=10.0.0.61-10.0.0.80

bootstrap.env

  • KAAS_BM_PXE_IP=10.0.0.20

  • KAAS_BM_PXE_MASK=24

  • KAAS_BM_PXE_BRIDGE=br0

  • KAAS_BM_BM_DHCP_RANGE=10.0.0.30,10.0.0.49,255.255.255.0

  • BOOTSTRAP_METALLB_ADDRESS_POOL=10.0.0.61-10.0.0.80


  1. Log in to the seed node that you configured as described in Prepare the seed node.

  2. Change to your preferred work directory, for example, your home directory:

    cd $HOME
    
  3. Prepare the bootstrap script:

    1. Download and run the Container Cloud bootstrap script:

      apt install wget
      wget https://binary.mirantis.com/releases/get_container_cloud.sh
      chmod 0755 get_container_cloud.sh
      ./get_container_cloud.sh
      
    2. Change the directory to the kaas-bootstrap folder created by the script.

  4. Obtain your license file that will be required during the bootstrap:

    1. Create a user account at www.mirantis.com.

    2. Log in to your account and download the mirantis.lic license file.

    3. Save the license file as mirantis.lic under the kaas-bootstrap directory on the bootstrap node.

    4. Verify that mirantis.lic contains the exact Container Cloud license previously downloaded from www.mirantis.com by decoding the license JWT token, for example, using jwt.io.

      Example of a valid decoded Container Cloud license data with the mandatory license field:

      {
          "exp": 1652304773,
          "iat": 1636669973,
          "sub": "demo",
          "license": {
              "dev": false,
              "limits": {
                  "clusters": 10,
                  "workers_per_cluster": 10
              },
              "openstack": null
          }
      }
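
      Alternatively, you can decode the license payload locally in the console instead of using jwt.io. A minimal sketch, assuming the license file contains a single JWT and the jq utility is installed:

      # Extract the JWT payload (the second dot-separated segment) and
      # convert Base64URL characters to standard Base64
      payload=$(cut -d. -f2 mirantis.lic | tr '_-' '/+')
      # Pad the string to a multiple of 4 characters for base64 -d
      while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
      echo "${payload}" | base64 -d | jq .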
      

      Warning

      The MKE license does not apply to mirantis.lic. For details about the MKE license, see the MKE documentation.

  5. Prepare the deployment templates:

    1. Create a copy of the current templates directory for future reference.

      mkdir templates.backup
      cp -r templates/*  templates.backup/
      
    2. Update the cluster definition template in templates/bm/cluster.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_. For example, SET_METALLB_ADDR_POOL.

      Cluster template mandatory parameters

      SET_LB_HOST

        The IP address of the externally accessible API endpoint of the cluster. This address must NOT be within the SET_METALLB_ADDR_POOL range but must be within the PXE/Management network. External load balancers are not supported.

        Example value: 10.0.0.90

      SET_METALLB_ADDR_POOL

        The IP address range to be used for external load balancer addresses of the Kubernetes services with the LoadBalancer type. This range must be within the PXE/Management network. The minimum required range is 19 IP addresses.

        Example value: 10.0.0.61-10.0.0.80
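
      For example, you can substitute the placeholders with sed; a hedged sketch with the illustrative values from the table above (verify the resulting file afterwards):

      # Replace the SET_ placeholders in the cluster template with your values
      sed -i \
          -e 's/SET_LB_HOST/10.0.0.90/g' \
          -e 's/SET_METALLB_ADDR_POOL/10.0.0.61-10.0.0.80/g' \
          templates/bm/cluster.yaml.template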

    3. Configure the NTP server.

      Before Container Cloud 2.23.0, this step is optional if the servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where your cluster is being provisioned (a reachability check sketch follows the snippets below). Otherwise, configure the regional NTP server parameters as described below.

      Since Container Cloud 2.23.0, you can optionally disable NTP, which is enabled by default. Disabling it stops Container Cloud from managing the chrony configuration, so that you can use your own system for chrony management. Otherwise, configure the regional NTP server parameters as described below.

      NTP configuration

      Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

      In templates/bm/cluster.yaml.template, add the ntp:servers section with the list of required server names:

      spec:
        ...
        providerSpec:
          value:
            ntpEnabled: true
            ...
            kaas:
              ...
              regional:
              - helmReleases:
                - name: <providerName>-provider
                  values:
                    config:
                      lcm:
                        ...
                        ntp:
                          servers:
                          - 0.pool.ntp.org
                          ...
                provider: <providerName>
              ...
      

      To disable NTP:

      spec:
        ...
        providerSpec:
          value:
            ...
            ntpEnabled: false
            ...
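
      If you rely on the default Ubuntu NTP pool, you can first verify that it is reachable from the node. A minimal sketch, assuming the ntpdate utility can be installed on the node:

      # Query (without setting) the time from the default Ubuntu NTP pool
      apt install ntpdate
      ntpdate -q 0.ubuntu.pool.ntp.org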
      
    4. Inspect the default bare metal host profile definition in templates/bm/baremetalhostprofiles.yaml.template. If your hardware configuration differs from the reference, adjust the default profile to match. For details, see Customize the default bare metal host profile.

      Warning

      All data will be wiped during cluster deployment on devices defined directly or indirectly in the fileSystems list of BareMetalHostProfile. For example:

      • A raw device partition with a file system on it

      • A device partition in a volume group with a logical volume that has a file system on it

      • An mdadm RAID device with a file system on it

      • An LVM RAID device with a file system on it

      The wipe field is always considered true for these devices. The false value is ignored.

      Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.
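
      When adjusting the profile, you can first review the disk layout of a reference host to see which devices carry file systems; for example:

      # Show block devices, partitions, and detected file systems
      lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT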

    5. Update the bare metal hosts definition template in templates/bm/baremetalhosts.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_.

      Bare metal hosts template mandatory parameters

      SET_MACHINE_0_IPMI_USERNAME

        The IPMI user name to access the BMC. [0]

        Example value: user

      SET_MACHINE_0_IPMI_PASSWORD

        The IPMI password to access the BMC. [0]

        Example value: password

      SET_MACHINE_0_MAC

        The MAC address of the first master node in the PXE network.

        Example value: ac:1f:6b:02:84:71

      SET_MACHINE_0_BMC_ADDRESS

        The IP address of the BMC endpoint for the first master node in the cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway.

        Example value: 192.168.100.11

      SET_MACHINE_1_IPMI_USERNAME

        The IPMI user name to access the BMC. [0]

        Example value: user

      SET_MACHINE_1_IPMI_PASSWORD

        The IPMI password to access the BMC. [0]

        Example value: password

      SET_MACHINE_1_MAC

        The MAC address of the second master node in the PXE network.

        Example value: ac:1f:6b:02:84:72

      SET_MACHINE_1_BMC_ADDRESS

        The IP address of the BMC endpoint for the second master node in the cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway.

        Example value: 192.168.100.12

      SET_MACHINE_2_IPMI_USERNAME

        The IPMI user name to access the BMC. [0]

        Example value: user

      SET_MACHINE_2_IPMI_PASSWORD

        The IPMI password to access the BMC. [0]

        Example value: password

      SET_MACHINE_2_MAC

        The MAC address of the third master node in the PXE network.

        Example value: ac:1f:6b:02:84:73

      SET_MACHINE_2_BMC_ADDRESS

        The IP address of the BMC endpoint for the third master node in the cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway.

        Example value: 192.168.100.13

      [0]

      • Since Container Cloud 2.21.0, a user name and password in plain text are required.

      • Before Container Cloud 2.21.0, the Base64 encoding of a user name and password is required. You can obtain the Base64-encoded user name and password using the following command in your Linux console:

        $ echo -n <username|password> | base64
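
      To sanity-check the BMC addresses and credentials before deployment, you can query the chassis power status directly. A hedged sketch with the illustrative values from the table above, assuming the ipmitool utility is installed:

      apt install ipmitool
      # Query the power status of the first master node's BMC
      ipmitool -I lanplus -H 192.168.100.11 -U user -P password chassis power status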
        
    6. Update the Subnet objects definition template in templates/bm/ipam-objects.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_. For example, SET_IPAM_POOL_RANGE.

      IP address pools template mandatory parameters

      SET_IPAM_CIDR

        The address of the PXE network in CIDR notation. The network must be at least /24 in size.

        Example value: 10.0.0.0/24

      SET_PXE_NW_GW

        The default gateway in the PXE network. Since this is the only network that the cluster uses by default, this gateway must provide access to:

        • The Internet, to download the Mirantis artifacts

        • The OOB network of the Container Cloud cluster

        Example value: 10.0.0.1

      SET_PXE_NW_DNS

        An external (non-Kubernetes) DNS server accessible from the PXE network.

        Example value: 8.8.8.8

      SET_IPAM_POOL_RANGE

        The IP address range to be allocated in the PXE/Management network to the bare metal hosts of the cluster.

        Example value: 10.0.0.100-10.0.0.252

      SET_LB_HOST [1]

        The IP address of the externally accessible API endpoint of the cluster. This address must NOT be within the SET_METALLB_ADDR_POOL range but must be within the PXE/Management network. External load balancers are not supported.

        Example value: 10.0.0.90

      SET_METALLB_ADDR_POOL [1]

        The IP address range to be used for external load balancer addresses of the Kubernetes services with the LoadBalancer type. This range must be within the PXE/Management network. The minimum required range is 19 IP addresses.

        Example value: 10.0.0.61-10.0.0.80

      [1] Use the same value that you used for this parameter in the cluster.yaml.template file (see above).
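
      After filling in the templates, you can verify that no placeholders were missed; a simple check:

      # List any unresolved SET_ placeholders left in the bare metal templates
      grep -rn 'SET_' templates/bm/ || echo "All placeholders are set"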

    7. Optional. To configure separate PXE and management networks instead of a single PXE/management network, proceed to Separate PXE and management networks.

    8. Optional. To connect the cluster hosts to the PXE/Management network using bond interfaces, proceed to Configure NIC bonding.

    9. If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the cluster using the proxy:

      • HTTP_PROXY

      • HTTPS_PROXY

      • NO_PROXY

      • PROXY_CA_CERTIFICATE_PATH

      Example snippet:

      export HTTP_PROXY=http://proxy.example.com:3128
      export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
      export NO_PROXY=172.18.10.0,registry.internal.lan
      export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
      

      The following variable formats are accepted:

      Proxy configuration data

      HTTP_PROXY, HTTPS_PROXY

        • http://proxy.example.com:port - for anonymous access

        • http://user:password@proxy.example.com:port - for restricted access

      NO_PROXY

        Comma-separated list of IP addresses or domain names.

      PROXY_CA_CERTIFICATE_PATH

        Optional. Absolute path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.

      Warning

      If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.

      Note

      For MOSK-based deployments, the parameter is generally available since MOSK 22.4.

      For implementation details, see Proxy and cache support.

      For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for a baremetal-based cluster.
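
      To confirm that the proxy settings work before starting the bootstrap, you can fetch a Mirantis endpoint through the proxy. A minimal sketch with the illustrative proxy address from the example snippet above:

      # Request only the response headers through the proxy
      curl -x http://proxy.example.com:3128 -sSI https://binary.mirantis.com | head -n1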

    10. Verify that the kaas-bootstrap directory contains the following files:

      # tree ~/kaas-bootstrap
      ~/kaas-bootstrap/
      ...
      ├── bootstrap.sh
      ├── kaas
      ├── mirantis.lic
      ├── releases
      ...
      ├── templates
      │   └── bm
      │       ├── baremetalhostprofiles.yaml.template
      │       ├── baremetalhosts.yaml.template
      │       ├── cluster.yaml.template
      │       ├── ipam-objects.yaml.template
      │       └── machines.yaml.template
      ...
      └── templates.backup
          ...

      Note

      Before Container Cloud 2.20.0, kaas-bootstrap/templates/bm also must contain kaascephcluster.yaml.template.

    11. Export all required parameters using the table below.

      export KAAS_BM_ENABLED="true"
      #
      export KAAS_BM_PXE_IP="10.0.0.20"
      export KAAS_BM_PXE_MASK="24"
      export KAAS_BM_PXE_BRIDGE="br0"
      #
      export KAAS_BM_BM_DHCP_RANGE="10.0.0.30,10.0.0.49,255.255.255.0"
      export BOOTSTRAP_METALLB_ADDRESS_POOL="10.0.0.61-10.0.0.80"
      #
      unset KAAS_BM_FULL_PREFLIGHT
      
      Bare metal prerequisites data

      KAAS_BM_PXE_IP

        The provisioning IP address. This address will be assigned to the interface of the seed node defined by the KAAS_BM_PXE_BRIDGE parameter (see below). The PXE service of the bootstrap cluster uses this address to network-boot the bare metal hosts for the cluster.

        Example value: 10.0.0.20

      KAAS_BM_PXE_MASK

        The CIDR prefix for the PXE network. It is used together with the KAAS_BM_PXE_IP address when assigning it to the network interface.

        Example value: 24

      KAAS_BM_PXE_BRIDGE

        The PXE network bridge name. The name must match the name of the bridge created on the seed node during the Prepare the seed node stage.

        Example value: br0

      KAAS_BM_BM_DHCP_RANGE

        The start_ip and end_ip addresses, followed by the network mask, must be within the PXE network. This range is used by dnsmasq to provide IP addresses to nodes during provisioning.

        Example value: 10.0.0.30,10.0.0.49,255.255.255.0

      BOOTSTRAP_METALLB_ADDRESS_POOL

        The pool of IP addresses to be used by services in the bootstrap cluster. Can be the same as the SET_METALLB_ADDR_POOL range for the cluster, or a different range.

        Example value: 10.0.0.61-10.0.0.80
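
      To confirm that all variables are set in the current shell before running the preflight check:

      # List the exported bare metal bootstrap variables
      env | grep -E '^(KAAS_BM|BOOTSTRAP_METALLB)'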

    12. Run the verification preflight script to validate the deployment templates configuration:

      ./bootstrap.sh preflight
      

      The command outputs a human-readable report with the verification details. The report includes the list of verified bare metal nodes and their Chassis Power status. This status is based on the deployment templates configuration used during the verification.

      Caution

      If the report contains information about missing dependencies or incorrect configuration, fix the issues before proceeding to the next step.

  6. Optional. Configure an external identity provider for IAM.

  7. Optional. Enable infinite timeout for all bootstrap stages by exporting the following environment variable or adding it to bootstrap.env:

    export KAAS_BOOTSTRAP_INFINITE_TIMEOUT=true
    

    Infinite timeout prevents bootstrap failures due to timeout. This option is useful in the following cases:

    • The network is too slow for downloading the artifacts

    • The infrastructure configuration does not allow fast booting

    • Inspection of a bare metal node takes longer than usual because more than two HDD/SATA disks are attached to the machine

  8. Optional. Available since Container Cloud 2.23.0. Customize the cluster and region name by exporting the following environment variables or adding them to bootstrap.env:

    export REGION=<customRegionName>
    export CLUSTER_NAME=<customClusterName>
    

    By default, the system uses region-one for the region name and kaas-mgmt for the management cluster name.

  9. Run the bootstrap script:

    ./bootstrap.sh all
    
    • In case of deployment issues, refer to Troubleshooting and inspect logs.

    • If the script fails for an unknown reason:

      1. Run the cleanup script:

        ./bootstrap.sh cleanup
        
      2. Rerun the bootstrap script.

    Warning

    During the bootstrap process, do not manually restart or power off any of the bare metal hosts.

  10. When the bootstrap is complete, collect and save the following management cluster details in a secure location:

    • The kubeconfig file located in the same directory as the bootstrap script. This file contains the admin credentials for the management cluster. A quick verification sketch follows this step.

    • The private SSH key ssh_key, located in the same directory as the bootstrap script, for access to the management cluster nodes.

      Note

      If the initial version of your Container Cloud management cluster was earlier than 2.6.0, ssh_key is named openstack_tmp and is located at ~/.ssh/.

    • The URL for the Container Cloud web UI.

      To create users with permissions required for accessing the Container Cloud web UI, see Create initial users after a management cluster bootstrap.

    • The StackLight endpoints. For details, see Access StackLight web UIs.

    • The Keycloak URL that the system outputs when the bootstrap completes. The admin password for Keycloak is located in kaas-bootstrap/passwords.yml along with other IAM passwords.

    Note

    The Container Cloud web UI and StackLight endpoints are available through Transport Layer Security (TLS) and communicate with Keycloak to authenticate users. Keycloak is exposed using HTTPS and self-signed TLS certificates that are not trusted by web browsers.

    To use your own TLS certificates for Keycloak, refer to Configure TLS certificates for cluster applications.

    Note

    When the bootstrap is complete, the bootstrap cluster resources are freed up.
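
    As a final sanity check, you can verify access using the collected artifacts. A minimal sketch, assuming the default kaas-bootstrap paths; the SSH user name and node address are environment-specific placeholders:

      # Verify API access with the collected admin kubeconfig
      export KUBECONFIG=~/kaas-bootstrap/kubeconfig
      kubectl get nodes

      # Verify SSH access to a management cluster node (placeholders)
      ssh -i ~/kaas-bootstrap/ssh_key <user>@<node-address>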

  11. Optional. If you plan to use multiple L2 segments for provisioning of managed cluster nodes, consider the requirements specified in Configure multiple DHCP ranges using Subnet resources.

  12. Optional. Deploy an additional regional cluster of a different provider type as described in Deploy an additional regional cluster (optional).