Prepare metadata and deploy the management cluster

Using the example procedure below, replace the addresses and credentials in the configuration YAML files with the data from your environment. Keep everything else as is, including the file names and YAML structure.

The following table describes the overall network mapping scheme with all L2 parameters, using a single 10.0.0.0/24 network as an example. The configuration of each parameter in this table is described in the steps below.

Network mapping overview

cluster.yaml
  • SET_LB_HOST=10.0.0.90
  • SET_METALLB_ADDR_POOL=10.0.0.61-10.0.0.80

ipam-objects.yaml
  • SET_IPAM_CIDR=10.0.0.0/24
  • SET_PXE_NW_GW=10.0.0.1
  • SET_PXE_NW_DNS=8.8.8.8
  • SET_IPAM_POOL_RANGE=10.0.0.100-10.0.0.252
  • SET_LB_HOST=10.0.0.90
  • SET_METALLB_ADDR_POOL=10.0.0.61-10.0.0.80

bootstrap.sh
  • KAAS_BM_PXE_IP=10.0.0.20
  • KAAS_BM_PXE_MASK=24
  • KAAS_BM_PXE_BRIDGE=br0
  • KAAS_BM_BM_DHCP_RANGE=10.0.0.30,10.0.0.49


  1. Log in to the seed node that you configured as described in Prepare the seed node.

  2. Change to your preferred work directory, for example, your home directory:

    cd $HOME
    
  3. Prepare the bootstrap script:

    1. Download and run the Container Cloud bootstrap script:

      wget https://binary.mirantis.com/releases/get_container_cloud.sh
      chmod 0755 get_container_cloud.sh
      ./get_container_cloud.sh
      
    2. Change the directory to the kaas-bootstrap folder created by the script.

  4. Obtain your license file, which is required during the bootstrap:

    1. Create a user account at www.mirantis.com.

    2. Log in to your account and download the mirantis.lic license file.

    3. Save the license file as mirantis.lic under the kaas-bootstrap directory on the bootstrap node.

  5. Create a copy of the current templates directory for future reference:

    mkdir templates.backup
    cp -r templates/*  templates.backup/
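    You can later compare your edited templates against this copy, for example:

    diff -r templates templates.backup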
    
  6. Update the cluster definition template in templates/bm/cluster.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_, for example, SET_METALLB_ADDR_POOL. A substitution sketch follows the table.

    Cluster template mandatory parameters

    SET_LB_HOST
      The IP address of the externally accessible API endpoint of the management cluster. This address must NOT be within the SET_METALLB_ADDR_POOL range but must be from the PXE network. External load balancers are not supported.
      Example: 10.0.0.90

    SET_METALLB_ADDR_POOL
      The range of IP addresses to be used by external load balancers for the Kubernetes services of the LoadBalancer type. This range must be within the PXE network. The minimum required range is 19 IP addresses.
      Example: 10.0.0.61-10.0.0.80
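    For example, assuming the sample values above, you can substitute the placeholders with sed. This is a sketch; replace the addresses with the values from your environment:

    sed -i -e 's/SET_LB_HOST/10.0.0.90/g' \
           -e 's/SET_METALLB_ADDR_POOL/10.0.0.61-10.0.0.80/g' \
           templates/bm/cluster.yaml.template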

  7. Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

    In templates/bm/cluster.yaml.template, add the ntp:servers section with the list of required server names:

    spec:
      ...
      providerSpec:
        value:
          kaas:
          ...
            regional:
              - helmReleases:
                - name: baremetal-provider
                  values:
                    config:
                      lcm:
                        ...
                        ntp:
                          servers:
                          - 0.pool.ntp.org
                          ...
                provider: baremetal
                ...
    
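    To verify that the configured NTP servers are reachable from the PXE network, you can query one of them from the seed node. A sketch, assuming the ntpdate utility is installed:

    ntpdate -q 0.pool.ntp.org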
  8. Inspect the default bare metal host profile definition in templates/bm/baremetalhostprofiles.yaml.template. If your hardware configuration differs from the reference, adjust the default profile to match. For details, see Customize the default bare metal host profile.

  9. Update the bare metal hosts definition template in templates/bm/baremetalhosts.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_.

    Bare metal hosts template mandatory parameters

    SET_MACHINE_0_IPMI_USERNAME
      The IPMI user name to access the BMC, in base64 encoding. See the note below the table.
      Example: dXNlcg== (base64-encoded user)

    SET_MACHINE_0_IPMI_PASSWORD
      The IPMI password to access the BMC, in base64 encoding. See the note below the table.
      Example: cGFzc3dvcmQ= (base64-encoded password)

    SET_MACHINE_0_MAC
      The MAC address of the first management master node in the PXE network.
      Example: ac:1f:6b:02:84:71

    SET_MACHINE_0_BMC_ADDRESS
      The IP address of the BMC endpoint for the first master node in the management cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway.
      Example: 192.168.100.11

    SET_MACHINE_1_IPMI_USERNAME
      The IPMI user name to access the BMC, in base64 encoding. See the note below the table.
      Example: dXNlcg== (base64-encoded user)

    SET_MACHINE_1_IPMI_PASSWORD
      The IPMI password to access the BMC, in base64 encoding. See the note below the table.
      Example: cGFzc3dvcmQ= (base64-encoded password)

    SET_MACHINE_1_MAC
      The MAC address of the second management master node in the PXE network.
      Example: ac:1f:6b:02:84:72

    SET_MACHINE_1_BMC_ADDRESS
      The IP address of the BMC endpoint for the second master node in the management cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway.
      Example: 192.168.100.12

    SET_MACHINE_2_IPMI_USERNAME
      The IPMI user name to access the BMC, in base64 encoding. See the note below the table.
      Example: dXNlcg== (base64-encoded user)

    SET_MACHINE_2_IPMI_PASSWORD
      The IPMI password to access the BMC, in base64 encoding. See the note below the table.
      Example: cGFzc3dvcmQ= (base64-encoded password)

    SET_MACHINE_2_MAC
      The MAC address of the third management master node in the PXE network.
      Example: ac:1f:6b:02:84:73

    SET_MACHINE_2_BMC_ADDRESS
      The IP address of the BMC endpoint for the third master node in the management cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway.
      Example: 192.168.100.13

    Note

    You can obtain the base64-encoded user name and password using the following command in your Linux console:

    $ echo -n <username|password> | base64
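    For example, to encode the sample value user and verify it by decoding it back (a sketch; substitute your real credentials):

    $ echo -n user | base64
    dXNlcg==
    $ echo -n dXNlcg== | base64 -d
    user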
    
  10. Update the IP address pools definition template in templates/bm/ipam-objects.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_, for example, SET_IPAM_POOL_RANGE. A substitution sketch follows the table.

    IP address pools template mandatory parameters

    SET_IPAM_CIDR
      The address of the PXE network in CIDR notation. The network must be at least /24 in size.
      Example: 10.0.0.0/24

    SET_PXE_NW_GW
      The default gateway in the PXE network. Since this is the only network that Container Cloud will use, this gateway must provide access to:
      • The Internet to download the Mirantis artifacts
      • The OOB network of the Container Cloud cluster
      Example: 10.0.0.1

    SET_PXE_NW_DNS
      An external (non-Kubernetes) DNS server accessible from the PXE network. This server will be used by the bare metal hosts in all Container Cloud clusters.
      Example: 8.8.8.8

    SET_IPAM_POOL_RANGE
      This pool range includes addresses that will be allocated to bare metal hosts in all Container Cloud clusters. The size of this range limits the number of hosts that can be deployed by this instance of Container Cloud.
      Example: 10.0.0.100-10.0.0.252

    SET_LB_HOST
      The IP address of the externally accessible API endpoint of the management cluster. This address must NOT be within the SET_METALLB_ADDR_POOL range but must be from the PXE network. External load balancers are not supported. Use the same value that you set for this parameter in cluster.yaml.template.
      Example: 10.0.0.90

    SET_METALLB_ADDR_POOL
      The range of IP addresses to be used by external load balancers for the Kubernetes services of the LoadBalancer type. This range must be within the PXE network. The minimum required range is 19 IP addresses. Use the same value that you set for this parameter in cluster.yaml.template.
      Example: 10.0.0.61-10.0.0.80
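    As with cluster.yaml.template, you can substitute the placeholders with sed, assuming the sample values above. Note the alternative delimiter for the value that contains slashes:

    sed -i -e 's|SET_IPAM_CIDR|10.0.0.0/24|g' \
           -e 's/SET_PXE_NW_GW/10.0.0.1/g' \
           -e 's/SET_PXE_NW_DNS/8.8.8.8/g' \
           -e 's/SET_IPAM_POOL_RANGE/10.0.0.100-10.0.0.252/g' \
           -e 's/SET_LB_HOST/10.0.0.90/g' \
           -e 's/SET_METALLB_ADDR_POOL/10.0.0.61-10.0.0.80/g' \
           templates/bm/ipam-objects.yaml.template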

  11. Optional. To connect the management cluster hosts to the PXE/management network using bond interfaces, proceed to Configure NIC bonding.

  12. If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the management and regional clusters using the proxy:

    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    Example snippet:

    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
    export NO_PROXY=172.18.10.0,registry.internal.lan
    

    The following variable formats are accepted:

    Proxy configuration data

    HTTP_PROXY, HTTPS_PROXY
      • http://proxy.example.com:port - for anonymous access
      • http://user:password@proxy.example.com:port - for restricted access

    NO_PROXY
      Comma-separated list of IP addresses or domain names

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for a baremetal-based cluster.
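    To confirm that the proxy variables are set correctly, you can fetch a Mirantis artifact through the proxy, assuming curl is available on the seed node:

    curl -x "$HTTPS_PROXY" -sSfI https://binary.mirantis.com/releases/get_container_cloud.sh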

  13. Optional. Configure an external identity provider for IAM.

  14. Optional. If you are going to use your own TLS certificates for Keycloak, set DISABLE_OIDC=true in bootstrap.env.

  15. Configure the Ceph cluster:

    1. Optional. Technology Preview. Configure the Ceph controller to manage Ceph node resources. In templates/bm/cluster.yaml.template, in the ceph-controller section of spec.providerSpec.value.helmReleases, specify the hyperconverge parameter with the required resource requests, limits, or tolerations:

      spec:
         providerSpec:
           value:
             helmReleases:
             - name: ceph-controller
               values:
                 hyperconverge:
                   tolerations: <ceph tolerations map>
                   resources: <ceph resource management map>
      

      For the parameters description, see Enable Ceph tolerations and resources management.

    2. In templates/bm/kaascephcluster.yaml.template:

      • Configure dedicated networks clusterNet and publicNet for Ceph components.

      • Set up the disk configuration according to your hardware node specification. Verify that the storageDevices section contains a valid list of HDD, SSD, or NVMe device names and that each device is empty, that is, has no file system on it. To enable all LCM features of the Ceph controller, set manageOsds to true.

        Caution

        The manageOsds parameter enables irreversible operations such as Ceph OSD removal. Therefore, use this feature with caution.

      • If required, configure other parameters as described in Ceph advanced configuration.

      Configuration example:

      manageOsds: true
      ...
      # This part of KaaSCephCluster should contain valid networks definition
      network:
        clusterNet: 10.10.10.0/24
        publicNet: 10.10.11.0/24
      ...
      nodes:
        master-0:
        ...
        <node_name>:
          ...
          # This part of KaaSCephCluster should contain valid device names
          storageDevices:
          - name: sdb
            config:
              deviceClass: hdd
          # Each node's storageDevices list can contain several devices
          - name: sdc
            config:
              deviceClass: hdd
          # All devices used by Ceph must also be set to ``wipe`` in
          # ``baremetalhosts.yaml.template``
          - name: sdd
            config:
              deviceClass: hdd
          # Do not include the first devices here (such as vda or sda)
          # because they are allocated for the operating system
      
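      To confirm that the devices listed in storageDevices are empty on a host, you can inspect its block devices, for example with lsblk; an empty FSTYPE column indicates that no file system is present:

      lsblk -o NAME,SIZE,TYPE,FSTYPE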
    3. In machines.yaml.template, verify that the metadata:name structure matches the machine names in the spec:nodes structure of kaascephcluster.yaml.template.

  16. Verify that the kaas-bootstrap directory contains the following files:

    # tree  ~/kaas-bootstrap
      ~/kaas-bootstrap/
      ....
      ├── bootstrap.sh
      ├── kaas
      ├── mirantis.lic
      ├── releases
      ...
      ├── templates
      ....
      │   ├── bm
      │   │   ├── baremetalhostprofiles.yaml.template
      │   │   ├── baremetalhosts.yaml.template
      │   │   ├── cluster.yaml.template
      │   │   ├── ipam-objects.yaml.template
      │   │   ├── kaascephcluster.yaml.template
      │   │   └── machines.yaml.template
      ....
      ├── templates.backup
          ....
    
  17. Export all required parameters, using the table below for reference:

    export KAAS_BM_ENABLED="true"
    #
    export KAAS_BM_PXE_IP="10.0.0.20"
    export KAAS_BM_PXE_MASK="24"
    export KAAS_BM_PXE_BRIDGE="br0"
    #
    export KAAS_BM_BM_DHCP_RANGE="10.0.0.30,10.0.0.49"
    export BOOTSTRAP_METALLB_ADDRESS_POOL="10.0.0.61-10.0.0.80"
    #
    unset KAAS_BM_FULL_PREFLIGHT
    
    Bare metal prerequisites data

    KAAS_BM_PXE_IP
      The provisioning IP address. This address will be assigned to the interface of the seed node defined by the KAAS_BM_PXE_BRIDGE parameter (see below). The PXE service of the bootstrap cluster will use this address to network boot the bare metal hosts for the management cluster.
      Example: 10.0.0.20

    KAAS_BM_PXE_MASK
      The CIDR prefix for the PXE network. It will be used with all of the addresses below when assigning them to interfaces.
      Example: 24

    KAAS_BM_PXE_BRIDGE
      The PXE network bridge name. The name must match the name of the bridge created on the seed node during the Prepare the seed node stage.
      Example: br0

    KAAS_BM_BM_DHCP_RANGE
      The DHCP range in the start_ip,end_ip format. Both addresses must be within the PXE network. This range will be used by dnsmasq to provide IP addresses for nodes during provisioning.
      Example: 10.0.0.30,10.0.0.49

    BOOTSTRAP_METALLB_ADDRESS_POOL
      The pool of IP addresses that will be used by services in the bootstrap cluster. Can be the same as the SET_METALLB_ADDR_POOL range for the management cluster, or a different range.
      Example: 10.0.0.61-10.0.0.80

    KEYCLOAK_FLOATING_IP (optional)
      The spec.loadBalancerIP address for the Keycloak service. Use an address from the top of the SET_METALLB_ADDR_POOL range. Must not conflict with the other *_FLOATING_IP parameters.
      Example: 10.0.0.70

    IAM_FLOATING_IP (optional)
      The spec.loadBalancerIP address for the IAM service. Use an address from the top of the SET_METALLB_ADDR_POOL range. Must not conflict with the other *_FLOATING_IP parameters.
      Example: 10.0.0.71

    PROXY_FLOATING_IP (optional)
      The spec.loadBalancerIP address for the Squid service. Use an address from the top of the SET_METALLB_ADDR_POOL range. Must not conflict with the other *_FLOATING_IP parameters.
      Example: 10.0.0.72
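    If you use the optional *_FLOATING_IP parameters, export them alongside the variables above. A sketch with the example values from the table:

    export KEYCLOAK_FLOATING_IP="10.0.0.70"
    export IAM_FLOATING_IP="10.0.0.71"
    export PROXY_FLOATING_IP="10.0.0.72"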

  18. Run the verification preflight script to validate the deployment templates configuration:

    ./bootstrap.sh preflight
    

    The command outputs a human-readable report with the verification details. The report includes the list of verified bare metal nodes and their Chassis Power status. This status is based on the deployment templates configuration used during the verification.
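    If the reported power status of a node looks unexpected, you can query its BMC directly from the seed node. A sketch, assuming ipmitool is installed and using the example BMC address with the plain-text (not base64-encoded) IPMI credentials from the tables above:

    ipmitool -I lanplus -H 192.168.100.11 -U user -P password chassis power status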

    Caution

    If the report contains information about missing dependencies or incorrect configuration, fix the issues before proceeding to the next step.

  19. Run the bootstrap script:

    ./bootstrap.sh all
    

    In case of deployment issues, refer to Troubleshooting. If the script fails for an unknown reason:

    1. Run the cleanup script:

      ./bootstrap.sh cleanup
      
    2. Rerun the bootstrap script.

    Note

    If the bootstrap fails on the Connecting to bootstrap cluster step with the unable to initialize Tiller in bootstrap cluster: failed to establish connection with tiller error, refer to the known issue 16873 to identify the possible root cause and apply the workaround, if applicable.

    Warning

    During the bootstrap process, do not manually restart or power off any of the bare metal hosts.

  20. When the bootstrap is complete, collect and save the following management cluster details in a secure location:

    • The kubeconfig file located in the same directory as the bootstrap script. This file contains the admin credentials for the management cluster. See the verification sketch after this list.

    • The private SSH key ssh_key, located in the same directory as the bootstrap script, for access to the management cluster nodes.

      Note

      If the initial version of your Container Cloud management cluster was earlier than 2.6.0, ssh_key is named openstack_tmp and is located at ~/.ssh/.

    • The URL for the Container Cloud web UI.

      To create users with permissions required for accessing the Container Cloud web UI, see Create initial users after a management cluster bootstrap.

    • The StackLight endpoints. For details, see Access StackLight web UIs.

    • The Keycloak URL that the system outputs when the bootstrap completes. The admin password for Keycloak is located in kaas-bootstrap/passwords.yml along with other IAM passwords.
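    For example, you can verify the collected kubeconfig and SSH key as follows. This is a sketch; replace <user> and <node-ip> with the SSH user name and the address of a management cluster node in your environment:

    kubectl --kubeconfig kubeconfig get nodes
    ssh -i ssh_key <user>@<node-ip>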

    Note

    The Container Cloud web UI and StackLight endpoints are available through Transport Layer Security (TLS) and communicate with Keycloak to authenticate users. Keycloak is exposed using HTTPS and self-signed TLS certificates that are not trusted by web browsers.

    To use your own TLS certificates for Keycloak, refer to Configure TLS certificates for management cluster applications.

    Note

    When the bootstrap is complete, the bootstrap cluster resources are freed up.