Bootstrap a management cluster

Caution

Deployment based on Equinix Metal with private networking is available as Technology Preview for testing purposes only and is not intended for production use.

After you complete the prerequisite steps described in Prerequisites, proceed with bootstrapping your Mirantis Container Cloud management cluster based on the Equinix Metal provider with private networking.

To bootstrap an Equinix Metal based management cluster with private networking:

  1. Log in to the bootstrap node running Ubuntu 20.04 that is configured as described in Prerequisites.

  2. Prepare the bootstrap script:

    1. Download and run the Container Cloud bootstrap script:

      apt install wget
      wget https://binary.mirantis.com/releases/get_container_cloud.sh
      chmod 0755 get_container_cloud.sh
      ./get_container_cloud.sh
      
    2. Change the directory to the kaas-bootstrap folder created by the script.

  3. Obtain your license file that will be required during the bootstrap:

    1. Create a user account at www.mirantis.com.

    2. Log in to your account and download the mirantis.lic license file.

    3. Save the license file as mirantis.lic under the kaas-bootstrap directory on the bootstrap node.

    4. Verify that mirantis.lic contains the exact Container Cloud license previously downloaded from www.mirantis.com by decoding the license JWT token, for example, using jwt.io.

      Example of a valid decoded Container Cloud license data with the mandatory license field:

      {
          "exp": 1652304773,
          "iat": 1636669973,
          "sub": "demo",
          "license": {
              "dev": false,
              "limits": {
                  "clusters": 10,
                  "workers_per_cluster": 10
              },
              "openstack": null
          }
      }
      

      Warning

      The MKE license does not apply to mirantis.lic. For details about the MKE license, see the MKE documentation.
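      As an alternative to jwt.io, you can decode the license payload locally. The helper below is an illustrative sketch, not part of the Container Cloud tooling; it assumes mirantis.lic contains a standard three-part JWT:

      ```shell
      # Sketch: decode the second (payload) segment of a JWT without external tools.
      decode_jwt_payload() {
          # Extract the payload segment and convert base64url to plain base64
          payload=$(cut -d '.' -f 2 "$1" | tr '_-' '/+')
          # Restore the padding that base64url strips
          case $(( ${#payload} % 4 )) in
              2) payload="${payload}==" ;;
              3) payload="${payload}=" ;;
          esac
          printf '%s' "$payload" | base64 -d
      }
      ```

      Run decode_jwt_payload mirantis.lic and verify that the decoded JSON contains the license field.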

  4. Using the Equinix Metal console, obtain the project ID and the user-level API Key of the Equinix Metal project to be used for the Container Cloud deployment:

    1. Log in to the Equinix Metal console.

    2. Select the project that you want to use for the Container Cloud deployment.

    3. In Project Settings > General, capture your Project ID.

    4. In Profile Settings > Personal API Keys, capture the existing user-level API Key or create a new one:

      1. In Profile Settings > Personal API Keys, click Add New Key.

      2. Fill in the Description and select the Read/Write permissions.

      3. Click Add Key.

  5. Prepare the Equinix Metal configuration:

    1. Change the directory to kaas-bootstrap.

    2. In templates/equinixmetalv2/equinix-config.yaml.template, modify spec:projectID and spec:apiToken:value using the values obtained in the previous steps. For example:

      spec:
        projectID: g98sd6f8-dc7s-8273-v8s7-d9v7395nd91
        apiToken:
          value: Bi3m9c7qjYBD3UgsnSCSsqs2bYkbK
      
    3. In templates/equinixmetalv2/cluster.yaml.template:

      1. Modify the default configuration of the Equinix Metal facility depending on the previously prepared capacity settings as described in Prerequisites:

        providerSpec:
          value:
            # ...
            facility: am6
        
      2. Add projectSSHKeys, the list of Equinix Metal project SSH key names to attach to cluster machines. These keys are required for access to the Equinix Metal out-of-band Serial Over SSH (SOS) console to debug provisioning failures. Mirantis recommends adding at least one project SSH key per cluster.

        Example of the project SSH keys configuration:

        providerSpec:
          value:
            # ...
            projectSSHKeys:
            - <projectSSHKeyName>
        

        To create an SSH key in an Equinix Metal project:

        1. Log in to the Equinix Metal console.

        2. Select the project that you want to use for the Container Cloud deployment.

        3. In the Project Settings tab, select Project SSH Keys and click Add New Key.

        4. Enter the Key Name and Public Key values and click Add.

      3. Modify network parameters as required by your infrastructure:

        providerSpec:
          value:
            # ...
            network:
              vlanId: SET_EQUINIX_VLAN_ID
              loadBalancerHost: SET_LB_HOST
              metallbRanges:
                - SET_EQUINIX_METALLB_RANGES
              cidr: SET_EQUINIX_NETWORK_CIDR
              gateway: SET_EQUINIX_NETWORK_GATEWAY
              dhcpRanges:
                - SET_EQUINIX_NETWORK_DHCP_RANGES
              includeRanges:
                - SET_EQUINIX_CIDR_INCLUDE_RANGES
              excludeRanges:
                - SET_EQUINIX_CIDR_EXCLUDE_RANGES
              nameservers:
                - SET_EQUINIX_NETWORK_NAMESERVERS
        

        vlanId
          ID of the VLAN created in the corresponding Equinix Metal Metro that the seed node and cluster nodes should be attached to.

        loadBalancerHost
          IP address to use for the MKE and Kubernetes API endpoints of the cluster.

        metallbRanges
          List of IP ranges in the 192.168.0.129-192.168.0.200 format to use for Kubernetes LoadBalancer services. For example, on a management cluster, these services include the Container Cloud web UI and Keycloak. The list should include at least 12 addresses for a management or regional cluster and 5 addresses for a managed cluster.

        cidr
          Network address in CIDR notation. For example, 192.168.0.0/24.

        gateway
          IP address of a gateway attached to this VLAN that provides the necessary external connectivity.

        dhcpRanges
          List of IP ranges in the 192.168.0.10-192.168.0.50 format. IP addresses from these ranges are allocated to nodes that boot over DHCP during provisioning. Should include at least one address for each machine in the cluster.

        includeRanges
          List of IP ranges in the 192.168.0.51-192.168.0.128 format. IP addresses from these ranges are allocated as permanent addresses of machines in this cluster. Should include at least one address for each machine in the cluster.

        excludeRanges
          Optional. List of IP ranges in the 192.168.0.60-192.168.0.65 format. IP addresses from these ranges are not allocated as permanent addresses of machines in this cluster.

        nameservers
          List of IP addresses of DNS servers to configure on machines. These servers must be accessible through the gateway from the provided VLAN. Required unless a proxy server is used. Since Container Cloud 2.23.0, if you deploy an Equinix Metal regional cluster on top of a public management cluster, such as an AWS or Azure-based cluster, the nameservers parameter is mandatory and must be set to a public DNS server address, for example, 8.8.8.8.
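        For example, a network section with every placeholder filled in. All values are illustrative and reuse the example ranges from the descriptions above; vlanId, loadBalancerHost, and gateway are assumptions that must be replaced with values matching your infrastructure:

        ```yaml
        providerSpec:
          value:
            # ...
            network:
              vlanId: 1000                      # assumed VLAN ID
              loadBalancerHost: 192.168.0.90    # assumed; must be within cidr
              metallbRanges:
                - 192.168.0.129-192.168.0.200
              cidr: 192.168.0.0/24
              gateway: 192.168.0.1              # assumed gateway address
              dhcpRanges:
                - 192.168.0.10-192.168.0.50
              includeRanges:
                - 192.168.0.51-192.168.0.128
              excludeRanges:
                - 192.168.0.60-192.168.0.65
              nameservers:
                - 8.8.8.8
        ```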

    4. Add the following parameters to the bootstrap.env file:

      KAAS_BM_PXE_BRIDGE
        Name of the bridge that will be used to provide PXE services to provision machines during bootstrap.

      KAAS_BM_PXE_IP
        IP address that will be used for PXE services and assigned to the KAAS_BM_PXE_BRIDGE bridge. Must be within the network defined by the cidr parameter.

      KAAS_BM_PXE_MASK
        Length of the network mask for KAAS_BM_PXE_IP, in bits. Must match the CIDR suffix of the cidr parameter.

      BOOTSTRAP_METALLB_ADDRESS_POOL
        IP range in the 192.168.0.129-192.168.0.200 format that will be used for Kubernetes LoadBalancer services in the bootstrap cluster.

      Example of the PXE parameters in bootstrap.env:

      KAAS_BM_PXE_BRIDGE=br0
      KAAS_BM_PXE_IP=192.168.0.5
      KAAS_BM_PXE_MASK=24
      BOOTSTRAP_METALLB_ADDRESS_POOL=192.168.0.129-192.168.0.200
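
      Because KAAS_BM_PXE_IP must sit inside the cidr network and KAAS_BM_PXE_MASK must match its suffix, a quick local check can catch mismatches before bootstrap. This is an illustrative sketch, not part of the bootstrap tooling; the cidr and PXE values are examples:

      ```shell
      # Example values; replace with your cluster.yaml.template and bootstrap.env settings
      cidr="192.168.0.0/24"
      KAAS_BM_PXE_IP=192.168.0.5
      KAAS_BM_PXE_MASK=24

      ip_to_int() {
          # Convert a dotted-quad IPv4 address to a 32-bit integer
          oldifs=$IFS; IFS=.
          set -- $1
          IFS=$oldifs
          echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
      }

      net=${cidr%/*}
      bits=${cidr#*/}
      mask=$(( 0xFFFFFFFF ^ ((1 << (32 - bits)) - 1) ))

      # The mask must equal the CIDR suffix, and the PXE IP must be inside the network
      [ "$KAAS_BM_PXE_MASK" -eq "$bits" ] || echo "mask mismatch"
      [ $(( $(ip_to_int "$KAAS_BM_PXE_IP") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ] \
          || echo "KAAS_BM_PXE_IP is outside $cidr"
      ```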
      
    5. Optional. In templates/equinixmetalv2/machines.yaml.template, modify the default configuration of the Equinix Metal machine type. The minimal required type is c3.small.x86.

      Warning

      Mirantis highly recommends using the c3.small.x86 machine type for control plane machines deployed with private networking to prevent hardware issues caused by an incorrect BIOS boot order.

      providerSpec:
        value:
          # ...
          machineType: c3.small.x86
      

      Also, modify other parameters as required.

  6. Configure the NTP server.

    Before Container Cloud 2.23.0, this step is optional if the servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the VLAN where the management cluster is being provisioned. Otherwise, configure the regional NTP server parameters as described below.

    Since Container Cloud 2.23.0, NTP is enabled by default. Optionally disable it if you want to manage the chrony configuration with your own system instead of Container Cloud. Otherwise, configure the regional NTP server parameters as described below.

  7. Export the following parameter:

    export KAAS_EQUINIXMETALV2_ENABLED=true
    
  8. If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the management and regional cluster through the proxy:

    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    • PROXY_CA_CERTIFICATE_PATH

    Example snippet:

    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
    export NO_PROXY=172.18.10.0,registry.internal.lan
    export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
    

    The following formats of variables are accepted:

    Proxy configuration data:

    HTTP_PROXY, HTTPS_PROXY
      • http://proxy.example.com:port - for anonymous access

      • http://user:password@proxy.example.com:port - for restricted access

    NO_PROXY
      Comma-separated list of IP addresses or domain names.

    PROXY_CA_CERTIFICATE_PATH
      Optional. Absolute path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.

    Warning

    If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.

    Note

    • This parameter is generally available for the OpenStack, bare metal, Equinix Metal with private networking, AWS, and vSphere providers.

    • For MOSK-based deployments, the parameter is generally available since MOSK 22.4.

    • For Azure and Equinix Metal with public networking, the feature is not supported.

    For implementation details, see Proxy and cache support.

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for an Equinix Metal based cluster.

  9. Optional. Configure external identity provider for IAM.

  10. Re-verify that the selected Equinix Metal facility for the management cluster bootstrap is still available and has enough capacity:

    metal capacity check -f $EQUINIX_FACILITY -P $EQUINIX_MACHINE_TYPE -q $MACHINES_AMOUNT
    

    In the system response, if the value in the AVAILABILITY section has changed from true to false, find an available facility and update the previously configured facility field in cluster.yaml.template.

    For details about the verification procedure, see Verify the capacity of the Equinix Metal facility.
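
    The command above reads the facility, machine type, and machine count from environment variables that you must export yourself. For example, with illustrative values (the facility and machine type match the examples in this procedure; the machine count is an assumption for your planned cluster size):

    ```shell
    # Illustrative values; adjust to your own deployment plan
    export EQUINIX_FACILITY=am6
    export EQUINIX_MACHINE_TYPE=c3.small.x86
    export MACHINES_AMOUNT=3
    ```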

  11. Optional. Enable infinite timeout for all bootstrap stages by exporting the following environment variable or adding it to bootstrap.env:

    export KAAS_BOOTSTRAP_INFINITE_TIMEOUT=true
    

    Infinite timeout prevents a bootstrap failure due to timeout. This option is useful in the following cases:

    • The network speed is too slow for downloading artifacts

    • The infrastructure configuration does not allow fast booting

    • Bare-metal node inspection involves more than two HDD/SATA disks attached to a machine

  12. Optional. Available since Container Cloud 2.23.0. Customize the cluster and region name by exporting the following environment variables or adding them to bootstrap.env:

    export REGION=<customRegionName>
    export CLUSTER_NAME=<customClusterName>
    

    By default, the system uses region-one for the region name and kaas-mgmt for the management cluster name.

  13. Run the bootstrap script:

    ./bootstrap.sh all
    
    • In case of deployment issues, refer to Troubleshooting and inspect logs.

    • If the script fails for an unknown reason:

      1. Run the cleanup script:

        ./bootstrap.sh cleanup
        
      2. Rerun the bootstrap script.

  14. When the bootstrap is complete, collect and save the following management cluster details in a secure location:

    • The kubeconfig file located in the same directory as the bootstrap script. This file contains the admin credentials for the management cluster.

    • The private ssh_key for access to the management cluster nodes that is located in the same directory as the bootstrap script.

      Note

      If the initial version of your Container Cloud management cluster was earlier than 2.6.0, ssh_key is named openstack_tmp and is located at ~/.ssh/.

    • The URL for the Container Cloud web UI.

      To create users with permissions required for accessing the Container Cloud web UI, see Create initial users after a management cluster bootstrap.

    • The StackLight endpoints. For details, see Access StackLight web UIs.

    • The Keycloak URL that the system outputs when the bootstrap completes. The admin password for Keycloak is located in kaas-bootstrap/passwords.yml along with other IAM passwords.

    Note

    The Container Cloud web UI and StackLight endpoints are available through Transport Layer Security (TLS) and communicate with Keycloak to authenticate users. Keycloak is exposed using HTTPS and self-signed TLS certificates that are not trusted by web browsers.

    To use your own TLS certificates for Keycloak, refer to Configure TLS certificates for cluster applications.

    Note

    When the bootstrap is complete, the bootstrap cluster resources are freed up.

  15. Establish connection to the cluster private network:

    1. Install sshuttle.

    2. Obtain the cluster CIDR from the cluster specification:

      kubectl --kubeconfig <clusterKubeconfig> \
      get cluster <clusterName> -n <clusterProjectName> \
      -o jsonpath='{.spec.providerSpec.value.network.cidr}'
      
    3. Obtain the public IP address of the related Equinix Metal router:

      1. Log in to the Equinix Metal console of the related project.

      2. In the list of servers, capture the IP address of the related Equinix Metal router server listed in the IPV4 ADDRESS column.

    4. Establish connection to the cluster private network from your local machine:

      sshuttle <clusterCIDR> -r ubuntu@<routerPublicIP> --ssh-cmd 'ssh -i <pathToRouterSSHKey>'
      

    Now, you can access the Keycloak, StackLight, and Container Cloud web UIs.

  16. Optional. Deploy an additional regional cluster of a different provider type or configuration as described in Deploy an additional regional cluster (optional).

Now, you can proceed with operating your management cluster using the Container Cloud web UI and deploying managed clusters as described in Create and operate an Equinix Metal based managed cluster with private networking.