Deploy an Equinix Metal based regional cluster with private networking

Before you deploy an additional regional Equinix Metal based cluster with private networking, complete the prerequisite steps described in Prerequisites.

To deploy an Equinix Metal based regional cluster with private networking:

  1. Log in to the bootstrap node running Ubuntu 20.04 that is configured as described in Prerequisites and is connected to the regional cluster VLAN.

  2. Prepare the bootstrap script:

    1. Download and run the Container Cloud bootstrap script:

      apt install wget
      wget https://binary.mirantis.com/releases/get_container_cloud.sh
      chmod 0755 get_container_cloud.sh
      ./get_container_cloud.sh
      
    2. Change the directory to the kaas-bootstrap folder created by the script.

  3. Obtain your license file that will be required during the bootstrap:

    1. Create a user account at www.mirantis.com.

    2. Log in to your account and download the mirantis.lic license file.

    3. Save the license file as mirantis.lic under the kaas-bootstrap directory on the bootstrap node.

    4. Verify that mirantis.lic contains the exact Container Cloud license previously downloaded from www.mirantis.com by decoding the license JWT token, for example, using jwt.io.

      Example of a valid decoded Container Cloud license data with the mandatory license field:

      {
          "exp": 1652304773,
          "iat": 1636669973,
          "sub": "demo",
          "license": {
              "dev": false,
              "limits": {
                  "clusters": 10,
                  "workers_per_cluster": 10
              },
              "openstack": null
          }
      }
      
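      If jwt.io is not an option, for example, on an offline bootstrap node, you can decode the license payload locally. The following is a minimal sketch that assumes mirantis.lic contains a plain JWT and that python3 is installed on the node:

      python3 -c 'import base64, json; p = open("mirantis.lic").read().split(".")[1]; print(json.dumps(json.loads(base64.urlsafe_b64decode(p + "=" * (-len(p) % 4))), indent=2))'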

      Warning

      The MKE license does not apply to mirantis.lic. For details about MKE license, see MKE documentation.

  4. Using the Equinix Metal console, obtain the project ID and the user-level API Key of the Equinix Metal project to be used for the Container Cloud deployment:

    1. Log in to the Equinix Metal console.

    2. Select the project that you want to use for the Container Cloud deployment.

    3. In Project Settings > General, capture your Project ID.

    4. In Profile Settings > Personal API Keys, capture the existing user-level API Key or create a new one:

      1. In Profile Settings > Personal API Keys, click Add New Key.

      2. Fill in the Description and select the Read/Write permissions.

      3. Click Add Key.

  5. Prepare the Equinix Metal configuration:

    1. Change the directory to kaas-bootstrap.

    2. In templates/equinixmetalv2/equinix-config.yaml.template, modify spec:projectID and spec:apiToken:value using the values obtained in the previous steps. For example:

      spec:
        projectID: g98sd6f8-dc7s-8273-v8s7-d9v7395nd91
        apiToken:
          value: Bi3m9c7qjYBD3UgsnSCSsqs2bYkbK
      
    3. In templates/equinixmetalv2/cluster.yaml.template:

      • Modify the default configuration of the Equinix Metal facility depending on the previously prepared capacity settings as described in Prerequisites:

        providerSpec:
          value:
            # ...
            facility: am6
        
      • Add projectSSHKeys, the list of the Equinix Metal project SSH key names to be attached to cluster machines. These keys are required for access to the Equinix Metal out-of-band Serial Over SSH (SOS) console to debug provisioning failures. Mirantis recommends adding at least one project SSH key per cluster.

        Example of the project SSH keys configuration:

        providerSpec:
          value:
            # ...
            projectSSHKeys:
            - <projectSSHKeyName>
        

        To create an SSH key in an Equinix Metal project:

        1. Log in to the Equinix Metal console.

        2. Select the project that you want to use for the Container Cloud deployment.

        3. In the Project Settings tab, select Project SSH Keys and click Add New Key.

        4. Enter the Key Name and Public Key values and click Add.
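        If you do not yet have a key pair to upload, the following is a minimal sketch for generating one on the bootstrap node (the file path is an example, adjust as needed):

        ssh-keygen -t ed25519 -f ~/.ssh/equinix_project_key
        cat ~/.ssh/equinix_project_key.pub

        Paste the output of the cat command into the Public Key field.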

      • Modify network parameters as required by your infrastructure:

        providerSpec:
          value:
            # ...
            network:
              vlanId: SET_EQUINIX_VLAN_ID
              loadBalancerHost: SET_LB_HOST
              metallbRanges:
                - SET_EQUINIX_METALLB_RANGES
              cidr: SET_EQUINIX_NETWORK_CIDR
              gateway: SET_EQUINIX_NETWORK_GATEWAY
              dhcpRanges:
                - SET_EQUINIX_NETWORK_DHCP_RANGES
              includeRanges:
                - SET_EQUINIX_CIDR_INCLUDE_RANGES
              excludeRanges:
                - SET_EQUINIX_CIDR_EXCLUDE_RANGES
              nameservers:
                - SET_EQUINIX_NETWORK_NAMESERVERS
        

        vlanId
          ID of the VLAN created in the corresponding Equinix Metal Metro that the seed node and cluster nodes should be attached to.

        loadBalancerHost
          IP address to use for the MKE and Kubernetes API endpoints of the cluster.

        metallbRanges
          List of IP ranges in the 192.168.0.129-192.168.0.200 format to use for Kubernetes LoadBalancer services. For example, on a management cluster, these services include the Container Cloud web UI and Keycloak. This list should include at least 12 addresses for a management cluster and 5 for managed clusters.

        cidr
          Network address in CIDR notation. For example, 192.168.0.0/24.

        gateway
          IP address of a gateway attached to this VLAN that provides the necessary external connectivity.

        dhcpRanges
          List of IP ranges in the 192.168.0.10-192.168.0.50 format. IP addresses from these ranges will be allocated to nodes that boot from DHCP during the provisioning process. Should include at least one address for each machine in the cluster.

        includeRanges
          List of IP ranges in the 192.168.0.51-192.168.0.128 format. IP addresses from these ranges will be allocated as permanent addresses of machines in this cluster. Should include at least one address for each machine in the cluster.

        excludeRanges
          Optional. List of IP ranges in the 192.168.0.51-192.168.0.128 format. IP addresses from these ranges will not be allocated as permanent addresses of machines in this cluster.

        nameservers
          List of IP addresses of DNS servers that should be configured on machines. These servers must be accessible through the gateway from the provided VLAN. Required unless a proxy server is used.
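        For example, a filled-in network section that is consistent with the sample ranges used in the descriptions above. All values are purely illustrative and must be replaced with the parameters of your own VLAN and IP plan:

        providerSpec:
          value:
            # ...
            network:
              vlanId: 1234
              loadBalancerHost: 192.168.0.6
              metallbRanges:
              - 192.168.0.129-192.168.0.200
              cidr: 192.168.0.0/24
              gateway: 192.168.0.1
              dhcpRanges:
              - 192.168.0.10-192.168.0.50
              includeRanges:
              - 192.168.0.51-192.168.0.128
              nameservers:
              - 8.8.8.8
              - 8.8.4.4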

    4. Add the following parameters to the bootstrap.env file:

      KAAS_BM_PXE_BRIDGE
        Name of the bridge that will be used to provide PXE services to provision machines during bootstrap.

      KAAS_BM_PXE_IP
        IP address that will be used for PXE services. Will be assigned to the KAAS_BM_PXE_BRIDGE bridge. Must be part of the network defined by the cidr parameter.

      KAAS_BM_PXE_MASK
        Prefix length (number of bits) of the network mask for KAAS_BM_PXE_IP. Must match the CIDR suffix in the cidr parameter.

      BOOTSTRAP_METALLB_ADDRESS_POOL
        IP range in the 192.168.0.129-192.168.0.200 format that will be used for Kubernetes LoadBalancer services in the bootstrap cluster.

      Example of this section in bootstrap.env:

      KAAS_BM_PXE_BRIDGE=br0
      KAAS_BM_PXE_IP=192.168.0.5
      KAAS_BM_PXE_MASK=24
      BOOTSTRAP_METALLB_ADDRESS_POOL=192.168.0.129-192.168.0.200
      
    5. Optional. In templates/equinixmetalv2/machines.yaml.template, modify the default configuration of the Equinix Metal machine type. The minimal required type is c3.small.x86.

      Warning

      Mirantis highly recommends using the c3.small.x86 machine type for the control plane machines deployed with private networking to prevent hardware issues caused by an incorrect BIOS boot order.

      providerSpec:
        value:
          # ...
          machineType: c3.small.x86
      

      Also, modify other parameters as required.

  6. Optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the VLAN where the regional cluster is being provisioned. Otherwise, this step is mandatory.

    Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

    In templates/equinixmetalv2/cluster.yaml.template, add the ntp:servers section with the list of required server names:

    spec:
      ...
      providerSpec:
        value:
          kaas:
          ...
            regional:
              - helmReleases:
                - name: equinix-provider
                  values:
                    config:
                      lcm:
                        ...
                        ntp:
                          servers:
                          - 192.168.0.1
                          ...
                provider: equinixmetalv2
                ...
    
  7. If you require all Internet access to go through a proxy server, in bootstrap.env, add the following environment variables to bootstrap the regional cluster using a proxy:

    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    • PROXY_CA_CERTIFICATE_PATH

    Example snippet:

    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
    export NO_PROXY=172.18.10.0,registry.internal.lan
    export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
    

    The following formats of variables are accepted:

    Proxy configuration data:

    HTTP_PROXY, HTTPS_PROXY
      • http://proxy.example.com:port - for anonymous access

      • http://user:password@proxy.example.com:port - for restricted access

    NO_PROXY
      Comma-separated list of IP addresses or domain names.

    PROXY_CA_CERTIFICATE_PATH (available since 2.20.0 as GA)
      Optional. Path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.

    Warning

    If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.

    Note

    • Since Container Cloud 2.20.0, this parameter is generally available for the OpenStack, bare metal, Equinix Metal with private networking, AWS, and vSphere providers

    • Since Container Cloud 2.18.0, this parameter is available as TechPreview for the OpenStack and bare metal providers only

    • For Azure and Equinix Metal with public networking, the feature is not supported

    • For MOSK-based deployments, the feature support will become available in one of the following Container Cloud releases

    For implementation details, see Proxy and cache support.

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for an Equinix Metal based cluster.

  8. Optional. Technology Preview in Container Cloud 2.18.0, removed in Container Cloud 2.19.0 for compatibility reasons and currently not supported. Enable encryption for the Kubernetes workloads network by adding the following field to the Cluster object spec:

     spec:
       providerSpec:
         value:
           secureOverlay: true
    

    For more details, see MKE documentation: Kubernetes network encryption.

    • When the option is enabled, Calico networking is configured to use IP-in-IP overlay and BGP routing.

    • When the option is disabled, Calico networking is configured to use VXLAN overlay (no BGP).

  9. Export the following parameters:

    export KAAS_EQUINIXMETALV2_ENABLED=true
    export KUBECONFIG=<pathToMgmtClusterKubeconfig>
    export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
    export REGION=<NewRegionName>
    export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
    

    Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.

    Caution

    The REGION and REGIONAL_CLUSTER_NAME parameters values must contain only lowercase alphanumeric characters, hyphens, or periods.
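    For example, assuming the management cluster kubeconfig is located in the current kaas-bootstrap directory and using illustrative names that satisfy the naming requirements above:

    export KAAS_EQUINIXMETALV2_ENABLED=true
    export KUBECONFIG=kubeconfig
    export REGIONAL_CLUSTER_NAME=region-two-cluster
    export REGION=region-two
    export SSH_KEY_NAME=region-two-ssh-key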

  10. Re-verify that the selected Equinix Metal facility for the regional cluster bootstrap is still available and has enough capacity:

    metal capacity check -f $EQUINIX_FACILITY -P $EQUINIX_MACHINE_TYPE -q $MACHINES_AMOUNT
    
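    For example, using the am6 facility and the c3.small.x86 machine type referenced earlier in this procedure, and assuming three machines for the regional cluster:

    metal capacity check -f am6 -P c3.small.x86 -q 3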

    Note

    Depending on your metal CLI version, naming of flags may vary. To verify naming of flags available for your metal CLI version, run metal capacity check --help.

    In the system response, if the value in the AVAILABILITY section has changed from true to false, find an available facility and update the previously configured facility field in cluster.yaml.template.

    For details about the verification procedure, see Verify the capacity of the Equinix Metal facility.

  11. Run the regional cluster bootstrap script:

    ./bootstrap.sh deploy_regional
    

    Note

    When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

    The workflow of the regional cluster bootstrap script:

    1. Prepare the bootstrap cluster for the new regional cluster.

    2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.

    3. Connect to each machine of the management cluster through SSH.

    4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.

    5. Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider.

    6. Forward the bootstrap cluster endpoint to helm-controller.

    7. Wait for all CRDs to be available and verify the objects created using these CRDs.

    8. Pivot the cluster API stack to the regional cluster.

    9. Switch the LCM Agent from the bootstrap cluster to the regional one.

    10. Wait for the Container Cloud components to start on the regional cluster.

  12. Establish connection to the cluster private network:

    1. Install sshuttle.

    2. Obtain the cluster CIDR from the cluster specification:

      kubectl --kubeconfig <clusterKubeconfig> \
      get cluster <clusterName> -n <clusterProjectName> \
      -o jsonpath='{.spec.providerSpec.value.network.cidr}'
      
    3. Obtain the public IP address of the related Equinix Metal router:

      1. Log in to the Equinix Metal console of the related project.

      2. In the list of servers, capture the IP address of the related Equinix Metal router server listed in the IPV4 ADDRESS column.

    4. Establish connection to the cluster private network from your local machine:

      sshuttle <clusterCIDR> -r ubuntu@<routerPublicIP> --ssh-cmd 'ssh -i <pathToRouterSSHKey>'
      

    Now, you can access the Keycloak, StackLight, and Container Cloud web UIs.
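    For example, a quick way to verify that the tunnel works is to query the regional cluster API using the kubeconfig-<regionalClusterName> file obtained after the bootstrap:

    kubectl --kubeconfig kubeconfig-<regionalClusterName> get nodes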

Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate an Equinix Metal based managed cluster with private networking.

Caution

To reduce network traffic costs and avoid complicating the network infrastructure, you must deploy managed clusters in the same region as the regional cluster so that both clusters are located in the same metro.

For example, if you have a management cluster with region-one in Frankfurt and a regional cluster with region-two in Silicon Valley, create all Frankfurt-based managed clusters in region-one and all Silicon Valley-based managed clusters in region-two.