Deploy an Equinix Metal based regional cluster with public networking

Caution

The public networking mode for Equinix Metal based clusters is deprecated in favor of the private networking mode. Deployments with public networks will become unsupported in an upcoming Container Cloud release.

You can deploy an additional regional Equinix Metal based cluster with public networking to create managed clusters of several provider types or with different configurations.

To deploy an Equinix Metal based regional cluster:

  1. Configure BGP for your Equinix Metal project as described in Equinix Metal project setup.

  2. Log in to the node where you bootstrapped the Container Cloud management cluster.

  3. Verify that the bootstrap directory is updated.

    Select from the following options:

    • For clusters deployed using Container Cloud 2.11.0 or later:

      ./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
      --target-dir <pathToBootstrapDirectory>
      
    • For clusters deployed using a Container Cloud release earlier than 2.11.0, or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:

      wget https://binary.mirantis.com/releases/get_container_cloud.sh
      
      chmod 0755 get_container_cloud.sh
      
      ./get_container_cloud.sh
      
  4. Prepare the Equinix Metal configuration for the new regional cluster:

    1. Change the directory to kaas-bootstrap.

    2. In templates/equinix/equinix-config.yaml.template, modify spec:projectID and spec:apiToken:value using the values obtained in the previous steps. For example:

      spec:
        projectID: g98sd6f8-dc7s-8273-v8s7-d9v7395nd91
        apiToken:
          value: Bi3m9c7qjYBD3UgsnSCSsqs2bYkbK
      
    3. In templates/equinix/cluster.yaml.template, modify the default configuration of the Equinix Metal facility according to the previously prepared capacity settings (see the capacity check sketch after this step):

      providerSpec:
        value:
          ...
          facility: am6

      Also, modify other parameters as required.

    4. Optional. In templates/equinix/machines.yaml.template, modify the default configuration of the Equinix Metal machine type. The minimum required type is c3.small.x86.

      providerSpec:
        value:
          ...
          machineType: c3.small.x86
      

      Also, modify other parameters as required.
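
    To confirm that the selected facility has capacity for the selected machine type, you can query the Equinix Metal API before bootstrapping. The following is a minimal sketch that assumes the standard Equinix Metal capacity endpoint; substitute your own API token:

      # Query per-facility capacity levels for all server plans
      curl -s -H "X-Auth-Token: <yourApiToken>" \
        https://api.equinix.com/metal/v1/capacity

      # In the response, verify the level (for example, "normal") reported
      # for your facility (am6) and plan (c3.small.x86)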

  5. If all Internet access must go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the regional cluster through the proxy:

    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    Example snippet:

    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
    export NO_PROXY=172.18.10.0,registry.internal.lan
    

    The following variable formats are accepted:

    Proxy configuration data

    HTTP_PROXY, HTTPS_PROXY:

    • http://proxy.example.com:port - for anonymous access

    • http://user:password@proxy.example.com:port - for restricted access

    NO_PROXY:

    • Comma-separated list of IP addresses or domain names

    For implementation details, see Proxy and cache support.

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for an Equinix Metal based cluster.
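
    Before running the bootstrap, you can verify that the variables are picked up by your shell. A trivial sanity-check sketch:

    # Load the proxy variables into the current shell and print them
    source bootstrap.env
    env | grep -i _proxy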

  6. Configure the NTP server.

    Before Container Cloud 2.23.0, this step is optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the regional cluster is being provisioned. Otherwise, configure the regional NTP server parameters as described below.

    Since Container Cloud 2.23.0, NTP is enabled by default, and you can optionally disable it. Disabling NTP turns off the management of the chrony configuration by Container Cloud so that you can use your own system for chrony management. Otherwise, configure the regional NTP server parameters as described below.
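
    For reference, the regional NTP servers are typically defined in the cluster template. The following is a minimal, hedged sketch for templates/equinix/cluster.yaml.template; the exact placement of these keys can differ between Container Cloud releases, and the server names are examples only:

      providerSpec:
        value:
          ...
          ntpEnabled: true        # Since 2.23.0; set to false to manage chrony yourself
          ntp:
            servers:
            - 0.pool.ntp.org      # Example servers, replace with your regional NTP servers
            - 1.pool.ntp.org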

  7. Export the following parameters:

    export KAAS_EQUINIX_ENABLED=true
    export KUBECONFIG=<pathToMgmtClusterKubeconfig>
    export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
    export REGION=<NewRegionName>
    

    Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.

    Caution

    The REGION and REGIONAL_CLUSTER_NAME parameter values must contain only lowercase alphanumeric characters, hyphens, or periods.

    Note

    If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a PublicKey Kubernetes object with the public part of your newly generated ssh_key for the regional cluster.

    export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
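
    For illustration, a complete set of exports with hypothetical values that satisfy the naming requirements above (all names are placeholders, adjust them to your environment):

    export KAAS_EQUINIX_ENABLED=true
    export KUBECONFIG=~/kaas-bootstrap/kubeconfig
    export REGIONAL_CLUSTER_NAME=regional-cluster-eu
    export REGION=region-eu
    # Only when bootstrapping from a node other than the management cluster bootstrap node
    export SSH_KEY_NAME=regional-cluster-eu-key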
    
  8. Run the regional cluster bootstrap script:

    ./bootstrap.sh deploy_regional
    

    Note

    When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

    If the bootstrap node for the regional cluster deployment is not the same where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.

    The workflow of the regional cluster bootstrap script

    1. Prepare the bootstrap cluster for the new regional cluster.

    2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.

    3. Connect to each machine of the management cluster through SSH.

    4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.

    5. Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider.

    6. Forward the bootstrap cluster endpoint to helm-controller.

    7. Wait for all CRDs to be available and verify the objects created using these CRDs.

    8. Pivot the Cluster API stack to the regional cluster.

    9. Switch the LCM Agent from the bootstrap cluster to the regional one.

    10. Wait for the Container Cloud components to start on the regional cluster.
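
    After the bootstrap completes, you can verify access to the new regional cluster with the generated kubeconfig, for example:

    kubectl --kubeconfig kubeconfig-<regionalClusterName> get nodes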

Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.