Deploy an AWS-based regional cluster

To deploy AWS-based managed clusters with configurations that differ from the AWS-based management cluster configuration, deploy an additional regional cluster with the required settings.

To deploy an AWS-based regional cluster:

  1. Log in to the node where you bootstrapped a management cluster.

  2. Prepare the AWS configuration for the new regional cluster:

    1. Verify access to the target cloud endpoint from Docker. For example:

      docker run --rm alpine sh -c "apk add --no-cache curl; \
      curl https://ec2.amazonaws.com"
      

      The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
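      If the check fails only because your environment requires a proxy, you can first rerun it through the proxy before configuring the bootstrap itself. This is a minimal sketch; the proxy address is an assumption taken from the example in the proxy configuration step below, and both upper- and lowercase variables are set because apk and curl read different spellings:

      # Hypothetical proxy address; replace with your proxy URL
      docker run --rm \
        -e HTTP_PROXY=http://proxy.example.com:3128 -e http_proxy=http://proxy.example.com:3128 \
        -e HTTPS_PROXY=http://proxy.example.com:3128 -e https_proxy=http://proxy.example.com:3128 \
        alpine sh -c "apk add --no-cache curl; curl https://ec2.amazonaws.com"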

    2. Change the directory to the kaas-bootstrap folder.

    3. In templates/aws/machines.yaml.template, modify the spec:providerSpec:value section by substituting the ami:id parameter with the corresponding value for Ubuntu 18.04 in the required AWS region. For example:

       spec:
         providerSpec:
           value:
             apiVersion: aws.kaas.mirantis.com/v1alpha1
             kind: AWSMachineProviderSpec
             instanceType: c5d.2xlarge
             ami:
               id: ami-033a0960d9d83ead0
      

      Also, modify other parameters as required.
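      To find a suitable ami:id value for your target region, you can query AWS for the latest Canonical-owned Ubuntu 18.04 image. A minimal sketch using the AWS CLI; the region and the image name pattern are assumptions based on the AMI filters shown in the next substep:

      # Return the ID of the most recent Ubuntu 18.04 AMI published by Canonical (owner 099720109477)
      aws ec2 describe-images \
        --region us-east-2 \
        --owners 099720109477 \
        --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*" \
        --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
        --output text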

    4. Optional. In templates/aws/cluster.yaml.template, modify the default configuration of the AWS instance types and AMI IDs allowed for the subsequent creation of managed clusters:

      providerSpec:
          value:
            ...
            kaas:
              ...
              regional:
              - provider: aws
                helmReleases:
                  - name: aws-credentials-controller
                    values:
                      config:
                        allowedInstanceTypes:
                          minVCPUs: 8
                          # in MiB
                          minMemory: 16384
                          # in GB
                          minStorage: 120
                          supportedArchitectures:
                          - "x86_64"
                          filters:
                          - name: instance-storage-info.disk.type
                            values:
                              - "ssd"
                        allowedAMIs:
                        -
                          - name: name
                            values:
                            - "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20200729"
                          - name: owner-id
                            values:
                            - "099720109477"
      

      Also, modify other parameters as required.

  3. Available since 2.6.0 Optional. If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the regional cluster using a proxy:

    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    Example snippet:

    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
    export NO_PROXY=172.18.10.0,registry.internal.lan
    

    The following variable formats are accepted:

    Proxy configuration data

    • HTTP_PROXY, HTTPS_PROXY accept one of the following formats:

      • http://proxy.example.com:port - for anonymous access

      • http://user:password@proxy.example.com:port - for restricted access

    • NO_PROXY accepts a comma-separated list of IP addresses or domain names.

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Reference Architecture: Hardware and system requirements.

  4. Available since 2.5.0 Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

    In templates/aws/cluster.yaml.template, add the ntp:servers section with the list of the required server names:

    spec:
      ...
      providerSpec:
        value:
          kaas:
          ...
            regional:
              - helmReleases:
                - name: aws-provider
                  values:
                    config:
                      lcm:
                        ...
                        ntp:
                          servers:
                          - 0.pool.ntp.org
                          ...
                provider: aws
                ...
    
  5. Generate the AWS Access Key ID with Secret Access Key for the bootstrapper.cluster-api-provider-aws.kaas.mirantis.com user and select the AWS default region name.
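    If you manage IAM with the AWS CLI, a minimal sketch for generating the key pair, assuming the bootstrapper.cluster-api-provider-aws.kaas.mirantis.com IAM user already exists and your current credentials are allowed to create access keys for it:

    # Create a new access key pair for the bootstrapper user
    aws iam create-access-key \
      --user-name bootstrapper.cluster-api-provider-aws.kaas.mirantis.com

    The command output contains the AccessKeyId and SecretAccessKey values to export in the next step.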

  6. Export the AWS bootstrapper.cluster-api-provider-aws.kaas.mirantis.com user credentials that were created in the previous step:

    export KAAS_AWS_ENABLED=true
    export AWS_SECRET_ACCESS_KEY=XXXXXXX
    export AWS_ACCESS_KEY_ID=XXXXXXX
    export AWS_DEFAULT_REGION=us-east-2
    
  7. Export the following parameters:

    export KUBECONFIG=<pathToMgmtClusterKubeconfig>
    export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
    export REGION=<NewRegionName>
    

    Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.

  8. Run the regional cluster bootstrap script:

    ./bootstrap.sh deploy_regional
    

    Note

    When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

    The workflow of the regional cluster bootstrap script:

    1. Prepare the bootstrap cluster for the new regional cluster.

    2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.

    3. Connect to each machine of the management cluster through SSH.

    4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.

    5. Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider.

    6. Forward the bootstrap cluster endpoint to helm-controller.

    7. Wait for all CRDs to be available and verify the objects created using these CRDs.

    8. Pivot the cluster API stack to the regional cluster.

    9. Switch the LCM agent from the bootstrap cluster to the regional one.

    10. Wait for the Container Cloud components to start on the regional cluster.
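    After the script completes, you can optionally confirm that the new regional cluster is reachable with the generated kubeconfig. A minimal sketch; the exact object names and namespaces depend on your environment:

    # Verify that the regional cluster API responds and its nodes are Ready
    kubectl --kubeconfig kubeconfig-<regionalClusterName> get nodes

    # On the management cluster, verify the Cluster and Machine objects of the new region
    kubectl --kubeconfig <pathToMgmtClusterKubeconfig> get clusters,machines --all-namespaces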

Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate a managed cluster.