Deploy an AWS-based regional cluster

You can deploy an additional regional AWS-based cluster to create managed clusters of several provider types or with different configurations.

To deploy an AWS-based regional cluster:

  1. Log in to the node where you bootstrapped a management cluster.

  2. Verify that the bootstrap directory is updated.

    Select from the following options:

    • For clusters deployed using Container Cloud 2.11.0 or later:

      ./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
      --target-dir <pathToBootstrapDirectory>
      
    • For clusters deployed using the Container Cloud release earlier than 2.11.0 or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:

      wget https://binary.mirantis.com/releases/get_container_cloud.sh
      
      chmod 0755 get_container_cloud.sh
      
      ./get_container_cloud.sh
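
    In either case, you can optionally confirm that the bootstrap directory contains the expected artifacts before proceeding. A minimal sketch, assuming the default kaas-bootstrap layout:

      # List the bootstrap directory; expect the container-cloud binary,
      # bootstrap.sh, and the templates/ directory, among other files.
      ls kaas-bootstrap/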
      
  3. Prepare the AWS configuration for the new regional cluster:

    1. Verify access to the target cloud endpoint from Docker. For example:

      docker run --rm alpine sh -c "apk add --no-cache curl; \
      curl https://ec2.amazonaws.com"
      

      The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.

    2. Change the directory to the kaas-bootstrap folder.

    3. In templates/aws/machines.yaml.template, modify the spec:providerSpec:value section by setting the ami:id parameter to the Ubuntu 20.04 AMI ID for the required AWS region. For example:

      spec:
        providerSpec:
          value:
            apiVersion: aws.kaas.mirantis.com/v1alpha1
            kind: AWSMachineProviderSpec
            instanceType: c5.4xlarge
            ami:
              id: ami-033a0960d9d83ead0
      

      Also, modify other parameters as required.
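
      If you need to look up the Ubuntu 20.04 AMI ID for your region, you can query the AWS API. A minimal sketch, assuming the AWS CLI is installed and configured; substitute <awsRegion> with your target region:

      # Find the latest Canonical-owned Ubuntu 20.04 (Focal) AMI in the target region.
      aws ec2 describe-images \
        --owners 099720109477 \
        --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*" \
        --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
        --output text \
        --region <awsRegion>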

      Warning

      Do not stop the AWS instances dedicated to the Container Cloud clusters to prevent data loss and cluster failure.

    4. Optional. In templates/aws/cluster.yaml.template, set the spec:providerSpec:value:bastion:amiId and spec:providerSpec:value:bastion:instanceType parameters to the required Ubuntu AMI ID and instance type for the target AWS region, respectively. For example:

      spec:
        providerSpec:
          value:
            apiVersion: aws.kaas.mirantis.com/v1alpha1
            kind: AWSClusterProviderSpec
            bastion:
              amiId: ami-0fb653ca2d3203ac1
              instanceType: t3.micro
      
    5. Optional. In templates/aws/cluster.yaml.template, modify the default configuration of the AWS instance types and AMI IDs allowed for the subsequent creation of managed clusters:

      providerSpec:
          value:
            ...
            kaas:
              ...
              regional:
              - provider: aws
                helmReleases:
                  - name: aws-credentials-controller
                    values:
                      config:
                        allowedInstanceTypes:
                          minVCPUs: 8
                          # in MiB
                          minMemory: 16384
                          # in GB
                          minStorage: 120
                          supportedArchitectures:
                          - "x86_64"
                          filters:
                          - name: instance-storage-info.disk.type
                            values:
                              - "ssd"
                        allowedAMIs:
                        -
                          - name: name
                            values:
                            - "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210325"
                          - name: owner-id
                            values:
                            - "099720109477"
      

      Also, modify other parameters as required.
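
      To preview which instance types in a region satisfy such limits, you can query the AWS API directly. A minimal sketch, assuming the AWS CLI is installed and configured; substitute <awsRegion> with your target region:

      # List instance types with SSD instance storage, at least 8 vCPUs, and at least 16384 MiB of memory.
      aws ec2 describe-instance-types \
        --filters "Name=instance-storage-info.disk.type,Values=ssd" \
        --query 'InstanceTypes[?VCpuInfo.DefaultVCpus >= `8` && MemoryInfo.SizeInMiB >= `16384`].InstanceType' \
        --output text \
        --region <awsRegion>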

  4. If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the regional cluster using the proxy:

    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    • PROXY_CA_CERTIFICATE_PATH

    Example snippet:

    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
    export NO_PROXY=172.18.10.0,registry.internal.lan
    export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
    

    The following formats of variables are accepted:

    Proxy configuration data

    HTTP_PROXY, HTTPS_PROXY

      • http://proxy.example.com:port - for anonymous access

      • http://user:password@proxy.example.com:port - for restricted access

    NO_PROXY

      Comma-separated list of IP addresses or domain names

    PROXY_CA_CERTIFICATE_PATH (available since Container Cloud 2.20.0 as GA)

      Optional. Path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.

    Warning

    If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.

    Note

    • Since Container Cloud 2.20.0, this parameter is generally available for the OpenStack, bare metal, Equinix Metal with private networking, AWS, and vSphere providers

    • For MOSK-based deployments, the feature support is available since MOSK 22.4

    • Since Container Cloud 2.18.0, this parameter is available as TechPreview for the OpenStack and bare metal providers only

    • For Azure and Equinix Metal with public networking, the feature is not supported

    For implementation details, see Proxy and cache support.

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for an AWS-based cluster.
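
    Optionally, before starting the bootstrap, you can verify that the proxy is reachable from the bootstrap node and can pass traffic to a Mirantis endpoint. This is an illustrative check only, assuming curl is installed:

    # Plain proxy: confirm the proxy can reach a Mirantis endpoint.
    curl --proxy "${HTTPS_PROXY}" -sSI https://binary.mirantis.com
    # MITM proxy: additionally pass the proxy CA certificate so that curl trusts the re-signed server certificate.
    curl --proxy "${HTTPS_PROXY}" --cacert "${PROXY_CA_CERTIFICATE_PATH}" -sSI https://binary.mirantis.com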

  5. Optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the regional cluster is being provisioned. Otherwise, this step is mandatory.

    Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

    In templates/aws/cluster.yaml.template, add the ntp:servers section with the list of required server names:

    spec:
      ...
      providerSpec:
        value:
          kaas:
          ...
            regional:
              - helmReleases:
                - name: aws-provider
                  values:
                    config:
                      lcm:
                        ...
                        ntp:
                          servers:
                          - 0.pool.ntp.org
                          ...
                provider: aws
                ...
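
    To confirm that the configured NTP servers are reachable from the bootstrap node, you can query one of them without adjusting the clock. A minimal sketch, assuming the ntpdate utility is installed:

    # Query the NTP server without setting the system clock.
    ntpdate -q 0.pool.ntp.org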
    
  6. Optional. Technology Preview in Container Cloud 2.18.0; removed in Container Cloud 2.19.0 for compatibility reasons and currently not supported. Enable encryption for the Kubernetes workloads network by adding the following field to the Cluster object spec:

     spec:
       providerSpec:
         value:
           secureOverlay: true
    

    For more details, see MKE documentation: Kubernetes network encryption.

    • When the option is enabled, Calico networking is configured to use IP-in-IP overlay and BGP routing.

    • When the option is disabled, Calico networking is configured to use VXLAN overlay (no BGP).

  7. For Container Cloud to communicate with the AWS APIs, create the AWS CloudFormation stack that contains properly configured IAM users and policies.

    Note

    If the AWS CloudFormation stack already exists in your AWS account, skip this step.
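
    To check whether a stack from a previous Container Cloud deployment already exists, you can list the stacks in your account. A minimal sketch, assuming the AWS CLI is installed and configured:

    # List existing CloudFormation stacks; look for the stack created for Container Cloud.
    aws cloudformation describe-stacks --query 'Stacks[].StackName' --output table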

    ./container-cloud bootstrap aws policy
    

    If you do not have access to create the CloudFormation stack, users, or policies:

    1. Log in to your AWS Management Console.

    2. On the home page, expand the upper right menu with your user name and capture your Account ID.
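
      Alternatively, if you already have AWS CLI access, you can retrieve the account ID from the command line. A minimal sketch:

      # Print the numeric AWS account ID of the current credentials.
      aws sts get-caller-identity --query Account --output text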

    3. Create the CloudFormation template:

      ./container-cloud bootstrap aws policy --account-id <accountId> --dump > cf.yaml
      

      Substitute the parameter enclosed in angle brackets with the corresponding value.

    4. Send the cf.yaml template to your AWS account admin to create the CloudFormation stack from this template.

      The generated template includes the following lists of IAM permissions:

      • ec2:DescribeInstances

      • ec2:DescribeRegions

      • ecr:GetAuthorizationToken

      • ecr:BatchCheckLayerAvailability

      • ecr:GetDownloadUrlForLayer

      • ecr:GetRepositoryPolicy

      • ecr:DescribeRepositories

      • ecr:ListImages

      • ecr:BatchGetImage

      • ec2:AllocateAddress

      • ec2:AssociateRouteTable

      • ec2:AttachInternetGateway

      • ec2:AuthorizeSecurityGroupIngress

      • ec2:CreateInternetGateway

      • ec2:CreateNatGateway

      • ec2:CreateRoute

      • ec2:CreateRouteTable

      • ec2:CreateSecurityGroup

      • ec2:CreateSubnet

      • ec2:CreateTags

      • ec2:CreateVpc

      • ec2:ModifyVpcAttribute

      • ec2:DeleteInternetGateway

      • ec2:DeleteNatGateway

      • ec2:DeleteRouteTable

      • ec2:DeleteSecurityGroup

      • ec2:DeleteSubnet

      • ec2:DeleteTags

      • ec2:DeleteVpc

      • ec2:DescribeAccountAttributes

      • ec2:DescribeAddresses

      • ec2:DescribeAvailabilityZones

      • ec2:DescribeInstances

      • ec2:DescribeInstanceTypes

      • ec2:DescribeInternetGateways

      • ec2:DescribeImages

      • ec2:DescribeNatGateways

      • ec2:DescribeNetworkInterfaces

      • ec2:DescribeNetworkInterfaceAttribute

      • ec2:DescribeRegions

      • ec2:DescribeRouteTables

      • ec2:DescribeSecurityGroups

      • ec2:DescribeSubnets

      • ec2:DescribeVpcs

      • ec2:DescribeVpcAttribute

      • ec2:DescribeVolumes

      • ec2:DetachInternetGateway

      • ec2:DisassociateRouteTable

      • ec2:DisassociateAddress

      • ec2:ModifyInstanceAttribute

      • ec2:ModifyNetworkInterfaceAttribute

      • ec2:ModifySubnetAttribute

      • ec2:ReleaseAddress

      • ec2:RevokeSecurityGroupIngress

      • ec2:RunInstances

      • ec2:TerminateInstances

      • tag:GetResources

      • elasticloadbalancing:CreateLoadBalancer

      • elasticloadbalancing:ConfigureHealthCheck

      • elasticloadbalancing:DeleteLoadBalancer

      • elasticloadbalancing:DescribeLoadBalancers

      • elasticloadbalancing:DescribeLoadBalancerAttributes

      • elasticloadbalancing:ModifyLoadBalancerAttributes

      • elasticloadbalancing:RegisterInstancesWithLoadBalancer

      • elasticloadbalancing:DescribeTargetGroups

      • elasticloadbalancing:CreateTargetGroup

      • elasticloadbalancing:DeleteTargetGroup

      • elasticloadbalancing:DescribeListeners

      • elasticloadbalancing:CreateListener

      • elasticloadbalancing:DeleteListener

      • elasticloadbalancing:RegisterTargets

      • elasticloadbalancing:DeregisterTargets

      • autoscaling:DescribeAutoScalingGroups

      • autoscaling:DescribeLaunchConfigurations

      • autoscaling:DescribeTags

      • ec2:DescribeInstances

      • ec2:DescribeImages

      • ec2:DescribeRegions

      • ec2:DescribeRouteTables

      • ec2:DescribeSecurityGroups

      • ec2:DescribeSubnets

      • ec2:DescribeVolumes

      • ec2:CreateSecurityGroup

      • ec2:CreateTags

      • ec2:CreateVolume

      • ec2:ModifyInstanceAttribute

      • ec2:ModifyVolume

      • ec2:AttachVolume

      • ec2:AuthorizeSecurityGroupIngress

      • ec2:CreateRoute

      • ec2:DeleteRoute

      • ec2:DeleteSecurityGroup

      • ec2:DeleteVolume

      • ec2:DetachVolume

      • ec2:RevokeSecurityGroupIngress

      • ec2:DescribeVpcs

      • elasticloadbalancing:AddTags

      • elasticloadbalancing:AttachLoadBalancerToSubnets

      • elasticloadbalancing:ApplySecurityGroupsToLoadBalancer

      • elasticloadbalancing:CreateLoadBalancer

      • elasticloadbalancing:CreateLoadBalancerPolicy

      • elasticloadbalancing:CreateLoadBalancerListeners

      • elasticloadbalancing:ConfigureHealthCheck

      • elasticloadbalancing:DeleteLoadBalancer

      • elasticloadbalancing:DeleteLoadBalancerListeners

      • elasticloadbalancing:DescribeLoadBalancers

      • elasticloadbalancing:DescribeLoadBalancerAttributes

      • elasticloadbalancing:DetachLoadBalancerFromSubnets

      • elasticloadbalancing:DeregisterInstancesFromLoadBalancer

      • elasticloadbalancing:ModifyLoadBalancerAttributes

      • elasticloadbalancing:RegisterInstancesWithLoadBalancer

      • elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer

      • elasticloadbalancing:AddTags

      • elasticloadbalancing:CreateListener

      • elasticloadbalancing:CreateTargetGroup

      • elasticloadbalancing:DeleteListener

      • elasticloadbalancing:DeleteTargetGroup

      • elasticloadbalancing:DescribeListeners

      • elasticloadbalancing:DescribeLoadBalancerPolicies

      • elasticloadbalancing:DescribeTargetGroups

      • elasticloadbalancing:DescribeTargetHealth

      • elasticloadbalancing:ModifyListener

      • elasticloadbalancing:ModifyTargetGroup

      • elasticloadbalancing:RegisterTargets

      • elasticloadbalancing:SetLoadBalancerPoliciesOfListener

      • iam:CreateServiceLinkedRole

      • kms:DescribeKey

  8. Configure the bootstrapper.cluster-api-provider-aws.kaas.mirantis.com user created in the previous steps:

    1. Using your AWS Management Console, generate an AWS Access Key ID and Secret Access Key for the bootstrapper.cluster-api-provider-aws.kaas.mirantis.com user and select the AWS default region name.

      Note

      Other authorization methods, such as usage of AWS_SESSION_TOKEN, are not supported.
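
      If you prefer the command line and have administrative AWS CLI access, the access key for this user can also be generated with the AWS CLI. A minimal sketch; the command prints the new Access Key ID and Secret Access Key:

      # Create an access key pair for the bootstrapper user.
      aws iam create-access-key \
        --user-name bootstrapper.cluster-api-provider-aws.kaas.mirantis.com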

    2. Export the AWS bootstrapper.cluster-api-provider-aws.kaas.mirantis.com user credentials that were created in the previous step:

      export KAAS_AWS_ENABLED=true
      export AWS_SECRET_ACCESS_KEY=XXXXXXX
      export AWS_ACCESS_KEY_ID=XXXXXXX
      export AWS_DEFAULT_REGION=us-east-2
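
      Optionally, verify that the exported credentials are valid. A minimal sketch, assuming the AWS CLI is installed; ec2:DescribeRegions is among the permissions in the generated template:

      # A lightweight API call to confirm the exported credentials work.
      aws ec2 describe-regions --query 'Regions[].RegionName' --output text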
      
  9. Export the following parameters:

    export KUBECONFIG=<pathToMgmtClusterKubeconfig>
    export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
    export REGION=<NewRegionName>
    

    Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.
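
    As a quick sanity check, you can confirm that the exported kubeconfig points at the management cluster. A minimal sketch, assuming kubectl is installed on the bootstrap node:

    # Uses the KUBECONFIG exported above; the management cluster nodes should be listed.
    kubectl get nodes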

    Caution

    The REGION and REGIONAL_CLUSTER_NAME parameter values must contain only lowercase alphanumeric characters, hyphens, or periods.

    Note

    If the bootstrap node for the regional cluster deployment is not the same one where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.

    export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
    
  10. Run the regional cluster bootstrap script:

    ./bootstrap.sh deploy_regional
    

    Note

    When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

    If the bootstrap node for the regional cluster deployment is not the same one where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.
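
    To verify access to the new regional cluster, you can run a basic request against it using the saved kubeconfig. A minimal sketch; substitute <regionalClusterName> with your regional cluster name:

    # Confirm the regional cluster API is reachable and its nodes are Ready.
    kubectl --kubeconfig kubeconfig-<regionalClusterName> get nodes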

    The workflow of the regional cluster bootstrap script:

    1. Prepare the bootstrap cluster for the new regional cluster.

    2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.

    3. Connect to each machine of the management cluster through SSH.

    4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.

    5. Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider.

    6. Forward the bootstrap cluster endpoint to helm-controller.

    7. Wait for all CRDs to be available and verify the objects created using these CRDs.

    8. Pivot the cluster API stack to the regional cluster.

    9. Switch the LCM Agent from the bootstrap cluster to the regional one.

    10. Wait for the Container Cloud components to start on the regional cluster.

Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.