For clusters deployed using the Container Cloud release earlier than 2.11.0
or if you deleted the kaas-bootstrap folder, download and run
the Container Cloud bootstrap script:
Prepare the AWS configuration for the new regional cluster:
Verify access to the target cloud endpoint from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; curl https://ec2.amazonaws.com"
The system output must contain no error records.
In case of issues, follow the steps provided in Troubleshooting.
Change the directory to the kaas-bootstrap folder.
In templates/aws/machines.yaml.template,
modify the spec:providerSpec:value section
by substituting the ami:id parameter with the corresponding value
for Ubuntu 20.04 from the required AWS region. For example:
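For illustration, the relevant fragment of the template could look as follows. This is a sketch: the surrounding field layout is inferred from the section path named above, and the AMI ID is left as a placeholder rather than a literal value.

```yaml
spec:
  providerSpec:
    value:
      ami:
        # Substitute with the Ubuntu 20.04 AMI ID for your target AWS region
        id: <ubuntuAmiId>
```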
Do not stop the AWS instances dedicated to the Container Cloud
clusters. Stopping them may cause data loss and cluster failure.
Optional. In templates/aws/cluster.yaml.template,
modify the values of the spec:providerSpec:value:bastion:amiId and
spec:providerSpec:value:bastion:instanceType sections
by setting the necessary Ubuntu AMI ID and instance type in the required
AWS region respectively. For example:
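A minimal sketch of the bastion fragment, assuming the section path given above; the AMI ID placeholder and the instance type are illustrative, not prescribed values:

```yaml
spec:
  providerSpec:
    value:
      bastion:
        # Ubuntu AMI ID available in the target AWS region
        amiId: <ubuntuAmiId>
        # Example instance type; choose one offered in that region
        instanceType: t2.micro
```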
Optional. In templates/aws/cluster.yaml.template, modify the default
configuration of the AWS instance types and AMI IDs for further creation
of managed clusters:
providerSpec:
  value:
    ...
    kaas:
      ...
      regional:
        - provider: aws
          helmReleases:
            - name: aws-credentials-controller
              values:
                config:
                  allowedInstanceTypes:
                    minVCPUs: 8
                    # in MiB
                    minMemory: 16384
                    # in GB
                    minStorage: 120
                    supportedArchitectures:
                      - "x86_64"
                    filters:
                      - name: instance-storage-info.disk.type
                        values:
                          - "ssd"
                  allowedAMIs:
                    -
                      - name: name
                        values:
                          - "ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-20210325"
                      - name: owner-id
                        values:
                          - "099720109477"
Also, modify other parameters as required.
If you require all Internet access to go through a proxy server,
add the following environment variables to bootstrap.env
to bootstrap the regional cluster through the proxy:
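As an illustration, the bootstrap.env additions could look as follows. This is a sketch: the proxy host, port, and bypass list are placeholders for your environment, and the variable names HTTP_PROXY, HTTPS_PROXY, and NO_PROXY are the conventional ones — verify them against the documentation for your release.

```shell
# Hypothetical proxy settings for bootstrap.env -- replace the host, port,
# and NO_PROXY list with values that match your environment.
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
# Networks and hosts that must bypass the proxy, such as internal subnets
NO_PROXY=10.0.0.0/8,172.16.0.0/12,localhost,127.0.0.1
```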
Optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org)
are accessible from the node where the regional cluster is being
provisioned. Otherwise, this step is mandatory.
Configure the regional NTP server parameters to be applied to all machines
of regional and managed clusters in the specified region.
In templates/aws/cluster.yaml.template, add the ntp:servers section
with the list of required server names:
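For example, the addition could look as follows; the server names here are placeholders for the NTP servers reachable in your environment:

```yaml
spec:
  providerSpec:
    value:
      ntp:
        # Replace with the NTP servers accessible from your machines
        servers:
          - 0.pool.ntp.org
          - 1.pool.ntp.org
```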
Optional. Technology Preview. As of Container Cloud 2.18.0, enable Kubernetes
network encryption by adding the following field to the Cluster object
spec:
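Assuming the field in question is secureOverlay (the name used for this Technology Preview feature in Container Cloud releases around 2.18.0 — verify against the release notes for your version), the addition could look like:

```yaml
spec:
  providerSpec:
    value:
      # Assumed field name; enables Kubernetes network encryption
      secureOverlay: true
```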
Configure the bootstrapper.cluster-api-provider-aws.kaas.mirantis.com
user created in the previous steps:
Using your AWS Management Console, generate the AWS Access Key ID with
Secret Access Key for
bootstrapper.cluster-api-provider-aws.kaas.mirantis.com
and select the AWS default region name.
Note
Other authorization methods, such as usage of
AWS_SESSION_TOKEN, are not supported.
Export the AWS bootstrapper.cluster-api-provider-aws.kaas.mirantis.com
user credentials that were created in the previous step:
Substitute the parameters enclosed in angle brackets with the corresponding
values of your cluster.
Caution
The REGION and REGIONAL_CLUSTER_NAME parameters values
must contain only lowercase alphanumeric characters, hyphens,
or periods.
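A sketch of the export commands under these constraints. The access keys below are AWS's documented example credentials, not real ones; replace them with the keys generated for bootstrapper.cluster-api-provider-aws.kaas.mirantis.com. The KAAS_AWS_ENABLED flag is an assumption about the bootstrap configuration — confirm it against your release documentation.

```shell
# Assumed flag enabling the AWS provider during bootstrap
export KAAS_AWS_ENABLED=true
# AWS's documented example credentials -- substitute your generated keys
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-2
# Lowercase alphanumeric characters, hyphens, or periods only:
export REGION=region-one
export REGIONAL_CLUSTER_NAME=aws-region-one
```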
Note
If the bootstrap node for the regional cluster deployment is not
the same where you bootstrapped the management cluster, also
export SSH_KEY_NAME. It is required for the management
cluster to create a publicKey Kubernetes CRD with the
public part of your newly generated ssh_key
for the regional cluster.
export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
Run the regional cluster bootstrap script:
./bootstrap.sh deploy_regional
Note
When the bootstrap is complete, obtain and save in a secure location
the kubeconfig-<regionalClusterName> file
located in the same directory as the bootstrap script.
This file contains the admin credentials for the regional cluster.
If the bootstrap node for the regional cluster deployment is not
the same where you bootstrapped the management cluster, a new
regional ssh_key will be generated.
Make sure to save this key in a secure location as well.
The workflow of the regional cluster bootstrap script

1. Prepare the bootstrap cluster for the new regional cluster.
2. Load the updated Container Cloud CRDs for Credentials,
   Cluster, and Machines with information about the new
   regional cluster to the management cluster.
3. Connect to each machine of the management cluster through SSH.
4. Wait for the Machines and Cluster objects of the new regional
   cluster to be ready on the management cluster.
5. Load the following objects to the new regional cluster: Secret
   with the management cluster kubeconfig and ClusterRole for
   the Container Cloud provider.
6. Forward the bootstrap cluster endpoint to helm-controller.
7. Wait for all CRDs to be available and verify the objects
   created using these CRDs.
8. Pivot the Cluster API stack to the regional cluster.
9. Switch the LCM Agent from the bootstrap cluster to the regional one.
10. Wait for the Container Cloud components to start on the regional
    cluster.