Bootstrap a management cluster
After you complete the prerequisite steps described in Prerequisites, proceed with bootstrapping your OpenStack-based Mirantis Container Cloud management cluster.
To bootstrap an OpenStack-based management cluster:
Log in to the bootstrap node running Ubuntu 20.04 that is configured as described in Prerequisites.
Prepare the bootstrap script:
Download and run the Container Cloud bootstrap script:
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
Change the directory to the kaas-bootstrap folder created by the script.
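For example, assuming the script created the folder in your current working directory:

cd kaas-bootstrap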
Obtain your license file that will be required during the bootstrap:
Create a user account at www.mirantis.com.
Log in to your account and download the mirantis.lic license file.

Save the license file as mirantis.lic under the kaas-bootstrap directory on the bootstrap node.

Verify that mirantis.lic contains the exact Container Cloud license previously downloaded from www.mirantis.com by decoding the license JWT token, for example, using jwt.io (a minimal local decode sketch follows the warning below).

Example of valid decoded Container Cloud license data with the mandatory license field:

{
  "exp": 1652304773,
  "iat": 1636669973,
  "sub": "demo",
  "license": {
    "dev": false,
    "limits": {
      "clusters": 10,
      "workers_per_cluster": 10
    },
    "openstack": null
  }
}
Warning

The MKE license does not apply to mirantis.lic. For details about the MKE license, see the MKE documentation.
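If you prefer not to paste the token into jwt.io, the JWT payload can also be decoded locally. The following is a minimal sketch, assuming that python3 is available on the bootstrap node and that mirantis.lic contains the raw JWT token:

python3 - <<'EOF'
import base64, json

# Read the raw JWT from mirantis.lic and take its payload (the second dot-separated part).
token = open("mirantis.lic").read().strip()
payload = token.split(".")[1]

# base64url-decode the payload, restoring the padding that JWT encoding strips.
decoded = base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4))
print(json.dumps(json.loads(decoded), indent=2))
EOF

The output must contain the mandatory license field shown in the example above.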
Prepare the OpenStack configuration for a new cluster:
Log in to the OpenStack Horizon.
In the Project section, select API Access.
In the right-side drop-down menu Download OpenStack RC File, select OpenStack clouds.yaml File.
Save the downloaded clouds.yaml file in the kaas-bootstrap folder created by the get_container_cloud.sh script.

In clouds.yaml, add the password field with your OpenStack password under the clouds/openstack/auth section.

Example:
clouds:
  openstack:
    auth:
      auth_url: https://auth.openstack.example.com:5000/v3
      username: your_username
      password: your_secret_password
      project_id: your_project_id
      user_domain_name: your_user_domain_name
    region_name: RegionOne
    interface: public
    identity_api_version: 3
Verify access to the target cloud endpoint from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
  curl https://auth.openstack.example.com:5000/v3"
The system output must contain no error records.
In case of issues, follow the steps provided in Troubleshooting.
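Additionally, if the OpenStack command-line client is installed on the bootstrap node, you can verify the credentials from clouds.yaml directly. A sketch, assuming the cloud entry is named openstack as in the example above:

# The client reads clouds.yaml from the current directory, among other standard
# locations; OS_CLOUD selects the named cloud entry.
export OS_CLOUD=openstack
openstack token issue

A successfully issued token confirms that the authentication parameters in clouds.yaml are valid.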
Configure the cluster and machines metadata:
In templates/machines.yaml.template, modify the spec:providerSpec:value section for 3 control plane nodes marked with the cluster.sigs.k8s.io/control-plane label by substituting the flavor and image parameters with the corresponding values of the control plane nodes in the related OpenStack cluster. For example:

spec: &cp_spec
  providerSpec:
    value:
      apiVersion: "openstackproviderconfig.k8s.io/v1alpha1"
      kind: "OpenstackMachineProviderSpec"
      flavor: kaas.minimal
      image: bionic-server-cloudimg-amd64-20190612
Note

The flavor parameter value provided in the example above is cloud-specific and must meet the Container Cloud requirements.

Also, modify other parameters as required.
Modify the templates/cluster.yaml.template parameters to fit your deployment. For example, add the corresponding values for cidrBlocks in the spec:clusterNetwork:services section, as in the sketch below.
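A minimal sketch of the services CIDR configuration; the layout follows the standard Cluster API clusterNetwork convention and the CIDR value below is a placeholder:

spec:
  clusterNetwork:
    services:
      cidrBlocks:
        # Placeholder range; use a CIDR that does not overlap your existing networks.
        - 10.96.0.0/16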
Optional. Configure backups for the MariaDB database as described in Configure periodic backups of MariaDB for AWS and OpenStack providers.
Optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the management cluster is being provisioned. Otherwise, this step is mandatory.

Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/cluster.yaml.template, add the ntp:servers section with the list of required server names:

spec:
  ...
  providerSpec:
    value:
      kaas:
        ...
        regional:
        - helmReleases:
          - name: openstack-provider
            values:
              config:
                lcm:
                  ...
                  ntp:
                    servers:
                    - 0.pool.ntp.org
                    ...
          provider: openstack
        ...
Optional. If you require all Internet access to go through a proxy server, in bootstrap.env, add the following environment variables to bootstrap the management and regional cluster using the proxy:

HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Example snippet:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
The following variable formats are accepted:
Proxy configuration data

Variable                    Format
HTTP_PROXY, HTTPS_PROXY     http://proxy.example.com:port - for anonymous access
                            http://user:password@proxy.example.com:port - for restricted access
NO_PROXY                    Comma-separated list of IP addresses or domain names
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for an OpenStack-based cluster.
Optional. Configure external identity provider for IAM.
Run the bootstrap script:
./bootstrap.sh all
In case of deployment issues, refer to Troubleshooting and inspect logs.
If the script fails for an unknown reason:
Run the cleanup script:
./bootstrap.sh cleanup
Rerun the bootstrap script.
When the bootstrap is complete, collect and save the following management cluster details in a secure location:
The kubeconfig file located in the same directory as the bootstrap script. This file contains the admin credentials for the management cluster (a usage sketch follows this list).

The private ssh_key for access to the management cluster nodes that is located in the same directory as the bootstrap script.

Note

If the initial version of your Container Cloud management cluster was earlier than 2.6.0, ssh_key is named openstack_tmp and is located at ~/.ssh/.

The URL for the Container Cloud web UI.
To create users with permissions required for accessing the Container Cloud web UI, see Create initial users after a management cluster bootstrap.
The StackLight endpoints. For details, see Access StackLight web UIs.
The Keycloak URL that the system outputs when the bootstrap completes. The admin password for Keycloak is located in kaas-bootstrap/passwords.yml along with other IAM passwords.
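For example, to verify access to the new management cluster using the collected kubeconfig (a sketch, assuming kubectl is installed on the bootstrap node):

# Run from the kaas-bootstrap directory where kubeconfig was created.
export KUBECONFIG="$(pwd)/kubeconfig"
kubectl get nodes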
Note
The Container Cloud web UI and StackLight endpoints are available through Transport Layer Security (TLS) and communicate with Keycloak to authenticate users. Keycloak is exposed using HTTPS and self-signed TLS certificates that are not trusted by web browsers.
To use your own TLS certificates for Keycloak, refer to Configure TLS certificates for management cluster applications.
Note
When the bootstrap is complete, the bootstrap cluster resources are freed up.
Optional. Deploy an additional regional cluster as described in Deploy an additional regional cluster (optional).
Now, you can proceed with operating your management cluster using the Container Cloud web UI and deploying managed clusters as described in Create and operate an OpenStack-based managed cluster.