Deploy an OpenStack-based regional cluster
Unsupported since 2.25.0
Regional clusters are unsupported since Container Cloud 2.25.0. Mirantis does not perform functional integration testing of the feature and intends to remove the related code in Container Cloud 2.26.0. If you still require this feature, contact Mirantis support for further information.
You can deploy an additional regional OpenStack-based cluster to create managed clusters of several provider types or with different configurations.
To deploy an OpenStack-based regional cluster:
Log in to the node where you bootstrapped a management cluster.
Verify that the bootstrap directory is updated.
Select from the following options:
For clusters deployed using Container Cloud 2.11.0 or later:
./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
  --target-dir <pathToBootstrapDirectory>
For clusters deployed using the Container Cloud release earlier than 2.11.0 or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
Prepare the OpenStack configuration for a new regional cluster:
Log in to the OpenStack Horizon.
In the Project section, select API Access.
In the right-side drop-down menu Download OpenStack RC File, select OpenStack clouds.yaml File.
Save the downloaded clouds.yaml file in the kaas-bootstrap folder created by the get_container_cloud.sh script.
In clouds.yaml, add the password field with your OpenStack password under the clouds::openstack::auth section:
clouds:
  openstack:
    auth:
      auth_url: https://auth.openstack.example.com/v3
      username: your_username
      password: your_secret_password
      project_id: your_project_id
      user_domain_name: your_user_domain_name
    region_name: RegionOne
    interface: public
    identity_api_version: 3
If you deploy Container Cloud on top of MOSK Victoria with Tungsten Fabric and use the default security group for newly created load balancers, add the following rules for the Kubernetes API server endpoint, Container Cloud application endpoint, and for the MKE web UI and API using the OpenStack CLI:
'443' for Kubernetes API and Container Cloud application endpoints
'6443' for MKE web UI and API
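For illustration, such a rule can be created with the OpenStack CLI as in the following sketch. The security group name default, the 0.0.0.0/0 remote prefix, and the ingress/TCP attributes are assumptions for a publicly reachable endpoint; adjust them to your environment and repeat the command with --dst-port 6443 for MKE:

# Hypothetical example: allow inbound TCP traffic to port 443 in the default security group
openstack security group rule create default \
  --ingress \
  --ethertype IPv4 \
  --protocol tcp \
  --remote-ip 0.0.0.0/0 \
  --dst-port 443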
Verify access to the target cloud endpoint from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
  curl https://auth.openstack.example.com/v3"
The system output must contain no error records.
In case of issues, follow the steps provided in Troubleshooting.
Configure the cluster and machines metadata:
Adjust the templates/cluster.yaml.template parameters to suit your deployment:
In the spec::providerSpec::value section, add the mandatory ExternalNetworkID parameter, which is the ID of an external OpenStack network. It is required to have public Internet access to the virtual machines.
In the spec::clusterNetwork::services section, add the corresponding values for cidrBlocks.
Configure other parameters as required.
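As an illustrative sketch only, the relevant fragment of templates/cluster.yaml.template could look similar to the following. The CIDR and the network ID are placeholder values, and the exact spelling and nesting of the external network key must match what your template already provides for the ExternalNetworkID parameter:

spec:
  clusterNetwork:
    services:
      cidrBlocks:
      - 10.96.0.0/16                           # placeholder service network CIDR
  providerSpec:
    value:
      ...
      externalNetworkId: <externalNetworkID>   # ID of an external OpenStack network (placeholder)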
In templates/machines.yaml.template, modify the spec:providerSpec:value section for 3 control plane nodes marked with the cluster.sigs.k8s.io/control-plane label by substituting the flavor and image parameters with the corresponding values of the control plane nodes in the related OpenStack cluster. For example:
spec: &cp_spec
  providerSpec:
    value:
      apiVersion: "openstackproviderconfig.k8s.io/v1alpha1"
      kind: "OpenstackMachineProviderSpec"
      flavor: kaas.minimal
      image: bionic-server-cloudimg-amd64-20190612
The flavor parameter value provided in the example above is cloud-specific and must meet the Container Cloud requirements.
Also, modify other parameters as required.
Available since Container Cloud 2.24.0. Optional. Technology Preview. Enable custom host names for cluster machines. When enabled, any machine host name in a particular region matches the related Machine object name. For example, instead of the default kaas-node-<UID>, a machine host name will be master-0. The custom naming format is more convenient and easier to operate with.
To enable the feature on the management and its future managed clusters:
In templates/cluster.yaml.template, find the spec.providerSpec.value.kaas.regional section of the required region.
In this section, find the required provider entry under helmReleases and add customHostnamesEnabled: true to its values.config section. For example, for the bare metal provider:
regional:
- helmReleases:
  - name: baremetal-provider
    values:
      config:
        allInOneAllowed: false
        customHostnamesEnabled: true
        internalLoadBalancers: false
  provider: baremetal-provider
Add the following environment variable:
Optional. Available as TechPreview. To boot cluster machines from a block storage volume, define the following parameter in the spec:providerSpec section of templates/machines.yaml.template:
bootFromVolume:
  enabled: true
  volumeSize: 120
The minimal storage requirement is 120 GB per node. For details, see Requirements for an OpenStack-based cluster.
To boot the Bastion node from a volume, add the same parameter to the spec:providerSpec section for Bastion in templates/cluster.yaml.template. The default amount of storage is enough for the Bastion node.
Optional. Available since Container Cloud 2.24.0 as Technology Preview. Create all load balancers of the cluster with a specific Octavia flavor by defining the following parameter in the spec:providerSpec section of templates/cluster.yaml.template:
serviceAnnotations:
  loadbalancer.openstack.org/flavor-id: <octaviaFlavorID>
For details, see OpenStack documentation: Octavia Flavors.
This feature is not supported by OpenStack Queens.
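To obtain a value for <octaviaFlavorID>, you can, for example, list the available Octavia flavors using the OpenStack CLI. This assumes the Octavia CLI plugin is installed and your cloud credentials are sourced:

# List available Octavia load balancer flavors and their IDs
openstack loadbalancer flavor list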
Configure NTP server.
Before Container Cloud 2.23.0, optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the regional cluster is being provisioned. Otherwise, configure the regional NTP server parameters as described below.
Since Container Cloud 2.23.0, optionally disable NTP, which is enabled by default. This option disables the management of the chrony configuration by Container Cloud so that you can use your own system for chrony management. Otherwise, configure the regional NTP server parameters as described below.
Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/cluster.yaml.template, add the ntp:servers section with the list of required server names:
spec:
  ...
  providerSpec:
    value:
      kaas:
        ...
        ntpEnabled: true
        regional:
        - helmReleases:
          - name: <providerName>-provider
            values:
              config:
                lcm:
                  ...
                  ntp:
                    servers:
                    - 0.pool.ntp.org
                    ...
          provider: <providerName>
          ...
To disable NTP:
spec:
  ...
  providerSpec:
    value:
      ...
      ntpEnabled: false
      ...
Optional. If you require all Internet access to go through a proxy server, in bootstrap.env, add the following environment variables to bootstrap the regional cluster using proxy:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:firstname.lastname@example.org:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
The following formats of variables are accepted:

HTTP_PROXY, HTTPS_PROXY
  http://proxy.example.com:port - for anonymous access.
  http://user:password@proxy.example.com:port - for restricted access.
NO_PROXY
  Comma-separated list of IP addresses or domain names.
PROXY_CA_CERTIFICATE_PATH
  Optional. Absolute path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.
If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.
For MOSK-based deployments, the parameter is generally available since MOSK 22.4.
For implementation details, see Proxy and cache support.
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for an OpenStack-based cluster.
If you are deploying the regional cluster on top of a baremetal-based management cluster, unset the following parameters:
unset KAAS_BM_ENABLED KAAS_BM_FULL_PREFLIGHT KAAS_BM_PXE_IP \
  KAAS_BM_PXE_MASK KAAS_BM_PXE_BRIDGE KAAS_BM_BM_DHCP_RANGE \
  TEMPLATES_DIR
Export the following parameters:
export KUBECONFIG=<pathToMgmtClusterKubeconfig>
export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<NewRegionName>
Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.
The REGIONAL_CLUSTER_NAME and REGION parameter values must contain only lowercase alphanumeric characters, hyphens, or periods.
If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.
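For example, where the key name placeholder is illustrative:

export SSH_KEY_NAME=<newRegionalClusterSSHKeyName>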
Run the regional cluster bootstrap script:
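For example, assuming the standard bootstrap.sh entry point with the regional deployment target; verify the exact command against your Container Cloud release:

./bootstrap.sh deploy_regional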
When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.
If the bootstrap node for the regional cluster deployment is not the same where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.
Prepare the bootstrap cluster for the new regional cluster.
Load the updated Container Cloud CRDs for Machines with information about the new regional cluster to the management cluster.
Connect to each machine of the management cluster through SSH.
Wait for the Cluster objects of the new regional cluster to be ready on the management cluster.
Load the following objects to the new regional cluster:
Secret with the management cluster kubeconfig
ClusterRole for the Container Cloud provider
Forward the bootstrap cluster endpoint to helm-controller.
Wait for all CRDs to be available and verify the objects created using these CRDs.
Pivot the Cluster API stack to the regional cluster.
Switch the LCM Agent from the bootstrap cluster to the regional one.
Wait for the Container Cloud components to start on the regional cluster.
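Optionally, as a quick check that is not part of the official procedure, verify access to the new regional cluster using the kubeconfig file saved earlier:

kubectl --kubeconfig kubeconfig-<regionalClusterName> get nodes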
Verify that network addresses used on your clusters do not overlap with the following default MKE network addresses for Swarm and MCR:
10.0.0.0/16 is used for Swarm networks. IP addresses from this network are virtual.
10.99.0.0/16 is used for MCR networks. IP addresses from this network are allocated on hosts.
Verification of Swarm and MCR network addresses
To verify Swarm and MCR network addresses, run docker info on any master node.
Example of system response:
Server:
 ...
 Swarm:
  ...
  Default Address Pool: 10.0.0.0/16
  SubnetSize: 24
  ...
 Default Address Pools:
  Base: 10.99.0.0/16, Size: 20
  ...
Not all Swarm and MCR addresses are usually in use. One Swarm Ingress network is created by default and occupies the 10.0.0.0/24 address block. Also, three MCR networks are created by default and occupy three address blocks: 10.99.0.0/20, 10.99.16.0/20, and 10.99.32.0/20.
To verify the actual networks state and addresses in use, run:
docker network ls
docker network inspect <networkName>
Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.