Regional clusters are unsupported since Container Cloud 2.25.0.
Mirantis does not perform functional integration testing of the feature and
intends to remove the related code in Container Cloud 2.26.0. If you still
require this feature, contact Mirantis support for further information.
You can deploy an additional regional OpenStack-based cluster
to create managed clusters of several provider types or with different
configurations.
To deploy an OpenStack-based regional cluster:
Log in to the node where you bootstrapped a management
cluster.
Verify that the bootstrap directory is updated.
Select from the following options:
For clusters deployed using Container Cloud 2.11.0 or later:
For clusters deployed using a Container Cloud release earlier than 2.11.0, or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:
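For example, a typical download flow; verify the URL against the Container Cloud release notes for your version:

wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh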
If you deploy Container Cloud on top of MOSK Victoria with Tungsten Fabric
and use the default security group for newly created load balancers, add the
following rules for the Kubernetes API server endpoint, Container Cloud
application endpoint, and for the MKE web UI and API using the OpenStack CLI:
direction='ingress'
ethertype='IPv4'
protocol='tcp'
remote_ip_prefix='0.0.0.0/0'
port_range_max and port_range_min:
'443' for Kubernetes API and Container Cloud application endpoints
'6443' for MKE web UI and API
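For example, a sketch of the corresponding OpenStack CLI calls, where <security-group> is the default security group used for the new load balancers; repeat per required port:

openstack security group rule create <security-group> \
  --ingress --ethertype IPv4 --protocol tcp \
  --remote-ip 0.0.0.0/0 --dst-port 443
openstack security group rule create <security-group> \
  --ingress --ethertype IPv4 --protocol tcp \
  --remote-ip 0.0.0.0/0 --dst-port 6443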
Verify access to the target cloud endpoint from Docker. For example:
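A minimal check, assuming <auth-url> is the Keystone identity endpoint of the target cloud:

docker run --rm alpine sh -c "apk add --no-cache curl && curl -sSf <auth-url>"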
In case of issues, follow the steps provided in Troubleshooting.
Configure the cluster and machines metadata:
Adjust the templates/cluster.yaml.template parameters to suit your deployment, as shown in the sketch after this list:
In the spec::providerSpec::value section, add the mandatory
ExternalNetworkID parameter that is the ID of an external
OpenStack network. It is required to have public Internet access
to virtual machines.
In the spec::clusterNetwork::services section, add the
corresponding values for cidrBlocks.
Configure other parameters as required.
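A sketch of the related fragments of templates/cluster.yaml.template; the values are placeholders, and the exact key casing must match your template:

spec:
  clusterNetwork:
    services:
      cidrBlocks:
        - 10.96.0.0/16        # adjust to a non-overlapping services CIDR
  providerSpec:
    value:
      # ID of the external OpenStack network that provides public access
      externalNetworkID: <externalNetworkID>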
In templates/machines.yaml.template,
modify the spec:providerSpec:value section for 3 control plane nodes
marked with the cluster.sigs.k8s.io/control-plane label
by substituting the flavor and image parameters
with the corresponding values of the control plane nodes in the related
OpenStack cluster. For example:
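A sketch of the related fragment of one control plane machine; the flavor and image names below are illustrative placeholders for values that exist in your cloud:

metadata:
  labels:
    cluster.sigs.k8s.io/control-plane: "true"
spec:
  providerSpec:
    value:
      flavor: kaas.minimal                  # illustrative, cloud-specific flavor
      image: focal-server-cloudimg-amd64    # illustrative Ubuntu image name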
The flavor parameter value provided in the example above
is cloud-specific and must meet the Container Cloud
requirements.
Also, modify other parameters as required.
Available since Container Cloud 2.24.0. Optional.
Technology Preview. Enable custom host names for cluster machines.
When enabled, any machine host name in a particular region matches the related
Machine object name. For example, instead of the default
kaas-node-<UID>, a machine host name will be master-0. The custom
naming format is more convenient and easier to operate with.
To enable the feature on the management cluster and its future managed clusters:
Since 2.25.0
In templates/cluster.yaml.template, find the spec.providerSpec.value.kaas.regional section of the required region.
In this section, find the required provider name under
helmReleases.
Under values.config, add customHostnamesEnabled: true.
For example, for the bare metal provider in region-one:
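A sketch of the resulting fragment; the provider and Helm release names below are illustrative and may differ in your configuration:

spec:
  providerSpec:
    value:
      kaas:
        regional:
          - provider: baremetal
            helmReleases:
              - name: baremetal-provider
                values:
                  config:
                    customHostnamesEnabled: true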
Optional. Available as Technology Preview. To boot cluster machines from a block storage volume, define the following parameter in the spec:providerSpec section of templates/machines.yaml.template:
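A sketch that assumes the parameter is named bootFromVolume with an enabled flag and a volume size in GB; verify the exact parameter name against your release:

spec:
  providerSpec:
    value:
      bootFromVolume:
        enabled: true
        volumeSize: 120   # volume size in GB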
To boot the Bastion node from a volume, add the same parameter to templates/cluster.yaml.template in the spec:providerSpec section for Bastion. The default storage size of 80 GB is sufficient.
Optional. Available since Container Cloud 2.24.0 as Technology Preview.
Create all load balancers of the cluster with a specific Octavia flavor by
defining the following parameter in the spec:providerSpec section of
templates/cluster.yaml.template:
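A sketch that assumes the flavor is passed through a load balancer service annotation; the annotation key below is an assumption to verify against your release:

spec:
  providerSpec:
    value:
      serviceAnnotations:
        loadbalancer.openstack.org/flavor-id: <octaviaFlavorID>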
This feature is not supported by OpenStack Queens.
Configure NTP server.
Before Container Cloud 2.23.0, this step is optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the regional cluster is being provisioned. Otherwise, configure the regional NTP server parameters as described below.
Since Container Cloud 2.23.0, you can optionally disable NTP, which is enabled by default. This option disables management of the chrony configuration by Container Cloud so that you can use your own system for chrony management. Otherwise, configure the regional NTP server parameters as described below.
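A sketch of such a configuration in templates/cluster.yaml.template that assumes a boolean ntpEnabled parameter; the exact parameter name may differ by release:

spec:
  providerSpec:
    value:
      ntpEnabled: false   # assumed flag: disables chrony management by Container Cloud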
NTP configuration
Configure the regional NTP server parameters to be applied to all machines
of regional and managed clusters in the specified region.
In templates/cluster.yaml.template, add the ntp:servers section
with the list of required server names:
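For example, where the server names are placeholders for your NTP servers:

ntp:
  servers:
    - 0.pool.ntp.org
    - 1.pool.ntp.org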
Optional. If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the regional cluster through the proxy:
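A sketch of the proxy-related variables; the standard proxy variable names below are assumptions to verify against the exact set required by your release:

HTTP_PROXY=http://<proxyHost>:<proxyPort>
HTTPS_PROXY=http://<proxyHost>:<proxyPort>
NO_PROXY=<comma-separated IPs and domains that must bypass the proxy>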
Substitute the parameters enclosed in angle brackets with the corresponding
values of your cluster.
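For reference, a sketch of the export commands that the following caution refers to; additional variables may be required depending on your release:

export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<newRegionName>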
Caution
The REGION and REGIONAL_CLUSTER_NAME parameter values must contain only lowercase alphanumeric characters, hyphens, or periods.
Note
If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.
export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
Run the regional cluster bootstrap script:
./bootstrap.sh deploy_regional
Note
When the bootstrap is complete, obtain and save in a secure location
the kubeconfig-<regionalClusterName> file
located in the same directory as the bootstrap script.
This file contains the admin credentials for the regional cluster.
If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, a new regional ssh_key is generated. Make sure to save this key in a secure location as well.
The workflow of the regional cluster bootstrap script:
1. Prepare the bootstrap cluster for the new regional cluster.
2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.
3. Connect to each machine of the management cluster through SSH.
4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.
5. Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider.
6. Forward the bootstrap cluster endpoint to helm-controller.
7. Wait for all CRDs to be available and verify the objects created using these CRDs.
8. Pivot the Cluster API stack to the regional cluster.
9. Switch the LCM Agent from the bootstrap cluster to the regional one.
10. Wait for the Container Cloud components to start on the regional cluster.
Verify that network addresses used on your clusters do not overlap with
the following default MKE network addresses for Swarm and MCR:
10.0.0.0/16 is used for Swarm networks. IP addresses from this
network are virtual.
10.99.0.0/16 is used for MCR networks. IP addresses from this
network are allocated on hosts.
Verification of Swarm and MCR network addresses
To verify Swarm and MCR network addresses, run on any master node:
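A sketch of such a check using standard Docker commands; the inspected fields are assumptions that may vary by MCR version:

# Default address pool configured for Swarm overlay networks
docker info --format '{{json .Swarm.Cluster.DefaultAddrPool}}'
# Default address pools configured for MCR local networks
docker info --format '{{json .DefaultAddressPools}}'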
Typically, not all Swarm and MCR addresses are in use. One Swarm Ingress network is created by default and occupies the 10.0.0.0/24 address block. Also, three MCR networks are created by default and occupy three address blocks: 10.99.0.0/20, 10.99.16.0/20, and 10.99.32.0/20.
To verify the actual networks state and addresses in use, run:
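For example, a sketch that lists the existing Docker networks and the subnets they occupy:

docker network ls
docker network inspect --format '{{.Name}}: {{json .IPAM.Config}}' $(docker network ls -q)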