Deploy a baremetal-based regional cluster
You can deploy an additional regional baremetal-based cluster to create managed clusters of several provider types or with different configurations within a single Container Cloud deployment.
To deploy a baremetal-based regional cluster:
Log in to the node where you bootstrapped the Container Cloud management cluster.
Verify that the bootstrap directory is updated.
Select from the following options:
For clusters deployed using Container Cloud 2.11.0 or later:

```bash
./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
  --target-dir <pathToBootstrapDirectory>
```
For clusters deployed using a Container Cloud release earlier than 2.11.0, or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:

```bash
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
```
Prepare the bare metal configuration for the new regional cluster:
Create a virtual bridge to connect to your PXE network on the seed node. Use the following netplan-based configuration file as an example:

```yaml
# cat /etc/netplan/config.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      addresses:
        # Please, adjust for your environment
        - 10.0.0.15/24
      dhcp4: false
      dhcp6: false
      # Please, adjust for your environment
      gateway4: 10.0.0.1
      interfaces:
        # Interface name may be different in your environment
        - ens3
      nameservers:
        addresses:
          # Please, adjust for your environment
          - 8.8.8.8
      parameters:
        forward-delay: 4
        stp: false
```
Apply the new network configuration using netplan:

```bash
sudo netplan apply
```
Verify the new network configuration:

```bash
sudo apt update && sudo apt install -y bridge-utils
sudo brctl show
```

Example of system response:

```
bridge name     bridge id           STP enabled     interfaces
br0             8000.fa163e72f146   no              ens3
```
Verify that the interface connected to the PXE network belongs to the previously configured bridge.
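Optionally, you can also confirm the bridge addressing and routing with iproute2. This is a minimal sketch that assumes the example values from the netplan file above (address 10.0.0.15/24, gateway 10.0.0.1, interface ens3); adjust them for your environment.

```bash
# Show the address assigned to the PXE bridge (10.0.0.15/24 in this example)
ip -br addr show br0

# Show the default route (via 10.0.0.1 in this example)
ip route show default

# List the interfaces enslaved to the bridge (ens3 in this example)
ip link show master br0
```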
Install the current Docker version available for Ubuntu 20.04:

```bash
sudo apt update
sudo apt install docker.io
```
Verify that your logged-in USER has access to the Docker daemon by adding the user to the docker group:

```bash
sudo usermod -aG docker $USER
```
Log out and log in again to the seed node to apply the changes.
Verify that Docker is configured correctly and has access to the Container Cloud CDN. For example:

```bash
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://binary.mirantis.com"
```

The system output must contain a JSON file with no error messages. In case of errors, follow the steps provided in Troubleshooting.

Note
If you require all Internet access to go through a proxy server for security and audit purposes, configure Docker proxy settings as described in the official Docker documentation.
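For reference, a common approach described in the Docker documentation is a systemd drop-in for the Docker service. The sketch below assumes a proxy at proxy.example.com:3128; adjust the values for your environment.

```bash
# Create a systemd drop-in with the proxy variables for the Docker daemon
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF

# Reload systemd and restart Docker to apply the proxy settings
sudo systemctl daemon-reload
sudo systemctl restart docker
```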
Prepare the deployment configuration files that contain the cluster and machines metadata:
Create a copy of the current templates directory for future reference:

```bash
mkdir templates.backup
cp -r templates/* templates.backup/
```
Update the cluster definition template in templates/bm/cluster.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_, for example, SET_METALLB_ADDR_POOL. A substitution sketch follows the table.

Cluster template mandatory parameters

| Parameter | Description | Example value |
| --- | --- | --- |
| SET_LB_HOST | The IP address of the externally accessible API endpoint of the cluster. This address must NOT be within the SET_METALLB_ADDR_POOL range but must be within the PXE/Management network. External load balancers are not supported. | 10.0.0.90 |
| SET_METALLB_ADDR_POOL | The IP address range to be used as external load balancers for the Kubernetes services with the LoadBalancer type. This range must be within the PXE/Management network. The minimum required range is 19 IP addresses. | 10.0.0.61-10.0.0.80 |
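One way to perform the substitution is with sed. This is a sketch only, using the example values from the table above; replace them with the values for your environment.

```bash
# Substitute the mandatory placeholders in the cluster template (example values)
sed -i \
  -e 's/SET_LB_HOST/10.0.0.90/g' \
  -e 's/SET_METALLB_ADDR_POOL/10.0.0.61-10.0.0.80/g' \
  templates/bm/cluster.yaml.template
```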
Configure the NTP server.

Before Container Cloud 2.23.0, this step is optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where your cluster is being provisioned. Otherwise, configure the regional NTP server parameters as described below.

Since Container Cloud 2.23.0, you can optionally disable NTP, which is enabled by default. This option disables the management of the chrony configuration by Container Cloud so that you can use your own system for chrony management. Otherwise, configure the regional NTP server parameters as described below.

NTP configuration
Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

In templates/bm/cluster.yaml.template, add the ntp:servers section with the list of required server names:

```yaml
spec:
  ...
  providerSpec:
    value:
      kaas:
        ...
        ntpEnabled: true
        regional:
        - helmReleases:
          - name: <providerName>-provider
            values:
              config:
                lcm:
                  ...
                  ntp:
                    servers:
                    - 0.pool.ntp.org
                    ...
          provider: <providerName>
          ...
```
To disable NTP:

```yaml
spec:
  ...
  providerSpec:
    value:
      ...
      ntpEnabled: false
      ...
```
Inspect the default bare metal host profile definition in templates/bm/baremetalhostprofiles.yaml.template. If your hardware configuration differs from the reference, adjust the default profile to match. For details, see Customize the default bare metal host profile.

Warning

All data will be wiped during cluster deployment on devices defined directly or indirectly in the fileSystems list of BareMetalHostProfile. For example:

- A raw device partition with a file system on it
- A device partition in a volume group with a logical volume that has a file system on it
- An mdadm RAID device with a file system on it
- An LVM RAID device with a file system on it

The wipe field is always considered true for these devices. The false value is ignored.

Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.
Update the bare metal hosts definition template in templates/bm/baremetalhosts.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_.

Bare metal hosts template mandatory parameters

| Parameter | Description | Example value |
| --- | --- | --- |
| SET_MACHINE_0_IPMI_USERNAME | The IPMI user name to access the BMC. [0] | user |
| SET_MACHINE_0_IPMI_PASSWORD | The IPMI password to access the BMC. [0] | password |
| SET_MACHINE_0_MAC | The MAC address of the first master node in the PXE network. | ac:1f:6b:02:84:71 |
| SET_MACHINE_0_BMC_ADDRESS | The IP address of the BMC endpoint for the first master node in the cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway. | 192.168.100.11 |
| SET_MACHINE_1_IPMI_USERNAME | The IPMI user name to access the BMC. [0] | user |
| SET_MACHINE_1_IPMI_PASSWORD | The IPMI password to access the BMC. [0] | password |
| SET_MACHINE_1_MAC | The MAC address of the second master node in the PXE network. | ac:1f:6b:02:84:72 |
| SET_MACHINE_1_BMC_ADDRESS | The IP address of the BMC endpoint for the second master node in the cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway. | 192.168.100.12 |
| SET_MACHINE_2_IPMI_USERNAME | The IPMI user name to access the BMC. [0] | user |
| SET_MACHINE_2_IPMI_PASSWORD | The IPMI password to access the BMC. [0] | password |
| SET_MACHINE_2_MAC | The MAC address of the third master node in the PXE network. | ac:1f:6b:02:84:73 |
| SET_MACHINE_2_BMC_ADDRESS | The IP address of the BMC endpoint for the third master node in the cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway. | 192.168.100.13 |

[0] Since Container Cloud 2.21.0, a user name and password in plain text are required. Before Container Cloud 2.21.0, the Base64 encoding of a user name and password is required. You can obtain the Base64-encoded user name and password using the following command in your Linux console:

```bash
$ echo -n <username|password> | base64
```
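For example, encoding the example user name from the table above:

```bash
echo -n user | base64
# dXNlcg==
```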
Update the Subnet objects definition template in templates/bm/ipam-objects.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_, for example, SET_IPAM_POOL_RANGE. A check for leftover placeholders follows the table.

IP address pools template mandatory parameters

| Parameter | Description | Example value |
| --- | --- | --- |
| SET_IPAM_CIDR | The address of the PXE network in CIDR notation. Must be at least a /24 network. | 10.0.0.0/24 |
| SET_PXE_NW_GW | The default gateway in the PXE network. Since this is the only network that the cluster uses by default, this gateway must provide access to the Internet to download the Mirantis artifacts and to the OOB network of the Container Cloud cluster. | 10.0.0.1 |
| SET_PXE_NW_DNS | An external (non-Kubernetes) DNS server accessible from the PXE network. | 8.8.8.8 |
| SET_IPAM_POOL_RANGE | This IP address range includes addresses that will be allocated in the PXE/Management network to bare metal hosts of the cluster. | 10.0.0.100-10.0.0.252 |
| SET_LB_HOST | The IP address of the externally accessible API endpoint of the cluster. This address must NOT be within the SET_METALLB_ADDR_POOL range but must be within the PXE/Management network. External load balancers are not supported. | 10.0.0.90 |
| SET_METALLB_ADDR_POOL | The IP address range to be used as external load balancers for the Kubernetes services with the LoadBalancer type. This range must be within the PXE/Management network. The minimum required range is 19 IP addresses. | 10.0.0.61-10.0.0.80 |
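After setting the values, a quick sanity check (a sketch, assuming the templates/bm layout shown later in this procedure) is to confirm that no SET_ placeholders remain in the bare metal templates:

```bash
# List any remaining SET_ placeholders; the grep should produce no matches when all values are set
grep -rn 'SET_[A-Z_]*' templates/bm/ || echo "No placeholders left"
```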
Optional. To configure the separated PXE and management networks instead of one PXE/management network, proceed to Separate PXE and management networks.
Optional. To connect the cluster hosts to the PXE/Management network using bond interfaces, proceed to Configure NIC bonding.
If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the cluster using the proxy:

- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
- PROXY_CA_CERTIFICATE_PATH

Example snippet:

```bash
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
```
The following formats of variables are accepted:

Proxy configuration data

| Variable | Format |
| --- | --- |
| HTTP_PROXY, HTTPS_PROXY | http://proxy.example.com:port for anonymous access, or http://user:password@proxy.example.com:port for restricted access. |
| NO_PROXY | Comma-separated list of IP addresses or domain names. |
| PROXY_CA_CERTIFICATE_PATH | Optional. Absolute path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node. |
Warning
If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.
Note
For MOSK-based deployments, the parameter is generally available since MOSK 22.4.
For implementation details, see Proxy and cache support.
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for a baremetal-based cluster.
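Optionally, you can confirm from the seed node that the Mirantis CDN is reachable through the configured proxy. This is a sketch only; it assumes you run it from the directory containing bootstrap.env with the variables shown above (curl honors HTTPS_PROXY and NO_PROXY).

```bash
# Load the proxy variables and check connectivity to the Container Cloud CDN
source bootstrap.env
curl --silent --show-error --head https://binary.mirantis.com
```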
Verify that the kaas-bootstrap directory contains the following files:

```
# tree ~/kaas-bootstrap
~/kaas-bootstrap/
....
├── bootstrap.sh
├── kaas
├── mirantis.lic
├── releases
...
├── templates
....
│   ├── bm
│   │   ├── baremetalhostprofiles.yaml.template
│   │   ├── baremetalhosts.yaml.template
│   │   ├── cluster.yaml.template
│   │   ├── ipam-objects.yaml.template
│   │   └── machines.yaml.template
....
├── templates.backup
....
```

Note

Before Container Cloud 2.20.0, kaas-bootstrap/templates/bm must also contain kaascephcluster.yaml.template.

Export all required parameters using the table below:
```bash
export KAAS_BM_ENABLED="true"
#
export KAAS_BM_PXE_IP="10.0.0.20"
export KAAS_BM_PXE_MASK="24"
export KAAS_BM_PXE_BRIDGE="br0"
#
export KAAS_BM_BM_DHCP_RANGE="10.0.0.30,10.0.0.49,255.255.255.0"
export BOOTSTRAP_METALLB_ADDRESS_POOL="10.0.0.61-10.0.0.80"
#
unset KAAS_BM_FULL_PREFLIGHT
```
Bare metal prerequisites data

| Parameter | Description | Example value |
| --- | --- | --- |
| KAAS_BM_PXE_IP | The provisioning IP address. This address will be assigned to the interface of the seed node defined by the KAAS_BM_PXE_BRIDGE parameter (see below). The PXE service of the bootstrap cluster will use this address to network boot the bare metal hosts for the cluster. | 10.0.0.20 |
| KAAS_BM_PXE_MASK | The CIDR prefix for the PXE network. It will be used with the KAAS_BM_PXE_IP address when assigning it to the network interface. | 24 |
| KAAS_BM_PXE_BRIDGE | The PXE network bridge name. The name must match the name of the bridge created on the seed node during the Prepare the seed node stage. | br0 |
| KAAS_BM_BM_DHCP_RANGE | The start_ip and end_ip addresses must be within the PXE network. This range will be used by dnsmasq to provide IP addresses for nodes during provisioning. | 10.0.0.30,10.0.0.49,255.255.255.0 |
| BOOTSTRAP_METALLB_ADDRESS_POOL | The pool of IP addresses that will be used by services in the bootstrap cluster. Can be the same as the SET_METALLB_ADDR_POOL range for the cluster, or a different range. | 10.0.0.61-10.0.0.80 |
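Optionally, before running the preflight check in the next step, you can confirm that the variables are exported and that the bridge named in KAAS_BM_PXE_BRIDGE exists on the seed node. A minimal sketch:

```bash
# Print the exported bare metal bootstrap variables (<unset> marks missing ones)
for v in KAAS_BM_ENABLED KAAS_BM_PXE_IP KAAS_BM_PXE_MASK KAAS_BM_PXE_BRIDGE \
         KAAS_BM_BM_DHCP_RANGE BOOTSTRAP_METALLB_ADDRESS_POOL; do
  printf '%s=%s\n' "$v" "${!v:-<unset>}"
done

# Verify that the PXE bridge exists
ip link show "$KAAS_BM_PXE_BRIDGE"
```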
Run the preflight verification script to validate the deployment templates configuration:

```bash
./bootstrap.sh preflight
```

The command outputs a human-readable report with the verification details. The report includes the list of verified bare metal nodes and their Chassis Power status. This status is based on the deployment templates configuration used during the verification.

Caution

If the report contains information about missing dependencies or incorrect configuration, fix the issues before proceeding to the next step.
Verify that the vSphere provider selection parameter is unset:

```bash
unset KAAS_VSPHERE_ENABLED
```
Export the following parameters:

```bash
export KAAS_BM_ENABLED=true
export KUBECONFIG=<pathToMgmtClusterKubeconfig>
export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<NewRegionName>
```
Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.
Caution

The REGION and REGIONAL_CLUSTER_NAME parameter values must contain only lowercase alphanumeric characters, hyphens, or periods. A quick check of both names is shown after the note below.

Note

If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster:

```bash
export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
```
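A quick way to confirm that the names satisfy the character restriction from the caution above (a sketch, assuming the variables are already exported):

```bash
# Each name must contain only lowercase alphanumerics, hyphens, or periods
for name in "$REGION" "$REGIONAL_CLUSTER_NAME"; do
  [[ "$name" =~ ^[a-z0-9.-]+$ ]] && echo "ok: $name" || echo "invalid: $name"
done
```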
Run the regional cluster bootstrap script:

```bash
./bootstrap.sh deploy_regional
```
Note

When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.

The workflow of the regional cluster bootstrap script

| # | Description |
| --- | --- |
| 1 | Prepare the bootstrap cluster for the new regional cluster. |
| 2 | Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster. |
| 3 | Connect to each machine of the management cluster through SSH. |
| 4 | Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster. |
| 5 | Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider. |
| 6 | Forward the bootstrap cluster endpoint to helm-controller. |
| 7 | Wait for all CRDs to be available and verify the objects created using these CRDs. |
| 8 | Pivot the Cluster API stack to the regional cluster. |
| 9 | Switch the LCM Agent from the bootstrap cluster to the regional one. |
| 10 | Wait for the Container Cloud components to start on the regional cluster. |
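Once the kubeconfig-<regionalClusterName> file is saved, you can optionally verify access to the new regional cluster with kubectl. This is a sketch, assuming kubectl is installed on the bootstrap node; substitute the actual cluster name.

```bash
# List the nodes of the new regional cluster using its admin kubeconfig
kubectl --kubeconfig kubeconfig-<regionalClusterName> get nodes
```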
Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.