Deploy a baremetal-based regional cluster
Available since 2.16.0
You can deploy an additional regional baremetal-based cluster to create managed clusters of several provider types or with different configurations within a single Container Cloud deployment.
To deploy a baremetal-based regional cluster:
Log in to the node where you bootstrapped the Container Cloud management cluster.
Verify that the bootstrap directory is updated.
Select from the following options:
For clusters deployed using Container Cloud 2.11.0 or later:

```shell
./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
--target-dir <pathToBootstrapDirectory>
```

For clusters deployed using a Container Cloud release earlier than 2.11.0, or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:

```shell
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
```
Prepare the bare metal configuration for the new regional cluster:
Create a virtual bridge to connect to your PXE network on the seed node. Use the following netplan-based configuration file as an example:

```yaml
# cat /etc/netplan/config.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens3:
      dhcp4: false
      dhcp6: false
  bridges:
    br0:
      addresses:
        # Adjust for your environment
        - 10.0.0.15/24
      dhcp4: false
      dhcp6: false
      # Adjust for your environment
      gateway4: 10.0.0.1
      interfaces:
        # The interface name may differ in your environment
        - ens3
      nameservers:
        addresses:
          # Adjust for your environment
          - 8.8.8.8
      parameters:
        forward-delay: 4
        stp: false
```
Apply the new network configuration using netplan:

```shell
sudo netplan apply
```
Verify the new network configuration:

```shell
sudo brctl show
```

Example of system response:

```
bridge name     bridge id           STP enabled     interfaces
br0             8000.fa163e72f146   no              ens3
```
Verify that the interface connected to the PXE network belongs to the previously configured bridge.
Install the current Docker version available for Ubuntu 20.04:

```shell
sudo apt install docker.io
```
Grant your current user access to the Docker daemon:

```shell
sudo usermod -aG docker $USER
```
Log out and log in again to the seed node to apply the changes.
Verify that Docker is configured correctly and has access to the Container Cloud CDN. For example:

```shell
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://binary.mirantis.com"
```

The system output must contain a JSON response with no error messages. In case of errors, follow the steps provided in Troubleshooting.

Note
If you require all Internet access to go through a proxy server for security and audit purposes, configure Docker proxy settings as described in the official Docker documentation.
Verify that the seed node has direct access to the Baseboard Management Controller (BMC) of each bare metal host. All target hardware nodes must be in the power off state.

For example, using the IPMI tool:

```shell
ipmitool -I lanplus -H 'IPMI IP' -U 'IPMI Login' -P 'IPMI password' \
chassis power status
```

Example of system response:

```
Chassis Power is off
```
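When the cluster has several target nodes, the same power check can be repeated per BMC with a small wrapper. The following is an illustrative sketch only: is_powered_off and check_bmc are hypothetical helpers, the credentials are placeholders, and ipmitool is assumed to be installed on the seed node.

```shell
# Sketch: verify that each target node reports "Chassis Power is off".
# Helper names, addresses, and credentials are placeholders.

is_powered_off() {
    # True when an ipmitool "chassis power status" response reports power off.
    case "$1" in
        *"Chassis Power is off"*) return 0 ;;
        *) return 1 ;;
    esac
}

check_bmc() {
    # Query one BMC; assumes ipmitool is installed on the seed node.
    host="$1"; user="$2"; pass="$3"
    status="$(ipmitool -I lanplus -H "$host" -U "$user" -P "$pass" \
        chassis power status)"
    if is_powered_off "$status"; then
        echo "$host: OK, powered off"
    else
        echo "$host: NOT ready, $status"
    fi
}

# Example (placeholder values):
# check_bmc 192.168.100.11 'IPMI Login' 'IPMI password'
```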
Prepare the deployment configuration files that contain the cluster and machines metadata, including Ceph configuration:
Create a copy of the current templates directory for future reference:

```shell
mkdir templates.backup
cp -r templates/* templates.backup/
```
Update the cluster definition template in templates/bm/cluster.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_, for example, SET_METALLB_ADDR_POOL.

Cluster template mandatory parameters

| Parameter | Description | Example value |
|---|---|---|
| SET_LB_HOST | The IP address of the externally accessible API endpoint of the cluster. This address must NOT be within the SET_METALLB_ADDR_POOL range but must be within the PXE/Management network. External load balancers are not supported. | 10.0.0.90 |
| SET_METALLB_ADDR_POOL | The IP address range to be used as external load balancers for the Kubernetes services with the LoadBalancer type. This range must be within the PXE/Management network. The minimum required range is 19 IP addresses. | 10.0.0.61-10.0.0.80 |
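Before setting SET_METALLB_ADDR_POOL, you can sanity-check that a candidate range meets the 19-address minimum. This is an illustrative sketch, not part of the bootstrap tooling; ip_to_int and pool_size are hypothetical helpers for a simple start-end IPv4 range.

```shell
# Sketch: count the addresses in a MetalLB range such as "10.0.0.61-10.0.0.80"
# and verify it meets the 19-address minimum from the table above.

ip_to_int() {
    # Convert a dotted-quad IPv4 address to an integer.
    IFS=. read -r a b c d <<EOF
$1
EOF
    echo $(( a * 16777216 + b * 65536 + c * 256 + d ))
}

pool_size() {
    # Number of addresses in an inclusive "start-end" range.
    start="${1%-*}"
    end="${1#*-}"
    echo $(( $(ip_to_int "$end") - $(ip_to_int "$start") + 1 ))
}

size="$(pool_size 10.0.0.61-10.0.0.80)"
echo "Pool size: $size"   # prints: Pool size: 20
[ "$size" -ge 19 ] && echo "The range satisfies the 19-address minimum"
```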
Optional. If you plan to use multiple L2 segments for provisioning of managed cluster nodes, consider the requirements specified in Configure multiple DHCP ranges using Subnet resources.
Optional. Override the default dnsmasq settings.

The dnsmasq configuration options dhcp-option=3 and dhcp-option=6 are absent in the default configuration. Therefore, by default, dnsmasq sends the DNS server and default route to DHCP clients as defined in the official dnsmasq documentation:

- The netmask and broadcast address are the same as on the host running dnsmasq.
- The DNS server and default route are set to the address of the host running dnsmasq.
- If the domain name option is set, this name is sent to DHCP clients.

If such behavior is not desirable during the cluster deployment, add the corresponding DHCP options, such as a specific gateway address and DNS addresses, using the dnsmasq.dnsmasq_extra_opts parameter for the baremetal-operator release in templates/bm/cluster.yaml.template:

```yaml
providerSpec:
  value:
    kind: BaremetalClusterProviderSpec
    ...
    kaas:
      regional:
        - provider: baremetal
          helmReleases:
            - name: baremetal-operator
              values:
                dnsmasq:
                  dnsmasq_extra_opts:
                    - dhcp-option=3
                    - dhcp-option=6
```
Optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where your cluster is being provisioned. Otherwise, this step is mandatory.

Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

In templates/bm/cluster.yaml.template, add the ntp:servers section with the list of required server names:

```yaml
spec:
  ...
  providerSpec:
    value:
      kaas:
        ...
        regional:
          - helmReleases:
              - name: baremetal-provider
                values:
                  config:
                    lcm:
                      ...
                      ntp:
                        servers:
                          - 0.pool.ntp.org
                          ...
            provider: baremetal
        ...
```
Inspect the default bare metal host profile definition in templates/bm/baremetalhostprofiles.yaml.template. If your hardware configuration differs from the reference, adjust the default profile to match. For details, see Customize the default bare metal host profile.

Warning

All data will be wiped during cluster deployment on devices defined directly or indirectly in the fileSystems list of BareMetalHostProfile. For example:

- A raw device partition with a file system on it
- A device partition in a volume group with a logical volume that has a file system on it
- An mdadm RAID device with a file system on it
- An LVM RAID device with a file system on it

The wipe field is always considered true for these devices. The false value is ignored.

Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.
Update the bare metal hosts definition template in templates/bm/baremetalhosts.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_.

Bare metal hosts template mandatory parameters

| Parameter | Description | Example value |
|---|---|---|
| SET_MACHINE_0_IPMI_USERNAME | The IPMI user name in the base64 encoding to access the BMC. | dXNlcg== (base64-encoded user) |
| SET_MACHINE_0_IPMI_PASSWORD | The IPMI password in the base64 encoding to access the BMC. | cGFzc3dvcmQ= (base64-encoded password) |
| SET_MACHINE_0_MAC | The MAC address of the first master node in the PXE network. | ac:1f:6b:02:84:71 |
| SET_MACHINE_0_BMC_ADDRESS | The IP address of the BMC endpoint for the first master node in the cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway. | 192.168.100.11 |
| SET_MACHINE_1_IPMI_USERNAME | The IPMI user name in the base64 encoding to access the BMC. | dXNlcg== (base64-encoded user) |
| SET_MACHINE_1_IPMI_PASSWORD | The IPMI password in the base64 encoding to access the BMC. | cGFzc3dvcmQ= (base64-encoded password) |
| SET_MACHINE_1_MAC | The MAC address of the second master node in the PXE network. | ac:1f:6b:02:84:72 |
| SET_MACHINE_1_BMC_ADDRESS | The IP address of the BMC endpoint for the second master node in the cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway. | 192.168.100.12 |
| SET_MACHINE_2_IPMI_USERNAME | The IPMI user name in the base64 encoding to access the BMC. | dXNlcg== (base64-encoded user) |
| SET_MACHINE_2_IPMI_PASSWORD | The IPMI password in the base64 encoding to access the BMC. | cGFzc3dvcmQ= (base64-encoded password) |
| SET_MACHINE_2_MAC | The MAC address of the third master node in the PXE network. | ac:1f:6b:02:84:73 |
| SET_MACHINE_2_BMC_ADDRESS | The IP address of the BMC endpoint for the third master node in the cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway. | 192.168.100.13 |
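The IPMI user name and password values in the table are plain credentials encoded with base64. You can generate and verify them with the standard base64 tool; use printf rather than echo so that no trailing newline gets included in the encoded value.

```shell
# Encode IPMI credentials for templates/bm/baremetalhosts.yaml.template.
printf '%s' user     | base64    # dXNlcg==
printf '%s' password | base64    # cGFzc3dvcmQ=

# Decode a value to double-check it:
printf '%s' dXNlcg== | base64 -d    # user
```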
Update the Subnet objects definition template in templates/bm/ipam-objects.yaml.template according to the environment configuration. Use the table below. Manually set all parameters that start with SET_, for example, SET_IPAM_POOL_RANGE.

IP address pools template mandatory parameters

| Parameter | Description | Example value |
|---|---|---|
| SET_IPAM_CIDR | The address of the PXE network in CIDR notation. Must be at least a /24 network. | 10.0.0.0/24 |
| SET_PXE_NW_GW | The default gateway in the PXE network. Since this is the only network that the cluster uses by default, this gateway must provide access to the Internet to download the Mirantis artifacts and to the OOB network of the Container Cloud cluster. | 10.0.0.1 |
| SET_PXE_NW_DNS | An external (non-Kubernetes) DNS server accessible from the PXE network. | 8.8.8.8 |
| SET_IPAM_POOL_RANGE | The IP address range that will be allocated in the PXE/Management network to the bare metal hosts of the cluster. | 10.0.0.100-10.0.0.252 |
| SET_LB_HOST | The IP address of the externally accessible API endpoint of the cluster. This address must NOT be within the SET_METALLB_ADDR_POOL range but must be within the PXE/Management network. External load balancers are not supported. | 10.0.0.90 |
| SET_METALLB_ADDR_POOL | The IP address range to be used as external load balancers for the Kubernetes services with the LoadBalancer type. This range must be within the PXE/Management network. The minimum required range is 19 IP addresses. | 10.0.0.61-10.0.0.80 |
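The SET_IPAM_CIDR requirement above means the prefix length must be 24 or smaller. A minimal sketch of that check, using the example value from the table:

```shell
# Sketch: confirm that SET_IPAM_CIDR is at least a /24 network,
# that is, its prefix length is 24 or smaller.
cidr="10.0.0.0/24"     # example value; substitute your own
prefix="${cidr#*/}"
if [ "$prefix" -le 24 ]; then
    echo "OK: /$prefix is at least a /24 network"
else
    echo "Too small: /$prefix holds fewer addresses than a /24"
fi
```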
Optional. To configure the separated PXE and management networks instead of one PXE/management network, proceed to Separate PXE and management networks.
Optional. To connect the cluster hosts to the PXE/Management network using bond interfaces, proceed to Configure NIC bonding.
If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the cluster using the proxy:

- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY

Example snippet:

```shell
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
```
The following variable formats are accepted:

Proxy configuration data

| Variable | Format |
|---|---|
| HTTP_PROXY, HTTPS_PROXY | http://proxy.example.com:port for anonymous access, or http://user:password@proxy.example.com:port for restricted access |
| NO_PROXY | Comma-separated list of IP addresses or domain names |
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for a baremetal-based cluster.
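As a rough illustration of how the NO_PROXY list behaves, the sketch below checks whether a host is listed and therefore bypasses the proxy. bypasses_proxy is a hypothetical helper that does exact matching only; real HTTP clients usually also match domain suffixes, so treat this as a simplified model.

```shell
# Simplified model of NO_PROXY handling: exact-match lookup in the
# comma-separated list. Real clients may also match domain suffixes.
NO_PROXY="172.18.10.0,registry.internal.lan"

bypasses_proxy() {
    case ",$NO_PROXY," in
        *,"$1",*) return 0 ;;
        *)        return 1 ;;
    esac
}

bypasses_proxy registry.internal.lan && echo "registry.internal.lan: direct"
bypasses_proxy binary.mirantis.com   || echo "binary.mirantis.com: via proxy"
```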
Configure the Ceph cluster:
Optional. Technology Preview. Configure the Ceph controller to manage Ceph node resources. In templates/bm/cluster.yaml.template, in the ceph-controller section of spec.providerSpec.value.helmReleases, specify the hyperconverge parameter with the required resource requests, limits, or tolerations:

```yaml
spec:
  providerSpec:
    value:
      helmReleases:
        - name: ceph-controller
          values:
            hyperconverge:
              tolerations: <ceph tolerations map>
              resources: <ceph resource management map>
```
For the parameters description, see Enable Ceph tolerations and resources management.
In templates/bm/kaascephcluster.yaml.template:

Configure dedicated networks for Ceph components. Select from the following options:

Specify dedicated networks directly using the clusterNet and publicNet parameters.

Warning

Mirantis does not recommend specifying 0.0.0.0/0 in clusterNet and publicNet.

Note

The bare metal provider automatically translates the 0.0.0.0/0 network range to the default LCM IPAM subnet if it exists.

Add the corresponding labels for the bare metal IPAM subnets:

- ipam/SVC-ceph-cluster to the IPAM Subnet that will be used as a Ceph cluster network (reflects clusterNet).
- ipam/SVC-ceph-public to the IPAM Subnet that will be used as a Ceph public network (reflects publicNet).

Example of a bare metal IPAM subnet used as a Ceph public network:

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    ...
    ipam/SVC-ceph-public: "1"
```

Example of a bare metal IPAM subnet used as a Ceph cluster network:

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    ...
    ipam/SVC-ceph-cluster: "1"
```
Set up the disk configuration according to your hardware node specification. Verify that the storageDevices section has a valid list of HDD, SSD, or NVMe device names and that each device is empty, that is, no file system is present on it.

If required, configure other parameters as described in Ceph advanced configuration.

Configuration example:

```yaml
...
# This part of KaaSCephCluster must contain a valid networks definition
network:
  clusterNet: 10.10.10.0/24
  publicNet: 10.10.11.0/24
...
nodes:
  master-0:
    ...
  <node_name>:
    ...
    # This part of KaaSCephCluster must contain valid device names
    storageDevices:
      - name: sdb
        config:
          deviceClass: hdd
      # Each storageDevices dict can have several devices
      - name: sdc
        config:
          deviceClass: hdd
      # All devices for Ceph must also be set to ``wipe`` in
      # ``baremetalhosts.yaml.template``
      - name: sdd
        config:
          deviceClass: hdd
      # Do not include the first device (such as vda or sda)
      # because it is allocated for the operating system
```
In machines.yaml.template, verify that the metadata:name structure matches the machine names in the spec:nodes structure of kaascephcluster.yaml.template.
Verify that the kaas-bootstrap directory contains the following files:

```
# tree ~/kaas-bootstrap
~/kaas-bootstrap/
....
├── bootstrap.sh
├── kaas
├── mirantis.lic
├── releases
...
├── templates
....
│   ├── bm
│   │   ├── baremetalhostprofiles.yaml.template
│   │   ├── baremetalhosts.yaml.template
│   │   ├── cluster.yaml.template
│   │   ├── ipam-objects.yaml.template
│   │   ├── kaascephcluster.yaml.template
│   │   └── machines.yaml.template
....
├── templates.backup
....
```
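The presence of the template files can also be checked non-interactively. check_bm_templates below is a hypothetical helper, not part of bootstrap.sh; it only verifies that the six bm templates listed above exist under the given bootstrap directory.

```shell
# Sketch: verify that all bm template files exist in the bootstrap directory.
check_bm_templates() {
    dir="$1"
    missing=0
    for f in baremetalhostprofiles.yaml.template \
             baremetalhosts.yaml.template \
             cluster.yaml.template \
             ipam-objects.yaml.template \
             kaascephcluster.yaml.template \
             machines.yaml.template; do
        if [ ! -f "$dir/templates/bm/$f" ]; then
            echo "missing: templates/bm/$f"
            missing=1
        fi
    done
    return "$missing"
}

# Example: check_bm_templates ~/kaas-bootstrap && echo "all templates present"
```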
Export all required parameters using the table below:

```shell
export KAAS_BM_ENABLED="true"
export KAAS_BM_PXE_IP="10.0.0.20"
export KAAS_BM_PXE_MASK="24"
export KAAS_BM_PXE_BRIDGE="br0"
export KAAS_BM_BM_DHCP_RANGE="10.0.0.30,10.0.0.49,255.255.255.0"
export BOOTSTRAP_METALLB_ADDRESS_POOL="10.0.0.61-10.0.0.80"
unset KAAS_BM_FULL_PREFLIGHT
```
Bare metal prerequisites data

| Parameter | Description | Example value |
|---|---|---|
| KAAS_BM_PXE_IP | The provisioning IP address. This address will be assigned to the interface of the seed node defined by the KAAS_BM_PXE_BRIDGE parameter (see below). The PXE service of the bootstrap cluster uses this address to network boot the bare metal hosts for the cluster. | 10.0.0.20 |
| KAAS_BM_PXE_MASK | The CIDR prefix for the PXE network. It is used with the KAAS_BM_PXE_IP address when assigning it to the network interface. | 24 |
| KAAS_BM_PXE_BRIDGE | The PXE network bridge name. The name must match the name of the bridge created on the seed node during the Prepare the seed node stage. | br0 |
| KAAS_BM_BM_DHCP_RANGE | The start_ip and end_ip addresses must be within the PXE network. This range is used by dnsmasq to provide IP addresses for nodes during provisioning. | 10.0.0.30,10.0.0.49,255.255.255.0 |
| BOOTSTRAP_METALLB_ADDRESS_POOL | The pool of IP addresses used by services in the bootstrap cluster. Can be the same as the SET_METALLB_ADDR_POOL range for the cluster, or a different range. | 10.0.0.61-10.0.0.80 |
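Because KAAS_BM_BM_DHCP_RANGE packs three values into one variable, a malformed value is easy to miss. The sketch below is an illustrative format check only (valid_dhcp_range is a hypothetical helper): it validates the start_ip,end_ip,netmask shape, not whether the addresses actually fall within the PXE network.

```shell
# Sketch: check that KAAS_BM_BM_DHCP_RANGE has the "start_ip,end_ip,netmask"
# shape before exporting it. Does not verify network membership.
valid_dhcp_range() {
    echo "$1" | grep -Eq \
        '^([0-9]{1,3}\.){3}[0-9]{1,3},([0-9]{1,3}\.){3}[0-9]{1,3},([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

range="10.0.0.30,10.0.0.49,255.255.255.0"
valid_dhcp_range "$range" && echo "KAAS_BM_BM_DHCP_RANGE format OK"
```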
Run the preflight verification script to validate the deployment templates configuration:

```shell
./bootstrap.sh preflight
```

The command outputs a human-readable report with the verification details. The report includes the list of verified bare metal nodes and their Chassis Power status. This status is based on the deployment templates configuration used during the verification.

Caution
If the report contains information about missing dependencies or incorrect configuration, fix the issues before proceeding to the next step.
Verify that the following provider selection parameters are unset:

```shell
unset KAAS_AWS_ENABLED
unset KAAS_VSPHERE_ENABLED
unset KAAS_EQUINIX_ENABLED
unset KAAS_EQUINIXMETALV2_ENABLED
unset KAAS_AZURE_ENABLED
```
Export the following parameters:

```shell
export KAAS_BM_ENABLED=true
export KUBECONFIG=<pathToMgmtClusterKubeconfig>
export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<NewRegionName>
```

Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.

Caution

The REGION and REGIONAL_CLUSTER_NAME parameter values must contain only lowercase alphanumeric characters, hyphens, or periods.

Note

If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.

```shell
export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
```
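The naming rule from the caution above can be checked before exporting the variables. valid_name is a hypothetical helper that mirrors the stated restriction: lowercase alphanumeric characters, hyphens, or periods only.

```shell
# Sketch: validate REGION and REGIONAL_CLUSTER_NAME values against the
# allowed character set (lowercase alphanumerics, hyphens, periods).
valid_name() {
    echo "$1" | grep -Eq '^[a-z0-9.-]+$'
}

valid_name "region-one" && echo "region-one: valid"
valid_name "Region_One" || echo "Region_One: invalid"
```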
Run the regional cluster bootstrap script:

```shell
./bootstrap.sh deploy_regional
```
Note

When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, a new regional ssh_key is generated. Make sure to save this key in a secure location as well.

The workflow of the regional cluster bootstrap script

| # | Description |
|---|---|
| 1 | Prepare the bootstrap cluster for the new regional cluster. |
| 2 | Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster. |
| 3 | Connect to each machine of the management cluster through SSH. |
| 4 | Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster. |
| 5 | Load the following objects to the new regional cluster: Secret with the management cluster kubeconfig and ClusterRole for the Container Cloud provider. |
| 6 | Forward the bootstrap cluster endpoint to helm-controller. |
| 7 | Wait for all CRDs to be available and verify the objects created using these CRDs. |
| 8 | Pivot the Cluster API stack to the regional cluster. |
| 9 | Switch the LCM agent from the bootstrap cluster to the regional one. |
| 10 | Wait for the Container Cloud components to start on the regional cluster. |
Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.
See also