Add a managed baremetal cluster
This section describes how to configure and deploy a managed cluster using a baremetal-based management cluster.
By default, Mirantis Container Cloud configures a single interface on the cluster nodes, leaving all other physical interfaces intact.
With L2 networking templates, you can create advanced host networking configurations for your clusters. For example, you can create bond interfaces on top of physical interfaces on the host or use multiple subnets to separate different types of network traffic.
You can use several host-specific L2 templates per cluster to support different hardware configurations. For example, you can create L2 templates with different numbers and layouts of NICs and apply them to specific machines of one cluster.
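As an illustration, the following is a minimal sketch of an L2Template object that aggregates two NICs into a bond. The object name, namespace, cluster label value, and the lcm-subnet Subnet name are assumptions made for this example; the template functions used in npTemplate ({{ nic }}, {{ mac }}, {{ ip }}, and so on) follow the format described in Create L2 templates, which remains the authoritative reference for the schema.

```yaml
# Illustrative only: a minimal L2Template that bonds two NICs.
# All names and values are example placeholders, not defaults.
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: bond-two-nics            # example name
  namespace: managed-ns          # project (namespace) of the managed cluster
  labels:
    cluster.sigs.k8s.io/cluster-name: managed-cluster
spec:
  autoIfMappingPrio:
    - provision
    - enp
    - eno
    - ens
  l3Layout:
    - subnetName: lcm-subnet     # example Subnet object providing LCM addresses
      scope: namespace
  npTemplate: |
    version: 2
    ethernets:
      {{ nic 0 }}:
        match:
          macaddress: {{ mac 0 }}
        set-name: {{ nic 0 }}
      {{ nic 1 }}:
        match:
          macaddress: {{ mac 1 }}
        set-name: {{ nic 1 }}
    bonds:
      bond0:
        interfaces:
          - {{ nic 0 }}
          - {{ nic 1 }}
        parameters:
          mode: 802.3ad          # LACP; requires matching switch configuration
        addresses:
          - {{ ip "bond0:lcm-subnet" }}
        gateway4: {{ gateway_from_subnet "lcm-subnet" }}
        nameservers:
          addresses: {{ nameservers_from_subnet "lcm-subnet" }}
```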
When you create a baremetal-based project, the example templates with the ipam/PreInstalledL2Template label are copied to this project. These templates are preinstalled during the management cluster bootstrap.
Using the L2 Templates section of the Clusters tab in the Container Cloud web UI, you can view the list of preinstalled templates and the templates that you create manually before a cluster deployment.
Note
Preinstalled L2 templates are removed in Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0).
Caution
Modification of L2 templates in use is allowed with a mandatory validation step from the Infrastructure Operator to prevent accidental cluster failures due to unsafe changes. The list of risks posed by modifying L2 templates includes:
- Services running on hosts cannot automatically reconfigure to switch to new IP addresses or interfaces.
- Connections between services are interrupted unexpectedly, which can cause data loss.
- Incorrect configurations on hosts can lead to irrevocable loss of connectivity between services and unexpected cluster partition or disassembly.
For details, see Modify network configuration on an existing machine.
Since Container Cloud 2.24.4, in the Technology Preview scope, you can create a managed cluster with a multi-rack topology, where cluster nodes, including Kubernetes masters, are distributed across multiple racks without L2-layer extension between them. In this topology, BGP is used to announce the cluster API load balancer address and the external addresses of Kubernetes load-balanced services.
Implementation of the multi-rack topology implies the use of the Rack and MultiRackCluster objects that support configuration of the BGP announcement of the cluster API load balancer address. For the configuration procedure, refer to Configure BGP announcement for cluster API LB address. For configuring the BGP announcement of external addresses of Kubernetes load-balanced services, refer to Configure MetalLB.
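For orientation, the following sketch shows the general shape of these two objects: a MultiRackCluster holding cluster-wide BGP defaults and one Rack mapping a peering subnet to its top-of-rack neighbors. All names, ASNs, and neighbor IP addresses are placeholder assumptions for this example; refer to Configure BGP announcement for cluster API LB address for the authoritative field reference.

```yaml
# Illustrative only: cluster-wide BGP defaults plus one rack definition.
apiVersion: ipam.mirantis.com/v1alpha1
kind: MultiRackCluster
metadata:
  name: multirack-example          # example name
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: managed-cluster
spec:
  defaultPeer:
    localASN: 65101                # example ASN used by the cluster nodes
    neighborASN: 65100             # example ASN of the ToR switches
---
apiVersion: ipam.mirantis.com/v1alpha1
kind: Rack
metadata:
  name: rack-master-1              # example rack hosting a Kubernetes master
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: managed-cluster
spec:
  peeringMap:
    lcm-rack-control-1:            # example Subnet used for BGP peering in this rack
      peers:
        - neighborIP: 10.77.31.1   # example ToR switch addresses
        - neighborIP: 10.77.37.1
```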
Follow the procedures described in the subsections below to configure initial settings and advanced network objects for your managed clusters.
- Create a cluster using web UI
- Workflow of network interface naming
- Create subnets (see the Subnet sketch after this list)
- Automate multiple subnet creation using SubnetPool
- Create L2 templates
- Configure BGP announcement for cluster API LB address
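As a preview of the Create subnets procedure, here is a minimal sketch of a Subnet object. The CIDR, gateway, address range, names, and labels are example values assumed for this illustration; adapt them to your environment and refer to Create subnets for the full set of options.

```yaml
# Illustrative Subnet with an allocatable address range; values are examples.
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: lcm-subnet                 # example name referenced from an L2Template
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: managed-cluster
spec:
  cidr: 10.0.0.0/24
  gateway: 10.0.0.1
  includeRanges:
    - 10.0.0.100-10.0.0.200        # addresses available for allocation to nodes
  nameservers:
    - 172.18.176.6
```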