In addition to the default BareMetalHostProfile object installed
with Mirantis Container Cloud, you can create custom profiles
for managed clusters using the Container Cloud API.
Note
The procedure below also applies to Container Cloud
management clusters.
Warning
Any data stored on any device defined in the fileSystems
list can be deleted or corrupted during cluster (re)deployment. This happens
because each device in the fileSystems list is part of the
rootfs directory tree, which is overwritten during (re)deployment.
Examples of affected devices include:
A raw device partition with a file system on it
A device partition in a volume group with a logical volume that has a
file system on it
An mdadm RAID device with a file system on it
An LVM RAID device with a file system on it
Neither the deprecated wipe field nor the wipeDevice structure
(recommended since Container Cloud 2.26.0) has any effect in this case;
they cannot protect data on these devices.
Therefore, to prevent data loss, move the necessary data from these file
systems to another server beforehand, if required.
To create a custom bare metal host profile:
Select from the following options:
For a management cluster, log in to the bare metal seed node that will be
used to bootstrap the management cluster.
For a managed cluster, log in to the local machine where your management
cluster kubeconfig is located and where kubectl is installed.
Note
The management cluster kubeconfig is created automatically
during the last stage of the management cluster bootstrap.
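For instance, to confirm that kubectl targets the management cluster
before you proceed (the kubeconfig path below is a placeholder):

export KUBECONFIG=<pathToManagementClusterKubeconfig>
kubectl get nodes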
Select from the following options:
For a management cluster, open
templates/bm/baremetalhostprofiles.yaml.template for editing.
For a managed cluster, create a new bare metal host profile
under the templates/bm/ directory.
Edit the host profile using the example template below to meet
your hardware configuration requirements:
Example template of a bare metal host profile
apiVersion: metal3.io/v1alpha1
kind: BareMetalHostProfile
metadata:
  name: <profileName>
  namespace: <ManagedClusterProjectName>
  # Add the name of the non-default project for the managed cluster
  # being created.
spec:
  devices:
  # From the HW node, obtain the first device whose size is at least 120 GiB.
  - device:
      minSize: 120Gi
      wipe: true
    partitions:
    - name: bios_grub
      partflags:
      - bios_grub
      size: 4Mi
      wipe: true
    - name: uefi
      partflags:
      - esp
      size: 200Mi
      wipe: true
    - name: config-2
      size: 64Mi
      wipe: true
    - name: lvm_root_part
      size: 0
      wipe: true
  # From the HW node, obtain the second device whose size is at least 120 GiB.
  # If a device exists but does not meet the size requirement,
  # the BareMetalHostProfile will not be applied to the node.
  - device:
      minSize: 120Gi
      wipe: true
  # From the HW node, obtain the disk device with the exact device path.
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
      minSize: 120Gi
      wipe: true
    partitions:
    - name: lvm_lvp_part
      size: 0
      wipe: true
  # Example of wiping a device without partitioning it.
  # Mandatory when the disk is to be used as a Ceph backend later.
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
      wipe: true
  fileSystems:
  - fileSystem: vfat
    partition: config-2
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi
  - fileSystem: ext4
    logicalVolume: root
    mountPoint: /
  - fileSystem: ext4
    logicalVolume: lvp
    mountPoint: /mnt/local-volumes/
  logicalVolumes:
  - name: root
    size: 0
    vg: lvm_root
  - name: lvp
    size: 0
    vg: lvm_lvp
  postDeployScript: |
    #!/bin/bash -ex
    echo $(date) 'post_deploy_script done' >> /root/post_deploy_done
  preDeployScript: |
    #!/bin/bash -ex
    echo $(date) 'pre_deploy_script done' >> /root/pre_deploy_done
  volumeGroups:
  - devices:
    - partition: lvm_root_part
    name: lvm_root
  - devices:
    - partition: lvm_lvp_part
    name: lvm_lvp
  grubConfig:
    defaultGrubOptions:
    - GRUB_DISABLE_RECOVERY="true"
    - GRUB_PRELOAD_MODULES=lvm
    - GRUB_TIMEOUT=20
  kernelParameters:
    sysctl:
      # For the list of options prohibited to change, refer to
      # https://docs.mirantis.com/mke/3.7/install/predeployment/set-up-kernel-default-protections.html
      kernel.dmesg_restrict: "1"
      kernel.core_uses_pid: "1"
      fs.file-max: "9223372036854775807"
      fs.aio-max-nr: "1048576"
      fs.inotify.max_user_instances: "4096"
      vm.max_map_count: "262144"
Optional. Configure wiping of the target device or partition to be used
for cluster deployment as described in Wipe a device or partition.
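As an illustration only, the following extract sketches the wipeDevice
structure with metadata erasure enabled; verify the exact fields and
behavior in Wipe a device or partition:

spec:
  devices:
  - device:
      # Assumption-based sketch: erase only file system, RAID, and
      # partition table signatures instead of wiping the entire device
      wipeDevice:
        eraseMetadata:
          enabled: true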
Optional. Configure multiple devices for an LVM volume group using the
example template extract below for reference.
Caution
The following template extract contains only sections relevant
to LVM configuration with multiple PVs.
Expand the main template described in the previous step
with the configuration below if required.
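For example, the following assumption-based extract (partition names and
sizes are illustrative) builds the lvm_lvp volume group from two physical
volumes residing on two separate devices:

spec:
  devices:
  # First device contributing a PV to the lvm_lvp volume group
  - device:
      minSize: 120Gi
      wipe: true
    partitions:
    - name: lvm_lvp_part1
      size: 0
      wipe: true
  # Second device contributing another PV to the same volume group
  - device:
      minSize: 120Gi
      wipe: true
    partitions:
    - name: lvm_lvp_part2
      size: 0
      wipe: true
  volumeGroups:
  - devices:
    - partition: lvm_lvp_part1
    - partition: lvm_lvp_part2
    name: lvm_lvp
  logicalVolumes:
  - name: lvp
    size: 0
    vg: lvm_lvp
  fileSystems:
  - fileSystem: ext4
    logicalVolume: lvp
    mountPoint: /mnt/local-volumes/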
Optional. Technology Preview. Configure support for a Redundant Array of
Independent Disks (RAID), which allows, for example, installing the cluster
operating system on a RAID device. For details, refer to Configure RAID support.
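The following sketch is an assumption about how a software RAID definition
might look; the softRaidDevices field, its structure, and the supported
levels must be checked against Configure RAID support:

spec:
  softRaidDevices:
  # Hypothetical mirror for the root file system; all names are illustrative
  - name: md_root
    level: raid1
    devices:
    - partition: md_root_part1
    - partition: md_root_part2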
Optional. Configure the RX/TX buffer size for physical network interfaces
and txqueuelen for any network interfaces.
This configuration can greatly benefit high-load and high-performance
network interfaces. You can configure these parameters using udev rules.
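One possible delivery mechanism, sketched here under the assumption that the
ring buffer and queue length values suit your hardware, is writing the udev
rules from the profile postDeployScript (the rules file name and all values
are illustrative):

postDeployScript: |
  #!/bin/bash -ex
  cat << 'EOF' > /etc/udev/rules.d/59-net-tuning.rules
  # Increase RX/TX ring buffer sizes on physical interfaces (requires ethtool)
  ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth*", RUN+="/sbin/ethtool -G %k rx 4096 tx 4096"
  # Raise txqueuelen through the tx_queue_len sysfs attribute
  ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth*", ATTR{tx_queue_len}="10000"
  EOF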
Add or edit the mandatory parameters in the new BareMetalHostProfile
object. For a description of the parameters, see
API: BareMetalHostProfile spec.
Note
If asymmetric traffic is expected on some of the managed cluster
nodes, enable loose mode for the corresponding interfaces on those
nodes by setting the net.ipv4.conf.<interface-name>.rp_filter
parameter to "2" in the kernelParameters.sysctl section.
For example, where k8s-lcm below is an illustrative interface name:
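kernelParameters:
  sysctl:
    # k8s-lcm is a placeholder interface name; substitute the actual interface
    net.ipv4.conf.k8s-lcm.rp_filter: "2"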