Create a custom host profile
In addition to the default BareMetalHostProfile object installed with Mirantis Container Cloud, you can create custom profiles for managed clusters using the Container Cloud API.
Note
The procedure below also applies to the Container Cloud management clusters.
Warning
All data will be wiped during cluster deployment on devices defined directly or indirectly in the fileSystems list of BareMetalHostProfile. For example:
A raw device partition with a file system on it
A device partition in a volume group with a logical volume that has a file system on it
An mdadm RAID device with a file system on it
An LVM RAID device with a file system on it
The wipe field is always considered true for these devices. The false value is ignored.
Therefore, if required, move the necessary data from these file systems to another server before cluster deployment to prevent data loss.
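For example, the following minimal extract, modeled on the full template later in this procedure, shows a disk that is referenced only indirectly through the fileSystems list. The device name /dev/sdb, the partition lvm_data_part, the volume group lvm_data, the logical volume data, and the mount point /mnt/data are hypothetical placeholders. Because the chain partition -> volume group -> logical volume -> file system ends in the fileSystems list, the disk is wiped even though wipe is set to false:
# Illustrative extract only. All names below are hypothetical.
spec:
  devices:
  - device:
      byName: /dev/sdb
      wipe: false        # ignored: the device indirectly backs a file system
    partitions:
    - name: lvm_data_part
      size: 0
  volumeGroups:
  - devices:
    - partition: lvm_data_part
    name: lvm_data
  logicalVolumes:
  - name: data
    size: 0
    vg: lvm_data
  fileSystems:
  - fileSystem: ext4
    logicalVolume: data
    mountPoint: /mnt/data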
To create a custom bare metal host profile:
Select from the following options:
For a management cluster, log in to the bare metal seed node that will be used to bootstrap the management cluster.
For a managed cluster, log in to the local machine where your management cluster kubeconfig is located and where kubectl is installed.
Note
The management cluster kubeconfig is created automatically during the last stage of the management cluster bootstrap.
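Optionally, for the managed cluster case, you can confirm that kubectl and the kubeconfig work by listing the existing host profiles. This is only a sanity-check sketch; the kubeconfig path is a placeholder:
# Optional check: list existing bare metal host profiles in all projects.
kubectl --kubeconfig <pathToManagementClusterKubeconfig> get baremetalhostprofiles --all-namespaces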
Select from the following options:
For a management cluster, open templates/bm/baremetalhostprofiles.yaml.template for editing.
For a managed cluster, create a new bare metal host profile under the templates/bm/ directory, for example, as shown after this list.
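For the managed cluster option, one possible starting point is to copy the bundled template; the destination file name below is hypothetical, and any name under templates/bm/ can be used:
# Copy the default template as a starting point for a custom profile.
cp templates/bm/baremetalhostprofiles.yaml.template templates/bm/custom-bmhp.yaml.template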
Edit the host profile using the example template below to meet your hardware configuration requirements:
Example template of a bare metal host profile
apiVersion: metal3.io/v1alpha1
kind: BareMetalHostProfile
metadata:
  name: <profileName>
  namespace: <ManagedClusterProjectName>
  # Add the name of the non-default project for the managed cluster
  # being created.
spec:
  devices:
  # From the HW node, obtain the first device, whose size is at least 120 GiB.
  - device:
      minSize: 120Gi
      wipe: true
    partitions:
    - name: bios_grub
      partflags:
      - bios_grub
      size: 4Mi
      wipe: true
    - name: uefi
      partflags:
      - esp
      size: 200Mi
      wipe: true
    - name: config-2
      size: 64Mi
      wipe: true
    - name: lvm_root_part
      size: 0
      wipe: true
  # From the HW node, obtain the second device, whose size is at least 120 GiB.
  # If a device exists but does not fit the size,
  # the BareMetalHostProfile will not be applied to the node.
  - device:
      minSize: 120Gi
      wipe: true
  # From the HW node, obtain the disk device with the exact name.
  - device:
      byName: /dev/nvme0n1
      minSize: 120Gi
      wipe: true
    partitions:
    - name: lvm_lvp_part
      size: 0
      wipe: true
  # Example of wiping a device without partitioning it.
  # Mandatory for the case when a disk is supposed to be used for Ceph back end
  # later.
  - device:
      byName: /dev/sde
      wipe: true
  fileSystems:
  - fileSystem: vfat
    partition: config-2
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi
  - fileSystem: ext4
    logicalVolume: root
    mountPoint: /
  - fileSystem: ext4
    logicalVolume: lvp
    mountPoint: /mnt/local-volumes/
  logicalVolumes:
  - name: root
    size: 0
    vg: lvm_root
  - name: lvp
    size: 0
    vg: lvm_lvp
  postDeployScript: |
    #!/bin/bash -ex
    echo $(date) 'post_deploy_script done' >> /root/post_deploy_done
  preDeployScript: |
    #!/bin/bash -ex
    echo $(date) 'pre_deploy_script done' >> /root/pre_deploy_done
  volumeGroups:
  - devices:
    - partition: lvm_root_part
    name: lvm_root
  - devices:
    - partition: lvm_lvp_part
    name: lvm_lvp
  grubConfig:
    defaultGrubOptions:
    - GRUB_DISABLE_RECOVERY="true"
    - GRUB_PRELOAD_MODULES=lvm
    - GRUB_TIMEOUT=20
  kernelParameters:
    sysctl:
      kernel.panic: "900"
      kernel.dmesg_restrict: "1"
      kernel.core_uses_pid: "1"
      fs.file-max: "9223372036854775807"
      fs.aio-max-nr: "1048576"
      fs.inotify.max_user_instances: "4096"
      vm.max_map_count: "262144"
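Optionally, if a management cluster is already available (the managed cluster case), you can validate the edited profile against the API before applying it. This is only a sketch that reuses the placeholders from the commands later in this procedure; --dry-run=server performs server-side validation without creating the object:
# Optional: validate the profile syntax and schema without creating the object.
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> apply --dry-run=server -f <pathToBareMetalHostProfileFile>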
To use multiple devices for an LVM volume, use the example template extract below for reference.
Caution
The following template extract contains only sections relevant to LVM configuration with multiple PVs. Expand the main template described in the previous step with the configuration below if required.
spec:
  devices:
  ...
  - device:
      ...
    partitions:
    - name: lvm_lvp_part1
      size: 0
      wipe: true
  - device:
      ...
    partitions:
    - name: lvm_lvp_part2
      size: 0
      wipe: true
  volumeGroups:
  ...
  - devices:
    - partition: lvm_lvp_part1
    - partition: lvm_lvp_part2
    name: lvm_lvp
  logicalVolumes:
  ...
  - name: root
    size: 0
    vg: lvm_lvp
  fileSystems:
  ...
  - fileSystem: ext4
    logicalVolume: root
    mountPoint: /
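In this layout, LVM treats lvm_lvp_part1 and lvm_lvp_part2 as separate physical volumes of the lvm_lvp volume group, so the logical volume can span both devices. After the node is deployed, you can verify this on the node itself using standard LVM tools; these are generic LVM commands, not specific to Container Cloud:
# On the deployed node: confirm that lvm_lvp spans two physical volumes
# and that the logical volume belongs to it.
sudo pvs
sudo vgs lvm_lvp
sudo lvs lvm_lvp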
Optional. Technology Preview. Configure support of the Redundant Array of Independent Disks (RAID), which allows, for example, installing the cluster operating system on a RAID device. For details, refer to Configure RAID support.
Add or edit the mandatory parameters in the new BareMetalHostProfile object. For the parameter descriptions, see API: BareMetalHostProfile spec.
Select from the following options:
For a management cluster, proceed with the cluster bootstrap procedure as described in Bootstrap a management cluster.
For a managed cluster:
Add the bare metal host profile to your management cluster:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> apply -f <pathToBareMetalHostProfileFile>
If required, further modify the host profile:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> edit baremetalhostprofile <hostProfileName>
Proceed with Add a bare metal host using either the web UI or CLI.
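Optionally, before adding bare metal hosts, you can verify that the new profile exists in the target project; this check reuses the placeholders from the commands above:
# Optional: confirm that the custom profile was created in the managed cluster project.
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> get baremetalhostprofile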