Create MOSK host profiles¶
Different types of MOSK nodes require differently configured host storage. This section describes how to create custom host profiles for different types of MOSK nodes.
You can create custom profiles for managed clusters using the Container Cloud API.
Since MOSK 22.4, you can use flexible size units throughout bare metal host profiles. For example, you can use either sizeGiB: 0.1 or size: 100Mi when specifying a device size.

Mirantis recommends using only one parameter name and one set of size units throughout the configuration files. If both sizeGiB and size are used, sizeGiB is ignored during deployment and the suffix is adjusted accordingly. For example, 1.5Gi is serialized as 1536Mi. A size value without units is counted in bytes. For example, size: 120 means 120 bytes.
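For illustration, the following fragment of a hypothetical partition entry shows the same 1.5 GiB size expressed in three equivalent ways; pick one style and use it consistently:

partitions:
  - name: example_part    # illustrative name, not a required partition
    size: 1.5Gi           # serialized as 1536Mi during deployment
    wipe: true
# Equivalent alternatives for the same value:
#   size: 1536Mi          # explicit binary units
#   size: 1610612736      # no suffix, counted in bytes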
Warning
All data will be wiped during cluster deployment on devices defined directly or indirectly in the fileSystems list of BareMetalHostProfile. For example:
A raw device partition with a file system on it
A device partition in a volume group with a logical volume that has a file system on it
An mdadm RAID device with a file system on it
An LVM RAID device with a file system on it
The wipe field is always considered true for these devices. The false value is ignored.
Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.
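As an illustration of an indirectly defined device, the following abridged sketch (all names are examples, not required values) shows a disk that carries no file system itself, but whose partition backs a volume group with a logical volume mounted through fileSystems. The entire chain is wiped even though the device sets wipe: false:

devices:
  - device:
      minSize: 60Gi
      wipe: false             # ignored: the device is referenced through fileSystems
    partitions:
      - name: lvm_data_part   # example partition name
        size: 0
volumeGroups:
  - devices:
      - partition: lvm_data_part
    name: lvm_data
logicalVolumes:
  - name: data
    size: 0
    vg: lvm_data
fileSystems:
  - fileSystem: ext4
    logicalVolume: data
    mountPoint: /mnt/data     # example mount point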
To create MOSK bare metal host profiles:
Log in to the local machine where your management cluster kubeconfig is located and where kubectl is installed.

Note

The management cluster kubeconfig is created automatically during the last stage of the management cluster bootstrap.

Create a new bare metal host profile for MOSK compute nodes in a YAML file under the templates/bm/ directory.

Edit the host profile using the example template below to meet your hardware configuration requirements:
apiVersion: metal3.io/v1alpha1
kind: BareMetalHostProfile
metadata:
  name: <PROFILE_NAME>
  namespace: <PROJECT_NAME>
spec:
  devices:
    # From the HW node, obtain the first device, whose size is at least 60 GiB
    - device:
        workBy: "by_id,by_wwn,by_path,by_name"
        minSize: 60Gi
        type: ssd
        wipe: true
      partitions:
        - name: bios_grub
          partflags:
            - bios_grub
          size: 4Mi
          wipe: true
        - name: uefi
          partflags:
            - esp
          size: 200Mi
          wipe: true
        - name: config-2
          size: 64Mi
          wipe: true
        # This partition is only required on compute nodes if you plan to
        # use LVM ephemeral storage.
        - name: lvm_nova_part
          wipe: true
          size: 100Gi
        - name: lvm_root_part
          size: 0
          wipe: true
    # From the HW node, obtain the second device, whose size is at least 60 GiB.
    # If a device exists but does not fit the size,
    # the BareMetalHostProfile will not be applied to the node
    - device:
        workBy: "by_id,by_wwn,by_path,by_name"
        minSize: 60Gi
        type: ssd
        wipe: true
    # From the HW node, obtain the disk device with the exact name
    - device:
        workBy: "by_id,by_wwn,by_path,by_name"
        minSize: 60Gi
        wipe: true
      partitions:
        - name: lvm_lvp_part
          size: 0
          wipe: true
    # Example of wiping a device without partitioning it.
    # Mandatory for the case when a disk is supposed to be used
    # for the Ceph back end later
    - device:
        workBy: "by_id,by_wwn,by_path,by_name"
        wipe: true
  fileSystems:
    - fileSystem: vfat
      partition: config-2
    - fileSystem: vfat
      mountPoint: /boot/efi
      partition: uefi
    - fileSystem: ext4
      logicalVolume: root
      mountPoint: /
    - fileSystem: ext4
      logicalVolume: lvp
      mountPoint: /mnt/local-volumes/
  logicalVolumes:
    - name: root
      size: 0
      vg: lvm_root
    - name: lvp
      size: 0
      vg: lvm_lvp
  postDeployScript: |
    #!/bin/bash -ex
    echo $(date) 'post_deploy_script done' >> /root/post_deploy_done
  preDeployScript: |
    #!/bin/bash -ex
    echo $(date) 'pre_deploy_script done' >> /root/pre_deploy_done
  volumeGroups:
    - devices:
        - partition: lvm_root_part
      name: lvm_root
    - devices:
        - partition: lvm_lvp_part
      name: lvm_lvp
  grubConfig:
    defaultGrubOptions:
      - GRUB_DISABLE_RECOVERY="true"
      - GRUB_PRELOAD_MODULES=lvm
      - GRUB_TIMEOUT=20
  kernelParameters:
    sysctl:
      kernel.panic: "900"
      kernel.dmesg_restrict: "1"
      kernel.core_uses_pid: "1"
      fs.file-max: "9223372036854775807"
      fs.aio-max-nr: "1048576"
      fs.inotify.max_user_instances: "4096"
      vm.max_map_count: "262144"
Add or edit the mandatory parameters in the new BareMetalHostProfile object. For the parameters description, see Container Cloud API: BareMetalHostProfile spec.

Add the bare metal host profile to your management cluster:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <projectName> apply -f <pathToBareMetalHostProfileFile>
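Optionally, verify that the new profile appears in the project. For example:

kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <projectName> get baremetalhostprofile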
If required, further modify the host profile:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <projectName> edit baremetalhostprofile <hostProfileName>
Repeat the steps above to create host profiles for other OpenStack node roles such as control plane nodes and storage nodes.
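For example, a storage node profile typically differs from the compute template above by omitting the lvm_nova_part partition used for LVM ephemeral storage and by leaving the Ceph disks wiped but unpartitioned. The following is a minimal, abridged sketch of the devices section only; the device selectors are illustrative:

devices:
  # System disk: keep the boot, root, and LVP layout from the template above,
  # but omit the lvm_nova_part partition
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      minSize: 60Gi
      type: ssd
      wipe: true
  # Ceph disks: wiped without partitioning so that Ceph can consume them later
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      wipe: true
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      wipe: true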
Now, proceed to Enable huge pages in a host profile.