Create a custom host profile

In addition to the default BareMetalHostProfile object installed with Mirantis Container Cloud, you can create custom profiles for managed clusters using the Container Cloud API.
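
For reference, you can inspect the profiles that already exist on the management cluster before creating a custom one. A minimal sketch, assuming kubectl access to the management cluster; the profile name and namespace are placeholders:

  kubectl --kubeconfig <pathToManagementClusterKubeconfig> get baremetalhostprofiles --all-namespaces
  kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <profileNamespace> get baremetalhostprofile <profileName> -o yaml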

Note

The procedure below also applies to Container Cloud management clusters.

Warning

Any data stored on any device defined in the fileSystems list can be deleted or corrupted during cluster (re)deployment. This happens because each device in the fileSystems list is part of the rootfs directory tree, which is overwritten during (re)deployment.

Examples of affected devices include:

  • A raw device partition with a file system on it

  • A device partition in a volume group with a logical volume that has a file system on it

  • An mdadm RAID device with a file system on it

  • An LVM RAID device with a file system on it

The wipe field (deprecated) or the wipeDevice structure (recommended since Container Cloud 2.26.0) has no effect in this case and cannot protect data on these devices.

Therefore, to prevent data loss, move any data that you need to keep from these file systems to another server beforehand.
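
For example, a minimal sketch of copying data off such a file system before redeployment; the backup host and paths are placeholders:

  # Copy the contents of an affected file system to another server before
  # the node is redeployed. Adjust the source mount point and destination.
  rsync -aHAX --numeric-ids /mnt/local-volumes/ <backupHost>:/backup/<nodeName>/local-volumes/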

To create a custom bare metal host profile:

  1. Select from the following options:

    • For a management cluster, log in to the bare metal seed node that will be used to bootstrap the management cluster.

    • For a managed cluster, log in to the local machine where your management cluster kubeconfig is located and where kubectl is installed.

      Note

      The management cluster kubeconfig is created automatically during the last stage of the management cluster bootstrap.
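
      Before proceeding, you can confirm that kubectl reaches the management cluster. A minimal sketch, assuming the kubeconfig path placeholder used throughout this procedure:

        kubectl --kubeconfig <pathToManagementClusterKubeconfig> get baremetalhosts --all-namespaces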

  2. Select from the following options:

    • For a management cluster, open templates/bm/baremetalhostprofiles.yaml.template for editing.

    • For a managed cluster, create a new bare metal host profile under the templates/bm/ directory.

  3. Edit the host profile using the example template below to meet your hardware configuration requirements:

    Example template of a bare metal host profile
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHostProfile
    metadata:
      name: <profileName>
      namespace: <ManagedClusterProjectName>
      # Add the name of the non-default project for the managed cluster
      # being created.
    spec:
      devices:
      # From the HW node, obtain the first device whose size is at least 120 GiB.
      - device:
          minSize: 120Gi
          wipe: true
        partitions:
        - name: bios_grub
          partflags:
          - bios_grub
          size: 4Mi
          wipe: true
        - name: uefi
          partflags:
          - esp
          size: 200Mi
          wipe: true
        - name: config-2
          size: 64Mi
          wipe: true
        - name: lvm_root_part
          size: 0
          wipe: true
      # From the HW node, obtain the second device whose size is at least 120 GiB.
      # If a device exists but does not fit the size,
      # the BareMetalHostProfile will not be applied to the node.
      - device:
          minSize: 120Gi
          wipe: true
      # From the HW node, obtain the disk device with the exact name.
      - device:
          byName: /dev/nvme0n1
          minSize: 120Gi
          wipe: true
        partitions:
        - name: lvm_lvp_part
          size: 0
          wipe: true
      # Example of wiping a device without partitioning it.
      # Mandatory for the case when a disk is to be used as a Ceph back end later.
      - device:
          byName: /dev/sde
          wipe: true
      fileSystems:
      - fileSystem: vfat
        partition: config-2
      - fileSystem: vfat
        mountPoint: /boot/efi
        partition: uefi
      - fileSystem: ext4
        logicalVolume: root
        mountPoint: /
      - fileSystem: ext4
        logicalVolume: lvp
        mountPoint: /mnt/local-volumes/
      logicalVolumes:
      - name: root
        size: 0
        vg: lvm_root
      - name: lvp
        size: 0
        vg: lvm_lvp
      postDeployScript: |
        #!/bin/bash -ex
        echo $(date) 'post_deploy_script done' >> /root/post_deploy_done
      preDeployScript: |
        #!/bin/bash -ex
        echo $(date) 'pre_deploy_script done' >> /root/pre_deploy_done
      volumeGroups:
      - devices:
        - partition: lvm_root_part
        name: lvm_root
      - devices:
        - partition: lvm_lvp_part
        name: lvm_lvp
      grubConfig:
        defaultGrubOptions:
        - GRUB_DISABLE_RECOVERY="true"
        - GRUB_PRELOAD_MODULES=lvm
        - GRUB_TIMEOUT=20
      kernelParameters:
        sysctl:
        # For the list of options prohibited to change, refer to
        # https://docs.mirantis.com/mke/3.7/install/predeployment/set-up-kernel-default-protections.html
          kernel.dmesg_restrict: "1"
          kernel.core_uses_pid: "1"
          fs.file-max: "9223372036854775807"
          fs.aio-max-nr: "1048576"
          fs.inotify.max_user_instances: "4096"
          vm.max_map_count: "262144"
    
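    To fill in the byName and minSize values, you can first list the block devices on the target hardware node. A minimal sketch (run on the host; device names vary):

    # List disks with their names, sizes, serial numbers, and rotational flag
    # to map them to the devices section of the profile.
    lsblk -d -o NAME,SIZE,SERIAL,TYPE,ROTA
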
  4. Optional. Configure wiping of the target device or partition to be used for cluster deployment as described in Wipe a device or partition.
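
    For example, a minimal extract that enables metadata erasure on a device. It assumes the wipeDevice structure with the eraseMetadata option; verify the exact field names in Wipe a device or partition:

    spec:
      devices:
      - device:
          byName: /dev/sde
          wipeDevice:
            eraseMetadata:
              enabled: true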

  5. Optional. Configure multiple devices for an LVM volume group using the example template extract below for reference.

    Caution

    The following template extract contains only sections relevant to LVM configuration with multiple PVs. Expand the main template described in the previous step with the configuration below if required.

    spec:
      devices:
      ...
      - device:
          ...
        partitions:
        - name: lvm_lvp_part1
          size: 0
          wipe: true
      - device:
          ...
        partitions:
        - name: lvm_lvp_part2
          size: 0
          wipe: true
      volumeGroups:
      ...
      - devices:
        - partition: lvm_lvp_part1
        - partition: lvm_lvp_part2
        name: lvm_lvp
      logicalVolumes:
      ...
      - name: root
        size: 0
        vg: lvm_lvp
      fileSystems:
      ...
      - fileSystem: ext4
        logicalVolume: root
        mountPoint: /
    
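    After deployment, you can verify on the node that the volume group spans both physical volumes and that the logical volume exists. A minimal sketch (run on the deployed host):

    # Show the physical volumes, the lvm_lvp volume group, and its logical volumes.
    sudo pvs
    sudo vgs lvm_lvp
    sudo lvs lvm_lvp
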
  6. For a managed cluster, configure required disks for the Ceph cluster as described in Configure Ceph disks in a host profile.

  7. Optional. Technology Preview. Configure support for the Redundant Array of Independent Disks (RAID), which allows, for example, installing a cluster operating system on a RAID device. For details, refer to Configure RAID support.
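
    For illustration only, the extract below sketches an mdadm-based RAID 1 layout of the kind described in Configure RAID support. The softRaidDevices field, the partition names, and the exact syntax are assumptions to verify against that section and API: BareMetalHostProfile spec:

    spec:
      softRaidDevices:
      - name: /dev/md0
        level: raid1
        devices:
        - partition: md_root_part1
        - partition: md_root_part2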

  8. Optional. Configure the RX/TX buffer size for physical network interfaces and txqueuelen for any network interfaces.

    This configuration can greatly benefit high-load and high-performance network interfaces. You can configure these parameters using udev rules. For example:

    postDeployScript: |
      #!/bin/bash -ex
      ...
      echo 'ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*|en*", RUN+="/sbin/ethtool -G $name rx 4096 tx 4096"' > /etc/udev/rules.d/59-net.ring.rules
    
      echo 'ACTION=="add|change", SUBSYSTEM=="net", KERNEL=="eth*|en*|bond*|k8s-*|v*", ATTR{tx_queue_len}="10000"' > /etc/udev/rules.d/58-net.txqueue.rules
    
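    After deployment, you can check that the rules took effect on a node; the interface name below is an example:

    # Show the current RX/TX ring buffer sizes and the transmit queue length.
    ethtool -g ens3
    cat /sys/class/net/ens3/tx_queue_len
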
  9. Add or edit the mandatory parameters in the new BareMetalHostProfile object. For the parameters description, see API: BareMetalHostProfile spec.

    Note

    If asymmetric traffic is expected on some of the managed cluster nodes, enable the loose mode for the corresponding interfaces on those nodes by setting the net.ipv4.conf.<interface-name>.rp_filter parameter to "2" in the kernelParameters.sysctl section. For example:

    kernelParameters:
      sysctl:
        net.ipv4.conf.k8s-lcm.rp_filter: "2"
    
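    You can confirm the resulting value on the node after deployment, for example:

    sysctl net.ipv4.conf.k8s-lcm.rp_filter
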
  10. Select from the following options:

    • For a management cluster, proceed with the cluster bootstrap procedure as described in Deploy a management cluster using CLI.

    • For a managed cluster, select from the following options:

      Using the Container Cloud web UI, available since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0):

      1. Log in to the Container Cloud web UI with the operator permissions.

      2. Switch to the required non-default project using the Switch Project action icon located on top of the main left-side navigation panel.

        To create a project, refer to Create a project for managed clusters.

      3. In the left sidebar, navigate to Baremetal and click the Host Profiles tab.

      4. Click Create Host Profile.

      5. Fill out the Create host profile form:

        • Name

          Name of the bare metal host profile.

        • YAML file

          BareMetalHostProfile object in the YAML format that you have previously created. Click Upload to select the required file for uploading.

      Using the Container Cloud CLI:

      1. Add the bare metal host profile to your management cluster:

        kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> apply -f <pathToBareMetalHostProfileFile>
        
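        To verify that the profile appears in the target project, you can list the profiles, for example:

        kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> get baremetalhostprofiles
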
      2. If required, further modify the host profile:

        kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> edit baremetalhostprofile <hostProfileName>
        
      3. Proceed with Add a bare metal host using either the web UI or CLI.