Create an mdadm software RAID (raid0, raid1, raid10)

TechPreview

Warning

The EFI system partition (partflags: ['esp']) must be a physical partition in the main partition table of the disk, not under LVM or an mdadm software RAID device.
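For illustration, a minimal layout that satisfies this requirement keeps the ESP as a plain partition on the disk and mounts it under /boot/efi. The following sketch reuses the pattern from the raid10 example later in this section; the partition name is illustrative:

devices:
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      wipe: true
    partitions:
      - name: uefi       # ESP as a physical partition, not under LVM or mdadm
        partflags:
          - esp
        size: 200Mi
        wipe: true
fileSystems:
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi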

During configuration of your custom bare metal host profile as described in Create a custom bare metal host profile, you can create mdadm-based software RAID devices of the raid0 and raid1 types by describing them under the softRaidDevices field in BareMetalHostProfile. For example:

...
softRaidDevices:
- name: /dev/md0
  devices:
  - partition: sda1
  - partition: sdb1
- name: raid-name
  devices:
  - partition: sda2
  - partition: sdb2
...

You can also use the raid10 type for the mdadm-based software RAID devices. This type requires at least four storage devices available on your servers, and the total number of devices must be even. For example:

softRaidDevices:
- name: /dev/md0
  level: raid10
  devices:
    - partition: sda1
    - partition: sdb1
    - partition: sdd1
    - partition: sde1

The following fields in softRaidDevices describe RAID devices:

  • name

    Name of the RAID device to refer to throughout the BareMetalHostProfile object.

  • level

    Type or level of RAID used to create a device. Defaults to raid1. Set to raid0 or raid10 to create a device of the corresponding type.

  • devices

    List of physical devices or partitions used to build a software RAID device. The list must include at least two partitions or devices to build raid0 or raid1 devices and at least four to build raid10 devices.

For the rest of the mdadm RAID parameters, see Container Cloud API: BareMetalHostProfile spec.
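For example, the metadata parameter, which also appears in the raid10 example later in this section, presumably maps to the mdadm metadata (superblock) version of the device. An illustrative sketch:

softRaidDevices:
  - name: /dev/md0
    level: raid1
    metadata: "1.2"   # mdadm metadata version
    devices:
      - partition: sda1
      - partition: sdb1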

Caution

The mdadm RAID devices cannot be created on top of LVM devices.

You can use flexible size units throughout bare metal host profiles. For example, you can use either sizeGiB: 0.1 or size: 100Mi when specifying a device size.

Mirantis recommends using only one parameter name and unit format throughout the configuration files. If both sizeGiB and size are used, sizeGiB is ignored during deployment and the value is serialized with the suffix adjusted accordingly. For example, 1.5Gi is serialized as 1536Mi. A size without units is counted in bytes. For example, size: 120 means 120 bytes.
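The following hypothetical extract shows the accepted notations side by side; the partition names are illustrative:

partitions:
  - name: example_part1
    size: 100Mi   # unit suffix, mebibytes
  - name: example_part2
    size: 120Gi   # unit suffix, gibibytes
  - name: example_part3
    size: 120     # no suffix, counted as 120 bytes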

Warning

All data will be wiped during cluster deployment on devices defined directly or indirectly in the fileSystems list of BareMetalHostProfile. For example:

  • A raw device partition with a file system on it

  • A device partition in a volume group with a logical volume that has a file system on it

  • An mdadm RAID device with a file system on it

  • An LVM RAID device with a file system on it

The wipe field is always considered true for these devices. The false value is ignored.

Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.
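For example, in the following hypothetical extract, the device is wiped regardless of wipe: false because its partition indirectly appears in the fileSystems list through a file system definition; all names are illustrative:

devices:
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      wipe: false          # ignored: the partition below receives a file system
    partitions:
      - name: example_data
        size: 0
fileSystems:
  - fileSystem: ext4
    mountPoint: /mnt/example
    partition: example_data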

The following example illustrates an extract of BareMetalHostProfile with / on the mdadm raid1 device and data storage on a raid0 device:

Example with / on the mdadm raid1 and data storage on raid0
...
devices:
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      type: nvme
      wipe: true
    partitions:
      - name: root_part1
        size: 120Gi
      - name: rest_sda
        size: 0
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      type: nvme
      wipe: true
    partitions:
      - name: root_part2
        size: 120Gi
      - name: rest_sdb
        size: 0
softRaidDevices:
  - name: root
    level: raid1  ## <-- mdadm raid1
    devices:
      - partition: root_part1
      - partition: root_part2
  - name: data
    level: raid0  ## <-- mdadm raid0
    devices:
      - partition: rest_sda
      - partition: rest_sdb
fileSystems:
  - fileSystem: ext4
    softRaidDevice: root
    mountPoint: /
    mountOpts: "noatime,nodiratime"
  - fileSystem: ext4
    softRaidDevice: data
    mountPoint: /mnt/data
...

The following example illustrates an extract of BareMetalHostProfile with data storage on a raid10 device:

Example with data storage on the mdadm raid10
...
devices:
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      minSize: 60Gi
      type: ssd
      wipe: true
    partitions:
      - name: bios_grub1
        partflags:
          - bios_grub
        size: 4Mi
        wipe: true
      - name: uefi
        partflags:
          - esp
        size: 200Mi
        wipe: true
      - name: config-2
        size: 64Mi
        wipe: true
      - name: lvm_root
        size: 0
        wipe: true
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      minSize: 60Gi
      type: nvme
      wipe: true
    partitions:
      - name: md_part1
        partflags:
          - raid
        size: 40Gi
        wipe: true
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      minSize: 60Gi
      type: nvme
      wipe: true
    partitions:
      - name: md_part2
        partflags:
          - raid
        size: 40Gi
        wipe: true
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      minSize: 60Gi
      type: nvme
      wipe: true
    partitions:
      - name: md_part3
        partflags:
          - raid
        size: 40Gi
        wipe: true
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      minSize: 60Gi
      type: nvme
      wipe: true
    partitions:
      - name: md_part4
        partflags:
          - raid
        size: 40Gi
        wipe: true
fileSystems:
  - fileSystem: vfat
    partition: config-2
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi
  - fileSystem: ext4
    mountOpts: rw,noatime,nodiratime,lazytime,nobarrier,commit=240,data=ordered
    mountPoint: /
    partition: lvm_root
  - fileSystem: ext4
    mountPoint: /var
    softRaidDevice: /dev/md0
softRaidDevices:
  - devices:
      - partition: md_part1
      - partition: md_part2
      - partition: md_part3
      - partition: md_part4
    level: raid10
    metadata: "1.2"
    name: /dev/md0
...