Create an mdadm software RAID level 10 (raid10)

Available since 2.16.0 in 8.5.0 Technology Preview

Warning

The EFI system partition (partflags: ['esp']) must be a physical partition in the main partition table of the disk, not under LVM or mdadm software RAID.

You can deploy Mirantis OpenStack for Kubernetes (MOSK) on local software-based Redundant Array of Independent Disks (RAID) devices to withstand failure of one device at a time.

Using a custom bare metal host profile, you can configure and create an mdadm-based software RAID device of type raid10 if you have an even number of devices available on your servers. At least four storage devices are required for such a RAID device.

During configuration of your custom bare metal host profile as described in Create a custom bare metal host profile, create an mdadm-based software RAID device of type raid10 by describing the mdadm devices under the softRaidDevices field. For example:

...
softRaidDevices:
- name: /dev/md0
  level: raid10
  devices:
    - partition: sda1
    - partition: sdb1
    - partition: sdc1
    - partition: sdd1
...

The following fields in softRaidDevices describe RAID devices:

  • name

    Name of the RAID device to refer to throughout the BareMetalHostProfile object.

  • devices

    List of physical devices or partitions used to build a software RAID device. It must include at least four partitions or devices to build a raid10 device.

  • level

    Type or level of RAID used to create the device. Set to raid10 or raid1 to create a device of the corresponding type.

For the rest of the mdadm RAID parameters, see API Reference: BareMetalHostProfile spec.
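
For comparison, the same structure describes a raid1 mirror, which requires only two storage devices. The following sketch is illustrative only: the device name /dev/md1 and the partition names md_mirror_part1 and md_mirror_part2 are hypothetical and must correspond to partitions that you define under devices, typically with the raid partition flag as in the full example below:

...
softRaidDevices:
- name: /dev/md1
  level: raid1
  devices:
    - partition: md_mirror_part1
    - partition: md_mirror_part2
...

Both layouts withstand the failure of one device at a time.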

Caution

An mdadm RAID device cannot be created on top of an LVM device.

The following example illustrates an extract of a BareMetalHostProfile with data storage on a raid10 device:

...
devices:
  - device:
      minSizeGiB: 60
      wipe: true
    partitions:
      - name: bios_grub1
        partflags:
          - bios_grub
        sizeGiB: 0.00390625
        wipe: true
      - name: uefi
        partflags:
          - esp
        sizeGiB: 0.20000000298023224
        wipe: true
      - name: config-2
        sizeGiB: 0.0625
        wipe: true
      - name: lvm_root
        sizeGiB: 0
        wipe: true
  - device:
      minSizeGiB: 60
      wipe: true
    partitions:
      - name: md_part1
        partflags:
          - raid
        sizeGiB: 40
        wipe: true
  - device:
      minSizeGiB: 60
      wipe: true
    partitions:
      - name: md_part2
        partflags:
          - raid
        sizeGiB: 40
        wipe: true
  - device:
      minSizeGiB: 60
      wipe: true
    partitions:
      - name: md_part3
        partflags:
          - raid
        sizeGiB: 40
        wipe: true
  - device:
      minSizeGiB: 60
      wipe: true
    partitions:
      - name: md_part4
        partflags:
          - raid
        sizeGiB: 40
        wipe: true
fileSystems:
  - fileSystem: vfat
    partition: config-2
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi
  - fileSystem: ext4
    mountOpts: rw,noatime,nodiratime,lazytime,nobarrier,commit=240,data=ordered
    mountPoint: /
    partition: lvm_root
  - fileSystem: ext4
    mountPoint: /var
    softRaidDevice: /dev/md0
softRaidDevices:
  - devices:
      - partition: md_part1
      - partition: md_part2
      - partition: md_part3
      - partition: md_part4
    level: raid10
    metadata: "1.2"
    name: /dev/md0
...

Warning

During cluster deployment, all data is wiped on the devices that are defined directly or indirectly in the fileSystems list of BareMetalHostProfile. For example:

  • A raw device partition with a file system on it

  • A device partition in a volume group with a logical volume that has a file system on it

  • An mdadm RAID device with a file system on it

  • An LVM RAID device with a file system on it

The wipe field is always considered true for these devices. The false value is ignored.

Therefore, to prevent data loss, move any data that you need to keep from these file systems to another server beforehand.
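
The following hypothetical extract illustrates this behavior; the partition name data_part and the mount point /mnt/data are placeholders:

...
devices:
  - device:
      minSizeGiB: 60
      wipe: false   # ignored: the partition below is referenced in fileSystems
    partitions:
      - name: data_part
        sizeGiB: 0
        wipe: false # also ignored, the partition is wiped during deployment
fileSystems:
  - fileSystem: ext4
    mountPoint: /mnt/data
    partition: data_part
...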