Create an mdadm software RAID level 10 (raid10)
The EFI system partition (partflags: ['esp']) must be a physical partition in the main partition table of the disk, not under LVM or mdadm software RAID.
You can deploy Mirantis OpenStack for Kubernetes (MOSK) on local software-based Redundant Array of Independent Disks (RAID) devices to withstand failure of one device at a time.
Using a custom bare metal host profile, you can configure and create an mdadm-based software RAID device of type raid10 if you have an even number of storage devices available on your servers. At least four storage devices are required for such a RAID device.
During configuration of your custom bare metal host profile as described in Create a custom bare metal host profile, create an mdadm-based software RAID device raid10 by describing the mdadm devices under the softRaidDevices field. For example:
```yaml
...
softRaidDevices:
  - name: /dev/md0
    level: raid10
    devices:
      - partition: sda1
      - partition: sdb1
      - partition: sdc1
      - partition: sdd1
...
```
The following fields in softRaidDevices describe RAID devices:

name
  Name of the RAID device to refer to throughout the BareMetalHostProfile object.
devices
  List of physical devices or partitions used to build a software RAID device. It must include at least four partitions or devices to build a raid10 device.
level
  Type or level of RAID used to create a device. Set to raid10 to create a device of the corresponding type.
For the rest of the mdadm RAID parameters, see API Reference: BareMetalHostProfile spec.
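The constraints above, at least four members and an even total for raid10, can be checked before a profile is applied. The following Python sketch is illustrative only: validate_soft_raid is a hypothetical helper, not part of MOSK tooling, and it assumes the softRaidDevices section has already been parsed (for example, with a YAML library) into a list of dictionaries.

```python
def validate_soft_raid(soft_raid_devices):
    """Check basic mdadm raid10 constraints for each softRaidDevices entry.

    Hypothetical validation helper; mirrors the documented rules:
    raid10 needs at least four members and an even number of them.
    """
    errors = []
    for dev in soft_raid_devices:
        members = dev.get("devices", [])
        if dev.get("level") == "raid10":
            if len(members) < 4:
                errors.append(
                    f"{dev['name']}: raid10 needs at least 4 devices, "
                    f"got {len(members)}"
                )
            if len(members) % 2 != 0:
                errors.append(
                    f"{dev['name']}: raid10 needs an even number of devices"
                )
    return errors


# A three-member raid10 definition violates both rules,
# so both messages are reported.
profile = [{
    "name": "/dev/md0",
    "level": "raid10",
    "devices": [
        {"partition": "sda1"},
        {"partition": "sdb1"},
        {"partition": "sdd1"},
    ],
}]
print(validate_soft_raid(profile))
```

A four-member definition passes both checks and returns an empty list.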
The mdadm RAID devices cannot be created on top of an LVM device.
The following example illustrates an extract of BareMetalHostProfile with data storage on a raid10 device:
```yaml
...
devices:
  - device:
      minSize: 60Gi
      wipe: true
    partitions:
      - name: bios_grub1
        partflags:
          - bios_grub
        size: 4Mi
        wipe: true
      - name: uefi
        partflags:
          - esp
        size: 200Mi
        wipe: true
      - name: config-2
        size: 64Mi
        wipe: true
      - name: lvm_root
        size: 0
        wipe: true
  - device:
      minSize: 60Gi
      wipe: true
    partitions:
      - name: md_part1
        partflags:
          - raid
        size: 40Gi
        wipe: true
  - device:
      minSize: 60Gi
      wipe: true
    partitions:
      - name: md_part2
        partflags:
          - raid
        size: 40Gi
        wipe: true
  - device:
      minSize: 60Gi
      wipe: true
    partitions:
      - name: md_part3
        partflags:
          - raid
        size: 40Gi
        wipe: true
  - device:
      minSize: 60Gi
      wipe: true
    partitions:
      - name: md_part4
        partflags:
          - raid
        size: 40Gi
        wipe: true
fileSystems:
  - fileSystem: vfat
    partition: config-2
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi
  - fileSystem: ext4
    mountOpts: rw,noatime,nodiratime,lazytime,nobarrier,commit=240,data=ordered
    mountPoint: /
    partition: root
  - fileSystem: ext4
    mountPoint: /var
    softRaidDevice: /dev/md0
softRaidDevices:
  - devices:
      - partition: md_part1
      - partition: md_part2
      - partition: md_part3
      - partition: md_part4
    level: raid10
    metadata: "1.2"
    name: /dev/md0
...
```
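A mismatch between the partition names defined under devices and the names referenced by softRaidDevices is an easy mistake to make in a profile like the one above. The Python sketch below cross-checks the two sections; undefined_raid_members is a hypothetical helper written for illustration, assuming the profile has been loaded into a plain dictionary (for example, with a YAML parser), and is not part of any MOSK CLI.

```python
def undefined_raid_members(profile):
    """Return partition names referenced in softRaidDevices
    but never defined under devices in the profile extract."""
    defined = {
        part["name"]
        for entry in profile.get("devices", [])
        for part in entry.get("partitions", [])
    }
    missing = []
    for raid in profile.get("softRaidDevices", []):
        for member in raid.get("devices", []):
            if member["partition"] not in defined:
                missing.append(member["partition"])
    return missing


# Deliberate typo for the demo: md_root_part4 is referenced
# but only md_part1..md_part4 are defined, so it is reported.
profile = {
    "devices": [
        {"partitions": [{"name": "md_part1"}, {"name": "md_part2"}]},
        {"partitions": [{"name": "md_part3"}, {"name": "md_part4"}]},
    ],
    "softRaidDevices": [
        {
            "name": "/dev/md0",
            "devices": [
                {"partition": "md_part1"},
                {"partition": "md_part2"},
                {"partition": "md_part3"},
                {"partition": "md_root_part4"},
            ],
        },
    ],
}
print(undefined_raid_members(profile))
```

When all referenced partitions are defined, the function returns an empty list.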
All data will be wiped during cluster deployment on devices
defined directly or indirectly in the
fileSystems list of
BareMetalHostProfile. For example:
A raw device partition with a file system on it
A device partition in a volume group with a logical volume that has a file system on it
An mdadm RAID device with a file system on it
An LVM RAID device with a file system on it
The wipe field is always considered true for these devices; a false value is ignored.
Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.