Create an LVM software RAID (raid1)
TechPreview
Warning
The EFI system partition (partflags: ['esp']) must be
a physical partition in the main partition table of the disk, not under
LVM or mdadm software RAID.
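For instance, a minimal sketch of such a partition declared directly on the disk within the devices section of the profile (the uefi name and 200Mi size are illustrative assumptions, not required values):

devices:
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      minSize: 200Gi
      type: hdd
      wipe: true
    partitions:
      - name: uefi             # EFI system partition on the raw disk, not under LVM or mdadm
        partflags: ['esp']
        size: 200Mi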
During configuration of your custom bare metal host profile,
you can create an LVM-based software RAID device raid1 by adding
type: raid1 to the logicalVolume spec in BareMetalHostProfile.
For the description of the LVM RAID parameters, refer to BareMetalHostProfile spec.
For the bare metal host profile configuration procedure, refer to Create a custom bare metal host profile.
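For instance, a minimal sketch of the relevant extract (the vg-root name is taken from the full example below; the volume group must already contain the underlying partitions):

logicalVolumes:
  - name: root
    type: raid1      ## <-- builds the logical volume as an LVM RAID 1 mirror
    vg: vg-root
    size: 0          ## 0 means use all available space in the volume group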
Caution
The logicalVolume spec of the raid1 type requires at least
two devices (partitions) in the volumeGroup on which you build the logical
volume. For a logical volume of the linear type, one device is enough.
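As an illustration, a two-partition volume group that can back a raid1 logical volume next to a single-partition volume group that is sufficient for a linear one (the vg-mirror, vg-single, and part_* names are hypothetical):

volumeGroups:
  - name: vg-mirror            # raid1 requires at least two partitions here
    devices:
      - partition: part_sda1
      - partition: part_sdb1
  - name: vg-single            # linear works with a single partition
    devices:
      - partition: part_sdc1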
You can use flexible size units throughout bare metal host profiles:
Ki, Mi, Gi, and so on. For example, size: 100Mi or
size: 1.5Gi, which is serialized as 1536Mi. A size without units
is counted in bytes. For example, size: 120 means 120 bytes.
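For example, the following partition sizes (partition names are illustrative) show both the unit-based and unit-less forms:

partitions:
  - name: part_example1
    size: 100Mi      # 100 mebibytes
  - name: part_example2
    size: 1.5Gi      # serialized as 1536Mi
  - name: part_example3
    size: 120        # no unit: 120 bytes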
Note
The LVM raid1 requires additional space to store the raid1
metadata in the volume group, roughly 4 MB for each partition.
Therefore, you cannot create a logical volume of exactly the same
size as the partitions it is built on.
For example, if you have two partitions of 10 GiB, the resulting
raid1 logical volume will be slightly smaller than 10 GiB. For that
reason, either set size: 0 to use all available
space in the volume group, or set a size smaller than the partition
size. For example, use size: 9.9Gi instead of
size: 10Gi for the logical volume.
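A sketch of the first option, assuming a volume group vg-example built on two 10 GiB partitions:

logicalVolumes:
  - name: root
    type: raid1
    vg: vg-example
    size: 0          # uses all space left after the raid1 metadata;
                     # alternatively, set an explicit size below the
                     # partition size, for example 9.9Gi instead of 10Gi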
The following example illustrates an extract of BareMetalHostProfile
with the root file system (/) on an LVM raid1 device.
...
devices:
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      minSize: 200Gi
      type: hdd
      wipe: true
    partitions:
      - name: root_part1
        size: 120Gi
      - name: rest_sda
        size: 0
  - device:
      workBy: "by_id,by_wwn,by_path,by_name"
      minSize: 200Gi
      type: hdd
      wipe: true
    partitions:
      - name: root_part2
        size: 120Gi
      - name: rest_sdb
        size: 0
volumeGroups:
  - name: vg-root
    devices:
      - partition: root_part1
      - partition: root_part2
  - name: vg-data
    devices:
      - partition: rest_sda
      - partition: rest_sdb
logicalVolumes:
  - name: root
    type: raid1  ## <-- LVM raid1
    vg: vg-root
    size: 119.9Gi
  - name: data
    type: linear
    vg: vg-data
    size: 0
fileSystems:
  - fileSystem: ext4
    logicalVolume: root
    mountPoint: /
    mountOpts: "noatime,nodiratime"
  - fileSystem: ext4
    logicalVolume: data
    mountPoint: /mnt/data
Warning
All data will be wiped during cluster deployment on devices
defined directly or indirectly in the fileSystems list of
BareMetalHostProfile. For example:
- A raw device partition with a file system on it
- A device partition in a volume group with a logical volume that has a file system on it
- An mdadm RAID device with a file system on it
- An LVM RAID device with a file system on it
The wipe field is always considered true for these devices.
The false value is ignored.
Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.