Create an LVM software RAID level 1 (raid1)¶
Caution

This feature is available as Technology Preview. Use this configuration for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.
Warning

The EFI system partition (partflags: ['esp']) must be a physical partition in the main partition table of the disk, not under LVM or mdadm software RAID.
During configuration of your custom bare metal host profile, you can create an LVM-based software RAID device raid1 by adding type: raid1 to the logicalVolume spec in BaremetalHostProfile.

For the description of the LVM RAID parameters, refer to API: BareMetalHostProfile spec. For a bare metal host profile configuration, refer to Create a custom bare metal host profile.
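For instance, a minimal logicalVolumes extract with this setting might look as follows. This is an illustrative sketch: the vg-root volume group name and root volume name are assumptions, not required values.

```yaml
logicalVolumes:
- name: root
  type: raid1   # builds the logical volume as LVM software RAID level 1
  vg: vg-root   # this volume group must contain at least two partitions
  size: 0       # use all space available in the volume group
```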
Caution

The logicalVolume spec of the raid1 type requires at least two devices (partitions) in the volumeGroup where you build a logical volume. For an LVM of the linear type, one device is enough.
Note

The LVM raid1 requires additional space to store the raid1 metadata on a volume group, roughly 4 MB for each partition. Therefore, you cannot create a logical volume of exactly the same size as the partitions it works on. For example, if you have two partitions of 10 GiB, the corresponding raid1 logical volume size will be less than 10 GiB. For that reason, you can either set size: 0 to use all available space on the volume group, or set a size smaller than the partition size. For example, use size: 9.9Gi instead of size: 10Gi for the logical volume.
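Using the sizes from the note above, either of the following illustrative extracts accounts for the raid1 metadata overhead (the vg-root name is an assumption):

```yaml
logicalVolumes:
# Option 1: let LVM use all space available in the volume group
- name: root
  type: raid1
  vg: vg-root
  size: 0
# Option 2: request slightly less than the 10 GiB partition size
# - name: root
#   type: raid1
#   vg: vg-root
#   size: 9.9Gi
```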
The following example illustrates an extract of BaremetalHostProfile with / on the LVM raid1:
...
devices:
- device:
    byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
    minSize: 200Gi
    type: hdd
    wipe: true
  partitions:
  - name: root_part1
    size: 120Gi
  - name: rest_sda
    size: 0
- device:
    byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
    minSize: 200Gi
    type: hdd
    wipe: true
  partitions:
  - name: root_part2
    size: 120Gi
  - name: rest_sdb
    size: 0
volumeGroups:
- name: vg-root
  devices:
  - partition: root_part1
  - partition: root_part2
- name: vg-data
  devices:
  - partition: rest_sda
  - partition: rest_sdb
logicalVolumes:
- name: root
  type: raid1  ## <-- LVM raid1
  vg: vg-root
  size: 119.9Gi
- name: data
  type: linear
  vg: vg-data
  size: 0
fileSystems:
- fileSystem: ext4
  logicalVolume: root
  mountPoint: /
  mountOpts: "noatime,nodiratime"
- fileSystem: ext4
  logicalVolume: data
  mountPoint: /mnt/data
Warning

Any data stored on any device defined in the fileSystems list can be deleted or corrupted during cluster (re)deployment. It happens because each device from the fileSystems list is a part of the rootfs directory tree that is overwritten during (re)deployment.

Examples of affected devices include:

- A raw device partition with a file system on it
- A device partition in a volume group with a logical volume that has a file system on it
- An mdadm RAID device with a file system on it
- An LVM RAID device with a file system on it

The wipe field (deprecated) or the wipeDevice structure (recommended since Container Cloud 2.26.0) has no effect in this case and cannot protect data on these devices. Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.