Migrate Ceph cluster to address storage devices using by-id¶
Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom resource (CR), available since MOSK 25.2 as a replacement for the deprecated KaaSCephCluster CR. For the equivalent procedure that uses the KaaSCephCluster CR, refer to the corresponding section of the documentation.
The by-id identifier is the only persistent device identifier for a Ceph cluster that remains stable after a cluster upgrade or any other maintenance. Therefore, Mirantis recommends using device by-id symlinks rather than device names or by-path symlinks.
MOSK uses the device by-id identifier as the default method of addressing the underlying devices of Ceph OSDs. Therefore, migrate all existing Ceph clusters that still use device names or device by-path symlinks to the by-id format.
This section explains how to configure the MiraCeph specification to use by-id symlinks instead of disk names and by-path identifiers as the default method of addressing storage devices.
Note

Mirantis recommends avoiding wwn symlinks as by-id identifiers because they are not reliably persistent and may be discovered inconsistently during node boot.
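To inspect which by-id symlinks a node exposes, you can list them directly on the node and filter out the wwn ones. This is a generic Linux command, not specific to MOSK:

# List udev-created by-id symlinks, excluding the non-persistent wwn ones
ls -l /dev/disk/by-id/ | grep -v wwn-

Each remaining symlink points to a device node such as /dev/sdc and is a candidate for use in the MiraCeph specification.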
Besides migrating to by-id, consider using the fullPath field instead of the name field in the spec.nodes.devices section when configuring by-id symlinks. This approach keeps the purpose of each field clear and consistent with its naming.
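For illustration, a devices item that addresses a disk through the fullPath field may look as follows. The node name and symlink below are placeholders, not values from a real cluster:

spec:
  nodes:
  - name: <nodeName>
    devices:
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/<byIDSymlink>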
Migrate the Ceph nodes section to by-id identifiers¶
1. Make sure that your MOSK cluster is not currently running an upgrade or any other maintenance process.
2. Obtain the list of all MiraCeph storage devices that use disk names or disk by-path symlinks as identifiers of Ceph node storage devices:

kubectl -n ceph-lcm-mirantis get miraceph -o yaml
Example of system response:

spec:
  nodes:
  ...
  - name: managed-worker-1
    devices:
    - config:
        deviceClass: hdd
      name: sdc
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
  - name: managed-worker-2
    devices:
    - config:
        deviceClass: hdd
      name: /dev/disk/by-id/wwn-0x26d546263bd312b8
    - config:
        deviceClass: hdd
      name: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dsdc
  - name: managed-worker-3
    devices:
    - config:
        deviceClass: nvme
      name: nvme3n1
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
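Optionally, to quickly spot entries that still use short disk names or by-path symlinks, a simple grep over the same output may help. The disk-name pattern below is an assumption that covers common sd*, vd*, and nvme* names; adjust it to your hardware:

# Flag devices entries that use short disk names or by-path symlinks
kubectl -n ceph-lcm-mirantis get miraceph -o yaml \
  | grep -nE '(name: (sd[a-z]+|vd[a-z]+|nvme[0-9]+n[0-9]+)$|by-path)'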
3. Verify the items from the devices sections to be moved to by-id symlinks. The list of the items to migrate includes:

- A disk name in the name field. For example, sdc, nvme3n1, and so on.
- A disk /dev/disk/by-path symlink in the fullPath field. For example, /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2.
- A disk /dev/disk/by-id symlink in the name field.
- A disk /dev/disk/by-id/wwn symlink, which is programmatically calculated at boot. For example, /dev/disk/by-id/wwn-0x26d546263bd312b8.
For the example above, we have to migrate both items of managed-worker-1, both items of managed-worker-2, and the first item of managed-worker-3. The second item of managed-worker-3 is already configured in the required format, so we leave it as is.

4. Open the MiraCeph custom resource for editing to start migration of all affected devices items to by-id symlinks:

kubectl -n ceph-lcm-mirantis edit miraceph
5. For each affected node from the spec.nodes section, obtain the corresponding status.providerStatus.hardware.storage section of the Machine custom resource located on the management cluster:

kubectl -n <moskClusterProject> get machine <machineName> -o yaml

Substitute <moskClusterProject> with the corresponding cluster namespace and <machineName> with the machine name.

Example of system response for managed-worker-1:

status:
  providerStatus:
    hardware:
      storage:
      - byID: /dev/disk/by-id/wwn-0x05ad99618d66a21f
        byIDs:
        - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_05ad99618d66a21f
        - /dev/disk/by-id/scsi-305ad99618d66a21f
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_05ad99618d66a21f
        - /dev/disk/by-id/wwn-0x05ad99618d66a21f
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0
        byPaths:
        - /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0
        name: /dev/sda
        serialNumber: 05ad99618d66a21f
        size: 61
        type: hdd
      - byID: /dev/disk/by-id/wwn-0x26d546263bd312b8
        byIDs:
        - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_26d546263bd312b8
        - /dev/disk/by-id/scsi-326d546263bd312b8
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_26d546263bd312b8
        - /dev/disk/by-id/wwn-0x26d546263bd312b8
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
        byPaths:
        - /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
        name: /dev/sdb
        serialNumber: 26d546263bd312b8
        size: 32
        type: hdd
      - byID: /dev/disk/by-id/wwn-0x2e52abb48862dbdc
        byIDs:
        - /dev/disk/by-id/lvm-pv-uuid-MncrcO-6cel-0QsB-IKaY-e8UK-6gDy-k2hOtf
        - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2e52abb48862dbdc
        - /dev/disk/by-id/scsi-32e52abb48862dbdc
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
        - /dev/disk/by-id/wwn-0x2e52abb48862dbdc
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
        byPaths:
        - /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
        name: /dev/sdc
        serialNumber: 2e52abb48862dbdc
        size: 61
        type: hdd
6. For each affected devices item of the considered Machine, obtain the correct by-id symlink from status.providerStatus.hardware.storage.byIDs. Such a by-id symlink must contain status.providerStatus.hardware.storage.serialNumber and must not contain wwn.

For managed-worker-1, according to the example system response above, we can use the following by-id symlinks:

- Replace the first item of devices that contains name: sdc with fullPath: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
- Replace the second item of devices that contains fullPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2 with fullPath: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_26d546263bd312b8
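As an optional helper, not part of the official procedure, you can shortlist suitable symlinks with jq: the query below prints, for every disk in the Machine status, the byIDs candidates that contain the disk serial number and exclude wwn. It assumes jq is installed and uses the same placeholders as above:

# Print "<device>: <candidate by-id symlink>" for every disk of the Machine
kubectl -n <moskClusterProject> get machine <machineName> -o json \
  | jq -r '.status.providerStatus.hardware.storage[]
           | . as $disk
           | $disk.byIDs[]
           | select(contains($disk.serialNumber) and (contains("wwn") | not))
           | "\($disk.name): \(.)"'

A disk may have several matching candidates; any of them, for example the scsi-SQEMU_QEMU_HARDDISK_* ones in the examples above, can be used in the fullPath field.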
7. Replace all affected devices items in MiraCeph with the obtained ones.

Resulting example of the storage device identifier migration:

spec:
  nodes:
  ...
  - name: managed-worker-1
    devices:
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_26d546263bd312b8
  - name: managed-worker-2
    devices:
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_031d9054c9b48f79
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dsdc
  - name: managed-worker-3
    devices:
    - config:
        deviceClass: nvme
      fullPath: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R394543
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
8. Save and quit editing the MiraCeph custom resource.
After the migration, re-orchestration occurs. The whole procedure should not result in any actual changes to the state of the Ceph OSDs in the cluster.
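To verify that the OSD layout is indeed unchanged, you can compare the Ceph OSD tree and cluster status before and after the migration. This is an optional check that assumes the Rook toolbox (rook-ceph-tools) deployment is available in the rook-ceph namespace, which may differ in your environment:

# Compare the output of these commands before and after the migration
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status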
Migrate the Ceph node groups to by-id identifiers¶
Besides single nodes, the nodes section may contain node groups specified with disk names instead of by-id symlinks. Unlike single nodes, whose storage device identifiers can be replaced in place, node groups require a different approach because the same spec section is reused for several nodes.
To migrate storage devices for node groups, use the deviceLabels section to assign the same set of labels to different disks and then refer to these labels in the node groups. For the deviceLabels section specification, refer to Ceph advanced configuration: extraOpts.
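Schematically, the mechanism looks as follows: each node maps a shared label to its own by-id symlink, and the node group refers to the label. All names below are placeholders, not values from a real cluster:

spec:
  extraOpts:
    deviceLabels:
      <node-1>:
        <device-label>: /dev/disk/by-id/<symlink-on-node-1>
      <node-2>:
        <device-label>: /dev/disk/by-id/<symlink-on-node-2>
  nodes:
  - name: <group-name>
    nodeGroup:
    - <node-1>
    - <node-2>
    devices:
    - name: <device-label>
      config:
        deviceClass: hdd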
The following procedure describes how to keep node groups in the nodes section but use unique by-id identifiers instead of disk names.
To migrate the Ceph node groups to by-id identifiers:
1. Make sure that your MOSK cluster is not currently running an upgrade or any other maintenance process.
2. Obtain the list of all MiraCeph storage devices that use disk names or disk by-path symlinks as identifiers of Ceph node group storage devices:

kubectl -n ceph-lcm-mirantis get miraceph -o yaml

Example extract of the MiraCeph node groups in the nodes section with disk names used as identifiers:

spec:
  nodes:
  ...
  - name: rack-1
    nodeGroup:
    - node-1
    - node-2
    crush:
      rack: "rack-1"
    devices:
    - name: nvme0n1
      config:
        deviceClass: nvme
    - name: nvme1n1
      config:
        deviceClass: nvme
    - name: nvme2n1
      config:
        deviceClass: nvme
  - name: rack-2
    nodeGroup:
    - node-3
    - node-4
    crush:
      rack: "rack-2"
    devices:
    - name: nvme0n1
      config:
        deviceClass: nvme
    - name: nvme1n1
      config:
        deviceClass: nvme
    - name: nvme2n1
      config:
        deviceClass: nvme
  - name: rack-3
    nodeGroup:
    - node-5
    - node-6
    crush:
      rack: "rack-3"
    devices:
    - name: nvme0n1
      config:
        deviceClass: nvme
    - name: nvme1n1
      config:
        deviceClass: nvme
    - name: nvme2n1
      config:
        deviceClass: nvme
3. Verify the items from the devices sections to be moved to by-id symlinks. The list of the items to migrate includes:

- A disk name in the name field. For example, sdc, nvme3n1, and so on.
- A disk /dev/disk/by-path symlink in the fullPath field. For example, /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2.
- A disk /dev/disk/by-id symlink in the name field.
- A disk /dev/disk/by-id/wwn symlink, which is programmatically calculated at boot. For example, /dev/disk/by-id/wwn-0x26d546263bd312b8.

All devices sections in the example above contain disk names in the name field. Therefore, you need to replace them with by-id symlinks.

4. Open the MiraCeph custom resource for editing to start migration of all affected devices items to by-id symlinks:

kubectl -n ceph-lcm-mirantis edit miraceph
5. Within each impacted Ceph node group in the nodes section, add disk labels to the deviceLabels sections for every affected storage device linked with the nodes listed in nodeGroup of that specific node group. Make sure that each disk label points to the by-id symlink of the corresponding disk.

For example, if the node group rack-1 contains two nodes node-1 and node-2, and spec contains three items with name, you need to obtain the proper by-id symlinks for the disk names on both nodes and record them under the same disk labels. The following example contains the labels for the by-id symlinks of the nvme0n1, nvme1n1, and nvme2n1 disks from node-1 and node-2 respectively:

spec:
  extraOpts:
    deviceLabels:
      node-1:
        nvme-1: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R394543
        nvme-2: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R372150
        nvme-3: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R183266
      node-2:
        nvme-1: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB4040ALR-00007_S46FNY0R900128
        nvme-2: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB4040ALR-00007_S46FNY0R805840
        nvme-3: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB4040ALR-00007_S46FNY0R848469
Note

Keep device labels repeatable for all nodes from the node group. This allows specifying a unified spec for different by-id symlinks of different nodes.

Example of the complete deviceLabels section:

spec:
  extraOpts:
    deviceLabels:
      node-1:
        nvme-1: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R394543
        nvme-2: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R372150
        nvme-3: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R183266
      node-2:
        nvme-1: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB4040ALR-00007_S46FNY0R900128
        nvme-2: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB4040ALR-00007_S46FNY0R805840
        nvme-3: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB4040ALR-00007_S46FNY0R848469
      node-3:
        nvme-1: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB00T2B0A-00007_S46FNY0R900128
        nvme-2: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB00T2B0A-00007_S46FNY0R805840
        nvme-3: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB00T2B0A-00007_S46FNY0R848469
      node-4:
        nvme-1: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB00Z4SA0-00007_S46FNY0R286212
        nvme-2: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB00Z4SA0-00007_S46FNY0R350024
        nvme-3: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB00Z4SA0-00007_S46FNY0R300756
      node-5:
        nvme-1: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB8UK0QBD-00007_S46FNY0R577024
        nvme-2: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB8UK0QBD-00007_S46FNY0R718411
        nvme-3: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB8UK0QBD-00007_S46FNY0R831424
      node-6:
        nvme-1: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB01DAU34-00007_S46FNY0R908440
        nvme-2: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB01DAU34-00007_S46FNY0R945405
        nvme-3: /dev/disk/by-id/nvme-SAMSUNG_MZ1LB01DAU34-00007_S46FNY0R224911
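As an optional sanity check, not part of the official procedure, you can summarize the deviceLabels section with jq to confirm that every node defines the same set of labels. The query assumes jq is installed and that the cluster has a single MiraCeph object:

# Print "<node>: <label, label, ...>" for every node; all lines should match
kubectl -n ceph-lcm-mirantis get miraceph -o json \
  | jq -r '.items[0].spec.extraOpts.deviceLabels
           | to_entries[]
           | "\(.key): \(.value | keys | join(", "))"'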
6. For each affected node group in the nodes section, replace the field that contains the insufficient disk identifier with the name field set to the disk label from the deviceLabels section.

Example of the updated nodes section:

spec:
  nodes:
  ...
  - name: rack-1
    nodeGroup:
    - node-1
    - node-2
    crush:
      rack: "rack-1"
    devices:
    - name: nvme-1
      config:
        deviceClass: nvme
    - name: nvme-2
      config:
        deviceClass: nvme
    - name: nvme-3
      config:
        deviceClass: nvme
  - name: rack-2
    nodeGroup:
    - node-3
    - node-4
    crush:
      rack: "rack-2"
    devices:
    - name: nvme-1
      config:
        deviceClass: nvme
    - name: nvme-2
      config:
        deviceClass: nvme
    - name: nvme-3
      config:
        deviceClass: nvme
  - name: rack-3
    nodeGroup:
    - node-5
    - node-6
    crush:
      rack: "rack-3"
    devices:
    - name: nvme-1
      config:
        deviceClass: nvme
    - name: nvme-2
      config:
        deviceClass: nvme
    - name: nvme-3
      config:
        deviceClass: nvme
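To cross-check the result against the deviceLabels summary above, you can print the device labels that each node group references. This is an optional helper that, like the previous query, assumes jq and a single MiraCeph object:

# Print "<group>: <referenced device labels>" for every node group
kubectl -n ceph-lcm-mirantis get miraceph -o json \
  | jq -r '.items[0].spec.nodes[]
           | select(.nodeGroup != null)
           | "\(.name): \([.devices[].name] | join(", "))"'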
7. Save and quit editing the MiraCeph custom resource.
After the migration, re-orchestration occurs. The whole procedure should not result in any actual changes to the state of the Ceph OSDs in the cluster.