Add, remove, or reconfigure Ceph OSDs with metadata devices
Mirantis Ceph Controller simplifies Ceph cluster management by automating LCM
operations. This section describes how to add, remove, or reconfigure Ceph
OSDs with a separate metadata device.
Add a Ceph OSD with a metadata device
From the Ceph disks defined in the BareMetalHostProfile object, configured using the Configure Ceph disks in a host profile procedure, select one disk for the data and one logical volume for the metadata of the Ceph OSD to be added to the Ceph cluster.
Note
If you add a new disk after machine provisioning, manually
prepare the required machine devices using Logical Volume Manager (LVM) 2
on the existing node because BareMetalHostProfile does not support
in-place changes.
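For example, a minimal sketch of such manual preparation, assuming a hypothetical new disk /dev/sdd and reusing the bluedb/meta_1 naming from the examples below:
pvcreate /dev/sdd                 # initialize the new disk as an LVM physical volume (hypothetical device)
vgcreate bluedb /dev/sdd          # create a volume group for Ceph metadata logical volumes
lvcreate -L 16G -n meta_1 bluedb  # create the logical volume to reference as metadataDevice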
To add a Ceph OSD to an existing or hot-plugged raw device
If you want to add a Ceph OSD on top of a raw device that already exists
on a node or is hot-plugged, add the required device using the following
guidelines:
You can add a raw device to a node during node deployment.
If a node supports adding devices without node reboot, you can hot plug
a raw device to a node.
If a node does not support adding devices without node reboot, you can
hot plug a raw device during node shutdown. In this case, complete the
following steps:
Open the KaaSCephCluster object of the managed cluster for editing. Substitute <managedClusterProjectName> with the corresponding value:
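For example, a sketch of the edit command, assuming the kaascephcluster resource name:
kubectl -n <managedClusterProjectName> edit kaascephcluster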
In the nodes.<machineName>.storageDevices section, specify the parameters for the Ceph OSD as required. For parameter descriptions, see Node parameters.
Example configuration of the nodes section with the new Ceph OSD:
Since MOSK 23.3
nodes:
  kaas-node-5bgk6:
    roles:
    - mon
    - mgr
    storageDevices:
    - config: # existing item
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
    - config: # new item
        deviceClass: hdd
        metadataDevice: /dev/bluedb/meta_1
      fullPath: /dev/disk/by-id/scsi-0ATA_HGST_HUS724040AL_PN1334PEHN1VBC
Before MOSK 23.3
nodes:
  kaas-node-5bgk6:
    roles:
    - mon
    - mgr
    storageDevices:
    - config: # existing item
        deviceClass: hdd
      name: sdb
    - config: # new item
        deviceClass: hdd
        metadataDevice: /dev/bluedb/meta_1
      name: sdc
Warning
Since MOSK 23.3, Mirantis highly recommends
using the non-wwn by-id symlinks to specify storage devices in the
storageDevices list.
Remove a Ceph OSD with a metadata device
Ceph OSD removal requires using the KaaSCephOperationRequest custom resource (CR). For a workflow overview and the description of its spec and phases, see High-level workflow of Ceph OSD or node removal.
Open the KaaSCephCluster object of the managed cluster for editing:
Substitute <managedClusterProjectName> with the corresponding value.
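For example, assuming the kaascephcluster resource name:
kubectl -n <managedClusterProjectName> edit kaascephcluster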
Remove the required Ceph OSD specification from the
spec.cephClusterSpec.nodes.<machineName>.storageDevices list:
Example configuration of the nodes section with the Ceph OSD to remove:
Since MOSK 23.3
nodes:
  kaas-node-5bgk6:
    roles:
    - mon
    - mgr
    storageDevices:
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
    - config: # remove the entire item entry from storageDevices list
        deviceClass: hdd
        metadataDevice: /dev/bluedb/meta_1
      fullPath: /dev/disk/by-id/scsi-0ATA_HGST_HUS724040AL_PN1334PEHN1VBC
Before MOSK 23.3
nodes:
  kaas-node-5bgk6:
    roles:
    - mon
    - mgr
    storageDevices:
    - config:
        deviceClass: hdd
      name: sdb
    - config: # remove the entire item entry from storageDevices list
        deviceClass: hdd
        metadataDevice: /dev/bluedb/meta_1
      name: sdc
Create a YAML template for the KaaSCephOperationRequest CR. For example:
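The template below is a sketch: the apiVersion value is an assumption to verify against your release, while the cleanupByDevice and path fields follow the warnings below. The path value reuses the by-id symlink of the Ceph OSD removed in the example above:
apiVersion: kaas.mirantis.com/v1alpha1 # assumed API group/version, verify against your release
kind: KaaSCephOperationRequest
metadata:
  name: remove-osd-<machineName>-sdb
  namespace: <managedClusterProjectName>
spec:
  kaasCephCluster:
    name: <kaasCephClusterName>
    namespace: <managedClusterProjectName>
  osdRemove:
    nodes:
      <machineName>:
        cleanupByDevice:
        - path: /dev/disk/by-id/scsi-0ATA_HGST_HUS724040AL_PN1334PEHN1VBC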
Substitute <managedClusterProjectName> with the corresponding cluster
namespace and <kaasCephClusterName> with the corresponding
KaaSCephCluster name.
Warning
Since MOSK 23.3, Mirantis does not recommend setting device
name or device by-path symlink in the cleanupByDevice field
as these identifiers are not persistent and can change at node boot. Remove
Ceph OSDs with by-id symlinks specified in the path field or use
cleanupByOsdId instead.
Since MOSK 23.1,
cleanupByDevice is not supported if a device was physically
removed from a node. Therefore, use cleanupByOsdId instead. For
details, see Remove a failed Ceph OSD by Ceph OSD ID.
Before MOSK 23.1:
If the storageDevices item was specified with a by-id symlink, specify the path parameter in the cleanupByDevice section instead of name.
If the storageDevices item was specified with a by-path device path, specify the path parameter in the cleanupByDevice section instead of name.
Apply the template on the management cluster in the corresponding namespace:
kubectl apply -f remove-osd-<machineName>-sdb.yaml
Verify that the corresponding request has been created:
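For example, a sketch assuming the kaascephoperationrequest resource name and the request name from the template above:
kubectl -n <managedClusterProjectName> get kaascephoperationrequest remove-osd-<machineName>-sdb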
Reconfigure a partition of a Ceph OSD metadata device
There is no hot reconfiguration procedure for existing Ceph OSDs. To reconfigure the metadata device partition of an existing Ceph OSD, remove and re-add the Ceph OSD with its metadata device using one of the following options:
Since Container Cloud 2.24.0, if metadata device partitions are specified in the BareMetalHostProfile object as described in Configure Ceph disks in a host profile, the metadata device definition is an LVM path in metadataDevice of the KaaSCephCluster object. Therefore, automated LCM cleans up the logical volume without removing it, and the volume can be reused. For this reason, to reconfigure a partition of a Ceph OSD metadata device:
Remove the Ceph OSD from the Ceph cluster as described in Remove a Ceph OSD with a metadata device. Automated LCM cleans up the logical volume of the metadata device but keeps it in place.
Add the same Ceph OSD back as described in Add a Ceph OSD with a metadata device, reusing the same LVM path in metadataDevice.
Before MOSK 23.2, or if metadata device partitions are not specified in the BareMetalHostProfile object as described in Configure Ceph disks in a host profile, the most common definition of a metadata device is a full device name (by-path or by-id) in metadataDevice of the KaaSCephCluster object for a Ceph OSD. For example, metadataDevice: /dev/nvme0n1. In this case, to reconfigure a partition of a Ceph OSD metadata device:
Remove a Ceph OSD from the Ceph cluster as described in
Remove a Ceph OSD with a metadata device. Automated LCM will clean
up the data device and will remove the metadata device partition for the
required Ceph OSD.
Reconfigure the metadata device partition manually to use it during
addition of a new Ceph OSD.
Manual reconfiguration of a metadata device partition
Log in to the Ceph node running a Ceph OSD to reconfigure.
Find the required metadata device used for Ceph OSDs; it should have LVM partitions with the osd--db substring:
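For example, a sketch using lsblk. The output below is illustrative: the nvme0n1 device name is hypothetical, while the volume group name and size match the values referenced in the next steps:
lsblk -o NAME,SIZE,TYPE
# NAME                                                               SIZE TYPE
# nvme0n1                                                            1.8T disk
# └─ceph--7831901d--398e--415d--8941--e78486f3b019-osd--db--<uuid>    16G lvm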
Capture the volume group UUID and logical volume sizes. In the
example above, the volume group UUID is
ceph--7831901d--398e--415d--8941--e78486f3b019 and the size
is 16G.
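To confirm the volume group name without the doubled dashes, a vgs sketch:
vgs -o vg_name
# VG
# ceph-7831901d-398e-415d-8941-e78486f3b019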
Capture the volume group with the name that matches the prefix of
LVM partitions of the metadata device. In the example above, the
required volume group is
ceph-7831901d-398e-415d-8941-e78486f3b019.
Manually partition the metadata device for the new Ceph OSD by creating a new logical volume in the obtained volume group:
lvcreate -L <lvSize> -n <lvName> <vgName>
Substitute the following parameters:
<lvSize> with the previously obtained logical volume size.
In the example above, it is 16G.
<lvName> with a new logical volume name. For example,
meta_1.
<vgName> with the previously obtained volume group name.
In the example above, it is
ceph-7831901d-398e-415d-8941-e78486f3b019.
Note
Manually created partitions can be removed only manually, during a complete metadata disk removal, or during the Machine object removal or re-provisioning.
Add the same Ceph OSD but with a modified configuration and manually
created logical volume of the metadata device as described in
Add a Ceph OSD with a metadata device.
For example, instead of metadataDevice: /dev/bluedb/meta_1, define metadataDevice: /dev/ceph-7831901d-398e-415d-8941-e78486f3b019/meta_1, which was manually created in the previous step.
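For example, a sketch of the resulting storageDevices item, reusing the by-id symlink from the examples above:
storageDevices:
- config:
    deviceClass: hdd
    metadataDevice: /dev/ceph-7831901d-398e-415d-8941-e78486f3b019/meta_1
  fullPath: /dev/disk/by-id/scsi-0ATA_HGST_HUS724040AL_PN1334PEHN1VBC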