Add, remove, or reconfigure Ceph OSDs
Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom resource (CR), which is available since MOSK 25.2 to replace the deprecated KaaSCephCluster. For the equivalent procedure with the KaaSCephCluster CR, refer to the corresponding KaaSCephCluster-based section of this documentation.
Mirantis Ceph Controller simplifies Ceph cluster management by automating LCM operations. This section describes how to add, remove, or reconfigure Ceph OSDs.
Add a Ceph OSD to a MOSK cluster
Manually prepare the required machine devices with LVM2 on the existing node because BareMetalHostProfile does not support in-place changes.

To add a Ceph OSD to an existing or hot-plugged raw device

If you want to add a Ceph OSD on top of a raw device that already exists on a node or is hot-plugged, add the required device using the following guidelines (a verification sketch follows the list):
You can add a raw device to a node during node deployment.
If a node supports adding devices without node reboot, you can hot plug a raw device to a node.
If a node does not support adding devices without node reboot, you can hot plug a raw device during node shutdown. In this case, complete the following steps:
Enable maintenance mode on the MOSK cluster.
Turn off the required node.
Attach the required raw device to the node.
Turn on the required node.
Disable maintenance mode on the MOSK cluster.
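Before editing MiraCeph, you can optionally confirm that the operating system sees the new raw device. A minimal sketch, assuming SSH access to the target node; sdX is a placeholder device name:

    # On the target node: verify the disk is visible and carries no partitions or mounts
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
    # Optionally check for leftover filesystem or RAID signatures without modifying the disk
    sudo wipefs --no-act /dev/sdX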
Open the MiraCeph CR on a MOSK cluster for editing:

    kubectl -n ceph-lcm-mirantis edit miraceph
In one of the following sections, specify parameters for the Ceph OSD:

    nodes.<nodeName>.devices
    nodes.<nodeName>.deviceFilter
    nodes.<nodeName>.devicePathFilter

For the description of parameters, see Node parameters.
The example configuration of the nodes section with the new node:

    nodes:
    - name: kaas-node-5bgk6
      roles:
      - mon
      - mgr
      devices:
      - config: # existing item
          deviceClass: hdd
        fullPath: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
      - config: # new item
          deviceClass: hdd
        fullPath: /dev/disk/by-id/scsi-0ATA_HGST_HUS724040AL_PN1334PEHN1VBC
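Instead of listing every device explicitly, you can use the deviceFilter or devicePathFilter parameters mentioned above, which accept a regular expression matching several devices at once. A minimal sketch, assuming all matching disks on the node should become Ceph OSDs; the node name and the pattern are illustrative only:

    nodes:
    - name: kaas-node-5bgk6
      roles:
      - mon
      - mgr
      # hypothetical pattern matching /dev/sdb, /dev/sdc, and /dev/sdd on this node
      deviceFilter: "^sd[b-d]$"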
Warning

Mirantis highly recommends using the non-wwn by-id symlinks to specify storage devices in the devices list. For details, see Addressing storage devices since MOSK 25.2.
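To find the non-wwn by-id symlink of a disk, you can list the symlinks directly on the node. A sketch, assuming SSH access to the node; sdb is a placeholder device name:

    # List persistent by-id symlinks, filter out wwn-* entries, and match the target disk
    ls -l /dev/disk/by-id/ | grep -v wwn- | grep sdb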
Verify that the Ceph OSD on the specified node is successfully deployed. The fullClusterInfo section should not contain any issues:

    kubectl -n ceph-lcm-mirantis get mchealth -o yaml

For example:

    status:
      fullClusterInfo:
        daemonsStatus:
          ...
          osd:
            running: '3/3 running: 3 up, 3 in'
            status: Ok
Note

Since MOSK 23.2, cephDeviceMapping is removed because its large size can potentially exceed the Kubernetes 1.5 MB quota.

Verify the Ceph OSD on the MOSK cluster:

    kubectl -n rook-ceph get pod -l app=rook-ceph-osd -o wide | grep <nodeName>
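Optionally, confirm that the new Ceph OSD has joined the cluster and is up and in. A sketch, assuming the standard rook-ceph-tools toolbox deployment is available in the rook-ceph namespace:

    # Check the CRUSH tree and overall cluster health from the Rook toolbox
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph -s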
Remove a Ceph OSD from a MOSK cluster
Note

Ceph OSD removal is performed through a CephOsdRemoveRequest CR. For the workflow overview and the description of spec and phases, see High-level workflow of Ceph OSD or node removal.
Warning

When using the non-recommended Ceph pools replicated.size of less than 3, Ceph OSD removal cannot be performed. The minimal replica size equals a rounded-up half of the specified replicated.size. For example, if replicated.size is 2, the minimal replica size is 1, and if replicated.size is 3, the minimal replica size is 2. A replica size of 1 allows Ceph to have PGs with only one Ceph OSD in the acting state, which may cause a PG_TOO_DEGRADED health warning that blocks Ceph OSD removal. Mirantis recommends setting replicated.size to 3 for each Ceph pool.
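To verify the replication factor of the existing pools before planning a removal, you can query Ceph from the Rook toolbox. A sketch, assuming the standard rook-ceph-tools deployment; <poolName> is a placeholder:

    # Show the replicated size configured for each pool
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool ls detail
    # Or query a single pool
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd pool get <poolName> size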
Open the MiraCeph CR on a MOSK cluster for editing:

    kubectl -n ceph-lcm-mirantis edit miraceph
Remove the required Ceph OSD specification from the spec.nodes.<nodeName>.devices list.

The example configuration of the nodes section with the item to remove:

    nodes:
    - name: kaas-node-5bgk6
      roles:
      - mon
      - mgr
      devices:
      - config:
          deviceClass: hdd
        fullPath: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
      - config: # remove the entire item entry from the devices list
          deviceClass: hdd
        fullPath: /dev/disk/by-id/scsi-0ATA_HGST_HUS724040AL_PN1334PEHN1VBC
Create a YAML template for the CephOsdRemoveRequest CR. Select from the following options:

Remove Ceph OSD by device name, by-path symlink, or by-id symlink:

    apiVersion: lcm.mirantis.com/v1alpha1
    kind: CephOsdRemoveRequest
    metadata:
      name: remove-osd-<nodeName>-sdb
      namespace: ceph-lcm-mirantis
    spec:
      nodes:
        <nodeName>:
          cleanupByDevice:
          - name: sdb
Warning

Since MOSK 23.3, Mirantis does not recommend setting the device name or the device by-path symlink in the cleanupByDevice field as these identifiers are not persistent and can change at node boot. Remove Ceph OSDs with by-id symlinks specified in the path field or use cleanupByOsdId instead. For details, see Addressing storage devices since MOSK 25.2.

Note

If a device was physically removed from a node, cleanupByDevice is not supported. Therefore, use cleanupByOsdId instead. For details, see Remove a failed Ceph OSD by Ceph OSD ID.

If the devices item was specified with a by-path device path, specify the path parameter in the cleanupByDevice section instead of name.
Remove Ceph OSD by OSD ID:

    apiVersion: lcm.mirantis.com/v1alpha1
    kind: CephOsdRemoveRequest
    metadata:
      name: remove-osd-<nodeName>-sdb
      namespace: ceph-lcm-mirantis
    spec:
      nodes:
        <nodeName>:
          cleanupByOsdId:
          - 2
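If you only know the device but not the OSD ID, one way to map them is through the Ceph CLI in the Rook toolbox. A sketch, assuming the standard rook-ceph-tools deployment; the OSD ID 2 above is a placeholder:

    # List OSDs grouped by host to find the IDs hosted on the target node
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree
    # Show metadata of a specific OSD, including its backing device
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd metadata 2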
Apply the template on the MOSK cluster:
kubectl apply -f remove-osd-<nodeName>-sdb.yaml
Verify that the corresponding request has been created:
kubectl -n ceph-lcm-mirantis get cephosdremoverequest remove-osd-<nodeName>-sdb
Verify that the removeInfo section appeared in the CephOsdRemoveRequest CR status:

    kubectl -n ceph-lcm-mirantis get cephosdremoverequest remove-osd-<nodeName>-sdb -o yaml
Example of system response:

    status:
      removeInfo:
        cleanUpMap:
          kaas-node-d4aac64d-1721-446c-b7df-e351c3025591:
            osdMapping:
              "10":
                deviceMapping:
                  sdb:
                    path: "/dev/disk/by-path/pci-0000:00:1t.9"
                    partition: "/dev/ceph-b-vg_sdb/osd-block-b-lv_sdb"
                    type: "block"
                    class: "hdd"
                    zapDisk: true
Verify that the cleanUpMap section matches the required removal and wait for the ApproveWaiting phase to appear in status:

    kubectl -n ceph-lcm-mirantis get cephosdremoverequest remove-osd-<nodeName>-sdb -o yaml

Example of system response:

    status:
      phase: ApproveWaiting
Edit the CephOsdRemoveRequest CR and set the approve flag to true:

    kubectl -n ceph-lcm-mirantis edit cephosdremoverequest remove-osd-<nodeName>-sdb

For example:

    spec:
      approve: true
Review the following status fields of the Ceph LCM CR request processing:

    status.phase - current state of request processing
    status.messages - description of the current phase
    status.conditions - full history of request processing before the current phase
    status.removeInfo.issues and status.removeInfo.warnings - error and warning messages that occurred during request processing, if any
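For convenience, you can print only the fields you need instead of reading the full YAML, for example the current phase (a sketch, not part of the official procedure):

    # Print only the current phase of the removal request
    kubectl -n ceph-lcm-mirantis get cephosdremoverequest remove-osd-<nodeName>-sdb \
      -o jsonpath='{.status.phase}{"\n"}'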
Verify that the CephOsdRemoveRequest has been completed. For example:

    status:
      phase: Completed # or CompletedWithWarnings if there are non-critical issues
Remove the device cleanup jobs:
kubectl delete jobs -n ceph-lcm-mirantis -l app=miraceph-cleanup-disks
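As a final check, you may confirm that the removed Ceph OSD is gone from both Kubernetes and the Ceph cluster. A sketch, assuming the standard rook-ceph-tools deployment:

    # No rook-ceph-osd pod should remain for the removed OSD on the node
    kubectl -n rook-ceph get pod -l app=rook-ceph-osd -o wide | grep <nodeName>
    # The removed OSD ID should no longer appear in the CRUSH tree
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree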
Reconfigure a Ceph OSD on a MOSK cluster
There is no hot reconfiguration procedure for existing Ceph OSDs. To reconfigure an existing Ceph node, follow the steps below:
Remove a Ceph OSD from the Ceph cluster as described in Remove a Ceph OSD from a MOSK cluster.
Add the same Ceph OSD but with a modified configuration as described in Add a Ceph OSD to a MOSK cluster.