Add, remove, or reconfigure Ceph nodes
Warning
This procedure is valid for MOSK clusters that use the deprecated
KaaSCephCluster custom resource (CR) instead of the MiraCeph CR that is
available since MOSK 25.2 as a new Ceph configuration entrypoint. For the
equivalent procedure with the MiraCeph CR, refer to the following section:
Mirantis Ceph Controller simplifies Ceph cluster management by automating LCM operations. This section describes how to add, remove, or reconfigure Ceph nodes.
Note
When adding a Ceph node with the Ceph Monitor role, if any issues occur with
the Ceph Monitor, rook-ceph removes it and adds a new Ceph Monitor instead,
named using the next alphabetic character in order. Therefore, the Ceph Monitor
names may not follow the alphabetical order. For example, a, b, d,
instead of a, b, c.
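To see the Ceph Monitor names currently in use, you can list the monitor pods. The following is a minimal sketch that assumes the Rook default rook-ceph namespace and the standard app=rook-ceph-mon pod label:

kubectl -n rook-ceph get pods -l app=rook-ceph-mon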
Add Ceph nodes to a MOSK cluster
Prepare a new machine for the required MOSK cluster as described in Add a machine. During machine preparation, update the settings of the related bare metal host profile for the Ceph node being added with the required machine devices as described in Create a custom bare metal host profile.
Open the KaaSCephCluster CR of a MOSK cluster for editing:

kubectl edit kaascephcluster -n <moskClusterProjectName>
Substitute <moskClusterProjectName> with the corresponding value.

In the nodes section, specify the parameters for a Ceph node as required. For the parameters description, see Node parameters.

The example configuration of the nodes section with the new node:

Example configuration with storageDevices:

nodes:
  kaas-node-5bgk6:
    roles:
    - mon
    - mgr
    storageDevices:
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
Example configuration with storageDeviceFilter:

kaas-node-5bgk6:
  roles:
  - mon
  - mgr
  storageDeviceFilter:
    config:
      deviceClass: hdd
    filterByPath: "^/dev/disk/by-id/scsi-SATA_HGST_.+$"
Warning
Mirantis highly recommends using the non-wwn by-id symlinks to specify storage devices in the storageDevices list. For details, see Addressing storage devices using KaaSCephCluster (deprecated).
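As an illustration only, the following shell sketch lists the non-wwn by-id symlinks for a device, assuming the device is /dev/sdb on the target node:

# List by-id symlinks that point to /dev/sdb, excluding wwn-based names
ls -l /dev/disk/by-id/ | grep -v wwn- | grep sdb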
Note

To use a new Ceph node for a Ceph Monitor or Ceph Manager deployment, also specify the roles parameter.

Reducing the number of Ceph Monitors is not supported and causes removal of Ceph Monitor daemons from random nodes.
Removal of the mgr role in the nodes section of the KaaSCephCluster CR does not remove Ceph Managers. To remove a Ceph Manager from a node, remove it from the nodes spec and manually delete the mgr pod in the Rook namespace, as shown in the sketch below.
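A minimal sketch of that manual deletion, assuming the Rook default rook-ceph namespace, the standard app=rook-ceph-mgr pod label, and a hypothetical node name kaas-node-5bgk6:

# Delete the mgr pod scheduled on the node after removing the mgr role from the nodes spec
kubectl -n rook-ceph delete pod -l app=rook-ceph-mgr --field-selector spec.nodeName=kaas-node-5bgk6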
Verify that all new Ceph daemons for the specified node have been successfully deployed in the Ceph cluster. The fullClusterInfo section should not contain any issues.

kubectl -n <moskClusterProjectName> get kaascephcluster -o yaml
Example of system response
status:
  fullClusterInfo:
    daemonsStatus:
      mgr:
        running: a is active mgr
        status: Ok
      mon:
        running: '3/3 mons running: [a b c] in quorum'
        status: Ok
      osd:
        running: '3/3 running: 3 up, 3 in'
        status: Ok
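Optionally, cross-check the cluster health with the Ceph CLI. This is a sketch that assumes the Rook toolbox deployment rook-ceph-tools is available in the rook-ceph namespace:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status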
Remove a Ceph node from a MOSK cluster
Note
Ceph node removal requires a KaaSCephOperationRequest CR. For the workflow overview and the description of its spec and phases, see High-level workflow of Ceph OSD or node removal.
Note
To remove a Ceph node with a mon role, first move the Ceph
Monitor to another node and remove the mon role from the Ceph node as
described in Move a Ceph Monitor daemon to another node.
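If you are unsure which nodes currently host Ceph Monitors, you can list the monitor pods with their node placement. This sketch assumes the Rook default rook-ceph namespace and the standard app=rook-ceph-mon pod label:

kubectl -n rook-ceph get pods -l app=rook-ceph-mon -o wide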
Open the KaaSCephCluster CR of a MOSK cluster for editing:

kubectl edit kaascephcluster -n <moskClusterProjectName>
Substitute <moskClusterProjectName> with the corresponding value.

In the spec.cephClusterSpec.nodes.<machineName> section, remove the required Ceph OSD node specification.

Caution
If any other roles are present on the node, remove only the Ceph OSD specification for the node.
For example:
spec:
  cephClusterSpec:
    nodes:
      worker-5: # remove the entire entry for the required node
        storageDevices: {...}
        roles: [...]
Create a YAML template for the KaaSCephOperationRequest CR. For example:

apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephOperationRequest
metadata:
  name: remove-osd-worker-5
  namespace: <moskClusterProjectName>
spec:
  osdRemove:
    nodes:
      worker-5:
        completeCleanUp: true
  kaasCephCluster:
    name: <kaasCephClusterName>
    namespace: <moskClusterProjectName>
Substitute <moskClusterProjectName> with the corresponding cluster namespace and <kaasCephClusterName> with the corresponding KaaSCephCluster name.

Apply the template on the management cluster in the corresponding namespace:
kubectl apply -f remove-osd-worker-5.yaml
Verify that the corresponding request has been created:
kubectl get kaascephoperationrequest remove-osd-worker-5 -n <moskClusterProjectName>
Verify that the removeInfo section appeared in the KaaSCephOperationRequest CR status:

kubectl -n <moskClusterProjectName> get kaascephoperationrequest remove-osd-worker-5 -o yaml
Example of system response
status:
  childNodesMapping:
    kaas-node-d4aac64d-1721-446c-b7df-e351c3025591: worker-5
  osdRemoveStatus:
    removeInfo:
      cleanUpMap:
        kaas-node-d4aac64d-1721-446c-b7df-e351c3025591:
          osdMapping:
            "10":
              deviceMapping:
                sdb:
                  path: "/dev/disk/by-path/pci-0000:00:1t.9"
                  partition: "/dev/ceph-b-vg_sdb/osd-block-b-lv_sdb"
                  type: "block"
                  class: "hdd"
                  zapDisk: true
            "16":
              deviceMapping:
                sdc:
                  path: "/dev/disk/by-path/pci-0000:00:1t.10"
                  partition: "/dev/ceph-b-vg_sdb/osd-block-b-lv_sdc"
                  type: "block"
                  class: "hdd"
                  zapDisk: true
Verify that the cleanUpMap section matches the required removal and wait for the ApproveWaiting phase to appear in status:

kubectl -n <moskClusterProjectName> get kaascephoperationrequest remove-osd-worker-5 -o yaml
Example of system response:
status:
  phase: ApproveWaiting
Edit the KaaSCephOperationRequest CR and set the approve flag to true:

kubectl -n <moskClusterProjectName> edit kaascephoperationrequest remove-osd-worker-5
For example:
spec:
  osdRemove:
    approve: true
Review the processing status of the KaaSCephOperationRequest resource. The valuable parameters are as follows:

status.phase - the current state of request processing

status.messages - the description of the current phase

status.conditions - the full history of request processing before the current phase

status.removeInfo.issues and status.removeInfo.warnings - error and warning messages that occurred during request processing
Verify that the KaaSCephOperationRequest has been completed. For example:

status:
  phase: Completed # or CompletedWithWarnings if there are non-critical issues
Remove the device cleanup jobs:
kubectl delete jobs -n ceph-lcm-mirantis -l app=miraceph-cleanup-disks
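As an optional sanity check, verify that no Ceph OSD pods remain scheduled on the removed node. The sketch below assumes the Rook default rook-ceph namespace, the standard app=rook-ceph-osd pod label, and the worker-5 node name used in the examples above; the command should return no output:

kubectl -n rook-ceph get pods -l app=rook-ceph-osd -o wide | grep worker-5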
Reconfigure a Ceph node on a MOSK cluster
There is no hot reconfiguration procedure for existing Ceph OSDs and Ceph Monitors. To reconfigure an existing Ceph node, follow the steps below:
Remove the Ceph node from the Ceph cluster as described in Remove a Ceph node from a MOSK cluster.
Add the same Ceph node but with a modified configuration as described in Add Ceph nodes to a MOSK cluster.
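For illustration, a hypothetical nodes entry for re-adding the same worker-5 node with a modified configuration, for example a different device class; the device path below is a placeholder, not a value from a real cluster:

nodes:
  worker-5:
    roles:
    - mon
    storageDevices:
    - config:
        deviceClass: ssd   # changed from hdd during reconfiguration
      fullPath: /dev/disk/by-id/<device-by-id-symlink>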