Add, remove, or reconfigure Ceph nodes
Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom resource (CR), which is available since MOSK 25.2 to replace the deprecated KaaSCephCluster CR. For the equivalent procedure with the KaaSCephCluster CR, refer to the following section:
Mirantis Ceph Controller simplifies Ceph cluster management by automating LCM operations. This section describes how to add, remove, or reconfigure Ceph nodes.
Note

When adding a Ceph node with the Ceph Monitor role, if any issues occur with the Ceph Monitor, rook-ceph removes it and adds a new Ceph Monitor instead, named using the next alphabetic character in order. Therefore, the Ceph Monitor names may not follow the alphabetical order. For example, a, b, d, instead of a, b, c.
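To check the current Ceph Monitor names, you can list the monitor pods on the MOSK cluster. A minimal sketch, assuming the default rook-ceph namespace and the standard Rook app=rook-ceph-mon pod label:

  # List Ceph Monitor pods; the letter suffix in the pod name is the monitor name
  kubectl -n rook-ceph get pods -l app=rook-ceph-mon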
Add Ceph nodes on a MOSK cluster
Prepare a new machine for the required MOSK cluster as described in Add a machine. During machine preparation, update the settings of the related bare metal host profile for the Ceph node being added with the required machine devices as described in Create a custom bare metal host profile.
Open the MiraCeph CR on the MOSK cluster for editing:

  kubectl -n ceph-lcm-mirantis edit miraceph
In the nodes section, specify the parameters for a Ceph node as required. For the parameters description, see Node parameters.

The example configuration of the nodes section with the new node:

  nodes:
  - name: kaas-node-5bgk6
    roles:
    - mon
    - mgr
    devices:
    - config:
        deviceClass: hdd
      fullPath: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
You can also add a new node with device filters. For example:

  nodes:
  - name: kaas-node-5bgk6
    roles:
    - mon
    - mgr
    config:
      deviceClass: hdd
    devicePathFilter: "^/dev/disk/by-id/scsi-SATA_HGST+*"
Warning

Mirantis highly recommends using the non-wwn by-id symlinks to specify storage devices in the devices list. For details, see Addressing storage devices since MOSK 25.2.
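To pick a suitable non-wwn by-id symlink, you can inspect the available symlinks directly on the storage node. A minimal sketch, assuming shell access to the host:

  # List by-id symlinks and the devices they point to, excluding wwn-based ones
  ls -l /dev/disk/by-id/ | grep -v wwn-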
Note

To use a new Ceph node for a Ceph Monitor or Ceph Manager deployment, also specify the roles parameter.

Reducing the number of Ceph Monitors is not supported and causes removal of the Ceph Monitor daemons from random nodes.

Removal of the mgr role in the nodes section of the MiraCeph CR does not remove Ceph Managers. To remove a Ceph Manager from a node, remove it from the nodes spec and manually delete the mgr pod in the Rook namespace.
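As an illustration of the last point, after removing the mgr role from the nodes spec, you can delete the Ceph Manager pod manually. A minimal sketch, assuming the default rook-ceph namespace and the standard Rook app=rook-ceph-mgr pod label; the pod name is a placeholder:

  # Find the Ceph Manager pod running on the node in question
  kubectl -n rook-ceph get pods -l app=rook-ceph-mgr -o wide
  # Delete the mgr pod for the node from which the role was removed
  kubectl -n rook-ceph delete pod <rook-ceph-mgr-pod-name>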
Verify that all new Ceph daemons for the specified node have been successfully deployed in the Ceph cluster. The fullClusterInfo section should not contain any issues.

  kubectl -n ceph-lcm-mirantis get mchealth -o yaml
Example of system response:

  status:
    fullClusterInfo:
      daemonsStatus:
        mgr:
          running: a is active mgr
          status: Ok
        mon:
          running: '3/3 mons running: [a b c] in quorum'
          status: Ok
        osd:
          running: '3/3 running: 3 up, 3 in'
          status: Ok
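You can also cross-check the health reported by Ceph itself. A minimal sketch, assuming the Rook toolbox is available as the rook-ceph-tools deployment in the rook-ceph namespace:

  # Query the Ceph cluster status from the toolbox pod
  kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph -s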
Remove a Ceph node from a MOSK cluster
Note

Ceph node removal presupposes usage of a CephOsdRemoveRequest CR. For the workflow overview, spec, and phases description, see High-level workflow of Ceph OSD or node removal.
Note

To remove a Ceph node with a mon role, first move the Ceph Monitor to another node and remove the mon role from the Ceph node as described in Move a Ceph Monitor daemon to another node.
Open the MiraCeph CR on a MOSK cluster for editing:

  kubectl -n ceph-lcm-mirantis edit miraceph
In the nodes section, remove the required Ceph node specification. For example:

  spec:
    nodes:
    - name: kaas-node-5bgk6  # remove the entire entry for the required node
      devices: {...}
      roles: [...]
Create a YAML template for the CephOsdRemoveRequest CR. For example:

  apiVersion: lcm.mirantis.com/v1alpha1
  kind: CephOsdRemoveRequest
  metadata:
    name: remove-osd-worker-5
    namespace: ceph-lcm-mirantis
  spec:
    nodes:
      kaas-node-5bgk6:
        completeCleanUp: true
Apply the template on the MOSK cluster:

  kubectl apply -f remove-osd-worker-5.yaml
Verify that the corresponding request has been created:

  kubectl -n ceph-lcm-mirantis get cephosdremoverequest remove-osd-worker-5
Verify that the removeInfo section appeared in the CephOsdRemoveRequest CR status:

  kubectl -n ceph-lcm-mirantis get cephosdremoverequest remove-osd-worker-5 -o yaml
Example of system response:

  status:
    removeInfo:
      cleanUpMap:
        kaas-node-5bgk6:
          osdMapping:
            "10":
              deviceMapping:
                sdb:
                  path: "/dev/disk/by-path/pci-0000:00:1t.9"
                  partition: "/dev/ceph-b-vg_sdb/osd-block-b-lv_sdb"
                  type: "block"
                  class: "hdd"
                  zapDisk: true
            "16":
              deviceMapping:
                sdc:
                  path: "/dev/disk/by-path/pci-0000:00:1t.10"
                  partition: "/dev/ceph-b-vg_sdb/osd-block-b-lv_sdc"
                  type: "block"
                  class: "hdd"
                  zapDisk: true
Verify that the cleanUpMap section matches the required removal and wait for the ApproveWaiting phase to appear in status:

  kubectl -n ceph-lcm-mirantis get cephosdremoverequest remove-osd-worker-5 -o yaml
Example of system response:

  status:
    phase: ApproveWaiting
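Instead of re-reading the full YAML while waiting, you can print only the phase field. A minimal sketch using standard kubectl JSONPath output:

  # Print the current phase of the removal request
  kubectl -n ceph-lcm-mirantis get cephosdremoverequest remove-osd-worker-5 \
    -o jsonpath='{.status.phase}{"\n"}'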
Edit the CephOsdRemoveRequest CR and set the approve flag to true:

  kubectl -n ceph-lcm-mirantis edit cephosdremoverequest remove-osd-worker-5
For example:

  spec:
    approve: true
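As a non-interactive alternative to kubectl edit, a merge patch sets the same flag. This is an equivalent sketch, not a step from the original procedure:

  # Set spec.approve to true without opening an editor
  kubectl -n ceph-lcm-mirantis patch cephosdremoverequest remove-osd-worker-5 \
    --type merge -p '{"spec":{"approve":true}}'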
Review the processing status of the CephOsdRemoveRequest resource. The valuable parameters are as follows:

- status.phase - the current state of request processing
- status.messages - the description of the current phase
- status.conditions - the full history of request processing before the current phase
- status.removeInfo.issues and status.removeInfo.warnings - error and warning messages that occurred during request processing
Verify that the CephOsdRemoveRequest has been completed. For example:

  status:
    phase: Completed # or CompletedWithWarnings if there are non-critical issues
Remove the device cleanup jobs:

  kubectl delete jobs -n ceph-lcm-mirantis -l app=miraceph-cleanup-disks
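To confirm the cleanup jobs are gone, list jobs with the same label used in the delete command above; the command should return no resources:

  kubectl get jobs -n ceph-lcm-mirantis -l app=miraceph-cleanup-disks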
Reconfigure a Ceph node on a MOSK cluster
There is no hot reconfiguration procedure for existing Ceph OSDs and Ceph Monitors. To reconfigure an existing Ceph node, follow the steps below:
Remove the Ceph node from the Ceph cluster as described in Remove a Ceph node from a MOSK cluster.
Add the same Ceph node but with a modified configuration as described in Add Ceph nodes on a MOSK cluster.