This procedure is valid for MOSK clusters that use the deprecated
KaaSCephCluster custom resource (CR) instead of the MiraCeph CR, which is
available since MOSK 25.2 as the new Ceph configuration entrypoint. For the
equivalent procedure that uses the MiraCeph CR, refer to the corresponding section of this guide.
Mirantis Ceph Controller simplifies Ceph cluster management by automating
LCM operations. This section describes how to add, remove, or reconfigure Ceph
nodes.
Note
When adding a Ceph node with the Ceph Monitor role, if any issue occurs with
the Ceph Monitor, rook-ceph removes it and adds a new Ceph Monitor instead,
named using the next alphabetic character. Therefore, Ceph Monitor names may
not follow alphabetical order. For example, a, b, d, instead of a, b, c.
Prepare a new machine for the required managed cluster as described in
Add a machine. During machine preparation, update the settings of the bare
metal host profile related to the Ceph node being replaced so that it includes
the required machine devices, as described in Create a custom bare metal host profile.
Open the KaaSCephCluster CR of a managed cluster for editing:
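For example, assuming kubectl access to the management cluster, with <managedClusterProjectName> as a placeholder for the project where the managed cluster resides:

kubectl -n <managedClusterProjectName> edit kaascephcluster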
Since MOSK 23.3, Mirantis highly recommends
using the non-wwn by-id symlinks to specify storage devices in the
storageDevices list. For details, see Addressing storage devices prior to MOSK 25.2.
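A minimal sketch of a nodes entry under spec.cephClusterSpec, assuming a hypothetical machine name and an illustrative by-id symlink; fullPath points the storage device to its non-wwn by-id symlink:

nodes:
  # hypothetical machine name
  worker-storage-0:
    roles:
    - mon
    - mgr
    storageDevices:
    # illustrative by-id symlink; use the actual symlink of your device
    - fullPath: /dev/disk/by-id/scsi-0ATA_EXAMPLE_DISK_SERIAL
      config:
        deviceClass: hdd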
Note
To use a new Ceph node for a Ceph Monitor or Ceph Manager deployment,
also specify the roles parameter.
Reducing the number of Ceph Monitors is not supported and causes removal
of Ceph Monitor daemons from random nodes.
Removal of the mgr role in the nodes section of the
KaaSCephCluster CR does not remove Ceph Managers. To remove a Ceph
Manager from a node, remove it from the nodes spec and manually
delete the mgr pod in the Rook namespace, as shown in the example after this note.
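For example, a sketch of the manual mgr pod cleanup, assuming the default rook-ceph namespace and the standard Rook app=rook-ceph-mgr pod label; the pod name is a placeholder:

kubectl -n rook-ceph get pods -l app=rook-ceph-mgr
kubectl -n rook-ceph delete pod <rook-ceph-mgr-pod-name>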
Verify that all new Ceph daemons for the specified node have been
successfully deployed in the Ceph cluster. The fullClusterInfo section of the
KaaSCephCluster status should not contain any issues.
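For example, to inspect the status:

kubectl -n <managedClusterProjectName> get kaascephcluster -o yaml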
status:
  fullClusterInfo:
    daemonsStatus:
      mgr:
        running: a is active mgr
        status: Ok
      mon:
        running: '3/3 mons running: [a b c] in quorum'
        status: Ok
      osd:
        running: '3/3 running: 3 up, 3 in'
        status: Ok
To remove a Ceph node with a mon role, first move the Ceph
Monitor to another node and remove the mon role from the Ceph node as
described in Move a Ceph Monitor daemon to another node.
Open the KaaSCephCluster CR of a managed cluster for editing:
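As in the addition procedure above, for example:

kubectl -n <managedClusterProjectName> edit kaascephcluster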