This procedure is valid for MOSK clusters that use the MiraCeph custom
resource (CR), which is available since MOSK 25.2 and replaces the deprecated
KaaSCephCluster CR. For the equivalent procedure that uses the
KaaSCephCluster CR, refer to the corresponding section.
This document describes how to migrate a Ceph Monitor daemon from one node to
another without changing the overall number of Ceph Monitors in the cluster.
In the Ceph Controller concept, migration of a Ceph Monitor means manually
removing it from one node and adding it to another.
Consider the following example placement of Ceph Monitors in the
nodes spec of the MiraCeph CR:
nodes:
  node-1:
    roles:
    - mon
    - mgr
  node-2:
    roles:
    - mgr
Using the example above, if you want to move the Ceph Monitor from node-1
to node-2 without changing the number of Ceph Monitors, the roles lists
in the nodes spec must look as follows:
nodes:
  node-1:
    roles:
    - mgr
  node-2:
    roles:
    - mgr
    - mon
However, due to a Rook limitation related to the Kubernetes architecture, once
you move the Ceph Monitor through the MiraCeph CR, the changes do not
apply automatically. This is caused by the following Rook behavior:
Rook creates Ceph Monitor resources as deployments with a nodeSelector,
which binds each Ceph Monitor pod to the requested node.
Rook does not recreate Ceph Monitors with the new node placement while the
current mon quorum works.
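You can observe this binding directly. The following is a minimal check,
assuming the standard Rook namespace rook-ceph and the conventional
rook-ceph-mon-a deployment name used by Rook:

# List the Ceph Monitor deployments.
kubectl -n rook-ceph get deploy -l app=rook-ceph-mon
# Show the nodeSelector that pins one Ceph Monitor pod to its node.
kubectl -n rook-ceph get deploy rook-ceph-mon-a \
  -o jsonpath='{.spec.template.spec.nodeSelector}'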
Therefore, to move a Ceph Monitor to another node, you must also manually apply
the new Ceph Monitor placement to the Ceph cluster as described below.
To move a Ceph Monitor to another node:
Open the MiraCeph CR on a MOSK cluster:
kubectl -n ceph-lcm-mirantis edit miraceph
In the nodes spec of the MiraCeph CR, change the mon
roles placement without changing the total number of mon roles. For
details, see the example above. Note the nodes from which the mon roles
have been removed and save the name values of those nodes.
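To identify which node currently hosts each Ceph Monitor, you can list the
Ceph Monitor pods together with the nodes they run on. A quick check,
assuming the standard Rook namespace rook-ceph:

# List Ceph Monitor pods with their node placement.
kubectl -n rook-ceph get pods -l app=rook-ceph-mon -o wide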
If you perform a MOSK cluster update, perform the following additional
steps:
Verify that the following conditions are met before proceeding to the
next step:
At least two Ceph Monitors are running and available so that the
Ceph cluster remains accessible during the Ceph Monitor migration.
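For example, you can query the Ceph Monitor quorum from the Ceph tools pod.
A minimal sketch, assuming the standard rook-ceph-tools deployment in the
rook-ceph namespace:

# Show the number of Ceph Monitors and the current quorum members.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph mon stat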
Once done, Rook removes the obsolete Ceph Monitor from the node and creates
a new one on the specified node with a new letter. For example, if the a,
b, and c Ceph Monitors were in quorum and mon-c was obsolete, Rook
removes mon-c and creates mon-d. In this case, the new quorum includes
the a, b, and d Ceph Monitors.
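To verify that the migration has completed, you can check the new monitor map.
A sketch, again assuming the standard rook-ceph-tools deployment in the
rook-ceph namespace:

# Confirm the new set of Ceph Monitors, for example a, b, and d.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph mon dump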