Move a Ceph Monitor to another node

This document describes how to migrate a Ceph Monitor daemon from one node to another without changing the overall number of Ceph Monitors in the cluster. In the Ceph controller concept, migrating a Ceph Monitor means manually removing it from one node and adding it to another.

Consider the following example placement scheme of Ceph Monitors in the nodes spec of the KaaSCephCluster CR:

nodes:
  node-1:
    roles:
    - mon
    - mgr
  node-2:
    roles:
    - mgr

Using the example above, to move the Ceph Monitor from node-1 to node-2 without changing the number of Ceph Monitors, change the roles section of the nodes spec as follows:

nodes:
  node-1:
    roles:
    - mgr
  node-2:
    roles:
    - mgr
    - mon

However, due to a Rook limitation related to the Kubernetes architecture, once you move the Ceph Monitor through the KaaSCephCluster CR, the changes do not apply automatically. This is caused by the following Rook behavior:

  • Rook creates Ceph Monitor resources as deployments with nodeSelector, which binds each Ceph Monitor pod to the requested node (see the inspection example below).

  • Rook does not recreate Ceph Monitors with the new node placement while the current mon quorum works.

Therefore, to move a Ceph Monitor to another node, you must also manually apply the new Ceph Monitor placement to the Ceph cluster as described below.
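
To see how this binding looks in the cluster, you can inspect the nodeSelector of the existing rook-ceph-mon deployments. The following command is a sketch that assumes the mon deployments carry the app=rook-ceph-mon label, as their pods do:

kubectl --kubeconfig <kubeconfig> -n rook-ceph get deploy -l app=rook-ceph-mon -o jsonpath='{range .items[*]}{.metadata.name}{" -> "}{.spec.template.spec.nodeSelector}{"\n"}{end}'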

To move a Ceph Monitor to another node:

  1. Log in to a local machine running Ubuntu 18.04 where kubectl is installed.

  2. Obtain and export kubeconfig of the management cluster as described in Connect to a Mirantis Container Cloud cluster.

  3. Open the KaaSCephCluster CR of a managed cluster:

    kubectl edit kaascephcluster -n <managedClusterProjectName>
    

    Substitute <managedClusterProjectName> with the corresponding value.

  4. In the nodes spec of the KaaSCephCluster CR, change the mon roles placement without changing the total number of mon roles. For details, see the example above. Note the nodes from which the mon roles have been removed.
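
    For reference, the nodes spec of the KaaSCephCluster CR typically resides under spec.cephClusterSpec. The following snippet is a sketch of the resulting placement from the example above, assuming this layout:

    spec:
      cephClusterSpec:
        nodes:
          node-1:
            roles:
            - mgr
          node-2:
            roles:
            - mgr
            - mon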

  5. Wait until the corresponding MiraCeph resource is updated with the new nodes spec:

    kubectl --kubeconfig <kubeconfig> -n ceph-lcm-mirantis get miraceph -o yaml
    

    Substitute <kubeconfig> with the Container Cloud cluster kubeconfig that hosts the required Ceph cluster.
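
    To narrow the output to the node roles only, you can use a jsonpath filter. This is a sketch that assumes the node placement is exposed under spec.nodes of the MiraCeph resource:

    kubectl --kubeconfig <kubeconfig> -n ceph-lcm-mirantis get miraceph -o jsonpath='{.items[*].spec.nodes}'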

  6. In the MiraCeph resource, determine which node has been changed in the nodes spec and save the name of the node from which the mon role has been removed for further usage. To match machine names with their corresponding node names, run:

    kubectl -n <managedClusterProjectName> get machine -o jsonpath='{range .items[*]}{.metadata.name}{" kaas-node-"}{.metadata.annotations.kaas\.mirantis\.com\/uid}{"\n"}{end}'
    

    Substitute <managedClusterProjectName> with the corresponding value.
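
    The command prints one machine per line followed by the corresponding node name. Example output with purely illustrative values:

    example-machine-1 kaas-node-11111111-2222-3333-4444-555555555555
    example-machine-2 kaas-node-66666666-7777-8888-9999-000000000000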

  7. Remove the rook-ceph-mon pod placed on the obsolete node. First, obtain the pod name using the node name from the previous step:

    kubectl --kubeconfig <kubeconfig> -n rook-ceph get pod -l app=rook-ceph-mon -o jsonpath='{.items[?(@.spec.nodeName == "<nodeName>")].metadata.name}'
    

    Substitute <nodeName> with the name of the node where the mon role has been removed.
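
    Once the pod name is returned, delete the pod. The following is a sketch in which <monPodName> stands for the name obtained by the previous command:

    kubectl --kubeconfig <kubeconfig> -n rook-ceph delete pod <monPodName>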

  8. Wait for up to 10 minutes until rook-ceph-operator performs a failover of the Pending mon pod. Inspect the operator logs during the failover process:

    kubectl --kubeconfig <kubeconfig> -n rook-ceph logs -l app=rook-ceph-operator -f
    

    Example logs extract:

    2021-03-15 17:48:23.471978 W | op-mon: mon "a" not found in quorum, waiting for timeout (554 seconds left) before failover
    

Once done, Rook removes the obsolete Ceph Monitor from the node and creates a new one on the specified node with a new letter. For example, if the a, b, and c Ceph Monitors were in quorum and mon-c was obsolete, Rook will remove mon-c and create mon-d. In this case, the new quorum will include the a, b, and d Ceph Monitors.
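
To verify the resulting quorum, you can query the Ceph Monitor status. The following command is a sketch that assumes the standard Rook toolbox deployment (rook-ceph-tools) is available in the rook-ceph namespace:

kubectl --kubeconfig <kubeconfig> -n rook-ceph exec deploy/rook-ceph-tools -- ceph mon stat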