Migrate a Ceph Monitor before machine replacement

Available since 2.17.0

Note

The feature is available as Technology Preview for non-MOSK-based clusters.

This document describes how to migrate a Ceph Monitor to another machine on Equinix Metal and bare metal-based clusters before node replacement as described in Delete a machine from a cluster.

Warning

  • Remove the Ceph Monitor role from the affected machine before removing the machine.

  • Make sure that the Ceph cluster always has an odd number of Ceph Monitors.

In the default Equinix Metal provider configuration, all manager machines have the Ceph Manager/Monitor and Storage roles, while all worker machines have only the Storage role. The Ceph Monitor migration procedure assumes that you override the default configuration and temporarily move the Ceph Manager/Monitor to a worker machine. After the node replacement, we recommend migrating the Ceph Manager/Monitor back to the new manager machine.

To migrate a Ceph Monitor to another machine:

  1. For the Equinix Metal provider, enable the non-default manual Ceph roles configuration. Select from the following options:

    • Using the Container Cloud web UI, enable the Configure cluster > General Settings > Manual Ceph Configuration option. For details, see Change a cluster configuration.

    • Using the Container Cloud API, update the Cluster object:

      spec:
        providerSpec:
          value:
            ceph:
              manualCephConfiguration: true
      

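      The Cluster object can be edited, for example, with kubectl; the project namespace and cluster name below are placeholders:

        kubectl -n <project-name> edit cluster <cluster-name>
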
    Note

    Skip this step if you have enabled this option during the cluster deployment.

    Caution

    Switching back from the manual to the automatic configuration of Ceph roles is forbidden. Therefore, moving forward, configure Ceph roles for new machines manually through the machine creation and configuration dialogs in the Container Cloud web UI.

  2. For managed clusters, move the Ceph Manager/Monitor daemon from the affected machine to one of the worker machines as described in Move a Ceph Monitor daemon to another node.
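
    With the manual Ceph roles configuration enabled, this move corresponds to editing the nodes section of the KaaSCephCluster object: remove the mon and mgr roles from the affected machine entry and add them to a worker machine. The following sketch uses hypothetical machine names manager-2 and worker-0 and an example storage device:

      spec:
        cephClusterSpec:
          nodes:
            # Affected manager machine: mon/mgr roles removed, storage kept
            manager-2:
              storageDevices:
              - name: sdb
                config:
                  deviceClass: ssd
            # Worker machine that temporarily hosts the Ceph Monitor/Manager
            worker-0:
              roles:
              - mon
              - mgr
              storageDevices:
              - name: sdb
                config:
                  deviceClass: ssd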

  3. Delete the affected machine as described in Delete a cluster machine.

  4. Add a new manager machine without the Ceph Monitor and Manager roles:

    Warning

    Adding a new machine with the Ceph Monitor and Manager roles breaks the odd-number quorum of Ceph Monitors.
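
    In terms of the KaaSCephCluster object, the new manager machine initially gets only a storage configuration with an empty roles list; the machine name manager-3 and the device are hypothetical:

      spec:
        cephClusterSpec:
          nodes:
            # New manager machine: storage only, no mon or mgr roles yet
            manager-3:
              roles: []
              storageDevices:
              - name: sdb
                config:
                  deviceClass: ssd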

  5. Select from the following options:

    • For managed clusters, move the previously migrated Ceph Manager/Monitor daemon to the new manager machine as described in Move a Ceph Monitor daemon to another node.

    • For management or regional clusters before Container Cloud 2.20.0:

      Ceph changes in Container Cloud 2.20.0

      • Since Container Cloud 2.20.0, the Ceph cluster is not deployed on management and regional clusters to reduce resource consumption.

      • The Ceph cluster is automatically removed from existing management and regional clusters during the Container Cloud update to 2.20.0.

      • Managed clusters continue using Ceph as a distributed storage system.
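
    For managed clusters, the final move back can again be sketched as a KaaSCephCluster edit that reverses the temporary role assignment; worker-0 and manager-3 are hypothetical machine names:

      spec:
        cephClusterSpec:
          nodes:
            # Worker machine: mon/mgr roles removed after the migration back
            worker-0:
              storageDevices:
              - name: sdb
                config:
                  deviceClass: ssd
            # New manager machine now hosts the Ceph Monitor/Manager
            manager-3:
              roles:
              - mon
              - mgr
              storageDevices:
              - name: sdb
                config:
                  deviceClass: ssd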