Troubleshoot Ceph based on MiraCeph (current)¶
Warning
This procedure is valid for MOSK clusters that use the MiraCeph custom
resource (CR), which is available since MOSK 25.2 and replaces the unsupported
KaaSCephCluster resource. MiraCeph will be automatically migrated
to CephDeployment in MOSK 26.1. For details, see Deprecation Notes:
KaaSCephCluster API on management clusters.
For the equivalent procedure based on the unsupported KaaSCephCluster CR,
refer to the following section:
This section provides solutions to issues that may occur during Ceph usage.
- Ceph disaster recovery
- Ceph Monitors recovery
- CephOsdRemoveRequest failure with a timeout during rebalance
- Ceph Monitors store.db size rapidly growing
- Replaced Ceph OSD fails to start on authorization
- The ceph-exporter pods are present in the Ceph crash list
- Ceph health reports PG_DAMAGED after a failed disk or node replacement