Ceph operations for MiraCeph¶
Warning
This procedure is valid for MOSK clusters that use the MiraCeph custom
resource (CR), which is available since MOSK 25.2 as a replacement for the
unsupported KaaSCephCluster resource. MiraCeph will be automatically migrated
to CephDeployment in MOSK 26.1. For details, see Deprecation Notes:
KaaSCephCluster API on management clusters.
For the equivalent procedure that uses the unsupported KaaSCephCluster CR,
refer to the corresponding section.
This section outlines Ceph LCM operations such as adding Ceph Monitor, Ceph, and RADOS Gateway nodes to an existing Ceph cluster or removing them, as well as removing or replacing Ceph OSDs. The section also includes OpenStack-specific operations for Ceph.
The following sections describe how to configure, manage, and verify specific aspects of a Ceph cluster, including Ceph cluster configuration options and OpenStack-related Ceph operations:
Caution
Before you proceed with any reading or writing operation, verify the cluster status using the ceph tool as described in Verify the Ceph core services.
- Automated Ceph LCM
- Remove Ceph OSD manually
- Migrate Ceph cluster to address storage devices using by-id
- Obtain a by-id symlink of a storage device
- Increase Ceph cluster storage size
- Move a Ceph Monitor daemon to another node
- Migrate a Ceph Monitor before machine replacement
- Enable Ceph RGW Object Storage
- Enable multisite for Ceph RGW Object Storage
- Manage Ceph RBD or CephFS clients and RGW users
- Verify Ceph
- Enable management of Ceph tolerations and resources
- Enable Ceph multinetwork
- Enable Ceph RBD mirroring
- Configure Ceph Shared File System (CephFS)
- Share Ceph across two MOSK clusters
- Specify placement of Ceph cluster daemons
- Migrate Ceph pools from one failure domain to another
- Enable periodic Ceph performance testing