Ceph operations for KaaSCephCluster (prior to 25.2)
Warning
This procedure is valid for MOSK clusters that use the deprecated KaaSCephCluster custom resource (CR) instead of the MiraCeph CR, which is available since MOSK 25.2 as the new Ceph configuration entrypoint. For the equivalent procedure with the MiraCeph CR, refer to the corresponding MiraCeph-related section.
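To confirm which configuration entrypoint a cluster currently uses, you can list both CR types. The following is a minimal sketch, assuming kubectl access to the management cluster, where the KaaSCephCluster CR resides, and to the managed cluster, where the MiraCeph CR typically resides; exact namespaces may differ in your environment:

    # On the management cluster: list deprecated KaaSCephCluster CRs across all projects
    kubectl get kaascephcluster --all-namespaces

    # On the managed cluster: list MiraCeph CRs (present only if the new entrypoint is used)
    kubectl get miraceph --all-namespaces

If kubectl reports that the resource type does not exist, the corresponding CRD is not installed on that cluster, which also answers the question.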
This section outlines Ceph LCM operations such as adding Ceph Monitors, Ceph nodes, and RADOS Gateway nodes to an existing Ceph cluster or removing them, as well as removing or replacing Ceph OSDs. It also covers OpenStack-specific operations for Ceph.
The following sections describe how to configure, manage, and verify specific aspects of a Ceph cluster, including Ceph cluster configuration options and OpenStack-related Ceph operations:
Caution
Before you proceed with any read or write operation, first verify the cluster status using the ceph tool as described in Verify the Ceph core services.
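For example, a basic status check can be run from the Rook Ceph tools pod. This is a sketch, assuming the toolbox deployment is named rook-ceph-tools and runs in the rook-ceph namespace of the managed cluster:

    # Overall cluster health, monitor quorum, and OSD status
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph -s

    # Detailed explanation of any health warnings before starting I/O-affecting operations
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph health detail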
- Comparison of KaaSCephCluster and MiraCeph-related specifications
- Automated Ceph LCM
- Remove Ceph OSD manually
- Migrate Ceph cluster to address storage devices using by-id
- Obtain a by-id symlink of a storage device
- Increase Ceph cluster storage size
- Move a Ceph Monitor daemon to another node
- Migrate a Ceph Monitor before machine replacement
- Enable Ceph RGW Object Storage
- Enable multisite for Ceph RGW Object Storage
- Manage Ceph RBD or CephFS clients and RGW users
- Verify Ceph
- Enable Ceph tolerations and resources management
- Enable Ceph multinetwork
- Enable Ceph RBD mirroring
- Configure Ceph Shared File System (CephFS)
- Share Ceph across two managed clusters
- Specify placement of Ceph cluster daemons
- Migrate Ceph pools from one failure domain to another
- Enable periodic Ceph performance testing