Ceph pools for Cinder multi-backend
Warning
This procedure is valid for MOSK clusters that use the MiraCeph custom
resource (CR), which is available since MOSK 25.2 and replaces the unsupported
KaaSCephCluster resource. MiraCeph will be automatically migrated
to CephDeployment in MOSK 26.1. For details, see Deprecation Notes:
KaaSCephCluster API on management clusters.
For the equivalent procedure with the unsupported KaaSCephCluster CR, refer
to the following section:
The MiraCeph object supports multiple Ceph pools with the volumes role,
enabling you to configure multiple Cinder backends.
To define Ceph pools for multiple Cinder backends:
In the MiraCeph CR, add the desired number of Ceph pools to the pools section with the volumes role:
kubectl -n ceph-lcm-mirantis edit miraceph
Example configuration:
pools:
- default: false
  deviceClass: hdd
  name: volumes
  replicated:
    size: 3
  role: volumes
- default: false
  deviceClass: hdd
  name: volumes-backend-1
  replicated:
    size: 3
  role: volumes
- default: false
  deviceClass: hdd
  name: volumes-backend-2
  replicated:
    size: 3
  role: volumes
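Before applying the edit, you can sanity-check the pools fragment programmatically. The following is a minimal sketch in Python; the validation rules it applies (unique pool names, the volumes role on every pool, a replica size of at least 2) are illustrative assumptions for this sketch, not checks performed by MiraCeph itself:

```python
# Illustrative sanity checks for a MiraCeph "pools" fragment before applying it.
# The rules below (unique names, role "volumes", replica size >= 2) are
# assumptions made for this sketch, not requirements enforced by MiraCeph.

def validate_volume_pools(pools: list[dict]) -> list[str]:
    """Return a list of human-readable problems found in the pools fragment."""
    problems = []
    names = [p.get("name") for p in pools]
    if len(names) != len(set(names)):
        problems.append("pool names are not unique")
    for pool in pools:
        if pool.get("role") != "volumes":
            problems.append(f"pool {pool.get('name')!r} does not have role 'volumes'")
        if pool.get("replicated", {}).get("size", 0) < 2:
            problems.append(f"pool {pool.get('name')!r} has replica size < 2")
    return problems

# Pools mirroring the example configuration above:
pools = [
    {"default": False, "deviceClass": "hdd", "name": "volumes",
     "replicated": {"size": 3}, "role": "volumes"},
    {"default": False, "deviceClass": "hdd", "name": "volumes-backend-1",
     "replicated": {"size": 3}, "role": "volumes"},
    {"default": False, "deviceClass": "hdd", "name": "volumes-backend-2",
     "replicated": {"size": 3}, "role": "volumes"},
]

print(validate_volume_pools(pools))  # → []
```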
Verify that Cinder backend pools are created and ready:
kubectl -n ceph-lcm-mirantis get mchealth -o yaml
Example output:
status:
  fullClusterStatus:
    blockStorageStatus:
      poolsStatus:
        volumes-hdd:
          present: true
          status:
            observedGeneration: 1
            phase: Ready
        volumes-backend-1-hdd:
          present: true
          status:
            observedGeneration: 1
            phase: Ready
        volumes-backend-2-hdd:
          present: true
          status:
            observedGeneration: 1
            phase: Ready
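If you automate this check, the same readiness condition can be evaluated programmatically. The following Python sketch assumes the mchealth status has already been fetched (for example, with kubectl get mchealth -o json) and loaded into a dict; the helper name is hypothetical:

```python
# Sketch of a readiness check over the mchealth status shown above.
# Assumes the status subtree has been fetched and deserialized into a dict;
# not_ready_pools() is a hypothetical helper, not part of any MOSK tooling.

def not_ready_pools(health_status: dict) -> list[str]:
    """Return the names of backend pools that are absent or not yet Ready."""
    pools = (
        health_status.get("fullClusterStatus", {})
        .get("blockStorageStatus", {})
        .get("poolsStatus", {})
    )
    return [
        name
        for name, state in pools.items()
        if not state.get("present")
        or state.get("status", {}).get("phase") != "Ready"
    ]

# Sample status mirroring the example output, with one pool still creating:
sample = {
    "fullClusterStatus": {
        "blockStorageStatus": {
            "poolsStatus": {
                "volumes-hdd": {"present": True, "status": {"phase": "Ready"}},
                "volumes-backend-1-hdd": {"present": True, "status": {"phase": "Ready"}},
                "volumes-backend-2-hdd": {"present": True, "status": {"phase": "Creating"}},
            }
        }
    }
}

print(not_ready_pools(sample))  # → ['volumes-backend-2-hdd']
```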
Verify that the added Ceph pools are accessible from the Cinder service. For example:
kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-1-hdd -n client.cinder
kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-2-hdd -n client.cinder
After a Ceph pool becomes available, it is automatically added as an additional Cinder backend and registered as a new volume type, which you can use to create Cinder volumes.