Ceph pools for Cinder multi-backend
Available since MOSK 23.2
The KaaSCephCluster object supports multiple Ceph pools with the volumes role, which allows you to configure multiple Cinder backends.
To define Ceph pools for multiple Cinder backends:
1. In the KaaSCephCluster object, add the desired number of Ceph pools with the volumes role to the pools section:

kubectl -n <MOSKClusterProject> edit kaascephcluster

Substitute <MOSKClusterProject> with the corresponding namespace of the MOSK cluster.

Example configuration:
spec:
  cephClusterSpec:
    pools:
    - default: false
      deviceClass: hdd
      name: volumes
      replicated:
        size: 3
      role: volumes
    - default: false
      deviceClass: hdd
      name: volumes-backend-1
      replicated:
        size: 3
      role: volumes
    - default: false
      deviceClass: hdd
      name: volumes-backend-2
      replicated:
        size: 3
      role: volumes
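Optionally, you can also verify that the pools appear at the Ceph level. The following command is a sketch that assumes the Rook Ceph Tools pod is exposed through the rook-ceph-tools deployment in the rook-ceph namespace; adjust the names to your deployment. As the status output in the next step shows, the device class is appended to the pool names, for example, volumes-backend-1-hdd:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd pool ls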
2. Verify that the Cinder backend pools are created and ready:
kubectl -n <MOSKClusterProject> get kaascephcluster -o yaml
Example output:
status:
  fullClusterStatus:
    blockStorageStatus:
      poolsStatus:
        volumes-hdd:
          present: true
          status:
            observedGeneration: 1
            phase: Ready
        volumes-backend-1-hdd:
          present: true
          status:
            observedGeneration: 1
            phase: Ready
        volumes-backend-2-hdd:
          present: true
          status:
            observedGeneration: 1
            phase: Ready
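To inspect only the pool statuses instead of the entire object, you can narrow the output with a jsonpath query. This is a sketch derived from the field path in the example output above:

kubectl -n <MOSKClusterProject> get kaascephcluster -o jsonpath='{.items[*].status.fullClusterStatus.blockStorageStatus.poolsStatus}'

Each pool must report phase: Ready.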
3. Verify that the added Ceph pools are accessible from the Cinder service. For example:
kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-1-hdd -n client.cinder
kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-2-hdd -n client.cinder
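The commands list the RBD images in each pool. For pools that have just been created, the output is typically empty; the commands completing without errors confirms that the client.cinder Ceph client can access the pools.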
After a new Ceph pool becomes available, it is automatically configured as an additional Cinder backend and registered as a new volume type, which you can use to create Cinder volumes.
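For example, you can list the registered volume types and create a test volume in one of the new backends. The following commands are a sketch: running the openstack CLI through the keystone-client deployment in the openstack namespace and the <new-volume-type> placeholder are assumptions; take the actual type name from the volume type list output:

# List volume types; the new backends appear as additional types
kubectl -n openstack exec -it deployment/keystone-client -- openstack volume type list
# Create a 1 GiB test volume using one of the new types
kubectl -n openstack exec -it deployment/keystone-client -- openstack volume create --type <new-volume-type> --size 1 test-volume
# The volume reaching the available status confirms that the backend is operational
kubectl -n openstack exec -it deployment/keystone-client -- openstack volume show test-volume -c status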