Ceph pools for Cinder multi-backend

Available since MOSK 23.2

The KaaSCephCluster object supports multiple Ceph pools with the volumes role, which enables you to configure multiple Cinder back ends.

To define Ceph pools for multiple Cinder back ends:

  1. In the KaaSCephCluster object, add the desired number of Ceph pools to the pools section with the volumes role:

    kubectl -n <MOSKClusterProject> edit kaascephcluster
    

    Substitute <MOSKClusterProject> with the corresponding namespace of the MOSK cluster.

    Example configuration:

    spec:
      cephClusterSpec:
        pools:
        - default: false
          deviceClass: hdd
          name: volumes
          replicated:
            size: 3
          role: volumes
        - default: false
          deviceClass: hdd
          name: volumes-backend-1
          replicated:
            size: 3
          role: volumes
        - default: false
          deviceClass: hdd
          name: volumes-backend-2
          replicated:
            size: 3
          role: volumes
    
  2. Verify that the Cinder back-end pools have been created and are ready. Note that the resulting Ceph pool names combine the name and deviceClass values from the spec, for example, volumes-backend-1-hdd:

    kubectl -n <MOSKClusterProject> get kaascephcluster -o yaml
    

    Example output:

    status:
      fullClusterStatus:
        blockStorageStatus:
          poolsStatus:
            volumes-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
            volumes-backend-1-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
            volumes-backend-2-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
    
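    Alternatively, you can query the pools status directly. A minimal sketch, assuming a single KaaSCephCluster object in the namespace; the JSONPath mirrors the status structure shown above:

    kubectl -n <MOSKClusterProject> get kaascephcluster \
      -o jsonpath='{.items[0].status.fullClusterStatus.blockStorageStatus.poolsStatus}'
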
  3. Verify that the added Ceph pools are accessible from the Cinder service. For example:

    kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-1-hdd -n client.cinder
    kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-2-hdd -n client.cinder
    
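    To check all back-end pools at once, you can wrap the same command in a shell loop. A minimal sketch, assuming the pool names from the example above; an empty listing is still a success because a zero exit code means the pool is accessible:

    for pool in volumes-backend-1-hdd volumes-backend-2-hdd; do
      kubectl -n openstack exec cinder-volume-0 -- rbd ls -p "$pool" -n client.cinder \
        && echo "$pool: accessible" || echo "$pool: NOT accessible"
    done
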

After a Ceph pool becomes available, it is automatically specified as an additional Cinder back end and registered as a new volume type, which you can use to create Cinder volumes.
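
For example, you can list the registered volume types and create a test volume from an environment with the OpenStack client configured. A minimal sketch; the volume type name volumes-backend-1 is an assumption derived from the pool names above and may differ in your deployment:

    # List the available volume types; the new back ends should appear here
    openstack volume type list
    # Create a 1 GiB test volume using one of the new volume types
    # (type name assumed; verify it against the list output first)
    openstack volume create --type volumes-backend-1 --size 1 test-volume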