Ceph pools for Cinder multi-backend

Available since MOSK 23.2

The KaaSCephCluster object supports multiple Ceph pools with the volumes role, which enables you to configure multiple Cinder backends.

To define Ceph pools for multiple Cinder backends:

  1. In the KaaSCephCluster object, add the desired number of Ceph pools with the volumes role to the pools section:

    kubectl -n <MOSKClusterProject> edit kaascephcluster
    

    Substitute <MOSKClusterProject> with the corresponding namespace of the MOSK cluster.

    Example configuration:

    spec:
      cephClusterSpec:
        pools:
        - default: false
          deviceClass: hdd
          name: volumes
          replicated:
            size: 3
          role: volumes
        - default: false
          deviceClass: hdd
          name: volumes-backend-1
          replicated:
            size: 3
          role: volumes
        - default: false
          deviceClass: hdd
          name: volumes-backend-2
          replicated:
            size: 3
          role: volumes
    
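    Note that the effective pool names include the device class suffix: with the configuration above, the pools appear in the cluster status as volumes-hdd, volumes-backend-1-hdd, and volumes-backend-2-hdd, as shown in the next step.
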
  2. Verify that Cinder backend pools are created and ready:

    kubectl -n <MOSKClusterProject> get kaascephcluster -o yaml
    

    Example output:

    status:
      fullClusterStatus:
        blockStorageStatus:
          poolsStatus:
            volumes-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
            volumes-backend-1-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
            volumes-backend-2-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
    
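    To extract only the pools status, a JSONPath query is a convenient shortcut. The following is a sketch that assumes a single KaaSCephCluster object exists in the project:

    kubectl -n <MOSKClusterProject> get kaascephcluster \
      -o jsonpath='{.items[0].status.fullClusterStatus.blockStorageStatus.poolsStatus}'
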
  3. Verify that the added Ceph pools are accessible from the Cinder service. For example:

    kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-1-hdd -n client.cinder
    kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-2-hdd -n client.cinder
    
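    If the pools are accessible, both commands complete without errors. For newly created pools that do not yet contain volumes, the returned image list is expected to be empty.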

After a new Ceph pool becomes available, it is automatically specified as an additional Cinder backend and registered as a new volume type, which you can use to create Cinder volumes.
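
For example, you can list the registered volume types and create a test volume with one of them. The following commands are a sketch that assumes the OpenStack client is available in the keystone-client deployment in the openstack namespace and that the new volume type is named after the backend pool; verify both assumptions in your deployment:

    kubectl -n openstack exec -it deployment/keystone-client -- \
      openstack volume type list
    kubectl -n openstack exec -it deployment/keystone-client -- \
      openstack volume create --type <volume-type> --size 1 test-volume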