Ceph pools for Cinder multi-backend

Warning

This procedure applies to MOSK clusters that use the MiraCeph custom resource (CR), introduced in MOSK 25.2 as a replacement for the deprecated KaaSCephCluster CR. For the equivalent procedure that uses the KaaSCephCluster CR, refer to the following section:

Ceph pools for Cinder multi-backend

The MiraCeph object supports multiple Ceph pools with the volumes role, which enables a multi-backend configuration for Cinder.

To define Ceph pools for multiple Cinder backends:

  1. In the MiraCeph CR, add the desired number of Ceph pools to the pools section with the volumes role:

    kubectl -n ceph-lcm-mirantis edit miraceph
    

    Example configuration:

    pools:
    - default: false
      deviceClass: hdd
      name: volumes
      replicated:
        size: 3
      role: volumes
    - default: false
      deviceClass: hdd
      name: volumes-backend-1
      replicated:
        size: 3
      role: volumes
    - default: false
      deviceClass: hdd
      name: volumes-backend-2
      replicated:
        size: 3
      role: volumes
    
  2. Verify that Cinder backend pools are created and ready:

    kubectl -n ceph-lcm-mirantis get mchealth -o yaml
    

    Example output. Note that the reported pool names include the deviceClass suffix, for example, volumes-hdd:

    status:
      fullClusterStatus:
        blockStorageStatus:
          poolsStatus:
            volumes-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
            volumes-backend-1-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
            volumes-backend-2-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
    
  3. Verify that the added Ceph pools are accessible from the Cinder service. For example:

    kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-1-hdd -n client.cinder
    kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-2-hdd -n client.cinder
    
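As an additional check, you can list the pools directly in Ceph. The following command is a sketch that assumes the standard Rook toolbox deployment named rook-ceph-tools in the rook-ceph namespace; adjust the deployment and namespace names to match your cluster:

```shell
# List all Ceph pools; the Cinder backend pools should appear
# with the device class suffix, for example, volumes-backend-1-hdd
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd pool ls
```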

After a Ceph pool with the volumes role becomes available, it is automatically added to Cinder as an additional backend and registered as a new volume type, which you can use to create Cinder volumes.
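Once the backends are registered, the corresponding volume types appear in Cinder. The following is an illustrative check that assumes the OpenStack CLI is available through a keystone-client deployment in the openstack namespace; the exact deployment name and the volume type names may differ in your environment, so verify them with the list command first:

```shell
# List Cinder volume types; the new backends should appear as separate types
kubectl -n openstack exec -it deploy/keystone-client -- openstack volume type list

# Create a test volume using one of the new volume types
# (replace the type name with one from the list output above)
kubectl -n openstack exec -it deploy/keystone-client -- \
  openstack volume create --type volumes-backend-1 --size 1 test-volume
```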