Ceph pools for Cinder multi-backend

Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom resource (CR), available since MOSK 25.2 as a replacement for the unsupported KaaSCephCluster resource. MiraCeph will be automatically migrated to CephDeployment in MOSK 26.1. For details, see Deprecation Notes: KaaSCephCluster API on management clusters.

For the equivalent procedure with the unsupported KaaSCephCluster CR, refer to the following section:

Ceph pools for Cinder multi-backend

The MiraCeph object supports multiple Ceph pools with the volumes role, which you can use to configure multiple Cinder backends.

To define Ceph pools for multiple Cinder backends:

  1. In the MiraCeph CR, add the desired number of Ceph pools to the pools section with the volumes role:

    kubectl -n ceph-lcm-mirantis edit miraceph
    

    Example configuration:

    pools:
    - default: false
      deviceClass: hdd
      name: volumes
      replicated:
        size: 3
      role: volumes
    - default: false
      deviceClass: hdd
      name: volumes-backend-1
      replicated:
        size: 3
      role: volumes
    - default: false
      deviceClass: hdd
      name: volumes-backend-2
      replicated:
        size: 3
      role: volumes
    
  2. Verify that Cinder backend pools are created and ready:

    kubectl -n ceph-lcm-mirantis get mchealth -o yaml
    

    Example output:

    status:
      fullClusterStatus:
        blockStorageStatus:
          poolsStatus:
            volumes-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
            volumes-backend-1-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
            volumes-backend-2-hdd:
              present: true
              status:
                observedGeneration: 1
                phase: Ready
    
  3. Verify that the added Ceph pools are accessible from the Cinder service. For example:

    kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-1-hdd -n client.cinder
    kubectl -n openstack exec -it cinder-volume-0 -- rbd ls -p volumes-backend-2-hdd -n client.cinder
    
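Step 2 above checks pool readiness by eye. As a minimal sketch (not a Mirantis tool), the same check can be scripted: parse the mchealth status into a dictionary and report any Cinder backend pools that are not yet Ready. The structure below mirrors the example `kubectl get mchealth -o yaml` output; the function and sample data are illustrative.

```python
def not_ready_pools(mchealth_status):
    """Return names of pools that are absent or whose phase is not 'Ready'."""
    pools = (
        mchealth_status
        .get("fullClusterStatus", {})
        .get("blockStorageStatus", {})
        .get("poolsStatus", {})
    )
    return sorted(
        name
        for name, state in pools.items()
        if not state.get("present")
        or state.get("status", {}).get("phase") != "Ready"
    )

# Sample status modeled on the example output above, with one pool
# still being created to show how a non-ready pool is reported.
status = {
    "fullClusterStatus": {
        "blockStorageStatus": {
            "poolsStatus": {
                "volumes-hdd": {
                    "present": True,
                    "status": {"observedGeneration": 1, "phase": "Ready"},
                },
                "volumes-backend-1-hdd": {
                    "present": True,
                    "status": {"observedGeneration": 1, "phase": "Ready"},
                },
                "volumes-backend-2-hdd": {
                    "present": True,
                    "status": {"observedGeneration": 1, "phase": "Creating"},
                },
            }
        }
    }
}

print(not_ready_pools(status))  # ['volumes-backend-2-hdd']
```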

After a Ceph pool becomes available, it is automatically configured as an additional Cinder backend and registered as a new volume type, which you can use to create Cinder volumes.
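Note that the pool names in the mchealth output differ from the `name` fields in the MiraCeph spec: judging by the example above, the operator appends the device class to the pool name (`volumes` with `deviceClass: hdd` becomes `volumes-hdd`). Treat this naming rule as an inference from the example output; the helper below is hypothetical and simply makes that mapping explicit, which is handy when composing the `rbd ls -p <pool>` verification commands.

```python
def full_pool_name(spec):
    """Derive the actual Ceph pool name from a MiraCeph `pools` entry.

    Assumed naming rule (inferred from the example output):
    '<name>-<deviceClass>'.
    """
    return f"{spec['name']}-{spec['deviceClass']}"

# Pool entries matching the example MiraCeph configuration above
specs = [
    {"name": "volumes", "deviceClass": "hdd", "role": "volumes"},
    {"name": "volumes-backend-1", "deviceClass": "hdd", "role": "volumes"},
    {"name": "volumes-backend-2", "deviceClass": "hdd", "role": "volumes"},
]

print([full_pool_name(s) for s in specs])
# ['volumes-hdd', 'volumes-backend-1-hdd', 'volumes-backend-2-hdd']
```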