Comparison of KaaSCephCluster, MiraCeph, and CephDeployment specifications

This section outlines the differences and functional matches between KaaSCephCluster, MiraCeph, and CephDeployment custom resources (CRs).

As of MOSK 26.1, KaaSCephCluster and MiraCeph are unsupported and will be removed in a future release. All Ceph management operations are now centralized within the following MOSK cluster CRs:

  • CephDeployment

  • CephDeploymentHealth

  • CephDeploymentSecret

  • CephDeploymentMaintenance

Additionally, the management cluster’s KaaSCephOperationRequest and the MOSK cluster’s CephOsdRemoveRequest are replaced by the CephOsdRemoveTask CR of the MOSK cluster. For usage instructions, refer to Creating a Ceph OSD remove task.

KaaSCephCluster vs. CephDeployment

CephDeployment is the main management CR for Ceph clusters within a MOSK cluster, handling both day-1 and day-2 operations. Most fields in the legacy KaaSCephCluster map directly to CephDeployment fields, with the following exceptions:

Spec mapping

KaaSCephCluster spec.cephClusterSpec generally matches CephDeployment spec, except for:

  • The nodes section: in CephDeployment, nodes is a list of items rather than the map keyed by Machine name used in KaaSCephCluster:

    • The name field now refers to the MOSK cluster node name instead of the Machine resource name.

    • The devices list replaces storageDevices.

    • Node groups are now configured through the nodeGroup list of node names.

    For a complete parameter list, refer to the Nodes parameters section at CephDeployment custom resource.

    Configuration examples:

    KaaSCephCluster nodes and nodeGroups (legacy)
    kind: KaaSCephCluster
    ...
    spec:
      cephClusterSpec:
        ...
        nodes:
          machine-name-1:
            crush:
              rack: rack-1
            roles: ["mds"]
            storageDevices:
            - fullPath: /dev/disk/by-id/something
              config:
                deviceClass: nvme
                osdsPerDevice: "4"
        nodeGroups:
          group-1:
            spec:
              crush:
                rack: rack-1
              roles: ["mds"]
              storageDevices:
              - name: nvmeXnY
                config:
                  deviceClass: nvme
                  osdsPerDevice: "4"
            nodes:
            - machine-name-2
            - machine-name-3
    
    CephDeployment nodes (current)
    kind: CephDeployment
    ...
    spec:
      ...
      nodes:
      - name: node-name-1
        crush:
          rack: rack-1
        roles: ["mds"]
        devices:
        - fullPath: /dev/disk/by-id/something
          config:
            deviceClass: nvme
            osdsPerDevice: "4"
      - name: group-1
        nodeGroup:
        - node-name-2
        - node-name-3
        crush:
          rack: rack-1
        roles: ["mds"]
        devices:
        - name: nvmeXnY
          config:
            deviceClass: nvme
            osdsPerDevice: "4"
    
  • The network section: in CephDeployment, the network section must explicitly define the publicNet and clusterNet parameters.

    In the legacy architecture, KaaSCephCluster, located on the management cluster, could leave these parameters empty and inherit the IPAM settings. Because CephDeployment resides on the MOSK cluster, it cannot access the management cluster IPAM data. Therefore, define the subnets manually in the network section.

    For a complete parameter list, refer to the Network parameters section at CephDeployment custom resource.
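
    A minimal configuration sketch; the subnet CIDR values below are placeholders for illustration:

    CephDeployment network (current)
    kind: CephDeployment
    ...
    spec:
      ...
      network:
        publicNet: 10.10.0.0/24
        clusterNet: 10.12.0.0/24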

Status mapping

The status.miraCephInfo section in KaaSCephCluster is identical to the status section in CephDeployment.

KaaSCephCluster vs. CephDeploymentHealth

CephDeploymentHealth provides a comprehensive Ceph cluster health report. Its sections map as follows:

  • status.fullClusterInfo of KaaSCephCluster maps directly to status.fullClusterStatus of CephDeploymentHealth, with the following exception:

    • The KaaSCephExtraInfo data previously referenced through fullClusterInfo.cephDetails.deviceMappingMapRef of KaaSCephCluster is now embedded directly in the fullClusterStatus.cephDetails.deviceMapping section of CephDeploymentHealth.

  • The status.shortClusterInfo parameters of KaaSCephCluster now reside directly under the status section of CephDeploymentHealth. These parameters include:

    • lastCheck

    • lastUpdate

    • miraCephGeneration

    • state

    • messages

    For example, status.shortClusterInfo.lastCheck in KaaSCephCluster matches status.lastCheck in CephDeploymentHealth.
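
    The mapping can be sketched as follows; the field values are elided:

    KaaSCephCluster status (legacy)
    kind: KaaSCephCluster
    ...
    status:
      shortClusterInfo:
        lastCheck: "..."
        lastUpdate: "..."
        state: "..."

    CephDeploymentHealth status (current)
    kind: CephDeploymentHealth
    ...
    status:
      lastCheck: "..."
      lastUpdate: "..."
      state: "..."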

KaaSCephCluster vs. CephDeploymentSecret

CephDeploymentSecret contains references to all credential secrets of custom Ceph clients and Ceph Object Storage users.

The status.miraCephSecretsInfo section of KaaSCephCluster fully matches the status section of CephDeploymentSecret.

KaaSCephCluster vs. CephDeploymentMaintenance

CephDeploymentMaintenance contains information about ongoing maintenance of the Ceph cluster.

The status.miraCephMaintenanceInfo section of KaaSCephCluster fully matches the status section of CephDeploymentMaintenance.

MiraCeph resources vs. CephDeployment resources

The MiraCeph and CephDeployment resources are nearly identical, with the following minor structural changes:

  • The MiraCeph spec.pools.default and spec.pools.rbdDeviceMapOptions parameters are now located under the spec.pools.storageClassOpts sub-section in CephDeployment.

  • The MiraCeph disableOsSharedKeys parameter has been renamed to disableOsKeys and moved under spec.extraOpts in CephDeployment.
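
The following sketch illustrates both changes. The pool name is a placeholder, and the top-level placement of disableOsSharedKeys in the MiraCeph spec is an assumption:

    MiraCeph pools and disableOsSharedKeys (legacy)
    kind: MiraCeph
    ...
    spec:
      disableOsSharedKeys: true
      pools:
      - name: pool-1
        ...
        default: true
        rbdDeviceMapOptions: "..."

    CephDeployment pools and extraOpts (current)
    kind: CephDeployment
    ...
    spec:
      extraOpts:
        disableOsKeys: true
      pools:
      - name: pool-1
        ...
        storageClassOpts:
          default: true
          rbdDeviceMapOptions: "..."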

The auxiliary resources MiraCephHealth, MiraCephSecret, and MiraCephMaintenance are fully identical to their CephDeployment counterparts (CephDeploymentHealth, CephDeploymentSecret, and CephDeploymentMaintenance, respectively).

CephOsdRemoveRequest and CephOsdRemoveTask use the same configuration parameters.