Enable management of Ceph tolerations and resources

Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom resource (CR), available since MOSK 25.2 as a replacement for the deprecated KaaSCephCluster. For the equivalent procedure with the KaaSCephCluster CR, refer to the following section:

Enable Ceph tolerations and resources management

Warning

This document does not provide specific recommendations on requests and limits for Ceph resources. It describes the native Ceph resource configuration for any MOSK cluster.

You can configure Ceph Controller to manage Ceph resources by specifying their requirements and constraints. To configure resource consumption for Ceph nodes, consider the following options, which are based on different Helm release configuration values:

  • Configuring tolerations for tainted nodes for the Ceph Monitor, Ceph Manager, and Ceph OSD daemons. For details, see Taints and Tolerations.

  • Configuring node resource requests or limits for the Ceph daemons and for each Ceph OSD device class such as HDD, SSD, or NVMe. For details, see Managing Resources for Containers.

To enable management of Ceph tolerations and resources:

  1. To avoid Ceph cluster health issues during daemon configuration changes, set Ceph noout, nobackfill, norebalance, and norecover flags through the ceph-tools pod before editing Ceph tolerations and resources:

    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
    ceph osd set noout
    ceph osd set nobackfill
    ceph osd set norebalance
    ceph osd set norecover
    exit
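
    To confirm that the flags are set before you proceed, you can list the cluster flags, for example:

    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd dump | grep flags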
    

    Note

    Skip this step if you are only configuring the PG rebalance timeout and replica count parameters.

  2. Edit the MiraCeph resource of a MOSK cluster:

    kubectl -n ceph-lcm-mirantis edit miraceph
    

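    If you are unsure of the resource name, you can first list the available MiraCeph resources; typically, a MOSK cluster has a single MiraCeph object:

    kubectl -n ceph-lcm-mirantis get miraceph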

  3. Specify the parameters in the hyperconverge section as required. The hyperconverge section includes the following parameters:

    tolerations

    Specifies tolerations for tainted nodes for the defined daemon type. Each daemon type key contains the following parameters:

    hyperconverge:
      tolerations:
        <daemonType>:
          rules:
          - key: ""
            operator: ""
            value: ""
            effect: ""
            tolerationSeconds: 0
    

    Possible values for <daemonType> are osd, mon, mgr, and rgw. The following values are also supported:

    • all - specifies general toleration rules for all daemons if no separate daemon rule is specified.

    • mds - specifies the CephFS Metadata Server daemons.

    Example configuration
    hyperconverge:
      tolerations:
        mon:
          rules:
          - effect: NoSchedule
            key: node-role.kubernetes.io/controlplane
            operator: Exists
        mgr:
          rules:
          - effect: NoSchedule
            key: node-role.kubernetes.io/controlplane
            operator: Exists
        osd:
          rules:
          - effect: NoSchedule
            key: node-role.kubernetes.io/controlplane
            operator: Exists
        rgw:
          rules:
          - effect: NoSchedule
            key: node-role.kubernetes.io/controlplane
            operator: Exists
    
    resources

    Specifies resource requests or limits. The parameter is a map with the daemon type as a key and the following structure as a value:

    hyperconverge:
      resources:
        <daemonType>:
          requests: <kubernetes valid spec of daemon resource requests>
          limits: <kubernetes valid spec of daemon resource limits>
    

    Possible values for <daemonType> are mon, mgr, osd, osd-hdd, osd-ssd, osd-nvme, prepareosd, rgw, and mds. The osd-hdd, osd-ssd, and osd-nvme resource requirements handle only the Ceph OSDs with a corresponding device class.

    Example configuration
    hyperconverge:
      resources:
        mon:
          requests:
            memory: 1Gi
            cpu: 2
          limits:
            memory: 2Gi
            cpu: 3
        mgr:
          requests:
            memory: 1Gi
            cpu: 2
          limits:
            memory: 2Gi
            cpu: 3
        osd:
          requests:
            memory: 1Gi
            cpu: 2
          limits:
            memory: 2Gi
            cpu: 3
        osd-hdd:
          requests:
            memory: 1Gi
            cpu: 2
          limits:
            memory: 2Gi
            cpu: 3
        osd-ssd:
          requests:
            memory: 1Gi
            cpu: 2
          limits:
            memory: 2Gi
            cpu: 3
        osd-nvme:
          requests:
            memory: 1Gi
            cpu: 2
          limits:
            memory: 2Gi
            cpu: 3
    
  4. For Ceph node-specific resource settings, specify the resources section in the corresponding nodes spec of MiraCeph:

    spec:
      nodes:
      - name: <nodeName>
        resources:
          requests: <kubernetes valid spec of daemon resource requests>
          limits: <kubernetes valid spec of daemon resource limits>
    

    Substitute <nodeName> with the name of the node that requires specific resource settings. For example:

    spec:
      nodes:
      - name: kaas-node-worker-1
        resources:
          requests:
            memory: 1Gi
            cpu: 2
          limits:
            memory: 2Gi
            cpu: 3
    
  5. For RADOS Gateway instance-specific resource settings, specify the resources section in the rgw spec of MiraCeph:

    spec:
      objectStorage:
        rgw:
          gateway:
            resources:
              requests: <kubernetes valid spec of daemon resource requests>
              limits: <kubernetes valid spec of daemon resource limits>
    

    For example:

    spec:
      objectStorage:
        rgw:
          gateway:
            resources:
              requests:
                memory: 1Gi
                cpu: 2
              limits:
                memory: 2Gi
                cpu: 3
    
  6. Save the reconfigured MiraCeph resource and wait for ceph-controller to apply the updated Ceph configuration. It will recreate Ceph Monitors, Ceph Managers, or Ceph OSDs according to the specified hyperconverge configuration.
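
    To observe the process, you can watch the Ceph daemon pods being recreated in the Rook namespace, for example:

    kubectl -n rook-ceph get pods -w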

  7. If you have specified any osd tolerations, additionally specify tolerations for the Rook instances:

    1. Open the Cluster resource of the required Ceph cluster on a management cluster:

      kubectl -n <ClusterProjectName> edit cluster
      

      Substitute <ClusterProjectName> with the project name of the required cluster.

    2. Specify the parameters in the ceph-controller section of spec.providerSpec.value.helmReleases:

      1. Specify the hyperconverge.tolerations.rook parameter as required:

        hyperconverge:
          tolerations:
            rook: |
              <yamlFormattedKubernetesTolerations>
        

        In <yamlFormattedKubernetesTolerations>, specify YAML-formatted tolerations from hyperconverge.tolerations.osd.rules of the MiraCeph spec. For example:

        hyperconverge:
          tolerations:
            rook: |
              - effect: NoSchedule
                key: node-role.kubernetes.io/controlplane
                operator: Exists
        
      2. In controllers.cephRequest.parameters.pgRebalanceTimeoutMin, specify the PG rebalance timeout for requests. The default is 30 minutes. For example:

        controllers:
          cephRequest:
            parameters:
              pgRebalanceTimeoutMin: 35
        
      3. In controllers.cephController.replicas, controllers.cephRequest.replicas, and controllers.cephStatus.replicas, specify the replica count. The default is 3 replicas. For example:

        controllers:
          cephController:
            replicas: 1
          cephRequest:
            replicas: 1
          cephStatus:
            replicas: 1
        
    3. Save the reconfigured Cluster resource and wait for the ceph-controller Helm release update. It will recreate the Ceph CSI and Rook discover pods according to the specified hyperconverge.tolerations.rook configuration.
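
      For reference, the parameters from the previous substeps are set as values of the ceph-controller Helm release entry. The following is a minimal sketch of the surrounding Cluster spec structure; the name and values keys of the release entry are assumptions based on the spec.providerSpec.value.helmReleases path named above:

      spec:
        providerSpec:
          value:
            helmReleases:
            - name: ceph-controller
              values:
                hyperconverge:
                  tolerations:
                    rook: |
                      - effect: NoSchedule
                        key: node-role.kubernetes.io/controlplane
                        operator: Exists
                controllers:
                  cephRequest:
                    parameters:
                      pgRebalanceTimeoutMin: 35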

  8. Specify tolerations for different Rook resources using the following chart-based options:

    • hyperconverge.tolerations.rook - general toleration rules for each Rook service if no exact rules are specified

    • hyperconverge.tolerations.csiplugin - for tolerations of the ceph-csi plugins DaemonSets

    • hyperconverge.tolerations.csiprovisioner - for the ceph-csi provisioner deployment tolerations

    • hyperconverge.nodeAffinity.csiprovisioner - provides the ceph-csi provisioner node affinity with a value section
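
    For example, tolerations for the CSI components can be set in the same string-block format as hyperconverge.tolerations.rook above; this sketch assumes these options accept the same format:

    hyperconverge:
      tolerations:
        csiplugin: |
          - effect: NoSchedule
            key: node-role.kubernetes.io/controlplane
            operator: Exists
        csiprovisioner: |
          - effect: NoSchedule
            key: node-role.kubernetes.io/controlplane
            operator: Exists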

  9. After a successful Ceph reconfiguration, unset the flags set in step 1 through the ceph-tools pod:

    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
    ceph osd unset noout
    ceph osd unset nobackfill
    ceph osd unset norebalance
    ceph osd unset norecover
    exit
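
    To confirm that all flags are cleared, you can check the cluster status, for example:

    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph -s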
    

    Note

    Skip this step if you have only configured the PG rebalance timeout and replica count parameters.

Once done, proceed to Verify Ceph tolerations and resources.