Specify placement of Ceph cluster daemons

If you need to configure the placement of Rook daemons on nodes, you can add extra values to the ceph-controller Helm release in the providerSpec section of the Cluster resource.

The procedures in this section describe how to specify the placement of rook-ceph-operator, rook-discover, and csi-rbdplugin.

To specify rook-ceph-operator placement:

  1. On the management cluster, edit the Cluster resource of the target MOSK cluster:

    kubectl -n <moskClusterProjectName> edit cluster
    
  2. Add the following parameters to the ceph-controller Helm release values:

    spec:
      providerSpec:
        value:
          helmReleases:
          - name: ceph-controller
            values:
              rookOperatorPlacement:
                affinity: <rookOperatorAffinity>
                nodeSelector: <rookOperatorNodeSelector>
                tolerations: <rookOperatorTolerations>
    
    • <rookOperatorAffinity> is a key-value mapping that contains a valid Kubernetes affinity specification

    • <rookOperatorNodeSelector> is a key-value mapping that contains a valid Kubernetes nodeSelector specification

    • <rookOperatorTolerations> is a list that contains valid Kubernetes toleration items
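    For illustration, the rookOperatorPlacement values might be filled in as follows. The node label ceph-operator-node=true and the taint key ceph-role are assumptions for this example, not predefined names:

    ```yaml
    rookOperatorPlacement:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              # Assumed label; use a label that exists on your nodes
              - key: ceph-operator-node
                operator: In
                values:
                - "true"
      nodeSelector:
        ceph-operator-node: "true"
      tolerations:
      # Assumed taint key; allows scheduling on nodes tainted with ceph-role
      - key: ceph-role
        operator: Exists
        effect: NoSchedule
    ```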

  3. Wait for some time, then verify on the MOSK cluster that the changes have been applied:

    kubectl -n rook-ceph get deploy rook-ceph-operator -o yaml
    
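    As a quicker check, you can print only the scheduling-related fields of the Deployment instead of the full YAML. This is a sketch assuming a standard kubectl setup:

    ```shell
    # Show only the nodeSelector and tolerations of the operator Deployment
    kubectl -n rook-ceph get deploy rook-ceph-operator \
      -o jsonpath='{.spec.template.spec.nodeSelector}{"\n"}'
    kubectl -n rook-ceph get deploy rook-ceph-operator \
      -o jsonpath='{.spec.template.spec.tolerations}{"\n"}'
    ```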

To specify rook-discover and csi-rbdplugin placement simultaneously:

  1. On the management cluster, edit the desired Cluster resource:

    kubectl -n <moskClusterProjectName> edit cluster
    
  2. Add the following parameters to the ceph-controller Helm release values:

    spec:
      providerSpec:
        value:
          helmReleases:
          - name: ceph-controller
            values:
              rookExtraConfig:
                extraDaemonsetLabels: <labelSelector>
    

    Substitute <labelSelector> with a valid Kubernetes label selector expression to place the rook-discover and csi-rbdplugin DaemonSet pods.
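    For example, with a hypothetical node label role=storage-node (an assumption, not a predefined label), the snippet above might be filled in as:

    ```yaml
    rookExtraConfig:
      # Both rook-discover and csi-rbdplugin pods will be placed
      # on nodes matching this label selector
      extraDaemonsetLabels: "role=storage-node"
    ```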

  3. Wait for some time, then verify on the MOSK cluster that the changes have been applied:

    kubectl -n rook-ceph get ds rook-discover -o yaml
    kubectl -n rook-ceph get ds csi-rbdplugin -o yaml
    

To specify rook-discover and csi-rbdplugin placement separately:

  1. On the management cluster, edit the desired Cluster resource:

    kubectl -n <moskClusterProjectName> edit cluster
    
  2. If required, add the following parameters to the ceph-controller Helm release values:

    spec:
      providerSpec:
        value:
          helmReleases:
          - name: ceph-controller
            values:
              hyperconverge:
                nodeAffinity:
                  csiplugin: <labelSelector1>
                  rookDiscover: <labelSelector2>
    

    Substitute <labelSelector1> and <labelSelector2> with valid Kubernetes label selector expressions that define where to place the csi-rbdplugin and rook-discover DaemonSet pods respectively. For example, "role=storage-node; discover=true".
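    Using the example selectors above, the separate placement might look as follows. The labels role=storage-node and discover=true are illustrative only:

    ```yaml
    hyperconverge:
      nodeAffinity:
        # csi-rbdplugin pods go to nodes with this label
        csiplugin: "role=storage-node"
        # rook-discover pods require both labels
        rookDiscover: "role=storage-node; discover=true"
    ```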

  3. Wait for some time, then verify on the MOSK cluster that the changes have been applied:

    kubectl -n rook-ceph get ds rook-discover -o yaml
    kubectl -n rook-ceph get ds csi-rbdplugin -o yaml