Specify placement of Ceph cluster daemons
If you need to configure the placement of Rook daemons on nodes, add extra values to the
ceph-controller Helm release in the providerSpec section of the Cluster resource.
The procedures in this section describe how to specify the placement of
rook-ceph-operator, rook-discover, and csi-rbdplugin.
To specify rook-ceph-operator placement:
On the management cluster, edit the Cluster resource of the target MOSK cluster:

kubectl -n <moskClusterProjectName> edit cluster
Add the following parameters to the ceph-controller Helm release values:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookOperatorPlacement:
            affinity: <rookOperatorAffinity>
            nodeSelector: <rookOperatorNodeSelector>
            tolerations: <rookOperatorTolerations>
<rookOperatorAffinity> is a key-value mapping that contains a valid Kubernetes affinity specification
<rookOperatorNodeSelector> is a key-value mapping that contains a valid Kubernetes nodeSelector specification
<rookOperatorTolerations> is a list that contains valid Kubernetes toleration items
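For illustration, a minimal sketch of filled-in values; the ceph-role node label and the ceph-only taint are hypothetical and must match labels and taints that exist in your environment, and the affinity field accepts a standard Kubernetes affinity specification in the same way:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookOperatorPlacement:
            nodeSelector:
              ceph-role: operator     # hypothetical node label
            tolerations:
            - key: ceph-only          # hypothetical taint key
              operator: Exists
              effect: NoSchedule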
Wait for some time and verify on the MOSK cluster that the changes have been applied:
kubectl -n rook-ceph get deploy rook-ceph-operator -o yaml
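As an optional shortcut in addition to the full YAML output above, you can filter the output to the scheduling-related fields of the Deployment pod template with jsonpath, for example:

kubectl -n rook-ceph get deploy rook-ceph-operator -o jsonpath='{.spec.template.spec.nodeSelector}'
kubectl -n rook-ceph get deploy rook-ceph-operator -o jsonpath='{.spec.template.spec.tolerations}'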
To specify rook-discover and csi-rbdplugin placement simultaneously:
On the management cluster, edit the desired Cluster resource:

kubectl -n <moskClusterProjectName> edit cluster
Add the following parameters to the ceph-controller Helm release values:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookExtraConfig:
            extraDaemonsetLabels: <labelSelector>
Substitute <labelSelector> with a valid Kubernetes label selector expression to place the rook-discover and csi-rbdplugin DaemonSet pods.
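For illustration only, a filled-in fragment, assuming that extraDaemonsetLabels accepts the same key=value selector string format as the hyperconverge example in the next procedure; the role=storage-node label is hypothetical, and you should verify the expected format against the ceph-controller chart values:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookExtraConfig:
            extraDaemonsetLabels: "role=storage-node"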
Wait for some time and verify on the MOSK cluster that the changes have been applied:

kubectl -n rook-ceph get ds rook-discover -o yaml
kubectl -n rook-ceph get ds csi-rbdplugin -o yaml
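To additionally confirm on which nodes the DaemonSet pods were scheduled, you can list the pods together with their nodes, for example:

kubectl -n rook-ceph get pods -o wide | grep -E 'rook-discover|csi-rbdplugin'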
To specify rook-discover and csi-rbdplugin placement separately:
On the management cluster, edit the desired Cluster resource:

kubectl -n <moskClusterProjectName> edit cluster
If required, add the following parameters to the ceph-controller Helm release values:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          hyperconverge:
            nodeAffinity:
              csiplugin: <labelSelector1>
              rookDiscover: <labelSelector2>
Substitute <labelSelectorX> with a valid Kubernetes label selector expression to place the rook-discover and csi-rbdplugin DaemonSet pods. For example, "role=storage-node; discover=true".
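For illustration, the same fragment with both placeholders filled in using the example selector above; the role and discover node labels are hypothetical and must exist on the target storage nodes:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          hyperconverge:
            nodeAffinity:
              csiplugin: "role=storage-node; discover=true"
              rookDiscover: "role=storage-node; discover=true"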
Wait for some time and verify on the MOSK cluster that the changes have been applied:

kubectl -n rook-ceph get ds rook-discover -o yaml
kubectl -n rook-ceph get ds csi-rbdplugin -o yaml
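As an optional narrower check, assuming that the hyperconverge.nodeAffinity values are rendered as node affinity in the DaemonSet pod templates, you can filter the output with jsonpath:

kubectl -n rook-ceph get ds rook-discover -o jsonpath='{.spec.template.spec.affinity.nodeAffinity}'
kubectl -n rook-ceph get ds csi-rbdplugin -o jsonpath='{.spec.template.spec.affinity.nodeAffinity}'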