Specify placement of Ceph cluster daemons
Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom
resource (CR), which is available since MOSK 25.2 and replaces the unsupported
KaaSCephCluster resource. MiraCeph will be automatically migrated
to CephDeployment in MOSK 26.1. For details, see Deprecation Notes:
KaaSCephCluster API on management clusters.

For the equivalent procedure with the unsupported KaaSCephCluster CR, refer
to the following section:
If you need to configure the placement of Rook daemons on nodes, add
extra values to the ceph-controller Helm release in the providerSpec
section of the Cluster resource.
The procedures in this section describe how to specify the placement of
rook-ceph-operator, rook-discover, and csi-rbdplugin.
To specify rook-ceph-operator placement:
On the management cluster, edit the Cluster resource of the target MOSK cluster:

```shell
kubectl -n <moskClusterProjectName> edit cluster
```
Add the following parameters to the ceph-controller Helm release values:

```yaml
spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookOperatorPlacement:
            affinity: <rookOperatorAffinity>
            nodeSelector: <rookOperatorNodeSelector>
            tolerations: <rookOperatorTolerations>
```
In this configuration:

- <rookOperatorAffinity> is a key-value mapping that contains a valid Kubernetes affinity specification
- <rookOperatorNodeSelector> is a key-value mapping that contains a valid Kubernetes nodeSelector specification
- <rookOperatorTolerations> is a list that contains valid Kubernetes toleration items
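For illustration only, assuming worker nodes that carry a hypothetical role=storage-node label and a node-role/storage:NoSchedule taint, the placeholders could be filled in as follows:

```yaml
rookOperatorPlacement:
  # Hypothetical example: schedule rook-ceph-operator only on nodes
  # labeled role=storage-node and tolerate a storage-only taint.
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: role
            operator: In
            values:
            - storage-node
  nodeSelector:
    kubernetes.io/os: linux
  tolerations:
  - key: node-role/storage
    operator: Exists
    effect: NoSchedule
```

The label, taint key, and nodeSelector shown here are examples; use the labels and taints defined in your own cluster.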
Wait for some time, then verify on the MOSK cluster that the changes have been applied:

```shell
kubectl -n rook-ceph get deploy rook-ceph-operator -o yaml
```
To specify rook-discover and csi-rbdplugin placement simultaneously:
On the management cluster, edit the required Cluster resource:

```shell
kubectl -n <moskClusterProjectName> edit cluster
```
Add the following parameters to the ceph-controller Helm release values:

```yaml
spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookExtraConfig:
            extraDaemonsetLabels: <labelSelector>
```
Substitute <labelSelector> with a valid Kubernetes label selector expression to place the rook-discover and csi-rbdplugin DaemonSet pods.

Wait for some time, then verify on the MOSK cluster that the changes have been applied:

```shell
kubectl -n rook-ceph get ds rook-discover -o yaml
kubectl -n rook-ceph get ds csi-rbdplugin -o yaml
```
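For example, assuming a hypothetical role=storage-node node label, the resulting values could look like:

```yaml
rookExtraConfig:
  # Hypothetical example: place both the rook-discover and
  # csi-rbdplugin DaemonSets on nodes labeled role=storage-node.
  extraDaemonsetLabels: "role=storage-node"
```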
To specify rook-discover and csi-rbdplugin placement separately:
On the management cluster, edit the required Cluster resource:

```shell
kubectl -n <moskClusterProjectName> edit cluster
```
If required, add the following parameters to the ceph-controller Helm release values:

```yaml
spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          hyperconverge:
            nodeAffinity:
              csiplugin: <labelSelector1>
              rookDiscover: <labelSelector2>
```
Substitute <labelSelector1> and <labelSelector2> with valid Kubernetes label selector expressions to place the csi-rbdplugin and rook-discover DaemonSet pods respectively. For example, "role=storage-node; discover=true".

Wait for some time, then verify on the MOSK cluster that the changes have been applied:

```shell
kubectl -n rook-ceph get ds rook-discover -o yaml
kubectl -n rook-ceph get ds csi-rbdplugin -o yaml
```
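As a sketch, assuming hypothetical role=storage-node and discover=true node labels, the separate selectors could look like:

```yaml
hyperconverge:
  nodeAffinity:
    # Hypothetical example: csi-rbdplugin runs on all storage nodes,
    # while rook-discover is further restricted to nodes that also
    # carry the discover=true label.
    csiplugin: "role=storage-node"
    rookDiscover: "role=storage-node; discover=true"
```

This form is useful when the two DaemonSets must target different node pools; otherwise, the combined extraDaemonsetLabels option is simpler.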