Specify placement of Ceph cluster daemons
Warning

This procedure is valid for MOSK clusters that use the deprecated KaaSCephCluster custom resource (CR) instead of the MiraCeph CR, which is available since MOSK 25.2 as the new Ceph configuration entry point. For the equivalent procedure with the MiraCeph CR, refer to the corresponding section of this guide.
If you need to configure the placement of Rook daemons on nodes, you can add extra values to the ceph-controller Helm release in the providerSpec section of the Cluster resource.
The procedures in this section describe how to specify the placement of rook-ceph-operator, rook-discover, and csi-rbdplugin.
To specify rook-ceph-operator placement:
On the management cluster, edit the Cluster resource of the target MOSK cluster:

kubectl -n <moskClusterProjectName> edit cluster
Add the following parameters to the ceph-controller Helm release values:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookOperatorPlacement:
            affinity: <rookOperatorAffinity>
            nodeSelector: <rookOperatorNodeSelector>
            tolerations: <rookOperatorTolerations>

where:

- <rookOperatorAffinity> is a key-value mapping that contains a valid Kubernetes affinity specification
- <rookOperatorNodeSelector> is a key-value mapping that contains a valid Kubernetes nodeSelector specification
- <rookOperatorTolerations> is a list that contains valid Kubernetes toleration items
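For illustration, a minimal sketch of filled-in values, assuming worker nodes carry the hypothetical label role=storage-node and a hypothetical ceph taint (substitute your own labels and taints):

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookOperatorPlacement:
            # Schedule the operator only on nodes with this (hypothetical) label
            nodeSelector:
              role: storage-node
            # Allow scheduling on nodes tainted with the (hypothetical) ceph taint
            tolerations:
            - key: ceph
              operator: Exists
              effect: NoSchedule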
Wait for some time and verify on the MOSK cluster that the changes have been applied:
kubectl -n rook-ceph get deploy rook-ceph-operator -o yaml
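As a convenience, you can narrow the output to only the scheduling fields instead of inspecting the full manifest, for example:

kubectl -n rook-ceph get deploy rook-ceph-operator -o jsonpath='{.spec.template.spec.nodeSelector}'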
To specify rook-discover and csi-rbdplugin placement simultaneously:
On the management cluster, edit the desired Cluster resource:

kubectl -n <moskClusterProjectName> edit cluster
Add the following parameters to the ceph-controller Helm release values:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookExtraConfig:
            extraDaemonsetLabels: <labelSelector>
Substitute <labelSelector> with a valid Kubernetes label selector expression to place the rook-discover and csi-rbdplugin DaemonSet pods.

Wait for some time and verify on the MOSK cluster that the changes have been applied:
kubectl -n rook-ceph get ds rook-discover -o yaml
kubectl -n rook-ceph get ds csi-rbdplugin -o yaml
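For example, assuming the target nodes carry the hypothetical label role=storage-node, the values could look as follows:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          rookExtraConfig:
            # Place both DaemonSets on nodes matching this (hypothetical) label
            extraDaemonsetLabels: "role=storage-node"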
To specify rook-discover and csi-rbdplugin placement separately:
On the management cluster, edit the desired Cluster resource:

kubectl -n <moskClusterProjectName> edit cluster
If required, add the following parameters to the ceph-controller Helm release values:

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          hyperconverge:
            nodeAffinity:
              csiplugin: <labelSelector1>
              rookDiscover: <labelSelector2>
Substitute <labelSelector1> and <labelSelector2> with valid Kubernetes label selector expressions to place the csi-rbdplugin and rook-discover DaemonSet pods respectively. For example, "role=storage-node; discover=true".

Wait for some time and verify on the MOSK cluster that the changes have been applied:
kubectl -n rook-ceph get ds rook-discover -o yaml
kubectl -n rook-ceph get ds csi-rbdplugin -o yaml
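Putting it together, a sketch that reuses the example selectors above (the labels are placeholders for your own node labels, and the pairing shown is only illustrative):

spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values:
          hyperconverge:
            nodeAffinity:
              # Place csi-rbdplugin pods on nodes matching this (hypothetical) label
              csiplugin: "role=storage-node"
              # Place rook-discover pods on nodes matching both (hypothetical) labels
              rookDiscover: "role=storage-node; discover=true"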