Verify Ceph tolerations and resources
Warning
This procedure applies to MOSK clusters that use the MiraCeph custom
resource (CR), available since MOSK 25.2 as a replacement for the unsupported
KaaSCephCluster resource. MiraCeph will be automatically migrated
to CephDeployment in MOSK 26.1. For details, see Deprecation Notes:
KaaSCephCluster API on management clusters.
For the equivalent procedure with the unsupported KaaSCephCluster CR, refer
to the following section:
After you enable Ceph resource management as described in Enable management of Ceph tolerations and resources, perform the steps below to verify that the configured tolerations, requests, or limits have been applied to the Ceph cluster.
To verify Ceph tolerations and resources:
To verify that the required tolerations are specified in the Ceph cluster, inspect the output of the following commands:
kubectl -n rook-ceph get $(kubectl -n rook-ceph get cephcluster -o name) -o jsonpath='{.spec.placement.mon.tolerations}'
kubectl -n rook-ceph get $(kubectl -n rook-ceph get cephcluster -o name) -o jsonpath='{.spec.placement.mgr.tolerations}'
kubectl -n rook-ceph get $(kubectl -n rook-ceph get cephcluster -o name) -o jsonpath='{.spec.placement.osd.tolerations}'
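If tolerations are configured, each command prints a JSON array; an empty output means no tolerations are set for that daemon. A minimal self-contained check is sketched below; the toleration key and values are illustrative assumptions, not defaults:

```shell
# Sample tolerations output (illustrative values, not from a real cluster):
tolerations='[{"key":"ceph","operator":"Exists","effect":"NoSchedule"}]'

# Quick check that the expected toleration key is present in the output:
if printf '%s' "$tolerations" | grep -q '"key":"ceph"'; then
  echo "toleration found"
else
  echo "toleration missing"
fi
```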
To verify RADOS Gateway tolerations:
kubectl -n rook-ceph get $(kubectl -n rook-ceph get cephobjectstore -o name) -o jsonpath='{.spec.gateway.placement.tolerations}'
To verify that the required resource requests or limits are specified for the Ceph mon, mgr, or osd daemons, inspect the output of the following command:
kubectl -n rook-ceph get $(kubectl -n rook-ceph get cephcluster -o name) -o jsonpath='{.spec.resources}'
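A non-empty result lists per-daemon requests and limits keyed by daemon name. The sketch below shows the general shape of such output and a minimal presence check; all names and values are illustrative assumptions:

```shell
# Sample .spec.resources output (illustrative values only):
resources='{"mgr":{"limits":{"cpu":"1","memory":"2Gi"},"requests":{"cpu":"500m","memory":"1Gi"}}}'

# Verify that resource requests are present for the mgr daemon:
if printf '%s' "$resources" | grep -q '"mgr":{.*"requests"'; then
  echo "mgr requests configured"
else
  echo "mgr requests missing"
fi
```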
To verify that the required resource requests and limits are specified for the RADOS Gateway daemons, inspect the output of the following command:
kubectl -n rook-ceph get $(kubectl -n rook-ceph get cephobjectstore -o name) -o jsonpath='{.spec.gateway.resources}'
To verify that the required resource requests or limits are specified for the Ceph OSDs of the hdd, ssd, or nvme device classes, perform the following steps:
Identify which Ceph OSDs belong to the <deviceClass> device class in question:
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd crush class ls-osd <deviceClass>
For each <osdID> obtained in the previous step, run the following command. Compare the output with the desired result.
kubectl -n rook-ceph get deploy rook-ceph-osd-<osdID> -o jsonpath='{.spec.template.spec.containers[].resources}'
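The two OSD steps above can be combined into one loop. The sketch below stubs kubectl with sample data so it runs anywhere; on a live cluster, delete the stub function and the real commands take over. The OSD IDs, device class, and resource values are illustrative assumptions:

```shell
# Stub for kubectl (illustration only; remove on a real cluster):
kubectl() {
  case "$*" in
    *"ls-osd"*) echo "0 1" ;;  # sample OSD IDs for the device class
    *) echo '{"requests":{"cpu":"1","memory":"2Gi"}}' ;;  # sample resources
  esac
}

DEVICE_CLASS=ssd  # example device class: hdd, ssd, or nvme

# List the OSDs of the device class, then print each one's resources:
for osdID in $(kubectl -n rook-ceph exec deploy/rook-ceph-tools -- \
    ceph osd crush class ls-osd "$DEVICE_CLASS"); do
  echo "osd.$osdID: $(kubectl -n rook-ceph get deploy rook-ceph-osd-$osdID \
    -o jsonpath='{.spec.template.spec.containers[].resources}')"
done
```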