After a physical disk replacement, you can use Rook to redeploy a failed
Ceph OSD by restarting the Rook operator, which triggers
reconfiguration of the management or managed cluster.
To redeploy a failed Ceph OSD:
Log in to a local machine running Ubuntu 18.04 where kubectl
is installed.
Obtain and export kubeconfig
of the required management or managed
cluster as described in Connect to a Mirantis Container Cloud cluster.
Identify the failed Ceph OSD ID:
ceph osd tree
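On clusters with many OSDs, you can narrow the output to the failed ones. A minimal sketch, assuming the default plain-text `ceph osd tree` layout where a failed OSD's STATUS column reads `down`:

```shell
# Show only tree lines whose STATUS column is "down" (illustrative;
# the exact column layout may vary between Ceph releases).
ceph osd tree | grep -w down
```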
Remove the Ceph OSD deployment from the management or managed cluster:
kubectl delete deployment -n rook-ceph rook-ceph-osd-<ID>
Connect to the terminal of the ceph-tools
pod:
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod \
-l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash
Remove the failed Ceph OSD from the Ceph cluster:
ceph osd purge osd.<ID> --yes-i-really-mean-it
Replace the failed disk.
Restart the Rook operator:
kubectl delete pod $(kubectl -n rook-ceph get pod -l "app=rook-ceph-operator" \
-o jsonpath='{.items[0].metadata.name}') -n rook-ceph
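The steps above can be chained into a single helper. This is a hedged sketch, not an official Mirantis or Rook tool: the `redeploy_osd` function and its variable names are assumptions for illustration. It presumes that kubectl already targets the affected cluster, that the ceph-tools pod is running, and that the physical disk has been replaced before the function is called.

```shell
#!/bin/sh
set -eu

# Redeploy a failed Ceph OSD after its disk has been replaced.
redeploy_osd() {
    osd_id="$1"
    ns="rook-ceph"

    # Remove the Ceph OSD deployment from the cluster.
    kubectl delete deployment -n "$ns" "rook-ceph-osd-${osd_id}"

    # Purge the failed OSD from the Ceph cluster via the ceph-tools pod.
    tools_pod=$(kubectl -n "$ns" get pod -l "app=rook-ceph-tools" \
        -o jsonpath='{.items[0].metadata.name}')
    kubectl -n "$ns" exec "$tools_pod" -- \
        ceph osd purge "osd.${osd_id}" --yes-i-really-mean-it

    # Restart the Rook operator so it recreates the OSD on the new disk.
    operator_pod=$(kubectl -n "$ns" get pod -l "app=rook-ceph-operator" \
        -o jsonpath='{.items[0].metadata.name}')
    kubectl delete pod -n "$ns" "$operator_pod"
}

# Example (after the disk backing osd.3 has been physically replaced):
# redeploy_osd 3
```

Note that the purge step is destructive; keep the interactive procedure above if you prefer to verify each command's result before running the next one.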