The Mirantis Ceph controller simplifies Ceph cluster management by automating LCM operations. To modify Ceph components, you only need to update the MiraCeph custom resource (CR). Once you update the MiraCeph CR, the Ceph controller automatically adds, removes, or reconfigures Ceph nodes as required.
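If you want to inspect the MiraCeph CR that the Ceph controller manages, a read-only check similar to the following may help. The ceph-lcm-mirantis namespace is an assumption based on the default Ceph controller deployment:
kubectl -n ceph-lcm-mirantis get miraceph -o yaml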
Note

When adding a Ceph node with the Ceph Monitor role, if any issues occur with the Ceph Monitor, rook-ceph removes it and adds a new Ceph Monitor instead, named using the next alphabetic character in order. Therefore, the Ceph Monitor names may not follow alphabetical order. For example, a, b, d, instead of a, b, c.
To add, remove, or reconfigure Ceph nodes on a management or managed cluster:
To modify Ceph OSDs, verify that the manageOsds parameter is set to true in the KaasCephCluster CR as described in Enable automated Ceph LCM.
Log in to a local machine running Ubuntu 18.04 where kubectl is installed.
Obtain and export kubeconfig of the management cluster as described in Connect to a Mirantis Container Cloud cluster.
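For example, assuming the management cluster kubeconfig has been downloaded to a local file (the path below is a placeholder), export it and verify connectivity:
export KUBECONFIG=~/Downloads/kubeconfig-mgmt.yaml
kubectl get nodes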
Open the KaasCephCluster CR for editing.
Choose from the following options:
For a management cluster:
kubectl edit kaascephcluster
For a managed cluster:
kubectl edit kaascephcluster -n <managedClusterProjectName>
Substitute <managedClusterProjectName> with the corresponding value.
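For illustration, with a hypothetical project named managed-ns, the command becomes:
kubectl edit kaascephcluster -n managed-ns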
In the nodes section, specify or remove the parameters for a Ceph OSD as required. For the parameter descriptions, see OSD Configuration Settings.
For example:
nodes:
  kaas-mgmt-node-5bgk6:
    roles:
    - mon
    - mgr
    storageDevices:
    - config:
        storeType: bluestore
      name: sdb
Note

To use a new Ceph node for a Ceph Monitor or Ceph Manager deployment, also specify the roles parameter.
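For a node that should run Ceph OSDs only, the roles list can be omitted. A hypothetical example, where the node name and device name are placeholders:
nodes:
  kaas-node-storage-1:
    storageDevices:
    - config:
        storeType: bluestore
      name: sdc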
If you are making changes for your managed cluster, obtain and export kubeconfig of the managed cluster as described in Connect to a Mirantis Container Cloud cluster. Otherwise, skip this step.
Monitor the status of your Ceph cluster deployment. For example:
kubectl -n rook-ceph get pods
kubectl -n ceph-lcm-mirantis logs ceph-controller-78c95fb75c-dtbxk
kubectl -n rook-ceph logs rook-ceph-operator-56d6b49967-5swxr
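To follow only the Ceph OSD pods while they are created or restarted, the standard Rook label can be used as a filter. This assumes the default rook-ceph pod labels:
kubectl -n rook-ceph get pods -l app=rook-ceph-osd -w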
Connect to the terminal of the ceph-tools pod:
kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod \
  -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash
Verify that the Ceph node has been successfully added, removed, or reconfigured:
Verify that the Ceph cluster status is healthy:
ceph status
Example of a positive system response:
  cluster:
    id:     0868d89f-0e3a-456b-afc4-59f06ed9fbf7
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 20h)
    mgr: a(active, since 20h)
    osd: 9 osds: 9 up (since 20h), 9 in (since 2d)

  data:
    pools:   1 pools, 32 pgs
    objects: 0 objects, 0 B
    usage:   9.1 GiB used, 231 GiB / 240 GiB avail
    pgs:     32 active+clean
Verify that the status of the Ceph OSDs is up:
ceph osd tree
Example of a positive system response:
ID  CLASS  WEIGHT   TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         0.23424  root default
-3         0.07808      host osd1
 1    hdd  0.02930          osd.1       up   1.00000  1.00000
 3    hdd  0.01949          osd.3       up   1.00000  1.00000
 6    hdd  0.02930          osd.6       up   1.00000  1.00000
-15        0.07808      host osd2
 2    hdd  0.02930          osd.2       up   1.00000  1.00000
 5    hdd  0.01949          osd.5       up   1.00000  1.00000
 8    hdd  0.02930          osd.8       up   1.00000  1.00000
-9         0.07808      host osd3
 0    hdd  0.02930          osd.0       up   1.00000  1.00000
 4    hdd  0.01949          osd.4       up   1.00000  1.00000
 7    hdd  0.02930          osd.7       up   1.00000  1.00000
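If any OSD is reported as down, narrowing the output to problematic daemons can speed up troubleshooting. The states filter for this command is available in recent Ceph releases:
ceph osd tree down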