Add, remove, or reconfigure Ceph nodes

Caution

Ceph LCM automated operations such as Ceph OSD or Ceph node removal are temporarily disabled due to the Storage known issue. To remove Ceph OSDs manually, see Remove Ceph OSD manually.

The Mirantis Ceph controller simplifies Ceph cluster management by automating LCM operations. To modify Ceph components, you only need to update the MiraCeph custom resource (CR). Once you update the MiraCeph CR, the Ceph controller automatically adds, removes, or reconfigures Ceph nodes as required.
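
To inspect the CR that the Ceph controller reconciles before changing anything, you can use standard kubectl discovery commands. The sketch below is only illustrative; substitute the namespace and CR name that the first commands report for your cluster:

  # List the Ceph-related custom resources registered on the cluster
  kubectl api-resources | grep -i ceph

  # The miraceph resource name is an assumption; confirm it in the output above
  kubectl get miraceph --all-namespaces
  kubectl -n <namespace> get miraceph <name> -o yaml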

Note

When you add a Ceph node with the Ceph Monitor role and any issues occur with that Ceph Monitor, rook-ceph removes it and adds a new Ceph Monitor instead, named with the next letter in alphabetical order. Therefore, the Ceph Monitor names may not be alphabetically consecutive, for example, a, b, d instead of a, b, c.
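
With the kubeconfig of the affected cluster exported, you can check which Ceph Monitor names are currently in use by listing the monitor pods in the Rook namespace. This assumes the standard Rook label for monitor pods; verify the label on your cluster if the command returns no pods:

  # app=rook-ceph-mon is the standard Rook label; verify it on your cluster
  kubectl -n rook-ceph get pods -l app=rook-ceph-mon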

To add, remove, or reconfigure Ceph nodes on a managed cluster:

  1. Log in to a local machine running Ubuntu 18.04 where kubectl is installed.

  2. Obtain and export kubeconfig of the management cluster as described in Connect to a Mirantis Container Cloud cluster.
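
    For example, assuming the management cluster kubeconfig was downloaded to a local file:

    # Hypothetical path; use the kubeconfig file you downloaded
    export KUBECONFIG=~/Downloads/kubeconfig-mgmt-cluster.yml
    # Quick sanity check that kubectl now points at the intended cluster
    kubectl get nodes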

  3. Open the KaaSCephCluster CR of a managed cluster:

    kubectl edit kaascephcluster -n <managedClusterProjectName>
    

    Substitute <managedClusterProjectName> with the corresponding value.
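
    If you are not sure which project (namespace) contains the KaaSCephCluster CR of your managed cluster, you can first list the CRs in all namespaces using standard kubectl options:

    kubectl get kaascephcluster --all-namespaces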

  4. In the nodes section, specify or remove the parameters for a Ceph OSD as required. For the parameters description, see OSD Configuration Settings.

    For example:

    nodes:
      kaas-mgmt-node-5bgk6:
        roles:
        - mon
        - mgr
        storageDevices:
        - config:
            storeType: bluestore
          name: sdb
    

    Note

    • To use a new Ceph node for a Ceph Monitor or Ceph Manager deployment, also specify the roles parameter. A node that provides only Ceph OSDs does not need roles; see the example after these notes.

    • Reducing the number of Ceph Monitors is not supported and causes the removal of Ceph Monitor daemons from random nodes.

    • Removal of the mgr role in the nodes section of the KaaSCephCluster CR does not remove Ceph Managers. To remove a Ceph Manager from a node, remove the mgr role from that node in the nodes spec and manually delete the corresponding mgr pod in the Rook namespace.
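
    For reference, the sketch below shows a nodes entry for a node that provides only Ceph OSDs and therefore omits the roles parameter. The node and device names are hypothetical; to remove such a node from the Ceph cluster, delete its entire entry from the nodes section:

    nodes:
      kaas-node-worker-3:     # hypothetical node name
        storageDevices:
        - config:
            storeType: bluestore
          name: sdc           # hypothetical device name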

  5. If you are making changes to your managed cluster, obtain and export kubeconfig of the managed cluster as described in Connect to a Mirantis Container Cloud cluster. Otherwise, skip this step.

  6. Monitor the status of your Ceph cluster deployment. For example:

    kubectl -n rook-ceph get pods
    
    kubectl -n ceph-lcm-mirantis logs ceph-controller-78c95fb75c-dtbxk
    
    kubectl -n rook-ceph logs rook-ceph-operator-56d6b49967-5swxr
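
    To follow only the Ceph OSD pods while the Ceph controller applies your changes, you can filter and watch them. Verify the label on your cluster if no pods are listed:

    # app=rook-ceph-osd is the standard Rook label for OSD pods
    kubectl -n rook-ceph get pods -l app=rook-ceph-osd -w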
    
  7. Connect to the terminal of the ceph-tools pod:

    kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod \
    -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- bash
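
    Alternatively, if the toolbox runs as the standard rook-ceph-tools Deployment (verify with kubectl -n rook-ceph get deploy), you can run individual commands without opening an interactive shell:

    # Assumes the toolbox Deployment is named rook-ceph-tools
    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status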
    
  8. Verify that the Ceph node has been successfully added, removed, or reconfigured:

    1. Verify that the Ceph cluster status is healthy:

      ceph status
      

      Example of a positive system response:

      cluster:
        id:     0868d89f-0e3a-456b-afc4-59f06ed9fbf7
        health: HEALTH_OK
      
      services:
        mon: 3 daemons, quorum a,b,c (age 20h)
        mgr: a(active, since 20h)
        osd: 9 osds: 9 up (since 20h), 9 in (since 2d)
      
      data:
        pools:   1 pools, 32 pgs
        objects: 0 objects, 0 B
        usage:   9.1 GiB used, 231 GiB / 240 GiB avail
        pgs:     32 active+clean
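
      If the cluster reports HEALTH_WARN or HEALTH_ERR instead, the standard ceph health detail command prints the reason for each failing health check:

      ceph health detail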
      
    2. Verify that the status of the Ceph OSDs is up:

      ceph osd tree
      

      Example of a positive system response:

      ID  CLASS WEIGHT  TYPE NAME                   STATUS REWEIGHT PRI-AFF
      -1       0.23424 root default
      -3       0.07808             host osd1
       1   hdd 0.02930                 osd.1           up  1.00000 1.00000
       3   hdd 0.01949                 osd.3           up  1.00000 1.00000
       6   hdd 0.02930                 osd.6           up  1.00000 1.00000
      -15       0.07808             host osd2
       2   hdd 0.02930                 osd.2           up  1.00000 1.00000
       5   hdd 0.01949                 osd.5           up  1.00000 1.00000
       8   hdd 0.02930                 osd.8           up  1.00000 1.00000
      -9       0.07808             host osd3
       0   hdd 0.02930                 osd.0           up  1.00000 1.00000
       4   hdd 0.01949                 osd.4           up  1.00000 1.00000
       7   hdd 0.02930                 osd.7           up  1.00000 1.00000
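
      Optionally, verify that data is distributed across the expected hosts and that no stale entries remain after a removal. The standard ceph osd df tree command shows per-OSD utilization grouped by host:

      ceph osd df tree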