Add, remove, or reconfigure Ceph OSDs

Mirantis Ceph Controller simplifies Ceph cluster management by automating LCM operations. This section describes how to add, remove, or reconfigure Ceph OSDs.

Add a Ceph OSD on a managed cluster

  1. Open the KaaSCephCluster CR of a managed cluster for editing:

    kubectl edit kaascephcluster -n <managedClusterProjectName>
    

    Substitute <managedClusterProjectName> with the corresponding value.

  2. In the nodes.<machineName>.storageDevices section, specify the parameters for a Ceph OSD as required. For the parameters description, see Node parameters.

    For example:

    nodes:
      kaas-mgmt-node-5bgk6:
        ...
        storageDevices:
        - config:
            deviceClass: hdd
          name: sdb
    
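    The config section can also carry other options described in Node parameters. For example, if your release supports the metadataDevice option, a Ceph OSD with a dedicated metadata device might be specified as follows. This is a sketch only; the device names and the metadataDevice option are illustrative and must match what Node parameters lists for your release:

    nodes:
      kaas-mgmt-node-5bgk6:
        ...
        storageDevices:
        - config:
            deviceClass: ssd
            metadataDevice: <metadataDeviceName>
          name: sdc
    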
  3. Verify that the Ceph OSD on the specified node is successfully deployed. The fullClusterInfo section should not contain any issues.

    kubectl -n <managedClusterProjectName> get kaascephcluster -o yaml
    

    For example:

    status:
      fullClusterInfo:
        cephDetails:
          cephDeviceMapping:
            kaas-node-d4aac64d-1721-446c-b7df-e351c3025591:
              "10": "sdb"
        daemonsStatus:
          ...
          osd:
            running: '3/3 running: 3 up, 3 in'
            status: Ok
    
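    To inspect only the OSD daemons status instead of the full resource, you can use a JSONPath query. The following command assumes that the namespace contains a single KaaSCephCluster object:

    kubectl -n <managedClusterProjectName> get kaascephcluster -o jsonpath='{.items[0].status.fullClusterInfo.daemonsStatus.osd}{"\n"}'
    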
  4. Verify that the corresponding rook-ceph-osd pod is running on the managed cluster:

    kubectl -n rook-ceph get pod -l app=rook-ceph-osd -o wide | grep <machineName>
    
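    Optionally, if the rook-ceph-tools (Ceph toolbox) deployment is available on the managed cluster, you can also verify from the Ceph side that the new Ceph OSD is up and in:

    kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph osd tree
    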

Remove a Ceph OSD from a managed cluster

Note

Ceph OSD removal requires the use of a KaaSCephOperationRequest CR. For a workflow overview and a description of the spec and phases, see High-level workflow of Ceph OSD or node removal.

  1. Open the KaaSCephCluster CR of a managed cluster for editing:

    kubectl edit kaascephcluster -n <managedClusterProjectName>
    

    Substitute <managedClusterProjectName> with the corresponding value.

  2. Remove the required Ceph OSD specification from the spec.cephClusterSpec.nodes.<machineName>.storageDevices list.

    For example:

    spec:
      cephClusterSpec:
        nodes:
          <machineName>:
            ...
            storageDevices:
            - config: # remove the entire item from the storageDevices list
                deviceClass: hdd
              name: sdb
    
  3. Create a YAML template for the KaaSCephOperationRequest CR. For example:

    apiVersion: kaas.mirantis.com/v1alpha1
    kind: KaaSCephOperationRequest
    metadata:
      name: remove-osd-<machineName>-sdb
      namespace: <managedClusterProjectName>
    spec:
      osdRemove:
        nodes:
          <machineName>:
            cleanupByDevice:
            - name: sdb
      kaasCephCluster:
        name: <kaasCephClusterName>
        namespace: <managedClusterProjectName>
    

    Substitute <managedClusterProjectName> with the corresponding cluster namespace and <kaasCephClusterName> with the corresponding KaaSCephCluster name.

    Note

    If the storageDevice item was specified with a by-path device path, specify the path parameter in the cleanupByDevice section instead of name.
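
    For example, if the device was originally specified with a by-path identifier:

    spec:
      osdRemove:
        nodes:
          <machineName>:
            cleanupByDevice:
            - path: <deviceByPath>
    

    Substitute <deviceByPath> with the same by-path identifier that was used in the storageDevices item.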

  4. Apply the template on the management cluster in the corresponding namespace:

    kubectl apply -f remove-osd-<machineName>-sdb.yaml
    
  5. Verify that the corresponding request has been created:

    kubectl get kaascephoperationrequest remove-osd-<machineName>-sdb -n <managedClusterProjectName>
    
  6. Verify that the removeInfo section appeared in the KaaSCephOperationRequest CR status:

    kubectl -n <managedClusterProjectName> get kaascephoperationrequest remove-osd-<machineName>-sdb -o yaml
    

    Example of system response:

    status:
      childNodesMapping:
        kaas-node-d4aac64d-1721-446c-b7df-e351c3025591: <machineName>
      osdRemoveStatus:
        removeInfo:
          cleanUpMap:
            kaas-node-d4aac64d-1721-446c-b7df-e351c3025591:
              osdMapping:
                "10":
                  deviceMapping:
                    sdb:
                      path: "/dev/disk/by-path/pci-0000:00:1t.9"
                      partition: "/dev/ceph-b-vg_sdb/osd-block-b-lv_sdb"
                      type: "block"
                      class: "hdd"
                      zapDisk: true
    
  7. Verify that the cleanUpMap section matches the required removal and wait for the ApproveWaiting phase to appear in status:

    kubectl -n <managedClusterProjectName> get kaascephoperationrequest remove-osd-<machineName>-sdb -o yaml
    

    Example of system response:

    status:
      phase: ApproveWaiting
    
  8. Edit the KaaSCephOperationRequest CR and set the approve flag to true:

    kubectl -n <managedClusterProjectName> edit kaascephoperationrequest remove-osd-<machineName>-sdb
    

    For example:

    spec:
      osdRemove:
        approve: true
    
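    Alternatively, you can set the flag non-interactively with a single patch command:

    kubectl -n <managedClusterProjectName> patch kaascephoperationrequest remove-osd-<machineName>-sdb --type merge -p '{"spec":{"osdRemove":{"approve":true}}}'
    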
  9. Review the processing status of the KaaSCephOperationRequest resource. The key parameters to inspect are as follows (an example query command is provided after the list):

    • status.phase - the current state of request processing

    • status.messages - the description of the current phase

    • status.conditions - full history of request processing before the current phase

    • status.removeInfo.issues and status.removeInfo.warnings - error and warning messages that occurred during request processing
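
    For example, to print only the current phase of the request:

    kubectl -n <managedClusterProjectName> get kaascephoperationrequest remove-osd-<machineName>-sdb -o jsonpath='{.status.phase}{"\n"}'
    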

  10. Verify that the KaaSCephOperationRequest has been completed. For example:

    status:
      phase: Completed # or CompletedWithWarnings if there are non-critical issues
    
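    If your kubectl version supports JSONPath wait conditions (v1.23 or later), you can block until the request completes instead of polling manually. Note that this waits only for the Completed phase and times out if the request finishes with CompletedWithWarnings:

    kubectl -n <managedClusterProjectName> wait kaascephoperationrequest/remove-osd-<machineName>-sdb --for=jsonpath='{.status.phase}'=Completed --timeout=30m
    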
  11. Remove the device cleanup jobs:

    kubectl delete jobs -n ceph-lcm-mirantis -l app=miraceph-cleanup-disks
    
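    To confirm that the cleanup jobs are gone, list the jobs with the same label. The command should report that no resources were found:

    kubectl get jobs -n ceph-lcm-mirantis -l app=miraceph-cleanup-disks
    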

Reconfigure a Ceph OSD on a managed cluster

There is no hot reconfiguration procedure for existing Ceph OSDs. To reconfigure an existing Ceph OSD, follow the steps below:

  1. Remove a Ceph OSD from the Ceph cluster as described in Remove a Ceph OSD from a managed cluster.

  2. Add the same Ceph OSD but with a modified configuration as described in Add a Ceph OSD on a managed cluster.