Enable the ceph-volume tool

Note

This feature is available starting from the MCP 2019.2.7 maintenance update. Before using the feature, follow the steps described in Apply maintenance updates.

Warning

  • Prior to the 2019.2.10 maintenance update, ceph-volume is available as technical preview only.
  • Starting from the 2019.2.10 maintenance update, the ceph-volume tool is fully supported and must be enabled prior to upgrading from Ceph Luminous to Nautilus. The ceph-disk tool is deprecated.

This section describes how to enable the ceph-volume command-line tool, which deploys and inspects Ceph OSDs using Logical Volume Management (LVM) to provision block devices. The main difference between ceph-disk and ceph-volume is that ceph-volume does not automatically partition the disks used for block.db. Instead, the partitioning is defined explicitly as part of the procedure below.
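
For example, after Ceph OSDs have been deployed through ceph-volume, you can inspect the resulting LVM-based OSDs directly on the OSD nodes. A minimal sketch, assuming the Salt Master can reach the Ceph OSD nodes through the same I@ceph:osd compound target that is used in the procedure below:

    salt -C 'I@ceph:osd' cmd.run 'ceph-volume lvm list'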

To enable the ceph-volume tool:

  1. Open your Git project repository with the Reclass model on the cluster level.

  2. Open ceph/osd.yml for editing.

  3. Set the lvm_enabled parameter to True:

    parameters:
      ceph:
        osd:
          lvm_enabled: True
    
  4. For each Ceph OSD, add the definition of a bare block.db partition and specify its partition number in db_partition. For example:

    parameters:
      ceph:
        osd:
          backend:
            bluestore:
              disks:
                - dev: /dev/vdc
                  block_db: /dev/vdd
                  db_partition: 1
      linux:
        storage:
          disk:
            /dev/vdd:
              type: gpt
              partitions:
                - size: 10000
    

    Note

    If your cluster uses a custom disk layout, these definitions may already exist and be configured differently. In this case, verify that the db_partition number is defined.
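
    To double-check the block.db disk layout on a Ceph OSD node, you can list the partitions of the device. A minimal sketch, assuming the /dev/vdd device from the example above and that the partition has already been created on the node, for example, by the linux.storage state:

    salt -C 'I@ceph:osd' cmd.run 'lsblk -o NAME,SIZE,TYPE /dev/vdd'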

  5. Apply the changes:

    salt -C 'I@ceph:osd' saltutil.refresh_pillar
    

    Once done, all new Ceph OSDs will be deployed using ceph-volume instead of ceph-disk. The existing Ceph OSDs will cause errors during common operations and must either be redeployed or defined as legacy_disks as described below.
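
    To verify that the new pillar data is available on the Ceph OSD nodes, you can query it from the Salt Master. A minimal sketch, assuming the same I@ceph:osd targeting as above:

    salt -C 'I@ceph:osd' pillar.get ceph:osd:lvm_enabled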

  6. Select from the following options:

    • Redeploy Ceph OSD nodes or daemons:

      Warning

      Before redeploying a node, verify that all pools have at least three copies. Redeploy only one node at a time. Redeploying multiple nodes may cause irreversible data loss.
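
      One way to verify the number of copies of each pool before redeploying a node is to inspect the replicated size reported by the cluster. A minimal sketch, assuming you run the command on a node that has an admin keyring, such as a Ceph Monitor node:

      ceph osd pool ls detail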

    • Define existing Ceph OSDs as legacy_disks by specifying the legacy_disks pillar for each Ceph OSD in ceph/osd.yml. For example:

      parameters:
        ceph:
          osd:
            legacy_disks:
              0:
                class: hdd
                weight: 0.048691
                dev: /dev/vdc
      

      Note

      Use the legacy_disks option only as a temporary solution that allows for common cluster management during the transition period.
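
      The class, weight, and dev values for the legacy_disks pillar can usually be collected from the cluster itself. A minimal sketch, assuming you run the first command on a node with an admin keyring and the second one directly on the Ceph OSD node; the OSD ID and device are the ones from the example above:

      ceph osd tree       # CRUSH class and weight of each OSD ID
      ceph-disk list      # device that backs each existing ceph-disk based OSD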