Migrate the Ceph back end


Ceph uses FileStore or BlueStore as a storage back end. You can migrate the Ceph storage back end from FileStore to BlueStore and vice versa using the Ceph - backend migration pipeline.


Starting from the 2019.2.10 maintenance update, this procedure is deprecated and all Ceph OSDs should use LVM with BlueStore. Back-end migration is described in Enable the ceph-volume tool.

For earlier versions, if you are going to upgrade Ceph to Nautilus, skip this procedure as well to avoid migrating the back end twice. In this case, first apply the 2019.2.10 maintenance update and then enable ceph-volume.

To migrate the Ceph back end:

  1. In your project repository, open the cluster/ceph/osd.yml file for editing:

    1. Change the back end type and block_db or journal for every OSD disk device.
    2. Specify the size of the journal or block_db device if it resides on a different device than the storage device. The capacity of that device will be divided equally among the OSDs using it.


          bluestore_block_db_size: 10073741824
    #     journal_size: 10000
    #     filestore:
          - dev: /dev/sdh
            block_db: /dev/sdj
    #       journal: /dev/sdj

    The commented lines show the FileStore parameters. When migrating from FileStore to BlueStore, replace them with their BlueStore equivalents and remove the FileStore originals.
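    The equal-division sizing rule above can be sketched with simple shell arithmetic. The device capacity and OSD count below are assumed example values, not values from this procedure:

    ```shell
    # Hypothetical example: a dedicated 100 GiB block_db device (such as
    # /dev/sdj above) shared by 10 OSDs is divided equally, so each OSD
    # gets a 10 GiB share for its block_db partition.
    DB_DEVICE_BYTES=$((100 * 1024 * 1024 * 1024))   # total capacity of the shared device
    OSD_COUNT=10                                    # number of OSDs using that device
    BLOCK_DB_SIZE=$((DB_DEVICE_BYTES / OSD_COUNT))  # per-OSD share in bytes
    echo "bluestore_block_db_size: ${BLOCK_DB_SIZE}"
    ```

    The resulting value (10737418240 for this example) is what you would place in bluestore_block_db_size.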

  2. Log in to the Jenkins web UI.

  3. Open the Ceph - backend migration pipeline.

  4. Specify the following parameters:

    Parameter                Description and values
    SALT_MASTER_CREDENTIALS  The Salt Master credentials to use for the connection. Defaults to salt.
    SALT_MASTER_URL          The Salt Master node host URL with the salt-api port. Defaults to the jenkins_salt_api_url parameter.
    ADMIN_HOST               Add cmn01* as the Ceph cluster node with the admin keyring.
    TARGET                   Add the Salt target name of the Ceph OSD node or nodes. For example, osd005* to migrate one OSD host or osd* to migrate all OSD hosts.
    OSD                      Add * to target all OSD disks on all TARGET OSD hosts, or a comma-separated list of Ceph OSD IDs, for example 1,2, if TARGET matches a single OSD host.
    WAIT_FOR_HEALTHY         Verify that this parameter is selected, as it enables the Ceph health check within the pipeline.
    PER_OSD_CONTROL          Select to verify the Ceph status after the migration of each OSD disk.
    PER_OSD_HOST_CONTROL     Select to verify the Ceph status after the migration of each whole OSD host.
    CLUSTER_FLAGS            Add a comma-separated list of Ceph flags to apply during the migration. Tested with an empty value.
    ORIGIN_BACKEND           Specify the Ceph back end used before the migration.


    The PER_OSD_CONTROL and PER_OSD_HOST_CONTROL options provide granular control during the migration: after each OSD disk or OSD host is migrated and verified, you can decide whether to continue or abort.

  5. Click Deploy.

The Ceph - backend migration pipeline workflow:

  1. Set back-end migration flags.
  2. Perform the following for each targeted OSD disk:
    1. Mark the Ceph OSD as out.
    2. Stop the Ceph OSD service.
    3. Remove the Ceph OSD authentication key.
    4. Remove the Ceph OSD from the Ceph cluster.
    5. Remove block_db, block_wal, or journal of the OSD.
  3. Run the ceph.osd state to redeploy the OSD with the desired back end.
  4. Unset the back-end migration flags.
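The per-OSD steps above correspond to standard Ceph administration commands. The following is an illustrative sketch only, assuming a hypothetical OSD ID 1, the noout flag as an example cluster flag, and a Salt target of osd005* — the pipeline automates all of this, so do not run these commands alongside a running pipeline:

```shell
#!/bin/sh
# Manual sketch of the pipeline's per-OSD workflow (requires a live Ceph
# cluster and root access on the OSD node; OSD ID 1 is a placeholder).
OSD_ID=1

# 1. Set back-end migration flags (noout is an example flag)
ceph osd set noout

# 2.1 Mark the Ceph OSD as out
ceph osd out ${OSD_ID}

# 2.2 Stop the Ceph OSD service
systemctl stop ceph-osd@${OSD_ID}

# 2.3 Remove the Ceph OSD authentication key
ceph auth del osd.${OSD_ID}

# 2.4 Remove the Ceph OSD from the Ceph cluster
ceph osd rm ${OSD_ID}

# 2.5 Remove the old block_db, block_wal, or journal partitions
#     on the shared device (device-specific, not shown here)

# 3. Redeploy the OSD with the desired back end via the ceph.osd state
salt 'osd005*' state.apply ceph.osd

# 4. Unset the back-end migration flags
ceph osd unset noout
```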


During the pipeline execution, a check is performed to verify whether the back end type for an OSD disk differs from the one specified in ORIGIN_BACKEND. If the back end differs, Jenkins does not apply any changes to that OSD disk.
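The back-end type that this check inspects can also be queried manually from the OSD metadata, for example to confirm the result after the migration. A sketch, assuming a live cluster and a hypothetical OSD ID 1:

```shell
# Print the object store type of OSD 1; a migrated OSD reports
# an osd_objectstore value of "bluestore" (or "filestore").
ceph osd metadata 1 | grep osd_objectstore
```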