Ceph uses FileStore or BlueStore as a storage back end. You can migrate the Ceph storage back end from FileStore to BlueStore and vice versa using the Ceph - backend migration pipeline.
Note
Starting from the 2019.2.10 maintenance update, this procedure is deprecated and all Ceph OSDs should use LVM with BlueStore. Back-end migration is described in Enable the ceph-volume tool.
For earlier versions, if you are going to upgrade Ceph to Nautilus, also skip this procedure to avoid a double migration of the back end. In this case, first apply the 2019.2.10 maintenance update and then enable ceph-volume as well.
To migrate the Ceph back end:
In your project repository, open the cluster/ceph/osd.yml file for editing:
Specify the block_db or journal device for every OSD disk device.
Specify the size of the journal or block_db device if it resides on a device other than the storage device. The device storage will be divided equally among the OSDs using it.
Example:
parameters:
  ceph:
    osd:
      bluestore_block_db_size: 10073741824
      #  journal_size: 10000
      backend:
        #  filestore:
        bluestore:
          disks:
          - dev: /dev/sdh
            block_db: /dev/sdj
            #  journal: /dev/sdj
Where the commented lines show the FileStore settings that must be replaced and removed when migrating from FileStore to BlueStore.
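For reference, a minimal sketch of how the same stanza might look before the migration, while the OSDs still use FileStore, reconstructed from the commented lines above (the same /dev/sdh data disk and /dev/sdj journal device are assumed):

    parameters:
      ceph:
        osd:
          journal_size: 10000
          backend:
            filestore:
              disks:
              - dev: /dev/sdh
                journal: /dev/sdj

Because the journal or block_db device is divided equally among the OSDs that use it, a /dev/sdj of, for example, 30 GB shared by three OSDs yields roughly 10 GB per OSD.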
Log in to the Jenkins web UI.
Open the Ceph - backend migration pipeline.
Specify the following parameters:
| Parameter | Description and values |
|---|---|
| SALT_MASTER_CREDENTIALS | The Salt Master credentials to use for the connection, defaults to salt. |
| SALT_MASTER_URL | The Salt Master node host URL with the salt-api port, defaults to the jenkins_salt_api_url parameter. For example, http://172.18.170.27:6969. |
| ADMIN_HOST | Add cmn01* as the Ceph cluster node with the admin keyring. |
| TARGET | Add the Salt target name of the Ceph OSD node(s). For example, osd005* to migrate one OSD host, or osd* to migrate all OSD hosts. |
| OSD | Add * to target all OSD disks on all TARGET OSD hosts, or a comma-separated list of Ceph OSDs if targeting just one OSD host by TARGET. For example, 1,2. |
| WAIT_FOR_HEALTHY | Verify that this parameter is selected, as it enables the Ceph health check within the pipeline. |
| PER_OSD_CONTROL | Select to verify the Ceph status after the migration of each OSD disk. |
| PER_OSD_HOST_CONTROL | Select to verify the Ceph status after the migration of the whole OSD host. |
| CLUSTER_FLAGS | Add a comma-separated list of flags to apply during the migration procedure. Tested with an empty value. |
| ORIGIN_BACKEND | Specify the Ceph back end used before the migration. |
Note
The PER_OSD_CONTROL and PER_OSD_HOST_CONTROL options provide granular control during the migration to verify each OSD disk after its migration. You can decide whether to continue or abort.
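Before clicking Deploy, it can help to confirm that the targets resolve to the intended nodes. A minimal sketch, assuming the salt CLI is available on the Salt Master node and using the example target names from the table above:

    # Confirm that TARGET matches only the intended Ceph OSD nodes
    salt 'osd005*' test.ping

    # Confirm that ADMIN_HOST can reach the cluster with the admin keyring
    salt 'cmn01*' cmd.run 'ceph health'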
Click Deploy.
The Ceph - backend migration pipeline workflow:
Mark the Ceph OSD out.
Remove the block_db, block_wal, or journal of the OSD.
Run the ceph.osd state to deploy the OSD with the desired back end.
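The final redeployment step roughly corresponds to applying the ceph.osd Salt state to the migrated node. A minimal sketch, assuming the salt CLI on the Salt Master node and the example osd005* target:

    # Redeploy the OSDs on one host with the back end defined in cluster/ceph/osd.yml
    salt 'osd005*' state.apply ceph.osd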
Note
During the pipeline execution, a check is performed to verify whether the back end type of an OSD disk differs from the one specified in ORIGIN_BACKEND. If the back end differs, Jenkins does not apply any changes to that OSD disk.
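To check which back end an OSD currently uses, for example when choosing ORIGIN_BACKEND, query the OSD metadata from the node that holds the admin keyring. A minimal sketch, assuming OSD ID 1 from the OSD parameter example above:

    # Shows the osd_objectstore field (filestore or bluestore) for OSD 1
    ceph osd metadata 1 | grep osd_objectstore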