You can migrate the management of an existing Ceph cluster deployed by Decapod to a cluster managed by the Ceph Salt formula.
To migrate the management of a Ceph cluster:
Log in to the Decapod web UI.
Navigate to the CONFIGURATIONS tab.
Select the required configuration and click VIEW.
Generate a new cluster model with Ceph as described in MCP Deployment Guide: Create a deployment metadata model using the Model Designer. When filling in the model, use the values from the Decapod configuration file displayed on the VIEW tab of the Decapod web UI.
In the <cluster_name>/ceph/setup.yml file, specify the parameters for the existing pools.
Note
Verify that the keyring names and their caps match the ones that already exist in the Ceph cluster deployed by Decapod.
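For reference, a pool definition in setup.yml commonly follows the sketch below, assuming the ceph:setup:pool pillar layout of the Ceph Salt formula. The pool name and all values are placeholders; replace them with the values shown for your configuration in the Decapod web UI:

```yaml
ceph:
  setup:
    pool:
      rbd:                  # placeholder; must match an existing pool name
        pg_num: 128         # placement groups of the existing pool
        pgp_num: 128
        type: replicated
        application: rbd
```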
In the <cluster_name>/infra/config.yml file, add the following pillar and modify the parameters according to your environment:
ceph:
  decapod:
    ip: 192.168.1.10
    user: user
    pass: psswd
    deploy_config_name: ceph
On the node defined in the previous step, apply the following state:
salt-call state.sls ceph.migration
Note
The output of this state must contain defined configurations, Ceph OSD disks, Ceph File System ID (FSID), and so on.
Using the output of the previous command, add the following pillars to your cluster model:
ceph:common pillar to <cluster_name>/ceph/common.yml.
ceph:osd pillar to <cluster_name>/ceph/osd.yml.
Examine the newly generated cluster model for any occurrence of the ceph keyword and verify that it exists in your current cluster model.
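As an illustration, and assuming the pillar layout used by the Ceph Salt formula, the ceph.migration output maps onto the two files roughly as in the sketch below. Every value is a placeholder to be replaced with the actual values from the state output; verify the keyring caps against the existing cluster:

```yaml
# <cluster_name>/ceph/common.yml -- ceph:common pillar
ceph:
  common:
    version: luminous                           # placeholder
    fsid: 00000000-0000-0000-0000-000000000000  # FSID reported by ceph.migration
    public_network: 10.0.0.0/24                 # placeholder
    cluster_network: 10.1.0.0/24                # placeholder
    keyring:
      admin:
        caps:
          mon: "allow *"
          osd: "allow *"

# <cluster_name>/ceph/osd.yml -- ceph:osd pillar
ceph:
  osd:
    enabled: true
    backend:
      bluestore:
        disks:
        - dev: /dev/vdb                         # OSD disk reported by ceph.migration
```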
Examine each Ceph cluster file to verify that the parameters match the configuration specified in Decapod.
Copy the Ceph cluster directory to the existing cluster model.
Verify that the ceph subdirectory is included in your cluster model in <cluster_name>/infra/init.yml or, for older cluster model versions, in <cluster_name>/init.yml:
classes:
- cluster.<cluster_name>.ceph
Add the Reclass storage nodes to <cluster_name>/infra/config.yml and change the count variable to the number of OSDs you have. For example:
classes:
- system.reclass.storage.system.ceph_mon_cluster
- system.reclass.storage.system.ceph_rgw_cluster  # Add this line only if the
  # RADOS Gateway services run on separate nodes from the Ceph Monitor services.
parameters:
  reclass:
    storage:
      node:
        ceph_osd_rack01:
          name: ${_param:ceph_osd_rack01_hostname}<<count>>
          domain: ${_param:cluster_domain}
          classes:
          - cluster.${_param:cluster_name}.ceph.osd
          repeat:
            count: 3
            start: 1
            digits: 3
            params:
              single_address:
                value: ${_param:ceph_osd_rack01_single_subnet}.<<count>>
                start: 201
              backend_address:
                value: ${_param:ceph_osd_rack01_backend_subnet}.<<count>>
                start: 201
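To illustrate what the repeat block expands to, the following standalone Python sketch mimics the name and address generation (an illustration of the expansion logic, not the Reclass implementation): with count: 3, start: 1, and digits: 3, the <<count>> token in name becomes 001, 002, 003, while the addresses start counting at 201.

```python
# Illustrative sketch of the Reclass "repeat" expansion (not the real implementation).
def expand_repeat(name, address, count, start=1, digits=3, addr_start=201):
    nodes = []
    for i in range(count):
        index = start + i
        nodes.append({
            # <<count>> in "name" is zero-padded to the configured number of digits
            "name": name.replace("<<count>>", str(index).zfill(digits)),
            # <<count>> in address params uses its own "start" offset
            "single_address": address.replace("<<count>>", str(addr_start + i)),
        })
    return nodes

for node in expand_repeat("osd<<count>>", "10.0.0.<<count>>", count=3):
    print(node["name"], node["single_address"])
# osd001 10.0.0.201
# osd002 10.0.0.202
# osd003 10.0.0.203
```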
If the Ceph RADOS Gateway service is running on the same nodes as the Ceph monitor services:
Add the following snippet to <cluster_name>/infra/config.yml:
reclass:
  storage:
    node:
      ceph_mon_node01:
        classes:
        - cluster.${_param:cluster_name}.ceph.rgw
      ceph_mon_node02:
        classes:
        - cluster.${_param:cluster_name}.ceph.rgw
      ceph_mon_node03:
        classes:
        - cluster.${_param:cluster_name}.ceph.rgw
Verify that the parameters in <cluster_name>/ceph/rgw.yml are defined correctly according to the existing Ceph cluster.
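For reference, a ceph:radosgw pillar in rgw.yml commonly resembles the sketch below. All values are placeholders, and the exact keys depend on the version of the Ceph Salt formula in use:

```yaml
ceph:
  radosgw:
    enabled: true
    hostname: rgw.example.local  # placeholder
    bind:
      address: 10.0.0.5          # placeholder
      port: 8080
```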
From the Salt Master node, generate the Ceph nodes:
salt-call state.sls reclass
Run the commands below.
Warning
If the output of any of the commands below contains changes that can potentially break the cluster, adjust the cluster model as needed. Optionally, run the salt-call pillar.data ceph command to verify that the Salt pillar contains the correct values. Proceed to the next step only once you are sure that your model is correct.
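A dry run with test=True ends in a summary such as `Succeeded: 25 (changed=3)`, where a non-zero changed counter means the state would modify the node. The helper below is a hypothetical convenience, not part of Salt; it only parses that counter from a captured summary:

```python
import re

# Hypothetical helper (not part of Salt): inspect a captured
# "state.sls ceph test=True" summary and report the number of predicted changes.
def predicted_changes(summary_text):
    match = re.search(r"Succeeded:\s*\d+\s*\(changed=(\d+)\)", summary_text)
    return int(match.group(1)) if match else 0

output = "Summary for ceph-mon01\nSucceeded: 25 (changed=3)\nFailed:     0"
if predicted_changes(output) > 0:
    print("Review the pending changes before applying the state for real.")
```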
From the Ceph monitor nodes:
salt-call state.sls ceph test=True
From the Ceph OSD nodes:
salt-call state.sls ceph test=True
From the Ceph RADOS Gateway nodes:
salt-call state.sls ceph test=True
From the Salt Master node:
salt -C 'I@ceph:common' state.sls ceph test=True
Once you have verified that none of the changes reported by the Salt formula can break the running Ceph cluster, run the following commands.
From the Salt Master node:
salt -C 'I@ceph:common:keyring:admin' state.sls ceph.mon
salt -C 'I@ceph:mon' saltutil.sync_grains
salt -C 'I@ceph:mon' mine.update
salt -C 'I@ceph:mon' state.sls ceph.mon
From one of the OSD nodes:
salt-call state.sls ceph.osd
Note
Before you proceed, verify that the OSDs on this node are working fine.
From the Salt Master node:
salt -C 'I@ceph:osd' state.sls ceph.osd
From the Salt Master node:
salt -C 'I@ceph:radosgw' state.sls ceph.radosgw