This section guides you through the manual deployment of a Ceph cluster. If you are deploying a Ceph cluster distributed over L3 domains, verify that you have performed the steps described in Prerequisites for a Ceph cluster distributed over L3 domains.
Warning
Converged storage is not supported.
Note
Prior to deploying a Ceph cluster:
Verify that you have selected the Ceph enabled option while generating the deployment model as described in Define the deployment model.
If you require Tenant Telemetry, verify that you have set the gnocchi_aggregation_storage option to Ceph while generating the deployment model.
Verify that OpenStack services, such as Cinder, Glance, and Nova, are up and running.
Verify and, if required, adjust the Ceph setup for disks in the classes/cluster/<CLUSTER_NAME>/ceph/osd.yml file.
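For example, once your model changes are available on the Salt Master node, you can inspect the effective OSD pillar to confirm that the disk layout matches your hardware. This is a minimal check and assumes that the ceph:osd:backend pillar key is defined in osd.yml:
salt -C 'I@ceph:osd' pillar.get ceph:osd:backend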
To deploy a Ceph cluster:
Log in to the Salt Master node.
Update modules and states on all Minions:
salt '*' saltutil.sync_all
Run basic states on all Ceph nodes:
salt "*" state.sls linux,openssh,salt,ntp,rsyslog
Generate the admin and mon keyrings:
salt -C 'I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin' state.sls ceph.mon
salt -C 'I@ceph:mon' saltutil.sync_grains
salt -C 'I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin' mine.update
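As a quick sanity check, you can list the contents of /etc/ceph on the Ceph mon nodes to confirm that the keyrings were generated. The exact file names and location depend on your cluster configuration; this is only a spot check:
salt -C 'I@ceph:mon' cmd.run 'ls -l /etc/ceph/'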
Deploy the Ceph mon nodes:
If your Ceph version is older than Luminous:
salt -C 'I@ceph:mon' state.sls ceph.mon
If your Ceph version is Luminous or newer:
salt -C 'I@ceph:mon' state.sls ceph.mon
salt -C 'I@ceph:mgr' state.sls ceph.mgr
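Regardless of the Ceph version, you can then verify that the monitors have formed a quorum. This assumes the admin keyring is available on the cmn nodes:
salt -C 'I@ceph:mon' cmd.run 'ceph -s'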
(Optional) To modify the Ceph CRUSH map:
Uncomment the example pillar in the classes/cluster/<CLUSTER_NAME>/ceph/setup.yml file and modify it as required.
Verify the ceph_crush_parent parameters in the classes/cluster/<CLUSTER_NAME>/infra.config.yml file and modify them if required.
If you have modified the ceph_crush_parent parameters, also update the grains:
salt -C 'I@salt:master' state.sls reclass.storage
salt '*' saltutil.refresh_pillar
salt -C 'I@ceph:common' state.sls salt.minion.grains
salt -C 'I@ceph:common' mine.flush
salt -C 'I@ceph:common' mine.update
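To spot-check that the updated values reached the Ceph nodes, you can query the grains directly. The grain name below mirrors the ceph_crush_parent parameter and is an assumption; it may be exposed under a different name in your deployment model:
salt -C 'I@ceph:osd' grains.item ceph_crush_parent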
Technical preview. Optional. For testing and evaluation purposes, you can enable the ceph-volume tool instead of ceph-disk to deploy the Ceph OSD nodes:
In the classes/cluster/<CLUSTER_NAME>/ceph/osd.yml file, specify:
parameters:
ceph:
osd:
backend:
bluestore:
create_partitions: True
lvm_enabled: True
Apply the changes:
salt -C 'I@ceph:osd' saltutil.refresh_pillar
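You can confirm that the new values are visible on the Ceph OSD nodes before deploying them; the key queried below is the one defined in the snippet above:
salt -C 'I@ceph:osd' pillar.get ceph:osd:lvm_enabled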
Deploy the Ceph osd nodes:
salt -C 'I@ceph:osd' state.sls ceph.osd
salt -C 'I@ceph:osd' saltutil.sync_grains
salt -C 'I@ceph:osd' state.sls ceph.osd.custom
salt -C 'I@ceph:osd' saltutil.sync_grains
salt -C 'I@ceph:osd' mine.update
salt -C 'I@ceph:setup' state.sls ceph.setup
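After the OSD and setup states complete, you can verify that all OSDs are up and that the pools defined in your model have been created. These checks assume the admin keyring is present on the cmn nodes:
salt -C 'I@ceph:mon' cmd.run 'ceph osd tree'
salt -C 'I@ceph:mon' cmd.run 'ceph df'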
Deploy RADOS Gateway:
salt -C 'I@ceph:radosgw' saltutil.sync_grains
salt -C 'I@ceph:radosgw' state.sls ceph.radosgw
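To verify that RADOS Gateway responds, you can query it locally on the rgw nodes. The port below is an assumption and depends on your model configuration; a successful request returns a ListAllMyBucketsResult XML document:
salt -C 'I@ceph:radosgw' cmd.run 'curl -s http://localhost:8080'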
Set up the Keystone service and endpoints for Swift or S3:
salt -C 'I@keystone:client' state.sls keystone.client
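You can then confirm that the object-store endpoints were registered in Keystone. The example assumes the admin credentials file is /root/keystonercv3, which may differ in your environment:
salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3; openstack endpoint list --service object-store'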
Connect Ceph to your MCP cluster:
salt -C 'I@ceph:common and I@glance:server' state.sls ceph.common,ceph.setup.keyring,glance
salt -C 'I@ceph:common and I@glance:server' service.restart glance-api
salt -C 'I@ceph:common and I@glance:server' service.restart glance-glare
salt -C 'I@ceph:common and I@glance:server' service.restart glance-registry
salt -C 'I@ceph:common and I@cinder:controller' state.sls ceph.common,ceph.setup.keyring,cinder
salt -C 'I@ceph:common and I@nova:compute' state.sls ceph.common,ceph.setup.keyring
salt -C 'I@ceph:common and I@nova:compute' saltutil.sync_grains
salt -C 'I@ceph:common and I@nova:compute' state.sls nova
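As a basic verification that the OpenStack services now use Ceph, you can upload a Glance image and check that it appears as an RBD object. The pool name images is a common default for Glance but is an assumption that depends on your model:
salt -C 'I@ceph:mon' cmd.run 'rbd -p images ls'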
If you have deployed StackLight LMA, configure Ceph monitoring:
Clean up the /srv/volumes/ceph/etc/ceph directory.
Connect Telegraf to Ceph:
salt -C 'I@ceph:common and I@telegraf:remote_agent' state.sls ceph.common
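Optionally, verify that the Ceph configuration was regenerated for the remote agent. The path below is the one cleaned up in the previous step, and the expectation that it is repopulated by the state above is an assumption:
salt -C 'I@ceph:common and I@telegraf:remote_agent' cmd.run 'ls -l /srv/volumes/ceph/etc/ceph'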
If you have deployed Tenant Telemetry, connect Gnocchi to Ceph:
salt -C 'I@ceph:common and I@gnocchi:server' state.sls ceph.common,ceph.setup.keyring
salt -C 'I@ceph:common and I@gnocchi:server' saltutil.sync_grains
salt -C 'I@ceph:common and I@gnocchi:server:role:primary' state.sls gnocchi.server
salt -C 'I@ceph:common and I@gnocchi:server' state.sls gnocchi.server
(Optional) If you have modified the CRUSH map as described in step 6:
View the CRUSH map generated in the /etc/ceph/crushmap file and modify it as required. Before applying the CRUSH map, verify that the settings are correct.
Apply the following state:
salt -C 'I@ceph:setup:crush' state.sls ceph.setup.crush
Once the CRUSH map is set up correctly, add the following snippet to the classes/cluster/<CLUSTER_NAME>/ceph/osd.yml file to make the settings persist even after a Ceph OSD reboots:
ceph:
osd:
crush_update: false
Apply the following state:
salt -C 'I@ceph:osd' state.sls ceph.osd
Once done, if your Ceph version is Luminous or newer, you can access the Ceph dashboard through http://<active_mgr_node_IP>:7000/. Run ceph -s on a cmn node to obtain the active mgr node.
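For example, in the output of ceph -s, the active mgr node is shown in the mgr line of the services section; the node names below are illustrative only:
mgr: cmn01(active), standbys: cmn02, cmn03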