Deploy a Ceph cluster

This section guides you through the manual deployment of a Ceph cluster. If you are deploying a Ceph cluster distributed over L3 domains, verify that you have performed the steps described in Prerequisites for a Ceph cluster distributed over L3 domains.

Warning

Converged storage is not supported.

Note

Prior to deploying a Ceph cluster:

  1. Verify that you have selected the Ceph enabled option while generating the deployment model as described in Define the deployment model.
  2. If you require Tenant Telemetry, verify that you have set the gnocchi_aggregation_storage option to Ceph while generating the deployment model.
  3. Verify that OpenStack services, such as Cinder, Glance, and Nova, are up and running.
  4. Verify and, if required, adjust the Ceph setup for disks in the classes/cluster/<CLUSTER_NAME>/ceph/osd.yml file.
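
     For reference, a minimal BlueStore disk layout in osd.yml may look like the following sketch. The device paths are examples only, and the exact pillar keys may differ between versions of the Ceph Salt formula:

     parameters:
       ceph:
         osd:
           backend:
             bluestore:
               disks:
               - dev: /dev/vdc      # data device (example path)
                 block_db: /dev/vdd # optional separate DB device (example path)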

To deploy a Ceph cluster:

  1. Log in to the Salt Master node.

  2. Update modules and states on all Minions:

    salt '*' saltutil.sync_all
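
     Optionally, verify that all Minions respond before proceeding. This is a generic Salt connectivity check rather than a Ceph-specific step:

    salt '*' test.ping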
    
  3. Run basic states on all Ceph nodes:

    salt "*" state.sls linux,openssh,salt,ntp,rsyslog
    
  4. Generate admin and mon keyrings:

    salt -C 'I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin' state.sls ceph.mon
    salt -C 'I@ceph:mon' saltutil.sync_grains
    salt -C 'I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin' mine.update
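
     Optionally, verify that the keyring files have been generated. A minimal check, assuming the keyrings are placed under /etc/ceph on the Ceph mon nodes:

    salt -C 'I@ceph:mon' cmd.run 'ls -l /etc/ceph'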
    
  5. Deploy Ceph mon nodes:

    • If your Ceph version is older than Luminous:

      salt -C 'I@ceph:mon' state.sls ceph.mon
      
    • If your Ceph version is Luminous or newer:

      salt -C 'I@ceph:mon' state.sls ceph.mon
      salt -C 'I@ceph:mgr' state.sls ceph.mgr
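
     In either case, you can optionally confirm that the mon daemons have formed a quorum by querying the cluster status from the mon nodes:

     salt -C 'I@ceph:mon' cmd.run 'ceph -s'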
      
  6. (Optional) To modify the Ceph CRUSH map:

    1. Uncomment the example pillar in the classes/cluster/<CLUSTER_NAME>/ceph/setup.yml file and modify it as required.

    2. Verify the ceph_crush_parent parameters in the classes/cluster/<CLUSTER_NAME>/infra.config.yml file and modify them if required.

    3. If you have modified the ceph_crush_parent parameters, also update the grains:

      salt -C 'I@salt:master' state.sls reclass.storage
      salt '*' saltutil.refresh_pillar
      salt -C 'I@ceph:common' state.sls salt.minion.grains
      salt -C 'I@ceph:common' mine.flush
      salt -C 'I@ceph:common' mine.update
      
  7. Optional, available as technical preview. For testing and evaluation purposes, you can enable the ceph-volume tool instead of ceph-disk to deploy the Ceph OSD nodes:

    1. In the classes/cluster/<CLUSTER_NAME>/ceph/osd.yml file, specify:

      parameters:
        ceph:
          osd:
            backend:
              bluestore:
                create_partitions: True
            lvm_enabled: True
      
    2. Apply the changes:

      salt -C 'I@ceph:osd' saltutil.refresh_pillar
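
      Optionally, verify that the refreshed pillar contains the new value, for example:

      salt -C 'I@ceph:osd' pillar.get ceph:osd:lvm_enabled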
      
  8. Deploy Ceph OSD nodes:

    salt -C 'I@ceph:osd' state.sls ceph.osd
    salt -C 'I@ceph:osd' saltutil.sync_grains
    salt -C 'I@ceph:osd' state.sls ceph.osd.custom
    salt -C 'I@ceph:osd' saltutil.sync_grains
    salt -C 'I@ceph:osd' mine.update
    salt -C 'I@ceph:setup' state.sls ceph.setup
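
     Optionally, verify that all OSDs are up and placed in the CRUSH hierarchy as expected by querying the OSD tree from a mon node:

    salt -C 'I@ceph:mon' cmd.run 'ceph osd tree'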
    
  9. Deploy RADOS Gateway:

    salt -C 'I@ceph:radosgw' saltutil.sync_grains
    salt -C 'I@ceph:radosgw' state.sls ceph.radosgw
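
     Optionally, verify that the RADOS Gateway daemons are running. A minimal sketch, assuming the ceph-radosgw.target systemd unit shipped with the Ceph packages:

    salt -C 'I@ceph:radosgw' cmd.run 'systemctl is-active ceph-radosgw.target'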
    
  10. Set up the Keystone service and endpoints for Swift or S3:

    salt -C 'I@keystone:client' state.sls keystone.client
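
     Optionally, verify that the endpoints have been created. The following sketch assumes that the admin credentials file is available at /root/keystonercv3 on the keystone:client node; adjust the path to your deployment:

    salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3; openstack endpoint list'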
    
  11. Connect Ceph to your MCP cluster:

    salt -C 'I@ceph:common and I@glance:server' state.sls ceph.common,ceph.setup.keyring,glance
    salt -C 'I@ceph:common and I@glance:server' service.restart glance-api
    salt -C 'I@ceph:common and I@glance:server' service.restart glance-glare
    salt -C 'I@ceph:common and I@glance:server' service.restart glance-registry
    salt -C 'I@ceph:common and I@cinder:controller' state.sls ceph.common,ceph.setup.keyring,cinder
    salt -C 'I@ceph:common and I@nova:compute' state.sls ceph.common,ceph.setup.keyring
    salt -C 'I@ceph:common and I@nova:compute' saltutil.sync_grains
    salt -C 'I@ceph:common and I@nova:compute' state.sls nova
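
     Optionally, verify that the Ceph configuration and client keyrings are now present on the OpenStack nodes, for example:

    salt -C 'I@ceph:common and I@glance:server' cmd.run 'ls -l /etc/ceph'
    salt -C 'I@ceph:common and I@nova:compute' cmd.run 'ls -l /etc/ceph'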
    
  12. If you have deployed StackLight LMA, configure Ceph monitoring:

    1. Clean up the /srv/volumes/ceph/etc/ceph directory.

    2. Connect Telegraf to Ceph:

      salt -C 'I@ceph:common and I@telegraf:remote_agent' state.sls ceph.common
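
      Optionally, verify that the Ceph configuration is present on the node running the Telegraf remote agent:

      salt -C 'I@ceph:common and I@telegraf:remote_agent' cmd.run 'ls -l /etc/ceph'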
      
  13. If you have deployed Tenant Telemetry, connect Gnocchi to Ceph:

    salt -C 'I@ceph:common and I@gnocchi:server' state.sls ceph.common,ceph.setup.keyring
    salt -C 'I@ceph:common and I@gnocchi:server' saltutil.sync_grains
    salt -C 'I@ceph:common and I@gnocchi:server:role:primary' state.sls gnocchi.server
    salt -C 'I@ceph:common and I@gnocchi:server' state.sls gnocchi.server
    
  14. (Optional) If you have modified the CRUSH map as described in step 6:

    1. View the CRUSH map generated in the /etc/ceph/crushmap file and modify it as required. Before applying the CRUSH map, verify that the settings are correct, for example by compiling and test-mapping the file with crushtool as shown in the sketch after this procedure.

    2. Apply the following state:

      salt -C 'I@ceph:setup:crush' state.sls ceph.setup.crush
      
    3. Once the CRUSH map is set up correctly, add the following snippet to the classes/cluster/<CLUSTER_NAME>/ceph/osd.yml file to make the settings persist even after a Ceph OSD reboots:

      ceph:
        osd:
          crush_update: false
      
    4. Apply the following state:

      salt -C 'I@ceph:osd' state.sls ceph.osd
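
As referenced in step 14.1, you can sanity-check a modified CRUSH map before applying it. A minimal sketch, assuming that the file in /etc/ceph/crushmap is in decompiled (text) form and that the CRUSH rule you want to test has ID 0:

    # Compile the text CRUSH map and simulate object placement with 3 replicas
    crushtool -c /etc/ceph/crushmap -o /tmp/crushmap.compiled
    crushtool -i /tmp/crushmap.compiled --test --show-statistics --rule 0 --num-rep 3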
      

Once done, if your Ceph version is Luminous or newer, you can access the Ceph dashboard at http://<active_mgr_node_IP>:7000/. Run ceph -s on a cmn node to identify the active mgr node.
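
For example, you can run the command from the Salt Master node; the active manager is shown in the mgr line of the output:

    salt -C 'I@ceph:mon' cmd.run 'ceph -s'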