Enable granular distribution of Ceph keys

Note

This feature is available starting from the MCP 2019.2.14 maintenance update. Before using the feature, follow the steps described in Apply maintenance updates.

This section describes how to enable granular distribution of Ceph keys on an existing deployment so that each node stores only the Ceph keys of the services that run on that node.

To enable granular distribution of Ceph keys:

  1. Open your Git project repository with the Reclass model on the cluster level.

  2. Create a new ceph/keyrings folder.
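
    For example, assuming the cluster model resides under classes/cluster/<cluster_name>/ in the repository (the path may differ in your model), the new folder is created next to the existing ceph/common.yml file:

    mkdir classes/cluster/<cluster_name>/ceph/keyrings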

  3. Open the ceph/common.yml file for editing.

  4. Move the configuration for each component from the parameters:ceph:common:keyring section to a corresponding file in the newly created folder. For example, the following configuration must be split into four separate files.

    ceph:
      common:
        keyring:
          glance:
            name: ${_param:glance_storage_user}
            caps:
              mon: 'allow r, allow command "osd blacklist"'
              osd: "profile rbd pool=images"
          cinder:
            name: ${_param:cinder_storage_user}
            caps:
              mon: 'allow r, allow command "osd blacklist"'
              osd: "profile rbd pool=volumes, profile rbd-read-only pool=images, profile rbd pool=${_param:cinder_ceph_backup_pool}"
          nova:
            name: ${_param:nova_storage_user}
            caps:
              mon: 'allow r, allow command "osd blacklist"'
              osd: "profile rbd pool=vms, profile rbd-read-only pool=images"
          gnocchi:
            name: ${_param:gnocchi_storage_user}
            caps:
              mon: 'allow r, allow command "osd blacklist"'
              osd: "profile rbd pool=${_param:gnocchi_storage_pool}"

    In this case, each file must contain the keyring of a single component only. For example:

    • In ceph/keyrings/nova.yml, add:

      parameters:
        ceph:
          common:
            keyring:
              nova:
                name: ${_param:nova_storage_user}
                caps:
                  mon: 'allow r, allow command "osd blacklist"'
                  osd: "profile rbd pool=vms, profile rbd-read-only pool=images"
      
    • In ceph/keyrings/cinder.yml, add:

      parameters:
        ceph:
          common:
            keyring:
              cinder:
                name: ${_param:cinder_storage_user}
                caps:
                  mon: 'allow r, allow command "osd blacklist"'
                  osd: "profile rbd pool=volumes, profile rbd-read-only pool=images, profile rbd pool=${_param:cinder_ceph_backup_pool}"
      
    • In ceph/keyrings/glance.yml, add:

      parameters:
        ceph:
          common:
            keyring:
              glance:
                name: ${_param:glance_storage_user}
                caps:
                  mon: 'allow r, allow command "osd blacklist"'
                  osd: "profile rbd pool=images"
      
    • In ceph/keyrings/gnocchi.yml, add:

      parameters:
        ceph:
          common:
            keyring:
              gnocchi:
                name: ${_param:gnocchi_storage_user}
                caps:
                  mon: 'allow r, allow command "osd blacklist"'
                  osd: "profile rbd pool=${_param:gnocchi_storage_pool}"
      
  5. In the same ceph/keyrings folder, create an init.yml file and include the newly created keyring classes:

    classes:
    - cluster.<cluster_name>.ceph.keyrings.glance
    - cluster.<cluster_name>.ceph.keyrings.cinder
    - cluster.<cluster_name>.ceph.keyrings.nova
    - cluster.<cluster_name>.ceph.keyrings.gnocchi
    

    Note

    If Telemetry is disabled, Gnocchi may not be present in your deployment.

  6. In openstack/compute/init.yml, add the Cinder and Nova keyring classes after the cluster.<cluster_name>.ceph.common class:

    - cluster.<cluster_name>.ceph.keyrings.cinder
    - cluster.<cluster_name>.ceph.keyrings.nova
    
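    For example, the relevant part of the classes section in openstack/compute/init.yml then looks similar to the following (other classes in the file are omitted):

    classes:
    - cluster.<cluster_name>.ceph.common
    - cluster.<cluster_name>.ceph.keyrings.cinder
    - cluster.<cluster_name>.ceph.keyrings.nova

    The same pattern applies to openstack/control.yml and openstack/telemetry.yml in the next two steps.
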
  7. In openstack/control.yml, add the following line after the cluster.<cluster_name>.ceph.common class:

    - cluster.<cluster_name>.ceph.keyrings
    
  8. In openstack/telemetry.yml, add the Gnocchi keyring class after the cluster.<cluster_name>.ceph.common class:

    - cluster.<cluster_name>.ceph.keyrings.gnocchi
    
  9. Log in to the Salt Master node.

  10. Synchronize the Salt modules and update the Salt mine:

    salt "*" saltutil.sync_all
    salt "*" mine.update
    
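    Optionally, verify which keyrings are now defined in the pillar of a particular node group. For example, for the OpenStack compute nodes (the cmp* target is an example and must match your node naming):

    salt "cmp*" pillar.get ceph:common:keyring
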
  11. Drop the redundant keyrings from the corresponding nodes and verify that the keyrings will not change with the next Salt run:

    Note

    If ceph:common:manage_keyring is enabled, run the last command for each node group below using the following template instead:

    salt "<target>" state.sls ceph.common,ceph.setup.keyring,ceph.setup.managed_keyring test=true
    
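    To check whether ceph:common:manage_keyring is enabled on the target nodes, you can query the pillar, for example:

    salt "<target>" pillar.get ceph:common:manage_keyring
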
    • For the OpenStack compute nodes, run:

      salt "cmp*" cmd.run "rm /etc/ceph/ceph.client.glance.keyring"
      salt "cmp*" cmd.run "rm /etc/ceph/ceph.client.gnocchi.keyring"
      salt "cmp*" state.sls ceph.common,ceph.setup.keyring test=true
      
    • For the Ceph Monitor nodes, run:

      salt "cmn*" cmd.run "rm /etc/ceph/ceph.client.glance.keyring"
      salt "cmn*" cmd.run "rm /etc/ceph/ceph.client.gnocchi.keyring"
      salt "cmn*" cmd.run "rm /etc/ceph/ceph.client.nova.keyring"
      salt "cmn*" cmd.run "rm /etc/ceph/ceph.client.cinder.keyring"
      salt "cmn*" state.sls ceph.common,ceph.setup.keyring test=true
      
    • For the RADOS Gateway nodes, run:

      salt "rgw*" cmd.run "rm /etc/ceph/ceph.client.glance.keyring"
      salt "rgw*" cmd.run "rm /etc/ceph/ceph.client.gnocchi.keyring"
      salt "rgw*" cmd.run "rm /etc/ceph/ceph.client.nova.keyring"
      salt "rgw*" cmd.run "rm /etc/ceph/ceph.client.cinder.keyring"
      salt "rgw*" state.sls ceph.common,ceph.setup.keyring test=true
      
    • For the Telemetry nodes, run:

      salt "mdb*" cmd.run "rm /etc/ceph/ceph.client.glance.keyring"
      salt "mdb*" cmd.run "rm /etc/ceph/ceph.client.nova.keyring"
      salt "mdb*" cmd.run "rm /etc/ceph/ceph.client.cinder.keyring"
      salt "mdb*" state.sls ceph.common,ceph.setup.keyring test=true
      
  12. Apply the changes for all components one by one:

    • If ceph:common:manage_keyring is disabled:

      salt "<target>" state.sls ceph.common,ceph.setup.keyring
      
    • If ceph:common:manage_keyring is enabled:

      salt "<target>" state.sls ceph.common,ceph.setup.keyring,ceph.setup.managed_keyring