Add a Ceph OSD node

This section describes how to add a Ceph OSD node to an existing Ceph cluster.

Warning

Prior to the 2019.2.10 maintenance update, this feature is available as technical preview only.

To add a Ceph OSD node:

  1. Connect the salt-minion node of the new Ceph OSD to the salt-master node.
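
    For example, assuming the minion ID of the new node is osd005 followed by your cluster domain
    (the ID is an assumption, adjust it to your environment), you can verify and accept its key on
    the Salt Master node and check the connectivity:

      salt-key -L
      salt-key -a 'osd005*'
      salt 'osd005*' test.ping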

  2. In your project repository, if the nodes are not generated dynamically, add the following lines to the cluster/ceph/init.yml file and modify them according to your environment:

    _param:
      ceph_osd_node05_hostname: osd005
      ceph_osd_node05_address: 172.16.47.72
      ceph_osd_node05_backend_address: 10.12.100.72
      ceph_osd_node05_public_address: 10.13.100.72
      ceph_osd_node05_deploy_address: 192.168.0.72
      ceph_osd_system_codename: xenial
    linux:
      network:
        host:
          osd005:
            address: ${_param:ceph_osd_node05_address}
            names:
            - ${_param:ceph_osd_node05_hostname}
            - ${_param:ceph_osd_node05_hostname}.${_param:cluster_domain}
    

    Note

    Skip the ceph_osd_node05_deploy_address parameter if you have DHCP enabled on a PXE network.
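
    After you commit these changes to the model and they are synchronized to the Salt Master node,
    you can optionally verify that the new parameters are rendered in the pillar. A minimal sanity
    check, assuming the osd005 host name used above and a cmn01 Ceph Monitor node that includes the
    cluster Ceph classes:

      salt '*' saltutil.refresh_pillar
      salt 'cmn01*' pillar.get linux:network:host:osd005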

  3. If the nodes are not generated dynamically, add the following lines to the cluster/infra/config/nodes.yml file and modify them according to your environment. Otherwise, increase the number of generated Ceph OSDs.

    parameters:
      reclass:
        storage:
          node:
            ceph_osd_node05:
              name: ${_param:ceph_osd_node05_hostname}
              domain: ${_param:cluster_domain}
              classes:
              - cluster.${_param:cluster_name}.ceph.osd
              params:
                salt_master_host: ${_param:reclass_config_master}
                linux_system_codename: ${_param:ceph_osd_system_codename}
                single_address: ${_param:ceph_osd_node05_address}
                deploy_address: ${_param:ceph_osd_node05_deploy_address}
                backend_address: ${_param:ceph_osd_node05_backend_address}
                ceph_public_address: ${_param:ceph_osd_node05_public_address}
                ceph_crush_parent: rack02
    

    Note

    Skip the deploy_address parameter if you have DHCP enabled on a PXE network.
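
    In either case, you can optionally regenerate and inspect the node definitions on the Salt
    Master node before running the pipeline (the pipeline also applies the reclass state itself).
    A hedged example, assuming the default MCP reclass layout under /srv/salt/reclass:

      salt-call state.sls reclass.storage
      ls /srv/salt/reclass/nodes/_generated/ | grep osd005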

  4. Skip this step starting from the 2019.2.3 maintenance update. Verify that the cluster/ceph/osd.yml file and the pillar of the new Ceph OSD node do not contain the following lines:

    parameters:
      ceph:
        osd:
          crush_update: false
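
    For example, to check the rendered pillar of the new Ceph OSD node (the osd005 target is an
    assumption), run the following command from the Salt Master node. If the parameter is not
    defined, the command returns an empty value:

      salt 'osd005*' pillar.get ceph:osd:crush_update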
    
  5. Log in to the Jenkins web UI.

  6. Select from the following options:

    • For MCP versions starting from the 2019.2.10 maintenance update, open the Ceph - add osd (upmap) pipeline.
    • For MCP versions prior to the 2019.2.10 maintenance update, open the Ceph - add node pipeline.

    Note

    Prior to the 2019.2.10 maintenance update, the Ceph - add node and Ceph - add osd (upmap) Jenkins pipeline jobs are available as technical preview only.

    Caution

    A large change in the CRUSH weight distribution after the addition of Ceph OSDs can cause massive unexpected rebalancing, affect performance, and, in some cases, cause data corruption. Therefore, if you are using Ceph - add node, Mirantis recommends that you add all disks with zero weight and reweight them gradually, as shown in the example below.
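
    A minimal sketch of gradual reweighting (the OSD ID and the target weights are assumptions,
    adjust them to your disk sizes and environment), run from a node with the admin keyring, for
    example cmn01:

      # Increase the CRUSH weight of the new OSD in small steps.
      ceph osd crush reweight osd.20 0.5
      # Wait until rebalancing finishes (check with "ceph -s"), then continue.
      ceph osd crush reweight osd.20 1.0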

  7. Specify the following parameters:

    Parameter                  Description and values
    SALT_MASTER_CREDENTIALS    The Salt Master credentials to use for the connection. Defaults to salt.
    SALT_MASTER_URL            The Salt Master node host URL with the salt-api port. Defaults to the
                               jenkins_salt_api_url parameter. For example, http://172.18.170.27:6969.
    HOST                       The Salt target name of the new Ceph OSD node. For example, osd005*.
    HOST_TYPE                  Removed since the 2019.2.3 update. Add osd as the type of Ceph node that
                               is going to be added.
    CLUSTER_FLAGS              Added since the 2019.2.7 update. Add a comma-separated list of flags to
                               check after the pipeline execution.
    USE_UPMAP                  Added since the 2019.2.13 update. Select to use the upmap module during
                               rebalancing to minimize the impact on cluster performance.
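
    Alternatively, you can pass the same parameters when triggering the job remotely through the
    standard Jenkins buildWithParameters API. The following is a hedged sketch only: the job name
    ceph-add-osd-upmap, the Jenkins URL, the user name, and the API token are assumptions, verify
    them in your Jenkins instance:

      curl -X POST "http://<jenkins_url>/job/ceph-add-osd-upmap/buildWithParameters" \
        --user <user>:<api_token> \
        --data-urlencode "HOST=osd005*" \
        --data-urlencode "USE_UPMAP=true"
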
  8. Click Deploy.

    The Ceph - add node pipeline workflow prior to the 2019.2.3 maintenance update:

    1. Apply the reclass state.
    2. Apply the linux, openssh, salt, ntp, rsyslog, and ceph.osd states.

    The Ceph - add node pipeline workflow starting from the 2019.2.3 maintenance update:

    1. Apply the reclass state.
    2. Verify that all installed Ceph clients have the Luminous version.
    3. Apply the linux, openssh, salt, ntp, and rsyslog states.
    4. Set the Ceph cluster compatibility to Luminous.
    5. Switch the balancer module to the upmap mode.
    6. Set the norebalance flag before adding a Ceph OSD.
    7. Apply the ceph.osd state on the selected Ceph OSD node.
    8. Using upmap, update the mappings of the remapped placement groups (PGs) back to the old Ceph OSDs.
    9. Unset the norebalance flag and verify that the cluster is healthy.
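
    After the pipeline finishes, you can verify the cluster state from a node with the admin
    keyring, for example cmn01:

      ceph -s                 # overall health and no unexpected flags
      ceph balancer status    # balancer mode, upmap if it was enabled
      ceph osd df tree        # the new OSDs with their weights and utilization
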
  9. If you use a custom CRUSH map, update the CRUSH map:

    1. Verify the updated /etc/ceph/crushmap file on cmn01. If correct, apply the CRUSH map using the following commands:

      crushtool -c /etc/ceph/crushmap -o /etc/ceph/crushmap.compiled
      ceph osd setcrushmap -i /etc/ceph/crushmap.compiled
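
      To verify the applied map, you can, for example, list the resulting CRUSH hierarchy:

        ceph osd crush tree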
      
    2. Add the following lines to the cluster/ceph/osd.yml file:

      parameters:
        ceph:
          osd:
            crush_update: false
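
      Before applying the state, you can optionally verify that the Ceph OSD nodes see the new
      pillar value:

        salt -C 'I@ceph:osd' saltutil.refresh_pillar
        salt -C 'I@ceph:osd' pillar.get ceph:osd:crush_update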
      
    3. Apply the ceph.osd state to persist the CRUSH map:

      salt -C 'I@ceph:osd' state.sls ceph.osd
      
  10. Integrate the Ceph OSD nodes with StackLight:

    1. Update the Salt mine:

      salt -C 'I@ceph:osd or I@telegraf:remote_agent' state.sls salt.minion.grains
      salt -C 'I@ceph:osd or I@telegraf:remote_agent' saltutil.refresh_modules
      salt -C 'I@ceph:osd or I@telegraf:remote_agent' mine.update
      

      Wait for one minute.

    2. Apply the following states:

      salt -C 'I@ceph:osd or I@telegraf:remote_agent' state.sls telegraf
      salt -C 'I@ceph:osd' state.sls fluentd
      salt 'mon*' state.sls prometheus
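
      After the states are applied, you can optionally verify that the monitoring agents run on the
      new Ceph OSD node, for example (the osd005 target and the telegraf service name are the
      typical StackLight defaults and may differ in your environment):

        salt 'osd005*' service.status telegraf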