Create a backup schedule for Ceph nodes

This section describes how to manually create a backup schedule for the metadata of Ceph OSD nodes and for Ceph Monitor nodes.

By default, the backup functionality is enabled automatically for new MCP OpenStack with Ceph deployments in cluster models generated using Model Designer. Use this procedure only for manual deployments or if you want to change the default backup configuration.

Note

The procedure below does not cover the backup of the Ceph OSD node data.

To create a backup schedule for Ceph nodes:

  1. Log in to the Salt Master node.

  2. Decide which node will store the backups.

  3. Obtain the IP address (<STORAGE_ADDRESS>) of the node selected in step 2:

    cfg01:~# salt NODE_NAME grains.get fqdn_ip4
    
  4. Configure the Ceph backup server role by adding the cluster.deployment_name.infra.backup.server class to the definition of the target storage node selected in step 2:

    classes:
    - cluster.deployment_name.infra.backup.server
    parameters:
      _param:
        ceph_backup_public_key: <generate_your_keypair>
    

    By default, adding this class results in Ceph keeping five complete backups. To change the default setting, add the following pillar to the cluster/infra/backup/server.yml file:

    parameters:
      ceph:
        backup:
          server:
            enabled: true
            hours_before_full: 24
            full_backups_to_keep: 5
    
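    The <generate_your_keypair> placeholders above and in the following steps refer to an SSH keypair used for transferring backups. As a minimal sketch, you can generate such a keypair with ssh-keygen; the file name ./ceph_backup_key is an arbitrary example, not a required path:

    ```shell
    # Generate a passphrase-less SSH keypair for the backup transfer.
    # The file name ./ceph_backup_key is an example, not a required path.
    ssh-keygen -t ed25519 -N '' -q -f ./ceph_backup_key

    # The contents of ./ceph_backup_key.pub go into ceph_backup_public_key
    # on the server role; the contents of ./ceph_backup_key (the private
    # key) go into root_private_key on the client role.
    cat ./ceph_backup_key.pub
    ```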
  5. To back up the Ceph Monitor nodes, configure the Ceph backup client role by adding the following lines to the cluster/ceph/mon.yml file:

    Note

    Change <STORAGE_ADDRESS> to the address of the target storage node selected in step 2.

    classes:
    - system.ceph.backup.client.single
    parameters:
      _param:
        ceph_remote_backup_server: <STORAGE_ADDRESS>
        root_private_key: |
          <generate_your_keypair>
    
  6. To back up the Ceph OSD node metadata, configure the Ceph backup client role by adding the following lines to the cluster/ceph/osd.yml file:

    Note

    Change <STORAGE_ADDRESS> to the address of the target storage node selected in step 2.

    classes:
    - system.ceph.backup.client.single
    parameters:
      _param:
        ceph_remote_backup_server: <STORAGE_ADDRESS>
        root_private_key: |
          <generate_your_keypair>
    

    By default, adding the above class results in Ceph keeping three complete backups on the client node. To change the default setting, add the following pillar to the cluster/ceph/mon.yml or cluster/ceph/osd.yml file:

    Note

    Change <STORAGE_ADDRESS> to the address of the target storage node selected in step 2.

    parameters:
      ceph:
        backup:
          client:
            enabled: true
            full_backups_to_keep: 3
            hours_before_full: 24
            target:
              host: <STORAGE_ADDRESS>
    
  7. Refresh Salt pillars:

    salt -C '*' saltutil.refresh_pillar
    
  8. Apply the salt.minion state:

    salt -C 'I@ceph:backup:client or I@ceph:backup:server' state.sls salt.minion
    
  9. Refresh grains for the Ceph backup client node:

    salt -C 'I@ceph:backup:client' saltutil.sync_grains
    
  10. Update the Salt mine for the Ceph backup client node:

    salt -C 'I@ceph:backup:client' mine.flush
    salt -C 'I@ceph:backup:client' mine.update
    
  11. Apply the openssh.client and ceph.backup states on the Ceph backup client node:

    salt -C 'I@ceph:backup:client' state.sls openssh.client,ceph.backup
    
  12. Apply the linux.system.cron state on the Ceph backup server node:

    salt -C 'I@ceph:backup:server' state.sls linux.system.cron
    
  13. Apply the ceph.backup state on the Ceph backup server node:

    salt -C 'I@ceph:backup:server' state.sls ceph.backup