This section describes how to manually create a backup schedule for the metadata of the Ceph OSD nodes and for the Ceph Monitor nodes.
By default, the backup functionality is enabled automatically for new MCP OpenStack with Ceph deployments in the cluster models generated using Model Designer. Use this procedure only in the case of a manual deployment or if you want to change the default backup configuration.
Note
The procedure below does not cover the backup of the Ceph OSD node data.
To create a backup schedule for Ceph nodes:
Log in to the Salt Master node.
Decide on which node you want to store the backups.
Get <STORAGE_ADDRESS> of the node selected in step 2:
cfg01:~# salt NODE_NAME grains.get fqdn_ip4
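The command returns the IP addresses of the node. For example, for a hypothetical cmn01 storage node (the node name and address below are illustrative only), the output may look as follows; use the returned address as <STORAGE_ADDRESS>:
cmn01.deployment.local:
    - 10.11.0.66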
Configure the ceph backup server role by adding the cluster.deployment_name.infra.backup.server class to the definition of the target storage node from step 2:
classes:
- cluster.deployment_name.infra.backup.server
parameters:
  _param:
    ceph_backup_public_key: <generate_your_keypair>
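The ceph_backup_public_key value is the public part of an SSH key pair. As a minimal sketch, assuming you generate a dedicated key pair on the Salt Master node (the file path, key type, and size are illustrative), you could run the following and paste the public key into the pillar above; the matching private key is presumably the one you later provide as root_private_key on the client nodes:
ssh-keygen -t rsa -b 4096 -f /root/ceph_backup_key -N ''
cat /root/ceph_backup_key.pub    # value for ceph_backup_public_key
cat /root/ceph_backup_key        # value for root_private_key on the client nodes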
By default, adding this include statement results in Ceph keeping five complete backups. To change the default setting, add the following pillar to the cluster/infra/backup/server.yml file:
parameters:
  ceph:
    backup:
      server:
        enabled: true
        hours_before_full: 24
        full_backups_to_keep: 5
To back up the Ceph Monitor nodes, configure the ceph backup client role by adding the following lines to the cluster/ceph/mon.yml file:
Note
Change <STORAGE_ADDRESS> to the address of the target storage node from step 2.
classes:
- system.ceph.backup.client.single
parameters:
  _param:
    ceph_remote_backup_server: <STORAGE_ADDRESS>
    root_private_key: |
      <generate_your_keypair>
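For illustration only, with a hypothetical storage address and a placeholder key body, the resulting definition might look as follows; note that the private key lines must stay indented under the | block scalar:
classes:
- system.ceph.backup.client.single
parameters:
  _param:
    ceph_remote_backup_server: 10.11.0.66
    root_private_key: |
      -----BEGIN RSA PRIVATE KEY-----
      <private key body>
      -----END RSA PRIVATE KEY-----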
To back up the metadata of the Ceph OSD nodes, configure the ceph backup client role by adding the following lines to the cluster/ceph/osd.yml file:
Note
Change <STORAGE_ADDRESS> to the address of the target storage node from step 2.
classes:
- system.ceph.backup.client.single
parameters:
  _param:
    ceph_remote_backup_server: <STORAGE_ADDRESS>
    root_private_key: |
      <generate_your_keypair>
By default, adding the above include statement results in Ceph keeping three complete backups on the client node. To change the default setting, add the following pillar to the cluster/ceph/mon.yml or cluster/ceph/osd.yml files:
Note
Change <STORAGE_ADDRESS> to the address of the target storage node from step 2.
parameters:
  ceph:
    backup:
      client:
        enabled: true
        full_backups_to_keep: 3
        hours_before_full: 24
        target:
          host: <STORAGE_ADDRESS>
Refresh Salt pillars:
salt -C '*' saltutil.refresh_pillar
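If you want to confirm that the backup pillar is now visible on the client nodes, one way (a sanity check, not a required step) is to query it directly:
salt -C 'I@ceph:backup:client' pillar.get ceph:backup:client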
Apply the salt.minion state:
salt -C 'I@ceph:backup:client or I@ceph:backup:server' state.sls salt.minion
Refresh grains for the ceph client node:
salt -C 'I@ceph:backup:client' saltutil.sync_grains
Update the mine for the ceph client node:
salt -C 'I@ceph:backup:client' mine.flush
salt -C 'I@ceph:backup:client' mine.update
Apply the openssh.client and ceph.backup states on the ceph client node:
salt -C 'I@ceph:backup:client' state.sls openssh.client,ceph.backup
Apply the linux.system.cron state on the ceph server node:
salt -C 'I@ceph:backup:server' state.sls linux.system.cron
Apply the ceph.backup state on the ceph server node:
salt -C 'I@ceph:backup:server' state.sls ceph.backup
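To verify that the backup schedule was created, you can, for example, check that the corresponding cron jobs exist on the client and server nodes (the exact job names may differ between releases):
salt -C 'I@ceph:backup:client or I@ceph:backup:server' cmd.run 'crontab -l -u root'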