OpenContrail 3.2: Create a backup schedule for a ZooKeeper database

This section describes how to create a backup schedule for a ZooKeeper database on an OpenContrail 3.2 cluster.

To create a backup schedule for a ZooKeeper database:

  1. Log in to the Salt Master node.
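
    For example, assuming the Salt Master node is reachable by the hostname cfg01 (adjust to your environment):

    ssh root@cfg01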

  2. Configure the zookeeper server role by adding the following class to cluster/infra/config.yml:

    classes:
    - system.zookeeper.backup.server.single
    parameters:
      _param:
        zookeeper_backup_public_key: <generate_your_keypair>
    

    By default, adding this class results in the ZooKeeper backup server keeping five full backups. To change this default, add the following pillar to cluster/infra/config.yml:

    parameters:
      zookeeper:
        backup:
          server:
            enabled: true
            hours_before_full: 24
            full_backups_to_keep: 5
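
    The zookeeper_backup_public_key above and the root_private_key in the next step belong to one SSH key pair, which is used to transfer the backup files to the backup server. As a minimal sketch, assuming a locally generated RSA key pair (the file name zookeeper_backup_key is arbitrary):

    ssh-keygen -t rsa -b 4096 -N '' -f zookeeper_backup_key
    cat zookeeper_backup_key.pub    # use as zookeeper_backup_public_key
    cat zookeeper_backup_key        # use as root_private_key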
    
  3. Configure the zookeeper client role by adding the following lines to cluster/opencontrail/control.yml:

    classes:
    - system.zookeeper.backup.client.single
    parameters:
      _param:
        zookeeper_remote_backup_server: cfg01
        root_private_key: |
          <generate_your_keypair>
    

    By default, adding this class results in ZooKeeper keeping three full backups on the zookeeper client node. The backup files are then transferred to the Salt Master node using rsync. To change this default, add the following pillar to cluster/opencontrail/control.yml:

    parameters:
      zookeeper:
        backup:
          client:
            enabled: true
            full_backups_to_keep: 3
            hours_before_full: 24
            target:
              host: cfg01
    

    Note

    The target.host parameter must contain the resolvable hostname of the host where the ZooKeeper backup server is running.
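
    For example, you can verify that the target hostname resolves on the zookeeper client nodes with a quick check (getent hosts queries the system resolver):

    salt -C 'I@zookeeper:backup:client' cmd.run 'getent hosts cfg01'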

  4. If you customized the default parameters, verify that the hours_before_full parameter of the zookeeper client in cluster/opencontrail/control.yml matches the same parameter of the zookeeper server in cluster/infra/config.yml.
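
    Once the pillar is refreshed in the next step, you can, for example, compare the effective values directly on the minions:

    salt -C 'I@zookeeper:backup:server' pillar.get zookeeper:backup:server:hours_before_full
    salt -C 'I@zookeeper:backup:client' pillar.get zookeeper:backup:client:hours_before_full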

  5. Run the following command on the Salt Master node:

    salt '*' saltutil.refresh_pillar
    
  6. Apply the salt.minion state:

    salt -C 'I@zookeeper:backup:client or I@zookeeper:backup:server' state.sls salt.minion
    
  7. Refresh grains for the zookeeper client node:

    salt -C 'I@zookeeper:backup:client' saltutil.sync_grains
    
  8. Update the mine for the zookeeper client node:

    salt -C 'I@zookeeper:backup:client' mine.flush
    salt -C 'I@zookeeper:backup:client' mine.update
    
  9. Apply the openssh.client and zookeeper.backup states to the zookeeper client node:

    salt -C 'I@zookeeper:backup:client' state.sls openssh.client,zookeeper.backup
    
  10. Apply the following states for the zookeeper server node:

    salt -C 'I@zookeeper:backup:server' state.sls linux.system
    salt -C 'I@zookeeper:backup:server' state.sls zookeeper.backup
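
    As an optional sanity check, assuming the backup job is scheduled through cron (details may differ in your deployment), you can verify that a backup cron entry now exists on the zookeeper client node:

    salt -C 'I@zookeeper:backup:client' cmd.run 'crontab -l'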