Migrate from GlusterFS to rsync for fernet and credential keys rotation

By default, the latest MCP deployments use rsync for fernet and credential keys rotation. However, if your MCP version is 2018.8.0 or earlier, GlusterFS is used as the default driver for both fernet and credential keys rotation. This section provides instructions on how to configure your MCP OpenStack deployment to use rsync over SSH instead of GlusterFS.

To migrate from GlusterFS to rsync:

  1. Log in to the Salt Master node.

  2. On the system level, verify that the following class is included in keystone/server/cluster.yml:

    - system.keystone.server.fernet_rotation.cluster
    

    Note

    The default configuration for the system.keystone.server.fernet_rotation.cluster class is defined in keystone/server/fernet_rotation/cluster.yml. It includes the default list of nodes to synchronize the fernet and credential keys to, which are sync_node01 and sync_node02. If more nodes must receive the keys, expand this list as required, as sketched below.
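
    For illustration, a cluster-level override of the synchronization node list might look like the following sketch. The pillar path keystone:server:fernet_sync_nodes_list and the node names are assumptions and must be verified against the default keystone/server/fernet_rotation/cluster.yml file in your Reclass model:

      parameters:
        keystone:
          server:
            fernet_sync_nodes_list:
              sync_node01:
                name: ctl02
                enabled: true
              sync_node02:
                name: ctl03
                enabled: true
              # Additional node appended to the default list
              sync_node03:
                name: ctl04
                enabled: true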

  3. Verify that the crontab job is disabled in the keystone/client/core.yml and keystone/client/single.yml system-level files:

    linux:
      system:
        job:
          keystone_job_rotate:
            command: '/usr/bin/keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone >> /var/log/key_rotation_log 2>> /var/log/key_rotation_log'
            enabled: false
            user: root
            minute: 0
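
    Optionally, verify the resulting pillar on the target nodes. The pillar path below mirrors the snippet above; adjust the target expression if these classes are applied to different nodes in your deployment:

      salt -C 'I@keystone:client' pillar.get linux:system:job:keystone_job_rotate:enabled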
    
  4. Apply the Salt orchestration state to configure all required prerequisites, such as creating an SSH public key and uploading it to the Salt Mine and the secondary control nodes:

    salt-run state.orchestrate keystone.orchestrate.deploy
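
    Optionally, verify that the SSH key pair was generated on the primary Keystone node. The path below assumes that the keystone user's home directory is /var/lib/keystone, which may differ in your deployment:

      salt -C 'I@keystone:server:role:primary' cmd.run 'ls -l /var/lib/keystone/.ssh/'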
    
  5. Apply the keystone.server state to deploy the Keystone rotation script and run it in the sync mode so that the fernet and credential keys are synchronized with the secondary Keystone nodes. Run the state on the primary node first, and then on all Keystone nodes:

    salt -C 'I@keystone:server:role:primary' state.apply keystone.server
    salt -C 'I@keystone:server' state.apply keystone.server
    
  6. Apply the linux.system state to add crontab jobs for the Keystone user:

    salt -C 'I@keystone:server' state.apply linux.system
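
    Optionally, verify the resulting crontab entries for the keystone user:

      salt -C 'I@keystone:server' cmd.run 'crontab -l -u keystone'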
    
  7. On all OpenStack Controller nodes:

    1. Copy the current credential and fernet keys to temporary directories:

      mkdir /tmp/keystone_credential /tmp/keystone_fernet
      cp /var/lib/keystone/credential-keys/* /tmp/keystone_credential
      cp /var/lib/keystone/fernet-keys/* /tmp/keystone_fernet
      
    2. Unmount the related GlusterFS mount points:

      umount /var/lib/keystone/credential-keys
      umount /var/lib/keystone/fernet-keys
      
    3. Copy the keys from the temporary directories back to /var/lib/keystone/credential-keys/ and /var/lib/keystone/fernet-keys/:

      mkdir -p /var/lib/keystone/credential-keys/ /var/lib/keystone/fernet-keys/
      cp /tmp/keystone_credential/* /var/lib/keystone/credential-keys/
      cp /tmp/keystone_fernet/* /var/lib/keystone/fernet-keys/
      chown -R keystone:keystone /var/lib/keystone/credential-keys/*
      chown -R keystone:keystone /var/lib/keystone/fernet-keys/*
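
    After completing these steps on all controller nodes, you can confirm that the keys are identical everywhere by comparing their checksums, for example:

      salt -C 'I@keystone:server' cmd.run 'md5sum /var/lib/keystone/fernet-keys/*'
      salt -C 'I@keystone:server' cmd.run 'md5sum /var/lib/keystone/credential-keys/*'

    The output must match across all Keystone nodes.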
      
  8. On a KVM node, stop and delete the keystone-credential-keys and keystone-keys volumes:

    1. Stop the volumes:

      gluster volume stop keystone-credential-keys
      gluster volume stop keystone-keys
      
    2. Delete the GlusterFS volumes:

      gluster volume delete keystone-credential-keys
      gluster volume delete keystone-keys
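
    To verify that the volumes were removed, list the remaining GlusterFS volumes:

      gluster volume list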
      
  9. On the cluster level of your deployment model, remove the following GlusterFS classes that are included in the openstack/control.yml file by default:

    - system.glusterfs.server.volume.keystone
    - system.glusterfs.client.volume.keystone
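
    After removing the classes, refresh the pillar data on the nodes so that the change takes effect, for example:

      salt '*' saltutil.refresh_pillar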