Verify the GlusterFS volumes status

This section describes how to verify the status of the GlusterFS volumes and troubleshoot any detected issues.

To verify the GlusterFS volumes status:

  1. Log in to the Salt Master node.

  2. Verify the GlusterFS volumes status:

    salt -C 'I@glusterfs:server' cmd.run "gluster volume status all"
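
    Optionally, you can also verify that all GlusterFS peers are connected. For example:

    salt -C 'I@glusterfs:server' cmd.run "gluster peer status"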
    
  3. If the system output contains errors, such as in the example below, and/or the volume status cannot be retrieved, refer to the official GlusterFS documentation to resolve the issues.

    Example of system response:

    kvm01.cookied-cicd-bm-os-contrail40-maas.local:
    Another transaction is in progress for aptly. Please try again after sometime.
    Another transaction is in progress for gerrit. Please try again after sometime.
    Another transaction is in progress for keystone-credential-keys. Please try again after sometime.
    Another transaction is in progress for mysql. Please try again after sometime.
    Another transaction is in progress for registry. Please try again after sometime.
    Locking failed on 10.167.8.243. Please check log file for details.
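
    Locking errors such as the ones in the example above typically involve the glusterd management daemon. As a starting point, you can check its state and recent log entries on the kvm nodes. Note that the service name (glusterd below) and the log file path may differ depending on the GlusterFS version and packaging:

    salt -C 'I@glusterfs:server' cmd.run "systemctl status glusterd"
    salt -C 'I@glusterfs:server' cmd.run "tail -n 50 /var/log/glusterfs/glusterd.log"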
    
  4. Inspect the GlusterFS server logs for the volumes in /var/log/glusterfs/bricks/srv-glusterfs-<volume name>.log on the kvm nodes.
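
    For example, to display recent error messages from the brick logs on all GlusterFS server nodes, adjusting the search pattern and the number of lines as needed:

    salt -C 'I@glusterfs:server' cmd.run "grep -i error /var/log/glusterfs/bricks/*.log | tail -n 50"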

  5. In case of any issues with the GlusterFS replication status, stop all GlusterFS volume-related services to prevent data corruption and immediately proceed with troubleshooting to restore the volume in question.
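
    For example, to inspect the replication (self-heal) status of a particular volume, substitute <volume name> with the name of the affected volume:

    salt -C 'I@glusterfs:server' cmd.run "gluster volume heal <volume name> info"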

  6. If you need to reboot a kvm node that hosts GlusterFS, verify that all GlusterFS clients have their volumes mounted after the node reboot:

    1. Log in to the Salt Master node.

    2. Identify the GlusterFS VIP address:

      salt-call pillar.get _param:infra_kvm_address


      Example of system response:

      local:
          10.167.8.240
      
    3. Verify that all GlusterFS clients have volumes mounted. For example:

      salt -C 'I@glusterfs:client and not I@glusterfs:server' cmd.run "mount | grep 10.167.8.240"
      

      Example of system response:

      prx02.cookied-cicd-bm-os-contrail40-maas.local:
          10.167.8.240:/salt_pki on /srv/salt/pki type fuse.glusterfs
      (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
      ctl01.cookied-cicd-bm-os-contrail40-maas.local:
          10.167.8.240:/keystone-credential-keys on /var/lib/keystone/credential-keys type fuse.glusterfs
      (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
          10.167.8.240:/keystone-keys on /var/lib/keystone/fernet-keys type fuse.glusterfs
      (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
      cid03.cookied-cicd-bm-os-contrail40-maas.local:
      prx01.cookied-cicd-bm-os-contrail40-maas.local:
          10.167.8.240:/salt_pki on /srv/salt/pki type fuse.glusterfs
      (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
      cid01.cookied-cicd-bm-os-contrail40-maas.local:
      cid02.cookied-cicd-bm-os-contrail40-maas.local:
      ctl03.cookied-cicd-bm-os-contrail40-maas.local:
      cfg01.cookied-cicd-bm-os-contrail40-maas.local:
          10.167.8.240:/salt_pki on /srv/salt/pki type fuse.glusterfs
          (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
      ctl02.cookied-cicd-bm-os-contrail40-maas.local:
      

      In the example above, several VMs, for example, cid01, cid02, and cid03, do not have any GlusterFS volumes mounted. In this case, reboot the corresponding VMs and verify the status again.
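
      Optionally, if the missing volumes are defined in /etc/fstab on the affected VMs, you can first try remounting them without a reboot. For example, for the cid01 node:

      salt cid01.cookied-cicd-bm-os-contrail40-maas.local cmd.run "mount -a"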

    4. Verify that all mounted volumes identified in the previous step match the pillar information on the corresponding VM. For example:

      salt prx02.cookied-cicd-bm-os-contrail40-maas.local pillar.get glusterfs:client:volumes
      

      Example of system response:

      ----------
          salt_pki:
          ----------
          opts:
              defaults,backup-volfile-servers=10.167.8.241:10.167.8.242:10.167.8.243
          path:
              /srv/salt/pki
          server:
              10.167.8.240
      

      In the output above, the server IP address and the details of the only mounted volume, salt_pki, match the output for the prx02 VM shown in the previous step.
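
      To compare the mounted volumes against the pillar data for all GlusterFS clients at once, you can broaden the target. For example:

      salt -C 'I@glusterfs:client and not I@glusterfs:server' pillar.get glusterfs:client:volumes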

    Caution

    Do not reboot the next kvm node that hosts the GlusterFS cluster until all volumes are mounted on the VMs of the first rebooted kvm node.