Verify an OpenStack service status

To ensure that an OpenStack service is up and running, verify the service status on every controller node. Some OpenStack services require additional verification on the non-controller nodes. The following table describes the verification steps for the common OpenStack services.

Note

In the table below, the output of the service <SERVICE_NAME> status command should contain the service status and the process ID unless indicated otherwise. For example, neutron-server start/running, process 283.

Verifying an OpenStack service status
Ceilometer
  1. On every MongoDB node, run:

    # service mongodb status
    # netstat -nltp | grep mongo
    

    The output of the netstat command should list the management and local IP addresses and ports in the LISTEN status.

  2. On every controller node, run:

    # service ceilometer-agent-central status
    # service ceilometer-api status
    # service ceilometer-agent-notification status
    # service ceilometer-collector status
    
  3. On every compute node, run:

    # service ceilometer-polling status
    
  4. On any controller node, run pcs status | grep ceilometer or crm status | grep ceilometer to verify which node currently handles the Ceilometer requests and to check the resource status. The output should contain the node ID and the Started status.
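
    For example, on a Fuel deployment the output may look similar to the following, where the resource and node names are illustrative:

    # pcs status | grep ceilometer
     p_ceilometer-agent-central (ocf::fuel:ceilometer-agent-central): Started node-1.domain.tld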

Cinder
  1. On every controller node, run:

    # service cinder-api status
    # service cinder-scheduler status
    
  2. On every node with the Cinder role, run:

    # service cinder-volume status
    # service cinder-backup status
    
Corosync/Pacemaker

On every controller node:

  1. Run service corosync status and service pacemaker status.
  2. Verify the output of the pcs status or crm status command. The Online field should contain all the controllers’ host names.
  3. Verify the output of the pcs resource show or crm resource show command. All resources should be Started.
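
For example, on a three-controller cluster the pcs status output should include a line similar to the following, where the host names are illustrative:

  Online: [ node-1.domain.tld node-2.domain.tld node-3.domain.tld ]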
Glance

On every controller node, run:

# service glance-api status
# service glance-registry status
Heat
  1. On any controller node, verify the status of Heat engines:

    # source openrc
    # heat service-list
    

    The output should contain a table listing the Heat engines on all controller nodes, each with the up status (see the sample output after these steps).

  2. On every controller node, run:

    # service heat-api status
    # service heat-api-cfn status
    # service heat-api-cloudwatch status
    # service heat-engine status
    
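For reference, the heat service-list output from step 1 resembles the following truncated example, where the host names are illustrative and some columns are omitted:

  +-------------------+-------------+--------+
  | hostname          | binary      | status |
  +-------------------+-------------+--------+
  | node-1.domain.tld | heat-engine | up     |
  | node-2.domain.tld | heat-engine | up     |
  | node-3.domain.tld | heat-engine | up     |
  +-------------------+-------------+--------+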
Horizon

Since the Horizon service is available through the Apache server, you should verify the Apache service status as well. Complete the following steps on all controller nodes:

  1. Verify whether the Apache service is running using the service apache2 status command.
  2. Verify whether the Horizon ports are open and listening using the netstat -nltp | egrep ':80|:443' command. The output should contain the management and local IP addresses with either port 80 or 443 in the LISTEN status.
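
For example, the netstat output should include lines similar to the following, where the IP addresses and process IDs are illustrative:

  # netstat -nltp | egrep ':80|:443'
  tcp   0   0 192.168.0.2:80   0.0.0.0:*   LISTEN   1234/apache2
  tcp   0   0 10.20.0.4:80     0.0.0.0:*   LISTEN   1234/apache2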
Ironic
  1. On every controller node, run service ironic-api status.
  2. On every Ironic node, run service ironic-conductor status.
  3. On any controller node, run pcs status | grep ironic. The output should contain the name or ID of the node where the p_nova_compute_ironic resource is running.
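
For example, the output may look similar to the following, where the resource agent and node name are illustrative:

  # pcs status | grep ironic
   p_nova_compute_ironic (ocf::fuel:NovaCompute): Started node-2.domain.tld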
Keystone

Since the Keystone service is available through the Apache server, you should verify the Apache service status as well. Complete the following steps on all controller nodes (and the nodes with the Keystone role, if any):

  1. Verify whether the Apache service is running using service apache2 status.
  2. Verify whether the Keystone ports are open and listening using netstat -nltp | egrep '5000|35357'. The output should contain the management and local IP addresses with the ports 5000 and 35357 in the LISTEN status.
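
For example, the netstat output should include lines similar to the following, where the IP addresses and process IDs are illustrative:

  # netstat -nltp | egrep '5000|35357'
  tcp   0   0 192.168.0.2:5000    0.0.0.0:*   LISTEN   2101/apache2
  tcp   0   0 192.168.0.2:35357   0.0.0.0:*   LISTEN   2101/apache2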
MySQL/Galera

On any controller node:

  1. Verify the output of the pcs status | grep -A1 clone_p_mysql or crm status | grep -A1 clone_p_mysql command. The clone_p_mysqld resource should be in the Started status on all controllers.
  2. Verify the output of the mysql -e "show status" | egrep 'wsrep_(local_state|incoming_address)' command. The wsrep_local_state_comment variable should be Synced, and the wsrep_incoming_address field should contain the IP addresses of all controller nodes (in the management network).
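
For example, on a healthy three-controller Galera cluster the output resembles the following, where the IP addresses are illustrative:

  # mysql -e "show status" | egrep 'wsrep_(local_state|incoming_address)'
  wsrep_local_state          4
  wsrep_local_state_comment  Synced
  wsrep_incoming_address     192.168.0.2:3306,192.168.0.3:3306,192.168.0.4:3306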
Neutron
  1. On every compute node, run:

    # service neutron-openvswitch-agent status
    
  2. On every controller node:

    1. Verify the neutron-server service status:

      # service neutron-server status
      
    2. Verify the statuses of the Neutron agents:

      # service neutron-metadata-agent status
      # service neutron-dhcp-agent status
      # service neutron-l3-agent status
      # service neutron-openvswitch-agent status
      
  3. On any controller node:

    1. Verify the states of the Neutron agents:

      # source openrc
      # neutron agent-list
      

      The output table should list all the Neutron agents with the :-) value in the alive column and the True value in the admin_state_up column.

    2. Verify the Corosync/Pacemaker status:

      # pcs status | grep -A2 neutron
      

      The output should contain the Neutron resources in the Started status for all controller nodes.
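
For reference, the neutron agent-list output from step 3 resembles the following truncated example, where the host names are illustrative and some columns are omitted:

  +--------------------+-------------------+-------+----------------+
  | agent_type         | host              | alive | admin_state_up |
  +--------------------+-------------------+-------+----------------+
  | Open vSwitch agent | node-1.domain.tld | :-)   | True           |
  | DHCP agent         | node-1.domain.tld | :-)   | True           |
  | L3 agent           | node-1.domain.tld | :-)   | True           |
  +--------------------+-------------------+-------+----------------+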

Nova
  • Using the Fuel CLI:

    1. On every controller node, run:

      # service nova-api status
      # service nova-cert status
      # service nova-conductor status
      # service nova-consoleauth status
      # service nova-novncproxy status
      # service nova-scheduler status
      # service nova-spicehtml5proxy status
      # service nova-xvpvncproxy status
      
    2. On every compute node, run service nova-compute status.

  • Using the Nova CLI:

    # source openrc
    # nova service-list
    

    The output should contain a table listing the Nova services. Each service's status should be enabled and its state should be up.
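
    For reference, the nova service-list output resembles the following truncated example, where the host names are illustrative and some columns are omitted:

      +------------------+-------------------+---------+-------+
      | Binary           | Host              | Status  | State |
      +------------------+-------------------+---------+-------+
      | nova-scheduler   | node-1.domain.tld | enabled | up    |
      | nova-conductor   | node-1.domain.tld | enabled | up    |
      +------------------+-------------------+---------+-------+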

RabbitMQ
  • On any controller node, run rabbitmqctl cluster_status.

    In the output, the running_nodes field should contain all the controllers’ host names in the rabbit@<HOSTNAME> format. The partitions field should be empty.
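
    For example, on a three-controller cluster the output should include entries similar to the following, where the host names are illustrative:

      # rabbitmqctl cluster_status
      ...
      {running_nodes,[rabbit@node-1,rabbit@node-2,rabbit@node-3]},
      ...
      {partitions,[]}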

Swift
  • On every controller node, run:

    # service swift-account-auditor status
    # service swift-account status
    # service swift-account-reaper status
    # service swift-account-replicator status
    # service swift-container-auditor status
    # service swift-container status
    # service swift-container-reconciler status
    # service swift-container-replicator status
    # service swift-container-sync status
    # service swift-container-updater status
    # service swift-object-auditor status
    # service swift-object status
    # service swift-object-reconstructor status
    # service swift-object-replicator status
    # service swift-object-updater status
    # service swift-proxy status