Ceilometer

1. Log in to a controller node CLI.
2. Restart the Ceilometer services:
   # service ceilometer-agent-central restart
   # service ceilometer-api restart
   # service ceilometer-agent-notification restart
   # service ceilometer-collector restart
3. Verify the status of the Ceilometer services. See Verify an OpenStack service status. See also the example below.
4. Repeat steps 1-3 on all controller nodes.
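As a quick way to run the status check from step 3 across all four services in one pass, a minimal shell sketch using the same init service names as above:

# for s in ceilometer-agent-central ceilometer-api ceilometer-agent-notification ceilometer-collector; do service $s status; done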
Cinder

1. Log in to a controller node CLI.
2. Restart the Cinder services:
   # service cinder-api restart
   # service cinder-scheduler restart
3. Verify the status of the Cinder services. See Verify an OpenStack service status.
4. Repeat steps 1-3 on all controller nodes.
5. On every node with the Cinder role, run:
   # service cinder-volume restart
   # service cinder-backup restart
6. Verify the status of the cinder-volume and cinder-backup services (see the example below).
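For example, the check in step 6 can look like this on a Cinder node. The cinder service-list call is optional and assumes admin credentials are sourced (for example from an openrc file); it shows the state of every volume and backup service cluster-wide:

# service cinder-volume status
# service cinder-backup status
# cinder service-list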
Corosync/Pacemaker

1. Log in to a controller node CLI.
2. Restart the Corosync and Pacemaker services:
   # service corosync restart
   # service pacemaker restart
3. Verify the status of the Corosync and Pacemaker services. See Verify an OpenStack service status.
4. Repeat steps 1-3 on all controller nodes.
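Beyond the per-service status, you may also want to confirm that the cluster itself is healthy after the restart; a minimal sketch:

# corosync-cfgtool -s
# pcs status

corosync-cfgtool -s reports the ring status of the local node, while pcs status summarizes all Pacemaker-managed resources across the controllers.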
Glance

1. Log in to a controller node CLI.
2. Restart the Glance services:
   # service glance-api restart
   # service glance-registry restart
3. Verify the status of the Glance services. See Verify an OpenStack service status.
4. Repeat steps 1-3 on all controller nodes.
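As a functional check on top of the service status, you can list images through the Glance API (assuming admin credentials are sourced, for example from an openrc file):

# glance image-list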
Horizon

Since the Horizon service is available through the Apache server, you should restart the Apache service on all controller nodes:

1. Log in to a controller node CLI.
2. Restart the Apache server:
   # service apache2 restart
3. Verify that the Apache service is running after the restart and that the Apache ports are open and listening:
   # netstat -nltp | egrep apache2
4. Repeat steps 1-3 on all controller nodes.
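To confirm that the dashboard actually responds after the restart, an HTTP check such as the following can help. The /horizon path and the use of localhost are assumptions; adjust them to your deployment (for example, to the public VIP):

# curl -sI http://localhost/horizon | head -1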
Ironic

1. Log in to a controller node CLI.
2. Restart the Ironic services:
   # service ironic-api restart
   # service ironic-conductor restart
3. Verify the status of the Ironic services. See Verify an OpenStack service status.
4. Repeat steps 1-3 on all controller nodes.
5. On any controller node, run the following command for the nova-compute service configured to work with Ironic:
   # crm resource restart p_nova_compute_ironic
6. Verify the status of the p_nova_compute_ironic service (see the example below).
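A minimal sketch of the check in step 6, using the same crm tool as the restart command; the resource should be reported as Started:

# crm status | grep p_nova_compute_ironic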
Keystone

Since the Keystone service is available through the Apache server, complete the following steps on all controller nodes:

1. Log in to a controller node CLI.
2. Restart the Apache server:
   # service apache2 restart
3. Verify that the Apache service is running after the restart and that the Apache ports are open and listening:
   # netstat -nltp | egrep apache2
4. Repeat steps 1-3 on all controller nodes.
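As a functional check that Keystone itself answers requests after the restart, you can request a token. This assumes admin credentials are sourced and the openstack client is installed:

# openstack token issue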
MySQL

1. Log in to any controller node CLI.
2. Run the following command:
   # pcs status | grep -A1 mysql
   In the output, the clone_p_mysqld resource should be in the Started status.
3. Disable the clone_p_mysqld resource:
   # pcs resource disable clone_p_mysqld
4. Verify that the clone_p_mysqld resource is in the Stopped status:
   # pcs status | grep -A2 mysql
   It may take some time for this resource to be stopped on all controller nodes.
5. Enable the clone_p_mysqld resource:
   # pcs resource enable clone_p_mysqld
6. Verify that the clone_p_mysqld resource is in the Started status again on all controller nodes:
   # pcs status | grep -A2 mysql

Warning: Use the pcs commands instead of crm for restarting the service. The pcs tool correctly stops the service according to the quorum policy, preventing MySQL failures.
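Once the resource is back in the Started status, you can also confirm that the database cluster reassembled. A minimal sketch, assuming a Galera-based MySQL (as is typical in this HA layout) and that the MySQL root user can connect from the local shell; the reported value should equal the number of controller nodes:

# mysql -e "SHOW STATUS LIKE 'wsrep_cluster_size'"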
Neutron

Use the following restart steps for the DHCP Neutron agent as an example for all Neutron agents.

1. Log in to any controller node CLI.
2. Verify the DHCP agent status:
   # pcs resource show | grep -A1 neutron-dhcp-agent
   The output should list all controllers in the Started status.
3. Stop the DHCP agent:
   # pcs resource disable clone_neutron-dhcp-agent
4. Verify the Corosync status of the DHCP agent:
   # pcs resource show | grep -A1 neutron-dhcp-agent
   The output should list all controllers in the Stopped status.
5. Verify the neutron-dhcp-agent status on the OpenStack side (see the example after this procedure). The output table should contain the DHCP agents for every controller node with xxx in the alive column.
6. Start the DHCP agent on every controller node:
   # pcs resource enable clone_neutron-dhcp-agent
7. Verify the DHCP agent status:
   # pcs resource show | grep -A1 neutron-dhcp-agent
   The output should list all controllers in the Started status.
8. Verify the neutron-dhcp-agent status on the OpenStack side again. The output table should contain the DHCP agents for every controller node with :-) in the alive column and True in the admin_state_up column.
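The OpenStack-side check in steps 5 and 8 is not spelled out above; a minimal sketch, assuming admin credentials are sourced and the legacy neutron client is installed:

# neutron agent-list | grep neutron-dhcp-agent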
Nova

1. Log in to a controller node CLI.
2. Restart the Nova services:
   # service nova-api restart
   # service nova-cert restart
   # service nova-compute restart
   # service nova-conductor restart
   # service nova-consoleauth restart
   # service nova-novncproxy restart
   # service nova-scheduler restart
   # service nova-spicehtml5proxy restart
   # service nova-xenvncproxy restart
3. Verify the status of the Nova services. See Verify an OpenStack service status.
4. Repeat steps 1-3 on all controller nodes.
5. On every compute node, run:
   # service nova-compute restart
6. Verify the status of the nova-compute service (see the example below).
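For a cluster-wide view of step 6, the Nova API can report the state of every registered service (assuming admin credentials are sourced and the nova client is installed); each nova-compute entry should show up in the State column:

# nova service-list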
RabbitMQ

1. Log in to any controller node CLI.
2. Disable the RabbitMQ service:
   # pcs resource disable master_p_rabbitmq-server
3. Verify that the service is stopped:
   # pcs status | grep -A2 rabbitmq
4. Enable the service:
   # pcs resource enable master_p_rabbitmq-server
   During the startup process, the output of the pcs status command can show all existing RabbitMQ services in the Slaves mode.
5. Verify the service status:
   # rabbitmqctl cluster_status
   In the output, the running_nodes field should contain all controllers' host names in the rabbit@<HOSTNAME> format. The partitions field should be empty.
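If you only want to confirm the two conditions from step 5 without reading the whole report, a minimal grep sketch:

# rabbitmqctl cluster_status | grep -E 'running_nodes|partitions'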
Swift

1. Log in to a controller node CLI.
2. Restart the Swift services:
   # service swift-account-auditor restart
   # service swift-account restart
   # service swift-account-reaper restart
   # service swift-account-replicator restart
   # service swift-container-auditor restart
   # service swift-container restart
   # service swift-container-reconciler restart
   # service swift-container-replicator restart
   # service swift-container-sync restart
   # service swift-container-updater restart
   # service swift-object-auditor restart
   # service swift-object restart
   # service swift-object-reconstructor restart
   # service swift-object-replicator restart
   # service swift-object-updater restart
   # service swift-proxy restart
3. Verify the status of the Swift services. See Verify an OpenStack service status. See also the example below.
4. Repeat steps 1-3 on all controller nodes.
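Given the number of Swift services, a loop can make the status check in step 3 less error-prone. A minimal sketch, shown here for the four core services; extend the list to cover the auditors, reapers, replicators, updaters, and other helpers restarted in step 2 as needed:

# for s in swift-proxy swift-account swift-container swift-object; do service $s status; done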