This section describes how to roll back RabbitMQ to a clustered configuration after switching it to a nonclustered configuration as described in Switch to nonclustered RabbitMQ.
Note
After performing the rollback procedure, you may notice a number of down heat-engine instances of a previous version among the running heat-engine instances. Such behavior is abnormal but expected. Verify the Updated At field of the running heat-engine instances and ignore the stopped (down) instances of heat-engine.
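The verification in the note above can be sketched as a filter over the status listing of the orchestration services. This is a minimal sketch, assuming value-format output of openstack orchestration service list (Host, Binary, Updated At, Status columns); the sample lines below are illustrative, not real deployment data.

```shell
# Keep only the heat-engine instances reported as "up" so their
# Updated At column can be inspected; down instances are ignored,
# as the note instructs.
filter_up_engines() {
  awk '$NF == "up" { print }'
}

# Example usage against captured output (illustrative sample lines):
printf '%s\n' \
  'ctl01 heat-engine 2020-04-01T10:00:00 up' \
  'ctl02 heat-engine 2020-03-01T10:00:00 down' |
filter_up_engines
```

In a live environment, pipe the output of openstack orchestration service list -f value into the filter instead of the sample lines.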
To roll back RabbitMQ to a clustered configuration:
If you have removed the non-clustered-rabbit-helpers.sh script, create it again as described in Switch to nonclustered RabbitMQ.
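Since later steps source this script with the dot command, it can help to fail early with a clear message when the file is missing. A hedged convenience check, not part of the official procedure:

```shell
# Print an error and return non-zero if the helper script is absent,
# instead of getting a terse "No such file" from the dot command later.
require_helpers() {
  if [ ! -f "$1" ]; then
    echo "ERROR: $1 not found; recreate it first" >&2
    return 1
  fi
}

# Usage before each "Roll back the changes" step:
#   require_helpers /root/non-clustered-rabbit-helpers.sh && \
#     . /root/non-clustered-rabbit-helpers.sh
```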
Revert the changes made to the cluster model in step 2 of Switch to nonclustered RabbitMQ. For example, use git stash if you did not commit the changes.
From the Salt Master node, refresh pillars on all nodes:
salt '*' saltutil.sync_all; salt '*' saltutil.refresh_pillar
Roll back the changes on the RabbitMQ nodes:
salt -C 'I@rabbitmq:server' cmd.run 'systemctl stop rabbitmq-server'
salt -C 'I@rabbitmq:server' cmd.run 'rm -rf /var/lib/rabbitmq/mnesia/'
salt -C 'I@rabbitmq:server' state.apply keepalived
salt -C 'I@rabbitmq:server' state.apply haproxy
salt -C 'I@rabbitmq:server' state.apply telegraf
salt -C 'I@rabbitmq:server' state.apply rabbitmq
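The four state.apply runs above can be expressed as one loop. The sketch below prints the commands as a dry run; drop the echo (or pipe the output to sh) to execute them for real, after the stop and mnesia cleanup commands have completed:

```shell
# Dry-run sketch of the state.apply sequence on the RabbitMQ nodes.
# The order matters: keepalived and haproxy restore the VIP and load
# balancing before telegraf and rabbitmq are reapplied.
rollback_rabbit_states() {
  for state in keepalived haproxy telegraf rabbitmq; do
    echo "salt -C 'I@rabbitmq:server' state.apply $state"
  done
}

rollback_rabbit_states
```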
Verify that the RabbitMQ server is running in a clustered configuration:
salt -C 'I@rabbitmq:server' cmd.run "rabbitmqctl --formatter=erlang cluster_status | grep running_nodes"
Example of system response:
msg01.heat-cicd-queens-dvr-sl.local:
{running_nodes,[rabbit@msg02,rabbit@msg03,rabbit@msg01]},
msg02.heat-cicd-queens-dvr-sl.local:
{running_nodes,[rabbit@msg01,rabbit@msg03,rabbit@msg02]},
msg03.heat-cicd-queens-dvr-sl.local:
{running_nodes,[rabbit@msg02,rabbit@msg01,rabbit@msg03]},
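As a quick sanity check on this output, you can count the distinct node names in a running_nodes line; a healthy three-node cluster reports 3. A small sketch, using a sample line copied from the example response rather than live output:

```shell
# Count the unique rabbit@<node> entries in a running_nodes line.
count_running_nodes() {
  grep -o 'rabbit@[[:alnum:]]*' | sort -u | wc -l
}

echo '{running_nodes,[rabbit@msg02,rabbit@msg03,rabbit@msg01]},' | count_running_nodes
```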
Roll back the changes on other nodes:
Roll back the changes on the ctl nodes:
. /root/non-clustered-rabbit-helpers.sh
run_openstack_states ctl*
Roll back the changes on the gtw nodes. Skip this step if your deployment has OpenContrail or does not have gtw nodes.
. /root/non-clustered-rabbit-helpers.sh
run_openstack_states gtw*
If your environment has OpenContrail, roll back the changes on the ntw and nal nodes:
salt -C 'ntw* or nal*' state.apply opencontrail
Roll back the changes on the cmp nodes:
. /root/non-clustered-rabbit-helpers.sh
run_openstack_states cmp*
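The per-node rollbacks above can be driven in one pass. A dry-run sketch that only prints the calls, assuming a deployment with gtw nodes and without OpenContrail; adjust the target list to match your environment, then remove the echo and source the helper script before running:

```shell
# Dry-run sketch of the node rollback order: ctl first, then gtw,
# then cmp, mirroring the individual steps above.
rollback_openstack_nodes() {
  for target in 'ctl*' 'gtw*' 'cmp*'; do
    echo "run_openstack_states $target"
  done
}

rollback_openstack_nodes
```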