This section describes how to safely remove a compute node from your OpenStack environment.
To remove a compute node:
Stop and disable the salt-minion service on the compute node you want to remove:
systemctl stop salt-minion
systemctl disable salt-minion
Verify that the name of the node is not registered in salt-key on the Salt Master node. If the node is present, remove it:
salt-key | grep cmp<NUM>
salt-key -d cmp<NUM>.domain_name
Log in to an OpenStack controller node.
Source the OpenStack RC file to set the required environment variables for the OpenStack command-line clients:
source keystonercv3
Disable the nova-compute service on the target compute node:
openstack compute service set --disable <cmp_host_name> nova-compute
Verify that Nova does not schedule new instances on the target compute node by viewing the output of the following command:
openstack compute service list
The command output should display the disabled status for the nova-compute service running on the target compute node.
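You can also limit the output to the node in question; a minimal sketch, assuming your python-openstackclient version supports the --host and --service filters:
# Show only the nova-compute service on the node being removed;
# its Status column should read "disabled"
openstack compute service list --service nova-compute --host <cmp_host_name>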
Migrate your instances using the openstack server migrate command. You can perform live or cold migration.
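For example, you can list the instances still hosted on the node and migrate them one by one. The following is a sketch only: <instance_id> is a placeholder, and the live migration option is --live <target_host> on older clients or --live-migration on newer ones:
# List instances running on the compute node being removed
openstack server list --all-projects --host <cmp_host_name>
# Cold migrate an instance off the node
openstack server migrate <instance_id>
# Or live migrate it (newer python-openstackclient syntax)
openstack server migrate --live-migration <instance_id>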
Log in to the target compute node.
Stop and disable the nova-compute service:
systemctl disable nova-compute
systemctl stop nova-compute
Log in to the OpenStack controller node.
Obtain the ID of the compute service to delete:
openstack compute service list
Delete the compute service, substituting service_id with the value obtained in the previous step:
openstack compute service delete <service_id>
Select from the following options:
For deployments with OpenContrail:
Log in to the target compute node.
Stop and disable the supervisor-vrouter service:
systemctl stop supervisor-vrouter
systemctl disable supervisor-vrouter
Log in to the OpenContrail UI.
Navigate to Configure > Infrastructure > Virtual Routers.
Select the target compute node.
Click Delete.
For deployments with OVS:
Stop and disable the neutron-openvswitch-agent service:
systemctl disable neutron-openvswitch-agent.service
systemctl stop neutron-openvswitch-agent.service
Obtain the ID of the target compute node agent:
openstack network agent list
Delete the network agent, substituting cmp_agent_id with the value obtained in the previous step:
openstack network agent delete <cmp_agent_id>
If you plan to replace the removed compute node with a new one that has the same hostname, you must manually clean up the resource provider record from the placement service using the curl tool:
Log in to an OpenStack controller node.
Obtain the token ID from the openstack token issue command output. For example:
openstack token issue
+------------+-------------------------------------+
| Field | Value |
+------------+-------------------------------------+
| expires | 2018-06-22T10:30:17+0000 |
| id | gAAAAABbLMGpVq2Gjwtc5Qqmp... |
| project_id | 6395787cdff649cdbb67da7e692cc592 |
| user_id | 2288ac845d5a4e478ffdc7153e389310 |
+------------+-------------------------------------+
Obtain the resource provider UUID of the target compute node:
curl -i -X GET <placement-endpoint-address>/resource_providers?name=<target-compute-host-name> -H \
'content-type: application/json' -H 'X-Auth-Token: <token>'
Substitute the following parameters as required:
placement-endpoint-address
The placement endpoint can be obtained from the openstack catalog list command output (see the sketch below). A placement endpoint includes the scheme, endpoint address, and port, for example, http://10.11.0.10:8778. Depending on the deployment, you may need to specify the https scheme rather than http.
target-compute-host-name
The hostname of the compute node you are removing. For the correct hostname format to pass, see the Hypervisor Hostname column in the openstack hypervisor list command output.
token
The token id value obtained in the previous step.
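If you prefer to query the catalog for the placement endpoint directly, the following is a minimal sketch, assuming the service is registered under the name placement:
# Show the endpoints registered for the placement service
openstack catalog show placement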
Example of system response:
{
"resource_providers": [
{
"generation": 1,
"uuid": "08090377-965f-4ad8-9a1b-87f8e8153896",
"links": [
{
"href": "/resource_providers/08090377-965f-4ad8-9a1b-87f8e8153896",
"rel": "self"
},
{
"href": "/resource_providers/08090377-965f-4ad8-9a1b-87f8e8153896/aggregates",
"rel": "aggregates"
},
{
"href": "/resource_providers/08090377-965f-4ad8-9a1b-87f8e8153896/inventories",
"rel": "inventories"
},
{
"href": "/resource_providers/08090377-965f-4ad8-9a1b-87f8e8153896/usages",
"rel": "usages"
}
],
"name": "<compute-host-name>"
}
]
}
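If the jq tool is installed on the controller node, you can extract the resource provider UUID without reading the raw JSON; a sketch using the same placeholders as above:
# Query the placement API and print only the resource provider UUID
curl -s -X GET "<placement-endpoint-address>/resource_providers?name=<target-compute-host-name>" \
  -H 'content-type: application/json' -H 'X-Auth-Token: <token>' | jq -r '.resource_providers[0].uuid'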
Delete the resource provider record from the placement service, substituting placement-endpoint-address, target-compute-node-uuid, and token with the values obtained in the previous steps:
curl -i -X DELETE <placement-endpoint-address>/resource_providers/<target-compute-node-uuid> -H \
'content-type: application/json' -H 'X-Auth-Token: <token>'
Log in to the Salt Master node.
Remove the compute node definition from the model in infra/config.yml under the reclass:storage:node pillar.
Remove the generated file for the removed compute node under /srv/salt/reclass/nodes/_generated.
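For example, on the Salt Master node you can locate the definition in the model and remove the generated file; a sketch that assumes the default /srv/salt/reclass layout and uses cmp<NUM> and domain_name as placeholders (the generated file is typically named after the node FQDN):
# Find the compute node definition in the cluster model
grep -rn 'cmp<NUM>' /srv/salt/reclass/classes/cluster/
# Remove the generated node definition
rm /srv/salt/reclass/nodes/_generated/cmp<NUM>.<domain_name>.yml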
Remove the compute node from StackLight LMA:
Update and clear the Salt mine:
salt -C 'I@salt:minion' state.sls salt.minion.grains
salt -C 'I@salt:minion' saltutil.refresh_modules
salt -C 'I@salt:minion' mine.update clear=true
Refresh the targets and alerts:
salt -C 'I@docker:swarm and I@prometheus:server' state.sls prometheus -b 1