Remove a compute node

This section describes how to safely remove a compute node from your OpenStack environment.

To remove a compute node:

  1. Stop and disable the salt-minion service on the compute node you want to remove:

    systemctl stop salt-minion
    systemctl disable salt-minion
    
  2. Verify that the name of the node is not registered in salt-key on the Salt Master node. If the node is present, remove it:

    salt-key | grep cmp<NUM>
    salt-key -d cmp<NUM>.<domain_name>
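
    For example, if the compute node is cmp001 in the example.local domain (the hostname and domain are illustrative):

    salt-key | grep cmp001
    salt-key -d cmp001.example.local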
    
  3. Log in to an OpenStack controller node.

  4. Source the OpenStack RC file to set the required environment variables for the OpenStack command-line clients:

    source keystonercv3
    
  5. Disable the nova-compute service on the target compute node:

    openstack compute service set --disable <cmp_host_name> nova-compute
    
  6. Verify that Nova does not schedule new instances on the target compute node by viewing the output of the following command:

    openstack compute service list
    

    The command output should display the disabled status for the nova-compute service running on the target compute node.
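
    You can also limit the output to the target node only. For example, assuming your client supports the --host and --service filters:

    openstack compute service list --host <cmp_host_name> --service nova-compute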

  7. Migrate your instances using the openstack server migrate command. You can perform live or cold migration.
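
    For example, to list the instances hosted on the target compute node and live migrate one of them (the --live-migration option is available in recent python-openstackclient releases; older clients use --live <target_host> instead):

    openstack server list --host <cmp_host_name> --all-projects
    openstack server migrate --live-migration <server_id>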

  8. Log in to the target compute node.

  9. Stop and disable the nova-compute service:

    systemctl stop nova-compute
    systemctl disable nova-compute
    
  10. Log in to the OpenStack controller node.

  11. Obtain the ID of the compute service to delete:

    openstack compute service list
    
  12. Delete the compute service, substituting <service_id> with the value obtained in the previous step:

    openstack compute service delete <service_id>
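
    Alternatively, assuming your client supports the -f value and -c output options, you can combine the two previous steps:

    openstack compute service delete $(openstack compute service list --host <cmp_host_name> --service nova-compute -f value -c ID)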
    
  13. Select from the following options:

    • For deployments with OpenContrail:

      1. Log in to the target compute node.

      2. Stop and disable the supervisor-vrouter service:

        systemctl stop supervisor-vrouter
        systemctl disable supervisor-vrouter
        
      3. Log in to the OpenContrail UI.

      4. Navigate to Configure > Infrastructure > Virtual Routers.

      5. Select the target compute node.

      6. Click Delete.

    • For deployments with OVS:

      1. Stop and disable the neutron-openvswitch-agent service:

        systemctl stop neutron-openvswitch-agent.service
        systemctl disable neutron-openvswitch-agent.service
        
      2. Obtain the ID of the target compute node agent:

        openstack network agent list
        
      3. Delete the network agent, substituting <cmp_agent_id> with the value obtained in the previous step:

        openstack network agent delete <cmp_agent_id>
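
        Alternatively, assuming your client supports the --host and --agent-type filters and the -f value output option, you can combine the two previous steps:

        openstack network agent delete $(openstack network agent list --host <cmp_host_name> --agent-type open-vswitch -f value -c ID)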
        
  14. If you plan to replace the removed compute node with a new compute node that has the same hostname, manually clean up the resource provider record from the placement service using the curl tool:

    1. Log in to an OpenStack controller node.

    2. Obtain the token ID from the openstack token issue command output. For example:

      openstack token issue
      +------------+-------------------------------------+
      | Field      | Value                               |
      +------------+-------------------------------------+
      | expires    | 2018-06-22T10:30:17+0000            |
      | id         | gAAAAABbLMGpVq2Gjwtc5Qqmp...        |
      | project_id | 6395787cdff649cdbb67da7e692cc592    |
      | user_id    | 2288ac845d5a4e478ffdc7153e389310    |
      +------------+-------------------------------------+
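
      To avoid copying the token manually, you can store it in a shell variable, assuming your client supports the -f value and -c output options:

      TOKEN=$(openstack token issue -f value -c id)

      Then pass $TOKEN instead of <token> in the following steps.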
      
    3. Obtain the resource provider UUID of the target compute node:

      curl -i -X GET "<placement-endpoint-address>/resource_providers?name=<target-compute-host-name>" \
      -H 'content-type: application/json' -H 'X-Auth-Token: <token>'
      

      Substitute the following parameters as required:

      • placement-endpoint-address

        The placement endpoint can be obtained from the openstack catalog list command output. A placement endpoint includes the scheme, endpoint address, and port, for example, http://10.11.0.10:8778. Depending on the deployment, you may need to specify the https scheme rather than http.

      • target-compute-host-name

        The hostname of the compute node you are removing. For the correct hostname format, see the Hypervisor Hostname column in the openstack hypervisor list command output.

      • token

        The token ID value obtained in the previous step.

      Example of system response:

      {
        "resource_providers": [
          {
            "generation": 1,
            "uuid": "08090377-965f-4ad8-9a1b-87f8e8153896",
            "links": [
              {
                "href": "/resource_providers/08090377-965f-4ad8-9a1b-87f8e8153896",
                "rel": "self"
              },
              {
                "href": "/resource_providers/08090377-965f-4ad8-9a1b-87f8e8153896/aggregates",
                "rel": "aggregates"
              },
              {
                "href": "/resource_providers/08090377-965f-4ad8-9a1b-87f8e8153896/inventories",
                "rel": "inventories"
              },
              {
                "href": "/resource_providers/08090377-965f-4ad8-9a1b-87f8e8153896/usages",
                "rel": "usages"
              }
            ],
            "name": "<compute-host-name>"
          }
        ]
      }
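
      If the jq utility is installed on the node, you can extract the resource provider UUID directly from the response. For example:

      curl -s "<placement-endpoint-address>/resource_providers?name=<target-compute-host-name>" \
      -H 'content-type: application/json' -H 'X-Auth-Token: <token>' | jq -r '.resource_providers[0].uuid'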
      
    4. Delete the resource provider record from the placement service substituting placement-endpoint-address, target-compute-node-uuid, and token with the values obtained in the previous steps:

      curl -i -X DELETE "<placement-endpoint-address>/resource_providers/<target-compute-node-uuid>" \
      -H 'content-type: application/json' -H 'X-Auth-Token: <token>'
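
      On success, the placement service returns the 204 No Content HTTP response.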
      
  15. Log in to the Salt Master node.

  16. Remove the compute node definition from the model in infra/config.yml under the reclass:storage:node pillar.

  17. Remove the generated file for the removed compute node under /srv/salt/reclass/nodes/_generated.
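
    For example, assuming the generated node files follow the <host_name>.<domain_name>.yml naming scheme:

    rm /srv/salt/reclass/nodes/_generated/<cmp_host_name>.<domain_name>.yml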

  18. Remove the compute node from StackLight LMA:

    1. Update and clear the Salt mine:

      salt -C 'I@salt:minion' state.sls salt.minion.grains
      salt -C 'I@salt:minion' saltutil.refresh_modules
      salt -C 'I@salt:minion' mine.update clear=true
      
    2. Refresh the targets and alerts:

      salt -C 'I@docker:swarm and I@prometheus:server' state.sls prometheus -b 1