Reprovision a compute node

Reprovisioning a compute node is relatively straightforward because you can run all states at once. However, you must apply the states and reboot the node multiple times for the network configuration changes to take effect.

Note

Multiple reboots are needed because the ordering of dependencies is not yet orchestrated.

To reprovision a compute node:

  1. Verify that the name of the cmp node is not registered in salt-key on the Salt Master node:

    salt-key | grep cmp
    

    If the node is shown in the above command output, remove it:

    salt-key -d cmp<NUM>.domain_name
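
    For example, if the node is still registered under the hypothetical hostname cmp001.example.local, the check and removal might look as follows. The -y option of salt-key skips the deletion confirmation prompt:

    salt-key | grep cmp
    cmp001.example.local

    salt-key -d -y cmp001.example.local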
    
  2. Add a physical node using MAAS as described in the MCP Deployment Guide: Provision physical nodes using MAAS.

  3. Verify that the required nodes are defined in /classes/cluster/<cluster_name>/infra/config.yml.

    Note

    Define as many host entries in this file as there are compute nodes in your environment.

    Configuration example for dynamic compute host generation:

    reclass:
      storage:
        node:
          openstack_compute_rack01:
            name: ${_param:openstack_compute_rack01_hostname}<<count>>
            domain: ${_param:cluster_domain}
            classes:
            - cluster.${_param:cluster_name}.openstack.compute
            repeat:
              count: 20
              start: 1
              digits: 3
              params:
                single_address:
                  value: 172.16.47.<<count>>
                  start: 101
                tenant_address:
                  value: 172.16.47.<<count>>
                  start: 101
            params:
              salt_master_host: ${_param:reclass_config_master}
              linux_system_codename: xenial
    

    Configuration example for static compute host generation:

    reclass:
      storage:
        node:
          openstack_compute_node01:
            name: cmp01
            domain: ${_param:cluster_domain}
            classes:
            - cluster.${_param:cluster_name}.openstack.compute
            params:
              salt_master_host: ${_param:reclass_config_master}
              linux_system_codename: xenial
              single_address: 10.0.0.101
              deploy_address: 10.0.1.101
              tenant_address: 10.0.2.101
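
    After editing config.yml, you can optionally sanity-check the Reclass model before applying any states. A minimal example using the reclass CLI, assuming the default MCP model location /srv/salt/reclass (adjust the path if your deployment differs):

    reclass -b /srv/salt/reclass --inventory | grep cmp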
    
  4. Apply the reclass.storage state on the Salt Master node to generate node definitions:

    salt '*cfg*' state.sls reclass.storage
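
    The state renders the node definitions as files on the Salt Master. Assuming the typical MCP location for generated node definitions (the path may differ in your deployment), you can confirm that the compute definitions now exist:

    ls /srv/salt/reclass/nodes/_generated/ | grep cmp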
    
  5. Verify that the target nodes have connectivity with the Salt Master node:

    salt '*cmp<NUM>*' test.ping
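
    A reachable node responds with True. Illustrative output for a hypothetical node name:

    cmp001.example.local:
        True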
    
  6. Synchronize all Salt modules and resources on the compute node(s):

    salt '*cmp<NUM>*' saltutil.sync_all
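
    For each minion, the command returns the lists of modules, grains, and other resources that were synchronized; empty lists mean the node was already up to date. Illustrative output fragment for a hypothetical node name:

    cmp001.example.local:
        ----------
        beacons:
        grains:
        modules:
        ...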
    
  7. Apply the Salt highstate on the compute node(s):

    salt '*cmp<NUM>*' state.highstate
    

    Note

    Failures may occur during the first run of highstate. Rerun the state until it is successfully applied.
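
    To make remaining failures easier to spot on reruns, you can hide the output of successful states, for example:

    salt '*cmp<NUM>*' state.highstate --state-verbose=False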

  8. Reboot the compute node(s) to apply network configuration changes.
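
    If you do not have console access to the node, you can trigger the reboot from the Salt Master, for example:

    salt '*cmp<NUM>*' system.reboot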

  9. Reapply the Salt highstate on the node(s):

    salt '*cmp<NUM>*' state.highstate
    
  10. Provision the vRouter on the compute node using the CLI or the Contrail web UI. Example of the CLI command:

    salt '*cmp<NUM>*' cmd.run '/usr/share/contrail-utils/provision_vrouter.py \
        --host_name <CMP_HOSTNAME> --host_ip <CMP_IP_ADDRESS> --api_server_ip <CONTRAIL_VIP> \
        --oper add --admin_user admin --admin_password <PASSWORD> \
        --admin_tenant_name admin --openstack_ip <OPENSTACK_VIP>'
    

    Note

    • To obtain <CONTRAIL_VIP>, run salt-call pillar.get _param:keepalived_vip_address on any ntw node.
    • To obtain <OPENSTACK_VIP>, run salt-call pillar.get _param:keepalived_vip_address on any ctl node.
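
    Alternatively, you can query the same pillar values from the Salt Master; the minion name patterns below are illustrative:

    salt 'ntw01*' pillar.get _param:keepalived_vip_address
    salt 'ctl01*' pillar.get _param:keepalived_vip_address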