Add a controller node

This section describes how to add a new control plane node to an existing Mirantis OpenStack for Kubernetes (MOS) deployment.

To add an OpenStack controller node:

  1. Add a bare metal host to the managed cluster with MOS as described in Mirantis Container Cloud Operations Guide: Add a bare metal host using CLI.

    In the bare metal host YAML file, specify the following node labels so that the OpenStack control plane services, such as the database, messaging, API, schedulers, conductors, and L3 and L2 agents, are scheduled on the node (see the sketch after this list):

    • openstack-control-plane=enabled

    • openstack-gateway=enabled

    • openvswitch=enabled
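
    For illustration, a minimal sketch of how these labels might appear in the
    metadata of the bare metal host object. The exact schema depends on the
    Container Cloud release; the placement under metadata.labels is an
    assumption:

      metadata:
        labels:
          openstack-control-plane: enabled
          openstack-gateway: enabled
          openvswitch: enabled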

  2. Create a Kubernetes machine in your cluster as described in Mirantis Container Cloud Operations Guide: Add a machine using CLI.

    When adding the machine, verify that the OpenStack control plane node has the following labels (a command for applying missing labels follows this list):

    • openstack-control-plane=enabled

    • openstack-gateway=enabled

    • openvswitch=enabled
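
    If any of these labels are missing, you can apply them to the node
    directly using the standard kubectl label command; a sketch:

    kubectl label node <NODE-NAME> openstack-control-plane=enabled openstack-gateway=enabled openvswitch=enabled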

    Note

    Depending on the applications that were colocated on the failed controller node, you may need to specify additional labels, for example, ceph_role_mgr=true and ceph_role_mon=true. To successfully replace a failed mon and mgr node, refer to Mirantis Container Cloud Operations Guide: Manage Ceph.

  3. Verify that the node is in the Ready state through the Kubernetes API:

    kubectl get node <NODE-NAME> -o wide | grep Ready
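
    Alternatively, to block until the node reports Ready, you can use kubectl
    wait; a sketch with an assumed 10-minute timeout:

    kubectl wait --for=condition=Ready node/<NODE-NAME> --timeout=10m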
    
  4. Verify that the node has all required labels described in the previous steps:

    kubectl get nodes --show-labels
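
    To list only the nodes that carry all three labels, you can use a label
    selector instead of scanning the full output; a sketch:

    kubectl get nodes -l openstack-control-plane=enabled,openstack-gateway=enabled,openvswitch=enabled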
    
  5. Configure new Octavia health manager resources:

    1. Rerun the octavia-create-resources job:

      kubectl -n osh-system exec -t <OS-CONTROLLER-POD> -c osdpl -- osctl-job-rerun octavia-create-resources openstack
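
      If you do not know the OpenStack Controller pod name, one way to find
      it is to filter the pod list; the openstack-controller name fragment is
      an assumption about the pod naming in your deployment:

      kubectl -n osh-system get pods | grep openstack-controller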
      
    2. Wait until the Octavia health manager pod on the newly added control plane node appears in the Running state:

      kubectl -n openstack get pods -o wide | grep <NODE-NAME> | grep octavia-health-manager
      

      Note

      If the pod is in the CrashLoopBackOff state, remove it:

      kubectl -n openstack delete pod <OCTAVIA-HEALTH-MANAGER-POD-NAME>
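
      The deleted pod should be recreated automatically, assuming the health
      manager is managed by a DaemonSet, which is typical but not confirmed
      by this guide; to watch it return to the Running state, a sketch:

      kubectl -n openstack get pods -w | grep octavia-health-manager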
      
    3. Verify that an OpenStack port for the node has been created and is in the ACTIVE state:

      kubectl -n openstack exec -t <KEYSTONE-CLIENT-POD-NAME> -- openstack port show octavia-health-manager-listen-port-<NODE-NAME>
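
      Similarly, the Keystone client pod name can be located by filtering the
      pod list; keystone-client is an assumed name fragment:

      kubectl -n openstack get pods | grep keystone-client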