Add a controller node¶
This section describes how to add a new control plane node to the existing MOSK deployment.
To add an OpenStack controller node:
Add a bare-metal host to the MOSK cluster as described in Add a bare-metal host.
When adding the bare-metal host YAML file, specify the following node labels for the OpenStack control plane services such as database, messaging, API, schedulers, conductors, and L3 and L2 agents:

openstack-control-plane=enabled
openstack-gateway=enabled
openvswitch=enabled
Create a Kubernetes machine in your cluster as described in Add a machine.
When adding the machine, verify that the OpenStack control plane node has the following labels:

openstack-control-plane=enabled
openstack-gateway=enabled
openvswitch=enabled
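If any of these labels turn out to be missing after the node joins the cluster, they can also be applied directly with kubectl. This is a minimal sketch rather than part of the standard machine workflow; <NODE-NAME> is a placeholder for the new node:

kubectl label node <NODE-NAME> openstack-control-plane=enabled openstack-gateway=enabled openvswitch=enabled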
Note
Depending on the applications that were colocated on the failed controller node, you may need to specify some additional labels, for example, ceph_role_mgr=true and ceph_role_mon=true. To successfully replace a failed mon and mgr node, refer to Automated Ceph LCM.

Verify that the node is in the Ready state through the Kubernetes API:

kubectl get node <NODE-NAME> -o wide | grep Ready
Verify that the node has all required labels described in the previous steps:
kubectl get nodes --show-labels
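To inspect only the new node instead of the whole cluster, you can, as a sketch, print its labels with a JSONPath expression; <NODE-NAME> is a placeholder:

kubectl get node <NODE-NAME> -o jsonpath='{.metadata.labels}'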
Configure new Octavia health manager resources:
Rerun the octavia-create-resources job:

kubectl -n osh-system exec -t <OS-CONTROLLER-POD> -c osdpl osctl-job-rerun octavia-create-resources openstack
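To confirm that the rerun has finished, you can check the job status. This sketch assumes that the rerun recreates a job named octavia-create-resources in the openstack namespace:

kubectl -n openstack get job octavia-create-resources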
Wait until the Octavia health manager pod on the newly added control plane node appears in the Running state:

kubectl -n openstack get pods -o wide | grep <NODE_ID> | grep octavia-health-manager
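As an alternative to filtering by <NODE_ID> with grep, the following sketch lists only the pods scheduled on the new node, assuming <NODE-NAME> is the Kubernetes node name:

kubectl -n openstack get pods -o wide --field-selector spec.nodeName=<NODE-NAME> | grep octavia-health-manager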
Note
If the pod is in the CrashLoopBackOff state, remove it:

kubectl -n openstack delete pod <OCTAVIA-HEALTH-MANAGER-POD-NAME>
Verify that an OpenStack port for the node has been created and is in the Active state:

kubectl -n openstack exec -t <KEYSTONE-CLIENT-POD-NAME> openstack port show octavia-health-manager-listen-port-<NODE-NAME>
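If you only need the port status, the OpenStack CLI can print that single field. A minimal sketch with the same placeholders, using the standard -f value -c status formatter options:

kubectl -n openstack exec -t <KEYSTONE-CLIENT-POD-NAME> -- openstack port show octavia-health-manager-listen-port-<NODE-NAME> -f value -c status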
Strongly recommended. Back up MKE as described in Mirantis Kubernetes Engine documentation: Back up MKE.
Since the procedure above modifies the cluster configuration, a fresh backup is required to restore the cluster in case further reconfigurations fail.