Add a controller node
This section describes how to add a new control plane node to the existing MOSK deployment.
To add an OpenStack controller node:
Add a bare metal host to the managed cluster with MOSK as described in Add a bare metal host.
In the bare metal host YAML file, specify the OpenStack control plane node labels required by the OpenStack control plane services, such as the database, messaging, API, schedulers, conductors, and L3 and L2 agents:
Create a Kubernetes machine in your cluster as described in Add a machine.
When adding the machine, verify that the OpenStack control plane node has the following labels:
Depending on the applications that were colocated on the failed controller node, you may need to specify additional labels, for example, ceph_role_mon=true. To successfully replace a failed mgr node, refer to Mirantis Container Cloud Operations Guide: Manage Ceph.
Verify that the node is in the Ready state through the Kubernetes API:
kubectl get node <NODE-NAME> -o wide | grep Ready
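Instead of polling the grep above manually, you can block until the node reports Ready. This is a sketch; the timeout value is an illustrative assumption.

```shell
# Block until the node's Ready condition becomes True.
# <NODE-NAME> is a placeholder for the new node's name; 10m is an arbitrary timeout.
kubectl wait --for=condition=Ready node/<NODE-NAME> --timeout=10m
```
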
Verify that the node has all required labels described in the previous steps:
kubectl get nodes --show-labels
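To inspect a single node rather than the full cluster listing, a sketch like the following prints that node's labels one per line for easier spot-checking; <NODE-NAME> is a placeholder.

```shell
# Print only the LABELS column of the new node and split the
# comma-separated label list onto separate lines.
kubectl get node <NODE-NAME> --show-labels --no-headers \
  | awk '{print $NF}' | tr ',' '\n'
```
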
Configure new Octavia health manager resources:
kubectl -n osh-system exec -t <OS-CONTROLLER-POD> -c osdpl -- osctl-job-rerun octavia-create-resources openstack
Wait until the Octavia health manager pod on the newly added control plane node appears in the Running state:
kubectl -n openstack get pods -o wide | grep <NODE_ID> | grep octavia-health-manager
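The wait can be scripted as a simple polling loop. This is a sketch that selects pods by node name through a field selector instead of grepping the node column; <NODE-NAME> is a placeholder.

```shell
# Poll until an octavia-health-manager pod on the new node reports Running.
until kubectl -n openstack get pods -o wide \
      --field-selector spec.nodeName=<NODE-NAME> \
      | grep octavia-health-manager | grep -q Running; do
  echo "Waiting for octavia-health-manager on <NODE-NAME>..."
  sleep 10
done
```
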
If the pod is in the CrashLoopBackOff state, remove it:
kubectl -n openstack delete pod <OCTAVIA-HEALTH-MANAGER-POD-NAME>
Verify that an OpenStack port for the node has been created and is in the ACTIVE state:
kubectl -n openstack exec -t <KEYSTONE-CLIENT-POD-NAME> -- openstack port show octavia-health-manager-listen-port-<NODE-NAME>
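To check only the port status rather than reading the full table, a sketch using the standard openstackclient output formatting options; placeholders are the same as in the step above.

```shell
# Print just the status field of the health manager port
# (-f value -c status limits output to the raw status value).
kubectl -n openstack exec -t <KEYSTONE-CLIENT-POD-NAME> -- \
  openstack port show octavia-health-manager-listen-port-<NODE-NAME> \
  -f value -c status
```
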