Add a controller node
This section describes how to add a new control plane node to the existing MOSK deployment.
To add an OpenStack controller node:
1. Add a bare metal host to the MOSK cluster as described in Add a bare metal host.

   In the bare metal host YAML file, specify the following node labels for the OpenStack control plane services such as the database, messaging, API, schedulers, conductors, and L3 and L2 agents:

   openstack-control-plane=enabled
   openstack-gateway=enabled
   openvswitch=enabled
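
   If you need to apply or adjust these labels on a node that has already joined the cluster, you can fall back to labeling the node directly. This is a generic Kubernetes sketch rather than the documented YAML-based flow; <NODE-NAME> is a placeholder:

   kubectl label node <NODE-NAME> \
       openstack-control-plane=enabled \
       openstack-gateway=enabled \
       openvswitch=enabled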
2. Create a Kubernetes machine in your cluster as described in Add a machine.

   When adding the machine, verify that the OpenStack control plane node has the following labels:

   openstack-control-plane=enabled
   openstack-gateway=enabled
   openvswitch=enabled
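
   To list the nodes that already carry a given control plane label, you can use a standard kubectl label selector, shown here as an illustrative sketch:

   kubectl get nodes -l openstack-control-plane=enabled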
   Note

   Depending on the applications that were colocated on the failed controller node, you may need to specify some additional labels, for example, ceph_role_mgr=true and ceph_role_mon=true. To successfully replace a failed mon and mgr node, refer to Ceph operations.

3. Verify that the node is in the Ready state through the Kubernetes API:

   kubectl get node <NODE-NAME> -o wide | grep Ready
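
   As an alternative to grepping the human-readable output, you can read the Ready condition directly with a JSONPath query, which prints True for a healthy node. This is a generic kubectl technique offered as a sketch:

   kubectl get node <NODE-NAME> -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'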
4. Verify that the node has all required labels described in the previous steps:

   kubectl get nodes --show-labels
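
   Because --show-labels prints all labels of a node on a single line, filtering the output can make the check easier. A possible sketch, assuming a POSIX shell:

   kubectl get node <NODE-NAME> --show-labels | tr ',' '\n' | grep -E 'openstack|openvswitch'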
5. Configure new Octavia health manager resources:

   1. Rerun the octavia-create-resources job:

      kubectl -n osh-system exec -t <OS-CONTROLLER-POD> -c osdpl -- osctl-job-rerun octavia-create-resources openstack
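
      To confirm that the rerun has finished, you can check the job status. This is a generic kubectl check that assumes the job keeps the octavia-create-resources name in the openstack namespace:

      kubectl -n openstack get job octavia-create-resources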
   2. Wait until the Octavia health manager pod on the newly added control plane node appears in the Running state:

      kubectl -n openstack get pods -o wide | grep <NODE_ID> | grep octavia-health-manager
      Note

      If the pod is in the CrashLoopBackOff state, remove it:

      kubectl -n openstack delete pod <OCTAVIA-HEALTH-MANAGER-POD-NAME>
   3. Verify that an OpenStack port for the node has been created and that the port is in the ACTIVE state:

      kubectl -n openstack exec -t <KEYSTONE-CLIENT-POD-NAME> -- openstack port show octavia-health-manager-listen-port-<NODE-NAME>
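
      If only the port status is of interest, the openstack CLI can print that single field using its standard output options. A minimal sketch:

      kubectl -n openstack exec -t <KEYSTONE-CLIENT-POD-NAME> -- openstack port show octavia-health-manager-listen-port-<NODE-NAME> -f value -c status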