Add a compute node

This section describes how to add a new compute node to your existing Mirantis OpenStack for Kubernetes deployment.

To add a compute node:

  1. Add a bare-metal host to the MOSK cluster as described in Add a bare-metal host.
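
    For orientation only, the following is a minimal, hypothetical sketch of registering such a host through the Metal3 BareMetalHost API; every name, address, and MAC below is a placeholder, and your MOSK release may require additional labels and a credentials object, so follow Add a bare-metal host for the exact schema.

    kubectl apply -f - <<'EOF'
    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      name: cmp-worker-3          # hypothetical host name
      namespace: default          # use your cluster project namespace
    spec:
      online: true
      bootMACAddress: "0c:c4:7a:aa:bb:cc"   # placeholder PXE NIC MAC
      bmc:
        address: ipmi://10.0.0.15           # placeholder BMC address
        credentialsName: cmp-worker-3-bmc   # Secret with BMC credentials
    EOF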

  2. Create a Kubernetes machine in your cluster as described in Add a machine.

    When adding the machine, specify the node labels required for an OpenStack compute node, as listed in the following table; a labeling sketch follows the table.

    OpenStack node roles:

    Node role: OpenStack control plane
    Description: Hosts the OpenStack control plane services such as the database, messaging, API, schedulers, conductors, and the L3 and L2 agents.
    Kubernetes labels: openstack-control-plane=enabled, openstack-gateway=enabled, openvswitch=enabled
    Minimal count: 3

    Node role: OpenStack compute
    Description: Hosts the OpenStack compute services such as libvirt and the L2 agents.
    Kubernetes labels: openstack-compute-node=enabled; openvswitch=enabled (for a deployment with Open vSwitch as the networking backend)
    Minimal count: varies
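
    In a MOSK deployment, set these labels through the Machine object as described in Add a machine so that they persist; purely to illustrate the resulting label set on an OpenStack compute node, a hedged kubectl sketch with a hypothetical node name follows.

    # Illustration only: <node-name> is a placeholder; the supported workflow
    # assigns these labels through the Machine object, not directly on the node.
    kubectl label node <node-name> \
      openstack-compute-node=enabled \
      openvswitch=enabled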

  3. If required, configure the compute host to enable huge pages, SR-IOV, and other advanced features in your MOSK deployment. See Advanced OpenStack configuration (optional) for details.
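
    As one illustration of such host-level tuning, huge pages are commonly reserved through kernel boot parameters; the fragment below is a hypothetical Ubuntu-style sketch, and the actual values and the MOSK-supported procedure depend on your hardware and the linked guide.

    # Hypothetical example: reserve 16 x 1 GiB huge pages and enable the IOMMU
    # for SR-IOV. In /etc/default/grub, append to GRUB_CMDLINE_LINUX:
    #   default_hugepagesz=1G hugepagesz=1G hugepages=16 intel_iommu=on iommu=pt
    # Then regenerate the GRUB configuration and reboot the host:
    sudo update-grub && sudo reboot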

  4. Once the node is available in Kubernetes and the nova-compute and neutron pods are running on it, verify in the OpenStack API that the compute service and the Neutron agents are healthy.

    In the keystone-client pod, run:

    openstack network agent list --host <cmp_host_name>
    
    openstack compute service list --host <cmp_host_name>
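
    Alternatively, the same checks can be run from outside the pod; the sketch below assumes that keystone-client is a Deployment in the openstack namespace, which you should verify in your cluster first. A healthy node shows its Neutron agents as alive and its nova-compute service as enabled and up.

    # Assumption: keystone-client is a Deployment in the openstack namespace.
    kubectl -n openstack exec -it deploy/keystone-client -- \
      openstack network agent list --host <cmp_host_name>
    kubectl -n openstack exec -it deploy/keystone-client -- \
      openstack compute service list --host <cmp_host_name>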
    
  5. Verify that the compute service is mapped to a cell.

    The OpenStack Controller triggers the nova-cell-setup job once it detects a new compute pod in the Ready state. This job maps the new compute services to their cells.

    In the nova-api-osapi pod, run:

    nova-manage cell_v2 list_hosts | grep <cmp_host_name>
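
    A hedged equivalent from outside the pod follows, assuming that nova-api-osapi is a Deployment in the openstack namespace; a line containing the host name confirms that the mapping succeeded.

    # Assumption: nova-api-osapi is a Deployment in the openstack namespace.
    kubectl -n openstack exec -it deploy/nova-api-osapi -- \
      nova-manage cell_v2 list_hosts | grep <cmp_host_name>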
    
  6. Strongly recommended. Back up MKE as described in Mirantis Kubernetes Engine documentation: Back up MKE.

    Since the procedure above modifies the cluster configuration, create a fresh backup so that you can restore the cluster if further reconfiguration fails.

    Important

    Because the MKE restoration process is complicated, we strongly recommend contacting Mirantis support for assistance.

    If you still decide to restore MKE from a backup on your own, and the MKE version of the affected cluster after the restore will differ from the MKE version in the ClusterRelease object set in the MOSK Cluster objects in the management cluster, you must scale down helm-controller on the cluster being restored:

    • If you are restoring MKE on a management cluster: before starting the restore, scale down helm-controller on each affected MOSK cluster. This prevents unintended Ceph and OpenStack downgrades on MOSK clusters after the management cluster is restored.

    • If you are restoring MKE on a MOSK cluster: immediately after the restore completes, scale down helm-controller. Because the restore rolls the cluster back to an older release, this prevents it from triggering a premature upgrade of Helm releases.
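
    As a sketch of scaling down helm-controller, assuming it runs as a Deployment in the kube-system namespace (verify the actual namespace and name in your cluster before scaling):

    # Assumption: confirm the controller's location first, for example with
    #   kubectl get deployment -A | grep helm-controller
    kubectl -n kube-system scale deployment helm-controller --replicas=0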