Control plane virtual machines

MCP cluster infrastructure consists of a set of virtual machines that host the services required to manage workloads and respond to API calls.

An MCP cluster includes a number of logical roles that define the functions of its nodes. Each role can be assigned to a specific set of control plane virtual machines. This allows you to adjust the number of instances of a particular role independently of other roles, providing greater flexibility in the environment architecture.

To ensure high availability and fault tolerance, the control plane of an MCP cluster is typically spread across at least three physical nodes. However, depending on your hardware, you may decide to distribute the services across a larger number of nodes. The number of virtual instances that run each service may vary as well.
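The spreading logic can be sketched as a simple round-robin anti-affinity placement: replicas of the same role land on different physical hosts whenever enough hosts are available. The host names, role names, and replica counts below are illustrative assumptions, not MCP defaults.

```python
from itertools import cycle

# Illustrative inputs: three physical KVM hosts and a few control plane
# roles with assumed replica counts (not the MCP metadata model).
HOSTS = ["kvm01", "kvm02", "kvm03"]
ROLES = {"openstack_controller": 3, "database_server": 3, "proxy": 2}

def place(roles, hosts):
    """Round-robin each role's replicas across hosts (simple anti-affinity).

    With count <= len(hosts), no two replicas of a role share a host.
    """
    placement = {}
    for role, count in roles.items():
        host_cycle = cycle(hosts)
        placement[role] = [next(host_cycle) for _ in range(count)]
    return placement

layout = place(ROLES, HOSTS)
print(layout["openstack_controller"])  # ['kvm01', 'kvm02', 'kvm03']
```

With three hosts and three replicas per clustered role, the loss of any single physical node leaves two replicas of each role running.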

The reference architecture for the Cloud Provider Infrastructure use case uses nine infrastructure nodes to host the MCP control plane services.

The following table lists the roles of the infrastructure logical nodes. Each role has a standard codename that is used throughout the MCP metadata model:

MCP infrastructure logical nodes

| Server role | Description |
|---|---|
| Infrastructure node | Infrastructure KVM hosts that provide the virtualization platform for all VCP components. |
| Network node | Nodes that provide tenant network data plane services. |
| DriveTrain Salt Master node | The Salt Master node that is responsible for sending commands to Salt Minion nodes. |
| DriveTrain LCM engine node | Nodes that run DriveTrain services in containers in a Docker Swarm mode cluster. |
| RabbitMQ server node | Nodes that run the message queue server (RabbitMQ). |
| Database server node | Nodes that run the clustered MySQL database (Galera). |
| OpenStack controller node | Nodes that run the Virtualized Control Plane services, including the OpenStack API servers and scheduler components. |
| OpenStack compute node | Nodes that run the hypervisor service and VM workloads. |
| OpenStack DNS node | Nodes that run the OpenStack DNSaaS service (Designate). |
| OpenStack secrets storage node | Nodes that run the OpenStack secrets service (Barbican). |
| OpenStack telemetry database node | Nodes that run the Telemetry monitoring database services. |
| Proxy node | Nodes that run the reverse proxy that exposes the OpenStack API, dashboards, and other components externally. |
| Contrail controller node | Nodes that run the OpenContrail controller services. |
| Contrail analytics node | Nodes that run the OpenContrail analytics services. |
| StackLight LMA log node | Nodes that run the StackLight LMA logging and visualization services. |
| StackLight LMA database node | Nodes that run the StackLight database services. |
| StackLight LMA node | Nodes that run the StackLight LMA monitoring services. |
| Ceph RADOS Gateway node | Nodes that run the Ceph RADOS Gateway daemon and expose the Object Storage API. |
| Ceph Monitor node | Nodes that run the Ceph Monitor service. |
| Ceph OSD node | Nodes that provide storage devices for the Ceph cluster. |
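Several of the clustered services listed above, such as Galera, RabbitMQ, and Ceph Monitors, depend on a majority quorum to stay available, which is why they are normally deployed with an odd number of instances, three or more. A minimal sketch of validating a planned layout against that rule (the role names and counts are illustrative, not the MCP metadata model):

```python
# Hypothetical role-to-instance-count plan; names and numbers are
# illustrative, not actual MCP metadata model codenames or defaults.
PLANNED_INSTANCES = {
    "database_server": 3,   # Galera cluster
    "rabbitmq_server": 3,   # RabbitMQ cluster
    "ceph_monitor": 3,      # Ceph Monitor quorum
    "proxy": 2,             # stateless, two behind a VIP is enough
}

# Roles whose clustering relies on majority quorum.
QUORUM_ROLES = {"database_server", "rabbitmq_server", "ceph_monitor"}

def validate_plan(plan):
    """Return a list of human-readable problems with the layout."""
    problems = []
    for role in sorted(QUORUM_ROLES):
        count = plan.get(role, 0)
        if count < 3:
            problems.append(f"{role}: needs >= 3 instances, planned {count}")
        elif count % 2 == 0:
            problems.append(f"{role}: even count {count} risks a split quorum")
    return problems

print(validate_plan(PLANNED_INSTANCES))  # [] means the plan passes
```

An empty result means every quorum-dependent service has an odd instance count of at least three; anything else is reported per role.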


In the Cloud Provider Infrastructure reference configuration, Ceph OSDs run on dedicated hardware servers. This reduces operational complexity, isolates the failure domain, and helps avoid resource contention.