Control plane virtual machines

The MCP cluster infrastructure consists of a set of virtual machines that host the services required to manage workloads and respond to API calls.

An MCP cluster includes a number of logical roles that define the functions of its nodes. Each role can be assigned to a specific set of control plane virtual machines. This allows you to adjust the number of instances of a particular role independently of the other roles, providing greater flexibility in the environment architecture.

To ensure high availability and fault tolerance, the control plane of an MCP cluster is typically spread across at least three physical nodes. However, depending on your hardware, you may decide to distribute the services across a larger number of nodes. The number of virtual instances that run each service may vary as well.
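To see how the control plane services are distributed, you can list the virtual machines running on each infrastructure KVM host. A minimal sketch, assuming libvirt-based KVM hosts; the host and VM names shown are illustrative and follow the codename convention described below:

    # On an infrastructure KVM host, list the names of the running
    # control plane VMs (names shown here are illustrative).
    kvm01# virsh list --name
    ctl01.cluster.local
    msg01.cluster.local
    dbs01.cluster.local
    prx01.cluster.local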

The reference architecture for the Cloud Provider Infrastructure use case uses nine infrastructure nodes to host the MCP control plane services.

The following list describes the roles of the infrastructure logical nodes, with the standard codename used throughout the MCP metadata model shown in parentheses:

MCP infrastructure logical nodes

Infrastructure node (kvm)
    Infrastructure KVM hosts that provide the virtualization platform for all VCP components.
Network node (gtw)
    Nodes that provide tenant network data plane services.
DriveTrain Salt Master node (cfg)
    The Salt Master node, which is responsible for sending commands to the Salt Minion nodes.
DriveTrain LCM engine node (cid)
    Nodes that run the DriveTrain services in containers in a Docker Swarm mode cluster.
RabbitMQ server node (msg)
    Nodes that run the message queue server (RabbitMQ).
Database server node (dbs)
    Nodes that run the clustered MySQL database (Galera).
OpenStack controller node (ctl)
    Nodes that run the Virtualized Control Plane services, including the OpenStack API servers and scheduler components.
OpenStack compute node (cmp)
    Nodes that run the hypervisor service and VM workloads.
OpenStack DNS node (dns)
    Nodes that run the OpenStack DNSaaS service (Designate).
OpenStack secrets storage node (kmn)
    Nodes that run the OpenStack secrets service (Barbican).
OpenStack telemetry database node (mdb)
    Nodes that run the Telemetry monitoring database services.
Proxy node (prx)
    Nodes that run the reverse proxy that exposes the OpenStack API, dashboards, and other components externally.
Contrail controller node (ntw)
    Nodes that run the OpenContrail controller services.
Contrail analytics node (nal)
    Nodes that run the OpenContrail analytics services.
StackLight LMA log node (log)
    Nodes that run the StackLight LMA logging and visualization services.
StackLight LMA database node (mtr)
    Nodes that run the StackLight database services.
StackLight LMA node (mon)
    Nodes that run the StackLight LMA monitoring services.
Ceph RADOS gateway node (rgw)
    Nodes that run the Ceph RADOS Gateway daemon and expose the Object Storage API.
Ceph Monitor node (cmn)
    Nodes that run the Ceph Monitor service.
Ceph OSD node (osd)
    Nodes that provide storage devices for the Ceph cluster.
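Because minion IDs in the metadata model are derived from these codenames, you can address all nodes of a given role at once from the Salt Master node. A minimal sketch using the standard Salt CLI; the glob patterns assume hostnames such as ctl01, ctl02, and msg01, which is an assumption about your naming convention:

    # From the Salt Master (cfg) node, verify that all OpenStack
    # controller nodes respond.
    salt 'ctl*' test.ping

    # Run a command on all RabbitMQ (msg) nodes to check the
    # message queue cluster status.
    salt 'msg*' cmd.run 'rabbitmqctl cluster_status'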

Note

In the Cloud Provider reference configuration, Ceph OSDs run on dedicated hardware servers. This reduces operational complexity, isolates the failure domain, and helps avoid resource contention.