The Large Cloud is an OpenStack-based reference architecture for MCP. It is designed to provide cloud tenants with a generic public cloud user experience in terms of the available virtual infrastructure capabilities.
The large reference architecture is designed to support up to 5000 virtual servers running on up to 500 hypervisor hosts. In addition to the hypervisors, 18 physical infrastructure servers are required for the control plane: 9 servers that host the OpenStack Virtualized Control Plane (VCP), 6 servers dedicated to the StackLight services, and 3 servers for the OpenContrail control plane.
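For a rough sense of how these numbers relate, the following minimal sketch computes the total server footprint for a given compute fleet. The constants come from this section; the function and variable names are hypothetical and purely illustrative.

```python
# Minimal sizing sketch for the Large Cloud reference architecture.
# The constants come from this section; the names are illustrative.

VMS_PER_HYPERVISOR = 5000 / 500    # up to 5000 virtual servers on up to 500 hosts
CONTROL_PLANE_SERVERS = 9 + 6 + 3  # VCP + StackLight + OpenContrail = 18

def total_physical_servers(hypervisors: int) -> int:
    """Total server count: compute fleet plus the fixed control plane."""
    return hypervisors + CONTROL_PLANE_SERVERS

for hosts in (200, 350, 500):      # the compute range from the node table below
    print(f"{hosts} hypervisors -> ~{int(hosts * VMS_PER_HYPERVISOR)} VMs, "
          f"{total_physical_servers(hosts)} physical servers in total")
```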
The following diagram describes the distribution of the VCP and other services across the infrastructure nodes.
The following table describes the hardware nodes in the Cloud Provider Infrastructure (CPI) reference architecture, the roles assigned to them, and the number of nodes of each type.
| Node type | Role name | Number of servers |
|---|---|---|
| Infrastructure nodes (VCP) | | 9 |
| Infrastructure nodes (OpenContrail) | | 3 |
| Monitoring nodes (StackLight LMA) | | 3 |
| Infrastructure nodes (StackLight LMA) | | 3 |
| OpenStack compute nodes | | 200 - 500 |
| Staging infrastructure nodes | | 18 |
| Staging OpenStack compute nodes | | 2 - 5 |
The following tables summarize the VCP virtual machines mapped to physical servers.
| Virtual server roles | Physical servers | # of instances | CPU vCores per instance | Memory (GB) per instance | Disk space (GB) per instance |
|---|---|---|---|---|---|
| | | 5 | 24 | 128 | 100 |
| | | 3 | 24 | 64 | 1000 |
| | | 3 | 32 | 196 | 100 |
| | | 2 | 8 | 32 | 100 |
| | | 3 | 8 | 32 | 150 |
| TOTAL | | 16 | 328 | 1580 | 4450 |
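In each table, the TOTAL row aggregates the per-instance values: instance counts are summed, while vCores, memory, and disk are multiplied by the instance count before summing. Below is a minimal sketch of this arithmetic using the rows of the table above; the tuple layout is an assumption for illustration only, and the same calculation applies to the remaining tables.

```python
# Each row: (# of instances, vCores per instance, memory GB per instance,
# disk GB per instance). Values are the rows of the first VCP table above;
# the data layout itself is illustrative.
rows = [
    (5, 24, 128, 100),
    (3, 24, 64, 1000),
    (3, 32, 196, 100),
    (2, 8, 32, 100),
    (3, 8, 32, 150),
]

instances = sum(n for n, *_ in rows)
vcores = sum(n * c for n, c, _, _ in rows)
memory = sum(n * m for n, _, m, _ in rows)
disk = sum(n * d for n, _, _, d in rows)

print(instances, vcores, memory, disk)  # -> 16 328 1580 4450, the TOTAL row
```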
| Virtual server roles | Physical servers | # of instances | CPU vCores per instance | Memory (GB) per instance | Disk space (GB) per instance |
|---|---|---|---|---|---|
| | | 1 | 8 | 32 | 50 |
| | | 3 | 4 | 32 | 500 |
| TOTAL | | 4 | 20 | 128 | 1550 |
| Virtual server roles | Physical servers | # of instances | CPU vCores per instance | Memory (GB) per instance | Disk space (GB) per instance |
|---|---|---|---|---|---|
| | | 3 | 16 | 64 | 100 |
| | | 3 | 24 | 128 | 2000 |
| TOTAL | | 6 | 120 | 576 | 6300 |
| Virtual server roles | Physical servers | # of instances | CPU vCores per instance | Memory (GB) per instance | Disk space (GB) per instance |
|---|---|---|---|---|---|
| | | 3 | 16 | 32 | 100 |
| | | 3 | 16 | 32 | 50 |
| TOTAL | | 6 | 96 | 192 | 450 |
| Virtual server roles | Physical servers | # of instances | CPU vCores per instance | Memory (GB) per instance | Disk space (GB) per instance |
|---|---|---|---|---|---|
| | | 3 | 24 | 256 | 1000 [0] |
| | | 3 | 16 | 196 | 3000 [0] |
| | | 3 | 16 | 64 [1] | 5000 [2] |
| TOTAL | | 9 | 168 | 1548 | 27000 |
[0] The required disk space per instance depends on the Prometheus retention policy, which by default is 5 days for mon nodes and 180 days for mtr nodes.

[1] The Elasticsearch heap size must not exceed 32 GB. For details, see Limiting memory usage. To limit the heap size, see MCP Operations Guide: Configure Elasticsearch.

[2] The required disk space per instance depends on the Elasticsearch retention policy, which is 31 days by default.
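The retention-driven disk figures above can be approximated from first principles. The sketch below estimates Prometheus disk usage as retention time multiplied by ingestion rate and bytes per sample (Prometheus typically stores one to two bytes per sample after compression), and Elasticsearch disk usage as daily indexed volume multiplied by retention days; it also caps the Elasticsearch heap at half of RAM, staying below the 32 GB limit noted above. The ingestion rate and daily log volume are placeholder assumptions, not measured values from this architecture.

```python
# Back-of-the-envelope sizing for the StackLight retention policies above.
# The ingestion rate and daily log volume are assumptions for illustration;
# substitute measurements from your own deployment.

def prometheus_disk_gb(retention_days: int,
                       samples_per_second: float = 100_000,  # assumed rate
                       bytes_per_sample: float = 2.0) -> float:
    """Approximate Prometheus TSDB size: retention * ingestion * sample size."""
    seconds = retention_days * 24 * 3600
    return seconds * samples_per_second * bytes_per_sample / 1024**3

def elasticsearch_disk_gb(retention_days: int,
                          daily_indexed_gb: float = 150.0) -> float:  # assumed
    """Approximate Elasticsearch index size over the retention window."""
    return retention_days * daily_indexed_gb

def elasticsearch_heap_gb(node_ram_gb: int) -> int:
    """Heap is conventionally half of RAM and must stay below ~32 GB."""
    return min(node_ram_gb // 2, 31)

print(f"mon (5 days):   ~{prometheus_disk_gb(5):.0f} GB")
print(f"mtr (180 days): ~{prometheus_disk_gb(180):.0f} GB")
print(f"log (31 days):  ~{elasticsearch_disk_gb(31):.0f} GB")
print(f"log heap for a 64 GB node: {elasticsearch_heap_gb(64)} GB")
```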
Note

The prx VM should have an additional NIC for the Proxy network. All other nodes should have two NICs for the DHCP and Primary networks.
See also

- Control plane virtual machines for details on the functions of each node type
- Hardware requirements for Cloud Provider Infrastructure for the reference hardware configuration for each node type