Hardware requirements

This section provides hardware requirements for the Mirantis OpenStack for Kubernetes (MOS) cluster.

Note

A MOS managed cluster is deployed by a Mirantis Container Cloud baremetal-based management cluster. For the hardware requirements of this type of management cluster, see Mirantis Container Cloud Reference Architecture: Reference hardware configuration.

The MOS reference architecture includes the following node types:

  • OpenStack control plane node and StackLight node

    Host OpenStack control plane services such as the database, messaging, API, schedulers, conductors, and L3 and L2 agents, as well as the StackLight components.

  • Tenant gateway node

    Optional. Hosts OpenStack gateway services, including the L2, L3, and DHCP agents. The tenant gateway nodes are combined with the OpenStack control plane nodes. A dedicated physical network (bond) for tenant network traffic is a strict requirement.

  • Tungsten Fabric control plane node

    Required only if Tungsten Fabric (TF) is enabled as a back end for OpenStack networking. These nodes host the TF control plane services such as the Cassandra database, messaging, API, control, and configuration services.

  • Tungsten Fabric analytics node

    Required only if TF is enabled as a back end for OpenStack networking. These nodes host the TF analytics services such as Cassandra, ZooKeeper, and collector.

  • Compute node

    Hosts OpenStack Compute services such as QEMU, L2 agents, and others.

  • Infrastructure nodes

    Run the underlying Kubernetes cluster management services. The MOS reference configuration requires a minimum of three infrastructure nodes.
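
For orientation, the node roles above can be summarized as a plain mapping of node type to the services it hosts. The sketch below is purely illustrative Python and is not part of any MOS or Container Cloud configuration format; the names are informal labels.

```python
# Informal summary of the MOS reference architecture node types and the
# services they host; NOT a MOS or Container Cloud configuration format.
NODE_ROLES = {
    "openstack-control-plane / stacklight": [
        "database", "messaging", "API", "schedulers", "conductors",
        "L3 and L2 agents", "StackLight components",
    ],
    "tenant-gateway (optional)": ["L2 agent", "L3 agent", "DHCP agent"],
    "tf-control-plane (TF only)": [
        "Cassandra", "messaging", "API", "control", "configuration services",
    ],
    "tf-analytics (TF only)": ["Cassandra", "ZooKeeper", "collector"],
    "compute": ["QEMU", "L2 agents"],
    "infrastructure (minimum 3)": ["Kubernetes cluster management services"],
}
```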

The table below specifies the hardware resources the MOS reference architecture recommends for each node type.

Hardware requirements

| Node type                                                   | # of servers | CPU cores per server | Memory (GB) per server | Disk space per server         | NICs per server |
|-------------------------------------------------------------|--------------|----------------------|------------------------|-------------------------------|-----------------|
| OpenStack control plane, gateway [0], and StackLight nodes  | 3            | 32                   | 128                    | 2 TB SSD                      | 5               |
| Tenant gateway (optional)                                   | 0-3          | 32                   | 128                    | 2 TB SSD                      | 5               |
| Tungsten Fabric control plane nodes [1]                     | 3            | 16                   | 64                     | 500 GB SSD                    | 1               |
| Tungsten Fabric analytics nodes [1]                         | 3            | 32                   | 64                     | 1 TB SSD                      | 1               |
| Compute node                                                | 3 (varies)   | 16                   | 64                     | 500 GB SSD                    | 5               |
| Infrastructure node (Kubernetes cluster management)         | 3            | 16                   | 64                     | 500 GB SSD                    | 5               |
| Infrastructure node (Ceph) [2]                              | 3            | 16                   | 64                     | 1 × 500 GB SSD, 2 × 2 TB HDD  | 5               |

Note

The exact hardware specifications and number of nodes depend on the cloud configuration and scaling needs.

[0] OpenStack gateway services can optionally be moved to separate nodes.

[1] TF control plane and analytics nodes can be combined on the same hardware hosts with a corresponding addition of RAM, CPU, and disk space. However, Mirantis does not recommend such a configuration for production environments because it increases the risk of cluster downtime if one of the nodes unexpectedly fails.

[2]
  • A Ceph cluster with three Ceph nodes does not provide hardware fault tolerance and is not eligible for recovery operations such as the replacement of a disk or an entire node.

  • A Ceph cluster uses a replication factor of 3. If the number of alive Ceph OSDs falls below 3, the cluster moves to the degraded state and restricts write operations until the number of alive Ceph OSDs again equals the replication factor.
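
To illustrate how the numbers in the table add up, the following sketch computes the aggregate resources of a minimal reference deployment (without Tungsten Fabric and without dedicated tenant gateway nodes) and the approximate usable Ceph capacity given the replication factor of 3. The node counts and per-node figures are taken from the table above; treat the result as a rough planning aid, not an official sizing tool.

```python
# Rough sizing aid derived from the minimum node counts in the table above.
# Assumes a deployment without Tungsten Fabric and without dedicated
# tenant gateway nodes; an illustration only, not an official sizing tool.
MIN_NODES = {
    # node type: (servers, CPU cores per server, memory GB per server)
    "openstack-control-plane+stacklight": (3, 32, 128),
    "compute":                            (3, 16, 64),
    "infrastructure-k8s-management":      (3, 16, 64),
    "infrastructure-ceph":                (3, 16, 64),
}

total_servers = sum(n for n, _, _ in MIN_NODES.values())
total_cores   = sum(n * c for n, c, _ in MIN_NODES.values())
total_memory  = sum(n * m for n, _, m in MIN_NODES.values())
print(f"{total_servers} servers, {total_cores} CPU cores, {total_memory} GB RAM")
# -> 12 servers, 240 CPU cores, 960 GB RAM

# Usable Ceph capacity: 3 Ceph nodes x 2 HDDs x 2 TB raw, replication factor 3.
raw_tb = 3 * 2 * 2
usable_tb = raw_tb / 3
print(f"~{usable_tb:.0f} TB usable out of {raw_tb} TB raw (before Ceph overhead)")
# -> ~4 TB usable out of 12 TB raw (before Ceph overhead)
```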

Note

If you are looking to try MOS and do not have much hardware at your disposal, you can deploy it in a virtual environment, for example, on top of another OpenStack cloud using the sample Heat templates.

Note that the tooling is provided for reference only and is not part of the product itself. Mirantis does not guarantee its interoperability with the latest MOS version.