Hardware requirements

This section provides hardware requirements for the Mirantis OpenStack for Kubernetes (MOS) cluster.

Note

A MOS managed cluster is deployed by a Mirantis Container Cloud baremetal-based management cluster. For the hardware requirements for this type of management cluster, see Mirantis Container Cloud Reference Architecture: Reference hardware configuration.

Note

One of the industry best practices is to verify every new update or configuration change in a non-customer-facing environment before applying it to production. Therefore, Mirantis recommends having a staging cloud, deployed and maintained along with the production clouds. The recommendation is especially applicable to environments that:

  • Receive updates often and use continuous delivery. For example, any non-isolated deployment of Mirantis Container Cloud and Mirantis OpenStack for Kubernetes (MOS).

  • Have significant deviations from the reference architecture or have third-party extensions installed.

  • Are managed under the Mirantis OpsCare program.

  • Run business-critical workloads where even the slightest application downtime is unacceptable.

A typical staging cloud is a complete copy of the production environment including the hardware and software configurations, but with a bare minimum of compute and storage capacity.

The MOS reference architecture includes the following node types:

  • OpenStack control plane node and StackLight node

    Host OpenStack control plane services such as the database, messaging, API, schedulers, conductors, and L3 and L2 agents, as well as the StackLight components.

  • Tenant gateway node

    Optional. Hosts OpenStack gateway services, including the L2, L3, and DHCP agents. The tenant gateway nodes are combined with the OpenStack control plane nodes. The strict requirement is a dedicated physical network (bond) for tenant network traffic.

  • Tungsten Fabric control plane node

    Required only if Tungsten Fabric (TF) is enabled as a back end for the OpenStack networking. These nodes host the TF control plane services such as Cassandra database, messaging, API, control, and configuration services.

  • Tungsten Fabric analytics node

    Required only if TF is enabled as a back end for the OpenStack networking. These nodes host the TF analytics services such as Cassandra, ZooKeeper, and the collector.

  • Compute node

    Hosts OpenStack Compute services such as QEMU, L2 agents, and others.

  • Infrastructure nodes

    Run the underlying Kubernetes cluster management services. The MOS reference configuration requires a minimum of three infrastructure nodes.

The table below specifies the hardware resources the MOS reference architecture recommends for each node type.

Hardware requirements

Node type                                                   | # of servers | CPU cores per server | RAM per server, GB | Disk space per server, GB     | NICs per server
------------------------------------------------------------|--------------|----------------------|--------------------|-------------------------------|----------------
OpenStack control plane, gateway [0], and StackLight nodes  | 3            | 32                   | 128                | 1 SSD x 500, 2 SSD x 1000 [3] | 5
Tenant gateway (optional)                                   | 0-3          | 32                   | 128                | 1 SSD x 500                   | 5
Tungsten Fabric control plane nodes [1]                     | 3            | 16                   | 64                 | 1 SSD x 500                   | 1
Tungsten Fabric analytics nodes [1]                         | 3            | 32                   | 64                 | 1 SSD x 1000                  | 1
Compute node                                                | 3 (varies)   | 16                   | 64                 | 1 SSD x 500 [4]               | 5
Infrastructure node (Kubernetes cluster management)         | 3 [5]        | 16                   | 64                 | 1 SSD x 500                   | 5
Infrastructure node (Ceph) [2]                              | 3            | 16                   | 64                 | 1 SSD x 500, 2 HDD x 2000     | 5

Note

The exact hardware specifications and number of nodes depend on the cloud configuration and scaling needs.

[0]

OpenStack gateway services can optionally be moved to separate nodes.

[1]

TF control plane and analytics nodes can be combined on the same hardware hosts, provided that RAM, CPU, and disk space are added accordingly. However, Mirantis does not recommend such a configuration for production environments because it increases the risk of cluster downtime if one of the nodes unexpectedly fails.

[2]
  • A Ceph cluster with 3 Ceph nodes does not provide hardware fault tolerance and is not eligible for recovery operations, such as a disk or an entire node replacement.

  • A Ceph cluster uses a replication factor of 3. If the number of Ceph OSDs falls below 3, the Ceph cluster moves to the degraded state and restricts write operations until the number of alive Ceph OSDs again equals the replication factor.
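
    You can verify these values on a running cluster with the standard Ceph CLI. The following is a minimal verification sketch; <pool-name> is a placeholder for any of the pools in your deployment:

      # Check the replication factor (size) and minimum write size of a pool
      ceph osd pool ls
      ceph osd pool get <pool-name> size
      ceph osd pool get <pool-name> min_size

      # Check how many Ceph OSDs are up and in, and the overall cluster health
      ceph osd stat
      ceph health detail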

[3]
  • 1 SSD x 500 for the operating system

  • 1 SSD x 1000 for OpenStack LVP

  • 1 SSD x 1000 for StackLight LVP

[4]

When Nova is used with local folders, additional capacity is required depending on the size of the VM images.

[5]

For node hardware requirements, refer to Container Cloud Reference Architecture: Managed cluster hardware configuration.

Note

If you are looking to try MOS and do not have much hardware at your disposal, you can deploy it in a virtual environment, for example, on top of another OpenStack cloud using the sample Heat templates.

Note that the tooling is provided for reference only and is not part of the product itself. Mirantis does not guarantee its interoperability with the latest MOS version.
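
For illustration, once you have obtained the sample Heat templates and sourced the credentials of the underlying OpenStack cloud, a stack can be created with the standard OpenStack CLI (with the Heat orchestration plugin installed). This is a minimal sketch only; the template and stack names below (mos-lab.yaml, mos-lab) are placeholders rather than part of the product, so refer to the sample templates for the actual file names and parameters:

  # Create a stack from the sample template on the underlying OpenStack cloud
  openstack stack create --template mos-lab.yaml mos-lab

  # Monitor the deployment progress and the resulting virtual machines
  openstack stack show mos-lab
  openstack server list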