MOSK cluster hardware requirements

This section provides hardware requirements for the Mirantis Container Cloud management cluster with a managed Mirantis OpenStack for Kubernetes (MOSK) cluster.

To install MOSK, deploy both the Mirantis Container Cloud management cluster and the managed cluster using the bare metal provider.

Important

A MOSK cluster is intended exclusively for the deployment of an OpenStack cluster and its components. Deploying third-party workloads on a MOSK cluster is neither allowed nor supported.

Note

One of the industry best practices is to verify every new update or configuration change in a non-customer-facing environment before applying it to production. Therefore, Mirantis recommends deploying and maintaining a staging cloud alongside the production clouds. The recommendation especially applies to environments that:

  • Receive updates often and use continuous delivery, for example, any non-isolated deployment of Mirantis Container Cloud.

  • Have significant deviations from the reference architecture or third-party extensions installed.

  • Are managed under the Mirantis OpsCare program.

  • Run business-critical workloads where even the slightest application downtime is unacceptable.

A typical staging cloud is a complete copy of the production environment, including the hardware and software configurations, but with a bare minimum of compute and storage capacity.

The list below describes the node types the MOSK reference architecture includes.

MOSK node types

Mirantis Container Cloud management cluster nodes
  The Container Cloud management cluster architecture on bare metal requires three physical servers for manager nodes. These hosts run a Kubernetes cluster with the services that provide the Container Cloud control plane functions.

OpenStack control plane node and StackLight node
  Host the OpenStack control plane services, such as the database, messaging, API, schedulers, conductors, and L3 and L2 agents, as well as the StackLight components.

  Note
  MOSK enables the cloud operator to collocate the OpenStack control plane with the managed cluster master nodes on OpenStack deployments of a small size. This capability is available as a technical preview. Use such a configuration for testing and evaluation purposes only.

Tenant gateway node
  Optional. Hosts the OpenStack gateway services, including the L2, L3, and DHCP agents. The tenant gateway nodes are combined with the OpenStack control plane nodes. The strict requirement is a dedicated physical network (bond) for the tenant network traffic.

Tungsten Fabric control plane node
  Required only if Tungsten Fabric is enabled as the back end for OpenStack networking. These nodes host the TF control plane services, such as the Cassandra database, messaging, API, control, and configuration services.

Tungsten Fabric analytics node
  Required only if Tungsten Fabric is enabled as the back end for OpenStack networking. These nodes host the TF analytics services, such as Cassandra, ZooKeeper, and collector.

Compute node
  Hosts the OpenStack compute services, such as QEMU, the L2 agents, and others.

Infrastructure nodes
  Run the underlying Kubernetes cluster management services. The MOSK reference configuration requires a minimum of three infrastructure nodes.

The list below specifies the hardware resources the MOSK reference architecture recommends for each node type. Bracketed numbers refer to the footnotes that follow.

Hardware requirements

Mirantis Container Cloud management cluster node
  • Servers: 3 [0]
  • CPU cores per server: 16
  • RAM per server: 128 GB
  • Disk space per server: 1 SSD x 960 GB, 1 SSD x 1900 GB [1]
  • NICs per server: 3 [2]

OpenStack control plane, gateway [3], and StackLight nodes
  • Servers: 3 or more
  • CPU cores per server: 32
  • RAM per server: 128 GB
  • Disk space per server: 1 SSD x 500 GB, 2 SSD x 1000 GB [6]
  • NICs per server: 5

Tenant gateway (optional)
  • Servers: 0-3
  • CPU cores per server: 32
  • RAM per server: 128 GB
  • Disk space per server: 1 SSD x 500 GB
  • NICs per server: 5

Tungsten Fabric control plane nodes [4]
  • Servers: 3
  • CPU cores per server: 16
  • RAM per server: 64 GB
  • Disk space per server: 1 SSD x 500 GB
  • NICs per server: 1

Tungsten Fabric analytics nodes [4]
  • Servers: 3
  • CPU cores per server: 32
  • RAM per server: 64 GB
  • Disk space per server: 1 SSD x 1000 GB
  • NICs per server: 1

Compute node
  • Servers: 3 (varies)
  • CPU cores per server: 16
  • RAM per server: 64 GB
  • Disk space per server: 1 SSD x 500 GB [7]
  • NICs per server: 5

Infrastructure node (Kubernetes cluster management)
  • Servers: 3 [8]
  • CPU cores per server: 16
  • RAM per server: 64 GB
  • Disk space per server: 1 SSD x 500 GB
  • NICs per server: 5

Infrastructure node (Ceph) [5]
  • Servers: 3
  • CPU cores per server: 16
  • RAM per server: 64 GB
  • Disk space per server: 1 SSD x 500 GB, 2 HDD x 2000 GB
  • NICs per server: 5

Note

The exact hardware specifications and the number of control plane and gateway nodes depend on the cloud configuration and scaling needs. For example, for clouds with more than 12,000 Neutron ports, Mirantis recommends increasing the number of gateway nodes, as illustrated by the sketch below.
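To make this recommendation easy to apply, the following minimal Python sketch flags clouds that cross the 12,000-port mark. Only the threshold comes from the note above; the function name and its messages are illustrative assumptions, not part of the product.

    # Illustrative only: the 12,000-port threshold is from the note above;
    # the function name and messages are assumptions for demonstration.
    NEUTRON_PORT_THRESHOLD = 12_000

    def gateway_sizing_hint(neutron_ports: int) -> str:
        """Return a sizing hint for the tenant gateway nodes."""
        if neutron_ports <= NEUTRON_PORT_THRESHOLD:
            return "the reference number of gateway nodes should suffice"
        return "consider adding gateway nodes beyond the reference count"

    for ports in (8_000, 12_000, 20_000):
        print(f"{ports:>6} Neutron ports: {gateway_sizing_hint(ports)}")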

[0] Adding more than 3 nodes to a management cluster is not supported.

[1] In total, at least 2 disks are required:

  • disk0 - system storage, minimum 60 GB.

  • disk1 - Container Cloud services storage, at least 110 GB. The exact capacity requirements depend on the StackLight data retention period.

See Management cluster storage for details.

[2] The OOB management (IPMI) port is not included.

[3] The OpenStack gateway services can optionally be moved to separate nodes.

[4] The TF control plane and analytics nodes can be combined on the same hardware hosts with a corresponding addition of RAM, CPU, and disk space. However, Mirantis does not recommend such a configuration for production environments because it increases the risk of cluster downtime if one of the nodes unexpectedly fails.

[5]
  • A Ceph cluster with 3 Ceph nodes does not provide hardware fault tolerance and is not eligible for recovery operations, such as a disk or an entire node replacement.

  • A Ceph cluster uses a replication factor of 3. If the number of Ceph OSDs falls below 3, the Ceph cluster moves to the degraded state and restricts write operations until the number of alive Ceph OSDs again equals the replication factor. For a worked capacity example, see the sketch after these footnotes.

[6]
  • 1 SSD x 500 GB for the operating system

  • 1 SSD x 1000 GB for the OpenStack LVP

  • 1 SSD x 1000 GB for the StackLight LVP

[7] When Nova is used with local folders, additional capacity is required depending on the size of the VM images.

[8] For the node hardware requirements, refer to Container Cloud Reference Architecture: Managed cluster hardware configuration.
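As a quick sanity check of the figures above, the following Python sketch totals the minimum reference footprint and estimates the usable Ceph capacity. It is illustrative only and not part of the product: the per-node figures are copied from the list above, the totals assume the smallest configuration without Tungsten Fabric and optional tenant gateway nodes, and the usable-capacity estimate uses the common rule of thumb of dividing raw capacity by the replication factor from footnote [5], ignoring Ceph and file system overhead.

    # Illustrative aggregation of the reference figures above; not a product
    # tool. Assumes the minimum configuration: no Tungsten Fabric nodes and
    # no optional tenant gateway nodes.
    nodes = {
        # node type: (servers, CPU cores, RAM GB, disk GB - all per server)
        "container_cloud_management": (3, 16, 128, 960 + 1900),
        "openstack_control_plane":    (3, 32, 128, 500 + 2 * 1000),
        "compute":                    (3, 16, 64, 500),
        "infra_k8s_management":       (3, 16, 64, 500),
        "infra_ceph":                 (3, 16, 64, 500 + 2 * 2000),
    }

    servers = sum(n for n, *_ in nodes.values())
    cores = sum(n * c for n, c, *_ in nodes.values())
    ram_gb = sum(n * r for n, _, r, _ in nodes.values())
    print(f"Servers: {servers}, CPU cores: {cores}, RAM: {ram_gb} GB")

    # Usable Ceph capacity with the replication factor of 3 (footnote [5]):
    # raw capacity divided by the replication factor, overhead ignored.
    ceph_raw_gb = 3 * 2 * 2000  # 3 Ceph nodes x 2 HDD x 2000 GB
    print(f"Approximate usable Ceph capacity: {ceph_raw_gb // 3} GB")

For this minimal footprint, the sketch prints 15 servers, 288 CPU cores, 1344 GB of RAM, and roughly 4000 GB of usable Ceph capacity.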

Note

If you want to evaluate the MOSK capabilities and do not have much hardware at your disposal, you can deploy it in a virtual environment, for example, on top of another OpenStack cloud using the sample Heat templates.

Keep in mind that this tooling is provided for reference only and is not part of the product itself. Mirantis does not guarantee its interoperability with any MOSK version.