Requirements for an Equinix Metal based cluster


Since Container Cloud 2.18.0, the public networking mode for Equinix Metal based clusters is deprecated in favor of the private networking mode. Deployments with public networks will become unsupported in one of the following Container Cloud releases.

While planning the deployment of a Mirantis Container Cloud cluster with MKE that is based on the Equinix Metal cloud provider, consider the requirements described below.

Mirantis supports deploying clusters on Equinix Metal in two modes: with public or private networks. The deployment mode for management and managed clusters must be the same. For details on the private networking mode, see Equinix Metal with private networking.

For system requirements for a bootstrap node, see Requirements for a bootstrap node.


For the Equinix Metal cloud provider with private networks, a bootstrap node must be attached to the VLAN that will be used to deploy a management cluster.

If you want to deploy an Equinix Metal based managed cluster with public networks on top of an AWS management cluster, also refer to Requirements for an AWS-based cluster.

If you use a firewall or proxy, make sure that the bootstrap, management, and regional clusters have access to the following IP ranges and domain names required for the Container Cloud content delivery network and alerting:

  • IP ranges:

  • Domain names:

    • and for packages

    • for binaries and Helm charts

    • and * for Docker images

    • for Telemetry (port 443 if proxy is enabled)

    • and for Salesforce alerts


  • Access to Salesforce is required from any Container Cloud cluster type.

  • If any additional Alertmanager notification receiver is enabled, for example, Slack, its endpoint must also be accessible from the cluster.
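As a quick pre-flight check from the bootstrap node, a short script can verify that each required endpoint accepts HTTPS connections. This is only a sketch: the endpoint names below are placeholders, not the actual Container Cloud domains, so substitute the CDN, Telemetry, and Salesforce domains from the list above. Note that this tests direct TCP connectivity; if traffic goes through a proxy, verify that the proxy itself can reach the endpoints instead.

```python
import socket

def https_reachable(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints: replace with the actual Container Cloud CDN and
# alerting domains from the list above.
endpoints = ["mirror.example.com", "repos.example.com"]
unreachable = [h for h in endpoints if not https_reachable(h)]
if unreachable:
    print("No HTTPS access to:", ", ".join(unreachable))
```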


The requirements in this section apply to the latest supported Container Cloud release.

Requirements for an Equinix Metal based Container Cloud cluster


# of nodes

  • Management or regional cluster: 3 (HA)

  • Managed cluster: 5 (6 with StackLight HA)

  • A management cluster requires 3 nodes for the manager nodes HA. Adding more than 3 nodes to a management or regional cluster is not supported.

  • A managed cluster requires 3 manager nodes for HA and 2 worker nodes for the Container Cloud workloads. If the multiserver mode is enabled for StackLight, 3 worker nodes are required for workloads.

# of vCPUs per node



RAM in GB per node



Operating system

  • Management or regional cluster: Ubuntu 20.04

  • Managed cluster: Ubuntu 20.04




Mirantis Container Runtime (MCR) is deployed by Container Cloud as a Container Runtime Interface (CRI) instead of Docker Engine.

Server type



Most available Equinix Metal server types meet the minimal requirements for deploying Container Cloud clusters. However, ensure that the selected Equinix Metal server type meets the following minimal requirements for a managed cluster:

  • 16 GB RAM

  • 8 CPUs

  • 2 storage devices with more than 120 GB each


If the Equinix Metal data center does not have enough capacity, the server provisioning request fails. Servers of particular types can be unavailable at a given time. Therefore, before you deploy a cluster, verify that the selected server type is available as described in Verify the capacity of the Equinix Metal facility.

For more details about Equinix Metal capacity, see the official Equinix Metal documentation.
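The capacity check can also be scripted against the Equinix Metal v1 API, which reports a capacity level per facility and server plan. The sketch below is an assumption based on the documented `GET /capacity` endpoint and response shape; `fetch_capacity` requires a valid Equinix Metal API token, so verify both against the current API reference.

```python
import json
import urllib.request

API_URL = "https://api.equinix.com/metal/v1/capacity"  # Equinix Metal API v1

def fetch_capacity(token: str) -> dict:
    """Fetch the current capacity report; needs a valid API token."""
    req = urllib.request.Request(API_URL, headers={"X-Auth-Token": token})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def capacity_level(report: dict, facility: str, plan: str) -> str:
    """Extract the capacity level (e.g. 'normal', 'limited', 'unavailable')."""
    return report["capacity"][facility][plan]["level"]

# Abbreviated response shape, assumed from the v1 API documentation:
sample = {"capacity": {"am6": {"c3.small.x86": {"level": "normal"}}}}
print(capacity_level(sample, "am6", "c3.small.x86"))  # normal
```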

# of Elastic IP addresses to be used



  • Elastic IPs for a management cluster: 1 for Kubernetes, 5 for Container Cloud, 6 for StackLight

  • Elastic IPs for a managed cluster: 1 for Kubernetes and 5 for StackLight

  • Elastic IPs are not needed for clusters with private networks
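The per-service counts above can be totaled to size an Elastic IP reservation for each cluster type, for example:

```python
# Elastic IP counts per service, as listed above (public networking mode).
ELASTIC_IPS = {
    "management": {"kubernetes": 1, "container-cloud": 5, "stacklight": 6},
    "managed": {"kubernetes": 1, "stacklight": 5},
}

def total_elastic_ips(cluster_type: str) -> int:
    """Total Elastic IPs to reserve for the given cluster type."""
    return sum(ELASTIC_IPS[cluster_type].values())

print(total_elastic_ips("management"))  # 12
print(total_elastic_ips("managed"))     # 6
```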

# of IP addresses for a cluster with private networks



  • A managed cluster requires 5 IPs for StackLight

  • A management cluster requires IPs for the following services:

    • 6 for StackLight

    • 2 for IAM

    • 2 for Ironic

    • 1 for mcc-cache

    • 1 for UI

# of VLANs for a cluster with private networks



Each cluster deployed on Equinix Metal with private networks requires 1 separate VLAN.

Ceph nodes


See comments


Ceph changes in Container Cloud 2.20.0:

  • Since Container Cloud 2.20.0, a Ceph cluster is not deployed on management and regional clusters of the Equinix Metal provider with private networking.

  • The Ceph cluster is automatically removed from existing management and regional clusters during the Container Cloud update to 2.20.0.

  • Managed clusters continue using Ceph as a distributed storage system.

Recommended minimal number of Ceph node roles:


  • Manager and Monitor: 3 (for HA)

> 500


If you select Manual Ceph Configuration during the cluster creation, you can manually configure Ceph roles for each machine in the cluster following the recommended minimal number of Ceph node roles. Otherwise, the Equinix Metal cloud provider configures Ceph roles automatically: all control plane machines receive the Storage, Manager, and Monitor roles, and all worker machines receive the Storage role.
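For the manual path, role assignment is typically expressed per machine in the Ceph cluster specification. The fragment below is a hypothetical sketch only: the field names follow the KaaSCephCluster resource pattern, and the machine names and device names are invented, so verify the exact schema against the Container Cloud documentation for your release.

```yaml
# Hypothetical sketch of manual Ceph role assignment; verify field names
# against the KaaSCephCluster reference for your Container Cloud release.
spec:
  cephClusterSpec:
    nodes:
      cluster-machine-1:        # control plane machine: Manager and Monitor
        roles: [mon, mgr]
      cluster-machine-4:        # worker machine: Storage only
        storageDevices:
          - name: sdb
          - name: sdc
```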