Requirements for a VMware vSphere-based cluster

Note

Container Cloud is developed and tested on VMware vSphere 7.0 and 6.7.

For system requirements for a bootstrap node, see Requirements for a bootstrap node.

If you use a firewall or proxy, make sure that the bootstrap and management clusters have access to the following IP ranges and domain names required for the Container Cloud content delivery network and alerting:

  • IP ranges:

  • Domain names:

    • mirror.mirantis.com and repos.mirantis.com for packages

    • binary.mirantis.com for binaries and Helm charts

    • mirantis.azurecr.io and *.blob.core.windows.net for Docker images

    • mcc-metrics-prod-ns.servicebus.windows.net:9093 for Telemetry (port 443 if proxy is enabled)

    • mirantis.my.salesforce.com and login.salesforce.com for Salesforce alerts

Note

  • Access to Salesforce is required from any Container Cloud cluster type.

  • If any additional Alertmanager notification receiver is enabled, for example, Slack, its endpoint must also be accessible from the cluster.
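To quickly verify this access from a bootstrap or management cluster node, you can probe the endpoints over HTTPS. The following is an illustrative sketch, assuming curl is installed and that any configured proxy is exported through the standard https_proxy variable:

  # Check HTTPS connectivity to the Container Cloud CDN and alerting endpoints
  for host in mirror.mirantis.com repos.mirantis.com binary.mirantis.com \
      mirantis.azurecr.io mirantis.my.salesforce.com; do
    if curl -sS --connect-timeout 5 -o /dev/null "https://${host}"; then
      echo "${host}: reachable"
    else
      echo "${host}: NOT reachable"
    fi
  done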

Caution

Regional clusters are unsupported since Container Cloud 2.25.0. Mirantis does not perform functional integration testing of the feature and the related code is removed in Container Cloud 2.26.0. If you still require this feature, contact Mirantis support for further information.

Note

The requirements in this section apply to the latest supported Container Cloud release.

System requirements

Requirements for a vSphere-based Container Cloud cluster

# of nodes

  • Management cluster: 3 (HA)
  • Managed cluster: 5 (6 with StackLight HA)

  Comments:

  • A bootstrap cluster requires access to the vSphere API.
  • A management cluster requires 3 nodes for manager node HA. Adding more than 3 nodes to a management cluster is not supported.
  • A managed cluster requires 3 manager nodes for HA and 2 worker nodes for the Container Cloud workloads. If the multiserver mode is enabled for StackLight, 3 worker nodes are required for the workloads.

# of vCPUs per node

  • Management cluster: 8
  • Managed cluster: 8

  Refer to the RAM recommendations below to plan resources for different types of nodes.

RAM in GB per node

  • Management cluster: 32
  • Managed cluster: 16

  To prevent issues with low RAM, Mirantis recommends the following VM templates for a managed cluster with 50-200 nodes:

  • 16 vCPUs and 40 GB of RAM for a manager node
  • 16 vCPUs and 128 GB of RAM for nodes that run the StackLight server components

Storage in GB per node

  • Management cluster: 120
  • Managed cluster: 120

  The listed amount of disk space must be available as a shared datastore of any type, for example, NFS or vSAN, mounted on all hosts of the vCenter cluster.

Operating system

  • Management cluster: RHEL 8.7 [1] or Ubuntu 20.04
  • Managed cluster: RHEL 8.7 [1] or Ubuntu 20.04

  For management and managed clusters, a base OS VM template must be present in the VMware VM templates folder available to Container Cloud. For details, see VsphereVMTemplate.

RHEL license (for RHEL deployments only)

  • Management cluster: RHEL licenses for Virtual Datacenters
  • Managed cluster: RHEL licenses for Virtual Datacenters

  This license type allows running an unlimited number of guests on one hypervisor. The number of licenses must equal the number of hypervisors in vCenter Server that will host RHEL-based machines. Container Cloud schedules machines according to the scheduling rules applied to vCenter Server. Therefore, make sure that your Red Hat Customer Portal account has enough licenses for the allowed hypervisors.

MCR

  • Management cluster: 23.0.9 (since Cluster release 16.1.0), 23.0.7 (since 16.0.1), 20.10.17 (since 14.0.0)
  • Managed cluster: 23.0.9 (since Cluster release 16.1.0), 23.0.7 (since 16.0.1), 20.10.17 (since 14.0.0)

  Mirantis Container Runtime (MCR) is deployed by Container Cloud as a Container Runtime Interface (CRI) instead of Docker Engine.

VMware vSphere version

  • Management cluster: 7.0, 6.7
  • Managed cluster: 7.0, 6.7

  You can verify the vCenter Server version and the DRS configuration with govc as shown after this table.

cloud-init version

  • Management cluster: 20.3 for RHEL
  • Managed cluster: 20.3 for RHEL

  The minimal cloud-init package version built for the VsphereVMTemplate.

VMware Tools version

  • Management cluster: 11.0.5
  • Managed cluster: 11.0.5

  The minimal open-vm-tools package version built for the VsphereVMTemplate.

Obligatory vSphere capabilities

  • Management cluster: DRS, shared datastore
  • Managed cluster: DRS, shared datastore

  A shared datastore must be mounted on all hosts of the vCenter cluster. Combined with the Distributed Resource Scheduler (DRS), it ensures that the VMs are dynamically scheduled to the cluster hosts.

IP subnet size

  • Management cluster: /24
  • Managed cluster: /24

  Consider the supported VMware vSphere network objects and IPAM recommendations.

  Minimal IP address distribution:

  • Management cluster:

    • 1 for the load balancer of Kubernetes API
    • 3 for manager nodes (one per node)
    • 6 for the Container Cloud services
    • 6 for StackLight

  • Managed cluster:

    • 1 for the load balancer of Kubernetes API
    • 3 for manager nodes
    • 2 for worker nodes
    • 6 for StackLight

[1]

  • RHEL 8.7 is generally available since Cluster releases 16.0.0 and 14.1.0. Before these Cluster releases, it is supported as Technology Preview.

  • Container Cloud does not support mixed operating systems (RHEL combined with Ubuntu) in one cluster.
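To check the vSphere version and the DRS capability from the command line, you can use the govc utility configured as described in the procedure at the end of this section. This is a minimal sketch; the DRS property path is an assumption that may vary across vSphere versions:

  # Print vCenter Server version information
  govc about

  # Verify that DRS is enabled on the target cluster
  # (configurationEx.drsConfig.enabled is assumed; expected output: true)
  govc object.collect -s /<data_center>/host/<cluster> configurationEx.drsConfig.enabled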

Deployment resources requirements

The VMware vSphere provider of Mirantis Container Cloud requires the following resources to successfully create virtual machines for Container Cloud clusters:

  • Data center

    All resources below must belong to a single data center.

  • Cluster

    All virtual machines must run on the hosts of one cluster.

  • Virtual Network or Distributed Port Group

    Network for virtual machines. For details, see VMware vSphere network objects and IPAM recommendations.

  • Datastore

    Storage for virtual machine disks and Kubernetes volumes.

  • Folder

    Placement of virtual machines.

  • Resource pool

    Pool of CPU and memory resources for virtual machines.
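
As a quick sanity check that these objects exist and are visible to the account that Container Cloud will use, you can inspect the data center with the govc utility (see the setup procedure below). A minimal sketch, with your data center name substituted for <data_center>:

  # Summarize the hosts, clusters, networks, and datastores of the data center
  govc datacenter.info /<data_center>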

You must provide the data center and cluster resources by name. You can provide other resources by:

  • Name

    The resource name must be unique within the data center and cluster. Otherwise, the vSphere provider detects multiple resources with the same name and cannot determine which one to use.

  • Full path (recommended)

    Full path to a resource depends on its type. For example:

    • Network

      /<data_center>/network/<network_name>

    • Resource pool

      /<data_center>/host/<cluster>/Resources/<resource_pool_name>

    • Folder

      /<data_center>/vm/<folder1>/<folder2>/.../<folder_name> or /<data_center>/vm/<folder_name>

    • Datastore

      /<data_center>/datastore/<datastore_name>

You can determine the proper resource name using the vSphere UI.

To obtain the full path to vSphere resources:

  1. Download the latest version of the govc utility for your operating system and unpack the govc binary into a directory included in PATH on your machine.

  2. Set the environment variables to access your vSphere cluster. For example:

    export GOVC_USERNAME=user
    export GOVC_PASSWORD=password
    export GOVC_URL=https://vcenter.example.com
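    # Optional: if your vCenter uses a self-signed certificate,
    # disable TLS verification for govc
    export GOVC_INSECURE=true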
    
  3. List the data center root using the govc ls command. Example output:

    /<data_center>/vm
    /<data_center>/network
    /<data_center>/host
    /<data_center>/datastore
    
  4. Obtain the full path to resources by name for:

    1. Network or Distributed Port Group (Distributed Virtual Port Group):

      govc find /<data_center> -type n -name <network_name>
      
    2. Datastore:

      govc find /<data_center> -type s -name <datastore_name>
      
    3. Folder:

      govc find /<data_center> -type f -name <folder_name>
      
    4. Resource pool:

      govc find /<data_center> -type p -name <resource_pool_name>
      
  5. Verify the resource type by full path:

    govc object.collect -json -o "<full_path_to_resource>" | jq .Self.Type
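
For illustration, a complete lookup and verification session might look as follows, assuming a hypothetical data center dc1 and datastore ds-shared-01, with the jq utility installed. Note that recent govc versions emit lowercase JSON keys, in which case the filter becomes .self.type:

  # Resolve the datastore name to its full inventory path
  govc find /dc1 -type s -name ds-shared-01
  # Expected output: /dc1/datastore/ds-shared-01

  # Confirm that the resolved object is a datastore
  govc object.collect -json -o "/dc1/datastore/ds-shared-01" | jq .Self.Type
  # Expected output: "Datastore"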