
Network requirements

This section describes the Layer 3 network types used for Kubernetes and Mirantis OpenStack for Kubernetes (MOSK) clusters along with the requirements for each network type.

Note

Only IPv4 is currently supported by MOSK and IPAM for infrastructure networks. Both IPv4 and IPv6 are supported for OpenStack workloads.

The following diagram provides an overview of the underlay networks in a MOSK environment:

Figure: L3 networks for a MOSK environment (os-cluster-l3-networking.png)

A MOSK deployment typically requires the following types of networks:

  • Out-of-band (OOB) network

    Connects to Baseboard Management Controllers of the servers that host the management cluster. The out-of-band (OOB) subnet must be accessible from the management network through IP routing. The OOB network is not managed by MOSK.

  • Provisioning (PXE) network

    Enables remote booting of servers through the Preboot eXecution Environment (PXE) protocol. The PXE subnet provides IP addresses for DHCP and network boot of the bare metal hosts for initial inspection and operating system provisioning using the bare metal provisioning service (Ironic). This network may not have the default gateway or a router connected to it. The operator defines the PXE subnet during bootstrap.

    In management clusters, the DHCP server listens on this network for host discovery and inspection.

    In MOSK clusters, hosts use this network only for the initial PXE boot and provisioning, and this network must not be configured on an operational MOSK node after its provisioning.

    For requirements, see DHCP range requirements for PXE.

  • Management network

    This network must be configured on the management cluster to connect to the Kubernetes API endpoints of MOSK clusters. It also connects LCM agents of MOSK nodes to the Kubernetes API endpoint of the management cluster.

    Note

    Management cluster supports full L3 networking topology in the Technology Preview scope. This enables deployment of management cluster nodes in dedicated racks without the need for L2 layer extension between them.

  • API/LCM network

    MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 layer extension between them. For configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in Deployment Guide.

    If you configure BGP announcement of the load-balancer IP address for the MOSK cluster API, the API/LCM network is not required. Announcement of the cluster API LB address is done using the LCM or external network.

    If you configure ARP announcement of the load-balancer IP address for the MOSK cluster API, the API/LCM network must be configured on the Kubernetes manager nodes of the cluster. This network contains the Kubernetes API endpoint with the VRRP virtual IP address.

    Depending on the cluster needs, an operator can select how the VIP address for the Kubernetes API is advertised. When BGP advertisement is used, or when the OpenStack control plane is deployed on separate nodes as opposed to a compact control plane, the configuration is more flexible and there is no need for a compromise such as the one described below.

    However, when using ARP advertisement on a compact control plane, the selection of the network for advertising the VIP address for the Kubernetes API may depend on whether symmetry of service return traffic is required. In this case, select one of the following options:

    Network selection for advertising the VIP address for Kubernetes API on a compact control plane
    • For traffic symmetry between MOSK and management clusters and asymmetry in case of external clients:

      Use the API/LCM network to advertise the VIP address for the Kubernetes API. Allocate this VIP address from the CIDR range of the API/LCM network.

      The gateway in the API/LCM network for a MOSK cluster must have a route to the management subnet of the management cluster. This is required to ensure symmetric traffic flow between the management and MOSK clusters.

    • For traffic symmetry in case of external clients and asymmetry between MOSK and management clusters:

      Use the external network to advertise the VIP address for the Kubernetes API. Allocate this VIP address from the CIDR range of the external network.

      One of the gateways, either in the API/LCM network or in the external network of the MOSK cluster, must have a route to the management subnet of the management cluster. This is required to establish the traffic flow between the management and MOSK clusters.

  • Life-cycle management (LCM) network

    This network is configured on MOSK clusters and connects LCM agents running on a node to the management network of the management cluster. It is also used for communication between kubelet and the Kubernetes API server inside a Kubernetes cluster. The MKE components use this network for communication inside a swarm cluster.

    Multiple VLAN segments and IP subnets can be created for a multi-rack architecture. Each server must be connected to one of the LCM segments and have an IP from the corresponding subnet.

  • Kubernetes external network

    Used to expose the OpenStack, StackLight, and other services of the MOSK cluster. In management clusters, it is replaced by the management network.

    The considerations for selecting the network that advertises the VIP address for the Kubernetes API are the same as described for the API/LCM network above.

  • Kubernetes workloads (pods) network

    Used for communication between containers in Kubernetes. Each host has an address on this network, and this address is used by Calico as an endpoint to the underlay network.

  • Storage replication network (Ceph cluster)

    Used for Ceph storage replication. Connects Ceph nodes to each other and serves internal replication traffic. In Ceph terms, this is the cluster network [0]. To ensure low latency and fast access, place this network on a dedicated hardware interface.

  • Storage access network (Ceph public)

    Used for accessing the Ceph storage. Connects Ceph nodes to the storage clients. The Ceph OSD service is bound to the address on this network. In Ceph terms, this is the public network [0]. Mirantis recommends placing it on a dedicated hardware interface.

[0] For details about Ceph networks, see Ceph Network Configuration Reference.

For more details, see Management cluster networking and MOSK cluster networking.
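
In MOSK, the underlay subnets are described by Subnet objects, and service labels such as ipam/SVC-k8s-lcm, ipam/SVC-ceph-public, and ipam/SVC-ceph-cluster assign a subnet to a particular network role (see the bridge assignment tables below). The following is a minimal, illustrative sketch of an LCM subnet definition only: names and addresses are placeholders, and the authoritative schema is described in API Reference: Subnet resource.

  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: mosk-lcm-rack1              # example name
    namespace: mosk-ns                # example project namespace
    labels:
      kaas.mirantis.com/provider: baremetal
      ipam/SVC-k8s-lcm: "1"           # marks this subnet as an LCM network
  spec:
    cidr: 10.0.11.0/24                # placeholder LCM subnet
    gateway: 10.0.11.1                # placeholder gateway that routes to the management network
    includeRanges:
      - 10.0.11.100-10.0.11.199       # addresses assigned to MOSK cluster nodes
    nameservers:
      - 172.18.176.6                  # placeholder DNS server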

Network routing requirements

For routing requirements, see Underlay networking: routing configuration.

Interface names configuration

The following tables summarize the default names used for the bridges connected to the networks listed above:

Management cluster

  Network type                          Bridge name    Assignment method (TechPreview)
  OOB network                           N/A            N/A
  Provisioning (PXE) network            bm-pxe         By a static interface name
  Management (LCM) network              k8s-lcm [1]    By the subnet label ipam/SVC-k8s-lcm
  Kubernetes workloads (pods) network   k8s-pods [2]   By a static interface name
  Kubernetes external network           k8s-ext [3]    By a static interface name

MOSK cluster

  Network type                                 Bridge name     Assignment method
  OOB network                                  N/A             N/A
  Provisioning (PXE) network                   N/A             N/A
  LCM network                                  k8s-lcm [1]     By the subnet label ipam/SVC-k8s-lcm
  Kubernetes workloads (pods) network          k8s-pods [2]    By a static interface name
  Kubernetes external network                  k8s-ext         By a static interface name
  Storage access network (Ceph public)         ceph-public     By the subnet label ipam/SVC-ceph-public
  Storage replication network (Ceph cluster)   ceph-cluster    By the subnet label ipam/SVC-ceph-cluster

[1] The use of this network role is mandatory for every cluster.

[2] The interface name for this network role is static and cannot be changed.

[3] Only if BGP mode is used for announcement of IP addresses of the load-balanced services and for the cluster API VIP.

Note

Management cluster supports full L3 networking topology in the Technology Preview scope. This enables deployment of management cluster nodes in dedicated racks without the need for L2 layer extension between them.

OpenStack network configuration for a MOSK cluster:

L3 networks for OpenStack

  • Provider networks (service: Networking; VLAN name: pr-floating)

    Typically, a routable network used to provide external access to OpenStack instances (a floating network). Can also be used by OpenStack services, such as Ironic and Manila, to connect their management resources.

  • Overlay networks, or virtual networks (service: Networking; VLAN name: neutron-tunnel)

    The network used to provide isolated, secure tenant networks through tunneling (VLAN/GRE/VXLAN). If VXLAN or GRE encapsulation is used, IP address assignment is required on interfaces at the node level.

  • Live migration network (service: Compute; VLAN name: lm-vlan)

    The network used by the OpenStack Compute service (Nova) to transfer data during live migration. Depending on the cloud needs, it can be placed on a dedicated physical network to avoid affecting other networks during live migration. IP address assignment is required on interfaces at the node level.

How the logical networks described above map to physical networks and interfaces on the nodes depends on the cloud size and configuration. Mirantis recommends placing the OpenStack networks on a dedicated physical interface (bond) that is not shared with the storage and Kubernetes management networks to minimize their influence on each other.
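
For illustration only, the following netplan-style sketch shows one possible way to carry the OpenStack VLANs listed above over a dedicated bond, separate from the storage and Kubernetes management interfaces. This is not a MOSK configuration object and does not show how MOSK itself templates host networking; all interface names, VLAN IDs, and addresses are hypothetical.

  network:
    version: 2
    ethernets:
      eno5: {}
      eno6: {}
    bonds:
      bond-os:                        # dedicated bond for OpenStack traffic
        interfaces: [eno5, eno6]
        parameters:
          mode: 802.3ad
    vlans:
      pr-floating:                    # provider/floating networks, routable externally
        id: 403
        link: bond-os
      neutron-tunnel:                 # overlay (VXLAN/GRE) endpoint, requires a node IP
        id: 404
        link: bond-os
        addresses: [10.0.40.11/24]
      lm-vlan:                        # live migration, requires a node IP
        id: 405
        link: bond-os
        addresses: [10.0.41.11/24]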

L3 network requirements

The following tables describe networking requirements for the following cluster types: MOSK, management, and Ceph.

Note

Management cluster supports full L3 networking topology in the Technology Preview scope. This enables deployment of management cluster nodes in dedicated racks without the need for L2 layer extension between them.

Note

When BGP mode is used for announcement of IP addresses of load-balanced services and for the cluster API VIP, three BGP sessions are created for every node of a management cluster:

  • Two sessions are created by MetalLB for public and provisioning services

  • One session is created by the BIRD BGP daemon for the cluster API VIP

BGP allows only one session to be established per pair of endpoints. For details, see MetalLB documentation: Issues with Calico. To work around this limitation, MOSK allows configuring three networks for a management cluster: provisioning, management, and external. In this case, configure MetalLB to use the provisioning and external networks, and BIRD to use the management network.
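
For reference, an upstream MetalLB BGP configuration for one of these networks looks roughly like the sketch below, which announces a pool of external service addresses to a ToR router. This is a conceptual illustration only: in MOSK, MetalLB configuration is managed through the product's own objects as described in the Deployment Guide, and all names, ASNs, and addresses here are hypothetical. A similar peer and address pool would serve the provisioning network, while the cluster API VIP is announced separately by BIRD over the management network.

  apiVersion: metallb.io/v1beta2
  kind: BGPPeer
  metadata:
    name: external-peer               # ToR router reachable over the external network
    namespace: metallb-system
  spec:
    myASN: 65001
    peerASN: 65000
    peerAddress: 10.0.20.1            # placeholder router address
  ---
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: external-pool
    namespace: metallb-system
  spec:
    addresses:
      - 10.0.20.64/27                 # placeholder range for management cluster services
  ---
  apiVersion: metallb.io/v1beta1
  kind: BGPAdvertisement
  metadata:
    name: external-adv
    namespace: metallb-system
  spec:
    ipAddressPools:
      - external-pool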

If you configure BGP announcement of the load-balancer IP address for a management cluster API and for load-balanced services of the cluster using three networks, ensure that your management cluster networking meets the following requirements:

Management cluster networking requirements (BGP announcement)

  Provisioning (PXE) network

    • Suggested interface name: bm-pxe
    • Minimum number of VLANs: 1
    • Minimum number of IP subnets: 2
    • Minimum recommended IP subnet size:
      - 5 IP addresses (management cluster hosts)
      - 5 IP addresses (MetalLB for provisioning services)
      - Optional. 16 IP addresses (DHCP range for directly connected servers)
    • External routing: Not required
    • Multiple segments/stretch segment: Multiple
    • Internal routing: Routing to separate DHCP segments, if in use

  Management (LCM) network

    • Suggested interface name: k8s-lcm
    • Minimum number of VLANs: 1
    • Minimum number of IP subnets: 1
    • Minimum recommended IP subnet size: 5 IP addresses (management cluster hosts, API VIP)
    • External routing: Not required
    • Multiple segments/stretch segment: Multiple
    • Internal routing:
      - Routing to API endpoints of MOSK clusters for LCM
      - Routing to MetalLB ranges of MOSK clusters for StackLight authentication

  External (Public) network

    • Suggested interface name: k8s-ext
    • Minimum number of VLANs: 1
    • Minimum number of IP subnets: 2
    • Minimum recommended IP subnet size:
      - 5 IP addresses (management cluster hosts)
      - 16 IP addresses (MetalLB for management cluster services)
    • External routing: Required, may use proxy server
    • Multiple segments/stretch segment: Multiple
    • Internal routing: Default route from the management cluster hosts

If you configure ARP announcement of the load-balancer IP address for a management cluster API and for load-balanced services of the cluster, ensure that your management cluster networking meets the following requirements:

Management cluster networking requirements (ARP announcement)

  Provisioning (PXE) network

    • Suggested interface name: bm-pxe
    • Minimum number of VLANs: 1
    • Minimum number of IP subnets: 3
    • Minimum recommended IP subnet size:
      - 5 IP addresses (management cluster hosts)
      - 5 IP addresses (MetalLB for provisioning services)
      - 16 IP addresses (DHCP range for directly connected servers)
    • External routing: Not required
    • Multiple segments/stretch segment: Stretch segment for the management cluster due to MetalLB Layer 2 limitations [4]
    • Internal routing: Routing to separate DHCP segments, if in use

  Management (LCM) network

    • Suggested interface name: k8s-lcm
    • Minimum number of VLANs: 1
    • Minimum number of IP subnets: 2
    • Minimum recommended IP subnet size:
      - 5 IP addresses (management cluster hosts, API VIP)
      - 16 IP addresses (MetalLB for MOSK management services)
    • External routing: Required, may use proxy server
    • Multiple segments/stretch segment: Stretch segment due to VRRP and MetalLB Layer 2 limitations
    • Internal routing:
      - Routing to API endpoints of MOSK clusters for LCM
      - Routing to MetalLB ranges of MOSK clusters for StackLight authentication
      - Default route from the management cluster hosts

[4] Multiple VLAN segments with IP subnets can be added to the cluster configuration for separate DHCP domains.

MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 layer extension between them. For configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in Deployment Guide.

If you configure BGP announcement of the load-balancer IP address for a MOSK cluster API and for load-balanced services of the cluster, ensure that your MOSK cluster networking meets the following requirements:

Networking requirements for a MOSK cluster (BGP announcement)

  Provisioning network

    • Minimum number of VLANs: 1 (optional)
    • Suggested interface name: N/A
    • Minimum number of IP subnets: 1 (optional)
    • Minimum recommended IP subnet size: More than 10 IP addresses (DHCP range). Depends on the cluster size. For details, see DHCP range requirements for PXE.
    • Stretch or multiple segments: Multiple
    • External routing: Not required
    • Internal routing: Routing to the provisioning network of the management cluster

  LCM network

    • Minimum number of VLANs: 1
    • Suggested interface name: k8s-lcm
    • Minimum number of IP subnets: 1
    • Minimum recommended IP subnet size:
      - 1 IP address per cluster node
      - 1 IP address for the API endpoint VIP
    • Stretch or multiple segments: Multiple
    • External routing: Not required
    • Internal routing:
      - Routing to the IP subnet of the management network of the management cluster
      - Routing to all LCM IP subnets of the same MOSK cluster

  External network

    • Minimum number of VLANs: 1
    • Suggested interface name: k8s-ext-v
    • Minimum number of IP subnets: 2
    • Minimum recommended IP subnet size:
      - 1 IP address per MOSK controller node
      - 16 IP addresses (MetalLB for StackLight and OpenStack services)
    • Stretch or multiple segments: Multiple. For details, see Configure node selectors for MetalLB speakers.
    • External routing: Required, default route
    • Internal routing: Routing to the IP subnet of the MOSK management API

  Kubernetes workloads network

    • Minimum number of VLANs: 1
    • Suggested interface name: k8s-pods [5]
    • Minimum number of IP subnets: 1
    • Minimum recommended IP subnet size: 1 IP address per cluster node
    • Stretch or multiple segments: Multiple
    • External routing: Not required
    • Internal routing: Routing to all IP subnets of Kubernetes workloads

If you configure ARP announcement of the load-balancer IP address for a MOSK cluster API and for load-balanced services of the cluster, ensure that your MOSK cluster networking meets the following requirements:

Networking requirements for a MOSK cluster (ARP announcement)

  Provisioning network

    • Minimum number of VLANs: 1 (optional)
    • Suggested interface name: N/A
    • Minimum number of IP subnets: 1 (optional)
    • Minimum recommended IP subnet size: More than 10 IP addresses (DHCP range). Depends on the cluster size. For details, see DHCP range requirements for PXE.
    • Stretch or multiple segments: Multiple
    • External routing: Not required
    • Internal routing: Routing to the provisioning network of the management cluster

  API/LCM network

    • Minimum number of VLANs: 1
    • Suggested interface name: k8s-lcm
    • Minimum number of IP subnets: 1
    • Minimum recommended IP subnet size:
      - 3 IP addresses for Kubernetes manager nodes
      - 1 IP address for the API endpoint VIP
    • Stretch or multiple segments: Stretch due to VRRP limitations
    • External routing: Not required
    • Internal routing:
      - Routing to the IP subnet of the management network of the management cluster
      - Routing to all LCM IP subnets of the same MOSK cluster, if in use

  LCM network

    • Minimum number of VLANs: 1 (optional)
    • Suggested interface name: k8s-lcm
    • Minimum number of IP subnets: 1 (optional)
    • Minimum recommended IP subnet size: 1 IP address per MOSK node (Kubernetes worker)
    • Stretch or multiple segments: Multiple
    • External routing: Not required
    • Internal routing:
      - Routing to the IP subnet of the API/LCM network
      - Routing to all IP subnets of the LCM network, if in use

  External network

    • Minimum number of VLANs: 1
    • Suggested interface name: k8s-ext-v
    • Minimum number of IP subnets: 2
    • Minimum recommended IP subnet size:
      - 1 IP address per MOSK controller node
      - 16 IP addresses (MetalLB for StackLight and OpenStack services)
    • Stretch or multiple segments: Stretch connected to all MOSK controller nodes. For details, see Configure node selectors for MetalLB speakers.
    • External routing: Required, default route
    • Internal routing: Routing to the IP subnet of the MOSK management API

  Kubernetes workloads network

    • Minimum number of VLANs: 1
    • Suggested interface name: k8s-pods [5]
    • Minimum number of IP subnets: 1
    • Minimum recommended IP subnet size: 1 IP address per cluster node
    • Stretch or multiple segments: Multiple
    • External routing: Not required
    • Internal routing: Routing to all IP subnets of Kubernetes workloads

[5] The bridge interface with this name is mandatory if you need to separate Kubernetes workloads traffic. You can configure this bridge over a VLAN or directly over a bonded or single interface.

Networking requirements for a Ceph cluster

  Storage access network

    • Minimum number of VLANs: 1
    • Suggested interface name: stor-public [6]
    • Minimum number of IP subnets: 1
    • Minimum recommended IP subnet size: 1 IP address per cluster node
    • Stretch or multiple segments: Multiple
    • External routing: Not required
    • Internal routing: Routing to all IP subnets of the Storage access network

  Storage replication network

    • Minimum number of VLANs: 1
    • Suggested interface name: stor-cluster [6]
    • Minimum number of IP subnets: 1
    • Minimum recommended IP subnet size: 1 IP address per cluster node
    • Stretch or multiple segments: Multiple
    • External routing: Not required
    • Internal routing: Routing to all IP subnets of the Storage replication network

Note

When selecting externally routable subnets, ensure that their ranges do not overlap with the ranges of the internal subnets. Otherwise, internal resources of users will not be accessible from the MOSK cluster.

[6] For details about Ceph networks, see Ceph Network Configuration Reference.

DHCP range requirements for PXE

When setting up the DHCP range for the Preboot eXecution Environment (PXE) network, keep in mind the following considerations to ensure smooth server provisioning:

  • Determine the network size. For instance, if you plan to provision 50+ servers concurrently, a /24 network is recommended. This size gives the DHCP server enough address space to assign a unique IP address to each new Media Access Control (MAC) address, minimizing the risk of collision.

    A collision occurs when two or more devices are assigned the same IP address. With a /24 network, the collision probability of the SDBM hash function used by the DHCP server is low. If a collision does occur, the DHCP server provides a free address using a linear lookup strategy.

  • In the context of PXE provisioning, technically, the IP address does not need to be consistent for every new DHCP request associated with the same MAC address. However, maintaining the same IP address can enhance user experience, making the /24 network size more of a recommendation than an absolute requirement.

  • For a minimal network size, it is sufficient to cover the number of concurrently provisioned servers plus one additional address (50 + 1). This calculation applies after accounting for any exclusions that exist in the range. You can define exclusions in the corresponding field of the Subnet object (see the example after this list). For details, see API Reference: Subnet resource.

  • When the available address space is less than the minimum described above, you will not be able to automatically provision all servers. However, you can manually provision them by combining manual IP assignment for each bare metal host with manual pauses. For these operations, use the host.dnsmasqs.metal3.io/address and baremetalhost.metal3.io/detached annotations in the BareMetalHostInventory object. For details, see Manually allocate IP addresses for bare metal hosts.

  • All addresses within the specified range must be unused before provisioning. If the DHCP server issues an IP address that is already in use to a BOOTP client, that client cannot complete provisioning.
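
For illustration, a Subnet object that defines a PXE DHCP range with exclusions might look like the sketch below. This is a hedged example only: the ipam/SVC-dhcp-range label is assumed here to mark the subnet as a DHCP range, the field layout follows API Reference: Subnet resource, and every name and address is a placeholder.

  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: pxe-dhcp-range-rack1        # example name
    namespace: default                # example namespace
    labels:
      kaas.mirantis.com/provider: baremetal
      ipam/SVC-dhcp-range: "1"        # assumed service label for a DHCP range subnet
  spec:
    cidr: 10.0.10.0/24                # a /24 comfortably covers 50+ concurrently provisioned servers
    includeRanges:
      - 10.0.10.100-10.0.10.254       # addresses the DHCP server may hand out
    excludeRanges:
      - 10.0.10.150-10.0.10.159       # exclusions, accounted for before the 50 + 1 sizing check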