Network types

This section describes the Layer 3 network types used by Kubernetes and Mirantis OpenStack for Kubernetes (MOSK) clusters, along with the requirements for each network type.

Note

Only IPv4 is currently supported by Container Cloud and IPAM for infrastructure networks. Both IPv4 and IPv6 are supported for OpenStack workloads.

The following diagram provides an overview of the underlay networks in a MOSK environment:

[Diagram: L3 networking of the underlay networks in a MOSK cluster]

L3 networks for Kubernetes

A MOSK deployment typically requires the following types of networks:

  • Out-of-band (OOB) network

    Connects the Baseboard Management Controllers (BMCs) of the hosts in the network to Ironic. This network is out of band for the host operating system.

  • PXE/provisioning network

    Enables remote booting of servers through the PXE protocol. In management clusters, the DHCP server listens on this network for host discovery and inspection. In managed clusters, hosts use this network for the initial PXE boot and provisioning.

  • Management network

    Used in management clusters for managing MOSK infrastructure and for communication between containers in Kubernetes. Serves external connections to the management API and services of the management cluster.

  • LCM/API network

    Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 extension between them. For the configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in the Deployment Guide.

    If BGP announcement is configured for the MOSK cluster API LB address, the LCM/API network is not required. Announcement of the cluster API LB address is done using the LCM network.

    If you configure ARP announcement of the load-balancer IP address for the MOSK cluster API, the LCM/API network must be configured on the Kubernetes manager nodes of the cluster. This network contains the Kubernetes API endpoint with the VRRP virtual IP address.

  • LCM network

    Connects LCM agents running on a node to the LCM API of the management cluster. It is also used for communication between kubelet and the Kubernetes API server inside a Kubernetes cluster. The MKE components use this network for communication inside a swarm cluster. In management clusters, it is replaced by the management network.

    Multiple VLAN segments and IP subnets can be created for a multi-rack architecture. Each server must be connected to one of the LCM segments and have an IP from the corresponding subnet.

  • Kubernetes external network

    Used to expose the OpenStack, StackLight, and other services of the MOSK cluster. In management clusters, it is replaced by the management network.

  • Kubernetes workloads (pods) network

    Used for communication between containers in Kubernetes. Each host has an address on this network, and this address is used by Calico as an endpoint to the underlay network.

  • Storage access network (Ceph)

    Used for accessing the Ceph storage. Connects Ceph nodes to the storage clients. The Ceph OSD service is bound to the address on this network. In Ceph terms, this is the public network [0]. We recommend placing this network on a dedicated hardware interface.

  • Storage replication network (Ceph)

    Used for Ceph storage replication. Connects Ceph nodes to each other and serves internal replication traffic. In Ceph terms, this is the cluster network [0]. To ensure low latency and fast access, place this network on a dedicated hardware interface.

[0] For details about Ceph networks, see Ceph Network Configuration Reference.

The following tables summarize the default names used for the bridges connected to the networks listed above:

Management cluster

  Network type                   Bridge name    Assignment method (TechPreview)
  -----------------------------  -------------  ------------------------------------------
  OOB network                    N/A            N/A
  PXE network                    bm-pxe         By a static interface name
  Management network             k8s-lcm [2]    By the subnet label ipam/SVC-k8s-lcm
  Kubernetes workloads network   k8s-pods [1]   By a static interface name

MOSK cluster

  Network type                            Bridge name     Assignment method
  --------------------------------------  --------------  ------------------------------------------
  OOB network                             N/A             N/A
  PXE network                             N/A             N/A
  LCM network                             k8s-lcm [2]     By the subnet label ipam/SVC-k8s-lcm
  Kubernetes workloads network            k8s-pods [1]    By a static interface name
  Kubernetes external network             k8s-ext         By a static interface name
  Storage access (public) network         ceph-public     By the subnet label ipam/SVC-ceph-public
  Storage replication (cluster) network   ceph-cluster    By the subnet label ipam/SVC-ceph-cluster

[1] The interface name for this network role is static and cannot be changed.

[2] The use of this interface name (and network role) is mandatory for every cluster.
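
The "By the subnet label" assignment method refers to the IPAM Subnet objects of a cluster: a Subnet that carries a service label, such as ipam/SVC-k8s-lcm, marks the address range used for the corresponding network role on the nodes. The following minimal sketch only illustrates the idea; the object name, namespace, CIDR, and ranges are placeholders, and the exact Subnet fields may differ between releases, so verify them against the Deployment Guide.

  # Minimal sketch of a Subnet labeled for the LCM network role.
  # All names and addresses below are placeholders.
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: lcm-subnet-rack1              # example name
    namespace: mosk-project             # example project namespace
    labels:
      ipam/SVC-k8s-lcm: "1"             # assigns this subnet to the LCM network role
      kaas.mirantis.com/provider: baremetal
  spec:
    cidr: 10.10.10.0/24                 # example CIDR
    gateway: 10.10.10.1                 # example gateway
    includeRanges:
      - 10.10.10.10-10.10.10.100        # example allocation range for nodes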

L3 networks for MOSK

The MOSK deployment additionally requires the following networks.

  • Provider networks (service: Networking; VLAN name: pr-floating)

    Typically, a routable network used to provide external access to OpenStack instances (a floating network). Can also be used by OpenStack services, such as Ironic and Manila, to connect their management resources.

  • Overlay networks, also called virtual networks (service: Networking; VLAN name: neutron-tunnel)

    Used to provide isolated, secure tenant networks with the help of a tunneling mechanism (VLAN/GRE/VXLAN). If VXLAN or GRE encapsulation is used, IP address assignment is required on interfaces at the node level.

  • Live migration network (service: Compute; VLAN name: lm-vlan)

    Used by the OpenStack Compute service (Nova) to transfer data during live migration. Depending on the cloud needs, it can be placed on a dedicated physical network to avoid affecting other networks during live migration. IP address assignment is required on interfaces at the node level.

How the logical networks described above map to physical networks and interfaces on nodes depends on the cloud size and configuration. We recommend placing OpenStack networks on a dedicated physical interface (bond) that is not shared with the storage and Kubernetes management networks, to minimize their influence on each other.
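
In practice, this mapping is expressed in the host networking template (L2Template) of each node group. The fragment below is only a simplified sketch of the recommendation above: a dedicated bond carries the VLANs of the OpenStack provider, overlay, and live migration networks, separate from the interfaces used for LCM, Kubernetes workloads, and storage traffic. The bond name, VLAN IDs, and subnet references are examples, and the {{nic}} and {{ip}} template functions are assumed to follow the Container Cloud L2Template conventions; see the Deployment Guide for the authoritative format.

  # Simplified npTemplate fragment of an L2Template (netplan format).
  # Bond members, VLAN IDs, and subnet names are examples only.
  bonds:
    bond1:                              # dedicated bond for the OpenStack data plane
      interfaces:
        - {{ nic 2 }}
        - {{ nic 3 }}
  vlans:
    pr-floating:                        # provider (floating) networks
      id: 403
      link: bond1
    neutron-tunnel:                     # overlay (tenant) networks, node-level IP required
      id: 404
      link: bond1
      addresses:
        - {{ ip "neutron-tunnel:tunnel-subnet" }}
    lm-vlan:                            # live migration network, node-level IP required
      id: 405
      link: bond1
      addresses:
        - {{ ip "lm-vlan:live-migration-subnet" }}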

L3 network requirements

The following tables describe the networking requirements for the Container Cloud management cluster, a MOSK cluster, and a Ceph cluster.

Container Cloud management cluster networking requirements

  • Provisioning network

    - Suggested interface name: N/A
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 3
    - Minimum recommended IP subnet size:
        - 8 IP addresses (Container Cloud management cluster hosts)
        - 8 IP addresses (MetalLB for provisioning services)
        - 16 IP addresses (DHCP range for directly connected servers)
    - External routing: Not required
    - Multiple segments/stretch segment: Stretch segment due to MetalLB Layer 2 limitations [3]
    - Internal routing: Routing to separate DHCP segments, if in use

  • Management network

    - Suggested interface name: k8s-lcm
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 2
    - Minimum recommended IP subnet size:
        - 8 IP addresses (Container Cloud management cluster hosts, API VIP)
        - 16 IP addresses (MetalLB for Container Cloud services)
    - External routing: Required, may use a proxy server
    - Multiple segments/stretch segment: Stretch segment due to VRRP and MetalLB Layer 2 limitations
    - Internal routing:
        - Routing to API endpoints of managed clusters for LCM
        - Routing to MetalLB ranges of managed clusters for StackLight authentication
        - Default route from Container Cloud management cluster hosts

[3] Multiple VLAN segments with IP subnets can be added to the cluster configuration for separate DHCP domains.
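
When translating the sizes above into subnets, account for the network and broadcast addresses of each IPv4 subnet. The following is standard IPv4 arithmetic, not a MOSK-specific rule:

  /29 -> 8 addresses in total,  6 usable
  /28 -> 16 addresses in total, 14 usable
  /27 -> 32 addresses in total, 30 usable

For example, a requirement of 8 host addresses does not fit into a /29 and needs at least a /28, unless the addresses are allocated as a range inside a larger subnet.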

Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 extension between them. For the configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in the Deployment Guide.

If you configure BGP announcement of the load-balancer IP address for the MOSK cluster API and for load-balanced services of the cluster, the following requirements apply:

Networking requirements for a MOSK cluster

  • Provisioning network

    - Suggested interface name: N/A
    - Minimum number of VLANs: 1 (optional)
    - Minimum number of IP subnets: 1 (optional)
    - Minimum recommended IP subnet size: 16 IPs (DHCP range)
    - Stretch or multiple segments: Multiple
    - External routing: Not required
    - Internal routing: Routing to the provisioning network of the management cluster

  • LCM network

    - Suggested interface name: k8s-lcm
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 1
    - Minimum recommended IP subnet size:
        - 1 IP per cluster node
        - 1 IP for the API endpoint VIP
    - Stretch or multiple segments: Multiple
    - External routing: Not required
    - Internal routing:
        - Routing to the IP subnet of the Container Cloud management network
        - Routing to all LCM IP subnets of the same MOSK cluster

  • External network

    - Suggested interface name: k8s-ext-v
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 2
    - Minimum recommended IP subnet size:
        - 1 IP per MOSK controller node
        - 16 IPs (MetalLB for StackLight and OpenStack services)
    - Stretch or multiple segments: Multiple. For details, see Configure node selectors for MetalLB speakers.
    - External routing: Required, default route
    - Internal routing: Routing to the IP subnet of the Container Cloud management API

  • Kubernetes workloads network

    - Suggested interface name: k8s-pods [4]
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 1
    - Minimum recommended IP subnet size: 1 IP per cluster node
    - Stretch or multiple segments: Multiple
    - External routing: Not required
    - Internal routing: Routing to all IP subnets of Kubernetes workloads
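
With BGP announcement, each node advertises the cluster API and service load-balancer addresses to the rack top-of-rack router over BGP instead of relying on ARP within a shared L2 segment, which is what removes the stretch-segment requirement. MOSK configures this through the procedures referenced above; the resources below are only a conceptual sketch in the upstream MetalLB format, with placeholder addresses and AS numbers, and are not the MOSK configuration objects.

  # Conceptual sketch only: upstream MetalLB BGP resources with placeholder values.
  # MOSK configures BGP announcement through its own objects; see the Deployment Guide.
  apiVersion: metallb.io/v1beta2
  kind: BGPPeer
  metadata:
    name: tor-rack1                    # example peer, typically one per rack
    namespace: metallb-system
  spec:
    myASN: 65101                       # example cluster-side AS number
    peerASN: 65100                     # example ToR AS number
    peerAddress: 10.20.30.1            # example ToR router address
  ---
  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: services-pool
    namespace: metallb-system
  spec:
    addresses:
      - 203.0.113.0/28                 # example range for load-balanced services
  ---
  apiVersion: metallb.io/v1beta1
  kind: BGPAdvertisement
  metadata:
    name: services-adv
    namespace: metallb-system
  spec:
    ipAddressPools:
      - services-pool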

If you configure ARP announcement of the load-balancer IP address for the MOSK cluster API and for load-balanced services of the cluster, the following requirements apply:

Networking requirements for a MOSK cluster

  • Provisioning network

    - Suggested interface name: N/A
    - Minimum number of VLANs: 1 (optional)
    - Minimum number of IP subnets: 1 (optional)
    - Minimum recommended IP subnet size: 16 IPs (DHCP range)
    - Stretch or multiple segments: Multiple
    - External routing: Not required
    - Internal routing: Routing to the provisioning network of the management cluster

  • LCM/API network

    - Suggested interface name: k8s-lcm
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 1
    - Minimum recommended IP subnet size:
        - 3 IPs for Kubernetes manager nodes
        - 1 IP for the API endpoint VIP
    - Stretch or multiple segments: Stretch due to VRRP limitations
    - External routing: Not required
    - Internal routing:
        - Routing to the IP subnet of the Container Cloud management network
        - Routing to all LCM IP subnets of the same MOSK cluster, if in use

  • LCM network

    - Suggested interface name: k8s-lcm
    - Minimum number of VLANs: 1 (optional)
    - Minimum number of IP subnets: 1 (optional)
    - Minimum recommended IP subnet size: 1 IP per MOSK node (Kubernetes worker)
    - Stretch or multiple segments: Multiple
    - External routing: Not required
    - Internal routing:
        - Routing to the IP subnet of the LCM/API network
        - Routing to all IP subnets of the LCM network, if in use

  • External network

    - Suggested interface name: k8s-ext-v
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 2
    - Minimum recommended IP subnet size:
        - 1 IP per MOSK controller node
        - 16 IPs (MetalLB for StackLight and OpenStack services)
    - Stretch or multiple segments: Stretch segment connected to all MOSK controller nodes. For details, see Configure node selectors for MetalLB speakers.
    - External routing: Required, default route
    - Internal routing: Routing to the IP subnet of the Container Cloud management API

  • Kubernetes workloads network

    - Suggested interface name: k8s-pods [4]
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 1
    - Minimum recommended IP subnet size: 1 IP per cluster node
    - Stretch or multiple segments: Multiple
    - External routing: Not required
    - Internal routing: Routing to all IP subnets of Kubernetes workloads

[4] The bridge interface with this name is mandatory if you need to separate Kubernetes workloads traffic. You can configure this bridge over a VLAN or directly over a bonded or single interface, as illustrated in the sketch below.
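
The following netplan-style sketch shows the first option, a k8s-pods bridge on top of a dedicated VLAN over a bond. All names other than k8s-pods, the VLAN ID, and the subnet reference are placeholders, and the {{nic}} and {{ip}} template functions are assumed as in the earlier L2Template fragment.

  # Sketch: k8s-pods bridge over a VLAN on a bond (IDs and member interfaces are examples).
  bonds:
    bond0:
      interfaces:
        - {{ nic 0 }}
        - {{ nic 1 }}
  vlans:
    k8s-pods-vlan:                     # example VLAN carrying Kubernetes workloads traffic
      id: 408
      link: bond0
  bridges:
    k8s-pods:                          # static bridge name for Kubernetes workloads traffic
      interfaces: [k8s-pods-vlan]
      addresses:
        - {{ ip "k8s-pods:k8s-pods-subnet" }}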

Networking requirements for a Ceph cluster

  • Storage access network

    - Suggested interface name: stor-public [5]
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 1
    - Minimum recommended IP subnet size: 1 IP per cluster node
    - Stretch or multiple segments: Multiple
    - External routing: Not required
    - Internal routing: Routing to all IP subnets of the storage access network

  • Storage replication network

    - Suggested interface name: stor-cluster [5]
    - Minimum number of VLANs: 1
    - Minimum number of IP subnets: 1
    - Minimum recommended IP subnet size: 1 IP per cluster node
    - Stretch or multiple segments: Multiple
    - External routing: Not required
    - Internal routing: Routing to all IP subnets of the storage replication network

Note

When selecting externally routable subnets, ensure that the subnet ranges do not overlap with the internal subnet ranges. Otherwise, internal resources of users will not be reachable from the MOSK cluster.

[5] For details about Ceph networks, see Ceph Network Configuration Reference.
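
Consistent with the assignment methods listed earlier, the storage access and replication networks are selected by the ipam/SVC-ceph-public and ipam/SVC-ceph-cluster subnet labels. The following is a minimal sketch with placeholder names and addresses; exact Subnet fields may vary between releases, so verify them against the Deployment Guide.

  # Sketch: Subnet objects labeled for the Ceph public and cluster network roles.
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: ceph-public-subnet           # example name
    labels:
      ipam/SVC-ceph-public: "1"        # storage access (public) network role
  spec:
    cidr: 10.10.20.0/24                # example CIDR
  ---
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: ceph-cluster-subnet          # example name
    labels:
      ipam/SVC-ceph-cluster: "1"       # storage replication (cluster) network role
  spec:
    cidr: 10.10.30.0/24                # example CIDR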