This section describes network types for Layer 3 networks used for Kubernetes
and Mirantis OpenStack for Kubernetes (MOSK) clusters along with
requirements for each network type.
Note
Only IPv4 is currently supported by MOSK and IPAM
for infrastructure networks. Both IPv4 and IPv6 are supported
for OpenStack workloads.
The following diagram provides an overview of the underlay networks in a
MOSK environment:
A MOSK deployment typically requires the following types of
networks:
Out-of-band (OOB) network
Connects the Baseboard Management Controllers (BMCs) of the hosts in the
network to Ironic. This network is out of band for the host operating
system.
PXE/provisioning network
Enables remote booting of servers through the PXE protocol. In management
clusters, the DHCP server listens on this network for host discovery and
inspection. In managed clusters, hosts use this network for the initial PXE
boot and provisioning.
Management network
Used in management clusters for managing MOSK
infrastructure and for communication between containers in Kubernetes.
Serves external connections to the management API and services of the
management cluster.
Note
Since Container Cloud 2.30.0 (Cluster releases 21.0.0 and 20.0.0), the
management cluster supports a full L3 networking topology in the Technology
Preview scope. This enables deployment of management cluster nodes in
dedicated racks without the need for Layer 2 extension between them.
API/LCM network
BGP announcement
If you configure BGP announcement of the load-balancer IP address for the
MOSK cluster API, the API/LCM network is not required. The cluster API
load-balancer address is announced using the LCM or external network.
ARP announcement
If you configure ARP announcement of the load-balancer IP address for the
MOSK cluster API, the API/LCM network must be configured on the
Kubernetes manager nodes of the cluster. This network contains the
Kubernetes API endpoint with the VRRP virtual IP address.
Depending on the cluster needs, an operator can select how the VIP address
for the Kubernetes API is advertised. When BGP advertisement is used, or when
the OpenStack control plane is deployed on separate nodes as opposed to a
compact control plane, the configuration is more flexible and no compromise
such as the one described below is required.
However, when ARP advertisement is used on a compact control plane, the
selection of the network for advertising the VIP address for the Kubernetes
API may depend on whether symmetry of the service return traffic is required.
In this case, select one of the following options:
Network selection for advertising the VIP address for Kubernetes
API on a compact control plane
For traffic symmetry between MOSK and management clusters and asymmetry
in case of external clients:
Use the API/LCM network to advertise the VIP address for Kubernetes API.
Allocate this VIP address from the CIDR of the API/LCM network.
The gateway in the API/LCM network for a MOSK cluster must have a route to
the management subnet of the management cluster. This is required to
ensure symmetric traffic flow between the management and MOSK clusters.
For traffic symmetry in case of external clients and asymmetry between
MOSK and management clusters:
Use the external network to advertise the VIP address for Kubernetes API.
Allocate this VIP address from the CIDR of the external network.
One of the gateways, either in the API/LCM network or in the external network
of the MOSK cluster, must have a route to the management subnet of the
management cluster. This is required to establish the traffic flow between
the management and MOSK clusters.
LCM network
Connects LCM agents running on a node to the LCM API of the management
cluster. It is also used for communication between kubelet and the
Kubernetes API server inside a Kubernetes cluster. The MKE components use
this network for communication inside a swarm cluster.
In management clusters, it is replaced by the management network.
Multiple VLAN segments and IP subnets can be created for a multi-rack
architecture. Each server must be connected to one of the LCM segments and
have an IP from the corresponding subnet.
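For illustration only, the following sketch shows how one per-rack LCM segment
might be described as a Subnet resource in the MOSK IPAM. The name, namespace,
CIDR, ranges, and nameserver below are placeholder values, and the labels that
bind the subnet to a specific cluster and L2 template are omitted; refer to the
IPAM documentation for the authoritative format.
```yaml
# Hypothetical example of an LCM subnet for one rack of a MOSK cluster.
# All names and addresses are placeholders; labels that bind the subnet
# to a cluster and L2 template are omitted for brevity.
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: rack1-lcm              # placeholder name of the rack-1 LCM segment
  namespace: mosk-project      # placeholder project of the MOSK cluster
spec:
  cidr: 10.10.11.0/24          # IP subnet of the rack-1 LCM VLAN segment
  gateway: 10.10.11.1          # gateway that routes to the other LCM segments
  includeRanges:
    - 10.10.11.100-10.10.11.200  # addresses assigned to the rack-1 servers
  nameservers:
    - 172.18.176.6             # placeholder DNS server
```
In a multi-rack architecture, each rack would get its own Subnet of this kind,
and every server is attached to exactly one of them, as stated above.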
Kubernetes external network
Used to expose the OpenStack, StackLight, and other services of the
MOSK cluster. In management clusters, it is replaced by
the management network.
For selecting the network used to advertise the VIP address for the Kubernetes
API, see the options described under the API/LCM network above.
Kubernetes workloads (pods) network
Used for communication between containers in Kubernetes. Each host has an
address on this network, and this address is used by Calico as an endpoint
to the underlay network.
Storage access network (Ceph)
Used for accessing the Ceph storage. Connects Ceph nodes to the storage
clients. The Ceph OSD service is bound to the address on this network. In
Ceph terms, this is the public network. We recommend placing it on a
dedicated hardware interface.
Storage replication network (Ceph)
Used for Ceph storage replication. Connects Ceph nodes to each other and
serves internal replication traffic. In Ceph terms, this is the cluster
network. To ensure low latency and fast access, place this network on a
dedicated hardware interface.
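As an illustration of the recommendation to keep both Ceph networks on
dedicated hardware interfaces, the following netplan-style sketch defines the
storage access and storage replication networks as two VLANs on a dedicated
storage bond. All interface names, VLAN IDs, and addresses are placeholders;
in a MOSK deployment, this configuration is normally generated from L2
templates rather than written by hand.
```yaml
# Hypothetical netplan fragment for a Ceph node: the storage access and
# storage replication networks as VLANs on a dedicated storage bond.
# Interface names, VLAN IDs, and addresses are placeholders.
network:
  version: 2
  ethernets:
    ens5f0: {}
    ens5f1: {}
  bonds:
    bond1:                        # bond dedicated to storage traffic
      interfaces: [ens5f0, ens5f1]
      parameters:
        mode: 802.3ad
  vlans:
    stor-frontend:                # storage access (Ceph public) network
      id: 1101
      link: bond1
      addresses: [10.12.0.11/24]
    stor-backend:                 # storage replication (Ceph cluster) network
      id: 1102
      link: bond1
      addresses: [10.12.1.11/24]
```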
In addition to the networks described above, MOSK clusters use the following
networks for OpenStack:
Provider and floating networks
Used by: Networking service. Example interface name: pr-floating.
Typically, a routable network used to provide external access to OpenStack
instances (a floating network). Can also be used by OpenStack services such
as Ironic, Manila, and others, to connect their management resources.
Overlay networks (virtual networks)
Used by: Networking service. Example interface name: neutron-tunnel.
The network used to provide isolated, secure tenant networks with the help
of a tunneling mechanism (VLAN/GRE/VXLAN). If VXLAN or GRE encapsulation
takes place, IP address assignment is required on interfaces at the node
level.
Live migration network
Used by: Compute service. Example interface name: lm-vlan.
The network used by the OpenStack compute service (Nova) to transfer data
during live migration. Depending on the cloud needs, it can be placed on a
dedicated physical network so as not to affect other networks during live
migration. IP address assignment is required on interfaces at the node level.
How the logical networks described above map to the physical networks and
interfaces on the nodes depends on the cloud size and configuration. We
recommend placing the OpenStack networks on a dedicated physical interface
(bond) that is not shared with the storage and Kubernetes management
networks, to minimize their influence on each other.
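To make this recommendation more concrete, here is a hedged netplan-style
sketch of a compute node where the OpenStack networks sit on their own bond,
separate from the storage and Kubernetes management interfaces. The interface
names follow the examples above (pr-floating, neutron-tunnel, lm-vlan), while
the physical interfaces, VLAN IDs, and addresses are placeholders; in
practice, this layout is produced by the cluster L2 templates.
```yaml
# Hypothetical netplan fragment for a compute node: OpenStack networks
# on a dedicated bond (bond2), kept separate from the storage and
# Kubernetes management interfaces. VLAN IDs and addresses are placeholders.
network:
  version: 2
  ethernets:
    ens6f0: {}
    ens6f1: {}
  bonds:
    bond2:
      interfaces: [ens6f0, ens6f1]
      parameters:
        mode: 802.3ad
  vlans:
    neutron-tunnel:               # overlay (VXLAN/GRE) endpoint, requires an IP
      id: 1201
      link: bond2
      addresses: [10.13.0.21/24]
    lm-vlan:                      # live migration network, requires an IP
      id: 1202
      link: bond2
      addresses: [10.13.1.21/24]
    pr-vlan:                      # VLAN carrying provider/floating traffic
      id: 1200
      link: bond2
  bridges:
    pr-floating:                  # provider/floating bridge, no IP required
      interfaces: [pr-vlan]
```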
The following tables describe networking requirements for the MOSK,
management, and Ceph cluster types.
Note
Since Container Cloud 2.30.0 (Cluster releases 21.0.0 and 20.0.0), the
management cluster supports a full L3 networking topology in the Technology
Preview scope. This enables deployment of management cluster nodes in
dedicated racks without the need for Layer 2 extension between them.
Note
When BGP mode is used for announcement of IP addresses of load-balanced
services and for the cluster API VIP, three BGP sessions are created for every
node of a management cluster:
Two sessions are created by MetalLB for public and provisioning services
One session is created by the BIRD BGP daemon for the cluster API VIP
BGP allows only one session to be established per pair of endpoints. For
details, see MetalLB documentation: Issues with Calico.
To avoid this conflict, MOSK allows configuring three networks for a
management cluster: provisioning, management, and external. In this case,
configure MetalLB to use the provisioning and external networks, and BIRD to
use the management network.
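The following sketch only illustrates this split in terms of upstream MetalLB
resources: one address pool for the provisioning network and another,
BGP-advertised pool for the external network, leaving the management network
to BIRD for the cluster API VIP. Only the external pool's BGP advertisement is
shown; the provisioning pool would be advertised towards its own peer in the
same way. In an actual MOSK deployment, MetalLB is configured through the
product's own configuration objects rather than by applying such manifests
directly, and all names, addresses, and AS numbers below are placeholders.
```yaml
# Hypothetical upstream-MetalLB view of the three-network layout:
# separate pools for provisioning and external services, with BGP
# advertisement towards a ToR peer on the external network.
# All names, addresses, and AS numbers are placeholders.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: provisioning-services
  namespace: metallb-system
spec:
  addresses:
    - 10.0.10.10-10.0.10.17       # 8 addresses for provisioning services
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external-services
  namespace: metallb-system
spec:
  addresses:
    - 203.0.113.10-203.0.113.25   # 16 addresses for cluster services
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: tor-external
  namespace: metallb-system
spec:
  myASN: 65101                    # placeholder local AS number
  peerASN: 65100                  # placeholder ToR AS number
  peerAddress: 203.0.113.1        # placeholder ToR address on the external network
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: external-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
    - external-services
  peers:
    - tor-external
```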
If you configure BGP announcement of the load-balancer IP address for a
management cluster API and for load-balanced services of the cluster using
three networks, ensure that your management cluster networking meets the
following requirements:
PXE/provisioning network
IP addresses: 8 IP addresses (MetalLB for provisioning services); optionally,
16 IP addresses (DHCP range for directly connected servers)
External routing: Not required
Multiple segments/stretch segment: Multiple
Internal routing: Routing to separate DHCP segments, if in use
Management network
IP addresses: 8 IP addresses (management cluster hosts, API VIP)
External routing: Not required
Multiple segments/stretch segment: Multiple
Internal routing: Routing to API endpoints of MOSK clusters for LCM; routing
to MetalLB ranges of MOSK clusters for StackLight authentication
External network
IP addresses: 8 IP addresses (management cluster hosts); 16 IP addresses
(MetalLB for management cluster services)
External routing: Required, may use a proxy server
Multiple segments/stretch segment: Multiple
Internal routing: Default route from the management cluster hosts
If you configure ARP announcement of the load-balancer IP address for a
management cluster API and for load-balanced services of the cluster, ensure
that your management cluster networking meets the following requirements:
If you configure BGP announcement of the load-balancer IP address for a
MOSK cluster API and for load-balanced services
of the cluster, ensure that your MOSK cluster networking
meets the following requirements:
Routing to the provisioning network of the management cluster
Routing to the IP subnet of the management network
Routing to all LCM IP subnets of the same MOSK
cluster
Routing to the IP subnet of the MOSK management API
Routing to all IP subnets of Kubernetes workloads
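As a hedged illustration of the routing requirements above (for example,
reaching the IP subnet of the management network), the netplan-style sketch
below adds a static route from a MOSK node towards the management subnet of
the management cluster through the LCM network gateway. The interface names,
subnets, and gateway address are placeholders; in practice, such routes are
normally defined in the cluster L2 templates rather than edited on the nodes.
```yaml
# Hypothetical netplan fragment for a MOSK node: a static route to the
# management subnet of the management cluster via the LCM network gateway.
# Interface names, subnets, and the gateway address are placeholders.
network:
  version: 2
  ethernets:
    ens3f0: {}
  vlans:
    lcm-vlan:
      id: 1110
      link: ens3f0
      addresses: [10.10.11.105/24]  # node address on its LCM segment
      routes:
        - to: 10.10.0.0/24          # management subnet of the management cluster
          via: 10.10.11.1           # gateway on the LCM segment
```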
If you configure ARP announcement of the load-balancer IP address for a
MOSK cluster API and for load-balanced services of the
cluster, ensure that your MOSK cluster networking meets
the following requirements:
Routing to all IP subnets of the Storage access network
Routing to all IP subnets of the Storage replication network
The bridge interface named k8s-pods is mandatory if you need to separate the
Kubernetes workloads traffic. You can configure this bridge over a VLAN or
directly over the bonded or single interface.
Note
When selecting externally routable subnets, ensure that the subnet ranges do
not overlap with the internal subnet ranges. Otherwise, internal resources of
users will not be available from the MOSK cluster.