Network types¶
This section describes the types of Layer 3 networks used for Kubernetes and Mirantis OpenStack for Kubernetes (MOSK) clusters, along with the requirements for each network type.
Note
Only IPv4 is currently supported by Container Cloud and IPAM for infrastructure networks. Both IPv4 and IPv6 are supported for OpenStack workloads.
The following diagram provides an overview of the underlay networks in a MOSK environment:
L3 networks for Kubernetes¶
A MOSK deployment typically requires the following types of networks:
- Provisioning network
Used for provisioning of bare metal servers.
- Management network
Used for management of the Container Cloud infrastructure and for communication between containers in Kubernetes.
- LCM/API network
Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for Layer 2 extension between them. For the configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in the Deployment Guide.
If BGP announcement is configured for the MOSK cluster API LB address, the LCM/API network is not required; the cluster API LB address is announced using the LCM network.
If you configure ARP announcement of the load-balancer IP address for the MOSK cluster API, the LCM/API network must be configured on the Kubernetes manager nodes of the cluster. This network contains the Kubernetes API endpoint with the VRRP virtual IP address.
- LCM network
Enables communication between the MKE cluster nodes. Multiple VLAN segments and IP subnets can be created for a multi-rack architecture. Each server must be connected to one of the LCM segments and have an IP address from the corresponding subnet (see the example subnet definition after this list).
- External network
Used to expose the OpenStack, StackLight, and other services of the MOSK cluster.
- Kubernetes workloads network
Used for communication between containers in Kubernetes.
- Storage access network (Ceph)
Used for accessing the Ceph storage. In Ceph terms, this is the public network 0. We recommend placing it on a dedicated hardware interface.
- Storage replication network (Ceph)
Used for Ceph storage replication. In Ceph terms, this is the cluster network 0. To ensure low latency and fast access, place this network on a dedicated hardware interface.
- 0: For details about Ceph networks, see Ceph Network Configuration Reference.
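For illustration, the IP subnets of these networks are described in Container Cloud through Subnet resources managed by IPAM. The following sketch shows how an LCM subnet for a single rack might be defined; the resource names, labels, CIDR, and ranges are placeholders, and the exact set of labels and fields depends on your Container Cloud release, so follow the product configuration procedures for the actual definitions.

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: lcm-rack1            # placeholder name
  namespace: mosk-namespace  # placeholder project namespace
  labels:
    kaas.mirantis.com/provider: baremetal
    cluster.sigs.k8s.io/cluster-name: mosk-cluster  # placeholder cluster name
    ipam/SVC-k8s-lcm: "1"    # marks the subnet as an LCM subnet; label names may vary by release
spec:
  cidr: 10.10.10.0/24        # placeholder CIDR of one LCM segment
  gateway: 10.10.10.1        # placeholder gateway
  includeRanges:
    - 10.10.10.100-10.10.10.200  # placeholder range of addresses assigned to nodes
  nameservers:
    - 10.10.0.15             # placeholder DNS server
```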
L3 networks for MOSK¶
The MOSK deployment additionally requires the following networks.
| Service name | Network | Description | VLAN name |
|---|---|---|---|
| Networking | Provider networks | Typically, a routable network used to provide external access to OpenStack instances (a floating network). Can also be used by OpenStack services, such as Ironic and Manila, to connect their management resources. | |
| Networking | Overlay networks (virtual networks) | The network used to provide isolated, secure tenant networks using a tunneling mechanism (VLAN/GRE/VXLAN). For VXLAN and GRE encapsulation, IP address assignment on interfaces at the node level is required. | |
| Compute | Live migration network | The network used by the OpenStack compute service (Nova) to transfer data during live migration. Depending on the cloud needs, it can be placed on a dedicated physical network so that live migration traffic does not affect other networks. IP address assignment on interfaces at the node level is required. | |
How the logical networks described above map to physical networks and interfaces on the nodes depends on the cloud size and configuration. We recommend placing the OpenStack networks on a dedicated physical interface (bond) that is not shared with the storage and Kubernetes management networks, to minimize their influence on each other.
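The following netplan-style sketch only illustrates this recommendation: a dedicated bond carries the overlay and live migration VLANs, separate from the interfaces used for LCM, management, and storage traffic. All interface names, VLAN IDs, and addresses are placeholders; in Container Cloud deployments, host networking is declared through L2 templates rather than edited on the nodes directly.

```yaml
network:
  version: 2
  bonds:
    bond-tenant:                    # dedicated bond for the OpenStack data plane (placeholder name)
      interfaces: [ens5f0, ens5f1]  # placeholder NIC names
      parameters:
        mode: 802.3ad
  vlans:
    vlan-livemigration:             # live migration network; requires an IP address on the node
      id: 738                       # placeholder VLAN ID
      link: bond-tenant
      addresses: [10.20.30.11/24]   # placeholder address
    vlan-overlay:                   # VXLAN/GRE tunnel endpoint network; requires an IP address on the node
      id: 740                       # placeholder VLAN ID
      link: bond-tenant
      addresses: [10.20.40.11/24]   # placeholder address
  # Provider VLANs are typically trunked on bond-tenant and attached to
  # Neutron bridges without node-level IP addresses.
```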
L3 network requirements¶
The following tables describe the networking requirements for the MOSK cluster, the Container Cloud management cluster, and the Ceph cluster.
| Network type | Provisioning | Management |
|---|---|---|
| Suggested interface name | N/A | |
| Minimum number of VLANs | 1 | 1 |
| Minimum number of IP subnets | 3 | 2 |
| Minimum recommended IP subnet size | | |
| External routing | Not required | Required, may use proxy server |
| Multiple segments/stretch segment | Stretch segment for the management cluster due to MetalLB Layer 2 limitations 1 | Stretch segment due to VRRP and MetalLB Layer 2 limitations |
| Internal routing | Routing to separate DHCP segments, if in use | |
- 1: Multiple VLAN segments with IP subnets can be added to the cluster configuration for separate DHCP domains.
Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for Layer 2 extension between them. For the configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in the Deployment Guide.
If you configure BGP announcement of the load-balancer IP address for a MOSK cluster API and for load-balanced services of the cluster:
| Network type | Provisioning | LCM | External | Kubernetes workloads |
|---|---|---|---|---|
| Minimum number of VLANs | 1 (optional) | 1 | 1 | 1 |
| Suggested interface name | N/A | | | |
| Minimum number of IP subnets | 1 (optional) | 1 | 2 | 1 |
| Minimum recommended IP subnet size | 16 IPs (DHCP range) | | | 1 IP per cluster node |
| Stretch or multiple segments | Multiple | Multiple | Multiple. For details, see Configure the MetalLB speaker node selector. | Multiple |
| External routing | Not required | Not required | Required, default route | Not required |
| Internal routing | Routing to the provisioning network of the management cluster | | Routing to the IP subnet of the Container Cloud Management API | Routing to all IP subnets of Kubernetes workloads |
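In the BGP announcement mode summarized above, the load-balanced addresses are announced over BGP sessions to the rack switches. The following generic MetalLB sketch shows the kind of objects involved; the ASNs, peer address, and address pool are placeholders, and the linked MOSK procedures remain the authoritative configuration reference.

```yaml
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: rack1-tor            # placeholder name of the top-of-rack peer
  namespace: metallb-system
spec:
  myASN: 65010               # placeholder local ASN
  peerASN: 65000             # placeholder switch ASN
  peerAddress: 10.0.11.1     # placeholder switch address
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: services-pool        # placeholder pool name
  namespace: metallb-system
spec:
  addresses:
    - 10.100.100.0/24        # placeholder externally routable range
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: services-bgp
  namespace: metallb-system
spec:
  ipAddressPools:
    - services-pool          # announce the pool above over BGP
```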
If you configure ARP announcement of the load-balancer IP address for a MOSK cluster API and for load-balanced services of the cluster:
| Network type | Provisioning | LCM/API | LCM | External | Kubernetes workloads |
|---|---|---|---|---|---|
| Minimum number of VLANs | 1 (optional) | 1 | 1 (optional) | 1 | 1 |
| Suggested interface name | N/A | | | | |
| Minimum number of IP subnets | 1 (optional) | 1 | 1 (optional) | 2 | 1 |
| Minimum recommended IP subnet size | 16 IPs (DHCP range) | | 1 IP per MOSK node (Kubernetes worker) | | 1 IP per cluster node |
| Stretch or multiple segments | Multiple | Stretch due to VRRP limitations | Multiple | Stretch, connected to all MOSK controller nodes. For details, see Configure the MetalLB speaker node selector. | Multiple |
| External routing | Not required | Not required | Not required | Required, default route | Not required |
| Internal routing | Routing to the provisioning network of the management cluster | | | Routing to the IP subnet of the Container Cloud Management API | Routing to all IP subnets of Kubernetes workloads |
- 2: The bridge interface with this name is mandatory if you need to separate Kubernetes workloads traffic. You can configure this bridge over a VLAN or directly over a bonded or single interface.
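In the ARP announcement mode summarized above, the load-balanced addresses are announced through ARP on the External network, which is why that network must be a stretch segment connected to all MOSK controller nodes. A minimal generic MetalLB Layer 2 sketch follows; the pool name and the address range are placeholders.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external-pool        # placeholder pool name
  namespace: metallb-system
spec:
  addresses:
    - 172.16.50.100-172.16.50.150   # placeholder range from the External network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - external-pool          # answer ARP for addresses from the pool above
```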
| Network type | Storage access | Storage replication |
|---|---|---|
| Minimum number of VLANs | 1 | 1 |
| Suggested interface name | | |
| Minimum number of IP subnets | 1 | 1 |
| Minimum recommended IP subnet size | 1 IP per cluster node | 1 IP per cluster node |
| Stretch or multiple segments | Multiple | Multiple |
| External routing | Not required | Not required |
| Internal routing | Routing to all IP subnets of the Storage access network | Routing to all IP subnets of the Storage replication network |
Note
When selecting externally routable subnets, ensure that the subnet ranges do not overlap with the internal subnet ranges. Otherwise, internal user resources will not be reachable from the MOSK cluster.
- 3: For details about Ceph networks, see Ceph Network Configuration Reference.
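For orientation, in upstream Ceph terms the Storage access and Storage replication networks correspond to the public_network and cluster_network options. The snippet below is a generic ceph.conf sketch with placeholder CIDRs; in MOSK, these networks are defined through the cluster and Ceph configuration resources rather than by editing ceph.conf on the nodes.

```ini
# Generic Ceph configuration sketch (placeholder CIDRs)
[global]
public_network  = 10.30.10.0/24   # Storage access network: client-facing and OSD front-side traffic
cluster_network = 10.30.20.0/24   # Storage replication network: OSD replication and recovery traffic
```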