Underlay networking: routing configuration

This section describes requirements for the configuration of the underlay network for a MOSK cluster in the multi-rack reference configuration. The infrastructure operator must configure the underlay network according to these guidelines. Mirantis Container Cloud will not configure routing on the network devices.

Provisioning network

In the multi-rack reference architecture, every server rack has its own layer-2 segment (VLAN) for network bootstrap and installation of physical servers.

You need to configure top-of-rack (ToR) switches in each rack with the default gateway for the provisioning network VLAN. This gateway must also act as a DHCP relay agent at the boundary of the VLAN. The relay agent handles broadcast DHCP requests coming from the bare metal servers in the rack and forwards them as unicast packets across the data center L3 fabric to the provisioning network of the Container Cloud management cluster.

Therefore, each ToR gateway must have an IP route to the IP subnet of the provisioning network of the management cluster. The gateway in the provisioning network of the management cluster, in turn, must have routes back to the provisioning subnets of all racks.
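
For illustration, the corresponding configuration on a ToR switch may look like the following sketch. It uses Cisco IOS-like syntax, and all VLAN IDs, subnets, and next-hop addresses are hypothetical examples: 172.16.10.0/24 is the provisioning subnet of the rack, 10.0.10.0/24 is the provisioning subnet of the management cluster with the DHCP server at 10.0.10.15, and 10.255.0.1 is the next hop toward the L3 fabric. Adapt the commands to your switch vendor and addressing plan.

  ! Default gateway and DHCP relay agent for the provisioning VLAN of the rack
  interface Vlan110
   description provisioning-rack1
   ip address 172.16.10.1 255.255.255.0
   ! Forward broadcast DHCP requests as unicast to the DHCP server
   ! in the provisioning network of the management cluster
   ip helper-address 10.0.10.15
  !
  ! Route to the provisioning subnet of the management cluster
  ip route 10.0.10.0 255.255.255.0 10.255.0.1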

The hosts of the management cluster must have routes to all IP subnets in the provisioning network through the gateway in the provisioning network of the management cluster.

All hosts in the management cluster must have IP addresses from the same IP subnet of the provisioning network. Even if the hosts of the management cluster are installed in different racks, they must share a single provisioning VLAN segment.

Management network

All hosts of a management cluster must have IP addresses from the same subnet of the management network. Even if hosts of a management cluster are installed in different racks, they must share a single management VLAN segment.

The gateway in this network is used as the default route on the nodes in a Container Cloud management cluster. This gateway must provide access to the Internet, either directly or through a proxy server. If the Internet is accessible through a proxy server, you must configure Container Cloud bootstrap to use it as well. For details, see Container Cloud Deployment Guide: Deploy a management cluster using CLI.

This network connects a Container Cloud management cluster to Kubernetes API endpoints of MOSK clusters. It also connects LCM agents of MOSK nodes to the Kubernetes API endpoint of the management cluster.

The gateway in the management network must have routes to the API/LCM subnets of all MOSK clusters.
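
For example, the routing configuration on the gateway in the management network may look like the following Cisco IOS-like sketch. The subnets and next hop are hypothetical: 10.0.20.0/24 is the management subnet of the management cluster, 172.16.35.0/24 and 172.16.45.0/24 are the API/LCM subnets of two MOSK clusters, and 10.255.0.2 is the next hop toward those clusters. In a real L3 fabric, such routes are typically distributed by a dynamic routing protocol rather than configured statically.

  interface Vlan120
   description management-cluster-mgmt
   ip address 10.0.20.1 255.255.255.0
  !
  ! Routes to the API/LCM subnets of the MOSK clusters
  ip route 172.16.35.0 255.255.255.0 10.255.0.2
  ip route 172.16.45.0 255.255.255.0 10.255.0.2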

LCM network

This network may include multiple VLANs, typically, one VLAN per rack. Each VLAN may have one or more IP subnets with gateways configured on ToR switches.

Each ToR gateway must provide routes to all other IP subnets in all other VLANs in the LCM network to enable communication between nodes in the cluster.
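
For example, on the ToR switch of one rack, the LCM gateway and routes may look like the following Cisco IOS-like sketch, which reuses the hypothetical addressing plan from the previous examples: 172.16.31.0/24 is the LCM subnet of this rack, 172.16.32.0/24 and 172.16.33.0/24 are the LCM subnets of other racks, 10.0.20.0/24 is the management subnet of the management cluster, and 10.255.1.1 is the next hop into the L3 fabric. In practice, these prefixes are usually exchanged through a routing protocol such as BGP or OSPF instead of static routes.

  interface Vlan131
   description lcm-rack1
   ip address 172.16.31.1 255.255.255.0
  !
  ! Routes to the LCM subnets of the other racks
  ip route 172.16.32.0 255.255.255.0 10.255.1.1
  ip route 172.16.33.0 255.255.255.0 10.255.1.1
  ! Route to the management subnet of the management cluster
  ip route 10.0.20.0 255.255.255.0 10.255.1.1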

Note

Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 layer extension between them. For configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in Deployment Guide.

If you configure BGP announcement of the load-balancer IP address for a MOSK cluster API:

  • All nodes of a MOSK cluster must be connected to the LCM network. Each host connected to this network must have routes to all IP subnets in the LCM network and to the management subnet of the management cluster, through the ToR gateway for the rack of this host.

  • It is not required to configure a separate API/LCM network. Announcement of the load-balancer IP address is done using the LCM network; see the sketch after this list.
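
The following Cisco IOS-like sketch shows what the ToR side of such a BGP setup may look like. The AS numbers and peer addresses are hypothetical: the ToR uses AS 65100, and the MOSK nodes that announce the cluster API load-balancer address, for example the manager nodes located in this rack, peer from their LCM addresses in 172.16.31.0/24 using AS 65101. The actual parameters must match the BGP configuration that you define for the MOSK cluster.

  router bgp 65100
   ! Peer with the MOSK nodes in the LCM subnet of this rack; the nodes
   ! announce the cluster API load-balancer address as a /32 prefix
   neighbor 172.16.31.11 remote-as 65101
   neighbor 172.16.31.12 remote-as 65101
   neighbor 172.16.31.13 remote-as 65101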

If you configure ARP announcement of the load-balancer IP address for a MOSK cluster API:

  • All nodes of a MOSK cluster, excluding the manager nodes, must be connected to the LCM network. Each host connected to this network must have routes to all IP subnets in the LCM network, including the API/LCM subnet of this MOSK cluster, and to the management subnet of the management cluster, through the ToR gateway for the rack of this host.

  • It is required to configure a separate API/LCM network. All manager nodes of a MOSK cluster must be connected to the API/LCM network. IP address announcement for load balancing is done using the API/LCM network.

API/LCM network

Note

Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 layer extension between them. For configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in Deployment Guide.

If BGP announcement is configured for the MOSK cluster API LB address, the API/LCM network is not required. Announcement of the cluster API LB address is done using the LCM network.

If you configure ARP announcement of the load-balancer IP address for the MOSK cluster API, the API/LCM network must be configured on the Kubernetes manager nodes of the cluster. This network contains the Kubernetes API endpoint with the VRRP virtual IP address.

This network consists of a single VLAN shared between all MOSK manager nodes in a MOSK cluster, even if the nodes are spread across multiple racks. All manager nodes of a MOSK cluster must be connected to this network and have IP addresses from the same subnet in this network.

The gateway in the API/LCM network for a MOSK cluster must have a route to the management subnet of the management cluster. This is required to ensure symmetric traffic flow between the management and MOSK clusters.

The gateway in this network must also have routes to all IP subnets in the LCM network of this MOSK cluster.

The load-balancer IP address for the cluster API must be allocated from the CIDR of the API/LCM subnet.
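
As an illustration of these requirements, consider a hypothetical API/LCM subnet 172.16.35.0/24 with its gateway on a ToR switch, LCM subnets 172.16.31.0/24 and 172.16.32.0/24, the management subnet 10.0.20.0/24 of the management cluster, and 10.255.1.1 as the fabric next hop. The cluster API load-balancer address, for example 172.16.35.100, must fall within 172.16.35.0/24. The Cisco IOS-like syntax is for illustration only.

  interface Vlan135
   description api-lcm
   ip address 172.16.35.1 255.255.255.0
  !
  ! Route to the management subnet of the management cluster
  ip route 10.0.20.0 255.255.255.0 10.255.1.1
  ! Routes to the LCM subnets of the MOSK cluster
  ip route 172.16.31.0 255.255.255.0 10.255.1.1
  ip route 172.16.32.0 255.255.255.0 10.255.1.1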

External network

Note

Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 layer extension between them. For configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in Deployment Guide.

If you configure BGP announcement for IP addresses of load-balanced services of a MOSK cluster, the external network can consist of multiple VLAN segments connected to all nodes of a MOSK cluster where MetalLB speaker components are configured to announce IP addresses for Kubernetes load-balanced services. Mirantis recommends that you use OpenStack controller nodes for this purpose.

If you configure ARP announcement for IP addresses of load-balanced services of a MOSK cluster, the external network must consist of a single VLAN stretched to the ToR switches of all the racks where MOSK nodes connected to the external network are located. Those are the nodes where MetalLB speaker components are configured to announce IP addresses for Kubernetes load-balanced services. Mirantis recommends that you use OpenStack controller nodes for this purpose.

The IP gateway in this network is used as the default route on all nodes in the MOSK cluster that are connected to this network. This allows external users to connect to the OpenStack endpoints exposed as Kubernetes load-balanced services.

Dedicated IP ranges from this network must be configured as address pools for the MetalLB service. MetalLB allocates addresses from these address pools to Kubernetes load-balanced services.
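
For the ARP announcement case, a minimal sketch of the external gateway on a ToR switch is shown below, assuming a hypothetical external subnet 203.0.113.0/24 with a MetalLB address pool 203.0.113.128-203.0.113.200 carved out of it. Because the pool is part of the directly connected subnet, no additional routes are required for the load-balanced addresses. With BGP announcement, the ToR switches instead learn the load-balanced service addresses as /32 prefixes from the MetalLB speakers over BGP sessions, similar to the LCM network example above.

  interface Vlan200
   description external
   ! Default gateway for the MOSK nodes connected to the external network;
   ! the MetalLB address pool 203.0.113.128-203.0.113.200 is part of this subnet
   ip address 203.0.113.1 255.255.255.0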

Ceph public network

This network may include multiple VLANs and IP subnets, typically, one VLAN and IP subnet per rack. All IP subnets in this network must be connected by IP routes on the ToR switches.

Typically, every node in a MOSK cluster is connected to this network and has routes to all IP subnets of this network through the IP gateway of its rack.

This network is not connected to the external networks.
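
The routing pattern for this network, as well as for the Ceph cluster and Kubernetes workloads networks described below, follows the same per-rack routed design as the LCM network, except that no routes to external networks are required. A hypothetical Cisco IOS-like example for one rack, with 172.16.51.0/24 as the Ceph public subnet of this rack, 172.16.52.0/24 as the subnet of another rack, and 10.255.1.1 as the fabric next hop:

  interface Vlan151
   description ceph-public-rack1
   ip address 172.16.51.1 255.255.255.0
  !
  ! Route to the Ceph public subnet of another rack; no default route
  ! toward external networks is configured in this network
  ip route 172.16.52.0 255.255.255.0 10.255.1.1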

Ceph cluster network

This network may include multiple VLANs and IP subnets, typically, one VLAN and IP subnet per rack. All IP subnets in this network must be connected by IP routes on the ToR switches.

Every Ceph OSD node in a MOSK cluster must be connected to this network and have routes to all IP subnets from this network through its rack IP gateway.

This network is not connected to the external networks.

Kubernetes workloads network

This network may include multiple VLANs and IP subnets, typically, one VLAN and IP subnet per rack. All IP subnets in this network must be connected by IP routes on the ToR switches.

All nodes in a MOSK cluster must be connected to this network and have routes to all IP subnets from this network through its rack IP gateway.

This network is not connected to the external networks.