MOSK cluster networking

Mirantis OpenStack for Kubernetes (MOSK) clusters managed by Mirantis Container Cloud use the following networks to serve different types of traffic:

MOSK network types

Provisioning (PXE) network

Facilitates the iPXE boot of all bare metal machines in a MOSK cluster and the provisioning of the operating system to those machines.

This network is used only during host provisioning. It must not be configured on an operational MOSK node.

Life-cycle management (LCM) network

Connects LCM Agents running on the hosts to the Container Cloud LCM API. The LCM API is provided by the management cluster. The LCM network is also used for communication between kubelet and the Kubernetes API server inside a Kubernetes cluster. The MKE components use this network for communication inside a swarm cluster.

The LCM subnet(s) provide IP addresses that are statically allocated by the IPAM service to bare metal hosts. This network must be connected to the Kubernetes API endpoint of the management cluster through an IP router. LCM Agents running on MOSK clusters connect to the management cluster API through this router. LCM subnets may differ between MOSK clusters as long as this connection requirement is satisfied.

You can use more than one LCM network segment in a MOSK cluster. In this case, LCM and API traffic is served by separate L2 segments with interconnected L3 subnets.

All IP subnets in the LCM networks must be connected to each other by IP routes. These routes must be configured on the hosts through L2 templates.
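As an illustration of the routing requirement above (not a Container Cloud API example), the static routes that a host in one LCM subnet needs in order to reach the other LCM subnets can be sketched with the standard `ipaddress` module. All subnet and gateway values below are hypothetical:

```python
import ipaddress

# Hypothetical LCM subnets, one per L2 segment. Hosts in one segment
# reach the other segments through their local gateway.
lcm_subnets = ["10.10.0.0/24", "10.10.1.0/24", "10.10.2.0/24"]

def static_routes(local_subnet: str, gateway: str) -> list[str]:
    """Return the routes a host in local_subnet needs for the remote LCM subnets."""
    local = ipaddress.ip_network(local_subnet)
    return [
        f"{net} via {gateway}"
        for s in lcm_subnets
        if (net := ipaddress.ip_network(s)) != local
    ]

# A host in the first segment needs routes to the other two segments:
print(static_routes("10.10.0.0/24", "10.10.0.1"))
```

In a real deployment, routes of this shape are rendered on the hosts by the L2 templates rather than computed by hand.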

All IP subnets in the LCM network must be connected to the Kubernetes API endpoints of the management cluster through an IP router.

You can manually select the load balancer IP address for external access to the cluster API and specify it in the Cluster object configuration. Alternatively, you can allocate a dedicated IP range for a virtual IP of the cluster API load balancer by adding a Subnet object with a special annotation. Mirantis recommends that this subnet stays unique per MOSK cluster. For details, see Create subnets.

Note

When using the ARP announcement of the IP address for the cluster API load balancer, the following limitations apply:

  • Only one of the LCM networks can contain the API endpoint. This network is called API/LCM throughout this documentation. It consists of a VLAN segment stretched between all Kubernetes master nodes in the cluster and the IP subnet that provides IP addresses allocated to these nodes.

  • The load balancer IP address must be allocated from the CIDR of the LCM subnet.

When using the BGP announcement of the IP address for the cluster API load balancer, which is available as Technology Preview since MOSK 23.2.2, no segment stretching is required between Kubernetes master nodes. In this scenario, the load balancer IP address also does not need to belong to the LCM subnet CIDR.
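The ARP-specific constraint above, that the load balancer IP must come from the LCM subnet CIDR, can be checked with the standard `ipaddress` module. The addresses below are hypothetical examples:

```python
import ipaddress

def vip_in_subnet(vip: str, subnet_cidr: str) -> bool:
    """True if the API load balancer VIP belongs to the given subnet CIDR."""
    return ipaddress.ip_address(vip) in ipaddress.ip_network(subnet_cidr)

# Hypothetical API/LCM subnet and candidate VIPs:
print(vip_in_subnet("10.0.0.90", "10.0.0.0/24"))   # True: usable with ARP announcement
print(vip_in_subnet("172.16.5.1", "10.0.0.0/24"))  # False: only usable with BGP announcement
```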

Kubernetes workloads network

Serves as an underlay network for traffic between pods in the MOSK cluster. Do not share this network between clusters.

There may be more than one Kubernetes workloads network segment in the cluster. In this case, the segments must be connected through an IP router.

The Kubernetes workloads network does not need external access.

The Kubernetes workloads subnet(s) provide IP addresses that are statically allocated by the IPAM service to all nodes and that Calico uses for cross-node communication inside the cluster. By default, Calico uses a VXLAN overlay for cross-node communication.
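Because Calico uses a VXLAN overlay by default, the MTU configured for pod interfaces must leave room for the encapsulation header; the Calico documentation recommends reserving 50 bytes for VXLAN over IPv4. A minimal illustration:

```python
VXLAN_IPV4_OVERHEAD = 50  # bytes added by VXLAN encapsulation over IPv4

def pod_mtu(underlay_mtu: int) -> int:
    """MTU to configure for pod interfaces on a given underlay MTU."""
    return underlay_mtu - VXLAN_IPV4_OVERHEAD

print(pod_mtu(1500))  # 1450 on a standard Ethernet underlay
print(pod_mtu(9000))  # 8950 on a jumbo-frame underlay
```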

Kubernetes external network

Provides access to the OpenStack endpoints in a MOSK cluster.

When using the ARP announcement of the external endpoints of load-balanced services, the network must contain a VLAN segment extended to all MOSK nodes connected to this network.

When using the BGP announcement of the external endpoints of load-balanced services, which is available as Technology Preview since MOSK 23.2.2, a single VLAN segment extended to all MOSK nodes connected to this network is not required.

A typical MOSK cluster only has one external network.

The external network must include at least two IP address ranges defined by separate Subnet objects in Container Cloud API:

  • MOSK services address range

    Provides IP addresses for externally available load-balanced services, including OpenStack API endpoints.

  • External address range

    Provides IP addresses to be assigned to network interfaces on all cluster nodes that are connected to this network. MetalLB speakers must run on the same nodes. For details, see Configure the MetalLB speaker node selector.

    The default route on the MOSK nodes that are connected to the external network must point to the default gateway in the external network so that external traffic can return to the originating client.
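A sanity check for the two address ranges described above, both of which must fit inside the external network CIDR without overlapping each other, can be sketched as follows (all CIDR values are hypothetical):

```python
import ipaddress

def ranges_valid(external_cidr: str, services_cidr: str, hosts_cidr: str) -> bool:
    """True if both ranges lie inside the external network and do not overlap."""
    external = ipaddress.ip_network(external_cidr)
    services = ipaddress.ip_network(services_cidr)  # MetalLB services range
    hosts = ipaddress.ip_network(hosts_cidr)        # node interface range
    return (
        services.subnet_of(external)
        and hosts.subnet_of(external)
        and not services.overlaps(hosts)
    )

# Hypothetical external network split into a services range and a hosts range:
print(ranges_valid("192.0.2.0/24", "192.0.2.128/26", "192.0.2.0/25"))  # True
print(ranges_valid("192.0.2.0/24", "192.0.2.0/25", "192.0.2.64/26"))   # False: ranges overlap
```

In Container Cloud, these ranges are defined by separate Subnet objects, as described above; the check here only illustrates the constraint.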

Storage access network

Carries storage access traffic to and from Ceph OSD services.

A MOSK cluster may have more than one VLAN segment and IP subnet in the storage access network. All IP subnets of this network in a single cluster must be connected by an IP router.

The storage access network does not require external access unless you want to directly expose Ceph to the clients outside of a MOSK cluster.

Note

Direct access to Ceph by clients outside of a MOSK cluster is technically possible but not supported by Mirantis. Use it at your own risk.

The IP addresses from subnets in this network are statically allocated by the IPAM service to Ceph nodes. The Ceph OSD services bind to these addresses on their respective nodes.

This is a public network in Ceph terms. [1]

Storage replication network

Carries storage replication traffic between Ceph OSD services.

A MOSK cluster may have more than one VLAN segment and IP subnet in this network as long as the subnets are connected by an IP router.

This network does not require external access.

The IP addresses from subnets in this network are statically allocated by the IPAM service to Ceph nodes. The Ceph OSD services bind to these addresses on their respective nodes.

This is a cluster network in Ceph terms. [1]

Out-of-Band (OOB) network

Connects Baseboard Management Controllers (BMCs) of the bare metal hosts. Must not be accessible from a MOSK cluster.

[1] For more details about Ceph networks, see Ceph Network Configuration Reference.

The following diagram illustrates the networking schema of the Container Cloud deployment on bare metal with a MOSK cluster using ARP announcements:

(Diagram: network-multirack.png)

Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. The following diagram illustrates the networking schema of the Container Cloud deployment on bare metal with a MOSK cluster using BGP announcements:

(Diagram: network-multirack-bgp.png)