MOSK cluster networking

Mirantis OpenStack for Kubernetes (MOSK) clusters managed by Mirantis Container Cloud use the following networks to serve different types of traffic:

MOSK network types

Provisioning (PXE) network

Facilitates the iPXE boot of all bare metal machines in a MOSK cluster and the provisioning of the operating system to those machines.

This network is only used during provisioning of the host. It must not be configured on an operational MOSK node.

Life-cycle management (LCM) and API network

Connects LCM agents on the hosts to the Container Cloud API provided by the regional or management cluster. This network also carries the communication between kubelet and the Kubernetes API server inside a Kubernetes cluster, as well as the traffic of the MKE components inside a Swarm cluster.

You can use more than one LCM network segment in a MOSK cluster. In this case, LCM and API traffic is served by separate L2 segments with IP subnets interconnected through L3 routing.

All IP subnets in the LCM networks must be connected to each other by IP routes. These routes must be configured on the hosts through L2 templates.
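
For illustration only, the following sketch shows how such a route could be declared in an L2Template. The subnet names, addresses, and template functions used here are assumptions; verify them against the L2 templates documentation for your release:

  # A minimal sketch, assuming Container Cloud L2Template semantics.
  # Subnet names (lcm-rack1, lcm-rack2), the namespace, and the
  # template functions below are illustrative assumptions.
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: L2Template
  metadata:
    name: rack1-lcm
    namespace: mosk-namespace
  spec:
    l3Layout:
      - subnetName: lcm-rack1   # LCM subnet of this rack
        scope: namespace
      - subnetName: lcm-rack2   # remote LCM subnet, referenced for routing
        scope: namespace
    npTemplate: |
      version: 2
      ethernets:
        {{nic 0}}:
          addresses:
            - {{ip "0:lcm-rack1"}}
          routes:
            # static route to the remote LCM segment
            - to: {{cidr_from_subnet "lcm-rack2"}}
              via: {{gateway_from_subnet "lcm-rack1"}}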

All IP subnets in the LCM network must be connected to the Kubernetes API endpoints of the management or regional cluster through an IP router.

You can manually select the VIP address for the Kubernetes API endpoint from the LCM subnet and specify it in the Cluster object configuration. Alternatively, you can allocate a dedicated IP range for a virtual IP of the API endpoint by adding a Subnet object with a special annotation. For details, see Create subnets.
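
For illustration only, the following sketch shows a Subnet object that reserves a dedicated range for the API endpoint virtual IP. The ipam/SVC-LBhost label and all addresses are assumptions; see Create subnets for the exact label or annotation required by your release:

  # A hedged sketch of a Subnet reserving a dedicated API VIP range.
  # The ipam/SVC-LBhost label, namespace, and CIDR are assumptions.
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: mosk-api-lb
    namespace: mosk-namespace
    labels:
      ipam/SVC-LBhost: "1"        # assumed label marking the VIP range
  spec:
    cidr: 10.0.10.0/24
    includeRanges:
      - 10.0.10.90-10.0.10.90     # a single address reserved for the VIP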

Note

Due to current limitations of the API endpoint failover, only one of the LCM networks can contain the API endpoint. This network is called API/LCM throughout this documentation. It consists of a VLAN segment stretched between all Kubernetes manager nodes in the cluster and the IP subnet that provides IP addresses allocated to these nodes.

Kubernetes workloads network

Serves as an underlay network for traffic between pods in the managed cluster. Calico uses this network to build mesh interconnections between nodes in the cluster. This network should not be shared between clusters.

A cluster may contain more than one Kubernetes workloads network. In this case, the networks must be connected through an IP router.

The Kubernetes workloads network does not require external access.
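
For illustration only, a Subnet backing the Kubernetes workloads network could look as follows. The ipam/SVC-k8s-pods label, namespace, and CIDR are assumptions to verify for your release:

  # A sketch of a Subnet for the Kubernetes workloads (pods) underlay.
  # The ipam/SVC-k8s-pods label, namespace, and CIDR are assumptions.
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: mosk-k8s-pods
    namespace: mosk-namespace
    labels:
      ipam/SVC-k8s-pods: "1"      # assumed label selecting this subnet
  spec:
    cidr: 10.0.20.0/24            # node addresses for the Calico underlay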

Kubernetes external network

Provides access to the OpenStack endpoints of a MOSK cluster. Due to the limitations of MetalLB in layer 2 mode, this network must contain a VLAN segment extended to all MOSK controller nodes.

A typical MOSK cluster only has one external network.

The external network must include at least two IP address ranges defined by separate Subnet objects in the Container Cloud API, as shown in the sketch after this list:

  • MOSK services range (Technology Preview)

    Provides IP addresses for externally available load-balanced services, including OpenStack API endpoints. The IP addresses for MetalLB services are assigned from this range.

  • External range

    Provides IP addresses to be assigned to the network interfaces on the cluster nodes:

    • Before MOSK 22.2, on the OpenStack controller nodes

    • Since MOSK 22.2, on all nodes

    This is required for external traffic to return to the originating client. The default route on the MOSK nodes must be configured with the default gateway in the external network.
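
For illustration only, the two ranges could be declared as the following pair of Subnet objects. The ipam/SVC-MetalLB label, namespace, and all addresses are assumptions; verify them against the subnet documentation for your release:

  # A hedged sketch of the two Subnet objects backing the external
  # network. Labels, namespace, and CIDRs are illustrative assumptions.
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: mosk-metallb-services
    namespace: mosk-namespace
    labels:
      ipam/SVC-MetalLB: "1"          # assumed label for MetalLB ranges
  spec:
    cidr: 203.0.113.0/24
    includeRanges:
      - 203.0.113.100-203.0.113.150  # pool for load-balanced services
  ---
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: mosk-external
    namespace: mosk-namespace
  spec:
    cidr: 203.0.113.0/24
    gateway: 203.0.113.1             # default route for return traffic
    includeRanges:
      - 203.0.113.10-203.0.113.99    # node interface addresses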

Storage access network

Carries storage access traffic to and from Ceph OSD services.

A MOSK cluster may have more than one VLAN segment and IP subnet in the storage access network. All IP subnets of this network in a single cluster must be connected by an IP router.

The storage access network does not require external access unless you want to directly expose Ceph to clients outside of a MOSK cluster.

Note

Direct access to Ceph by clients outside of a MOSK cluster is technically possible but not supported by Mirantis. Use it at your own risk.

The IP addresses from subnets in this network are assigned to Ceph nodes. The Ceph OSD services bind to these addresses on their respective nodes.

This is the public network in Ceph terms [1].

Storage replication network

Carries storage replication traffic between Ceph OSD services.

A MOSK cluster may have more than one VLAN segment and IP subnet in this network as long as the subnets are connected by an IP router.

This network does not require external access.

The IP addresses from subnets in this network are assigned to Ceph nodes. The Ceph OSD services bind to these addresses on their respective nodes.

This is the cluster network in Ceph terms [1].
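
For illustration only, the storage access and storage replication networks could be backed by the following pair of Subnet objects. The ipam/SVC-ceph-public and ipam/SVC-ceph-cluster labels, namespace, and CIDRs are assumptions to verify for your release:

  # A sketch of the storage access (Ceph public) and storage replication
  # (Ceph cluster) subnets. Labels, namespace, and CIDRs are assumptions.
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: mosk-ceph-public
    namespace: mosk-namespace
    labels:
      ipam/SVC-ceph-public: "1"    # assumed label for the storage access network
  spec:
    cidr: 10.0.30.0/24
  ---
  apiVersion: ipam.mirantis.com/v1alpha1
  kind: Subnet
  metadata:
    name: mosk-ceph-cluster
    namespace: mosk-namespace
    labels:
      ipam/SVC-ceph-cluster: "1"   # assumed label for the replication network
  spec:
    cidr: 10.0.31.0/24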

Out-of-Band (OOB) network

Connects Baseboard Management Controllers (BMCs) of the bare metal hosts. Must not be accessible from a MOSK cluster.

[1] For more details about Ceph networks, see Ceph Network Configuration Reference.

The following diagram illustrates the networking schema of the Container Cloud deployment on bare metal with a MOSK cluster:

[Image: network-multirack.png]