Multi-rack architecture

Available since MOS 21.6 TechPreview

Mirantis OpenStack for Kubernetes (MOS) enables you to deploy a cluster with a multi-rack architecture, where every data center cabinet (a rack) incorporates its own Layer 2 network infrastructure that does not extend beyond its top-of-rack switch. The architecture allows a MOS cloud to integrate natively with the Layer 3-centric networking topologies seen in modern data centers, such as Spine-Leaf.

The architecture eliminates the need to stretch and manage VLANs across multiple physical locations in a single data center, or to establish VPN tunnels between the parts of a geographically distributed cloud.

The set of networks present in each rack depends on the OpenStack networking service back end in use.

../../_images/multi-rack.png

Bare metal provisioning

The multi-rack architecture in Mirantis Container Cloud and MOS requires additional configuration of the networking infrastructure. Every Layer 2 domain, or rack, must have a DHCP relay agent configured on its dedicated segment of the Common/PXE network (lcm-nw VLAN). The agent receives all Layer 2 DHCP requests from the bare metal servers located in the rack and forwards them as Layer 3 packets across the data center fabric to a Mirantis Container Cloud regional cluster.

../../_images/multi-rack-bm.png
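
For illustration, the following is a minimal sketch of a host-based relay agent running dnsmasq in relay-only mode; in practice, the relay is more commonly configured on the rack top-of-rack switch or gateway router using its DHCP relay (helper address) feature. The interface name and all IP addresses (the rack lcm-nw gateway 10.100.1.1 and the regional cluster DHCP server 10.0.0.10) are hypothetical.

   # /etc/dnsmasq.d/dhcp-relay.conf - relay-only operation, no local DHCP scope
   # Listen on this rack's segment of the Common/PXE (lcm-nw) network.
   interface=bond0.105
   # Relay DHCP requests received on 10.100.1.1 (this rack's lcm-nw address)
   # to the DHCP server of the Mirantis Container Cloud regional cluster.
   dhcp-relay=10.100.1.1,10.0.0.10
   # Disable the DNS component of dnsmasq; only the relay function is needed.
   port=0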

You need to configure per-rack DHCP ranges by defining Subnet resources in Mirantis Container Cloud as described in the Mirantis Container Cloud documentation: Configure multiple DHCP ranges using Subnet resources.

Based on the address of the DHCP relay agent that forwards a request from a server, Mirantis Container Cloud automatically allocates an IP address from the corresponding subnet.
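
A minimal sketch of such a Subnet resource for one rack follows. The object name, CIDR, gateway, and address range are hypothetical, and the label and field names are taken from the Container Cloud documentation referenced above; verify the exact schema against that documentation for your Container Cloud version.

   apiVersion: ipam.mirantis.com/v1alpha1
   kind: Subnet
   metadata:
     name: dhcp-range-rack-1            # hypothetical per-rack object name
     namespace: default
     labels:
       kaas.mirantis.com/provider: baremetal
       # Marks this Subnet as a DHCP address range for bare metal provisioning;
       # check the referenced documentation for the exact label value to use.
       ipam/SVC-dhcp-range: ""
   spec:
     # This rack's segment of the Common/PXE (lcm-nw) network; all addresses are hypothetical.
     cidr: 10.100.1.0/24
     gateway: 10.100.1.1                # the address of this rack's DHCP relay agent
     includeRanges:
       - 10.100.1.100-10.100.1.200      # addresses handed out to servers in this rack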

For network types other than Common/PXE, you need to define subnets using Mirantis Container Cloud L2 templates. Every rack needs a dedicated set of L2 templates, each representing a specific server role and configuration.
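
A heavily trimmed sketch of one such per-rack L2 template is shown below. The template name, subnet names, and interface layout are hypothetical, and the npTemplate body is reduced to a single interface; the full schema, including the l3Layout and npTemplate fields and the {{nic}}/{{ip}} macros, is described in the Mirantis Container Cloud documentation.

   apiVersion: ipam.mirantis.com/v1alpha1
   kind: L2Template
   metadata:
     name: rack-1-compute               # hypothetical: one template per rack and server role
     namespace: default
   spec:
     # Subnets consumed by hosts of this role in this rack (subnet names are hypothetical).
     l3Layout:
       - subnetName: rack-1-k8s-pods
         scope: namespace
       - subnetName: rack-1-stor-frontend
         scope: namespace
     # Netplan-like template rendered individually for every host assigned to this template.
     npTemplate: |
       version: 2
       ethernets:
         {{nic 0}}:
           dhcp4: false
           addresses:
             - {{ip "0:rack-1-k8s-pods"}}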

Multi-rack MOS cluster with Tungsten Fabric

For MOS clusters with the Tungsten Fabric back end, you need to place the servers running the cloud control plane components in a single rack. This limitation is caused by the Layer 2 VRRP protocol used by the Kubernetes load balancer mechanism (MetalLB) to ensure high availability of the Mirantis Container Cloud and MOS APIs.
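
To illustrate the constraint, the following is a sketch of an upstream-style MetalLB configuration in Layer 2 mode using the legacy ConfigMap format; the address pool is hypothetical, and in MOS the MetalLB configuration is managed by Container Cloud rather than applied manually. Because the pool is announced at Layer 2, all nodes eligible to announce it must share the same External network segment, which is why that network exists only in rack 1.

   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: config
     namespace: metallb-system
   data:
     config: |
       address-pools:
         - name: services                  # hypothetical pool for the Container Cloud and MOS API VIPs
           protocol: layer2                # announced via ARP on the External (k8s-ext-v) segment
           addresses:
             - 10.100.9.100-10.100.9.120   # hypothetical range in the rack 1 External network

The Layer 3 mode mentioned in the note below would replace protocol: layer2 with protocol: bgp and add BGP peer definitions, removing the shared-segment requirement.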

Note

In future product versions, Mirantis will implement support for the Layer 3 (BGP) mode of the Kubernetes load balancing mechanism.

The diagram below helps you plan the networking layout of a multi-rack MOS cloud with Tungsten Fabric.

../../_images/multi-rack-tf.png

The table below provides a mapping between the racks and the network types participating in a multi-rack MOS cluster with the Tungsten Fabric back end.

Networks and VLANs for a multi-rack MOS cluster with TF

Network                     VLAN name      Rack 1   Rack 2 and N
--------------------------  -------------  -------  ------------
Common/PXE                  lcm-nw         Yes      Yes
Management                  lcm-nw         Yes      Yes
External (MetalLB)          k8s-ext-v      Yes      No
Kubernetes workloads        k8s-pods-v     Yes      Yes
Storage access (Ceph)       stor-frontend  Yes      Yes
Storage replication (Ceph)  stor-backend   Yes      Yes
Overlay                     tenant-vlan    Yes      Yes
Live migration              lm-vlan        Yes      Yes