Multi-rack architecture¶
Available since MOSK 21.6 TechPreview
Mirantis OpenStack for Kubernetes (MOSK) enables you to deploy a cluster with a multi-rack architecture, where every data center cabinet (a rack) incorporates its own Layer 2 network infrastructure that does not extend beyond its top-of-rack switch. This architecture allows a MOSK cloud to integrate natively with the Layer 3-centric networking topologies seen in modern data centers, such as Spine-Leaf.
The architecture eliminates the need to stretch and manage VLANs across multiple physical locations in a single data center, or to establish VPN tunnels between the parts of a geographically distributed cloud.
The set of networks present in each rack depends on the type of the OpenStack networking service back end in use.

Bare metal provisioning¶
The multi-rack architecture in Mirantis Container Cloud and MOSK requires additional configuration of the networking infrastructure. Every Layer 2 domain, or rack, needs a DHCP relay agent configured on its dedicated segment of the Common/PXE network (lcm-nw VLAN). The agent handles all Layer 2 DHCP requests coming from the bare metal servers in the rack and forwards them as Layer 3 packets across the data center fabric to a Mirantis Container Cloud regional cluster.

You need to configure per-rack DHCP ranges by defining Subnet resources in Mirantis Container Cloud as described in Mirantis Container Cloud documentation: Configure multiple DHCP ranges using Subnet resources.
Based on the address of the DHCP agent that relays a request from a server, Mirantis Container Cloud will automatically allocate an IP address in the corresponding subnet.
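As an illustration, a per-rack DHCP range might be defined with a Subnet resource similar to the sketch below. The `ipam.mirantis.com/v1alpha1` API group, the `ipam/SVC-dhcp-range` label, and the `includeRanges` field follow the Mirantis Container Cloud IPAM conventions, but treat the exact names, labels, and CIDRs as placeholders and verify them against the Container Cloud documentation referenced above.

```yaml
# Hypothetical per-rack DHCP range; one such Subnet is created for every rack
apiVersion: "ipam.mirantis.com/v1alpha1"
kind: Subnet
metadata:
  name: dhcp-rack-1             # placeholder name
  namespace: default
  labels:
    ipam/SVC-dhcp-range: "1"    # marks this Subnet as a DHCP range (assumed label)
spec:
  cidr: 10.10.1.0/24            # placeholder: this rack's segment of the Common/PXE network
  includeRanges:
    - 10.10.1.100-10.10.1.200   # placeholder: addresses leased to servers booting over PXE
```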
For network types other than Common/PXE, you need to define subnets using the Mirantis Container Cloud L2 templates. Every rack needs a dedicated set of L2 templates, with each template representing a specific server role and configuration.
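A per-rack L2 template might look like the following sketch. The `L2Template` kind and its `l3Layout`/`npTemplate` structure follow the Container Cloud IPAM API, but the subnet names, the template name, and the interface mapping are hypothetical placeholders; consult the Container Cloud L2 template documentation for the exact schema.

```yaml
# Hypothetical L2 template for compute nodes in rack 1; a similar template
# is defined per rack and per server role
apiVersion: "ipam.mirantis.com/v1alpha1"
kind: L2Template
metadata:
  name: rack-1-compute            # placeholder: rack- and role-specific name
  namespace: default
spec:
  l3Layout:
    - subnetName: rack-1-lcm      # placeholder: rack-local subnet definitions
      scope: namespace
    - subnetName: rack-1-storage-access
      scope: namespace
  npTemplate: |
    # netplan-style template rendered per node; macros are assumed to follow
    # the Container Cloud templating conventions
    version: 2
    ethernets:
      {{ nic 0 }}:
        addresses:
          - {{ ip "0:rack-1-lcm" }}
```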
Multi-rack MOSK cluster with Tungsten Fabric¶
For MOSK clusters with the Tungsten Fabric back end, you need to place the servers running the cloud control plane components into a single rack. This limitation is caused by the Layer 2 announcement mode used by the Kubernetes load balancer mechanism (MetalLB) to ensure high availability of the Mirantis Container Cloud and MOSK APIs: the announcements do not cross Layer 2 boundaries.
Note
In future product versions, Mirantis will implement support for the Layer 3 (BGP) mode of the Kubernetes load-balancing mechanism.
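For context, the difference between the two MetalLB modes can be illustrated with upstream MetalLB resources (`metallb.io` API). In the Layer 2 mode, the external pool is announced via ARP/NDP and stays within one broadcast domain, which is why the control plane must share a rack; in the BGP mode, the same pool is advertised as routes across the fabric. The pool name and address range below are placeholders.

```yaml
# Layer 2 mode: the pool is announced within a single L2 segment
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external-pool             # placeholder name
  namespace: metallb-system
spec:
  addresses:
    - 172.16.0.10-172.16.0.50     # placeholder: External (MetalLB) network range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - external-pool
---
# BGP mode (planned): the same pool is advertised as routes to ToR peers,
# removing the single-rack constraint on the control plane
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: external-bgp
  namespace: metallb-system
spec:
  ipAddressPools:
    - external-pool
```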
The diagram below helps you plan the networking layout of a multi-rack MOSK cloud with Tungsten Fabric.

The table below provides a mapping between the racks and the network types participating in a multi-rack MOSK cluster with the Tungsten Fabric back end.
| Network | VLAN name | Rack 1 | Rack 2 and N |
|---|---|---|---|
| Common/PXE | lcm-nw | Yes | Yes |
| Management | | Yes | Yes |
| External (MetalLB) | | Yes | No |
| Kubernetes workloads | | Yes | Yes |
| Storage access (Ceph) | | Yes | Yes |
| Storage replication (Ceph) | | Yes | Yes |
| Overlay | | Yes | Yes |
| Live migration | | Yes | Yes |