Multi-rack architecture

TechPreview

Mirantis OpenStack for Kubernetes (MOSK) enables you to deploy a cluster with a multi-rack architecture, where every data center cabinet (rack) incorporates its own Layer 2 network infrastructure that does not extend beyond its top-of-rack switch. This architecture allows a MOSK cloud to integrate natively with Layer 3-centric networking topologies, such as Spine-Leaf, that are commonly seen in modern data centers.

The architecture eliminates the need to stretch and manage VLANs across parts of a single data center, or to build VPN tunnels between the segments of a geographically distributed cloud.

The set of networks present in each rack depends on the OpenStack networking service back end in use.

Figure: Multi-rack architecture overview (multi-rack-overview.html)

Bare metal provisioning

The multi-rack architecture in Mirantis Container Cloud and MOSK requires additional configuration of the networking infrastructure. Every Layer 2 domain, or rack, needs a DHCP relay agent configured on its dedicated segment of the Common/PXE network (lcm-nw VLAN). The agent handles the Layer 2 DHCP requests coming from the bare metal servers in the rack and forwards them as Layer 3 packets across the data center fabric to a Mirantis Container Cloud regional cluster.

Figure: Bare metal provisioning in a multi-rack topology (multi-rack-bm.html)
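The exact relay setup depends on the equipment acting as the rack's Layer 3 gateway. As a minimal illustration, on a Linux-based gateway the ISC DHCP relay agent can forward requests from the rack's lcm-nw segment to the regional cluster; the interface name and server address below are examples, not values from this document:

```shell
# Run the ISC DHCP relay agent on the rack's gateway.
# vlan410    - this rack's segment of the Common/PXE (lcm-nw) network (example)
# 10.0.10.20 - DHCP server reachable on the Container Cloud regional cluster (example)
dhcrelay -i vlan410 10.0.10.20
```

On non-Linux equipment, the equivalent feature is usually called a DHCP relay or IP helper and is configured per VLAN interface.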

You need to configure per-rack DHCP ranges by defining Subnet resources in Mirantis Container Cloud as described in Mirantis Container Cloud documentation: Configure multiple DHCP ranges using Subnet resources.

Based on the address of the DHCP agent that relays a request from a server, Mirantis Container Cloud will automatically allocate an IP address in the corresponding subnet.
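As an illustration, a per-rack DHCP range can be expressed as a Subnet resource similar to the sketch below. The resource name, namespace, addresses, and label key are examples; refer to the linked Container Cloud documentation for the exact schema of your product version:

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: dhcp-range-rack-1        # example name; define one Subnet per rack
  namespace: default
  labels:
    ipam/SVC-dhcp-range: "1"     # marks the subnet as a DHCP range (label key is an assumption)
spec:
  cidr: 10.0.10.0/24             # the rack's lcm-nw segment (example)
  gateway: 10.0.10.1             # the rack's gateway running the DHCP relay (example)
  includeRanges:
    - 10.0.10.100-10.0.10.200    # addresses handed out to servers in this rack (example)
```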

For network types other than Common/PXE, you need to define subnets using Mirantis Container Cloud L2 templates. Every rack needs a dedicated set of L2 templates, with each template representing a specific server role and configuration.
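A per-rack L2 template might look similar to the following skeleton. All field contents here are illustrative, including the subnet names and the netplan-style npTemplate body; consult the Mirantis Container Cloud API reference for the authoritative schema:

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: rack-1-compute                 # one template per rack and server role (example)
  namespace: default
spec:
  autoIfMappingPrio:                   # preferred NIC name prefixes (example)
    - provision
    - eno
    - ens
  l3Layout:                            # per-rack subnets this template consumes (names are examples)
    - subnetName: rack-1-storage-frontend
      scope: namespace
    - subnetName: rack-1-tenant
      scope: namespace
  npTemplate: |                        # netplan-style template rendered per node
    version: 2
    ethernets:
      {{nic 0}}:
        dhcp4: false
        addresses:
          - {{ip "0:rack-1-storage-frontend"}}
```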

Multi-rack MOSK cluster with Tungsten Fabric

A typical medium-sized or larger MOSK cloud consists of three or more racks, which can generally be divided into the following major categories:

  • Compute/Storage racks that contain the hypervisors and the instances running on top of them. Additionally, they contain nodes that store cloud applications’ block, ephemeral, and object data as part of the Ceph cluster.

  • Control plane racks that incorporate all the components the cloud operator needs to manage the cloud life cycle. They also include the services through which cloud users interact with the cloud to deploy their applications, such as the cloud APIs and web UI.

    A control plane rack may also contain additional compute and storage nodes.

The diagram below helps you plan the networking layout of a multi-rack MOSK cloud with Tungsten Fabric.

Figure: Multi-rack MOSK cluster with Tungsten Fabric (multi-rack-tf.html)

Note

As of the current MOSK version, the three Kubernetes master nodes must either be placed into a single rack or, if distributed across multiple racks for better availability, require stretching the L2 segment of the management network across these racks. This requirement stems from the Mirantis Kubernetes Engine underlay for MOSK relying on the Layer 2 VRRP protocol to ensure high availability of the Kubernetes API endpoint. Mirantis is looking into addressing this limitation in future versions.

The table below provides a mapping between the racks and the network types participating in a multi-rack MOSK cluster with the Tungsten Fabric back end.

Networks and VLANs for a multi-rack MOSK cluster with TF

Network                    | VLAN name     | Control Plane rack | Compute/Storage rack
---------------------------|---------------|--------------------|---------------------
Common/PXE                 | lcm-nw        | Yes                | Yes
Management                 | lcm-nw        | Yes                | Yes
External (MetalLB)         | k8s-ext-v     | Yes                | No
Kubernetes workloads       | k8s-pods-v    | Yes                | Yes
Storage access (Ceph)      | stor-frontend | Yes                | Yes
Storage replication (Ceph) | stor-backend  | Yes                | Yes
Overlay                    | tenant-vlan   | Yes                | Yes
Live migration             | lm-vlan       | Yes                | Yes