Multi-rack architecture

TechPreview

Mirantis OpenStack for Kubernetes (MOSK) enables you to deploy a cluster with a multi-rack architecture, where every data center cabinet (a rack) incorporates its own Layer 2 network infrastructure that does not extend beyond its top-of-rack switch. This architecture allows a MOSK cloud to integrate natively with Layer 3-centric networking topologies, such as Spine-Leaf, that are commonly used in modern data centers.

The architecture eliminates the need to stretch and manage VLANs across parts of a single data center, or to build VPN tunnels between the segments of a geographically distributed cloud.

The set of networks present in each rack depends on the back end used by the OpenStack networking service.

[Diagram: multi-rack architecture overview]

Bare metal provisioning network

In the Mirantis Container Cloud and MOSK multi-rack reference architecture, every rack has its own L2 segment (VLAN) to bootstrap and install servers.

Segmentation of the provisioning network requires additional configuration of the underlay networking infrastructure and of certain Container Cloud API objects. You need to configure a DHCP relay agent on the border of each VLAN in the provisioning network. The agent receives broadcast DHCP requests coming from the bare metal servers in the rack and forwards them as unicast packets across the L3 fabric of the data center to the Container Cloud management cluster.

[Diagram: bare metal provisioning network in a multi-rack environment]

From the standpoint of the Container Cloud API, you configure per-rack DHCP ranges by adding Subnet resources in Container Cloud as described in Container Cloud documentation: Configure multiple DHCP ranges using Subnet resources.

The Container Cloud DHCP server automatically leases a temporary IP address to the requesting host from the DHCP range that corresponds to the address of the DHCP relay agent forwarding the request.
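
For illustration only, the sketch below shows what two per-rack DHCP Subnet objects might look like. The object names, namespaces, CIDRs, and address ranges are hypothetical, and the exact labels and fields of the Subnet resource may differ between Container Cloud releases, so verify them against the documentation referenced above.

    # Hypothetical per-rack DHCP ranges (illustration only). The Container Cloud
    # DHCP server picks the range whose subnet matches the relay agent address.
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: dhcp-rack-1                  # assumed name
      namespace: default                 # assumed namespace
      labels:
        ipam/SVC-dhcp-range: "1"         # marks the Subnet as a DHCP range
    spec:
      cidr: 10.0.1.0/24                  # provisioning VLAN of rack 1
      gateway: 10.0.1.1                  # DHCP relay agent / ToR gateway
      includeRanges:
        - 10.0.1.100-10.0.1.200
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: dhcp-rack-2
      namespace: default
      labels:
        ipam/SVC-dhcp-range: "1"
    spec:
      cidr: 10.0.2.0/24                  # provisioning VLAN of rack 2
      gateway: 10.0.2.1
      includeRanges:
        - 10.0.2.100-10.0.2.200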

Multi-rack MOSK cluster

To deploy a MOSK cluster with multi-rack reference architecture, you need to create a dedicated set of subnets and L2 templates for every rack in your cluster.

Each host type in a rack, defined by its role in the MOSK cluster and its network-related hardware configuration, may require a dedicated L2 template.
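
For illustration, a heavily trimmed sketch of a per-rack L2 template is shown below. The object name, namespace, cluster binding, and subnet names are assumptions, and the netplan template body is omitted because it depends on the Container Cloud release and on the host hardware; use the L2 template reference in the Container Cloud documentation as the source of truth.

    # Hypothetical L2 template for compute hosts in rack 1 (illustration only).
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: L2Template
    metadata:
      name: rack-1-compute               # assumed name
      namespace: mosk-project            # assumed project namespace
      labels:
        cluster.sigs.k8s.io/cluster-name: mosk-cluster   # binding may differ by release
    spec:
      autoIfMappingPrio:
        - provision
        - eno
        - ens
      l3Layout:
        # Each rack references its own set of Subnet objects.
        - subnetName: lcm-rack-1
          scope: namespace
        - subnetName: k8s-pods-rack-1
          scope: namespace
        - subnetName: stor-frontend-rack-1
          scope: namespace
        - subnetName: stor-backend-rack-1
          scope: namespace
        - subnetName: tenant-rack-1
          scope: namespace
      npTemplate: |
        # netplan-style template rendered per host; the body and macros are
        # release-specific and intentionally omitted from this sketch.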

Note

Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 extension between them. For the configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in the Deployment Guide.

For MOSK 23.1 and older versions, due to Container Cloud limitations, the following networks must have their L2 segments (VLANs) stretched across racks to all hosts of certain types in a multi-rack environment:

LCM/API network

Must be configured on the Kubernetes manager nodes of the MOSK cluster. Contains a Kubernetes API endpoint with a VRRP virtual IP address. Enables MKE cluster nodes to communicate with each other.

External network

Exposes OpenStack, StackLight, and other services of the MOSK cluster to external clients.

For details, see Underlay networking: routing configuration.

When planning IP address allocation for your cluster, pick a large IP range for each type of network and then split it into per-rack subnets.

For example, if you allocate a /20 address block for the LCM network, you can create up to 16 Subnet objects, each with a /24 address block, for up to 16 racks. This also simplifies routing on your hosts because the large /20 subnet can serve as an aggregated route destination. For details, see Underlay networking: routing configuration.
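
As an illustration of the aggregated-route idea, the netplan-style fragment below uses hypothetical addresses: a host in rack 1 owns an address from that rack's /24 LCM subnet and reaches the LCM subnets of all other racks through a single route to the /20 aggregate via its top-of-rack gateway.

    # Hypothetical host fragment (illustration only): 10.10.0.0/20 is the LCM
    # aggregate, 10.10.1.0/24 is the per-rack LCM subnet of rack 1.
    network:
      version: 2
      ethernets:
        eno1:
          addresses:
            - 10.10.1.10/24          # host address in the rack 1 LCM subnet
          routes:
            - to: 10.10.0.0/20       # one aggregated route covers all 16 per-rack /24s
              via: 10.10.1.1         # ToR gateway of rack 1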

Multi-rack MOSK cluster with Tungsten Fabric

A typical medium-sized or larger MOSK cloud consists of three or more racks that generally fall into the following major categories:

  • Compute/Storage racks that contain the hypervisors and instances running on top of them. Additionally, they contain nodes that store cloud applications’ block, ephemeral, and object data as part of the Ceph cluster.

  • Control plane racks that incorporate all the components the cloud operator needs to manage the cloud's life cycle. They also include the services through which cloud users interact with the cloud to deploy their applications, such as the cloud APIs and web UI.

    A control plane rack may also contain additional compute and storage nodes.

The diagram below helps you plan the networking layout of a multi-rack MOSK cloud with Tungsten Fabric.

[Diagram: networking layout of a multi-rack MOSK cloud with Tungsten Fabric]

Note

Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 extension between them. For the configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in the Deployment Guide.

For MOSK 23.1 and older versions, the Kubernetes master nodes (3 nodes) either need to be placed in a single rack or, if distributed across multiple racks for better availability, require stretching the L2 segment of the management network across these racks. This requirement exists because the Mirantis Kubernetes Engine underlay for MOSK relies on VRRP, a Layer 2 protocol, to ensure high availability of the Kubernetes API endpoint.

The table below provides a mapping between the racks and the network types participating in a multi-rack MOSK cluster with the Tungsten Fabric back end.

Networks and VLANs for a multi-rack MOSK cluster with TF

Network                       VLAN name        Control Plane rack    Compute/Storage rack
Common/PXE                    lcm-nw           Yes                   Yes
Management                    lcm-nw           Yes                   Yes
External (MetalLB)            k8s-ext-v        Yes                   No
Kubernetes workloads          k8s-pods-v       Yes                   Yes
Storage access (Ceph)         stor-frontend    Yes                   Yes
Storage replication (Ceph)    stor-backend     Yes                   Yes
Overlay                       tenant-vlan      Yes                   Yes
Live migration                lm-vlan          Yes                   Yes
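
To make the mapping above more concrete, the hypothetical netplan-style fragment below shows how the VLANs of a compute/storage rack might appear on a host. The parent interface, VLAN IDs, and addresses are illustrative only; in a real deployment, this configuration is rendered from the rack-specific L2 templates, and the PXE and management networks are omitted here.

    # Hypothetical compute/storage host in rack 1 (illustration only).
    network:
      version: 2
      ethernets:
        eno1: {}                     # parent interface for the tagged VLANs
      vlans:
        k8s-pods-v:
          id: 120                    # Kubernetes workloads
          link: eno1
          addresses: [10.20.1.10/24]
        stor-frontend:
          id: 130                    # storage access (Ceph public)
          link: eno1
          addresses: [10.30.1.10/24]
        stor-backend:
          id: 131                    # storage replication (Ceph cluster)
          link: eno1
          addresses: [10.31.1.10/24]
        tenant-vlan:
          id: 140                    # overlay (tenant underlay)
          link: eno1
          addresses: [10.40.1.10/24]
        lm-vlan:
          id: 150                    # live migration
          link: eno1
          addresses: [10.50.1.10/24]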