Create an L2 template for a MOSK controller node
Note
Mirantis does not recommend modifying L2 templates that are already in use, to prevent accidental cluster failures caused by unsafe changes.
The risks of modifying an L2 template in use include:
Services running on hosts cannot automatically reconfigure to switch to the new IP addresses and/or interfaces.
Connections between services may be interrupted unexpectedly, which can cause data loss.
Incorrect configuration on hosts can lead to an irrevocable loss of connectivity between services and to unexpected cluster partitioning or disassembly.
According to the reference architecture, MOSK controller nodes must be connected to the following networks:
PXE network
LCM network
Kubernetes workloads network
Storage access network (if deploying with Ceph as a back end for ephemeral storage)
Floating IP and provider networks. Not required for deployments with Tungsten Fabric.
Tenant underlay networks. Required for deployments with VXLAN networking or with Tungsten Fabric. In the latter case, the BGP service is configured over this network.
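Each of these networks is backed by a Subnet object that the L2 template references by name in its l3Layout section. As an illustration only, a Subnet for a per-rack LCM network might look as follows; all addresses and ranges below are hypothetical placeholders, adjust them to your environment:

```yaml
# Hypothetical Subnet referenced by an L2 template through l3Layout.
# All names, addresses, and ranges are placeholders.
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: rack1-k8s-lcm
  namespace: mosk-namespace-name
  labels:
    kaas.mirantis.com/provider: baremetal
    cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
spec:
  cidr: 10.197.0.0/24          # per-rack LCM subnet
  gateway: 10.197.0.1          # consumed by gateway_from_subnet in npTemplate
  nameservers:
    - 172.18.176.6             # consumed by nameservers_from_subnet in npTemplate
  includeRanges:
    - 10.197.0.50-10.197.0.90  # addresses available for allocation
```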
To create L2 templates for MOSK controller nodes:
Create or open the mosk-l2template.yml file that contains the L2 templates. Add L2 templates using the following example, adjusting the values of specific parameters according to the specification of your environment.
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    kaas.mirantis.com/provider: baremetal
    kaas.mirantis.com/region: region-one
    cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
    rack1-mosk-controller: "true"
  name: rack1-mosk-controller
  namespace: mosk-namespace-name
spec:
  autoIfMappingPrio:
    - provision
    - eno
    - ens
    - enp
  l3Layout:
    - subnetName: mgmt-lcm
      scope: global
    - subnetName: rack1-k8s-lcm
      scope: namespace
    - subnetName: k8s-external
      scope: namespace
    - subnetName: rack1-k8s-pods
      scope: namespace
    - subnetName: rack1-ceph-public
      scope: namespace
    - subnetName: rack1-tenant-tunnel
      scope: namespace
  npTemplate: |-
    version: 2
    ethernets:
      {{nic 0}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 0}}
        set-name: {{nic 0}}
        mtu: 9000
      {{nic 1}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 1}}
        set-name: {{nic 1}}
        mtu: 9000
      {{nic 2}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
        mtu: 9000
      {{nic 3}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 3}}
        set-name: {{nic 3}}
        mtu: 9000
    bonds:
      bond0:
        mtu: 9000
        parameters:
          mode: 802.3ad
          mii-monitor-interval: 100
        interfaces:
          - {{nic 0}}
          - {{nic 1}}
      bond1:
        mtu: 9000
        parameters:
          mode: 802.3ad
          mii-monitor-interval: 100
        interfaces:
          - {{nic 2}}
          - {{nic 3}}
    vlans:
      k8s-lcm-v:
        id: 403
        link: bond0
        mtu: 9000
      k8s-ext-v:
        id: 409
        link: bond0
        mtu: 9000
      k8s-pods-v:
        id: 408
        link: bond0
        mtu: 9000
      pr-floating:
        id: 407
        link: bond1
        mtu: 9000
      stor-frontend:
        id: 404
        link: bond0
        addresses:
          - {{ip "stor-frontend:rack1-ceph-public"}}
        mtu: 9000
        routes:
          - to: 10.199.16.0/22 # aggregated address space for Ceph public network
            via: {{ gateway_from_subnet "rack1-ceph-public" }}
      tenant-tunnel:
        id: 406
        link: bond1
        addresses:
          - {{ip "tenant-tunnel:rack1-tenant-tunnel"}}
        mtu: 9000
        routes:
          - to: 10.195.0.0/22 # aggregated address space for tenant networks
            via: {{ gateway_from_subnet "rack1-tenant-tunnel" }}
    bridges:
      k8s-lcm:
        interfaces: [k8s-lcm-v]
        addresses:
          - {{ ip "k8s-lcm:rack1-k8s-lcm" }}
        nameservers:
          addresses: {{nameservers_from_subnet "rack1-k8s-lcm"}}
        routes:
          - to: 10.197.0.0/21 # aggregated address space for LCM and API/LCM networks
            via: {{ gateway_from_subnet "rack1-k8s-lcm" }}
          - to: {{ cidr_from_subnet "mgmt-lcm" }}
            via: {{ gateway_from_subnet "rack1-k8s-lcm" }}
      k8s-ext:
        interfaces: [k8s-ext-v]
        addresses:
          - {{ip "k8s-ext:k8s-external"}}
        nameservers:
          addresses: {{nameservers_from_subnet "k8s-external"}}
        gateway4: {{ gateway_from_subnet "k8s-external" }}
        mtu: 9000
      k8s-pods:
        interfaces: [k8s-pods-v]
        addresses:
          - {{ip "k8s-pods:rack1-k8s-pods"}}
        mtu: 9000
        routes:
          - to: 10.199.0.0/22 # aggregated address space for Kubernetes workloads
            via: {{ gateway_from_subnet "rack1-k8s-pods" }}
Note
The kaas.mirantis.com/region label is removed from all Container Cloud and MOSK objects in 24.1. Therefore, do not add this label starting with these releases. On existing clusters updated to these releases, or if the label is added manually, Container Cloud ignores it.
Note
Before MOSK 23.3, an L2 template requires clusterRef: <clusterName> in the spec section. Since MOSK 23.3, this parameter is deprecated and automatically migrated to the cluster.sigs.k8s.io/cluster-name: <clusterName> label.
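For example, the two ways of referencing a cluster differ as follows; the fragments below are illustrative only, and mosk-cluster-name is a placeholder:

```yaml
# Before MOSK 23.3 (deprecated):
spec:
  clusterRef: mosk-cluster-name

# Since MOSK 23.3 (the deprecated parameter is migrated to this label automatically):
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
```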
Caution
If you plan to deploy a MOSK cluster with the compact control plane option and to configure ARP announcement of the load-balancer IP address for the MOSK cluster API, the API/LCM network will be used for MOSK controller nodes. Therefore, change the rack1-k8s-lcm subnet to the api-lcm one in the corresponding L2Template object:
spec:
  ...
  l3Layout:
    ...
    - subnetName: api-lcm
      scope: namespace
    ...
  npTemplate: |-
    ...
    bridges:
      k8s-lcm:
        interfaces: [k8s-lcm-v]
        addresses:
          - {{ ip "k8s-lcm:api-lcm" }}
        nameservers:
          addresses: {{nameservers_from_subnet "api-lcm"}}
        routes:
          - to: 10.197.0.0/21 # aggregated address space for LCM and API/LCM networks
            via: {{ gateway_from_subnet "api-lcm" }}
          - to: {{ cidr_from_subnet "mgmt-lcm" }}
            via: {{ gateway_from_subnet "api-lcm" }}
    ...
Proceed with Create an L2 template for a MOSK compute node.