Create an L2 template for a MOSK controller node
Caution
Modifying L2 templates that are already in use requires a mandatory validation step by the infrastructure operator to prevent accidental cluster failures due to unsafe changes. The risks posed by modifying L2 templates in use include:
Services running on hosts cannot reconfigure automatically to switch to the new IP addresses and/or interfaces.
Connections between services are interrupted unexpectedly, which can cause data loss.
Incorrect configurations on hosts can lead to irrevocable loss of connectivity between services and unexpected cluster partition or disassembly.
For details, see Modify network configuration on an existing machine.
According to the reference architecture, MOSK controller nodes must be connected to the following networks:
PXE network
LCM network
External network
Kubernetes workloads network
Storage access network (if deploying with Ceph as a backend for ephemeral storage)
Floating IP and provider networks (not required for deployment with Tungsten Fabric)
Tenant underlay networks (required for deployment with VXLAN networking or with Tungsten Fabric; in the latter case, the BGP service is configured over this network)
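Each of these networks is represented in the management cluster by a Subnet object, which the L2 template references by name through its l3Layout section. The following is a minimal sketch of such an object; the name, namespace, and address ranges are illustrative assumptions, and the field set follows the ipam.mirantis.com/v1alpha1 Subnet resource, so verify it against your product version:

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: rack1-k8s-lcm              # illustrative; must match a subnetName in l3Layout
  namespace: mosk-namespace-name
  labels:
    kaas.mirantis.com/provider: baremetal
spec:
  cidr: 10.197.2.0/24              # assumed address range for the rack LCM segment
  gateway: 10.197.2.1              # assumed gateway; consumed by gateway_from_subnet
  nameservers:
    - 172.18.176.6                 # assumed DNS server; consumed by nameservers_from_subnet
  includeRanges:
    - 10.197.2.10-10.197.2.100     # assumed allocatable range for host addresses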
To create L2 templates for MOSK controller nodes:
1. Create or open the mosk-l2template.yml file that contains the L2 templates.

2. Add L2 templates using the following example. Adjust the values of specific parameters according to the specification of your environment.
Example of an L2 template for a MOSK controller node

apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    kaas.mirantis.com/provider: baremetal
    cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
    rack1-mosk-controller: "true"
  name: rack1-mosk-controller
  namespace: mosk-namespace-name
spec:
  autoIfMappingPrio:
    - provision
    - eno
    - ens
    - enp
  l3Layout:
    - subnetName: mgmt-lcm
      scope: global
    - subnetName: rack1-k8s-lcm
      scope: namespace
    - subnetName: k8s-external
      scope: namespace
    - subnetName: rack1-k8s-pods
      scope: namespace
    - subnetName: rack1-ceph-public
      scope: namespace
    - subnetName: rack1-tenant-tunnel
      scope: namespace
  npTemplate: |-
    version: 2
    ethernets:
      {{nic 0}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 0}}
        set-name: {{nic 0}}
        mtu: 9000
      {{nic 1}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 1}}
        set-name: {{nic 1}}
        mtu: 9000
      {{nic 2}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
        mtu: 9000
      {{nic 3}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 3}}
        set-name: {{nic 3}}
        mtu: 9000
    bonds:
      bond0:
        mtu: 9000
        parameters:
          mode: 802.3ad
          mii-monitor-interval: 100
        interfaces:
          - {{nic 0}}
          - {{nic 1}}
      bond1:
        mtu: 9000
        parameters:
          mode: 802.3ad
          mii-monitor-interval: 100
        interfaces:
          - {{nic 2}}
          - {{nic 3}}
    vlans:
      k8s-lcm-v:
        id: 403
        link: bond0
        mtu: 9000
      k8s-ext-v:
        id: 409
        link: bond0
        mtu: 9000
      k8s-pods-v:
        id: 408
        link: bond0
        mtu: 9000
      pr-floating:
        id: 407
        link: bond1
        mtu: 9000
      stor-frontend:
        id: 404
        link: bond0
        addresses:
          - {{ip "stor-frontend:rack1-ceph-public"}}
        mtu: 9000
        routes:
          - to: 10.199.16.0/22  # aggregated address space for Ceph public network
            via: {{ gateway_from_subnet "rack1-ceph-public" }}
      tenant-tunnel:
        id: 406
        link: bond1
        addresses:
          - {{ip "tenant-tunnel:rack1-tenant-tunnel"}}
        mtu: 9000
        routes:
          - to: 10.195.0.0/22  # aggregated address space for tenant networks
            via: {{ gateway_from_subnet "rack1-tenant-tunnel" }}
    bridges:
      k8s-lcm:
        interfaces: [k8s-lcm-v]
        addresses:
          - {{ ip "k8s-lcm:rack1-k8s-lcm" }}
        nameservers:
          addresses: {{nameservers_from_subnet "rack1-k8s-lcm"}}
        routes:
          - to: 10.197.0.0/21  # aggregated address space for LCM and API/LCM networks
            via: {{ gateway_from_subnet "rack1-k8s-lcm" }}
          - to: {{ cidr_from_subnet "mgmt-lcm" }}
            via: {{ gateway_from_subnet "rack1-k8s-lcm" }}
      k8s-ext:
        interfaces: [k8s-ext-v]
        addresses:
          - {{ip "k8s-ext:k8s-external"}}
        nameservers:
          addresses: {{nameservers_from_subnet "k8s-external"}}
        gateway4: {{ gateway_from_subnet "k8s-external" }}
        mtu: 9000
      k8s-pods:
        interfaces: [k8s-pods-v]
        addresses:
          - {{ip "k8s-pods:rack1-k8s-pods"}}
        mtu: 9000
        routes:
          - to: 10.199.0.0/22  # aggregated address space for Kubernetes workloads
            via: {{ gateway_from_subnet "rack1-k8s-pods" }}
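Once the template matches your environment, apply the file to the management cluster so that the template can be matched to machines through its rack1-mosk-controller label. The kubectl step and the l2TemplateSelector field path below are a sketch based on the baremetal provider API and may differ in your product version:

kubectl apply -f mosk-l2template.yml

# Reference the template from a Machine object by label (assumed field path):
spec:
  providerSpec:
    value:
      l2TemplateSelector:
        label: rack1-mosk-controller   # matches the label set on the L2Template above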
Caution
If you plan to deploy a MOSK cluster with the compact control plane option and configure ARP announcement of the load-balancer IP address for the MOSK cluster API, the API/LCM network will be used for MOSK controller nodes. Therefore, change the rack1-k8s-lcm subnet to the api-lcm one in the corresponding L2Template object:

spec:
  ...
  l3Layout:
    ...
    - subnetName: api-lcm
      scope: namespace
    ...
  npTemplate: |-
    ...
    bridges:
      k8s-lcm:
        interfaces: [k8s-lcm-v]
        addresses:
          - {{ ip "k8s-lcm:api-lcm" }}
        nameservers:
          addresses: {{nameservers_from_subnet "api-lcm"}}
        routes:
          - to: 10.197.0.0/21  # aggregated address space for LCM and API/LCM networks
            via: {{ gateway_from_subnet "api-lcm" }}
          - to: {{ cidr_from_subnet "mgmt-lcm" }}
            via: {{ gateway_from_subnet "api-lcm" }}
    ...
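After editing a template, you can re-read the stored object to confirm that the change was accepted, for example (assuming kubectl access to the management cluster):

kubectl -n mosk-namespace-name get l2template rack1-mosk-controller -o yaml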
Proceed with Create an L2 template for a MOSK compute node.