L2 template example with bonds and bridges

This section contains an example L2 template that demonstrates how to set up bonds and bridges on hosts for your managed clusters.

Caution

Use of a dedicated network for Kubernetes pods traffic, for external connection to the Kubernetes services exposed by the cluster, and for the Ceph cluster access and replication traffic is available as Technology Preview. Use such configurations for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.


Kubernetes LCM network

The Kubernetes LCM network connects LCM Agents running on nodes to the LCM API of the management cluster. It is also used for communication between kubelet and the Kubernetes API server inside a Kubernetes cluster. The MKE components use this network for communication inside a swarm cluster.

  • Each node of every cluster must have one, and only one, IP address in the LCM network, allocated from one of the Subnet objects that have the ipam/SVC-k8s-lcm label defined. Therefore, all Subnet objects used for LCM networks must have the ipam/SVC-k8s-lcm label defined.

  • You can use any interface name for the LCM network traffic, but the Subnet objects for the LCM network must have the ipam/SVC-k8s-lcm label, as shown in the sketch after this list. For details, see Service labels and their life cycle.
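
The following Subnet sketch shows where the ipam/SVC-k8s-lcm label goes. The name and namespace match the demo-lcm subnet referenced in the l3Layout section of the example below, while the CIDR, gateway, and label value are illustrative placeholders only.

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-lcm
  namespace: managed-ns
  labels:
    ipam/SVC-k8s-lcm: "1"
spec:
  cidr: 10.0.11.0/24
  gateway: 10.0.11.1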

Dedicated network for the Kubernetes pods traffic

If you want to use a dedicated network for Kubernetes pods traffic, configure each node with an IPv4 address that will be used to route the pods traffic between nodes. To accomplish that, use the npTemplate.bridges.k8s-pods bridge in the L2 template, as demonstrated in the example below. As defined in Container Cloud Reference Architecture: Host networking, this bridge name is reserved for the Kubernetes pods network. When the k8s-pods bridge is defined in an L2 template, Calico CNI uses that network for routing the pods traffic between nodes.

Dedicated network for the Kubernetes services traffic (MetalLB)

You can use a dedicated network for external connections to the Kubernetes services exposed by the cluster. If enabled, MetalLB listens and responds on the dedicated virtual bridge. To accomplish that, configure each node where metallb-speaker is deployed with an IPv4 address on that network. On the MOSK nodes that are connected to the external network, the default route must point to the default gateway in the external network.

Use the npTemplate.bridges.k8s-ext bridge in the L2 template, as demonstrated in the example below. The Subnet object that corresponds to the k8s-ext bridge must explicitly exclude the IP address ranges that are in use by MetalLB.
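
A minimal sketch of such a Subnet, assuming the excludeRanges field of the Subnet spec is used to withhold the MetalLB address pool from node address allocation; all addresses and ranges below are placeholders, not values from this example. The nameservers list is what the nameservers_from_subnet function in the example below would return.

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-ext
  namespace: managed-ns
spec:
  cidr: 10.0.12.0/24
  gateway: 10.0.12.1
  nameservers:
    - 8.8.8.8
  excludeRanges:
    # Placeholder range reserved for the MetalLB address pool, never assigned to nodes
    - 10.0.12.100-10.0.12.120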

Dedicated networks for the Ceph distributed storage traffic

Starting from Container Cloud 2.7.0, you can configure dedicated networks for the Ceph cluster access and replication traffic. Set labels on the Subnet CRs for the corresponding networks, as described in Create subnets. Container Cloud automatically configures Ceph to use the addresses from these subnets. Ensure that the addresses are assigned to the storage nodes.

The Subnet objects used to assign IP addresses to these networks must have corresponding labels ipam/SVC-ceph-public for the Ceph public (storage access) network and ipam/SVC-ceph-cluster for the Ceph cluster (storage replication) network.
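
The following sketch shows the two labeled Subnet objects. The names and namespace match the demo-ceph-public and demo-ceph-cluster subnets referenced in the l3Layout section of the example below, while the CIDRs and the label values are illustrative placeholders only.

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-ceph-public
  namespace: managed-ns
  labels:
    ipam/SVC-ceph-public: "1"
spec:
  cidr: 10.0.13.0/24
---
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-ceph-cluster
  namespace: managed-ns
  labels:
    ipam/SVC-ceph-cluster: "1"
spec:
  cidr: 10.0.14.0/24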

Example of an L2 template with interface bonding

apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: test-managed
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
spec:
  autoIfMappingPrio:
    - provision
    - eno
    - ens
    - enp
  l3Layout:
    - subnetName: mgmt-lcm
      scope: global
    - subnetName: demo-lcm
      scope: namespace
    - subnetName: demo-ext
      scope: namespace
    - subnetName: demo-pods
      scope: namespace
    - subnetName: demo-ceph-cluster
      scope: namespace
    - subnetName: demo-ceph-public
      scope: namespace
  npTemplate: |
    version: 2
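    # Physical NICs are matched by MAC address; the {{mac N}} and {{nic N}} macros are resolved per node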
    ethernets:
      ten10gbe0s0:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
      ten10gbe0s1:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 3}}
        set-name: {{nic 3}}
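    # Aggregate both NICs into an LACP (802.3ad) bond with jumbo frames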
    bonds:
      bond0:
        interfaces:
          - ten10gbe0s0
          - ten10gbe0s1
        mtu: 9000
        parameters:
          mode: 802.3ad
          mii-monitor-interval: 100
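    # Carve per-network VLANs out of the bond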
    vlans:
      k8s-lcm-vlan:
        id: 1009
        link: bond0
      k8s-ext-vlan:
        id: 1001
        link: bond0
      k8s-pods-vlan:
        id: 1002
        link: bond0
      stor-frontend:
        id: 1003
        link: bond0
      stor-backend:
        id: 1004
        link: bond0
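    # One bridge per network; IP addresses are allocated from the subnets declared in l3Layout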
    bridges:
      k8s-lcm:
        interfaces: [k8s-lcm-vlan]
        addresses:
          - {{ip "k8s-lcm:demo-lcm"}}
        routes:
          - to: {{ cidr_from_subnet "mgmt-lcm" }}
            via: {{ gateway_from_subnet "demo-lcm" }}
      k8s-ext:
        interfaces: [k8s-ext-vlan]
        addresses:
          - {{ip "k8s-ext:demo-ext"}}
        nameservers:
          addresses: {{nameservers_from_subnet "demo-ext"}}
        gateway4: {{ gateway_from_subnet "demo-ext" }}
      k8s-pods:
        interfaces: [k8s-pods-vlan]
        addresses:
          - {{ip "k8s-pods:demo-pods"}}
      ceph-cluster:
        interfaces: [stor-backend]
        addresses:
          - {{ip "ceph-cluster:demo-ceph-cluster"}}
      ceph-public:
        interfaces: [stor-frontend]
        addresses:
          - {{ip "ceph-public:demo-ceph-public"}}

Note

Before MOSK 23.3, an L2 template required clusterRef: <clusterName> in the spec section. Since MOSK 23.3, this parameter is deprecated and is automatically migrated to the cluster.sigs.k8s.io/cluster-name: <clusterName> label.
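
Schematically, the deprecated and the current ways of linking an L2 template to a cluster (using the cluster name from the example above):

# Before MOSK 23.3 (deprecated)
spec:
  clusterRef: mosk-cluster-name

# Since MOSK 23.3
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster-name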