L2 template example with bonds and bridges¶
This section contains an example L2 template that demonstrates how to set up bonds and bridges on hosts for your managed clusters.
Parameters of the bond interface¶
Configure bonding options using the parameters field. The only mandatory option is mode. See the example below for details.
Note
You can set any mode supported by netplan and your hardware.
Important
Bond monitoring is disabled in Ubuntu by default. However, Mirantis highly recommends enabling it using Media Independent Interface (MII) monitoring by setting the mii-monitor-interval parameter to a non-zero value. For details, see Linux documentation: bond monitoring.
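For instance, a minimal bonds fragment of an npTemplate that sets the mandatory mode option and enables MII monitoring could look as follows. It mirrors the full example at the end of this section; the interface names and values are placeholders only:
bonds:
  bond0:
    interfaces:
      - ten10gbe0s0
      - ten10gbe0s1
    parameters:
      mode: 802.3ad              # mandatory bonding option
      mii-monitor-interval: 100  # enables MII link monitoring, in milliseconds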
Kubernetes LCM network¶
The Kubernetes LCM network connects LCM Agents running on nodes to the LCM API of the management cluster. It is also used for communication between kubelet and the Kubernetes API server inside a Kubernetes cluster. The MKE components use this network for communication inside a swarm cluster.
To configure each node with an IP address that will be used for LCM traffic, use the npTemplate.bridges.k8s-lcm bridge in the L2 template, as demonstrated in the example below.
Each node of every cluster must have one and only one IP address in the LCM network, allocated from one of the Subnet objects that have the ipam/SVC-k8s-lcm label defined. Therefore, all Subnet objects used for LCM networks must have the ipam/SVC-k8s-lcm label defined. You can use any interface name for the LCM network traffic. For details on the ipam/SVC-k8s-lcm label, see Service labels and their life cycle.
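For illustration, a Subnet object for the LCM network could carry the ipam/SVC-k8s-lcm label as sketched below. The name, namespace, CIDR, and label value are assumptions for this sketch; see Create subnets and Service labels and their life cycle for the exact fields and labels required:
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-lcm
  namespace: managed-ns
  labels:
    ipam/SVC-k8s-lcm: "1"    # service label that marks this subnet as an LCM network
spec:
  cidr: 10.0.11.0/24         # illustrative address range for LCM traffic
  gateway: 10.0.11.1         # illustrative gateway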
Dedicated network for the Kubernetes pods traffic¶
If you want to use a dedicated network for the Kubernetes pods traffic, configure each node with an IPv4 address that will be used to route the pods traffic between nodes. To accomplish that, use the npTemplate.bridges.k8s-pods bridge in the L2 template, as demonstrated in the example below.
As defined in Container Cloud Reference Architecture: Host networking, this bridge name is reserved for the Kubernetes pods network. When the k8s-pods bridge is defined in an L2 template, Calico CNI uses that network for routing the pods traffic between nodes.
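As a sketch, the Subnet object referenced as demo-pods in the l3Layout of the example below might be defined as follows. The CIDR is a placeholder assumption; see Create subnets for the full set of fields:
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-pods
  namespace: managed-ns
spec:
  cidr: 10.0.12.0/24   # assumed address range for routing pods traffic between nodes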
Dedicated network for the Kubernetes services traffic (MetalLB)¶
You can use a dedicated network for external connection to the Kubernetes services exposed by the cluster. If enabled, MetalLB will listen and respond on the dedicated virtual bridge.
To accomplish that, configure each node where metallb-speaker is deployed with an IPv4 address. For details on selecting nodes for metallb-speaker, see Configure node selectors for MetalLB speakers. Both the MetalLB IP address ranges and the IP addresses configured on those nodes must fit in the same CIDR.
The default route on the MOSK nodes that are connected to the external network must use the default gateway of the external network.
Caution
The IP address ranges of the corresponding subnet used in the L2Template object for the dedicated virtual bridge must be excluded from the MetalLB address ranges.
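To illustrate the caution above, the subnet for the external network can reserve the MetalLB range explicitly so that node addresses assigned through the L2 template never overlap with it. The addresses below are placeholders, and the includeRanges and excludeRanges fields are shown as an assumption based on the Subnet resource described in Create subnets:
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-ext
  namespace: managed-ns
spec:
  cidr: 10.0.13.0/24
  gateway: 10.0.13.1
  includeRanges:
    - 10.0.13.10-10.0.13.99     # node addresses for the k8s-ext bridge
  excludeRanges:
    - 10.0.13.100-10.0.13.200   # reserved for the MetalLB address ranges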
Dedicated networks for the Ceph distributed storage traffic¶
You can configure dedicated networks for the Ceph cluster access and replication traffic. Set labels on the Subnet CRs for the corresponding networks, as described in Create subnets. Container Cloud automatically configures Ceph to use the addresses from these subnets. Ensure that the addresses are assigned to the storage nodes.
The Subnet objects used to assign IP addresses to these networks must have the corresponding labels: ipam/SVC-ceph-public for the Ceph public (storage access) network and ipam/SVC-ceph-cluster for the Ceph cluster (storage replication) network.
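For instance, the Subnet for the storage replication network could carry the ipam/SVC-ceph-cluster label as sketched below; the public (storage access) network Subnet is labeled the same way with ipam/SVC-ceph-public. The name, namespace, CIDR, and label value are illustrative assumptions:
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-ceph-cluster
  namespace: managed-ns
  labels:
    ipam/SVC-ceph-cluster: "1"   # marks this subnet as the Ceph replication network
spec:
  cidr: 10.0.14.0/24             # assumed address range for storage replication traffic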
Example of an L2 template with interfaces bonding¶
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: test-managed
  namespace: managed-ns
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
spec:
  autoIfMappingPrio:
    - provision
    - eno
    - ens
    - enp
  l3Layout:
    - subnetName: mgmt-lcm
      scope: global
    - subnetName: demo-lcm
      scope: namespace
    - subnetName: demo-ext
      scope: namespace
    - subnetName: demo-pods
      scope: namespace
    - subnetName: demo-ceph-cluster
      scope: namespace
    - subnetName: demo-ceph-public
      scope: namespace
  npTemplate: |
    version: 2
    ethernets:
      ten10gbe0s0:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
      ten10gbe0s1:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 3}}
        set-name: {{nic 3}}
    bonds:
      bond0:
        interfaces:
          - ten10gbe0s0
          - ten10gbe0s1
        mtu: 9000
        parameters:
          mode: 802.3ad
          mii-monitor-interval: 100
    vlans:
      k8s-lcm-vlan:
        id: 1009
        link: bond0
      k8s-ext-vlan:
        id: 1001
        link: bond0
      k8s-pods-vlan:
        id: 1002
        link: bond0
      stor-frontend:
        id: 1003
        link: bond0
      stor-backend:
        id: 1004
        link: bond0
    bridges:
      k8s-lcm:
        interfaces: [k8s-lcm-vlan]
        addresses:
          - {{ip "k8s-lcm:demo-lcm"}}
        routes:
          - to: {{ cidr_from_subnet "mgmt-lcm" }}
            via: {{ gateway_from_subnet "demo-lcm" }}
      k8s-ext:
        interfaces: [k8s-ext-vlan]
        addresses:
          - {{ip "k8s-ext:demo-ext"}}
        nameservers:
          addresses: {{nameservers_from_subnet "demo-ext"}}
        gateway4: {{ gateway_from_subnet "demo-ext" }}
      k8s-pods:
        interfaces: [k8s-pods-vlan]
        addresses:
          - {{ip "k8s-pods:demo-pods"}}
      ceph-cluster:
        interfaces: [stor-backend]
        addresses:
          - {{ip "ceph-cluster:demo-ceph-cluster"}}
      ceph-public:
        interfaces: [stor-frontend]
        addresses:
          - {{ip "ceph-public:demo-ceph-public"}}
Note
Before MOSK 23.3, an L2 template required the clusterRef: <clusterName> parameter in the spec section. Since MOSK 23.3, this parameter is deprecated and automatically migrated to the cluster.sigs.k8s.io/cluster-name: <clusterName> label.
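For reference, the deprecated form placed the cluster reference directly in spec, as sketched below; the label shown in the example above is the current form:
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: test-managed
  namespace: managed-ns
spec:
  clusterRef: mosk-cluster-name   # deprecated since MOSK 23.3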