L2 template example with bonds and bridges¶
This section contains an example L2 template that demonstrates how to set up bonds and bridges on hosts for your managed clusters, as described in Create L2 templates.
Caution
Use of a dedicated network for Kubernetes pods traffic, for external connection to the Kubernetes services exposed by the cluster, and for the Ceph cluster access and replication traffic is available as Technology Preview. Use such configurations for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.
The following feature is still under development and will be announced in one of the following Container Cloud releases:
Switching Kubernetes API to listen to the specified IP address on the node
Kubernetes LCM network¶
The Kubernetes LCM network connects LCM Agents running on nodes to the LCM API of the management or regional cluster. It is also used for communication between kubelet and the Kubernetes API server inside a Kubernetes cluster. The MKE components use this network for communication inside a swarm cluster.
To configure each node with an IP address that will be used for LCM traffic, use the npTemplate.bridges.k8s-lcm bridge in the L2 template, as demonstrated in the example below.
Since Container Cloud 2.20.0 and 2.20.1 for MOSK 22.4, each node of every cluster must have only one IP address in the LCM network that is allocated from one of the Subnet objects having the ipam/SVC-k8s-lcm label defined. Therefore, all Subnet objects used for LCM networks must have the ipam/SVC-k8s-lcm label defined.
Before Container Cloud 2.20.0 and since MOSK 22.2, you can use any interface name for the LCM network traffic. The Subnet objects for the LCM network must have the ipam/SVC-k8s-lcm label. For details, see Service labels and their life cycle.
As defined in Host networking, the LCM network can be collocated with the PXE network.
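As a sketch, a Subnet object for the LCM network referenced as demo-lcm in the example below could carry the ipam/SVC-k8s-lcm label as follows. The CIDR, gateway, nameserver, and range values are illustrative, not defaults; verify the exact Subnet field names against your Container Cloud version:

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-lcm
  namespace: managed-ns
  labels:
    # Required so that the IPAM service recognizes this Subnet
    # as the source of LCM addresses
    ipam/SVC-k8s-lcm: "1"
spec:
  cidr: 10.0.10.0/24            # example CIDR, adjust to your environment
  gateway: 10.0.10.1            # example gateway
  nameservers:
    - 10.0.10.1                 # example DNS server
  includeRanges:
    - 10.0.10.100-10.0.10.200   # example allocation range for node addresses
```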
Dedicated network for the Kubernetes pods traffic¶
If you want to use a dedicated network for Kubernetes pods traffic, configure each node with an IPv4 address that will be used to route the pods traffic between nodes. To accomplish that, use the npTemplate.bridges.k8s-pods bridge in the L2 template, as demonstrated in the example below.
As defined in Host networking, this bridge name is reserved for the Kubernetes pods network. When the k8s-pods bridge is defined in an L2 template, Calico CNI uses that network for routing the pods traffic between nodes.
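A matching Subnet object for the pods network, referenced as demo-pods in the example below, might look like the following sketch. Unlike the LCM network, this section lists no service label for the pods network, so none is shown here; all values are illustrative:

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-pods
  namespace: managed-ns
spec:
  cidr: 10.0.20.0/24            # example CIDR for pod traffic between nodes
  includeRanges:
    - 10.0.20.10-10.0.20.250    # example range for node addresses on k8s-pods
```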
Dedicated network for the Kubernetes services traffic (MetalLB)¶
You can use a dedicated network for external connection to the Kubernetes services exposed by the cluster. If enabled, MetalLB will listen and respond on the dedicated virtual bridge.
To accomplish that, configure each node where metallb-speaker is deployed with an IPv4 address. Both the MetalLB IP address ranges and the IP addresses configured on those nodes must fit in the same CIDR.
Use the npTemplate.bridges.k8s-ext bridge in the L2 template, as demonstrated in the example below. This bridge name is reserved for the Kubernetes external network. The Subnet object that corresponds to the k8s-ext bridge must explicitly exclude the IP address ranges that are in use by MetalLB.
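As a sketch, the Subnet for the k8s-ext bridge, referenced as demo-ext in the example below, could exclude the MetalLB range like this. The addresses are illustrative assumptions, and the MetalLB address pool itself is configured separately; the excluded range here must match the range given to MetalLB:

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-ext
  namespace: managed-ns
spec:
  cidr: 172.16.50.0/24                 # example CIDR shared with MetalLB
  excludeRanges:
    # Reserved for MetalLB; never assigned to node interfaces
    - 172.16.50.150-172.16.50.200
```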
Dedicated network for the Ceph distributed storage traffic¶
You can configure dedicated networks for the Ceph cluster access and replication traffic. Set labels on the Subnet CRs for the corresponding networks, as described in Create subnets. Container Cloud automatically configures Ceph to use the addresses from these subnets. Ensure that the addresses are assigned to the storage nodes.
Use the npTemplate.bridges.ceph-cluster and npTemplate.bridges.ceph-public bridges in the L2 template, as demonstrated in the example below. These names are reserved for the Ceph cluster access (public) and replication (cluster) networks.
The Subnet objects used to assign IP addresses to these bridges must have the corresponding labels: ipam/SVC-ceph-public for the ceph-public bridge and ipam/SVC-ceph-cluster for the ceph-cluster bridge.
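The two labeled Subnet objects for the Ceph networks, referenced as demo-ceph-public and demo-ceph-cluster in the example below, could be sketched as follows; the CIDRs are illustrative values:

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-ceph-public
  namespace: managed-ns
  labels:
    ipam/SVC-ceph-public: "1"   # Ceph cluster access (public) network
spec:
  cidr: 10.0.40.0/24            # example CIDR
---
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: demo-ceph-cluster
  namespace: managed-ns
  labels:
    ipam/SVC-ceph-cluster: "1"  # Ceph replication (cluster) network
spec:
  cidr: 10.0.41.0/24            # example CIDR
```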
Example of an L2 template with interfaces bonding¶
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: test-managed
  namespace: managed-ns
spec:
  clusterRef: managed-cluster
  autoIfMappingPrio:
    - provision
    - eno
    - ens
    - enp
  l3Layout:
    - subnetName: demo-lcm
      scope: namespace
    - subnetName: demo-pods
      scope: namespace
    - subnetName: demo-ext
      scope: namespace
    - subnetName: demo-ceph-cluster
      scope: namespace
    - subnetName: demo-ceph-public
      scope: namespace
  npTemplate: |
    version: 2
    ethernets:
      {{nic 2}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
      {{nic 3}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 3}}
        set-name: {{nic 3}}
    bonds:
      bond0:
        interfaces:
          - {{nic 2}}
          - {{nic 3}}
        parameters:
          mode: 802.3ad
    vlans:
      k8s-ext-vlan:
        id: 1001
        link: bond0
      k8s-pods-vlan:
        id: 1002
        link: bond0
      stor-frontend:
        id: 1003
        link: bond0
      stor-backend:
        id: 1004
        link: bond0
    bridges:
      k8s-lcm:
        interfaces: [bond0]
        addresses:
          - {{ip "k8s-lcm:demo-lcm"}}
        gateway4: {{gateway_from_subnet "demo-lcm"}}
        nameservers:
          addresses: {{nameservers_from_subnet "demo-lcm"}}
      k8s-ext:
        interfaces: [k8s-ext-vlan]
        addresses:
          - {{ip "k8s-ext:demo-ext"}}
      k8s-pods:
        interfaces: [k8s-pods-vlan]
        addresses:
          - {{ip "k8s-pods:demo-pods"}}
      ceph-cluster:
        interfaces: [stor-backend]
        addresses:
          - {{ip "ceph-cluster:demo-ceph-cluster"}}
      ceph-public:
        interfaces: [stor-frontend]
        addresses:
          - {{ip "ceph-public:demo-ceph-public"}}