Create L2 templates¶
Since Container Cloud 2.9.0, L2 templates have a new format. In the new format, the netplan configuration rendered from the L2 template is used directly during provisioning. Therefore, a hardware node obtains and applies a complete network configuration during the first system boot.

Update any L2 template created before Container Cloud 2.9.0 as described in Release Notes: Switch L2 templates to the new format.
After you create subnets for one or more managed clusters or projects as described in Create subnets or Automate multiple subnet creation using SubnetPool, follow the procedure below to create L2 templates for a managed cluster. This procedure contains exemplary L2 templates for the following use cases:
- L2 template example with bonds and bridges
- L2 template example for automatic multiple subnet creation
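For orientation, the shape of such a template can be sketched as follows; all names, the namespace, and the subnet alias below are illustrative placeholders, not values from the exemplary templates:

```yaml
# Illustrative sketch of an L2Template with a bond and a bridge.
# All names, the namespace, and the subnet alias are placeholders.
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: bond-bridge-example
  namespace: managed-ns
spec:
  clusterRef: managed-cluster
  autoIfMappingPrio:
    - provision
    - eno
    - ens
  l3Layout:
    - subnetName: kaas-mgmt
      scope: namespace
  npTemplate: |
    version: 2
    bonds:
      bond0:
        interfaces: [{{nic 1}}, {{nic 2}}]
        parameters:
          mode: 802.3ad
    bridges:
      k8s-lcm:
        interfaces: [bond0]
        addresses:
          - {{ip "k8s-lcm:kaas-mgmt"}}
        gateway4: {{gateway_from_subnet "kaas-mgmt"}}
```

The nic, ip, and gateway_from_subnet placeholders are lookup functions that are rendered per host, as described later in this section.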
Modification of L2 templates in use is allowed with a mandatory validation step from the Infrastructure Operator to prevent accidental cluster failures due to unsafe changes. The risks posed by modifying L2 templates include:

- Services running on hosts cannot reconfigure automatically to switch to the new IP addresses and/or interfaces.
- Connections between services are interrupted unexpectedly, which can cause data loss.
- Incorrect configurations on hosts can lead to irrecoverable loss of connectivity between services and unexpected cluster partition or disassembly.

For details, see Modify network configuration on an existing machine.
Create an L2 template for a new managed cluster¶
Make sure that you create L2 templates before adding any machines to your new managed cluster.
Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed. The management cluster kubeconfig is created during the last stage of the management cluster bootstrap.
Inspect the existing L2 templates to select the one that fits your deployment:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> \
  get l2template -n <ProjectNameForNewManagedCluster>
Create an L2 YAML template specific to your deployment using one of the exemplary templates:
You can create several L2 templates with different configurations to be applied to different nodes of the same cluster. See Assign L2 templates to machines for details.
Add or edit the mandatory parameters in the new L2 template. The following tables describe the mandatory and the l3Layout section parameters in the example templates mentioned in the previous step.
- clusterRef: The Cluster object name that this template is applied to. The default value is used to apply the given template to all clusters within a particular project, unless an L2 template that references a specific cluster name exists. The clusterRef field has priority over the cluster.sigs.k8s.io/cluster-name label: if clusterRef is set to a non-default value, the cluster.sigs.k8s.io/cluster-name label will be added or updated with that value; if clusterRef is set to default, the cluster.sigs.k8s.io/cluster-name label will be absent or removed.

An L2 template must have the same namespace as the referenced cluster. A cluster can be associated with many L2 templates, but only one of them can have the ipam/DefaultForCluster label. Every L2 template that does not have the ipam/DefaultForCluster label can be later assigned to a particular machine using l2TemplateSelector. A project (Kubernetes namespace) can have only one default L2 template, that is, one template with the ipam/DefaultForCluster label.
- ifMapping: List of interface names for the template. The interface mapping is defined globally for all bare metal hosts in the cluster but can be overridden at the host level, if required, by editing the IpamHost object for a particular host. The ifMapping parameter is mutually exclusive with autoIfMappingPrio.

- autoIfMappingPrio: List of interface name prefixes, such as eno, ens, and so on, used to match the interfaces and automatically create the interface list for the template. If you are not aware of any specific ordering of interfaces on the nodes, use the default ordering from the Predictable Network Interface Names specification for systemd. You can also override the default NIC list per host using the IfMappingOverride parameter of the corresponding IpamHost object. The provision value corresponds to the network interface that was used to provision a node. Usually, it is the first NIC found on a particular node. It is defined explicitly to ensure that this interface will not be reconfigured accidentally. The autoIfMappingPrio parameter is mutually exclusive with ifMapping.
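As an illustration, the prefix list can be expressed as follows; the prefix values are examples only and must match your hardware:

```yaml
# Illustrative fragment of an L2Template spec.
# Keeping provision first ensures the provisioning NIC is matched first
# and is never reconfigured accidentally.
autoIfMappingPrio:
  - provision
  - eno
  - ens
  - enp
```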
- l3Layout: Subnets to be used in the npTemplate section. The field contains a list of subnet definitions with parameters used by template macros.
- subnetName: Defines the alias name of the subnet that can be used to reference this subnet from the template macros. This parameter is mandatory for every entry in the l3Layout section.
- subnetPool: Optional. Default: none. Defines the name of the parent SubnetPool object that will be used to create a Subnet object with the given subnetName and scope. If a corresponding Subnet object already exists, nothing will be created and the existing object will be used. If no SubnetPool is provided, no new Subnet object will be created.
- scope: Logical scope of the Subnet object with a corresponding subnetName. Possible values:
  - global: The Subnet object is accessible globally, for any Container Cloud project and cluster in the region, for example, the PXE subnet.
  - namespace: The Subnet object is accessible within the same project and region where the L2 template is defined.
  - cluster: The Subnet object is only accessible to the cluster that L2Template.spec.clusterRef refers to. The Subnet objects with the cluster scope will be created for every new cluster.
- labelSelector: Contains a dictionary of labels and their respective values that will be used to find the matching Subnet object for the subnet. If the labelSelector field is omitted, the Subnet object will be selected by name, as specified by the subnetName parameter.

The l3Layout section is mandatory for each L2Template custom resource.
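Put together, an l3Layout section might look like the following sketch; the subnet names and labels are placeholders:

```yaml
# Illustrative l3Layout fragment; subnet names and labels are placeholders.
l3Layout:
  - subnetName: kaas-mgmt          # alias referenced by template macros
    scope: namespace               # visible within the project of this template
  - subnetName: storage-backend
    scope: cluster                 # a per-cluster Subnet object is created
    labelSelector:
      purpose: storage             # match an existing Subnet by labels, not name
```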
- npTemplate: A netplan-compatible configuration with special lookup functions that defines the networking settings for the cluster hosts, where physical NIC names and details are parameterized. This configuration will be processed using Go templates. Instead of specifying IP and MAC addresses, interface names, and other network details specific to a particular host, the template supports the use of special lookup functions. These lookup functions, such as nic, mac, ip, and so on, return host-specific network information when the template is rendered for a particular host.
All rules and restrictions of the netplan configuration also apply to L2 templates. For details, see the official netplan documentation.
We strongly recommend following the conventions below on network interface naming:

- A physical NIC name set by an L2 template must not exceed 15 symbols. Otherwise, L2 template creation fails. This limit is set by the Linux kernel.
- Names of virtual network interfaces such as VLANs, bridges, bonds, veth, and so on must not exceed 15 symbols.
- We recommend setting interface names that do not exceed 13 symbols for both physical and virtual interfaces to avoid corner cases and issues in netplan rendering.
The following table describes the main lookup functions for an L2 template.
- {{nic N}}: Name of a NIC number N. NIC numbers correspond to the interface mapping list. This macro can be used as a key for the elements of the ethernets map, or as the value of the name and set-name parameters of a NIC. It is also used to reference the physical NIC from definitions of virtual interfaces (VLANs, bridges, and bonds).
- {{mac N}}: MAC address of a NIC number N registered during a host hardware inspection.
- {{ip "N:subnetName"}}: IP address and mask for a NIC number N. The address will be auto-allocated from the given subnet if the address does not exist yet.
- {{ip "br0:subnetName"}}: IP address and mask for a virtual interface, "br0" in this example. The address will be auto-allocated from the given subnet if the address does not exist yet. For virtual interface names, an IP address placeholder must contain a human-readable ID that is unique within the L2 template and must have the following format: {{ip "<shortUniqueHumanReadableID>:<subnetNameFromL3Layout>"}}. The <shortUniqueHumanReadableID> is made equal to a virtual interface name throughout this document and Container Cloud bootstrap templates.
- {{cidr_from_subnet "subnetName"}}: IPv4 CIDR address from the given subnet.
- {{gateway_from_subnet "subnetName"}}: IPv4 default gateway address from the given subnet.
- {{nameservers_from_subnet "subnetName"}}: List of the IP addresses of name servers from the given subnet.
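The macros above can be combined in an npTemplate as in the following sketch; the subnet alias kaas-mgmt and the bridge name k8s-lcm are placeholders:

```yaml
# Illustrative npTemplate fragment; the subnet alias and the bridge name
# are placeholders.
npTemplate: |
  version: 2
  ethernets:
    {{nic 0}}:
      match:
        macaddress: {{mac 0}}
      set-name: {{nic 0}}
      dhcp4: false
  bridges:
    k8s-lcm:
      interfaces: [{{nic 0}}]
      addresses:
        - {{ip "k8s-lcm:kaas-mgmt"}}
      gateway4: {{gateway_from_subnet "kaas-mgmt"}}
      nameservers:
        addresses: {{nameservers_from_subnet "kaas-mgmt"}}
```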
Every subnet referenced in an L2 template can have either a global or namespaced scope. In the latter case, the subnet must exist in the same project where the corresponding cluster and L2 template are located.
Optional. To designate an L2 template as default, assign the ipam/DefaultForCluster label to it. Only one L2 template in a cluster can have this label. It will be used for machines that do not have an L2 template explicitly assigned to them. Also, set the clusterRef parameter in the L2 template spec to assign the default template to the cluster.
Optional. Add the l2template-<NAME>: "exists" label to the L2 template. Replace <NAME> with the unique L2 template name or any other unique identifier. You can refer to this label to assign this L2 template when you create machines.
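Combining the two optional steps above, the metadata of a default, selectable L2 template could look like this sketch; the names and the namespace are placeholders:

```yaml
# Illustrative L2Template metadata; names and namespace are placeholders.
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: bond-bridge-example
  namespace: managed-ns
  labels:
    ipam/DefaultForCluster: "1"                # default template for the cluster
    l2template-bond-bridge-example: "exists"   # referenced when creating machines
spec:
  clusterRef: managed-cluster
```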
Add the L2 template to your management cluster:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> apply -f <pathToL2TemplateYamlFile>
Proceed with Add a machine. The resulting L2 template will be used to render the netplan configuration for the managed cluster machines.
Workflow of the netplan configuration using an L2 template¶
The kaas-ipam service uses the data from BareMetalHost, the L2 template, and subnets to generate the netplan configuration for every cluster machine. The generated netplan configuration is saved in the status.netconfigFiles section of the IpamHost resource. If the status.netconfigFilesState field of the IpamHost resource is OK, the configuration was rendered in the IpamHost resource successfully. Otherwise, the status contains an error message.
Several fields of the IpamHost status were renamed in Container Cloud 2.22.0. No user actions are required after the renaming.
The format of netconfigFilesState changed after the renaming. The netconfigFilesStates field contains a dictionary of statuses of the network configuration files stored in netconfigFiles. The dictionary keys are file paths, and the values have the same meaning for each file:

- For a successfully rendered configuration file: OK: <timestamp> <sha256-hash-of-rendered-file>, where the timestamp is in the RFC 3339 format.
- For a failed rendering: the value contains an error message.
The baremetal-provider service copies the rendered netplan configuration data from the IpamHost resource to the corresponding LCMMachine object. The lcm-agent service on every host synchronizes the LCMMachine data to its host and runs a playbook that updates the netplan configuration on the host.