Create L2 templates
Since Container Cloud 2.9.0, L2 templates have a new format. In the new format, the netplan configuration rendered from an L2 template is used directly during provisioning. Therefore, a hardware node obtains and applies a complete network configuration during the first system boot.
Update any L2 template created before Container Cloud 2.9.0 as described in Release Notes: Switch L2 templates to the new format.
After you create subnets for one or more managed clusters or projects as described in Create subnets or Automate multiple subnet creation using SubnetPool, follow the procedure below to create L2 templates for a managed cluster. This procedure contains exemplary L2 templates for the following use cases:
- L2 template example with bonds and bridges
- L2 template example for automatic multiple subnet creation
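As a reference point, a minimal L2Template object for a simple case (one NIC, addresses allocated from a single subnet) might look as follows. This is a sketch, not a template from this procedure: the object names, the project namespace, the cluster name, and the subnet name lcm-subnet are placeholders.

```yaml
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: l2template-simple          # placeholder name
  namespace: managed-ns            # project of the managed cluster
  labels:
    ipam/DefaultForCluster: "1"    # marks the default template for the cluster
spec:
  clusterRef: managed-cluster      # Cluster object this template applies to
  autoIfMappingPrio:
    - provision                    # keep the provisioning NIC first
    - eno
    - ens
  l3Layout:
    - subnetName: lcm-subnet       # hypothetical Subnet name
      scope: namespace
  npTemplate: |
    version: 2
    ethernets:
      {{nic 0}}:
        dhcp4: false
        dhcp6: false
        addresses:
          - {{ip "0:lcm-subnet"}}
        gateway4: {{gateway_from_subnet "lcm-subnet"}}
        nameservers:
          addresses: {{nameservers_from_subnet "lcm-subnet"}}
        match:
          macaddress: {{mac 0}}
        set-name: {{nic 0}}
```

The parameters used here are described in the tables below.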
To create an L2 template for a new managed cluster:
Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed. The management cluster kubeconfig is created during the last stage of the management cluster bootstrap.
Inspect the existing L2 templates to select the one that fits your deployment:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> get l2template -n <ProjectNameForNewManagedCluster>
Create an L2 YAML template specific to your deployment using one of the exemplary templates.
You can create several L2 templates with different configurations to be applied to different nodes of the same cluster. See Assign L2 templates to machines for details.
Add or edit the mandatory parameters in the new L2 template. The following tables describe the mandatory parameters and the l3Layout section parameters used in the example templates mentioned in the previous step.
clusterRef: References the Cluster object that this template is applied to. The default value is used to apply the given template to all clusters within a particular project, unless an L2 template that references a specific cluster name exists. An L2 template must have the same namespace as the referenced cluster. A cluster can be associated with many L2 templates, but only one of them can have the ipam/DefaultForCluster label. Every L2 template that does not have the ipam/DefaultForCluster label can be later assigned to a particular machine, as described in Assign L2 templates to machines. A project (Kubernetes namespace) can have only one default L2 template.
ifMapping: A list of interface names for the template. The interface mapping is defined globally for all bare metal hosts in the cluster but can be overridden at the host level, if required, by editing the IpamHost object for a particular host. The ifMapping parameter is mutually exclusive with autoIfMappingPrio.
autoIfMappingPrio: A list of interface name prefixes, such as ens, and so on, used to match the interfaces and automatically build the interface list for the template. If you are not aware of any specific ordering of interfaces on the nodes, use the default ordering from the Predictable Network Interface Names specification for systemd. You can also override the default NIC list per host using the IfMappingOverride parameter of the corresponding IpamHost object. The provision value corresponds to the network interface that was used to provision a node. Usually, it is the first NIC found on a particular node. It is defined explicitly to ensure that this interface will not be reconfigured accidentally. The autoIfMappingPrio parameter is mutually exclusive with ifMapping.
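Sketched as spec fragments, the two mutually exclusive interface-mapping options might look as follows; the interface names and prefixes here are illustrative only:

```yaml
# Option 1: explicit mapping, applied to all hosts unless overridden
# through IfMappingOverride in a particular IpamHost object
ifMapping:
  - eno1
  - eno2
# Option 2: prefix-based automatic ordering; the "provision" value
# pins the NIC that was used to provision the node to the first slot
autoIfMappingPrio:
  - provision
  - eno
  - ens
```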
l3Layout: Subnets to be used in the npTemplate section. The l3Layout section is mandatory for each L2Template custom resource (CR). For more details about L2Template, see L2Template API.
npTemplate: A netplan-compatible configuration with special lookup functions that defines the networking settings for the cluster hosts, where physical NIC names and details are parameterized. This configuration is processed using Go templates. Instead of specifying IP and MAC addresses, interface names, and other network details specific to a particular host, the template supports the use of special lookup functions, such as ip, that return host-specific network information when the template is rendered for a particular host. All rules and restrictions of the netplan configuration also apply to L2 templates. For details, see the official netplan documentation. For more details about the L2Template custom resource (CR), see the L2Template API section.
subnetName: Name of the Subnet object that will be used in the npTemplate section to allocate IP addresses from. All Subnet names must be unique across a single L2 template.
subnetPool: Optional. Default: none. Name of the parent SubnetPool object that will be used to create a Subnet object with a given scope. If a corresponding Subnet object already exists, nothing will be created and the existing object will be used. If no SubnetPool is provided, no new Subnet object will be created.
scope: Logical scope of the Subnet object with a corresponding subnetName. Possible values:
- global: The Subnet object is accessible globally, for any Container Cloud project and cluster in the region, for example, the PXE subnet.
- namespace: The Subnet object is accessible within the same project and region where the L2 template is defined.
- cluster: The Subnet object is only accessible to the cluster that L2Template.spec.clusterRef refers to. The Subnet objects with the cluster scope will be created for every new cluster.
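A hypothetical l3Layout section combining the parameters above could be sketched as follows; the subnet and pool names are placeholders, not values from this procedure:

```yaml
l3Layout:
  - subnetName: lcm-subnet        # existing Subnet in the same project
    scope: namespace
  - subnetName: storage-subnet    # created from the pool if it does not exist yet
    subnetPool: storage-pool
    scope: cluster
```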
The following table describes the main lookup functions for an L2 template.
- {{nic N}}: Name of a NIC number N. NIC numbers correspond to the interface mapping list.
- {{mac N}}: MAC address of a NIC number N registered during a host hardware inspection.
- {{ip "N:subnetName"}}: IP address and mask for a NIC number N. The address will be auto-allocated from the given subnet if the address does not exist yet.
- {{ip "br0:subnetName"}}: IP address and mask for a virtual interface, br0 in this example. The address will be auto-allocated from the given subnet if the address does not exist yet.
- {{gateway_from_subnet "subnetName"}}: IPv4 default gateway address from the given subnet.
- {{nameservers_from_subnet "subnetName"}}: List of the IP addresses of name servers from the given subnet.
Every subnet referenced in an L2 template can have either a global or namespaced scope. In the latter case, the subnet must exist in the same project where the corresponding cluster and L2 template are located.
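To illustrate the lookup functions in context, a fragment of an npTemplate that places NIC 0 under a bridge could be sketched as follows; the subnet name lcm-subnet is a placeholder:

```yaml
npTemplate: |
  version: 2
  ethernets:
    {{nic 0}}:
      dhcp4: false
      dhcp6: false
      match:
        macaddress: {{mac 0}}     # resolved from the host hardware inspection
      set-name: {{nic 0}}
  bridges:
    br0:
      interfaces:
        - {{nic 0}}
      addresses:
        - {{ip "br0:lcm-subnet"}} # auto-allocated from the referenced subnet
      gateway4: {{gateway_from_subnet "lcm-subnet"}}
```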
Add the L2 template to your management cluster:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> apply -f <pathToL2TemplateYamlFile>
Optional. Further modify the template:
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <ProjectNameForNewManagedCluster> edit l2template <L2templateName>
Proceed with Add a machine. The resulting L2 template will be used to render the netplan configuration for the managed cluster machines.
The workflow of the netplan configuration using an L2 template is as follows:
- The kaas-ipam service uses the data from BareMetalHost, the L2 template, and subnets to generate the netplan configuration for every cluster machine.
- The generated netplan configuration is saved in the status.netconfigV2 section of the IpamHost resource. If the status.l2RenderResult field of the IpamHost resource contains OK, the configuration was rendered successfully. Otherwise, the field contains an error message.
- The baremetal-provider service copies the rendered netplan configuration from the IpamHost resource to the corresponding LCMMachine object.
- The lcm-agent service on every host synchronizes the LCMMachine data to its host. The lcm-agent service runs a playbook that updates the netplan configuration on the host.