Create L2 templates


Since Container Cloud 2.9.0, L2 templates have a new format. In the new format, the l2template:status:npTemplate value is used directly during provisioning, so a hardware node obtains and applies a complete network configuration during its first system boot.

Update any L2 template created before Container Cloud 2.9.0 as described in Release Notes: Switch L2 templates to the new format.

After you create subnets for one or more managed clusters or projects as described in Create subnets or Automate multiple subnet creation using SubnetPool, follow the procedure below to create L2 templates for a managed cluster. This procedure contains exemplary L2 templates for the following use cases:


Modification of L2 templates that are already in use is allowed, with a mandatory validation step performed by the Infrastructure Operator, to prevent accidental cluster failures caused by unsafe changes. The risks of modifying L2 templates include:

  • Services running on hosts cannot automatically reconfigure to switch to new IP addresses or interfaces.

  • Connections between services are interrupted unexpectedly, which can cause data loss.

  • Incorrect configurations on hosts can lead to irrevocable loss of connectivity between services and unexpected cluster partition or disassembly.

For details, see Modify network configuration on an existing machine.

Create an L2 template for a new managed cluster


Make sure that you create L2 templates before adding any machines to your new managed cluster.

  1. Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed.


    The management cluster kubeconfig is created during the last stage of the management cluster bootstrap.

  2. Inspect the existing L2 templates to select the one that fits your deployment:

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> \
    get l2template -n <ProjectNameForNewManagedCluster>
  3. Create an L2 YAML template specific to your deployment using one of the exemplary templates.


    You can create several L2 templates with different configurations to be applied to different nodes of the same cluster. See Assign L2 templates to machines for details.

  4. Add or edit the mandatory parameters in the new L2 template. The following tables provide the description of the mandatory parameters in the example templates mentioned in the previous step.

    L2 template mandatory parameters

    clusterRef

    Deprecated since Container Cloud 2.25.0 in favor of the mandatory label. Will be removed in one of the following releases.

    On existing clusters, this parameter is automatically migrated to the label since 2.25.0.

    If an existing cluster has clusterRef: default set, the migration process involves removing this parameter. Subsequently, it is not substituted with the label, ensuring the application of the L2 template across the entire Kubernetes namespace.

    The Cluster object name that this template is applied to. The default value is used to apply the given template to all clusters within a particular project, unless an L2 template that references a specific cluster name exists. The clusterRef field has priority over the label:

    • When clusterRef is set to a non-default value, the label will be added or updated with that value.

    • When clusterRef is set to default, the label will be absent or removed.
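
    As a sketch of how the cluster binding looks in an L2 template since Container Cloud 2.25.0, assuming the mandatory label key is `cluster.sigs.k8s.io/cluster-name` (verify the exact key against your release documentation):

    ```yaml
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: L2Template
    metadata:
      name: example-l2template
      namespace: managed-ns          # must match the project of the referenced cluster
      labels:
        # Assumed label key; since 2.25.0 it replaces the deprecated spec.clusterRef.
        cluster.sigs.k8s.io/cluster-name: managed-cluster
    ```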

    L2 template requirements

    • An L2 template must have the same project (Kubernetes namespace) as the referenced cluster.

    • A cluster can be associated with many L2 templates. Only one of them can have the ipam/DefaultForCluster label. Every L2 template that does not have the ipam/DefaultForCluster label can be later assigned to a particular machine using l2TemplateSelector.

    • The following rules apply to the default L2 template of a namespace:

      • Since Container Cloud 2.25.0, creation of the default L2 template for a namespace is disabled. On existing clusters, the Spec.clusterRef: default parameter of such an L2 template is automatically removed during the migration process. Subsequently, this parameter is not substituted with the label, ensuring the application of the L2 template across the entire Kubernetes namespace. Therefore, you can continue using existing default namespaced L2 templates.

      • Before Container Cloud 2.25.0, the default L2Template object of a namespace must have the Spec.clusterRef: default parameter that is deprecated since 2.25.0.

    ifMapping or autoIfMappingPrio

    • ifMapping

      List of interface names for the template. The interface mapping is defined globally for all bare metal hosts in the cluster but can be overridden at the host level, if required, by editing the IpamHost object for a particular host. The ifMapping parameter is mutually exclusive with autoIfMappingPrio.

    • autoIfMappingPrio

      autoIfMappingPrio is a list of prefixes, such as eno, ens, and so on, to match the interfaces to automatically create a list for the template. If you are not aware of any specific ordering of interfaces on the nodes, use the default ordering from Predictable Network Interfaces Names specification for systemd. You can also override the default NIC list per host using the IfMappingOverride parameter of the corresponding IpamHost. The provision value corresponds to the network interface that was used to provision a node. Usually, it is the first NIC found on a particular node. It is defined explicitly to ensure that this interface will not be reconfigured accidentally.

      The autoIfMappingPrio parameter is mutually exclusive with ifMapping.
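
      A minimal illustration of the two mutually exclusive options described above; the interface names and prefixes are examples only:

      ```yaml
      spec:
        # Option 1: explicit, ordered interface list for the whole cluster.
        ifMapping:
          - enp9s0f0
          - enp9s0f1
        # Option 2 (use instead of ifMapping): match interfaces by name prefix.
        # "provision" stands for the NIC that was used to provision the node.
        # autoIfMappingPrio:
        #   - provision
        #   - eno
        #   - ens
        #   - enp
      ```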


    l3Layout

    Subnets to be used in the npTemplate section. The field contains a list of subnet definitions with parameters used by template macros.

    • subnetName

      Defines the alias name of the subnet that can be used to reference this subnet from the template macros. This parameter is mandatory for every entry in the l3Layout list.

    • subnetPool

      Optional. Default: none. Defines a name of the parent SubnetPool object that will be used to create a Subnet object with a given subnetName and scope.

      If a corresponding Subnet object already exists, nothing will be created and the existing object will be used. If no SubnetPool is provided, no new Subnet object will be created.

    • scope

      Logical scope of the Subnet object with a corresponding subnetName. Possible values:

      • global - the Subnet object is accessible globally, for any Container Cloud project and cluster, for example, the PXE subnet.

      • namespace - the Subnet object is accessible within the same project where the L2 template is defined.

      • cluster - the Subnet object is only accessible to the cluster that L2Template.spec.clusterRef refers to. The Subnet objects with the cluster scope will be created for every new cluster.

    • labelSelector

      Contains a dictionary of labels and their respective values that will be used to find the matching Subnet object for the subnet. If the labelSelector field is omitted, the Subnet object will be selected by name, specified by the subnetName parameter.


    The l3Layout section is mandatory for each L2Template custom resource.
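
    An illustrative l3Layout fragment; the subnet aliases and the label keys are examples, not fixed names:

    ```yaml
    spec:
      l3Layout:
        - subnetName: lcm-nw          # alias referenced by template macros, for example {{ip "0:lcm-nw"}}
          scope: namespace
        - subnetName: storage-nw
          scope: cluster
          labelSelector:              # optional: match the Subnet object by labels instead of name
            user-defined/purpose: storage
    ```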


    npTemplate

    A netplan-compatible configuration with special lookup functions that defines the networking settings for the cluster hosts, where physical NIC names and details are parameterized. This configuration will be processed using Go templates. Instead of specifying IP and MAC addresses, interface names, and other network details specific to a particular host, the template supports use of special lookup functions. These lookup functions, such as nic, mac, ip, and so on, return host-specific network information when the template is rendered for a particular host.


    All rules and restrictions of the netplan configuration also apply to L2 templates. For details, see the official netplan documentation.


    We strongly recommend following the conventions below on network interface naming:

    • A physical NIC name set by an L2 template must not exceed 15 characters. Otherwise, L2 template creation fails. This limit is set by the Linux kernel.

    • Names of virtual network interfaces such as VLANs, bridges, bonds, veth, and so on must not exceed 15 characters.

    We recommend using interface names of up to 13 characters for both physical and virtual interfaces to avoid corner cases and issues in netplan rendering.

    The following table describes the main lookup functions for an L2 template.

    Lookup function and description


    {{nic N}}

    Name of a NIC number N. NIC numbers correspond to the interface mapping list. This macro can be used as a key for the elements of the ethernets map, or as the value of the name and set-name parameters of a NIC. It is also used to reference the physical NIC from definitions of virtual interfaces (vlan, bridge).

    {{mac N}}

    MAC address of a NIC number N registered during a host hardware inspection.

    {{ip "N:subnet-a"}}

    IP address and mask for a NIC number N. The address will be auto-allocated from the given subnet if the address does not exist yet.

    {{ip "br0:subnet-x"}}

    IP address and mask for a virtual interface, "br0" in this example. The address will be auto-allocated from the given subnet if the address does not exist yet.

    For virtual interface names, an IP address placeholder must contain a human-readable ID that is unique within the L2 template and must have the following format:

    {{ip "<shortUniqueHumanReadableID>:<subnetNameFromL3Layout>"}}

    The <shortUniqueHumanReadableID> is made equal to a virtual interface name throughout this document and Container Cloud bootstrap templates.

    {{cidr_from_subnet "subnet-a"}}

    IPv4 CIDR address from the given subnet.

    {{gateway_from_subnet "subnet-a"}}

    IPv4 default gateway address from the given subnet.

    {{nameservers_from_subnet "subnet-a"}}

    List of the IP addresses of name servers from the given subnet.
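
    Putting the lookup functions together, a fragment of an npTemplate might look as follows; the `lcm-nw` subnet alias is assumed to be defined in the l3Layout section:

    ```yaml
    spec:
      npTemplate: |
        version: 2
        ethernets:
          {{nic 0}}:
            dhcp4: false
            dhcp6: false
            match:
              macaddress: {{mac 0}}
            set-name: {{nic 0}}
            addresses:
              - {{ip "0:lcm-nw"}}
            gateway4: {{gateway_from_subnet "lcm-nw"}}
            nameservers:
              addresses: {{nameservers_from_subnet "lcm-nw"}}
    ```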


    Technology Preview since Container Cloud 2.24.4. IP address for a cluster API load balancer.


    Every subnet referenced in an L2 template can have either a global or namespaced scope. In the latter case, the subnet must exist in the same project where the corresponding cluster and L2 template are located.

  5. Optional. To designate an L2 template as default, assign the ipam/DefaultForCluster label to it. Only one L2 template in a cluster can have this label. It will be used for machines that do not have an L2 template explicitly assigned to them.

    To assign the default template to the cluster:

    • Since Container Cloud 2.25.0, use the mandatory label in the L2 template metadata section.

    • Before Container Cloud 2.25.0, use the label or the clusterRef parameter in the L2 template spec section. This parameter is deprecated and will be removed in one of the following releases. During cluster update to 2.25.0, this parameter is automatically migrated to the label.

  6. Optional. Add the l2template-<NAME>: "exists" label to the L2 template. Replace <NAME> with the unique L2 template name or any other unique identifier. You can refer to this label to assign this L2 template when you create machines.
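
    For example, combining both labels from the steps above in the L2 template metadata; the names are illustrative, and the `ipam/DefaultForCluster` value `"1"` is an assumption to verify against your release documentation:

    ```yaml
    metadata:
      name: storage-l2template
      labels:
        ipam/DefaultForCluster: "1"        # marks this template as the cluster default
        l2template-storage: "exists"       # referenced via l2TemplateSelector at machine creation
    ```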

  7. Add the L2 template to your management cluster. Select one of the following options:

    • Using the CLI:

      kubectl --kubeconfig <pathToManagementClusterKubeconfig> apply -f <pathToL2TemplateYamlFile>

    • Using the Container Cloud web UI. Available since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0):

    1. Log in to the Container Cloud web UI with the operator permissions.

    2. Switch to the required non-default project using the Switch Project action icon located on top of the main left-side navigation panel.

      To create a project, refer to Create a project for managed clusters.

    3. In the left sidebar, navigate to Networks and click the L2 Templates tab.

    4. Click Create L2 Template.

    5. Fill out the Create L2 Template form as required:

      • Name

        L2 template name.

      • Cluster

        Cluster name that the L2 template is being added for. To set the L2 template as default for all machines, also select Set default for the cluster.

      • YAML file

        L2 template file in the YAML format that you have previously created. Click Upload to select the required file for uploading.

  8. Proceed with Add a machine. The resulting L2 template will be used to render the netplan configuration for the managed cluster machines.

Workflow of the netplan configuration using an L2 template

  1. The kaas-ipam service uses the data from BareMetalHost, the L2 template, and subnets to generate the netplan configuration for every cluster machine.

  2. The generated netplan configuration is saved in the status.netconfigFiles section of the IpamHost resource. If the status.netconfigFilesState field of the IpamHost resource is OK, the configuration was rendered in the IpamHost resource successfully. Otherwise, the status contains an error message.


    The following fields of the IpamHost status were renamed in Container Cloud 2.22.0 in the scope of the L2Template and IpamHost objects refactoring:

    • netconfigV2 to netconfigCandidate

    • netconfigV2state to netconfigCandidateState

    • netconfigFilesState to netconfigFilesStates (per file)

    No user actions are required after renaming.

    The format of netconfigFilesState changed after the renaming. The netconfigFilesStates field contains a dictionary of statuses of the network configuration files stored in netconfigFiles. The dictionary keys are file paths, and each value has, per file, the same meaning that netconfigFilesState previously had:

    • For a successfully rendered configuration file: OK: <timestamp> <sha256-hash-of-rendered-file>, where a timestamp is in the RFC 3339 format.

    • For a failed rendering: ERR: <error-message>.
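
    As an illustration, a rendered IpamHost status might contain entries similar to the following; the file paths, timestamp, and error message are made up:

    ```yaml
    status:
      netconfigFilesStates:
        /etc/netplan/60-kaas-lcm-netplan.yaml: 'OK: 2024-06-01T12:00:00Z <sha256-hash-of-rendered-file>'
        /etc/netplan/70-storage-netplan.yaml: 'ERR: subnet "storage-nw" not found'
    ```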

  3. The baremetal-provider service copies data from the status.netconfigFiles of IpamHost to the Spec.StateItemsOverwrites['deploy']['bm_ipam_netconfigv2'] parameter of LCMMachine.

  4. The lcm-agent service on every host synchronizes the LCMMachine data to its host. The lcm-agent service runs a playbook to update the netplan configuration on the host during the pre-download and deploy phases.