Create subnets for a managed cluster using CLI

After creating the MetalLB configuration as described in Configure and verify MetalLB and before creating an L2 template, create the required subnets to use in the L2 template to allocate IP addresses for the managed cluster nodes.

Prerequisites for a multi-rack cluster

Create subnets using CLI

  1. Create a cluster using one of the following options:

  2. Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed.

    Note

    The management cluster kubeconfig is created during the last stage of the management cluster bootstrap.
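
    For example, to confirm that kubectl can reach the management cluster before you proceed, run a basic read-only command (a minimal check; the kubeconfig path is a placeholder to substitute):

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> get nodes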

  3. Create the subnet.yaml file with the required global or namespaced subnets, depending on the configuration of your cluster, and apply it to the management cluster:

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> apply -f <SubnetFileName.yaml>
    

    Note

    In the command above and in the steps below, substitute the parameters enclosed in angle brackets with the corresponding values.

    Example of a subnet.yaml file:

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: demo
      namespace: demo-namespace
      labels:
        kaas.mirantis.com/provider: baremetal
    spec:
      cidr: 10.11.0.0/24
      gateway: 10.11.0.9
      includeRanges:
      - 10.11.0.5-10.11.0.70
      nameservers:
      - 172.18.176.6
    

    Note

    The kaas.mirantis.com/region label is removed from all MOSK objects in 24.1. Therefore, do not add the label starting with this release. On existing clusters updated to this release, or if added manually, MOSK ignores this label.

    Specification fields of the Subnet object:

    • cidr (singular) - A valid IPv4 CIDR, for example, 10.11.0.0/24.

    • includeRanges (list) - A comma-separated list of IP address ranges within the given CIDR that should be used in the allocation of IPs for nodes. The gateway, network, broadcast, and DNS addresses are excluded (protected) automatically if they intersect with one of the ranges. The IPs outside the given ranges are not used in the allocation. Each element of the list can be either an interval, for example, 10.11.0.5-10.11.0.70, or a single address, for example, 10.11.0.77.

      Warning

      Do not use values that are out of the given CIDR.

    • excludeRanges (list) - A comma-separated list of IP address ranges within the given CIDR that should not be used in the allocation of IPs for nodes. The IPs within the given CIDR but outside the given ranges are used in the allocation. The gateway, network, broadcast, and DNS addresses are excluded (protected) automatically if they are included in the CIDR. Each element of the list can be either an interval, for example, 10.11.0.5-10.11.0.70, or a single address, for example, 10.11.0.77.

      Warning

      Do not use values that are out of the given CIDR.

    • useWholeCidr (boolean) - If set to true, the subnet address (10.11.0.0 in the example above) and the broadcast address (10.11.0.255 in the example above) are included into the address allocation for nodes. Otherwise (false by default), the subnet address and the broadcast address are excluded from the address allocation.

    • gateway (singular) - A valid gateway address, for example, 10.11.0.9.

    • nameservers (list) - A list of IP addresses of name servers. Each element of the list is a single address, for example, 172.18.176.6.

    Configuration rules:

    • The subnet for the LCM network must contain the ipam/SVC-k8s-lcm: "1" label. For details, see Service labels and their life cycle.

    • Each cluster must use at least one subnet for its LCM network. Every node must have an address allocated in the LCM network from such subnet(s).

    • Each node of every cluster must have one and only one IP address in the LCM network that is allocated from one of the Subnet objects having the ipam/SVC-k8s-lcm label defined. Therefore, all Subnet objects used for LCM networks must have the ipam/SVC-k8s-lcm label defined.

    • You can use any interface name for the LCM network traffic. The Subnet objects for the LCM network must have the ipam/SVC-k8s-lcm label. For details, see Service labels and their life cycle.

    Note

    You may use different subnets to allocate IP addresses to different Container Cloud components in your cluster. Add a label with the ipam/SVC- prefix to each subnet that is used to configure a Container Cloud service. For details, see Service labels and their life cycle and the optional steps below.
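
    For example, after you create such subnets, you can list the ones that carry a particular service label using a standard kubectl label selector (an illustrative check; the label and its value follow the examples in this section):

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> get subnets -A -l ipam/SVC-k8s-lcm=1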

  4. Configure DHCP relay agents on the edges of the broadcast domains in the provisioning network, as needed.

    Define the IP address ranges that you want to allocate to the hosts over DHCP for discovery and inspection, and create subnets using these IP parameters. Specify the IP address of your DHCP relay as the default gateway in the corresponding Subnet object.

    Caution

    Support of multiple DHCP ranges has the following limitations:

    • Using custom DNS server addresses for servers that boot over PXE is not supported.

    • The Subnet objects for DHCP ranges cannot be associated with any specific cluster, as the DHCP server configuration is only applicable to the management cluster where the DHCP server is running. The cluster.sigs.k8s.io/cluster-name label will be ignored.

    Configuration examples:

    Single-rack cluster
    apiVersion: "ipam.mirantis.com/v1alpha1"
    kind: Subnet
    metadata:
      name: mgmt-dhcp
      namespace: default
      labels:
        kaas.mirantis.com/provider: baremetal
        ipam/SVC-dhcp-range: "presents"
    spec:
      cidr: 10.20.10.0/24
      includeRanges:
        - 10.20.10.10-10.20.10.20
    
    Multi-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-1-dhcp
      namespace: default
      labels:
        ipam/SVC-dhcp-range: "1"
        kaas.mirantis.com/provider: baremetal
    spec:
      cidr: 10.20.101.0/24
      gateway: 10.20.101.1
      includeRanges:
        - 10.20.101.16-10.20.101.127
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-2-dhcp
      namespace: default
      labels:
        ipam/SVC-dhcp-range: "1"
        kaas.mirantis.com/provider: baremetal
    spec:
      cidr: 10.20.102.0/24
      gateway: 10.20.102.1
      includeRanges:
        - 10.20.102.16-10.20.102.127
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-3-dhcp
      namespace: default
      labels:
        ipam/SVC-dhcp-range: "1"
        kaas.mirantis.com/provider: baremetal
    spec:
      cidr: 10.20.103.0/24
      gateway: 10.20.103.1
      includeRanges:
        - 10.20.103.16-10.20.103.127
    ---
    # Add more Subnet object templates as required using the above example
    # (one subnet per rack)
    
  5. Optional. Add subnets for configuring multiple DHCP ranges. For details, see Configure multiple DHCP address ranges.

  6. Add one or more subnets for the LCM network:

    • Set the ipam/SVC-k8s-lcm label with the value "1" to create a subnet that will be used to assign IP addresses in the LCM network.

    • Optional. Set the cluster.sigs.k8s.io/cluster-name label to the name of the target cluster during the subnet creation.

    • Use this subnet in the L2 template for cluster nodes.

    • Using the L2 template, assign this subnet to the interface connected to your LCM network.

    Precautions for the LCM network usage

    • Each cluster must use at least one subnet for its LCM network. Every node must have an address allocated in the LCM network from such subnet(s).

    • Each node of every cluster must have one and only one IP address in the LCM network that is allocated from one of the Subnet objects having the ipam/SVC-k8s-lcm label defined. Therefore, all Subnet objects used for LCM networks must have the ipam/SVC-k8s-lcm label defined.

    • You can use any interface name for the LCM network traffic. The Subnet objects for the LCM network must have the ipam/SVC-k8s-lcm label. For details, see Service labels and their life cycle.

    Configuration examples:

    Single-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
        ipam/SVC-k8s-lcm: "1"
      name: lcm-nw
      namespace: <MOSKClusterNamespace>
    spec:
      cidr: 172.16.43.0/24
      gateway: 172.16.43.1
      includeRanges:
      - 172.16.43.10-172.16.43.100
      nameservers:
        - 8.8.8.8
    
    Multi-rack cluster
    Example mosk-racks-lcm-subnets.yaml

    Note

    Subnet labels such as rack-x-lcm, rack-api-lcm, and so on are optional. You can use them in L2 templates to select Subnet objects by label.

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-1-lcm
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-k8s-lcm: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-1-lcm: "true"
    spec:
      cidr: 10.20.111.0/24
      gateway: 10.20.111.1
      includeRanges:
        - 10.20.111.16-10.20.111.255
      nameservers:
        - 8.8.8.8
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-2-lcm
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-k8s-lcm: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-2-lcm: "true"
    spec:
      cidr: 10.20.112.0/24
      gateway: 10.20.112.1
      includeRanges:
        - 10.20.112.16-10.20.112.255
      nameservers:
        - 8.8.8.8
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-3-lcm
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-k8s-lcm: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-3-lcm: "true"
    spec:
      cidr: 10.20.113.0/24
      gateway: 10.20.113.1
      includeRanges:
        - 10.20.113.16-10.20.113.255
      nameservers:
        - 8.8.8.8
    ---
    # Add more subnet object templates as required using the above example
    # (one subnet per rack)
    
    Example mosk-racks-api-lcm-subnet.yaml

    Note

    Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 extension between them. For the configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in Deployment Guide.

    If BGP announcement is configured for the MOSK cluster API LB address, the API/LCM network is not required. Announcement of the cluster API LB address is done using the LCM network.

    If you configure ARP announcement of the load-balancer IP address for the MOSK cluster API, the API/LCM network must be configured on the Kubernetes manager nodes of the cluster. This network contains the Kubernetes API endpoint with the VRRP virtual IP address.

    This is the IP address space that Container Cloud uses to ensure communication between the LCM agents and the management API. These addresses are also used by Kubernetes nodes for communication. The addresses from the subnet are assigned to all Kubernetes manager nodes of the MOSK cluster.

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-api-lcm
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-k8s-lcm: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-api-lcm: "true"
    spec:
      cidr: 10.20.110.0/24
      gateway: 10.20.110.1
      includeRanges:
        - 10.20.110.16-10.20.110.25
      nameservers:
        - 8.8.8.8
    
  7. Optional. Add a subnet for external connection to the Kubernetes services exposed by the MOSK cluster. The network is used to expose the OpenStack, StackLight, and other MOSK services. Configuration examples:

    Single-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
      name: k8s-ext-subnet
      namespace: <MOSKClusterNamespace>
    spec:
      cidr: 172.16.45.0/24
      gateway: 172.16.45.1
      includeRanges:
      - 172.16.45.10-172.16.45.100
      nameservers:
        - 8.8.8.8
    
    Multi-rack cluster

    Note

    Since 23.2.2, MOSK supports full L3 networking topology in the Technology Preview scope. This enables deployment of specific cluster segments in dedicated racks without the need for L2 extension between them. For the configuration procedure, see Configure BGP announcement for cluster API LB address and Configure BGP announcement of external addresses of Kubernetes load-balanced services in Deployment Guide.

    If you configure BGP announcement for IP addresses of load-balanced services of a MOSK cluster, the external network can consist of multiple VLAN segments connected to all nodes of a MOSK cluster where MetalLB speaker components are configured to announce IP addresses for Kubernetes load-balanced services. Mirantis recommends that you use OpenStack controller nodes for this purpose.

    If you configure ARP announcement for IP addresses of load-balanced services of a MOSK cluster, the external network must consist of a single VLAN stretched to the ToR switches of all the racks where MOSK nodes connected to the external network are located. Those are the nodes where MetalLB speaker components are configured to announce IP addresses for Kubernetes load-balanced services. Mirantis recommends that you use OpenStack controller nodes for this purpose.

    These subnets are used to assign addresses to the external interfaces of the MOSK controller nodes and to set the default gateway on these hosts. The default gateway for other hosts of the MOSK cluster is assigned using the LCM and, optionally, API/LCM subnets.

    Example of a subnet where a single VLAN segment is stretched to all MOSK controller nodes:

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: k8s-external
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        k8s-external: "true"
    spec:
      cidr: 10.20.120.0/24
      gateway: 10.20.120.1 # This will be the default gateway on hosts
      includeRanges:
        - 10.20.120.16-10.20.120.20
      nameservers:
        - 8.8.8.8
    

    Example of subnets where separate VLAN segments per rack are used:

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-1-k8s-ext
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-1-k8s-ext: "true"
    spec:
      cidr: 10.20.121.0/24
      gateway: 10.20.121.1 # This will be the default gateway on hosts
      includeRanges:
        - 10.20.121.16-10.20.121.20
      nameservers:
        - 8.8.8.8
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-2-k8s-ext
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-2-k8s-ext: "true"
    spec:
      cidr: 10.20.122.0/24
      gateway: 10.20.122.1 # This will be the default gateway on hosts
      includeRanges:
        - 10.20.122.16-10.20.122.20
      nameservers:
        - 8.8.8.8
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-3-k8s-ext
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-3-k8s-ext: "true"
    spec:
      cidr: 10.20.123.0/24
      gateway: 10.20.123.1 # This will be the default gateway on hosts
      includeRanges:
        - 10.20.123.16-10.20.123.20
      nameservers:
        - 8.8.8.8
    

    Configuration rules:

    • Make sure that loadBalancerHost is set to "" (empty string) in the Cluster spec.

      spec:
        providerSpec:
          value:
            apiVersion: baremetal.k8s.io/v1alpha1
            kind: BaremetalClusterProviderSpec
            ...
            loadBalancerHost: ""
      
    • Create a subnet with the ipam/SVC-LBhost label set to "1" so that baremetal-provider uses this subnet to allocate addresses for cluster API endpoints. One IP address will be allocated for each cluster to serve its Kubernetes/MKE API endpoint.

    • Make sure that the manager nodes have host IP addresses in the same subnet as the cluster API endpoint address. These host IP addresses will be used for VRRP traffic. The cluster API endpoint address will be assigned to the same interface on one of the manager nodes where these host IP addresses are assigned.

    • Mirantis highly recommends that you assign the cluster API endpoint address from the LCM or external network. For details on cluster network types, refer to MOSK cluster networking.

    To define the address allocation scope for cluster API endpoints, create a subnet in the corresponding namespace with a reference to the target cluster using the cluster.sigs.k8s.io/cluster-name label. For example:

    apiVersion: "ipam.mirantis.com/v1alpha1"
    kind: Subnet
    metadata:
      name: lbhost-mgmt-cluster
      namespace: default
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mgmt-cluster
        ipam/SVC-LBhost: "presents"
    spec:
      cidr: "10.0.30.100/32"
      useWholeCidr: true
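
    As a quick check against the rules above, you can inspect the loadBalancerHost value in the Cluster object and list the subnets labeled for API endpoint allocation (a hedged sketch that uses standard kubectl output options; the object names are placeholders, and the jsonpath follows the providerSpec layout shown above):

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> get cluster <MOSKClusterName> \
      -n <MOSKClusterNamespace> -o jsonpath='{.spec.providerSpec.value.loadBalancerHost}'
    kubectl --kubeconfig <pathToManagementClusterKubeconfig> get subnets -A -l ipam/SVC-LBhost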
    
  8. Optional. Add one or more subnets for the storage access network. Ceph will automatically use this network for its external connections. A Ceph OSD will look for and bind to an address from this subnet when it is started on a machine. Configuration examples:

    Single-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
        ipam/SVC-ceph-public: "1"
        cluster.sigs.k8s.io/cluster-name: <MOSKClusterName>
      name: ceph-public-subnet
      namespace: <MOSKClusterNamespace>
    spec:
      cidr: 10.12.0.0/24
    
    Multi-rack cluster

    This network may have per-rack VLANs and IP subnets. The addresses from the subnets are assigned to all MOSK cluster nodes except Kubernetes manager nodes.

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-1-ceph-public
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-ceph-public: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-1-ceph-public: "true"
    spec:
      cidr: 10.20.131.0/24
      gateway: 10.20.131.1
      includeRanges:
        - 10.20.131.16-10.20.131.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-2-ceph-public
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-ceph-public: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-2-ceph-public: "true"
    spec:
      cidr: 10.20.132.0/24
      gateway: 10.20.132.1
      includeRanges:
        - 10.20.132.16-10.20.132.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-3-ceph-public
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-ceph-public: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-3-ceph-public: "true"
    spec:
      cidr: 10.20.133.0/24
      gateway: 10.20.133.1
      includeRanges:
        - 10.20.133.16-10.20.133.255
    ---
    # Add more Subnet object templates as required using the above example
    # (one subnet per rack)
    

    Configuration rules:

    • Set the ipam/SVC-ceph-public label with the value "1" to create a subnet that will be used to configure the Ceph public network.

    • Set the cluster.sigs.k8s.io/cluster-name label to the name of the target cluster during the subnet creation.

    • Use this subnet in the L2 template for all cluster nodes except Kubernetes manager nodes.

    • Assign this subnet to the interface connected to your storage access network.

  9. Optional. Add one or more subnets for the storage replication network. Ceph will automatically use this network for its internal replication traffic. Configuration examples:

    Single-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
        ipam/SVC-ceph-cluster: "1"
        cluster.sigs.k8s.io/cluster-name: <MOSKClusterName>
      name: ceph-cluster-subnet
      namespace: <MOSKClusterNamespace>
    spec:
      cidr: 10.12.1.0/24
    
    Multi-rack cluster

    This network may have per-rack VLANs and IP subnets. The addresses from the subnets are assigned to storage nodes in the MOSK cluster.

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-1-ceph-cluster
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-ceph-cluster: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-1-ceph-cluster: "true"
    spec:
      cidr: 10.20.141.0/24
      gateway: 10.20.141.1
      includeRanges:
        - 10.20.141.16-10.20.141.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-2-ceph-cluster
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-ceph-cluster: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-2-ceph-cluster: "true"
    spec:
      cidr: 10.20.142.0/24
      gateway: 10.20.142.1
      includeRanges:
        - 10.20.142.16-10.20.142.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-3-ceph-cluster
      namespace: mosk-namespace-name
      labels:
        ipam/SVC-ceph-cluster: "1"
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-3-ceph-cluster: "true"
    spec:
      cidr: 10.20.143.0/24
      gateway: 10.20.143.1
      includeRanges:
        - 10.20.143.16-10.20.143.255
    ---
    # Add more Subnet object templates as required using the above example
    # (one subnet per rack)
    

    Configuration rules:

    • Set the ipam/SVC-ceph-cluster label with the value "1" to create a subnet that will be used to configure the Ceph cluster network.

    • Set the cluster.sigs.k8s.io/cluster-name label to the name of the target cluster during the subnet creation.

    • Use this subnet in the L2 template for storage nodes.

    • Assign this subnet to the interface connected to your storage replication network.

  10. Optional. Add a subnet for the Kubernetes Pods traffic. The addresses from this subnet are assigned to interfaces connected to the Kubernetes workloads network and used by Calico CNI as underlay for traffic between the pods in the Kubernetes cluster. Configuration examples:

    Single-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
      name: k8s-pods-subnet
      namespace: <MOSKClusterNamespace>
    spec:
      cidr: 10.12.3.0/24
      includeRanges:
      - 10.12.3.10-10.12.3.100
    
    Multi-rack cluster

    This network may include multiple per-rack VLANs and IP subnets. The addresses from the subnets are assigned to all MOSK cluster nodes. For details, see Network types.

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-1-k8s-pods
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-1-k8s-pods: "true"
    spec:
      cidr: 10.20.151.0/24
      gateway: 10.20.151.1
      includeRanges:
        - 10.20.151.16-10.20.151.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-2-k8s-pods
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-2-k8s-pods: "true"
    spec:
      cidr: 10.20.152.0/24
      gateway: 10.20.152.1
      includeRanges:
        - 10.20.152.16-10.20.152.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-3-k8s-pods
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-3-k8s-pods: "true"
    spec:
      cidr: 10.20.153.0/24
      gateway: 10.20.153.1
      includeRanges:
        - 10.20.153.16-10.20.153.255
    ---
    # Add more Subnet object templates as required using the above example
    # (one subnet per rack)
    

    Configuration rules:

    • Use this subnet in the L2 template for all nodes in the cluster.

    • Use the npTemplate.bridges.k8s-pods bridge name in the L2 template. This bridge name is reserved for the Kubernetes workloads network. When the k8s-pods bridge is defined in an L2 template, Calico CNI uses that network for routing the Pods traffic between nodes.
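
    For illustration, a k8s-pods bridge in the npTemplate section of an L2 template might look like the following minimal sketch. The {{nic 0}} and {{ip "k8s-pods:<subnetName>"}} macros and the netplan-style layout follow the L2 template format covered in Create L2 templates; the NIC index and subnet name are placeholders, so verify the fragment against that section before use:

    npTemplate: |
      version: 2
      ethernets:
        {{nic 0}}:
          dhcp4: false
          dhcp6: false
      bridges:
        k8s-pods:
          interfaces:
            - {{nic 0}}
          addresses:
            - {{ip "k8s-pods:k8s-pods-subnet"}}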

  11. Optional. Add a subnet for the MOSK overlay network. This is the underlay network for VXLAN tunnels carrying the MOSK tenant traffic. If deployed with Tungsten Fabric, it is used for the MPLS over UDP+GRE traffic. Configuration examples:

    Single-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
      name: neutron-tunnel-subnet
      namespace: <MOSKClusterNamespace>
    spec:
      cidr: 10.12.2.0/24
      includeRanges:
      - 10.12.2.10-10.12.2.100
    
    Multi-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-1-tenant-tunnel
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-1-tenant-tunnel: "true"
    spec:
      cidr: 10.20.161.0/24
      gateway: 10.20.161.1
      includeRanges:
        - 10.20.161.16-10.20.161.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-2-tenant-tunnel
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-2-tenant-tunnel: "true"
    spec:
      cidr: 10.20.162.0/24
      gateway: 10.20.162.1
      includeRanges:
        - 10.20.162.16-10.20.162.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-3-tenant-tunnel
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-3-tenant-tunnel: "true"
    spec:
      cidr: 10.20.163.0/24
      gateway: 10.20.163.1
      includeRanges:
        - 10.20.163.16-10.20.163.255
    ---
    # Add more Subnet object templates as required using the above example
    # (one subnet per rack)
    

    Configuration rules:

    • Use this subnet in the L2 template for the compute and gateway (controller) nodes in the MOSK cluster.

    • Assign this subnet to the interface connected to your MOSK overlay network.

    • This network is used to provide isolated and secure tenant networks with the help of tunneling (VLAN/GRE/VXLAN). If VXLAN or GRE encapsulation is used, IP address assignment is required on interfaces at the node level. On Tungsten Fabric deployments, this network is used for MPLS over UDP+GRE traffic.

  12. Optional. Add a subnet for the MOSK live migration network. This subnet is used by the Compute service (OpenStack Nova) to transfer data during live migration. Depending on the cloud needs, you can place it on a dedicated physical network so that live migration traffic does not affect other networks. The IP address assignment is required on interfaces at the node level. Configuration examples:

    Single-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
      name: live-migration-subnet
      namespace: <MOSKClusterNamespace>
    spec:
      cidr: 10.12.7.0/24
      includeRanges:
      - 10.12.7.10-10.12.7.100
    
    Multi-rack cluster
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-1-live-migration
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-1-live-migration: "true"
    spec:
      cidr: 10.20.171.0/24
      gateway: 10.20.171.1
      includeRanges:
        - 10.20.171.16-10.20.171.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-2-live-migration
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-2-live-migration: "true"
    spec:
      cidr: 10.20.172.0/24
      gateway: 10.20.172.1
      includeRanges:
        - 10.20.172.16-10.20.172.255
    ---
    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      name: rack-3-live-migration
      namespace: mosk-namespace-name
      labels:
        kaas.mirantis.com/provider: baremetal
        cluster.sigs.k8s.io/cluster-name: mosk-cluster-name
        rack-3-live-migration: "true"
    spec:
      cidr: 10.20.173.0/24
      gateway: 10.20.173.1
      includeRanges:
        - 10.20.173.16-10.20.173.255
    ---
    # Add more Subnet object templates as required using the above example
    # (one subnet per rack)
    

    Configuration rules:

    • Use this subnet in the L2 template for compute nodes in the MOSK cluster.

    • Assign this subnet to the interface connected to your MOSK live migration network.

  13. Verify that the subnets are successfully created:

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> get subnet <SubnetName> -n <SubnetNamespace> -o yaml
    

    In the system output, verify the Subnet object status.

    Status fields of the Subnet object:

    • state (since MOSK 23.1) - Contains a short state description and a more detailed one if applicable. The short status values are as follows:

      • OK - the object is operational.

      • ERR - the object is non-operational. This status has a detailed description in the messages list.

      • TERM - the object was deleted and is terminating.

    • messages (since MOSK 23.1) - Contains error or warning messages if the object state is ERR. For example, ERR: Wrong includeRange for CIDR….

    • statusMessage - Deprecated since MOSK 23.1 and will be removed in one of the following releases in favor of state and messages. Since MOSK 23.2, this field is not set for the objects of newly created clusters.

    • cidr - Reflects the actual CIDR; has the same meaning as spec.cidr.

    • gateway - Reflects the actual gateway; has the same meaning as spec.gateway.

    • nameservers - Reflects the actual name servers; has the same meaning as spec.nameservers.

    • ranges - Specifies the address ranges that are calculated using the fields from spec: cidr, includeRanges, excludeRanges, gateway, and useWholeCidr. These ranges are directly used for node IP allocation.

    • allocatable - The number of currently available IP addresses that can be allocated for nodes from the subnet.

    • allocatedIPs - The list of IPv4 addresses with the corresponding IPaddr object IDs that were already allocated from the subnet.

    • capacity - The total number of IP addresses held by ranges, which equals the sum of the allocatable and allocatedIPs values.

    • objCreated - Date, time, and IPAM version of the Subnet CR creation.

    • objStatusUpdated - Date, time, and IPAM version of the last update of the status field in the Subnet CR.

    • objUpdated - Date, time, and IPAM version of the last Subnet CR update by kaas-ipam.
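
    To quickly check only the short state of the subnets you created, you can use the standard kubectl custom-columns output (a minimal sketch; it assumes MOSK 23.1 or later, where the status.state field described above is populated, and the namespace is a placeholder):

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> get subnets -n <SubnetNamespace> \
      -o custom-columns=NAME:.metadata.name,STATE:.status.state,CIDR:.status.cidr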

    Example of a successfully created subnet:

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      labels:
        ipam/UID: 6039758f-23ee-40ba-8c0f-61c01b0ac863
        kaas.mirantis.com/provider: baremetal
        ipam/SVC-k8s-lcm: "1"
      name: kaas-mgmt
      namespace: default
    spec:
      cidr: 10.0.0.0/24
      excludeRanges:
      - 10.0.0.100
      - 10.0.0.101-10.0.0.120
      gateway: 10.0.0.1
      includeRanges:
      - 10.0.0.50-10.0.0.90
      nameservers:
      - 172.18.176.6
    status:
      allocatable: 38
      allocatedIPs:
      - 10.0.0.50:0b50774f-ffed-11ea-84c7-0242c0a85b02
      - 10.0.0.51:1422e651-ffed-11ea-84c7-0242c0a85b02
      - 10.0.0.52:1d19912c-ffed-11ea-84c7-0242c0a85b02
      capacity: 41
      cidr: 10.0.0.0/24
      gateway: 10.0.0.1
      objCreated: 2021-10-21T19:09:32Z  by  v5.1.0-20210930-121522-f5b2af8
      objStatusUpdated: 2021-10-21T19:14:18.748114886Z  by  v5.1.0-20210930-121522-f5b2af8
      objUpdated: 2021-10-21T19:09:32.606968024Z  by  v5.1.0-20210930-121522-f5b2af8
      nameservers:
      - 172.18.176.6
      ranges:
      - 10.0.0.50-10.0.0.90
    
  14. Proceed to creating L2 templates as described in Create L2 templates.