Separate PXE and management networks
This section describes how to configure a dedicated PXE network for a management bare metal cluster. A separate PXE network isolates the sensitive bare metal provisioning process from end users. Users still have access to Container Cloud services, such as Keycloak, to authenticate workloads in managed clusters, for example, Horizon in a Mirantis OpenStack for Kubernetes cluster.
Note
This additional configuration procedure must be completed as part of the main Deploy a management cluster using CLI procedure. It substitutes or appends some configuration parameters and templates used in the main procedure so that the management cluster uses two networks, PXE and management, instead of a single combined PXE/management network. Mirantis recommends reviewing the main procedure first.
The following table describes the overall network mapping scheme with all L2/L3 parameters, for example, for two networks, PXE (CIDR `10.0.0.0/24`) and management (CIDR `10.0.11.0/24`):

| Deployment file name | Network | Parameters and values |
|---|---|---|
| | Management | |
| | PXE | |
| | Management | |
| | PXE | |
When using separate PXE and management networks, the management cluster services are exposed in different networks using two separate MetalLB address pools:
Services exposed through the PXE network are as follows:

- Ironic API as a bare metal provisioning server
- HTTP server that provides images for network boot and server provisioning
- Caching server for accessing the Container Cloud artifacts deployed on hosts

Services exposed through the management network are all other Container Cloud services, such as Keycloak, the web UI, and so on.
To configure separate PXE and management networks:
Inspect the guidelines to follow during configuration of the `Subnet` object as a MetalLB address pool, as described in MetalLB configuration guidelines for subnets.

To ensure a successful bootstrap, enable asymmetric routing on the interfaces of the management cluster nodes. This is required because the seed node relies on one network by default, which can potentially cause traffic asymmetry.
In the `kernelParameters` section of `baremetalhostprofiles.yaml.template`, set `rp_filter` to `2`. This enables loose mode as defined in RFC 3704.

Example configuration of asymmetric routing:

```yaml
...
kernelParameters:
  ...
  sysctl:
    # Enables the "Loose mode" for the "k8s-lcm" interface (management network)
    net.ipv4.conf.k8s-lcm.rp_filter: "2"
    # Enables the "Loose mode" for the "bond0" interface (PXE network)
    net.ipv4.conf.bond0.rp_filter: "2"
...
```
Note
More complicated solutions, which are not described in this manual, eliminate traffic asymmetry altogether, for example:

- Configure source routing on the management cluster nodes.
- Plug the seed node into the same networks as the management cluster nodes, which requires custom configuration of the seed node.
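As a quick sketch of how to confirm the resulting reverse-path filter mode on a provisioned node, you can read the values directly from `/proc`; the interface names `k8s-lcm` and `bond0` are the ones assumed in the example profile above, not fixed names:

```shell
# Print the effective rp_filter mode for every interface on the node.
# "2" means loose mode (RFC 3704), "1" is strict mode, "0" is disabled.
for f in /proc/sys/net/ipv4/conf/*/rp_filter; do
    printf '%s = %s\n' "${f##*/conf/}" "$(cat "$f")"
done
```

Interfaces configured through the `kernelParameters` section of the host profile should report `2` here after the node boots with the profile applied.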
In `kaas-bootstrap/templates/bm/ipam-objects.yaml.template`:

- Substitute all `Subnet` object templates with the new ones as described in the example template below.
- Update the L2 template `spec.l3Layout` and `spec.npTemplate` fields as described in the example template below.
Example of the Subnet object templates:

```yaml
# Subnet object that provides IP addresses for bare metal hosts of
# management cluster in the PXE network.
apiVersion: "ipam.mirantis.com/v1alpha1"
kind: Subnet
metadata:
  name: mgmt-pxe
  namespace: default
  labels:
    kaas.mirantis.com/provider: baremetal
    kaas-mgmt-pxe-subnet: ""
spec:
  cidr: SET_IPAM_CIDR
  gateway: SET_PXE_NW_GW
  nameservers:
    - SET_PXE_NW_DNS
  includeRanges:
    - SET_IPAM_POOL_RANGE
  excludeRanges:
    - SET_METALLB_PXE_ADDR_POOL
---
# Subnet object that provides IP addresses for bare metal hosts of
# management cluster in the management network.
apiVersion: "ipam.mirantis.com/v1alpha1"
kind: Subnet
metadata:
  name: mgmt-lcm
  namespace: default
  labels:
    kaas.mirantis.com/provider: baremetal
    kaas-mgmt-lcm-subnet: ""
    ipam/SVC-k8s-lcm: "1"
    ipam/SVC-ceph-cluster: "1"
    ipam/SVC-ceph-public: "1"
    cluster.sigs.k8s.io/cluster-name: CLUSTER_NAME
spec:
  cidr: {{ SET_LCM_CIDR }}
  includeRanges:
    - {{ SET_LCM_RANGE }}
  excludeRanges:
    - SET_LB_HOST
    - SET_METALLB_ADDR_POOL
---
# Deprecated since 2.27.0. Subnet object that provides configuration
# for "services-pxe" MetalLB address pool that will be used to expose
# services LB endpoints in the PXE network.
apiVersion: "ipam.mirantis.com/v1alpha1"
kind: Subnet
metadata:
  name: mgmt-pxe-lb
  namespace: default
  labels:
    kaas.mirantis.com/provider: baremetal
    metallb/address-pool-name: services-pxe
    metallb/address-pool-protocol: layer2
    metallb/address-pool-auto-assign: "false"
    cluster.sigs.k8s.io/cluster-name: CLUSTER_NAME
spec:
  cidr: SET_IPAM_CIDR
  includeRanges:
    - SET_METALLB_PXE_ADDR_POOL
```
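As a sketch of how the `SET_*` placeholders can be substituted in the template, assuming the example values from this section; the gateway, DNS, and host pool values below are hypothetical illustrations, not values given in this procedure:

```shell
# Fill in the Subnet template placeholders with example values
# (10.0.0.0/24 PXE network from this section; gateway, DNS, and host
# range are hypothetical). Adjust all values to your environment.
sed -e 's|SET_IPAM_CIDR|10.0.0.0/24|g' \
    -e 's|SET_PXE_NW_GW|10.0.0.1|g' \
    -e 's|SET_PXE_NW_DNS|8.8.8.8|g' \
    -e 's|SET_IPAM_POOL_RANGE|10.0.0.100-10.0.0.109|g' \
    -e 's|SET_METALLB_PXE_ADDR_POOL|10.0.0.61-10.0.0.70|g' \
    kaas-bootstrap/templates/bm/ipam-objects.yaml.template \
    > ipam-objects.yaml
```

After substitution, verify that no `SET_` placeholders remain in the rendered file before applying it.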
Example of the L2 template spec:

```yaml
kind: L2Template
...
spec:
  ...
  l3Layout:
    - scope: namespace
      subnetName: kaas-mgmt-pxe
      labelSelector:
        kaas.mirantis.com/provider: baremetal
        kaas-mgmt-pxe-subnet: ""
    - scope: namespace
      subnetName: kaas-mgmt-lcm
      labelSelector:
        kaas.mirantis.com/provider: baremetal
        kaas-mgmt-lcm-subnet: ""
  npTemplate: |
    version: 2
    renderer: networkd
    ethernets:
      {{nic 0}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 0}}
        set-name: {{nic 0}}
      {{nic 1}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 1}}
        set-name: {{nic 1}}
    bridges:
      bm-pxe:
        interfaces:
          - {{ nic 0 }}
        dhcp4: false
        dhcp6: false
        addresses:
          - {{ ip "bm-pxe:kaas-mgmt-pxe" }}
        nameservers:
          addresses: {{ nameservers_from_subnet "kaas-mgmt-pxe" }}
        routes:
          - to: 0.0.0.0/0
            via: {{ gateway_from_subnet "kaas-mgmt-pxe" }}
      k8s-lcm:
        interfaces:
          - {{ nic 1 }}
        dhcp4: false
        dhcp6: false
        addresses:
          - {{ ip "k8s-lcm:kaas-mgmt-lcm" }}
        nameservers:
          addresses: {{ nameservers_from_subnet "kaas-mgmt-lcm" }}
```
Deprecated since Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0): the last `Subnet` template, named `mgmt-pxe-lb` in the example above, is used to configure the MetalLB address pool in the PXE network. The bare metal provider automatically configures MetalLB with address pools using the `Subnet` objects identified by specific labels.

Warning

The `bm-pxe` address must have a separate interface with only one address on this interface.

Verify the current MetalLB configuration that is stored in `MetalLB` objects:
MetalLB
objects:kubectl -n metallb-system get ipaddresspools,l2advertisements
For the example configuration described above, the system output is similar to the following:

```
NAME                                      AGE
ipaddresspool.metallb.io/default          129m
ipaddresspool.metallb.io/services-pxe     129m

NAME                                        AGE
l2advertisement.metallb.io/default          129m
l2advertisement.metallb.io/services-pxe     129m
```
To verify the `MetalLB` objects:

```shell
kubectl -n metallb-system get <object> -o json | jq '.spec'
```
For the example configuration described above, the system output is similar to the following for the `ipaddresspool` objects:

```shell
$ kubectl -n metallb-system get ipaddresspool.metallb.io/default -o json | jq '.spec'
{
  "addresses": [
    "10.0.11.61-10.0.11.80"
  ],
  "autoAssign": true,
  "avoidBuggyIPs": false
}

$ kubectl -n metallb-system get ipaddresspool.metallb.io/services-pxe -o json | jq '.spec'
{
  "addresses": [
    "10.0.0.61-10.0.0.70"
  ],
  "autoAssign": false,
  "avoidBuggyIPs": false
}
```
The `autoAssign` parameter is set to `false` for all address pools except the `default` one. Therefore, a particular service obtains an address from such an address pool only if the `Service` object has the special `metallb.universe.tf/address-pool` annotation that points to the specific address pool name.

Note
It is expected that every Container Cloud service on a management cluster is assigned to one of the address pools. The current consideration is to have two MetalLB address pools:

- `services-pxe` is a reserved address pool name to use for the Container Cloud services in the PXE network (Ironic API, HTTP server, caching server).

  The bootstrap cluster also uses the `services-pxe` address pool for its provisioning services so that the management cluster nodes can be provisioned from the bootstrap cluster. After the management cluster is deployed, the bootstrap cluster is deleted and that address pool is solely used by the newly deployed cluster.

- `default` is an address pool to use for all other Container Cloud services in the management network. No annotation is required on the `Service` objects in this case.
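For illustration, a `Service` that must be exposed in the PXE network would carry the annotation described above; the service name, namespace, port, and selector below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-pxe-service    # hypothetical name, for illustration only
  namespace: kaas              # hypothetical namespace
  annotations:
    # Request an address from the "services-pxe" MetalLB address pool
    # instead of relying on auto-assignment from the "default" pool.
    metallb.universe.tf/address-pool: services-pxe
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: example-pxe-service
```

Without the annotation, MetalLB assigns the LB endpoint from the `default` pool in the management network, because that is the only pool with auto-assignment enabled.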
In addition to the network parameters defined in Deploy a management cluster using CLI, configure the following ones by replacing them in `templates/bm/ipam-objects.yaml.template`:

New subnet template parameters

| Parameter | Description | Example value |
|---|---|---|
| `SET_LCM_CIDR` | Address of a management network for the management cluster in the CIDR notation. You can later share this network with managed clusters, where it will act as the LCM network. If managed clusters have their separate LCM networks, those networks must be routable to the management network. | `10.0.11.0/24` |
| `SET_LCM_RANGE` | Address range that includes addresses to be allocated to bare metal hosts in the management network for the management cluster. When this network is shared with managed clusters, the size of this range limits the number of hosts that can be deployed in all clusters that share this network. When this network is solely used by a management cluster, the range must include at least 3 IP addresses for bare metal hosts of the management cluster. | `10.0.11.100-10.0.11.109` |
| `SET_METALLB_PXE_ADDR_POOL` | Address range to be used for LB endpoints of the Container Cloud services: Ironic API, HTTP server, and caching server. This range must be within the PXE network. The minimum required range is 5 IP addresses. | `10.0.0.61-10.0.0.70` |
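A quick sanity check of the constraint that the PXE address pool lies within the PXE network can be sketched with plain shell and awk arithmetic; the `in_cidr` helper below is hypothetical and handles IPv4 only:

```shell
# in_cidr IP CIDR: succeed if IP falls inside CIDR (IPv4 only).
in_cidr() {
    awk -v ip="$1" -v cidr="$2" 'BEGIN {
        split(cidr, c, "/"); split(c[1], n, "."); split(ip, a, ".")
        net = ((n[1] * 256 + n[2]) * 256 + n[3]) * 256 + n[4]
        adr = ((a[1] * 256 + a[2]) * 256 + a[3]) * 256 + a[4]
        span = 2 ^ (32 - c[2])   # number of addresses in the CIDR block
        # Success (exit 0) when both addresses share the same block.
        exit !(int(adr / span) == int(net / span))
    }'
}

# Both ends of SET_METALLB_PXE_ADDR_POOL must be within the PXE CIDR.
in_cidr 10.0.0.61 10.0.0.0/24 && in_cidr 10.0.0.70 10.0.0.0/24 \
    && echo "PXE address pool fits the PXE network"
```

The same helper can be reused for the management-network ranges, for example, to confirm that `SET_LB_HOST` belongs to `SET_LCM_CIDR`.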
The following parameters are now tied to the management network, while their meaning remains the same as described in Deploy a management cluster using CLI:

Subnet template parameters migrated to the management network

| Parameter | Description | Example value |
|---|---|---|
| `SET_LB_HOST` | IP address of the externally accessible API endpoint of the management cluster. This address must NOT be within the `SET_METALLB_ADDR_POOL` range but must be within the management network. External load balancers are not supported. | `10.0.11.90` |
| `SET_METALLB_ADDR_POOL` | Address range to be used for the externally accessible LB endpoints of the Container Cloud services, such as Keycloak, web UI, and so on. This range must be within the management network. The minimum required range is 19 IP addresses. | `10.0.11.61-10.0.11.80` |
Proceed to further steps in Deploy a management cluster using CLI.