Bare Metal service¶
The Bare Metal service (Ironic) is an additional OpenStack service that can be
deployed by the OpenStack Controller (Rockoon). This section describes the
bare metal-specific configuration options of the OpenStackDeployment
resource.
Enabling the Bare Metal service¶
The Bare Metal service is not included in the core set of services and needs
to be explicitly enabled in the OpenStackDeployment custom resource.
To install bare metal services, add the baremetal keyword to the
spec:features:services list:
spec:
features:
services:
- baremetal
Note
All bare metal services are scheduled to the nodes with the
openstack-control-plane: enabled label.
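For example, assuming a standard Kubernetes setup, the label can be applied with kubectl (the node name below is a placeholder):

```
kubectl label node <node-name> openstack-control-plane=enabled
```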
Ironic agent deployment images¶
To provision a user image onto a bare metal server, Ironic boots a node with
a ramdisk image. Depending on the node’s deploy interface and hardware, the
ramdisk may require different drivers (agents). MOSK
provides tinyIPA-based ramdisk images and uses the direct deploy interface
with the ipmitool power interface.
Example of agent_images configuration:
spec:
features:
ironic:
agent_images:
base_url: https://binary.mirantis.com/openstack/bin/ironic/tinyipa
initramfs: tinyipa-stable-ussuri-20200617101427.gz
kernel: tinyipa-stable-ussuri-20200617101427.vmlinuz
Since bare metal node hardware may require additional drivers, you may need to build a deploy ramdisk for your particular hardware. For more information, see Ironic Python Agent Builder. Make sure to create the ramdisk image with the Ironic Python Agent version that matches your OpenStack release.
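As a hedged sketch, a TinyIPA-based ramdisk can be produced with the upstream builder tool; the exact flags and supported elements may differ per release, so verify them against the Ironic Python Agent Builder documentation:

```
# Install the upstream builder and produce a TinyIPA-based ramdisk.
# <ipa-branch> must correspond to your OpenStack release.
pip install ironic-python-agent-builder
ironic-python-agent-builder --release <ipa-branch> -o <output-name> tinyipa
```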
Redfish Virtual Media¶
The Redfish Ironic driver is enabled by default to provide high-performance management and boot capabilities for bare metal nodes. While traditional PXE-based deployments rely on TFTP, the Redfish driver leverages Virtual Media to streamline provisioning.
Virtual Media boot enables the operator to boot bare metal nodes from an ISO image. The primary advantage of Virtual Media over traditional PXE is the shift from TFTP to HTTP-based image delivery.
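As an illustrative, hedged sketch, a node that should use Redfish Virtual Media boot can be enrolled with the standard upstream Redfish driver_info fields (the BMC address and credentials below are placeholders):

```
openstack baremetal node create --driver redfish \
  --driver-info redfish_address=https://<bmc-address> \
  --driver-info redfish_username=<username> \
  --driver-info redfish_password=<password> \
  --boot-interface redfish-virtual-media
```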
Boot process¶
The Virtual Media boot process transitions from the Ironic conductor to the node hardware through the Redfish API. The boot process consists of the following stages:
1. Optional. The Ironic Redfish driver prepares the bootable ISO image for the bare metal node.
2. The image is uploaded to the target Baseboard Management Controller (BMC) of the bare metal node.
3. The BMC mounts the ISO image as a virtual drive.
4. The bare metal node boots directly from this virtual drive, bypassing the need for a PXE network boot.
Image composition and requirements¶
The Redfish driver dynamically constructs ISO images using the kernel and ramdisk associated with the bare metal node. In the case of UEFI boot, an additional EFI System Partition (ESP) image is required for building the ISO.
These image components are defined in the following driver configuration parameters:

| Component | Description | Configuration parameter |
|---|---|---|
| Kernel | The operating system kernel used for deployment and rescue. | deploy_kernel / rescue_kernel |
| Ramdisk | The temporary file system used during the boot process. | deploy_ramdisk / rescue_ramdisk |
| Bootloader | Required specifically for UEFI boot modes to handle the EFI System Partition (ESP). | bootloader |
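These components can be set per node through the Bare Metal CLI; a hedged sketch using the standard upstream driver_info fields (all values below are placeholders):

```
openstack baremetal node set <node> \
  --driver-info deploy_kernel=<kernel-image-uuid-or-url> \
  --driver-info deploy_ramdisk=<ramdisk-image-uuid-or-url> \
  --driver-info bootloader=<esp-image-uuid-or-url>
```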
Using pre-built images¶
For environments where dynamic image building is not preferred, the Redfish driver supports pre-built ISO images. This is common in highly secured environments or where custom-tailored boot images are required. For implementation details, refer to the official OpenStack documentation: Pre-built ISO images.
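As a hedged sketch, assuming the deploy_iso driver_info parameter described in the upstream Ironic documentation, a pre-built deploy ISO can be attached per node:

```
# Point the node at a pre-built ISO instead of dynamically composed images.
openstack baremetal node set <node> --driver-info deploy_iso=<iso-url-or-uuid>
```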
Bare metal networking¶
Ironic supports the flat networking mode for both Open vSwitch (OVS)
and Open Virtual Network (OVN) backends, and the multitenancy networking
mode for the OVN backend only.
Flat networking¶
The flat networking mode assumes that all bare metal nodes are
pre-connected to a single network that cannot be changed during
provisioning. This network, with bridged interfaces for Ironic,
must span all nodes, including compute nodes, so that regular
virtual machines can also plug into the Ironic network.
In turn, the interface defined as provisioning_interface must
span the gateway nodes. The cloud operator can apply all this
underlying configuration through the L2 templates.
Example of the OsDpl resource illustrating the configuration for the flat
network mode:
spec:
features:
services:
- baremetal
neutron:
external_networks:
- bridge: ironic-pxe
interface: <baremetal-interface>
network_types:
- flat
physnet: ironic
vlan_ranges: null
ironic:
# The name of neutron network used for provisioning/cleaning.
baremetal_network_name: ironic-provisioning
networks:
# Neutron baremetal network definition.
baremetal:
physnet: ironic
name: ironic-provisioning
network_type: flat
external: true
shared: true
subnets:
- name: baremetal-subnet
range: 10.13.0.0/24
pool_start: 10.13.0.100
pool_end: 10.13.0.254
gateway: 10.13.0.11
# The name of interface where provision services like tftp and ironic-conductor
# are bound.
provisioning_interface: br-baremetal
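With this mode, bare metal nodes typically use Ironic's flat network interface; a hedged sketch of the corresponding per-node setting (node name is a placeholder):

```
openstack baremetal node set <node> --network-interface flat
```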
Multitenant networking¶
OVN only
The multitenancy network mode uses the neutron Ironic network
interface to share physical connection information with Neutron. Neutron
ML2 drivers consume this information when plugging a Neutron port
into a specific network. MOSK supports the
networking-generic-switch Neutron ML2 driver out of the box.
Example of the OsDpl resource illustrating the configuration for the
multitenancy network mode:
spec:
features:
services:
- baremetal
neutron:
tunnel_interface: ens3
external_networks:
- physnet: physnet1
interface: <physnet1-interface>
bridge: br-ex
network_types:
- flat
vlan_ranges: null
mtu: null
- physnet: ironic
interface: <physnet-ironic-interface>
bridge: ironic-pxe
network_types:
- vlan
vlan_ranges: 1000:1099
ironic:
# The name of interface where provision services like tftp and ironic-conductor
# are bound.
provisioning_interface: <baremetal-interface>
baremetal_network_name: ironic-provisioning
networks:
baremetal:
physnet: ironic
name: ironic-provisioning
network_type: vlan
segmentation_id: 1000
external: true
shared: false
subnets:
- name: baremetal-subnet
range: 10.13.0.0/24
pool_start: 10.13.0.100
pool_end: 10.13.0.254
gateway: 10.13.0.11
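In this mode, the physical switch connectivity of each bare metal port is registered in Ironic so that the ML2 driver can configure the switch; a hedged sketch with placeholder switch details:

```
# Use the neutron network interface and describe the physical switch port.
openstack baremetal node set <node> --network-interface neutron
openstack baremetal port create <mac-address> --node <node-uuid> \
  --local-link-connection switch_id=<switch-mac> \
  --local-link-connection port_id=<switch-port-name> \
  --local-link-connection switch_info=<switch-hostname>
```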
Port trunking¶
OVN only TechPreview
MOSK provides the port trunking capability for Ironic
instances through the networking-generic-switch Neutron driver, allowing
a single NIC/port on a bare metal server to carry traffic for multiple VLANs
simultaneously.
To learn how to configure the port trunking, refer to the Configure network trunking in projects tutorial.
To check the functionality compatibility with your hardware, contact Mirantis support.
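For reference, trunking is driven through the standard Neutron trunk API; a hedged sketch with placeholder port and trunk names:

```
# Create a trunk on the parent port, then attach a VLAN subport.
openstack network trunk create --parent-port <parent-port> <trunk-name>
openstack network trunk set <trunk-name> \
  --subport port=<subport-id>,segmentation-type=vlan,segmentation-id=<vlan-id>
```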