Enable network trunking

The Mirantis Cloud Platform (MCP) supports port trunking, which enables you to attach a virtual machine to multiple Neutron networks through a single virtual machine network interface (VIF). VLANs serve as a local encapsulation to differentiate the traffic of each network as it enters and leaves the VIF.

Using network trunking is particularly beneficial in the following use cases:

  • Some applications require connections to hundreds of Neutron networks. Rather than attaching hundreds of VIFs to a VM, you can use one or a few VIFs and differentiate the traffic of each network with VLANs.

  • Cloud workloads are often very dynamic. You may prefer to add or remove VLANs rather than to hotplug interfaces in a virtual machine.

  • You can move a virtual machine from one network to another without detaching the VIF from the virtual machine.

  • A virtual machine may run many containers. Each container may have requirements to be connected to different Neutron networks. Assigning a VLAN or other encapsulation ID for each container is more efficient and scalable than requiring a vNIC per container.

  • Some legacy applications require VLANs to connect to multiple networks.
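Once trunking is enabled, these use cases map onto the Neutron trunk API. The following sketch shows how a trunk with one VLAN subport might be created through the OpenStack CLI; all resource names (project-net, second-net, parent-port, subport1, trunk0, vm1) and the VLAN ID 101 are illustrative:

```shell
# Create the parent port on the primary network and build a trunk on it
openstack port create --network project-net parent-port
openstack network trunk create --parent-port parent-port trunk0

# Create a port on a second network and attach it as a subport,
# tagged with VLAN ID 101 inside the guest
openstack port create --network second-net subport1
openstack network trunk set \
    --subport port=subport1,segmentation-type=vlan,segmentation-id=101 trunk0

# Boot the VM against the parent port; traffic tagged with VLAN 101
# on the VIF is delivered to second-net
openstack server create --image cirros --flavor m1.small \
    --nic port-id=parent-port vm1
```

To connect the VM to additional networks later, add more subports with `openstack network trunk set` instead of hotplugging new interfaces.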

The current limitation of network trunking support is that MCP supports it only for Neutron OVS with DPDK and with the Open vSwitch firewall driver enabled. Other Neutron ML2 mechanism drivers, such as Linux Bridge and OVN, are not supported. If you use security groups together with network trunking, MCP automatically enables the native Open vSwitch firewall driver.

To enable network trunking:

  1. Log in to the Salt Master node.

  2. Open the cluster.<NAME>.openstack.init.yml file for editing.

  3. Set the neutron_enable_vlan_aware_vms parameter to True:

    parameters:
      _param:
        neutron_enable_vlan_aware_vms: True
        ...
    
  4. Apply the neutron Salt states on the server, gateway, and compute nodes:

    salt -C 'I@neutron:server' state.sls neutron
    salt -C 'I@neutron:gateway' state.sls neutron
    salt -C 'I@neutron:compute' state.sls neutron
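After the states have been applied, you can verify that the Neutron API exposes the trunk extension, and configure a matching VLAN subinterface inside a guest. This is a hedged sketch: the interface name ens3 and VLAN ID 101 are illustrative and must match your guest NIC and subport segmentation ID:

```shell
# On a node with OpenStack client access: confirm the trunk
# extension is now advertised by the Neutron API
openstack extension list --network | grep -i trunk

# Inside the guest: create a VLAN subinterface whose tag matches
# the subport segmentation ID, then bring it up
ip link add link ens3 name ens3.101 type vlan id 101
ip link set dev ens3.101 up
```

Traffic sent through the VLAN subinterface is then delivered to the Neutron network attached as the corresponding subport.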