Enable host passthrough for VCP

Note

This feature is available starting from the MCP 2019.2.16 maintenance update. Before using the feature, follow the steps described in Apply maintenance updates.

This section describes how to enable the host-passthrough CPU mode that can enhance performance of the MCP Virtualized Control Plane (VCP). For details, see libvirt documentation: CPU model and topology.

Warning

Before enabling host passthrough, run the following command to verify that it is applicable to your deployment:

salt -C "I@salt:control" cmd.run "virsh list | tail -n +3 | awk '{print \$1}' | xargs -I{} virsh dumpxml {} | grep cpu_mode"

If the output is empty, proceed with enabling host passthrough. Otherwise, contact Mirantis support first.
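The verification pipeline above skips the two header lines of the virsh list output and feeds each domain ID to virsh dumpxml. Its text-processing part can be sketched on a sample listing (a minimal illustration only, using example domain names from this deployment):

```shell
# Sample `virsh list` output: two header lines, then one row per domain.
listing=' Id  Name                                State
------------------------------------------------
 1  msg01.bm-cicd-queens-ovs-maas.local running
 2  rgw01.bm-cicd-queens-ovs-maas.local running'

# Skip the two header lines and keep the first column (the domain ID),
# exactly as the pipeline in the warning does before calling virsh dumpxml.
ids=$(printf '%s\n' "$listing" | tail -n +3 | awk '{print $1}')
printf '%s\n' "$ids"
```

On a real KVM node, each resulting ID is then passed to virsh dumpxml, and the output is grepped for cpu_mode.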

To enable host passthrough:

  1. Log in to a KVM node.

  2. Obtain the list of running VMs:

    virsh list
    

    Example of system response:

    Id  Name                                State
    ------------------------------------------------
     1  msg01.bm-cicd-queens-ovs-maas.local running
     2  rgw01.bm-cicd-queens-ovs-maas.local running
     3  dbs01.bm-cicd-queens-ovs-maas.local running
     4  bmt01.bm-cicd-queens-ovs-maas.local running
     5  kmn01.bm-cicd-queens-ovs-maas.local running
     6  cid01.bm-cicd-queens-ovs-maas.local running
     7  cmn01.bm-cicd-queens-ovs-maas.local running
     8  ctl01.bm-cicd-queens-ovs-maas.local running
    
  3. Edit the configuration of each VM using the virsh edit %VM_NAME% command. Add the following lines to the XML configuration file:

    <cpu mode='host-passthrough'>
      <cache mode='passthrough'/>
    </cpu>
    

    For example:

    <domain type='kvm'>
      <name>msg01.bm-cicd-queens-ovs-maas.local</name>
      <uuid>81e18795-cf2f-4ffc-ac90-9fa0a3596ffb</uuid>
      <memory unit='KiB'>67108864</memory>
      <currentMemory unit='KiB'>67108864</currentMemory>
      <vcpu placement='static'>16</vcpu>
      <cpu mode='host-passthrough'>
        <cache mode='passthrough'/>
      </cpu>
      <os>
        <type arch='x86_64' machine='pc-i440fx-bionic'>hvm</type>
        <boot dev='hd'/>
      </os>
    .......
    
  4. Perform steps 1-3 on the remaining KVM nodes one by one.

  5. Log in to the Salt Master node.

  6. Reboot the VCP nodes using the salt 'nodename01*' system.reboot command as described in Scheduled maintenance with a planned power outage. Do not reboot the kvm, apt, and cmp nodes.

    Warning

    Reboot nodes one by one instead of rebooting all nodes of the same role at once. Wait 10 minutes between reboots.