Compute service

Mirantis OpenStack for Kubernetes (MOSK) provides instance management capabilities through the Compute service (OpenStack Nova). The Compute service interacts with other components of an OpenStack environment to provide life-cycle management of virtual machine instances.

vCPU type

Available since MOSK 22.1

Parameter

spec:features:nova:vcpu_type

Usage

Configures the type of vCPU that Nova creates instances with. The default CPU model configured for all instances managed by Nova is host-model, which is also the Nova default for the KVM and QEMU hypervisors.

Supported CPU models

The supported CPU models include:

  • host-model (default) - mimics the host CPU and provides for decent performance, good security, and moderate compatibility with live migrations.

    With this mode, libvirt finds an available predefined CPU model that best matches the host CPU, and then explicitly adds the missing CPU feature flags to closely match the host CPU features. To mitigate known security flaws, libvirt automatically adds critical CPU flags that are supported by the installed libvirt, QEMU, kernel, and CPU microcode versions.

    This is a safe choice if your OpenStack compute node CPUs are of the same generation. If your OpenStack compute node CPUs are sufficiently different, for example, span multiple CPU generations, Mirantis strongly recommends setting explicit CPU models supported by all of your OpenStack compute node CPUs or organizing your OpenStack compute nodes into host aggregates and availability zones that have largely identical CPUs.

    Note

    The host-model model does not guarantee two-way live migrations between nodes.

    When migrating instances, the libvirt domain XML is first copied as is to the destination OpenStack compute node. Once the instance is hard rebooted or shut down and started again, the domain XML is re-generated. If the versions of libvirt, kernel, CPU microcode, or BIOS firmware on the destination node differ from those on the source compute node where the instance was originally started, libvirt may pick up additional CPU feature flags, making it impossible to live-migrate the instance back to the original compute node.

  • host-passthrough - provides maximum performance, especially when nested virtualization is required or when live migration support is not a concern for workloads. Live migration requires exactly the same CPU on all OpenStack compute nodes, including the CPU microcode and kernel versions. Therefore, to support live migration, organize your compute nodes into host aggregates and availability zones with identical CPUs. For workload migration between non-identical OpenStack compute nodes, contact Mirantis support.

  • A comma-separated list of exact QEMU CPU models to create and emulate, as shown in the configuration examples below. Specify the common and less advanced CPU models first. All explicit CPU models provided must be compatible with the OpenStack compute node CPUs.

    To specify an exact CPU model, review the available CPU models and their features. List and inspect the /usr/share/libvirt/cpu_map/*.xml files in the libvirt containers of the pods of the libvirt DaemonSet, or of multiple DaemonSets if you are using node-specific settings.

Configuration examples

For example, to set the host-passthrough CPU model for all OpenStack compute nodes:

spec:
  features:
    nova:
      vcpu_type: host-passthrough

For nodes that are labeled with processor=amd-epyc, set a custom EPYC CPU model:

spec:
  nodes:
    processor::amd-epyc:
      features:
        nova:
          vcpu_type: EPYC
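
To specify several explicit CPU models, provide them as a comma-separated list, starting with the most common and least advanced model. The following is only an illustrative sketch that assumes Intel-based compute nodes; use models that are supported by all of your OpenStack compute node CPUs:

spec:
  features:
    nova:
      vcpu_type: Nehalem,IvyBridge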

Live migration

Parameter

features:nova:live_migration_interface

Usage

Specifies the name of the NIC device on the physical host that Nova uses for the live migration of instances.

Mirantis recommends setting up your Kubernetes hosts in such a way that networking is configured identically on all of them, and names of the interfaces serving the same purpose or plugged into the same network are consistent across all physical nodes.

Also, set the option to vhost0 in the following cases:

  • The Neutron service uses Tungsten Fabric.

  • Nova migrates instances through the interface specified by the Neutron tunnel_interface parameter.
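
For example, a minimal sketch that directs live migration traffic through a dedicated NIC (the interface name ens3f1 is an assumption; substitute the interface that actually exists on your hosts):

spec:
  features:
    nova:
      live_migration_interface: ens3f1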

Images storage back end

Parameter

features:nova:images:backend

Usage

Defines the type of storage for Nova to use on the compute hosts for the images that back the instances.

The supported options include:

  • local

    The local storage is used. The pros include faster operation and independence of the failure domain from external storage. The cons include consumption of local disk space and less performant and less robust live migration, which relies on block migration.

  • ceph

    Instance images are stored in a Ceph pool shared across all Nova hypervisors. The pros include faster instance start and faster, more robust live migration. The cons include considerably slower I/O performance and a direct dependency of workload operations on Ceph cluster availability and performance.

  • lvm TechPreview

    Instance images and ephemeral images are stored on a local Logical Volume. If this option is specified, features:nova:images:lvm:volume_group must be set to an available LVM Volume Group, nova-vol by default. For details, see Enable LVM ephemeral storage.
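
For example, a minimal sketch that switches the images back end to LVM with a non-default Volume Group (the group name nova-ephemeral is an assumption and must already exist on the compute hosts):

spec:
  features:
    nova:
      images:
        backend: lvm
        lvm:
          volume_group: nova-ephemeral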