Virtual CPU¶
MOSK provides the capability to configure virtual CPU types
for OpenStack instances through the OpenStackDeployment
custom resource.
This feature enables cloud users to tailor performance and resource allocation
within their OpenStack environment to meet specific workload demands
effectively.
Parameter | Usage
---|---
`vcpu_type` | Configures the type of virtual CPU that Nova will use when creating instances. The list of supported CPU models includes `host-model` (default), `host-passthrough`, and explicit custom CPU models.
The host-model CPU model¶
The host-model
CPU model (default) mimics the host CPU and provides
decent performance, good security, and moderate compatibility with live
migrations.
With this mode, libvirt finds an available predefined CPU model that best matches the host CPU, and then explicitly adds the missing CPU feature flags to closely match the host CPU features. To mitigate known security flaws, libvirt automatically adds critical CPU flags that are supported by the installed libvirt, QEMU, kernel, and CPU microcode versions.
This is a safe choice if your OpenStack compute node CPUs are of the same generation. If your OpenStack compute node CPUs are sufficiently different, for example, span multiple CPU generations, Mirantis strongly recommends setting explicit CPU models supported by all of your OpenStack compute node CPUs or organizing your OpenStack compute nodes into host aggregates and availability zones that have largely identical CPUs.
Note
The host-model
model does not guarantee two-way live migrations
between nodes.
When migrating instances, the libvirt domain XML is first copied as is to the destination OpenStack compute node. Once the instance is hard rebooted or shut down and started again, the domain XML is regenerated. If the versions of libvirt, kernel, CPU microcode, or BIOS firmware on the destination node differ from those on the source node where the instance was originally started, libvirt may pick up additional CPU feature flags, making it impossible to live-migrate the instance back to the original compute node.
The host-passthrough CPU model¶
The host-passthrough
CPU model provides maximum performance, especially
when nested virtualization is required or if live migration support is not
a concern for workloads. Live migration requires exactly the same CPU
on all OpenStack compute nodes, including the CPU microcode and kernel
versions. Therefore, to support live migration, organize your compute
nodes into host aggregates and availability zones. For workload migration
between non-identical OpenStack compute nodes, contact Mirantis support.
For example, to set the host-passthrough
CPU model for all OpenStack
compute nodes:
spec:
  features:
    nova:
      vcpu_type: host-passthrough
Custom CPU model¶
MOSK enables you to specify a comma-separated list of exact QEMU CPU models to create and emulate. Specify the common and less advanced CPU models first. All explicit CPU models provided must be compatible with the OpenStack compute node CPUs.
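For example, a configuration offering two explicit models with the more common one listed first might look as follows. The model names here are illustrative only; verify that every OpenStack compute node CPU actually supports the models you specify:

```yaml
spec:
  features:
    nova:
      # Illustrative comma-separated list: common, less advanced model first
      vcpu_type: Nehalem,Haswell
```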
To specify an exact CPU model, review the available CPU models and their
features. List and inspect the /usr/share/libvirt/cpu_map/*.xml
files in
the libvirt
containers of pods of the libvirt
DaemonSet, or multiple
DaemonSets if you are using node-specific settings.
To review the available CPU models
Identify the available libvirt DaemonSets:
kubectl -n openstack get ds -l application=libvirt --show-labels
Example of system response:
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                    AGE   LABELS
libvirt-libvirt-default   2         2         2       2            2           openstack-compute-node=enabled   34d   app.kubernetes.io/managed-by=Helm,application=libvirt,component=libvirt,release_group=openstack-libvirt
Identify the pods of libvirt DaemonSets:
kubectl -n openstack get po -l application=libvirt,release_group=openstack-libvirt
Example of system response:
NAME                            READY   STATUS    RESTARTS   AGE
libvirt-libvirt-default-5zs8m   2/2     Running   0          8d
libvirt-libvirt-default-vt8wd   2/2     Running   0          3d14h
List and review the available CPU model definition files. For example:
kubectl -n openstack exec -ti libvirt-libvirt-default-5zs8m -c libvirt -- ls /usr/share/libvirt/cpu_map/*.xml
List and review the content of all CPU model definition files. For example:
kubectl -n openstack exec -ti libvirt-libvirt-default-5zs8m -c libvirt -- bash -c 'for f in `ls /usr/share/libvirt/cpu_map/*.xml`; do echo $f; cat $f; done'
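Each cpu_map file is a small libvirt XML document that names a CPU model and lists its feature flags. As a rough illustration of how to extract model names and features from such a file, the following sketch parses an inline sample of that shape. The sample is trimmed and illustrative only; the real files live in the libvirt container, and their exact schema may vary between libvirt versions:

```python
import xml.etree.ElementTree as ET

# Trimmed sample in the general shape of a libvirt cpu_map model file
# (illustrative only; real files live in /usr/share/libvirt/cpu_map/).
sample = """
<cpus>
  <model name='EPYC'>
    <vendor name='AMD'/>
    <feature name='sse4.2'/>
    <feature name='avx2'/>
  </model>
</cpus>
"""

root = ET.fromstring(sample)
for model in root.findall('model'):
    # Collect the feature flag names declared for this CPU model
    features = [f.get('name') for f in model.findall('feature')]
    print(model.get('name'), features)
```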
For example, for nodes that are labeled with processor=amd-epyc,
set a custom EPYC
CPU model:
spec:
  nodes:
    processor::amd-epyc:
      features:
        nova:
          vcpu_type: EPYC