Clearing ephemeral LVM volumes using shred consumes huge hardware resources

Note

This feature is available starting from the MCP 2019.2.4 maintenance update. Before enabling the feature, follow the steps described in Apply maintenance updates.

Clearing ephemeral LVM volumes with shred is an I/O-intensive operation that can degrade overall system performance. To limit this impact, you can set the ionice level for the shred operation in nova-compute.
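
For context, ionice starts a command with a given I/O scheduling class and priority. The commands below are only an illustrative sketch of what a lower-priority shred run looks like; the device path and shred options are assumptions for the example, not the exact invocation that nova-compute performs:

    # Idle class: shred gets disk time only when no other process requests I/O
    # (device path is an example)
    ionice -c3 shred -n 3 /dev/nova_vg/ephemeral_volume

    # Best-effort class at the lowest priority (7)
    ionice -c2 -n 7 shred -n 3 /dev/nova_vg/ephemeral_volume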

Setting the ionice level as described below makes sense only if the following pillar parameters are set (see the example after this list):

  • nova:compute:lvm:ephemeral is set to True
  • nova:compute:lvm:volume_clear is set to zero or shred
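
In pillar terms, these prerequisites map to the cluster model as follows (shred is shown as an example of the two allowed volume_clear values):

    nova:
      compute:
        lvm:
          ephemeral: True
          volume_clear: shred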

To set the ionice level:

  1. Log in to the Salt Master node.

  2. In classes/cluster/<cluster_name>/openstack/compute.yml, set the level for volume_clear_ionice_level as required:

    nova:
      compute:
        lvm:
          volume_clear_ionice_level: <level>
    

    Possible <level> values are as follows:

    • idle - to use the idle scheduling class. This option has the least impact on system performance, at the cost of a longer volume clearance time.
    • From 0 to 7 - to use the best-effort scheduling class with the priority set to the specified number, where 0 is the highest and 7 is the lowest priority.
    • No value - not to set the I/O scheduling class explicitly. Mirantis does not recommend this option since it has the highest impact on system performance.
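
    For example, to clear volumes in the best-effort scheduling class at the lowest priority:

    nova:
      compute:
        lvm:
          volume_clear_ionice_level: 7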
  3. Apply the changes:

    salt -C 'I@nova:compute' state.sls nova.compute
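
    Optionally, verify that the new value has reached the compute nodes by refreshing and querying the pillar data (standard Salt commands):

    salt -C 'I@nova:compute' saltutil.refresh_pillar
    salt -C 'I@nova:compute' pillar.get nova:compute:lvm:volume_clear_ionice_level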