Issues resolutions requiring manual application

Note

Before proceeding with the manual steps below, verify that you have performed the steps described in Apply maintenance updates.


[31204] Systemd does not restart the apache2 daemon

Pike, Queens

Fixed the issue wherein systemd did not restart the apache2 daemon after its unexpected exit. To apply the fix, upgrade Apache. The resolution applies automatically when you select the OS_UPGRADE or OS_DIST_UPGRADE check boxes when running the Deploy - upgrade control VMs Jenkins pipeline.

To verify that the fix has been applied correctly:

  1. After the Apache2 packages update, verify that the apache2 service is running:

    systemctl status apache2
    
  2. Verify that the process IDs are not changing:

    pgrep apache2
    
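The check in step 2 can be sketched as a comparison of two PID snapshots taken a short interval apart. The `pids_stable` helper below is hypothetical (not part of the product); on the node, you would feed it two `pgrep -d, apache2` captures:

```shell
# Hypothetical helper: compare two comma-separated PID lists taken some
# time apart; identical lists mean systemd did not restart the daemon.
pids_stable() {
  if [ "$1" = "$2" ]; then echo stable; else echo restarted; fi
}

# On the node: before=$(pgrep -d, apache2); sleep 30; after=$(pgrep -d, apache2)
pids_stable "1234,1235" "1234,1235"   # stable
pids_stable "1234,1235" "2001,2002"   # restarted
```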

[30537] Excessive disk usage while clearing ephemeral LVM volumes using shred

Pike

Implemented the ability to set the ionice level for the ephemeral LVM volume shred operation in nova-compute to prevent excessive disk consumption. Setting the ionice level as described below makes sense only if:

  • nova:compute:lvm:ephemeral is set to True
  • nova:compute:lvm:volume_clear is set to zero or shred
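
For reference, on a deployment where the shred operation applies, the compute pillar contains values along these lines (the keys match the bullets above; the values shown are illustrative):

```yaml
nova:
  compute:
    lvm:
      ephemeral: true
      volume_clear: shred
```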

To apply the issue resolution:

  1. Log in to the Salt Master node.

  2. In classes/cluster/<cluster_name>/openstack/compute.yml, set the level for volume_clear_ionice_level as required:

    nova:
      compute:
        lvm:
          volume_clear_ionice_level: <level>
    

    Possible <level> values are as follows:

    • idle - to use the idle scheduling class. This option has the least impact on system performance, at the cost of a longer volume clearance.
    • A number from 0 to 7 - to use the best-effort scheduling class with the specified priority level.
    • No value - not to set the I/O scheduling class explicitly. Mirantis does not recommend leaving the value unset since this is the most aggressive option in terms of system performance impact.
  3. Apply the changes:

    salt -C 'I@nova:compute' state.sls nova.compute
    
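The <level> values above correspond to the scheduling classes of ionice(1). The helper below is purely illustrative (not part of nova) and shows the ionice arguments each value maps to:

```shell
# Hypothetical mapping of volume_clear_ionice_level values to ionice(1)
# arguments; nova-compute applies the equivalent class to the shred process.
ionice_args() {
  case "$1" in
    idle)  echo "-c 3" ;;         # idle scheduling class
    [0-7]) echo "-c 2 -n $1" ;;   # best-effort class with priority $1
    *)     echo "" ;;             # no class set (not recommended)
  esac
}

ionice_args idle   # -> -c 3
ionice_args 5      # -> -c 2 -n 5
```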

[30656] The creation of large Heat stacks fails with a 502 Bad Gateway error

Fixed the issue that caused the creation of large Heat stacks to fail due to the HAProxy timeout of 60 seconds. The default timeout value is now 2 minutes.

To apply the issue resolution, apply the haproxy state on the OpenStack controller nodes.

If you have changed the default timeout value on your deployment before the update, it will remain unchanged.

To tune the timeout parameter on an already deployed environment as required:

  1. Log in to the Salt Master node.

  2. In /srv/salt/reclass/nodes/_generated/ctl01.<cluster_name>.local.yml, set the timeout parameter as required. For example:

    parameters:
      haproxy:
        proxy:
          listen:
            heat_api:
              timeout:
                client: '90s'
                server: '3m'
    
  3. Apply the change:

    salt -C 'I@haproxy:proxy:listen:heat_api' state.sls haproxy
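
To confirm that the state applied the new timeouts, you can inspect the rendered HAProxy configuration on a controller node, for example with grep -A10 'listen heat_api' /etc/haproxy/haproxy.cfg. The sketch below runs the same filter against an inline sample of what the rendered section is expected to contain (values taken from the pillar example above):

```shell
# Sample of the rendered heat_api section (illustrative values);
# on a controller node, read /etc/haproxy/haproxy.cfg instead.
cfg='listen heat_api
  timeout client 90s
  timeout server 3m'

echo "$cfg" | grep -E 'timeout (client|server)'
```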