Issue resolutions requiring manual application

Note

Before proceeding with the manual steps below, verify that you have performed the steps described in Apply maintenance updates.


[28172] MySQL server node fails after desyncing itself from group

Pike, Queens

Fixed the issue that caused a MySQL server node to fail after it desynced itself from the Galera cluster.

To apply the issue resolution:

  1. Log in to the Salt Master node.

  2. Restart the MySQL service on every database server node, one by one.

    For example:

    salt 'dbs03*' cmd.run 'systemctl restart mysql'
    
  3. Verify that every node loaded the updated Galera provider.

    For example:

    salt 'dbs*' mysql.status | grep -A1 wsrep_provider_version
    

    Example of system response:

    wsrep_provider_version:
        3.20(r7e383f7)
    --
    wsrep_provider_version:
        3.20(r7e383f7)
    --
    wsrep_provider_version:
        3.20(r7e383f7)
    
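When checking step 3, all nodes must report an identical provider version. The following sketch shows how the grep output above could be checked programmatically; the helper name is illustrative and not part of any Mirantis tooling.

```python
# Verify that all Galera nodes report the same wsrep provider version,
# given the output of:
#   salt 'dbs*' mysql.status | grep -A1 wsrep_provider_version

def provider_versions(grep_output):
    """Extract every version string that follows a wsrep_provider_version key."""
    lines = [line.strip() for line in grep_output.splitlines()]
    versions = []
    for prev, cur in zip(lines, lines[1:]):
        if prev.startswith("wsrep_provider_version"):
            versions.append(cur)
    return versions

sample = """\
wsrep_provider_version:
    3.20(r7e383f7)
--
wsrep_provider_version:
    3.20(r7e383f7)
--
wsrep_provider_version:
    3.20(r7e383f7)
"""

found = provider_versions(sample)
assert len(set(found)) == 1, "Galera nodes run different provider versions"
print(found)  # ['3.20(r7e383f7)', '3.20(r7e383f7)', '3.20(r7e383f7)']
```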

[29930] Excessive disk usage while clearing ephemeral LVM volumes using shred

Queens

Implemented the ability to set the ionice level for the ephemeral LVM volume shred operation in nova-compute to prevent excessive disk consumption. Setting the ionice level as described below makes sense only if:

  • nova:compute:lvm:ephemeral is set to True
  • nova:compute:lvm:volume_clear is set to zero or shred

To apply the issue resolution:

  1. Log in to the Salt Master node.

  2. In classes/cluster/<cluster_name>/openstack/compute.yml, set volume_clear_ionice_level as required:

    nova:
      compute:
        lvm:
          volume_clear_ionice_level: <level>
    

    Possible <level> values are as follows:

    • idle - to use the idle I/O scheduling class. This option has the least impact on system performance, at the cost of a longer volume clearing time.
    • 0 to 7 - to use the best-effort scheduling class with the priority set to the specified number.
    • No value - to not set the I/O scheduling class explicitly. Mirantis does not recommend using no value since this option has the most aggressive impact on system performance.
  3. Apply the changes:

    salt -C 'I@nova:compute' state.sls nova.compute
    
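The level values above correspond to the scheduling classes of ionice(1). The sketch below illustrates how such a level could map onto an ionice prefix for the shred command; the function and the exact mapping are illustrative assumptions, not the actual nova-compute implementation.

```python
# Illustrative mapping from a volume_clear_ionice_level value to the
# ionice(1) arguments that would wrap the shred command.

def ionice_prefix(level):
    """Return the ionice argument list for a given volume_clear_ionice_level."""
    if level is None:
        return []                                  # no explicit I/O scheduling class
    if level == "idle":
        return ["ionice", "-c3"]                   # idle class: lowest disk impact
    if isinstance(level, int) and 0 <= level <= 7:
        return ["ionice", "-c2", "-n%d" % level]   # best-effort class with priority
    raise ValueError("level must be 'idle', an integer 0-7, or None")

print(ionice_prefix("idle"))  # ['ionice', '-c3']
print(ionice_prefix(4))       # ['ionice', '-c2', '-n4']
print(ionice_prefix(None))    # []
```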

[30205] The Telemetry notification queues in RabbitMQ with disabled Telemetry

Pike, Queens

Disabled the Telemetry notification queues in RabbitMQ for the OpenStack clusters with StackLight enabled and Telemetry disabled.

To apply the issue resolution:

  1. Log in to the Salt Master node.

  2. In classes/cluster/<cluster_name>/openstack/init.yml, remove the notifications variable from the openstack_notification_topics parameter, leaving only the ${_param:stacklight_notification_topic} variable:

    openstack_notification_topics: "${_param:stacklight_notification_topic}"
    
  3. Apply the changes:

    salt "ctl*" state.sls keystone,glance,heat
    salt -C "ctl* or cmp*" state.sls nova,neutron,cinder -b 20
    
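The openstack_notification_topics parameter expands into the comma-separated notification topics list passed to the OpenStack services. The sketch below illustrates the effect of removing the notifications entry; the helper name and the stacklight_notifications topic value are illustrative assumptions.

```python
# Illustrative expansion of the openstack_notification_topics Reclass
# parameter into the final topics list. Removing the extra "notifications"
# entry leaves only the StackLight topic, so RabbitMQ no longer
# accumulates unconsumed Telemetry queues.

def render_topics(openstack_notification_topics,
                  stacklight_topic="stacklight_notifications"):
    """Substitute the Reclass interpolation and split into a topics list."""
    rendered = openstack_notification_topics.replace(
        "${_param:stacklight_notification_topic}", stacklight_topic)
    return [t for t in rendered.split(",") if t]

before = render_topics("notifications,${_param:stacklight_notification_topic}")
after = render_topics("${_param:stacklight_notification_topic}")
print(before)  # ['notifications', 'stacklight_notifications']
print(after)   # ['stacklight_notifications']
```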

[27765] Nova live snapshot feature not using Ceph back end snapshot mechanism

Pike, Queens

Added support for the Ceph back end snapshotting mechanism to the Nova VM live snapshot feature on OpenStack environments that use Ceph as the Nova back end.

To apply the issue resolution:

  1. Log in to the Salt Master node.

  2. In classes/cluster/<cluster_name>/openstack/control.yml of your Reclass model, add the following parameter:

    glance:
      server:
        show_multiple_locations: True
    
  3. In classes/cluster/<cluster_name>/openstack/compute/init.yml, add the following parameter:

    nova:
      compute:
        workaround:
          disable_libvirt_livesnapshot: False
    
  4. Apply the changes:

    salt -C 'I@glance:server' state.sls glance.server
    salt -C 'I@nova:compute' state.sls nova.compute
    
  5. Log in to the cmn01 node.

  6. Define the rbd permission for pools where images and VMs are stored:

    ceph-authtool /etc/ceph/ceph.client.nova.keyring -n client.nova \
    --cap osd 'profile rbd pool=vms, profile rbd pool=images' \
    --cap mon 'allow r, allow command "osd blacklist"'
    

    Substitute the vms and images values with the corresponding pool names for Nova and Glance.

  7. Apply the changes for Ceph:

    ceph auth import -i /etc/ceph/ceph.client.nova.keyring
    
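If your deployment uses pool names other than vms and images, the osd capability string in step 6 must list each pool. The following sketch builds that string for arbitrary pool names; the helper is illustrative and simply reproduces the --cap osd argument shown above.

```python
# Illustrative builder for the osd capability string used in step 6,
# given the actual Nova and Glance pool names.

def rbd_osd_caps(pools):
    """Return a 'profile rbd pool=...' osd capability entry per pool."""
    return ", ".join("profile rbd pool=%s" % pool for pool in pools)

print(rbd_osd_caps(["vms", "images"]))
# profile rbd pool=vms, profile rbd pool=images
```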

[30216] The fs.inotify.max_user_instances value reaches the maximum limit

Pike, Queens

Fixed the issue with reaching the maximum limit of the fs.inotify.max_user_instances parameter value that prevented an OpenStack compute node from being configured as a DHCP node. The fix increases the default value to 4096, with the possibility to modify it as required.

To apply the issue resolution:

  1. Log in to the Salt Master node.

  2. In classes/system/neutron/compute/cluster.yml of your Reclass model, verify that the following snippet exists:

    linux:
      system:
        kernel:
          sysctl:
            fs.inotify.max_user_instances: 4096
    
  3. Apply the changes to the OpenStack compute nodes hosting DHCP:

    salt 'cmp<node_number*>' state.apply linux.system.kernel
    
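On Linux, the current value can be read from /proc/sys/fs/inotify/max_user_instances. The sketch below shows a simple comparison against the new 4096 default; the helper name is illustrative.

```python
# Illustrative check of whether a node's fs.inotify.max_user_instances
# sysctl value already meets the 4096 default set by the fix.

def needs_update(current_value, required=4096):
    """True if the sysctl value is below the required minimum."""
    return int(current_value) < required

print(needs_update("128"))   # True: stock default is too low
print(needs_update("4096"))  # False: already at the fixed default
```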

[31284] Neutron failing to connect to MySQL

Pike

Fixed the issue with neutron-server failing to reconnect to MySQL after a crash of a MySQL server.

To apply the issue resolution:

  1. Log in to the Salt Master node.

  2. Apply the neutron state on the OpenStack controller nodes:

    salt -C 'I@neutron:server' state.sls neutron