Known issues

This section lists the MCP 2019.2.4 known issues and workarounds.


[31028] Barbican may interfere with other services

PIKE, fixed in 2019.2.5

Barbican may interfere with other services, such as Ceilometer, Aodh, Panko, or Designate, by consuming notifications needed by these services to function properly. The symptoms of the issue include:

  • Event alarms are sometimes not triggered
  • Designate records are sometimes not created automatically
  • Some events are missing in Panko

Workaround:

  1. Log in to the Salt Master node.

  2. Open your project Git repository with the Reclass model on the cluster level.

  3. In the /classes/cluster/<cluster_name>/openstack/control.yml file, set an additional topic for Keystone to send notifications to:

    keystone:
      server:
        notification:
          topics: "notifications, stacklight_notifications, barbican_notifications"
    
  4. In the /classes/cluster/<cluster_name>/openstack/barbican.yml file, configure Barbican to listen on its own topic:

    barbican:
      server:
        ks_notifications_topic: barbican_notifications
    
  5. Apply the changes:

    salt 'ctl*' state.apply keystone -b 1
    salt 'kmn*' state.apply barbican -b 1
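
    To confirm that the changes took effect, you can optionally inspect the rendered pillar data for both settings. This verification step is not part of the original workaround; pillar.get is a standard Salt execution module function, and the ctl*/kmn* targets follow the examples above:

    salt 'ctl*' pillar.get keystone:server:notification:topics
    salt 'kmn*' pillar.get barbican:server:ks_notifications_topic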
    

[31397] Upgrade of controller VMs fails on the ctl01 node

PIKE TO QUEENS UPGRADE, fixed in 2019.2.5

The Deploy - upgrade control VMs pipeline job fails for the ctl01 node during the OpenStack environment upgrade from Pike to Queens with a heat-keystone-setup-domain authorization error.

Workaround:

  1. Log in to the Salt Master node.

  2. Open your project Git repository with the Reclass model on the cluster level.

  3. In /classes/cluster/<cluster_name>/infra/init.yml, add the system.linux.network.hosts.openstack class.
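
    For reference, in Reclass the class is appended to the classes list at the top of init.yml. A minimal sketch, assuming the file already lists other classes (which differ per deployment):

    classes:
    - system.linux.network.hosts.openstack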

  4. Refresh pillars:

    salt '*' saltutil.refresh_pillar
    
  5. Apply the changes:

    salt '*' state.apply linux.network.host
    salt 'ctl*' state.apply keystone.server
    
  6. Verify the Keystone user list:

    salt 'ctl*' cmd.run ". /root/keystonercv3; openstack user list"
    

    The system response must contain the Keystone user list.

    Example of system response extract:

    ctl03.8827.local:
    +----------------------------------+----------------------+
    | ID                               | Name                 |
    +----------------------------------+----------------------+
    | 01a8ab06442a4a0193088e9ce112defa | glance               |
    | 06367bc2db6e497694279fc87f1b4b91 | nova                 |
    | 2f80a6609ab1402abd9257cf0e414c97 | neutron              |
    | 4e30f3e7d0a045a29094f5fe684dd955 | heat_domain_admin    |
    | 9b575cef6b6744fb853fb6ebedfe41f5 | cinder               |
    | b6b3f72daaee4b479a90e0a764d9548e | admin                |
    | e8a58ebfacab41318709255be6714439 | barbican             |
    | fe9cdd9f456844d194682a4d265679be | heat                 |
    +----------------------------------+----------------------+
    
  7. Rerun the Deploy - upgrade control VMs pipeline job.


[31462] Kubernetes deployment failure

The deployment of Kubernetes with Calico using the Deploy - OpenStack pipeline job fails during the CA file generation stage.

Workaround:

  1. Log in to the Salt Master node.

  2. Update the Salt mine:

    state.sls salt.minion.ca
    
  3. Create the CA file:

    state.sls salt.minion.cert
    
  4. Re-run the Deploy - OpenStack pipeline job to finalize the Kubernetes deployment.
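
    Steps 2 and 3 above list only the Salt state names. Assuming the states are meant to be applied to the Salt Master node's own minion (an assumption; the required target may differ in your deployment), one way to invoke them is with salt-call:

    # Assumption: the states run on the Salt Master's local minion
    salt-call state.sls salt.minion.ca
    salt-call state.sls salt.minion.cert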