This section lists the MCP 2019.2.4 known issues and workarounds.
PIKE, fixed in 2019.2.5
Barbican may interfere with other services, such as Ceilometer, Aodh, Panko, or Designate, by consuming notifications that these services need to function properly.
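The root cause can be illustrated with a toy model (plain Python for illustration, not MCP or oslo.messaging code): consumers that share a single notification topic compete for the same messages, while dedicated topics give every service its own copy of the stream.

```python
# Toy model of notification topics (illustration only, not MCP code).
from queue import Queue

EVENTS = ["identity.user.created", "identity.role.assigned", "identity.user.deleted"]

# Shared topic: Barbican and StackLight compete for the same messages,
# so each message is delivered to only one of them.
shared = Queue()
for event in EVENTS:
    shared.put(event)

barbican_seen = []
while not shared.empty():
    barbican_seen.append(shared.get())  # Barbican drains the topic first...

stacklight_seen = []                    # ...and StackLight receives nothing.
print(stacklight_seen)                  # []

# Dedicated topics: Keystone publishes every event to each topic, so both
# consumers see the full stream independently.
topics = {"stacklight_notifications": Queue(), "barbican_notifications": Queue()}
for event in EVENTS:
    for topic_queue in topics.values():
        topic_queue.put(event)
print(topics["stacklight_notifications"].qsize())  # 3
```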
Workaround:
Log in to the Salt Master node.
Open your project Git repository with the Reclass model on the cluster level.
In the /classes/cluster/<cluster_name>/openstack/control.yml file, set an additional topic for Keystone to send notifications to:
keystone:
  server:
    notification:
      topics: "notifications, stacklight_notifications, barbican_notifications"
In the /classes/cluster/<cluster_name>/openstack/barbican.yml file, configure Barbican to listen on its own topic:
barbican:
  server:
    ks_notifications_topic: barbican_notifications
Apply the changes:
salt 'ctl*' state.apply keystone -b 1
salt 'kmn*' state.apply barbican -b 1
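The -b 1 option runs each state in batches of one node at a time, so the services are restarted on one controller or key manager node before the next one is touched. A rough sketch of the batching behavior (plain Python, for illustration):

```python
# Rough sketch of Salt's batch mode (-b N): targets are processed in
# sequential batches of size N instead of all in parallel.
def run_in_batches(minions, batch_size):
    for start in range(0, len(minions), batch_size):
        yield minions[start:start + batch_size]

# With -b 1, each control node is handled on its own:
print(list(run_in_batches(["ctl01", "ctl02", "ctl03"], 1)))
# [['ctl01'], ['ctl02'], ['ctl03']]
```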
PIKE TO QUEENS UPGRADE, fixed in 2019.2.5
The Deploy - upgrade control VMs pipeline job fails for the ctl01 node during the OpenStack environment upgrade from Pike to Queens with a heat-keystone-setup-domain authorization error.
Workaround:
Log in to the Salt Master node.
Open your project Git repository with the Reclass model on the cluster level.
In /classes/cluster/<cluster_name>/infra/init.yml, add the system.linux.network.hosts.openstack class.
Refresh pillars:
salt '*' saltutil.refresh_pillar
Apply the changes:
salt '*' state.apply linux.network.host
salt 'ctl*' state.apply keystone.server
Verify the Keystone user list:
salt 'ctl*' cmd.run ". /root/keystonercv3; openstack user list"
The system response must contain the Keystone user list.
Example of system response extract:
ctl03.8827.local:
+----------------------------------+----------------------+
| ID | Name |
+----------------------------------+----------------------+
| 01a8ab06442a4a0193088e9ce112defa | glance |
| 06367bc2db6e497694279fc87f1b4b91 | nova |
| 2f80a6609ab1402abd9257cf0e414c97 | neutron |
| 4e30f3e7d0a045a29094f5fe684dd955 | heat_domain_admin |
| 9b575cef6b6744fb853fb6ebedfe41f5 | cinder |
| b6b3f72daaee4b479a90e0a764d9548e | admin |
| e8a58ebfacab41318709255be6714439 | barbican |
| fe9cdd9f456844d194682a4d265679be | heat |
+----------------------------------+----------------------+
Rerun the Deploy - upgrade control VMs pipeline job.
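To check the system response programmatically, a small helper (hypothetical, not part of the MCP tooling) can parse the table printed by openstack user list and confirm that the expected service users are present:

```python
# Hypothetical helper: parse the ASCII table printed by `openstack user list`
# and return the set of user names it contains.
EXPECTED_USERS = {"glance", "nova", "neutron", "cinder", "heat", "barbican", "admin"}

def users_from_table(table):
    users = set()
    for line in table.splitlines():
        cells = [cell.strip() for cell in line.strip().strip("|").split("|")]
        # Data rows have exactly two cells (ID and Name); border lines
        # produce one cell and the header row starts with "ID".
        if len(cells) == 2 and cells[0] != "ID":
            users.add(cells[1])
    return users

sample = """\
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 01a8ab06442a4a0193088e9ce112defa | glance  |
| b6b3f72daaee4b479a90e0a764d9548e | admin   |
+----------------------------------+---------+"""
print(EXPECTED_USERS - users_from_table(sample))  # service users still missing
```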
The Kubernetes with Calico deployment using the Deploy - OpenStack pipeline job fails during the CA file generation stage.
Workaround:
Log in to the Salt Master node.
Update the mine:
salt-call state.sls salt.minion.ca
Create the CA file:
salt-call state.sls salt.minion.cert
Re-run the Deploy - OpenStack pipeline job to finalize the Kubernetes deployment.