Addressed issues

The following issues have been addressed in the MOSK 23.3 release:

  • [OpenStack] [34897] Resolved the issue that caused machines on the DPDK-enabled nodes to become unavailable after the OpenStack update from Victoria to Wallaby.

  • [OpenStack] [34411] Resolved the issue with an incorrect port value set for RabbitMQ after the update.

  • [OpenStack] [25124] Improved the performance of data transfer between instances affected by the Multiprotocol Label Switching over Generic Routing Encapsulation (MPLSoGRE) throughput limitation.

  • [TF] [30738] Fixed the issue that caused the tf-vrouter-agent readiness probe failure (No Configuration for self).

  • [Update] [35111] Resolved the issue that caused the openstack-operator-ensure-resources job to get stuck in the CrashLoopBackOff state.

  • [WireGuard] [35147] Resolved the issue that prevented the WireGuard interface from being assigned an IPv4 address.

  • [Bare metal] [34342] Resolved the issue that caused a failure of the etcd pods due to the simultaneous deployment of several pods on a single node. To ensure that etcd pods are always placed on different nodes, MOSK now deploys etcd with the requiredDuringSchedulingIgnoredDuringExecution policy.
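    A hard pod anti-affinity rule of this kind can be sketched as the following Kubernetes manifest fragment. This is an illustration only, not the exact MOSK configuration: the pod label selector and its value are assumptions, while kubernetes.io/hostname is the standard topology key for per-node spreading.

    # Hypothetical sketch of the requiredDuringSchedulingIgnoredDuringExecution
    # policy; the actual label selector used by MOSK may differ.
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: etcd                      # assumed pod label
            topologyKey: kubernetes.io/hostname  # one etcd pod per node

    With a required (as opposed to preferred) rule, the scheduler refuses to place a second matching pod on a node that already runs one, which is what guarantees the etcd pods land on different nodes.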

  • [StackLight] [35738] Resolved the issue with the ucp-node-exporter failing to start because it could not bind port 9100, which was already in use by the StackLight node-exporter.

    To resolve the issue, the port for the StackLight node-exporter automatically changes from 9100 to 19100. No manual port update is required.

    If your cluster uses a firewall, add an additional firewall rule that grants the same permissions to port 19100 as those currently assigned to port 9100 on all cluster nodes.
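    For example, on nodes where the firewall is managed with iptables, such a rule might look like the fragment below. This is a hypothetical illustration: the source network 192.168.0.0/24 is an assumption and should be replaced with whatever source your existing rule for port 9100 permits.

    # Hypothetical rule: allow TCP 19100 from the monitoring network,
    # mirroring the existing rule for port 9100. Adjust the source CIDR
    # to match your current 9100 rule.
    iptables -A INPUT -p tcp --dport 19100 -s 192.168.0.0/24 -j ACCEPT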