Addressed issues

The following issues have been addressed in the Mirantis Container Cloud release 2.25.0 along with the Cluster releases 17.0.0, 16.0.0, and 14.1.0.


This section provides descriptions of issues addressed since the last Container Cloud patch release 2.24.5.

For details on addressed issues in earlier patch releases since 2.24.0, which are also included in the major release 2.25.0, refer to 2.24.x patch releases.

  • [34462] [BM] Fixed the issue with incorrect handling of the DHCP egress traffic by reconfiguring the external traffic policy for the dhcp-lb Kubernetes Service. For details about the issue, refer to the Kubernetes upstream bug.

    On existing clusters that have multiple L2 segments and use DHCP relays on the border switches, manually point the DHCP relays on your network infrastructure to the new IP address of the dhcp-lb Service of the Container Cloud cluster. Otherwise, provisioning of new nodes and reprovisioning of existing ones will fail.

    To obtain the new IP address:

    kubectl -n kaas get service dhcp-lb
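    If you only need the IP address itself, for example when scripting the relay update, a jsonpath query can print it directly. This is a sketch only and assumes the dhcp-lb Service exposes a LoadBalancer ingress IP:

    ```shell
    # Print only the external IP of the dhcp-lb Service
    # (assumes a LoadBalancer ingress IP has been assigned)
    kubectl -n kaas get service dhcp-lb \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    ```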
  • [35429] [BM] Fixed the issue with the WireGuard interface not having an IPv4 address assigned. The fix includes an automatic restart of the calico-node Pod to allocate the IPv4 address on the WireGuard interface.

  • [36131] [BM] Fixed the issue with IpamHost object changes not being propagated to LCMMachine during netplan configuration after cluster deployment.

  • [34657] [LCM] Fixed the issue with iam-keycloak Pods failing to start when the Container Cloud upgrade was started right after powering up the master nodes.

  • [34750] [LCM] Fixed the issue with journald generating a large number of log messages that duplicated entries already present in the auditd log, caused by the enabled systemd-journald-audit.socket.

  • [35738] [StackLight] Fixed the issue with ucp-node-exporter failing to start because it could not bind port 9100, which was already bound by the StackLight node-exporter.

    To resolve the issue, the port for the StackLight node-exporter is automatically changed from 9100 to 19100. No manual port update is required.

    If your cluster uses a firewall, add an additional firewall rule that grants the same permissions to port 19100 as those currently assigned to port 9100 on all cluster nodes.
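    For example, with iptables such a rule could look as follows. This is a hypothetical sketch only; adapt it to the firewall tooling actually used on your nodes, and mirror whatever restrictions (source networks, interfaces) your existing rule for port 9100 applies:

    ```shell
    # Allow inbound connections to the relocated node-exporter port
    # (hypothetical example; replicate your existing 9100 rule instead)
    iptables -A INPUT -p tcp --dport 19100 -j ACCEPT
    ```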

  • [34296] [StackLight] Fixed the issue with the CPU over-consumption by helm-controller leading to the KubeContainersCPUThrottlingHigh alert firing.