This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.5.0. For the list of enhancements in the Cluster release 5.12.0 and Cluster release 6.12.0 that are supported by the Container Cloud release 2.5.0, see the 5.12.0 and 6.12.0 sections.
Updated version of Mirantis Kubernetes Engine¶
Updated the Mirantis Kubernetes Engine (MKE) version to 3.3.6 for the Container Cloud management and managed clusters.
For the MKE release highlights and components versions, see MKE documentation: MKE release notes.
Proxy support for OpenStack and VMware vSphere providers¶
Implemented proxy support for OpenStack-based and vSphere-based Technology Preview clusters. If you require all Internet access to go through a proxy server for security and audit purposes, you can now bootstrap management and regional clusters using a proxy.
You can also enable separate proxy access on an OpenStack-based managed cluster using the Container Cloud web UI. This proxy is intended for end user needs and is not used for managed cluster deployment or for access to Mirantis resources.
Proxy support availability by provider:
For the OpenStack provider, the feature is generally available.
For the VMware vSphere provider, the feature is available as Technology Preview. For the Technology Preview feature definition, refer to Technology Preview features.
For the AWS and bare metal providers, the feature is under development and will become available in future Container Cloud releases.
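For illustration, a cluster proxy is typically described as a separate object that the cluster configuration references. The following sketch shows such an object; the API group, version, kind, and field names are assumptions and may differ in your Container Cloud release, so verify them against the API reference before use:

```yaml
# Hypothetical Proxy object for a managed cluster.
# apiVersion, kind, and spec field names are assumptions;
# check the Container Cloud API reference for your release.
apiVersion: kaas.mirantis.com/v1alpha1
kind: Proxy
metadata:
  name: demo-proxy        # illustrative name
  namespace: child-ns     # namespace of the managed cluster project
spec:
  httpProxy: http://squid.internal.example:3128
  httpsProxy: http://squid.internal.example:3128
```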
Artifacts caching¶
Introduced artifacts caching support for all Container Cloud providers to enable deployment of managed clusters without direct Internet access. The Mirantis artifacts used during managed cluster deployment are downloaded through a cache running on a regional cluster.
The feature is enabled by default on new managed clusters based on the Cluster releases 5.12.0 and 6.12.0 and will be automatically enabled on existing clusters during upgrade to the latest version.
NTP server configuration on regional clusters¶
Implemented the possibility to configure regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region. The feature is applicable to all supported cloud providers. The NTP server parameters can be added before or after management and regional clusters deployment.
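As a sketch, NTP servers for a region are typically listed in the cluster object specification. The field path below (`ntp.servers` under `spec.providerSpec.value`) is an assumption for illustration; consult the configuration reference for the exact schema:

```yaml
# Hypothetical fragment of a regional Cluster object configuring
# NTP servers for all machines in the region. Field paths are
# assumptions; verify against the release documentation.
spec:
  providerSpec:
    value:
      ntp:
        servers:
          - 0.pool.ntp.org
          - 1.pool.ntp.org
```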
Optimized ClusterRelease upgrade process¶
Optimized the ClusterRelease upgrade process by enabling the Container Cloud provider to upgrade the LCMCluster components, such as MKE, before the HelmBundle components, such as StackLight or Ceph.
Dedicated network for external connection to the Kubernetes services¶
Implemented the k8s-ext bridge in L2 templates that allows you to use a dedicated network for external connection to the Kubernetes services exposed by the cluster. When using such a bridge, the MetalLB address ranges and the IP addresses provided by the subnet that is associated with the bridge must fit in the same CIDR.
If enabled, MetalLB will listen and respond on the dedicated virtual bridge. Also, you can create additional subnets to configure additional address ranges for MetalLB.
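To make the bridge and subnet relationship concrete, the sketch below pairs an L2 template that defines a k8s-ext bridge with the Subnet object backing it. All names, API versions, and template fields here are illustrative assumptions, not the exact schema:

```yaml
# Hypothetical L2Template fragment adding a k8s-ext bridge, plus the
# Subnet associated with it. Names and fields are illustrative only;
# verify against the Container Cloud API reference.
apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  name: l2-template-ext
spec:
  l3Layout:
    - subnetName: k8s-ext-subnet   # subnet associated with the bridge
      scope: namespace
  npTemplate: |
    bridges:
      k8s-ext:
        interfaces:
          - {{ nic 1 }}
        addresses:
          - {{ ip "k8s-ext:k8s-ext-subnet" }}
---
apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  name: k8s-ext-subnet
spec:
  cidr: 10.0.50.0/24
  includeRanges:
    # MetalLB ranges and these addresses must fit in the same CIDR
    - 10.0.50.10-10.0.50.100
```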
Use of a dedicated network for Kubernetes pods traffic, for external connection to the Kubernetes services exposed by the cluster, and for the Ceph cluster access and replication traffic is available as Technology Preview. Use such configurations for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview features.
The following feature is still under development and will be announced in one of the upcoming Container Cloud releases:
Switching Kubernetes API to listen to the specified IP address on the node