Enhancements¶
This section outlines new features and enhancements introduced in the Mirantis Container Cloud release 2.17.0. For the list of enhancements in the Cluster releases 11.1.0 and 7.7.0 introduced by the Container Cloud release 2.17.0, see Cluster releases (managed).
General availability for Ubuntu 20.04 on greenfield deployments
Container Cloud on top of MOSK Victoria with Tungsten Fabric
EBS instead of NVMe as persistent storage for AWS-based nodes
Manager nodes deletion on all cluster types
Custom values for node labels
Machine pools
Automatic propagation of Salesforce configuration to all clusters
General availability for Ubuntu 20.04 on greenfield deployments¶
Implemented full support for Ubuntu 20.04 LTS (Focal Fossa) as the default host operating system that now installs on management, regional, and managed clusters for the following cloud providers: AWS, Azure, OpenStack, Equinix Metal with public or private networking, and non-MOSK-based bare metal.
For the vSphere and MOSK-based (managed) deployments, support for Ubuntu 20.04 will be announced in one of the following Container Cloud releases.
Note
The management or regional bare metal cluster dedicated for managed clusters running MOSK is based on Ubuntu 20.04.
Caution
Upgrading from Ubuntu 18.04 to 20.04 on existing deployments is not supported.
Container Cloud on top of MOSK Victoria with Tungsten Fabric¶
Implemented the capability to deploy Container Cloud management, regional, and managed clusters based on OpenStack Victoria with Tungsten Fabric networking on top of Mirantis OpenStack for Kubernetes (MOSK) Victoria with Tungsten Fabric.
Note
On the MOSK Victoria with Tungsten Fabric clusters of Container Cloud deployed before MOSK 22.3, Octavia enables a default security group for newly created load balancers. To change this configuration, refer to MOSK Operations Guide: Configure load balancing. To use the default security group, configure ingress rules as described in Create a managed cluster.
EBS instead of NVMe as persistent storage for AWS-based nodes¶
Replaced the Non-Volatile Memory Express (NVMe) drive type with the Amazon Elastic Block Store (EBS) one as the persistent storage requirement for AWS-based nodes. This change prevents cluster nodes from becoming unusable after instances are stopped and NVMe drives are erased.
Previously, the Docker data in /var/lib/docker was located on local NVMe SSDs by default. Now, this data is located on the same EBS volume as the operating system.
Manager nodes deletion on all cluster types¶
TechPreview
Implemented the capability to delete manager nodes with the purpose of replacement or recovery. Consider the following precautions:
Create a new manager machine to replace the deleted one as soon as possible. This is necessary since after a machine removal, the cluster has limited capabilities to tolerate faults. Deletion of manager machines is intended only for replacement or recovery of failed nodes.
You can delete a manager machine only if your cluster has at least two manager machines in the Ready state.
Do not delete more than one manager machine at once to prevent cluster failure and data loss.
For MOSK-based clusters, after a manager machine deletion, proceed with additional manual steps described in Mirantis OpenStack for Kubernetes Operations Guide: Replace a failed controller node.
For the Equinix Metal and bare metal providers:
Ensure that the machine to delete is not a Ceph Monitor. If it is, migrate the Ceph Monitor to keep the odd number quorum of Ceph Monitors after the machine deletion. For details, see Migrate a Ceph Monitor before machine replacement.
If you delete a machine on the regional cluster, refer to the known issue 23853 to complete the deletion.
For the sake of HA, limited the managed cluster size to an odd number of manager machines. In an even-sized cluster, an additional machine remains in the Pending state until an extra manager machine is added.
Custom values for node labels¶
Extended the use of node labels for all supported cloud providers with the ability to set custom values. Especially from the MOSK standpoint, this feature makes it easy to schedule overrides for OpenStack services using the API. For example, you can now set the node-type label to define the node purpose, such as hpc-compute, compute-lvm, or storage-ssd, in its value.
The list of allowed node labels is located in the Cluster object status, in the providerStatus.releaseRef.current.allowedNodeLabels field. Before or after a machine deployment, add the required label from the allowed node labels list with the corresponding value to spec.providerSpec.value.nodeLabels in machine.yaml.
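For example, a fragment of machine.yaml with a custom node label value might look like the following sketch. Only the spec.providerSpec.value.nodeLabels path and the node-type example come from this section; the surrounding structure and the assumption that nodeLabels accepts a list of key-value pairs should be verified against your provider-specific Machine object:

    spec:
      providerSpec:
        value:
          nodeLabels:
          - key: node-type        # must be present in allowedNodeLabels for the cluster
            value: hpc-compute    # custom value defining the node purpose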
Note
Due to the known issue 23002, it is not possible to set a custom value for a predefined node label using the Container Cloud web UI. For a workaround, refer to the issue description.
Machine pools¶
Introduced the MachinePool custom resource. A machine pool is a template that allows managing a set of machines with the same provider spec as a single unit. You can create different sets of machine pools with the required specs during machine creation on a new or existing cluster using the Create machine wizard in the Container Cloud web UI. You can assign machines to a pool or unassign them from it, if required. You can also increase or decrease the replicas count of a pool. If you increase the replicas count, new machines are added automatically.
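As an illustration only, a MachinePool object could look similar to the following sketch. Apart from the MachinePool resource name and the replicas concept described above, the API group, field names, and values are assumptions and may differ from the actual schema:

    apiVersion: kaas.mirantis.com/v1alpha1   # assumption: the actual API group/version may differ
    kind: MachinePool
    metadata:
      name: worker-pool                      # illustrative pool name
      namespace: example-project             # illustrative project namespace
    spec:
      replicas: 3                            # increasing this count adds new machines automatically
      machineSpec:                           # assumption: shared provider spec applied to every machine in the pool
        providerSpec:
          value:
            nodeLabels:
            - key: node-type
              value: compute-lvm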
Automatic propagation of Salesforce configuration to all clusters¶
Implemented the capability to enable automatic propagation of the Salesforce configuration of your management cluster to the related regional and managed clusters using the autoSyncSalesForceConfig=true flag added to the Cluster object of the management cluster. This option allows for automatic update and sync of the Salesforce settings on all your clusters after you update your management cluster configuration.
You can also set custom settings for regional and managed clusters that always override automatically propagated Salesforce values.
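A minimal sketch of how this flag might appear in the Cluster object of the management cluster is shown below. Only the autoSyncSalesForceConfig flag name comes from this release; the location and name of the Salesforce settings section within the provider spec are assumptions to be verified against your Cluster object:

    spec:
      providerSpec:
        value:
          # Hypothetical section name; check the actual Salesforce settings block
          # in your management cluster's Cluster object.
          salesforce:
            autoSyncSalesForceConfig: true   # propagate this configuration to regional and managed clusters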
Note
The capability to enable this option using the Container Cloud web UI will be announced in one of the following releases.