New features

Pre-update inspection of pinned product artifacts in a Cluster object

To ensure that Container Cloud clusters remain consistently updated with the latest security fixes and product improvements, the Admission Controller has been enhanced. It now actively prevents the use of pinned custom artifacts for Container Cloud components. Specifically, it blocks a management or managed cluster release update, as well as any cluster configuration update, for example, adding public keys or a proxy, if the Cluster object contains custom Container Cloud artifacts with global or image-related values overwritten in the helm-releases section, until these values are removed.

Normally, Container Cloud clusters do not contain pinned artifacts, which eliminates the need for any pre-update actions in most deployments. However, if the update of your cluster is blocked with the invalid HelmReleases configuration error, refer to Update notes: Pre-update actions for details.
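For illustration, a Cluster object fragment of the kind that triggers the block might look as follows. This is a sketch only: the release name, registry, and tag are hypothetical, and the exact helmReleases layout depends on your provider configuration.

```yaml
apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
spec:
  providerSpec:
    value:
      helmReleases:
        - name: openstack-operator  # illustrative release name
          values:
            image:
              # Pinned custom image: an overwritten image-related value
              # like this blocks the cluster update until removed.
              repository: registry.example.com/custom/openstack-operator
              tag: 0.1.0-custom
```

Removing such overrides from the helm-releases section unblocks the update.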

Note

In rare cases when image-related or global values must be changed, use the ClusterRelease or KaaSRelease objects instead. However, make sure to update these values manually after every major and patch update.

Note

The pre-update inspection applies only to overwritten images delivered by Container Cloud. Any custom images unrelated to the product components are not verified and do not block the cluster update.

OpenStack Antelope

Added full support for OpenStack Antelope with Open vSwitch and Tungsten Fabric 21.4 networking back ends.

Starting from 24.1, MOSK deploys all new clouds with OpenStack Antelope by default. To upgrade an existing cloud from OpenStack Yoga to Antelope, follow the Upgrade OpenStack procedure.

For the OpenStack support cycle in MOSK, refer to OpenStack support cycle.

Highlights from upstream OpenStack supported by MOSK deployed on Antelope

Designate:

  • Ability to share Designate zones across multiple projects. This not only allows two or more projects to manage recordsets in the zone but also enables “Classless IN-ADDR.ARPA delegation” (RFC 2317) in Designate. “Classless IN-ADDR.ARPA delegation” permits IP address DNS PTR record assignment in smaller blocks without creating a DNS zone for each address.
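To illustrate the RFC 2317 convention mentioned above, the following sketch computes the conventional reverse zone name for a sub-/24 IPv4 block. The `<first-address>/<prefix>` naming scheme is the common convention from RFC 2317; Designate itself is not involved in this snippet.

```python
import ipaddress

def rfc2317_zone(cidr: str) -> str:
    """Return the conventional RFC 2317 reverse zone name for a sub-/24 IPv4 block."""
    net = ipaddress.ip_network(cidr)
    if net.version != 4 or net.prefixlen <= 24:
        raise ValueError("RFC 2317 applies to IPv4 blocks smaller than a /24")
    o1, o2, o3, o4 = net.network_address.packed
    # "<first-address>/<prefix>" naming convention from RFC 2317
    return f"{o4}/{net.prefixlen}.{o3}.{o2}.{o1}.in-addr.arpa."

print(rfc2317_zone("192.0.2.128/28"))  # 128/28.2.0.192.in-addr.arpa.
```

With zone sharing, a single such classless reverse zone can be managed by several projects without creating a separate DNS zone per address.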

Manila:

  • Feature parity between the native client and OSC.

  • Capability for users to specify metadata when creating their share snapshots. The behavior is similar to that of Manila shares: users can query snapshots by filtering them by metadata and update or delete the metadata of the given resources.

Neutron:

  • Capability for managing network traffic based on packet rate by implementing the QoS (Quality of Service) rule type “packets per second” (pps).

Nova:

  • Improved behavior for Windows guests by adding new Hyper-V enlightenments on all libvirt guests by default.

  • Ability to unshelve an instance to a specific host (admin only).

  • With microversion 2.92, the capability to only import a public key, without generating a keypair, as well as the capability to use an extended key name pattern.

Octavia:

  • Support for notifications about major events of the life cycle of a load balancer. Only loadbalancer.[create|update|delete].end events are emitted.

SPICE remote console

TechPreview

Implemented the capability to enable the SPICE remote console through the OpenStackDeployment custom resource as a method to interact with OpenStack virtual machines through the CLI and desktop clients as well as MOSK Dashboard (OpenStack Horizon).

Using the SPICE remote console is an alternative to using the noVNC-based VNC remote console.
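A minimal OpenStackDeployment fragment of the following shape could enable the feature. The exact field path is an assumption here; verify it against the MOSK configuration reference for your release.

```yaml
apiVersion: lcm.mirantis.com/v1alpha1
kind: OpenStackDeployment
metadata:
  name: osh-dev
spec:
  features:
    nova:
      console:
        spice:
          enabled: true  # assumed field path, verify in the MOSK reference
```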

Windows guests

TechPreview

Implemented the capability to configure and run Windows guests on OpenStack, which allows for optimization of cloud infrastructure for diverse workloads.

GPU virtualization

TechPreview

Introduced support for the Virtual Graphics Processing Unit (vGPU) feature that allows for leveraging the power of virtualized GPU resources to enhance performance and scalability of cloud deployments.
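Under the hood, vGPU relies on the upstream Nova option enabled_mdev_types in the [devices] section. A sketch of passing it through the OpenStackDeployment custom resource might look as follows; the spec path and the mdev type name are assumptions, so verify them against the MOSK configuration reference and your GPU driver documentation.

```yaml
spec:
  services:
    compute:
      nova:
        values:
          conf:
            nova:
              devices:
                # mdev type exposed by your GPU driver, for example an
                # NVIDIA vGPU profile; the value below is illustrative
                enabled_mdev_types: nvidia-233
```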

Deterministic Open vSwitch restarts

Implemented new logic for the Open vSwitch restart process during a MOSK cluster update that minimizes workload downtime.

Orchestration of stateful applications rescheduling

Implemented automated management and coordination of stateful application rescheduling.

CQL to connect with Cassandra clusters

TechPreview

Enhanced the connectivity between the Tungsten Fabric services and Cassandra database clusters through the Cassandra Query Language (CQL) protocol.

Tungsten Fabric Operator API v2

TechPreview

Introduced technical preview support for API v2 of the Tungsten Fabric Operator. This API version aligns with the OpenStack Controller API and provides a better interface for advanced configurations.

In MOSK 24.1, API v2 is available only for greenfield product deployments with Tungsten Fabric. The Tungsten Fabric configuration documentation provides configuration examples for both API v1alpha1 and API v2.
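For orientation, a v2 TFOperator object starts with a header of roughly the following shape. The API group and version string are assumptions; copy the exact values from the configuration examples in the Tungsten Fabric documentation for your release.

```yaml
apiVersion: tf.mirantis.com/v2  # assumed group/version, verify per release
kind: TFOperator
metadata:
  name: openstack-tf
  namespace: tf
spec: {}  # advanced configuration goes here
```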

Tungsten Fabric analytics services unsupported

Removed support for Tungsten Fabric analytics services, which were primarily designed for collecting various metrics from the Tungsten Fabric services.

Despite the initial implementation, user demand for this feature has been minimal. As a result, Tungsten Fabric analytics services are no longer supported in the product.

By default, all greenfield deployments starting from MOSK 24.1 do not include Tungsten Fabric analytics services and rely on StackLight capabilities instead. Existing deployments updated to 24.1 or later still include Tungsten Fabric analytics services, along with the ability to disable them.

Monitoring of OpenStack credential rotation dates

Implemented alerts to notify cloud users when their OpenStack administrator and OpenStack service user credentials are overdue for rotation.

Removal of the StackLight telegraf-openstack plugin

Removed the StackLight telegraf-openstack plugin and replaced it with osdpl-exporter.

All valuable Telegraf metrics used by StackLight components have been reimplemented in osdpl-exporter, and all dependent StackLight alerts and dashboards now use the new metrics.

Restrictive network policies for Kubernetes pods

Implemented more restrictive network policies for Kubernetes pods running OpenStack services.

As part of the enhancement, added NetworkPolicy objects for all types of Ceph daemons. These policies allow only specified ports to be used by the corresponding Ceph daemon pods.
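As an illustration of the approach, a NetworkPolicy of the following shape restricts a Ceph Monitor pod to its well-known ports. The labels and namespace are illustrative; the actual objects are created and managed by the product.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ceph-mon-policy
  namespace: rook-ceph  # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: rook-ceph-mon  # illustrative pod label
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 3300  # Ceph messenger v2
        - protocol: TCP
          port: 6789  # Ceph messenger v1
```

Traffic to any port not listed in the ingress rules is dropped for the selected pods.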