Perform migration¶
Before starting the migration, ensure that you, as a cloud operator, have read and understood the procedure, assessed the risks, and completed all steps described in Prepare environment for migration and Perform pre-migration verification.
Caution
Mirantis strongly recommends performing the preparation and migration of your networking from OVS to OVN in a staging environment before executing the procedure in production.
Testing the full migration in a staging environment that closely mirrors the target environment configuration is essential to identify potential issues early and minimize operational risk.
Additionally, no automated rollback to OVS is available. The only option is to restore the Neutron database from a backup. Such a rollback may cause significant downtime for user workloads, and the network state may remain inconsistent even after restoration.
Launch the migration using the osctl-ovs-ovn-migrate utility¶
1. Verify that the OpenStackDeployment object is in the healthy state.

2. Launch the migration process from the OpenStackDeployment container in the Rockoon pod:

   osctl-ovs-ovn-migrate migration <OPTIONAL-PARAMETERS>
Optional parameters:

--non-interactive

   Runs the migration in non-interactive mode. Default is False.

   In interactive mode, the utility prompts for confirmation after each stage. Interactive mode is recommended for production to maintain full control over each step.

--max-workers

   Maximum number of workers to spawn for parallel operations. Default is 0, in which case the internal defaults for each operation apply. For example, for parallel pod operations, such as exec, the number of workers equals the number of target pods.

For an example that puts the command and these parameters together, see the sketch below.
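As an illustration only, the following sketch shows one way to invoke the utility from a workstation with kubectl access. The namespace (osh-system) and the Deployment name (rockoon) are assumptions and may differ in your environment; adjust them to match your deployment.

# Hypothetical invocation: namespace and deployment name are assumptions
kubectl -n osh-system exec -it deploy/rockoon -- \
  osctl-ovs-ovn-migrate migration --max-workers 10

The -it flags keep the session interactive so that the per-stage confirmation prompts recommended for production remain available. Add --non-interactive only if you intentionally want the utility to proceed without prompting.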
The tool writes all logs of the INFO and higher levels to stdout, and DEBUG-level logs to the /tmp/ovs-ovn-migration/ovs-ovn-migration.log file. Also, logs from the CLEANUP stage are collected from all nodes and saved under the /tmp/ovs-ovn-migration/<CLEANUP-DS-NAME> directory.
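The DEBUG-level log and the collected CLEANUP logs reside on the file system of the container that runs the utility. As a convenience sketch, assuming the same rockoon Deployment and osh-system namespace used in the earlier example, you can inspect them from a second session:

# Show the most recent DEBUG-level log entries (names are assumptions)
kubectl -n osh-system exec deploy/rockoon -- \
  tail -n 100 /tmp/ovs-ovn-migration/ovs-ovn-migration.log

# List the per-node logs collected during the CLEANUP stage
kubectl -n osh-system exec deploy/rockoon -- \
  ls -R /tmp/ovs-ovn-migration/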
The current state of the migration execution is stored in a ConfigMap that is created on the first run of osctl-ovs-ovn-migrate migration.
To check the ConfigMap content:
kubectl -n openstack get cm ovs-ovn-migration-state \
-o jsonpath='{.data}' | jq
Example of a positive system response:
{
"10_PREPARE": "{\"status\": \"completed\", \"error\": null}",
"20_DEPLOY_OVN_DB": "{\"status\": \"init\", \"error\": null}",
"30_DEPLOY_OVN_CONTROLLERS": "{\"status\": \"init\", \"error\": null}",
"40_MIGRATE_DATAPLANE": "{\"status\": \"init\", \"error\": null}",
"50_FINALIZE_MIGRATION": "{\"status\": \"init\", \"error\": null}",
"60_CLEANUP": "{\"status\": \"init\", \"error\": null}"
}
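For a more compact view of the same data, the per-stage statuses can be summarized with jq. This is only a convenience on top of the ConfigMap shown above:

kubectl -n openstack get cm ovs-ovn-migration-state \
  -o jsonpath='{.data}' \
  | jq 'to_entries[] | {stage: .key, status: (.value | fromjson | .status)}'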
General migration workflow¶
Once started, the migration utility performs the following steps:
1. Checks whether the migration state ConfigMap exists. If not, creates it with the init state for each stage.

2. For each migration stage:

   1. Checks the status of the stage. If it is completed, skips it and moves on to the next stage.

   2. Otherwise:

      1. Sets the status to started.

      2. Executes the stage.

      3. Updates the status to completed or failed depending on the result.

      4. In interactive mode, prompts for confirmation before proceeding to the next stage. In non-interactive mode, stops immediately if an error occurs or proceeds automatically to the next stage.
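Because stages already marked as completed are skipped on subsequent runs, a failed migration can typically be resumed by addressing the underlying issue and re-running the same command. A minimal sketch, reusing the assumed rockoon Deployment and osh-system namespace from the earlier examples:

# Identify the stage that failed, if any
kubectl -n openstack get cm ovs-ovn-migration-state \
  -o jsonpath='{.data}' \
  | jq -r 'to_entries[] | select((.value | fromjson | .status) == "failed") | .key'

# Re-run the utility; completed stages are skipped and execution resumes from the failed one
kubectl -n osh-system exec -it deploy/rockoon -- osctl-ovs-ovn-migrate migration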
Example of log output from one of the stages:
2025-08-08 14:45:57,640 [INFO] rockoon.cli.ovs_ovn_migration: Running 40_MIGRATE_DATAPLANE stage
Description:
Deploy OVN controller on the same nodes as openvswitch pods are running.
Switch dataplane to be managed by OVN controller and cleanup old dataplane
leftovers.
IMPACT:
WORKLOADS: Short periods of downtime ARE EXPECTED.
OPENSTACK API: Neutron Metadata downtime continues in this stage.
2025-08-08 14:45:59,472 [INFO] rockoon.cli.ovs_ovn_migration: Pre-migration check: Checking ovs db connectivity in ovn controllers
2025-08-08 14:45:59,541 [INFO] rockoon.cli.ovs_ovn_migration: Running command ['ovs-vsctl', '--no-wait', 'list-br'] on pods of daemonsets [<DaemonSet openvswitch-ovn-controller-57e1c5e45daece33>, <DaemonSet openvswitch-ovn-controller-5953263df82fbfb2>]
2025-08-08 14:45:59,547 [INFO] rockoon.cli.ovs_ovn_migration: Waiting command on pods of daemonsets [<DaemonSet openvswitch-ovn-controller-57e1c5e45daece33>, <DaemonSet openvswitch-ovn-controller-5953263df82fbfb2>]
2025-08-08 14:45:59,691 [INFO] rockoon.cli.ovs_ovn_migration: Done waiting command on pods of daemonsets [<DaemonSet openvswitch-ovn-controller-57e1c5e45daece33>, <DaemonSet openvswitch-ovn-controller-5953263df82fbfb2>]
2025-08-08 14:45:59,694 [INFO] rockoon.cli.ovs_ovn_migration: Pre-migration check: Ovs db connectivity check completed
2025-08-08 14:45:59,861 [INFO] rockoon.cli.ovs_ovn_migration: Waiting 600 for openvswitch-ovn-controller-57e1c5e45daece33 to be absent
2025-08-08 14:46:29,907 [INFO] rockoon.cli.ovs_ovn_migration: openvswitch-ovn-controller-57e1c5e45daece33 is absent
2025-08-08 14:46:29,907 [INFO] rockoon.cli.ovs_ovn_migration: Waiting 600 for openvswitch-ovn-controller-5953263df82fbfb2 to be absent
2025-08-08 14:46:29,919 [INFO] rockoon.cli.ovs_ovn_migration: openvswitch-ovn-controller-5953263df82fbfb2 is absent
2025-08-08 14:46:31,799 [INFO] rockoon.helm: Running helm command started: '['upgrade', 'openstack-openvswitch', '/opt/operator/charts/infra/openvswitch', '--namespace', 'openstack', '--values', '/tmp/openstack-openvswitch4n0_cpo2', '--history-max', '1', '--install']'
2025-08-08 14:46:34,891 [INFO] rockoon.helm: Running helm (<rockoon.helm.HelmManager object at 0x7f8074724670>, ['upgrade', 'openstack-openvswitch', '/opt/operator/charts/infra/openvswitch', '--namespace', 'openstack', '--values', '/tmp/openstack-openvswitch4n0_cpo2', '--history-max', '1', '--install'], True, 'openstack-openvswitch'), {} command took 3.091739
2025-08-08 14:46:34,892 [INFO] rockoon.cli.ovs_ovn_migration: Waiting for ['ovn-controller'] to be ready
2025-08-08 14:46:35,083 [INFO] rockoon.kube: Waiting for 600 DaemonSet/openvswitch-ovn-controller-57e1c5e45daece33 is ready
2025-08-08 14:47:05,159 [INFO] rockoon.kube: The DaemonSet/openvswitch-ovn-controller-57e1c5e45daece33 is ready
2025-08-08 14:47:05,160 [INFO] rockoon.kube: Waiting for 600 DaemonSet/openvswitch-ovn-controller-5953263df82fbfb2 is ready
2025-08-08 14:47:05,178 [INFO] rockoon.kube: The DaemonSet/openvswitch-ovn-controller-5953263df82fbfb2 is ready
2025-08-08 14:47:05,180 [INFO] rockoon.cli.ovs_ovn_migration: ['ovn-controller'] are ready
2025-08-08 14:47:05,217 [INFO] rockoon.cli.ovs_ovn_migration: Completed 40_MIGRATE_DATAPLANE stage
2025-08-08 14:47:05,217 [INFO] rockoon.cli.ovs_ovn_migration: Next stage to run is 50_FINALIZE_MIGRATION
Description:
Remove neutron l3 agent daemonsets.
Stop openvswitch pods and disbale migration mode (switch ovn
controllers to start own vswitchd and ovs db containers).
Enable Neutron metadata agents and Neutron rabbitmq.
IMPACT:
WORKLOADS: Short periods of downtime ARE EXPECTED.
OPENSTACK API: Neutron Metadata downtime stops in this stage.
[USER INPUT NEEDED] To proceed to next stage press Y, to abort WHOLE procedure press N --> Y
Migration stages description¶
| Step | Description | Impact |
|---|---|---|
| 10_PREPARE | Saving the original Neutron bridge mappings for each node. | No downtime for the API or workloads. |
| 20_DEPLOY_OVN_DB | Disabling the Neutron server and metadata agents, removing all OVS-related Neutron components except for the Neutron L3 agents, and deploying the OVN database components. | Neutron API and Metadata downtime starts; no downtime for workloads. |
| 30_DEPLOY_OVN_CONTROLLERS | Stopping the Neutron RabbitMQ service. Deploying the OVN controllers in migration mode; these pods do not contain their own vswitchd and OVS database containers. Synchronizing the Neutron database with the OVN database. Starting the Neutron server. | Neutron Metadata downtime continues during this stage. The Neutron API is started to process existing ports, but API operations may fail because OVN is not functional yet. |
| 40_MIGRATE_DATAPLANE | Stopping the OVN controller. Switching the integration bridges under the OVN controller control on all nodes in parallel. Restoring the original bridge mappings saved during the 10_PREPARE stage. Cleaning up the OVS dataplane leftovers. Starting the OVN controller. | Neutron Metadata downtime continues during this stage. Short workload downtime is expected. |
| 50_FINALIZE_MIGRATION | Removing the Neutron L3 agent DaemonSets. Stopping the OVS pods and disabling the migration mode, which switches the OVN controllers to start their own vswitchd and OVS database containers. Enabling the Neutron metadata agents and the Neutron RabbitMQ service. This stage can take a significant amount of time to complete because the OVN controller pods restart one by one on all nodes. | Neutron Metadata downtime ends. Short workload downtime is expected. |
| 60_CLEANUP | Cleaning up the Neutron API resources. Cleaning up the node system resources related to OVS, including network namespaces, interfaces, and so on. | No downtime for the API or workloads. |
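After the 60_CLEANUP stage finishes, every entry in the migration state ConfigMap is expected to report the completed status. A minimal sanity-check sketch:

kubectl -n openstack get cm ovs-ovn-migration-state \
  -o jsonpath='{.data}' \
  | jq -e 'to_entries | all(.value | fromjson | .status == "completed")' \
  && echo "All migration stages completed"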