Change a cluster configuration
After deploying a MOSK cluster, you can configure a few cluster settings using the MOSK management console as described below.
To change a cluster configuration:
Log in to the MOSK management console with the m:kaas:namespace@operator or m:kaas:namespace@writer permissions.
Select the required project.
On the Clusters page, click the More action icon in the last column of the required cluster and select Configure cluster.
In the Configure cluster window:
In the General Settings tab, you can:
Add or update proxy for a cluster by selecting the name of previously created proxy settings from the Proxy drop-down menu.
Proxy configuration
In the Proxies tab, configure proxy:
Click Add Proxy.
In the Add New Proxy wizard, fill out the form with the following parameters:
For implementation details, see Proxy support and cache of artifacts.
If your proxy requires a trusted CA certificate, select the CA Certificate check box and paste a CA certificate for a MITM proxy into the corresponding field, or upload a certificate using Upload Certificate.
Note
Support for a MITM proxy with a CA certificate is available since MOSK 23.1.
For the list of Mirantis resources and IP addresses to be accessible from MOSK clusters, see Reference Architecture: Requirements.
Using the SSH Keys drop-down menu, select a previously created SSH key to add it to the running cluster. If required, you can add several keys or remove unused ones.
Note
To delete an SSH key, use the SSH Keys tab of the main menu.
Using the Container Registry drop-down menu, select the previously created Docker container registry name to add it to the running cluster.
Using the following options, define the maximum number of worker machines to be upgraded in parallel during cluster update:
- Parallel Upgrade Of Worker Machines

  The maximum number of worker nodes to update simultaneously. It serves as an upper limit on the number of machines drained at any given moment. Defaults to 1.

- Parallel Preparation For Upgrade Of Worker Machines

  The maximum number of worker nodes being prepared at any given moment, which includes downloading new artifacts. It limits the network load that occurs when downloading files to the nodes. Defaults to 50.
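The two limits above are stored in the Cluster object of the managed cluster. As a hypothetical sketch, you could inspect them with kubectl from the management cluster; the field names maxWorkerUpgradeCount and maxWorkerPrepareCount and the placeholder names are assumptions, so verify the exact spec paths in your MOSK release documentation:

```shell
# Assumption: the parallel-update limits live in the Cluster object spec under
# maxWorkerUpgradeCount / maxWorkerPrepareCount; <project-namespace> and
# <cluster-name> are placeholders for your environment.
kubectl -n <project-namespace> get cluster <cluster-name> -o yaml \
  | grep -iE 'maxWorker(Upgrade|Prepare)Count'
```

This is a read-only check; the supported way to change the values remains the Configure cluster dialog described above.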
In the StackLight tab, select or deselect StackLight and configure its parameters if enabled.
You can also update the default log level severity for all StackLight components as well as set a custom log level severity for specific StackLight components. For details about severity levels, see Log verbosity.
Click Update to apply the changes.
Strongly recommended. Back up MKE as described in Create backups of Mirantis Kubernetes Engine.
Since the procedure above modifies the cluster configuration, a fresh backup is required to restore the cluster in case further reconfigurations fail.
Important
Because the MKE restoration process is complicated, we strongly recommend contacting Mirantis support for assistance.
If you still decide to restore MKE from a backup on your own, you must scale down helm-controller on the cluster being restored if the MKE version of the affected cluster after the restore will differ from the MKE version in the ClusterRelease object that is set in MOSK Cluster objects in the management cluster:

- If you are restoring MKE on a management cluster: before starting the restore, scale down helm-controller on each affected MOSK cluster. This prevents unintended Ceph and OpenStack downgrades on MOSK clusters after the management cluster is restored.

- If you are restoring MKE on a MOSK cluster: immediately after the restore completes, scale down helm-controller. Because the restore rolls the cluster back to an older release, this prevents it from triggering a premature upgrade of Helm releases.
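Scaling helm-controller down and back up can be sketched as follows, assuming it runs as a Deployment; the namespace is a placeholder, so check where helm-controller is deployed in your cluster before running anything:

```shell
# Assumption: helm-controller is a Deployment; <namespace> is a placeholder
# for the namespace it actually runs in on your cluster.
kubectl -n <namespace> scale deployment helm-controller --replicas=0

# After the restore completes and the cluster state is verified,
# restore the original replica count:
kubectl -n <namespace> scale deployment helm-controller --replicas=1
```

This is an operational fragment for a live cluster, not a runnable script; as the Important note above states, Mirantis recommends involving support before attempting an MKE restore.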