Connect to a Mirantis Container Cloud cluster


The Container Cloud web UI communicates with Keycloak to authenticate users. Keycloak is exposed over HTTPS with self-signed TLS certificates that web browsers do not trust by default.

To use your own TLS certificates for Keycloak, refer to Configure TLS certificates for cluster applications.

After you deploy a Mirantis Container Cloud management or managed cluster, connect to the cluster to verify the availability and status of the nodes as described below.

This section also describes how to SSH to a node of a cluster where a Bastion host is used for SSH access, for example, on OpenStack-based management clusters or AWS-based management and managed clusters.

To connect to a managed cluster:

  1. Log in to the Container Cloud web UI with the m:kaas:namespace@operator or m:kaas:namespace@writer permissions.

  2. Switch to the required project using the Switch Project action icon located on top of the main left-side navigation panel.

  3. In the Clusters tab, click the required cluster name. The cluster page with the Machines list opens.

  4. Verify the status of the manager nodes. Once the first manager node is deployed and has the Ready status, the Download Kubeconfig option for the cluster being deployed becomes active.

  5. Open the Clusters tab.

  6. Click the More action icon in the last column of the required cluster and select Download Kubeconfig:

    1. Enter your user password.

    2. Not recommended. Select Offline Token to generate an offline IAM token. Otherwise, for security reasons, the kubeconfig token expires after 30 minutes of Container Cloud API idle time, and you have to download kubeconfig again with a newly generated token.

    3. Click Download.

  7. Verify the availability of the managed cluster machines:

    1. On a local machine with access to kubectl, export the path to the downloaded kubeconfig file. For example:

      export KUBECONFIG=~/Downloads/kubeconfig-test-cluster.yml
    2. Obtain the list of available Container Cloud machines:

      kubectl get nodes -o wide

      The system response must list the details of the nodes in the Ready status.
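To spot problem nodes quickly, you can filter the node list for anything that is not Ready. A minimal sketch; the node names and versions below are illustrative sample output, and in practice you would pipe the real command, kubectl get nodes --no-headers, into the same awk filter:

```shell
# Filter a node listing for nodes that are not in the Ready status.
# The sample below stands in for `kubectl get nodes --no-headers` output
# (illustrative node names and versions).
sample_output='master-0   Ready      control-plane   12d   v1.27.3
worker-0   Ready      <none>          12d   v1.27.3
worker-1   NotReady   <none>          12d   v1.27.3'

# Print the name of every node whose STATUS column is not "Ready".
printf '%s\n' "$sample_output" | awk '$2 != "Ready" {print $1}'
# → worker-1
```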

To connect to a management cluster:

  1. Log in to a local machine where your management cluster kubeconfig is located and where kubectl is installed.


    The management cluster kubeconfig is created during the last stage of the management cluster bootstrap.

  2. Obtain the list of available management cluster machines:

    kubectl get nodes -o wide

    The system response must list the details of the nodes in the Ready status.

To SSH to a Container Cloud cluster node if Bastion is used:

  1. Obtain kubeconfig of the management or managed cluster as described in the procedures above.

  2. Obtain the internal IP address of a node you require access to:

    kubectl get nodes -o wide
  3. Obtain the Bastion public IP:

    • For OpenStack-based clusters:

      kubectl get cluster -o jsonpath='{.status.providerStatus.bastion.publicIP}' \
      -n <project_name> <cluster_name>
    • For AWS-based clusters:

      kubectl get cluster -o jsonpath='{.status.providerStatus.bastion.publicIp}' \
      -n <project_name> <cluster_name>
    • For Azure-based clusters:

      kubectl get cluster -o=jsonpath='{.status.providerStatus.loadBalancerHost}' \
      -n <project_name> <cluster_name>
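Note that the JSONPath field differs per provider: publicIP for OpenStack, publicIp for AWS, and loadBalancerHost for Azure. A small helper can select the right expression; this is a hypothetical convenience wrapper, not part of Container Cloud, with field names taken from the commands above:

```shell
# Return the JSONPath for the Bastion address of the given provider
# (hypothetical helper; field names are copied from the commands above).
bastion_jsonpath() {
  case "$1" in
    openstack) echo '{.status.providerStatus.bastion.publicIP}' ;;
    aws)       echo '{.status.providerStatus.bastion.publicIp}' ;;
    azure)     echo '{.status.providerStatus.loadBalancerHost}' ;;
    *)         echo "unknown provider: $1" >&2; return 1 ;;
  esac
}

bastion_jsonpath aws
# → {.status.providerStatus.bastion.publicIp}
```

You could then run, for example, kubectl get cluster -o jsonpath="$(bastion_jsonpath aws)" -n <project_name> <cluster_name>.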
  4. Run the following command, substituting the parameters enclosed in angle brackets with the corresponding values of your cluster obtained in the previous steps:

    ssh -i <private_key> mcc-user@<node_internal_ip> -o "ProxyCommand ssh -W %h:%p \
    -i <private_key> mcc-user@<bastion_public_ip>"
    • The <private_key> for a managed cluster is the SSH Key that you added in the Container Cloud web UI before the managed cluster creation. For a management cluster, it is the ssh_key file created during bootstrap in the same directory as the bootstrap script.


      If the initial version of your Container Cloud management cluster was earlier than 2.6.0, ssh_key is named openstack_tmp and is located at ~/.ssh/.

    • For Azure-based clusters, SSH ports are different for each machine and are assigned sequentially. For example, 22, 2201, 2202, 2203, and so on.

      Starting from Container Cloud 2.22.0, the SSH port for a machine is reflected in the machine status.

      • To obtain the SSH port for the machine:

        kubectl get machine -o jsonpath='{.status.providerStatus.sshPort}' \
        -n <project_name> <machine_name>
      • To override the default port for the ssh command, add the -p flag:

        ssh -i <private_key> mcc-user@<node_internal_ip> -o "ProxyCommand ssh -W %h:%p \
        -p <ssh_port> -i <private_key> mcc-user@<bastion_public_ip>"
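The steps above can be assembled into a single command. A minimal sketch with placeholder values; the key path, IP addresses, and port below are illustrative and must be replaced with the values obtained in the previous steps:

```shell
# Build the bastion SSH command from placeholder values (illustrative only).
private_key="$HOME/.ssh/ssh_key"   # key added in the web UI, or created at bootstrap
node_ip="10.0.0.12"                # internal IP from `kubectl get nodes -o wide`
bastion_ip="203.0.113.10"          # Bastion public IP from the cluster status
ssh_port="2201"                    # Azure only; drop `-p <ssh_port>` for other providers

# The ProxyCommand tunnels the connection to the node through the Bastion host.
cmd="ssh -i $private_key mcc-user@$node_ip -o \"ProxyCommand ssh -W %h:%p -p $ssh_port -i $private_key mcc-user@$bastion_ip\""
echo "$cmd"
```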