Collect the bootstrap logs

If the bootstrap script fails during the deployment process, collect and inspect the bootstrap and management cluster logs.

To collect the bootstrap logs:

  1. Log in to your local machine where the bootstrap script was executed.

  2. If you bootstrapped the cluster a while ago, verify that the bootstrap directory is updated.

    Select from the following options:

    • For clusters deployed using Container Cloud 2.11.0 or later:

      ./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
      --target-dir <pathToBootstrapDirectory>
    • For clusters deployed using a Container Cloud release earlier than 2.11.0, or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:

      chmod 0755
  3. Run the following command:

    ./collect_logs

    Since Container Cloud 2.19.0, you can prepend COLLECT_EXTENDED_LOGS=true to the command to output the extended version of logs, which contains system and MKE logs, logs from LCM Ansible and LCM Agent, as well as cluster events, Kubernetes resources descriptions, and logs.

    Without COLLECT_EXTENDED_LOGS=true, the basic version of logs is collected, which is sufficient for most use cases. The basic version of logs contains all events, Kubernetes custom resources, and logs from all Container Cloud components. This version does not require passing --key-file.

    Before Container Cloud 2.19.0, the extended version of logs is collected by default.

    The logs are collected in the directory where the bootstrap script is located.
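A minimal sketch of how the COLLECT_EXTENDED_LOGS=true prefix reaches the script's environment. The stub script below is a stand-in used for illustration only, not the real collect_logs tool:

```shell
# Illustration only: /tmp/collect_logs is a hypothetical stand-in that
# reacts to the COLLECT_EXTENDED_LOGS variable the same way as described
# above for the real script.
cat <<'EOF' > /tmp/collect_logs
#!/bin/sh
if [ "$COLLECT_EXTENDED_LOGS" = "true" ]; then
  echo "collecting extended logs"
else
  echo "collecting basic logs"
fi
EOF
chmod 0755 /tmp/collect_logs

# Prefixing the command with the variable enables the extended log set.
COLLECT_EXTENDED_LOGS=true /tmp/collect_logs
# prints: collecting extended logs
```

Because the variable is set only for that single invocation, subsequent runs without the prefix fall back to the basic log set.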

  4. Technology Preview. For bare metal clusters, assess the Ironic pod logs:

    • Extract the content of the 'message' fields from every log message:

      kubectl -n kaas logs <ironicPodName> -c syslog | jq -rM '.message'
    • Extract the content of the 'message' fields from the ironic_conductor source log messages:

      kubectl -n kaas logs <ironicPodName> -c syslog | jq -rM 'select(.source == "ironic_conductor") | .message'

    The syslog container collects logs generated by Ansible during node deployment and cleanup and outputs them in JSON format.
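If jq is not available on the machine where you inspect the logs, the same 'message' fields can be pulled out with standard grep and sed. The sample JSON lines below are hypothetical, shaped like the single-line JSON records the syslog container emits:

```shell
# Hypothetical sample of the JSON lines emitted by the syslog container.
cat <<'EOF' > /tmp/syslog-sample.json
{"source":"ironic_conductor","message":"node cleaning started"}
{"source":"ansible","message":"deploy step finished"}
EOF

# Keep only ironic_conductor records, then cut out the message field.
grep '"source":"ironic_conductor"' /tmp/syslog-sample.json \
  | sed 's/.*"message":"\([^"]*\)".*/\1/'
# prints: node cleaning started
```

This is a fallback sketch only; the jq commands above remain the more robust option because they parse the JSON instead of matching text patterns.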

The Container Cloud logs structure in <output_dir>/<cluster_name>/ is as follows:

  • /events.log - human-readable table that contains information about the cluster events

  • /system - system logs

  • /system/mke (or /system/MachineName/mke) - Mirantis Kubernetes Engine (MKE) logs

  • /objects/cluster - logs of the non-namespaced Kubernetes objects

  • /objects/namespaced - logs of the namespaced Kubernetes objects

  • /objects/namespaced/<namespaceName>/core/pods - logs of the pods from a specified Kubernetes namespace

  • /objects/namespaced/<namespaceName>/core/pods/<containerName>.prev.log - logs of the pods from a specified Kubernetes namespace that were previously removed or failed

  • /objects/namespaced/<namespaceName>/core/pods/<ironicPodName>/syslog.log - Technology Preview. Ironic pod logs of the bare metal clusters

    Logs collected by the syslog container during the bootstrap phase are not transferred to the management cluster during pivoting. These logs are located in /volume/log/ironic/ansible_conductor.log inside the Ironic pod.
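Within the collected tree, the .prev.log files are a quick way to find containers that were restarted or failed. A sketch using a mock directory that mirrors the structure above; /tmp/logs/demo-1 is a hypothetical stand-in for <output_dir>/<cluster_name>:

```shell
# Mock layout mirroring the collected logs structure described above.
mkdir -p /tmp/logs/demo-1/objects/namespaced/kaas/core/pods
: > /tmp/logs/demo-1/objects/namespaced/kaas/core/pods/ironic.log
: > /tmp/logs/demo-1/objects/namespaced/kaas/core/pods/ironic.prev.log

# Locate logs of pods that were previously removed or failed.
find /tmp/logs/demo-1/objects/namespaced -name '*.prev.log'
# prints: /tmp/logs/demo-1/objects/namespaced/kaas/core/pods/ironic.prev.log
```

On a real log bundle, point find at <output_dir>/<cluster_name>/objects/namespaced instead of the mock path.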

Since Container Cloud 2.19.0, each log entry of the management cluster logs contains a request ID that identifies the chronology of actions performed on a cluster or machine. The format of the log entry is as follows:

<providerType>.<objectName>.req:<requestID>

For example, bm.machine.req:374, bm.cluster.req:172.

  • <providerType> - provider name, possible values: aws, azure, os, bm, vsphere, equinix.

  • <objectName> - name of an object being processed by provider, possible values: cluster, machine.

  • <requestID> - request ID number that increases when a provider receives a request from Kubernetes to create, update, or delete an object. The request ID allows you to combine all operations performed on an object within one request, for example, a machine creation followed by the updates of its status.

Example of a log extract for the OpenStack provider:

I0620 10:51:34.882334  1 cluster_controller.go:119] os.cluster.req:1: Running reconcile Cluster for "default/demo-1"
I0620 10:51:34.999753  1 cluster_controller.go:232] os.cluster.req:1: Reconciling cluster object "default/demo-1" triggers idempotent reconcile.
I0620 10:51:34.999822  1 actuator.go:52] os.cluster.req:1: Reconciling cluster: "default/demo-1"
I0620 10:51:35.506873  1 networkservice.go:91] Reconciling network components for cluster default/demo-1
I0620 10:51:36.323846  1 actuator.go:276] os.cluster.req:1: failed to reconcile security groups: failed to generate control plane security group: ReleaseRefs is not yet populated for cluster "default/demo-1"
I0620 10:51:47.604355  1 cluster_controller.go:119] os.cluster.req:2: Running reconcile Cluster for "default/demo-1"
I0620 10:51:49.976077  1 actuator.go:380] os.cluster.req:2: Reconciling API endpoints for cluster default/demo-1
I0620 10:51:50.727595  1 actuator.go:89] os.cluster.req:2: failed to reconcile IAM: failed to register cluster IAM config: cluster's OIDC hasn't been loaded yet
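Because the request ID ties together all operations performed within one request, filtering a saved provider log on it reconstructs the chronology of a single reconcile. A sketch using a sample file that reuses lines from the extract above:

```shell
# Save a few provider log lines (reused from the extract above) to a file.
cat <<'EOF' > /tmp/provider.log
I0620 10:51:34.882334  1 cluster_controller.go:119] os.cluster.req:1: Running reconcile Cluster for "default/demo-1"
I0620 10:51:47.604355  1 cluster_controller.go:119] os.cluster.req:2: Running reconcile Cluster for "default/demo-1"
I0620 10:51:49.976077  1 actuator.go:380] os.cluster.req:2: Reconciling API endpoints for cluster default/demo-1
EOF

# Keep only the operations that belong to request os.cluster.req:2.
grep 'os\.cluster\.req:2:' /tmp/provider.log
```

The same pattern works for any provider and object, for example bm.machine.req:374 for a bare metal machine.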

Depending on the type of issue found in the logs, apply the corresponding fixes. For example, if you detect errors related to the LoadBalancer being in the ERROR state during the bootstrap of an OpenStack-based management cluster, contact your system administrator to fix the issue. To troubleshoot other issues, refer to the corresponding section in Troubleshooting.