Reprovision the Kubernetes Master node

Kubernetes support termination notice

Starting with the MCP 2019.2.5 update, the Kubernetes component is no longer supported as a part of the MCP product. This implies that Kubernetes is not tested and not shipped as an MCP component. Although the Kubernetes Salt formula is available in the community-driven SaltStack formulas ecosystem, Mirantis takes no responsibility for its maintenance.

Customers looking for a Kubernetes distribution and Kubernetes lifecycle management tools are encouraged to evaluate the Mirantis Kubernetes-as-a-Service (KaaS) and Docker Enterprise products.

If the Kubernetes Master node becomes non-operational and cannot be recovered, you can reprovision the node from scratch.

When reprovisioning a node, you cannot update some of the configuration data:

  • Hostname and FQDN - because changing them breaks Calico.
  • Node role - for example, from the Kubernetes Master role to the Node role. However, you can reset node labels later using the kubectl label node command.
  • Network plugin - for example, from Calico to Weave.
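For example, once the reprovisioned node rejoins the cluster, its role labels can be reapplied with kubectl label node. This sketch is illustrative: the node name ctl01 and the label key are assumptions, not values from this procedure.

```shell
# Illustrative only: reapply the Master role label to a reprovisioned node.
# The node name (ctl01) and label key are examples; adjust to your cluster.
kubectl label node ctl01 node-role.kubernetes.io/master="" --overwrite
```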

You can change the following information:

  • Host IP(s)
  • MAC addresses
  • Operating system
  • Application certificates


All Master nodes must serve the same apiserver certificate. Otherwise, service tokens become invalid.
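One way to confirm that the Master nodes serve identical apiserver certificates is to compare their fingerprints. A minimal sketch, assuming the Master host names ctl01..ctl03 and that the apiserver listens on port 443 (both are assumptions):

```shell
# Sketch: compare apiserver certificate fingerprints across Master nodes.
# Host names ctl01..ctl03 and port 443 are illustrative assumptions.
for host in ctl01 ctl02 ctl03; do
  echo | openssl s_client -connect "${host}:443" 2>/dev/null \
    | openssl x509 -noout -fingerprint -sha256
done
```

All three fingerprints must be identical; a mismatch indicates a node serving a different certificate.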

To reprovision the Kubernetes Master node:

  1. Verify that MAAS works properly and provides the DHCP service to assign an IP address and bootstrap an instance.

  2. Verify that the target nodes have connectivity with the Salt Master node:

    salt 'ctl[<NUM>]*' test.ping
  3. Update modules and states on the new Salt Minion node:

    salt 'ctl[<NUM>]*' saltutil.sync_all


    The ctl[<NUM>] parameter is the ID of the failed Kubernetes Master node.
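    As an illustration of how the target pattern resolves (the node number and Minion ID below are hypothetical), the trailing wildcard lets the pattern match the node's full Minion ID:

```shell
# Illustrative: with NUM=01 the target 'ctl[<NUM>]*' becomes ctl01*,
# which matches a full Minion ID such as ctl01.example.local.
NUM=01
TARGET="ctl${NUM}*"
case "ctl01.example.local" in
  $TARGET) echo "matched" ;;
  *) echo "no match" ;;
esac
```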

  4. Create and distribute SSL certificates for services using the salt state:

    salt 'ctl[<NUM>]*' state.sls salt
  5. Install Keepalived:

    salt 'ctl[<NUM>]*' state.sls keepalived -b 1
  6. Install HAProxy and verify its status:

    salt 'ctl[<NUM>]*' state.sls haproxy
    salt 'ctl[<NUM>]*' service.status haproxy
  7. Install etcd and verify the cluster health:

    salt 'ctl[<NUM>]*' state.sls etcd.server.service
    salt 'ctl[<NUM>]*' cmd.run "etcdctl cluster-health"

    Alternatively, to install etcd with SSL support:

    salt 'ctl[<NUM>]*' state.sls salt.minion.cert,etcd.server.service
    salt 'ctl[<NUM>]*' cmd.run '. /var/lib/etcd/configenv && etcdctl cluster-health'
  8. Install Kubernetes:

    salt 'ctl[<NUM>]*' state.sls kubernetes.master.kube-addons
    salt 'ctl[<NUM>]*' state.sls kubernetes.pool
  9. Set up NAT for Calico:

    salt 'ctl[<NUM>]*' state.sls etcd.server.setup
  10. Apply the kubernetes states, excluding the Master setup, to verify consistency:

    salt 'ctl[<NUM>]*' state.sls kubernetes exclude=kubernetes.master.setup
  11. Register add-ons:

    salt 'ctl[<NUM>]*' --subset 1 state.sls kubernetes.master.setup
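After the final step completes, overall cluster health can be checked from any Master node. A minimal sketch, assuming kubectl is configured on that node:

```shell
# Illustrative post-reprovision checks; run on a Kubernetes Master node.
kubectl get nodes -o wide        # the reprovisioned node should be Ready
kubectl get pods -n kube-system  # add-on pods should be Running
```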