Upgrade Verification and Access
Typical upgrade durations, determined through controlled testing in an AWS environment*:
| Node Configuration | Detail | Duration (min:sec) |
|---|---|---|
| 5-node cluster | 3 managers, 2 workers | 10:19.87 |
| 10-node cluster | 3 managers, 7 workers | 11:26.64 |
These estimates are offered for general guidance only, as actual upgrade durations vary with hardware performance (CPU, memory, disk), workload density, network throughput, and storage backend performance. For precise planning, Mirantis strongly recommends that you run a test upgrade in a staging environment that mirrors your production specifications.
* Ubuntu 22.04 LTS, manager and worker nodes (m5.2xlarge: 8 vCPU, 32GB RAM)
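If you run such a staging upgrade, one minimal way to capture a comparable baseline is to wrap the command in time; the bare mkectl upgrade invocation below is only a sketch, so include whatever flags or configuration your environment normally passes to it:
time mkectl upgrade   # Reports the wall-clock duration of the staging upgrade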
On completion of the mkectl upgrade command, a kubeconfig file for the default admin user is generated and stored at ~/.mke/mke.kubeconf.
Set the KUBECONFIG environment variable:
export KUBECONFIG=~/.mke/mke.kubeconf
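To confirm that kubectl is using the new admin kubeconfig and can reach the upgraded cluster, a quick sanity check with standard kubectl commands (not MKE-specific) is:
kubectl config current-context   # Confirm the admin context from ~/.mke/mke.kubeconf is active
kubectl cluster-info             # Confirm the API server endpoint responds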
Next, verify MKE 4k cluster node readiness, cluster health, and workload status:
Verify node readiness:
kubectl get nodes
Healthy nodes should report STATUS=Ready.
kubectl describe node <node-name> | grep -i conditions: -A 10
Confirm the following conditions:
Ready=True
MemoryPressure/NetworkUnavailable/DiskPressure=False
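As an alternative to inspecting nodes one at a time, the following generic kubectl command (not MKE-specific) asserts readiness across all nodes in a single step:
kubectl wait --for=condition=Ready node --all --timeout=120s   # Fails if any node is not Ready within 2 minutes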
Verify workload status:
kubectl get pods --all-namespaces
Check the STATUS and READY columns: pods should report STATUS=Running, with the READY column showing all containers ready.
kubectl get deployments,statefulsets --all-namespaces
Confirm that AVAILABLE matches DESIRED replicas.
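To surface only the pods that are not in a healthy phase after the upgrade, one shortcut using standard kubectl field selectors (not MKE-specific) is:
kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded   # Lists pods that are Pending, Failed, or Unknown
Note that crash-looping pods can still report the Running phase, so also review restart counts in the kubectl get pods output.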
Review the logs:
kubectl get pods -n mke                 # MKE namespace is mke
kubectl logs <pod-name> -n mke          # Check logs for MKE system pods
kubectl logs <pod-name> -n <namespace>  # Or any other application pods
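If a pod restarted during the upgrade, its current log may be empty or short; these generic kubectl commands (not MKE-specific) can help dig further:
kubectl logs <pod-name> -n mke --previous            # Logs from the previous container instance, if the pod restarted
kubectl get events -n mke --sort-by=.lastTimestamp   # Recent events in the mke namespace, newest last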
Verify cluster health:
kubectl top nodes     # Resource usage per node
kubectl top pods -A   # Resource usage per pod, across all namespaces
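kubectl top depends on a metrics API being available in the cluster; if it returns an error, you can still get a basic health signal directly from the API server through a standard Kubernetes endpoint (not MKE-specific):
kubectl get --raw='/readyz?verbose'   # Per-check readiness status reported by the API server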