This section describes the manual Calico upgrade procedure from major version 2.6 to 3.3. To simplify the upgrade process, use the automatic Calico upgrade procedure that is included in the Kubernetes upgrade pipeline. For details, see Automatically update or upgrade Kubernetes.
Note
To update the minor Calico version (for example, from 3.1.x to 3.3.x), use the regular Kubernetes update procedure described in Update or upgrade Kubernetes.
The upgrade process implies a downtime of the Calico-related services of about 1-2 minutes on a virtual 5-node cluster. The downtime may vary depending on the hardware and cluster configuration.
Caution
This upgrade procedure is applicable when MCP is upgraded from Build ID 2018.8.0 to a newer MCP release version.
MCP does not support the Calico upgrade path for the MCP Build IDs earlier than 2018.8.0.
The downtime of the Kubernetes services for workload operations is caused by the migration of the etcd schema that stores the Calico endpoints data and other configuration data. In addition, the calico-node and calico-kube-controllers components must run the same version to operate properly, so there is downtime while these components are being restarted.
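Before you start the procedure, you can verify the currently installed Calico version on any ctl node to confirm the upgrade starting point. This optional check assumes that calicoctl is available on the ctl nodes:
calicoctl version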
To upgrade Calico from version 2.6 to 3.3:
Upgrade your MCP cluster to a newer Build ID as described in Upgrade DriveTrain to a newer release version. Once done, the version parameters and configuration files of the Calico components are updated automatically to the latest supported version.
Log in to any Kubernetes ctl node where etcd is running.
Migrate the etcd schema:
Download the Calico upgrade binary file:
wget https://github.com/projectcalico/calico-upgrade/releases/download/v1.0.5/calico-upgrade
Grant execute permissions to the binary file:
chmod +x ./calico-upgrade
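Optionally, verify that the downloaded binary file works. The following check assumes that the version subcommand is available in your calico-upgrade release and prints the tool version:
./calico-upgrade version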
Obtain the etcd endpoints:
salt-call pillar.get etcd:server:members
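The output lists the host and port values of the etcd cluster members. Use these values to construct the endpoint URLs in the next step. An illustrative output sketch (the exact formatting and IP addresses depend on your deployment and Salt output settings):
local:
    |_
      ----------
      host:
          10.70.2.101
      port:
          4001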
Export the etcd environment variables. For example:
export APIV1_ETCD_ENDPOINTS=https://10.70.2.101:4001,https://10.70.2.102:4001,https://10.70.2.103:4001
export APIV1_ETCD_CA_CERT_FILE=/var/lib/etcd/ca.pem
export APIV1_ETCD_CERT_FILE=/var/lib/etcd/etcd-client.crt
export APIV1_ETCD_KEY_FILE=/var/lib/etcd/etcd-client.key
export ETCD_ENDPOINTS=https://10.70.2.101:4001,https://10.70.2.102:4001,https://10.70.2.103:4001
export ETCD_CA_CERT_FILE=/var/lib/etcd/ca.pem
export ETCD_CERT_FILE=/var/lib/etcd/etcd-client.crt
export ETCD_KEY_FILE=/var/lib/etcd/etcd-client.key
Substitute APIV1_ETCD_ENDPOINTS and ETCD_ENDPOINTS with the corresponding values of your environment.
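Optionally, validate the data conversion before starting the migration. The dry-run subcommand, if available in your calico-upgrade version, reports conversion errors without modifying the data stored in etcd:
./calico-upgrade dry-run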
Start the Calico upgrade:
./calico-upgrade start --no-prompts
Note
After you execute this command, Calico is paused to avoid running into an inconsistent data state.
Apply the new Calico configuration:
Log in to the Salt Master node.
Update basic Calico components:
salt -C 'I@kubernetes:pool' state.sls kubernetes.pool
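Optionally, verify that the calico-node containers have been restarted on the pool nodes. The following check is a sketch that assumes calico-node runs as a Docker container on the nodes:
salt -C 'I@kubernetes:pool' cmd.run 'docker ps | grep calico-node'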
Log in to the ctl node on which you started the Calico upgrade.
Resume Calico after the etcd schema migration. For example:
export APIV1_ETCD_ENDPOINTS=https://10.70.2.101:4001,https://10.70.2.102:4001,https://10.70.2.103:4001
export APIV1_ETCD_CA_CERT_FILE=/var/lib/etcd/ca.pem
export APIV1_ETCD_CERT_FILE=/var/lib/etcd/etcd-client.crt
export APIV1_ETCD_KEY_FILE=/var/lib/etcd/etcd-client.key
export ETCD_ENDPOINTS=https://10.70.2.101:4001,https://10.70.2.102:4001,https://10.70.2.103:4001
export ETCD_CA_CERT_FILE=/var/lib/etcd/ca.pem
export ETCD_CERT_FILE=/var/lib/etcd/etcd-client.crt
export ETCD_KEY_FILE=/var/lib/etcd/etcd-client.key
./calico-upgrade complete --no-prompts
Log in to the Salt Master node.
Update the Kubernetes add-ons:
salt -C 'I@kubernetes:master' state.sls kubernetes.master.kube-addons
salt -C 'I@kubernetes:master' state.sls kubernetes exclude=kubernetes.master.setup
salt -C 'I@kubernetes:master' --subset 1 state.sls kubernetes.master.setup
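Optionally, verify that the Kubernetes control plane components respond after the states are applied. This optional check runs kubectl on one of the master nodes:
salt -C 'I@kubernetes:master' --subset 1 cmd.run 'kubectl get cs'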
Restart kubelet:
salt -C 'I@kubernetes:pool' service.restart kubelet
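Optionally, verify that kubelet is running on all nodes after the restart:
salt -C 'I@kubernetes:pool' service.status kubelet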
Log in to any ctl node.
Verify the Kubernetes cluster consistency:
Verify the Calico version and Calico cluster consistency:
calicoctl version
calicoctl node status
calicoctl get ipPool
Verify that the Kubernetes objects are healthy and consistent:
kubectl get node -o wide
kubectl get pod -o wide --all-namespaces
kubectl get ep -o wide --all-namespaces
kubectl get svc -o wide --all-namespaces
Verify the connectivity using Netchecker:
kubectl get ep netchecker -n netchecker
curl {{netchecker_endpoint}}/api/v1/connectivity_check
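In the command above, substitute {{netchecker_endpoint}} with the actual Netchecker server endpoint. As an illustrative sketch, assuming the Netchecker service is exposed through a ClusterIP that is reachable from the ctl node, you can construct the endpoint as follows:
NETCHECKER_ENDPOINT=$(kubectl get svc netchecker -n netchecker -o jsonpath='{.spec.clusterIP}:{.spec.ports[0].port}')
curl http://${NETCHECKER_ENDPOINT}/api/v1/connectivity_check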