Configure cluster and service networking in an existing cluster
On systems that use the managed CNI, you can switch existing clusters to either kube-proxy with ipvs proxier or eBPF mode.
MKE does not support switching kube-proxy in an existing cluster from ipvs proxier to iptables proxier, nor does it support disabling eBPF mode after it has been enabled. Using a CNI that supports both cluster and service networking requires that you disable kube-proxy.
Refer to Cluster and service networking options in the MKE Installation Guide for information on how to configure cluster and service networking at install time.
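Both procedures below repeatedly ask you to obtain and then upload the MKE configuration file. A minimal sketch of one way to do this through the MKE API follows; MKE_HOST, the admin credentials, and the local file name mke-config.toml are placeholders, and jq is assumed to be available:

# Authenticate and capture a session token
AUTHTOKEN=$(curl -sk -d '{"username":"<admin-user>","password":"<admin-password>"}' \
  https://MKE_HOST/auth/login | jq -r .auth_token)

# Download the current MKE configuration as TOML
curl -sk -H "Authorization: Bearer $AUTHTOKEN" \
  https://MKE_HOST/api/ucp/config-toml > mke-config.toml

# ... edit mke-config.toml as directed by the relevant step ...

# Upload the modified configuration
curl -sk -H "Authorization: Bearer $AUTHTOKEN" --upload-file mke-config.toml \
  https://MKE_HOST/api/ucp/config-toml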
Caution
The configuration changes described here cannot be reversed. As such, Mirantis recommends that you make a cluster backup, drain your workloads, and take your cluster offline prior to performing any of these changes.
Caution
Swarm workloads that require the use of encrypted overlay networks must use iptables proxier. Be aware that the other networking options detailed here automatically disable Docker Swarm encrypted overlay networks.
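To identify any encrypted overlay networks before you proceed, one approach is to inspect each overlay network for the encrypted driver option, as in the following sketch run against a manager node:

# Print the name and options of every overlay network, then filter for encrypted ones
for net in $(docker network ls --filter driver=overlay -q); do
  docker network inspect --format '{{.Name}} {{.Options}}' "$net"
done | grep encrypted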
To switch an existing cluster to kube-proxy with ipvs proxier while using the managed CNI:
Obtain the current MKE configuration file for your cluster.
Set kube_proxy_mode to "ipvs".
Upload the new MKE configuration file. Be aware that this will require a wait time of approximately five minutes.
Verify that the following values are set in your MKE configuration file:

unmanaged_cni = false
calico_ebpf_enabled = false
kube_default_drop_masq_bits = false
kube_proxy_mode = "ipvs"
kube_proxy_no_cleanup_on_start = false
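As a quick cross-check, assuming the configuration file was saved locally as mke-config.toml (as in the earlier sketch), you can list the relevant keys directly:

# Show the networking-related keys and their current values
grep -E 'unmanaged_cni|calico_ebpf_enabled|kube_default_drop_masq_bits|kube_proxy_mode|kube_proxy_no_cleanup_on_start' mke-config.toml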
Verify that the ucp-kube-proxy container logs on all nodes contain the following:

KUBE_PROXY_MODE (ipvs)
CLEANUP_ON_START_DISABLED false
Performing cleanup
kube-proxy cleanup succeeded
Actually starting kube-proxy....
Obtain the current MKE configuration file for your cluster.
Set kube_proxy_no_cleanup_on_start to true.
Upload the new MKE configuration file. Be aware that this will require a wait time of approximately five minutes.
Reboot all nodes.
Verify that the following values are set in your MKE configuration file and that your cluster is in a healthy state with all nodes ready:

unmanaged_cni = false
calico_ebpf_enabled = false
kube_default_drop_masq_bits = false
kube_proxy_mode = "ipvs"
kube_proxy_no_cleanup_on_start = true
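One way to confirm that all nodes report Ready after the reboot, assuming a working kubectl context against the cluster:

# Wait for every node to report the Ready condition (15 minutes is an arbitrary example timeout)
kubectl wait --for=condition=Ready node --all --timeout=15m
kubectl get nodes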
Verify that the ucp-kube-proxy container logs on all nodes contain the following:

KUBE_PROXY_MODE (ipvs)
CLEANUP_ON_START_DISABLED true
Actually starting kube-proxy....
.....
I1111 02:41:05.559641 1 server_others.go:274] Using ipvs Proxier.
W1111 02:41:05.559951 1 proxier.go:445] IPVS scheduler not specified, use rr by default
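To further confirm on an individual node that IPVS virtual servers have been programmed, you can use the ipvsadm tool, assuming it is installed on that node:

# List IPVS virtual servers and their backends; Kubernetes service ClusterIPs
# should appear once kube-proxy is running with the ipvs proxier
sudo ipvsadm -Ln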
Optional. Configure the following ipvs-related parameters in the MKE configuration file (otherwise, MKE will use the Kubernetes default parameter settings):
ipvs_exclude_cidrs = ""
ipvs_min_sync_period = ""
ipvs_scheduler = ""
ipvs_strict_arp = false
ipvs_sync_period = ""
ipvs_tcp_timeout = ""
ipvs_tcpfin_timeout = ""
ipvs_udp_timeout = ""
For more information on using these parameters, refer to kube-proxy in the Kubernetes documentation.
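For illustration only, a configuration that pins the scheduler and sync behavior might look like the following; the values are arbitrary examples rather than recommendations:

ipvs_scheduler = "rr"
ipvs_sync_period = "30s"
ipvs_min_sync_period = "10s"
ipvs_tcp_timeout = "900s"
ipvs_tcpfin_timeout = "120s"
ipvs_udp_timeout = "300s"
ipvs_strict_arp = false
ipvs_exclude_cidrs = ""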
To switch an existing cluster to eBPF mode while using the managed CNI:
Verify that the prerequisites for eBPF use, including kernel compatibility, have been met on all Linux manager and worker nodes. For more information, refer to Enable the eBPF dataplane in the Calico documentation.
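As part of that check, you can read the kernel version that each node reports to Kubernetes; this does not replace the full Calico prerequisite check, but it quickly flags nodes with obviously old kernels:

# Report the kernel version of every node, as recorded in its node object
kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion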
Obtain the current MKE configuration file for your cluster.
Set kube_default_drop_masq_bits to true.
Upload the new MKE configuration file. Be aware that this will require a wait time of approximately five minutes.
Verify that the ucp-kube-proxy container started on all nodes, that the kube-proxy cleanup took place, and that ucp-kube-proxy launched kube-proxy:

for cont in $(docker ps -a | rev | cut -d' ' -f 1 | rev | grep ucp-kube-proxy); \
do nodeName=$(echo $cont | cut -d '/' -f1); \
docker logs $cont 2>/dev/null | grep -q 'kube-proxy cleanup succeeded'; \
if [ $? -ne 0 ]; \
then echo $nodeName; \
fi; \
done | sort
Expected output in the ucp-kube-proxy logs:

KUBE_PROXY_MODE (iptables)
CLEANUP_ON_START_DISABLED false
Performing cleanup
kube-proxy cleanup succeeded
Actually starting kube-proxy....
Note
If the count returned by the command does not quickly converge at 0, check the ucp-kube-proxy logs on the nodes where either of the following took place:
The ucp-kube-proxy container did not launch.
The kube-proxy cleanup did not happen.
Reboot all nodes.
Obtain the current MKE configuration file for your cluster.
Verify that the following values are set in your MKE configuration file:

unmanaged_cni = false
calico_ebpf_enabled = false
kube_default_drop_masq_bits = true
kube_proxy_mode = "iptables"
kube_proxy_no_cleanup_on_start = false
Verify that the ucp-kube-proxy container logs on all nodes contain the following:

KUBE_PROXY_MODE (iptables)
CLEANUP_ON_START_DISABLED false
Performing cleanup
....
kube-proxy cleanup succeeded
Actually starting kube-proxy....
....
I1111 03:29:25.048458 1 server_others.go:212] Using iptables Proxier.
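As an optional cross-check on an individual node at this stage, the kube-proxy service chains should still be present in the iptables nat table, since the iptables proxier remains in use; the command below assumes root access on the node:

# kube-proxy in iptables mode programs KUBE-SVC-* chains in the nat table
sudo iptables-save -t nat | grep -m 5 'KUBE-SVC'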
Set kube_proxy_mode to "disabled".
Set calico_ebpf_enabled to true.
Upload the new MKE configuration file. Be aware that this will require a wait time of approximately five minutes.
Verify that the ucp-kube-proxy container started on all nodes, that the kube-proxy cleanup took place, and that ucp-kube-proxy did not launch kube-proxy:

for cont in $(docker ps -a | rev | cut -d' ' -f 1 | rev | grep ucp-kube-proxy); \
do nodeName=$(echo $cont | cut -d '/' -f1); \
docker logs $cont 2>/dev/null | grep -q 'Sleeping forever'; \
if [ $? -ne 0 ]; \
then echo $nodeName; \
fi; \
done | sort
Expected output in the ucp-kube-proxy logs:

KUBE_PROXY_MODE (disabled)
CLEANUP_ON_START_DISABLED false
Performing cleanup
kube-proxy cleanup succeeded
Sleeping forever....
Note
If the count returned by the command does not quickly converge at 0, check the ucp-kube-proxy logs on the nodes where either of the following took place:
The ucp-kube-proxy container did not launch.
The ucp-kube-proxy container launched kube-proxy.
Obtain the current MKE configuration file for your cluster.
Verify that the following values are set in your MKE configuration file:

unmanaged_cni = false
calico_ebpf_enabled = true
kube_default_drop_masq_bits = true
kube_proxy_mode = "disabled"
kube_proxy_no_cleanup_on_start = false
Set kube_proxy_no_cleanup_on_start to true.
Upload the new MKE configuration file. Be aware that this will require a wait time of approximately five minutes.
Verify that the following values are set in your MKE configuration file and that your cluster is in a healthy state with all nodes ready:

unmanaged_cni = false
calico_ebpf_enabled = true
kube_default_drop_masq_bits = true
kube_proxy_mode = "disabled"
kube_proxy_no_cleanup_on_start = true
Verify that eBPF mode is operational by confirming the presence of the following lines in the ucp-kube-proxy container logs:

KUBE_PROXY_MODE (disabled)
CLEANUP_ON_START_DISABLED true
Sleeping forever....
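As an additional, optional check that the Calico dataplane itself has switched, you can inspect the default FelixConfiguration resource; bpfEnabled is a standard Calico Felix setting, though how MKE surfaces it may vary, so treat this as a sketch:

# Shows whether Calico Felix has the eBPF dataplane enabled
kubectl get felixconfiguration default -o yaml | grep bpfEnabled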
Verify that you can SSH into all nodes.
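A minimal sketch for that final check, assuming SSH key-based access and that the internal IP addresses reported by Kubernetes are the ones you connect to:

# Attempt a non-interactive SSH connection to every node and report the result
for host in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  ssh -o BatchMode=yes -o ConnectTimeout=5 "$host" true \
    && echo "$host: SSH OK" \
    || echo "$host: SSH FAILED"
done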