Configure cluster and service networking in an existing cluster

On systems that use the managed CNI, you can switch an existing cluster to either kube-proxy with the ipvs proxier or to eBPF mode.

MKE does not support switching kube-proxy in an existing cluster from ipvs proxier to iptables proxier, nor does it support disabling eBPF mode after it has been enabled. Using a CNI that supports both cluster and service networking requires that you disable kube-proxy.

Refer to Cluster and service networking options in the MKE Installation Guide for information on how to configure cluster and service networking at install time.

Caution

The configuration changes described here cannot be reversed. As such, Mirantis recommends that you make a cluster backup, drain your workloads, and take your cluster offline prior to performing any of these changes.
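
For example, assuming an admin client bundle and a hypothetical node named worker-1, workloads can be drained as follows:

    # Drain Kubernetes workloads from the node:
    kubectl drain worker-1 --ignore-daemonsets

    # Drain Swarm workloads from the same node:
    docker node update --availability drain worker-1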

Caution

Swarm workloads that require the use of encrypted overlay networks must use iptables proxier. Be aware that the other networking options detailed here automatically disable Docker Swarm encrypted overlay networks.
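
To identify any Swarm overlay networks that use encryption, you can inspect the overlay networks from a manager node; encrypted networks include the encrypted option in their output:

    # List overlay networks and show their options:
    for net in $(docker network ls --filter driver=overlay -q); do
      docker network inspect --format '{{.Name}}: {{.Options}}' $net
    done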


To switch an existing cluster to kube-proxy with ipvs proxier while using the managed CNI:

  1. Obtain the current MKE configuration file for your cluster.
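
    For example, a typical way to retrieve the configuration through the MKE API is sketched below; MKE_HOST, <username>, and <password> are placeholders, and jq is assumed to be installed:

    # Acquire an authentication token:
    AUTHTOKEN=$(curl --silent --insecure --data '{"username":"<username>","password":"<password>"}' \
      https://MKE_HOST/auth/login | jq --raw-output .auth_token)

    # Download the current MKE configuration file:
    curl --silent --insecure -X GET "https://MKE_HOST/api/ucp/config-toml" \
      -H "Authorization: Bearer $AUTHTOKEN" > mke-config.toml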

  2. Set kube_proxy_mode to "ipvs".
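
    In the MKE configuration file, this setting typically resides under the [cluster_config] section, for example:

    [cluster_config]
      kube_proxy_mode = "ipvs"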

  3. Upload the new MKE configuration file. Allow approximately five minutes for the change to take effect.
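
    Continuing the earlier sketch, the modified file can be uploaded through the same API endpoint, reusing the MKE_HOST placeholder and AUTHTOKEN:

    curl --silent --insecure -X PUT -H "accept: application/toml" \
      -H "Authorization: Bearer $AUTHTOKEN" --upload-file 'mke-config.toml' \
      https://MKE_HOST/api/ucp/config-toml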

  4. Verify that the following values are set in your MKE configuration file:

    unmanaged_cni = false
    calico_ebpf_enabled = false
    kube_default_drop_masq_bits = false
    kube_proxy_mode = "ipvs"
    kube_proxy_no_cleanup_on_start = false
    
  5. Verify that the ucp-kube-proxy container logs on all nodes contain the following:

    KUBE_PROXY_MODE (ipvs) CLEANUP_ON_START_DISABLED false
    Performing cleanup
    kube-proxy cleanup succeeded
    Actually starting kube-proxy....
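
    One way to check is to filter the ucp-kube-proxy container logs directly on each node, for example:

    docker logs ucp-kube-proxy 2>&1 | grep -E 'KUBE_PROXY_MODE|cleanup|Actually starting'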
    
  6. Obtain the current MKE configuration file for your cluster.

  7. Set kube_proxy_no_cleanup_on_start to true.

  8. Upload the new MKE configuration file. Allow approximately five minutes for the change to take effect.

  9. Reboot all nodes.
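
    A conservative approach is to reboot the nodes one at a time, confirming from a client bundle that each node returns to the Ready state before you proceed:

    # On the node:
    sudo reboot

    # From a client bundle, once the node is back up:
    kubectl get nodes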

  10. Verify that the following values are set in your MKE configuration file and that your cluster is in a healthy state with all nodes ready:

    unmanaged_cni = false
    calico_ebpf_enabled = false
    kube_default_drop_masq_bits = false
    kube_proxy_mode = "ipvs"
    kube_proxy_no_cleanup_on_start = true
    
  11. Verify that the ucp-kube-proxy container logs on all nodes contain the following:

    KUBE_PROXY_MODE (ipvs) CLEANUP_ON_START_DISABLED true
    Actually starting kube-proxy....
    .....
    I1111 02:41:05.559641     1 server_others.go:274] Using ipvs Proxier.
    W1111 02:41:05.559951     1 proxier.go:445] IPVS scheduler not specified, use rr by default
    
  12. Optional. Configure the following ipvs-related parameters in the MKE configuration file (otherwise, MKE will use the Kubernetes default parameter settings):

    • ipvs_exclude_cidrs = ""

    • ipvs_min_sync_period = ""

    • ipvs_scheduler = ""

    • ipvs_strict_arp = false

    • ipvs_sync_period = ""

    • ipvs_tcp_timeout = ""

    • ipvs_tcpfin_timeout = ""

    • ipvs_udp_timeout = ""

    For more information on using these parameters, refer to kube-proxy in the Kubernetes documentation.
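
    For illustration only, a configuration fragment that overrides a few of these parameters might look like the following; the values shown are examples, not recommendations:

    [cluster_config]
      kube_proxy_mode = "ipvs"
      ipvs_scheduler = "lc"       # least-connection scheduling instead of the default rr
      ipvs_strict_arp = true
      ipvs_sync_period = "30s"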


To switch an existing cluster to eBPF mode while using the managed CNI:

  1. Verify that the prerequisites for eBPF use have been met, including kernel compatibility, for all Linux manager and worker nodes. Refer to the Calico documentation Enable the eBPF dataplane for more information.
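
    For example, you can check the kernel version on each Linux node and compare it against the minimum version stated in the Calico documentation:

    uname -r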

  2. Obtain the current MKE configuration file for your cluster.

  3. Set kube_default_drop_masq_bits to true.

  4. Upload the new MKE configuration file. Allow approximately five minutes for the change to take effect.

  5. Verify that the ucp-kube-proxy container started on all nodes, that the kube-proxy cleanup took place, and that ucp-kube-proxy launched kube-proxy.

    # List the nodes on which ucp-kube-proxy has not yet logged 'kube-proxy cleanup succeeded';
    # the output should quickly converge to empty.
    for cont in $(docker ps -a | rev | cut -d' ' -f 1 | rev | grep ucp-kube-proxy); \
    do nodeName=$(echo $cont | cut -d '/' -f1); \
    docker logs $cont 2>/dev/null | grep -q 'kube-proxy cleanup succeeded'; \
    if [ $? -ne 0 ]; \
    then echo $nodeName; \
    fi; \
    done | sort
    

    Expected output in the ucp-kube-proxy logs:

    KUBE_PROXY_MODE (iptables) CLEANUP_ON_START_DISABLED false
    Performing cleanup
    kube-proxy cleanup succeeded
    Actually starting kube-proxy....
    

    Note

    If the command does not quickly converge to producing no output, check the ucp-kube-proxy logs on the nodes that it lists; on each such node, one of the following took place:

    • The ucp-kube-proxy container did not launch.

    • The kube-proxy cleanup did not happen.

  6. Reboot all nodes.

  7. Obtain the current MKE configuration file for your cluster.

  8. Verify that the following values are set in your MKE configuration file:

    unmanaged_cni = false
    calico_ebpf_enabled = false
    kube_default_drop_masq_bits = true
    kube_proxy_mode = "iptables"
    kube_proxy_no_cleanup_on_start = false
    
  9. Verify that the ucp-kube-proxy container logs on all nodes contain the following:

    KUBE_PROXY_MODE (iptables) CLEANUP_ON_START_DISABLED false
    Performing cleanup
    ....
    kube-proxy cleanup succeeded
    Actually starting kube-proxy....
    ....
    I1111 03:29:25.048458     1 server_others.go:212] Using iptables Proxier.
    
  10. Set kube_proxy_mode to "disabled".

  11. Set calico_ebpf_enabled to true.
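
    After these two changes, the relevant fragment of the MKE configuration file (typically under the [cluster_config] section) includes the following:

    [cluster_config]
      kube_proxy_mode = "disabled"
      calico_ebpf_enabled = true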

  12. Upload the new MKE configuration file. Allow approximately five minutes for the change to take effect.

  13. Verify that the ucp-kube-proxy container started on all nodes, that the kube-proxy cleanup took place, and that ucp-kube-proxy did not launch kube-proxy.

    # List the nodes on which ucp-kube-proxy has not yet logged 'Sleeping forever';
    # the output should quickly converge to empty.
    for cont in $(docker ps -a | rev | cut -d' ' -f 1 | rev | grep ucp-kube-proxy); \
    do nodeName=$(echo $cont | cut -d '/' -f1); \
    docker logs $cont 2>/dev/null | grep -q 'Sleeping forever'; \
    if [ $? -ne 0 ]; \
    then echo $nodeName; \
    fi; \
    done | sort
    

    Expected output in the ucp-kube-proxy logs:

    KUBE_PROXY_MODE (disabled) CLEANUP_ON_START_DISABLED false
    Performing cleanup
    kube-proxy cleanup succeeded
    Sleeping forever....
    

    Note

    If the command does not quickly converge to producing no output, check the ucp-kube-proxy logs on the nodes that it lists; on each such node, one of the following took place:

    • The ucp-kube-proxy container did not launch.

    • The ucp-kube-proxy container launched kube-proxy.

  14. Obtain the current MKE configuration file for your cluster.

  15. Verify that the following values are set in your MKE configuration file:

    unmanaged_cni = false
    calico_ebpf_enabled = true
    kube_default_drop_masq_bits = true
    kube_proxy_mode = "disabled"
    kube_proxy_no_cleanup_on_start = false
    
  16. Set kube_proxy_no_cleanup_on_start to true.

  17. Upload the new MKE configuration file. Allow approximately five minutes for the change to take effect.

  18. Verify that the following values are set in your MKE configuration file and that your cluster is in a healthy state with all nodes ready:

    unmanaged_cni = false
    calico_ebpf_enabled = true
    kube_default_drop_masq_bits = true
    kube_proxy_mode = "disabled"
    kube_proxy_no_cleanup_on_start = true
    
  19. Verify that eBPF mode is operational by confirming the presence of the following lines in the ucp-kube-proxy container logs:

    KUBE_PROXY_MODE (disabled) CLEANUP_ON_START_DISABLED true
    "Sleeping forever...."
    
  20. Verify that you can SSH into all nodes.
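
    For example, with a hypothetical nodes.txt file that lists one node address per line:

    for host in $(cat nodes.txt); do
      ssh "$host" hostname || echo "SSH FAILED: $host"
    done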