Deploy Calico eBPF Data Plane#

Alert

The dataplane value you set when you install your MKE 4k cluster is immutable. You cannot switch the setting at a later time.

To run the Calico eBPF data plane on MKE 4k, your cluster must have:

  • A supported Linux distribution
  • The eBPF file system mounted at /sys/fs/bpf.

    To confirm the mount, run mount | grep "/sys/fs/bpf" on the cluster nodes. The BPF file system is mounted if the output resembles the following:

    none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
    

Info

Without a supported Linux distribution and the eBPF file system in place, the eBPF data plane may still appear to work. This is because:

  • When Calico does not detect a compatible kernel, it emits a warning and falls back to the standard Linux networking data plane.

  • If the file system is not mounted, the mount does not persist: Pods temporarily lose connectivity whenever Calico restarts, and host endpoints may be left unsecured, as their attached policy program is discarded.
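
If the file system is not mounted on a node, you can mount it manually and persist the mount across reboots, for example as follows (a minimal sketch; the fstab entry is an assumption, verify it against your distribution):

    # One-time mount (requires root):
    mount -t bpf bpffs /sys/fs/bpf

    # Example /etc/fstab entry to persist the mount across reboots:
    bpffs  /sys/fs/bpf  bpf  rw,nosuid,nodev,noexec,relatime,mode=700  0  0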

Enable eBPF#

To enable the eBPF data plane, set the calicoNetwork.linuxDataplane parameter to BPF in the values.yaml file that you provide for Calico.

Because the eBPF data plane requires a direct path to the kube-apiserver that does not depend on Kubernetes service resolution, ensure that your values.yaml file sets the appropriate values for the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT parameters.
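
For reference, the relevant values.yaml keys appear as follows, abridged from the full sample later in this section:

    installation:
      calicoNetwork:
        linuxDataplane: BPF
    kubernetesServiceEndpoint:
      host: <Kubernetes_API_host>
      port: <Kubernetes_API_port>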

Danger

If you are installing the iptables data plane rather than the eBPF data plane, and you are specifying the full specification in the values.yaml file, you must not provide values for the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT parameters.

MKE 4k uses Helm to manage the installation of Tigera Operator. Refer to the official documentation for more information on Tigera Operator installation and the Helm values file.

Notes

  • It is only necessary to add registry values to the values.yaml file for offline MKE 4k installations.
  • The cidr value in the values.yaml file overrides any CIDR values that are specified elsewhere.

A sample mke4.yaml configuration file with BPF data plane network settings is presented below:

providers:
    - enabled: true
      extraConfig:
        cidrV4: 192.168.0.0/16
        loglevel: Info
        "values.yaml": |
          kubeletVolumePluginPath: /var/lib/kubelet
          tigeraOperator:
            registry: <registry>
          installation:
            registry: <registry-path>
            logging:
              cni:
                logSeverity: Debug
            cni:
              type: Calico
            calicoNetwork:
              linuxDataplane: BPF
              ipPools:
              - cidr: 192.168.0.0/16
                encapsulation: VXLAN
          resources:
            requests:
              cpu: 260m
          defaultFelixConfiguration:
            enabled: true
            wireguardEnabled: false
            wireguardEnabledV6: false
          kubernetesServiceEndpoint:
            host: <Kubernetes_API_host>
            port: <Kubernetes_API_port>
      provider: calico

Set the values for <Kubernetes_API_host> and <Kubernetes_API_port> so that the kube-apiserver can be contacted without the need for service-level resolution. Typically, these values are set to the address of the external load balancer, if one is in use; this information is specified in the mke4.yaml configuration file at spec.api.externalAddress and spec.api.port. For the same purpose, you can instead use the address of a manager node, or deploy a load balancer dedicated to the CNI, with its SANs added appropriately.
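
For example, with an external load balancer, the two files might look as follows. This is a sketch: lb.example.com is a hypothetical address, and the mke4.yaml layout is inferred from the spec.api.externalAddress and spec.api.port paths named above.

    # mke4.yaml fragment (hypothetical load balancer address):
    spec:
      api:
        externalAddress: lb.example.com
        port: 6443

    # Matching values.yaml endpoint:
    kubernetesServiceEndpoint:
      host: lb.example.com
      port: "6443"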

While you can change most configuration values in the mke4.yaml configuration file simply by running the mkectl apply command, changes made to either the <Kubernetes_API_host> or <Kubernetes_API_port> settings require that you follow a specific series of steps (a command sketch follows the list):

  1. Change the settings in the kubernetes-service-endpoint ConfigMap in the tigera-operator namespace, using the k0s kc -n tigera-operator edit cm/kubernetes-service-endpoint command to set the new values.

  2. Restart the tigera-operator Deployment in the tigera-operator namespace, and wait for all of the resources to become ready.

  3. Run the mkectl apply command with the changed configuration.
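
A sketch of this sequence, assuming the default tigera-operator deployment name (the rollout subcommands are standard kubectl commands; verify them against your environment):

    # 1. Set the new host and port values in the ConfigMap:
    k0s kc -n tigera-operator edit cm/kubernetes-service-endpoint

    # 2. Restart the tigera-operator Deployment and wait for readiness:
    k0s kc -n tigera-operator rollout restart deployment/tigera-operator
    k0s kc -n tigera-operator rollout status deployment/tigera-operator

    # 3. Re-apply the changed configuration:
    mkectl apply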

Confirm eBPF assignment#

To confirm that eBPF is the current data plane:

  1. Run the following command on any manager node:

    k0s kc -n calico-system logs daemonset/calico-node --all-containers --all-pods|grep BPF|grep -q 'newReport="ready"'
    
  2. Verify that the command exits with status 0, which indicates that a calico-node log line reports the BPF data plane as ready. Note that grep -q suppresses the matching output.
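
Because grep -q suppresses the matching output, you can key off the exit status instead, for example:

    k0s kc -n calico-system logs daemonset/calico-node --all-containers --all-pods|grep BPF|grep -q 'newReport="ready"' && echo "BPF data plane ready"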