Cilium configuration for child clusters#
This appendix describes how to deploy Cilium as the CNI on child clusters provisioned with Mirantis k0rdent Enterprise. The examples cover standalone control plane and hosted control plane (HCP) setups. Configure k0s so Cilium is the only CNI: set the network provider to custom and, when you enable kube-proxy replacement, align k0s disabledComponents or k0smotron flags with your Cilium values.
Note
The YAML examples use bare-metal templates (capm3-*). The same principles apply to other infrastructure providers. For management-side bare-metal preparation, see Bare metal preparation.
Service template and Helm catalog#
Define a ServiceTemplate that pulls the Cilium chart from the Mirantis k0rdent Enterprise catalog. The catalog is exposed as a Flux HelmRepository.
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  labels:
    k0rdent.mirantis.com/managed: "true"
  name: k0rdent-catalog
  namespace: kcm-system
spec:
  provider: generic
  type: oci
  url: oci://ghcr.io/k0rdent/catalog/charts
---
apiVersion: k0rdent.mirantis.com/v1beta1
kind: ServiceTemplate
metadata:
  name: cilium-1.19.0
  namespace: kcm-system
spec:
  helm:
    chartSpec:
      chart: cilium
      version: 1.19.0
      sourceRef:
        kind: HelmRepository
        name: k0rdent-catalog
Note
The chart version is an example. Use the chart version that your organization publishes or consumes from the catalog.
Service deployment using ClusterDeployment or MultiClusterService#
Include the Cilium service in your ClusterDeployment or MultiClusterService. The following serviceSpec uses a minimal configuration: Hubble, Envoy, and egress gateway are disabled, kube-proxy replacement is off, and pod CIDRs are set for cluster-pool IPAM.
serviceSpec:
  stopOnConflict: false
  syncMode: Continuous
  continueOnError: true
  priority: 100
  services:
    - template: cilium-1.19.0
      name: cilium
      namespace: cilium
      values: |
        cilium:
          cluster:
            name: cilium
          hubble:
            enabled: false
          envoy:
            enabled: false
          egressGateway:
            enabled: false
          ipam:
            operator:
              # adjust the podCIDR allocated for your cluster’s pods
              clusterPoolIPv4PodCIDRList:
                - 10.243.0.0/17 # default is 10.0.0.0/8; must align with pods.cidrBlocks
          kubeProxyReplacement: "false"
          tunnelProtocol: geneve # default is "vxlan"
          k8sServiceHost: "10.0.1.80" # must match API VIP if kubeProxyReplacement=true;
                                      # can be "auto" if kubeProxyReplacement=false
          k8sServicePort: "6443" # must match API server port
Note
Keep clusterPoolIPv4PodCIDRList (and any other pod CIDR settings) consistent with the cluster pods.cidrBlocks.
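As a quick sanity check, the containment can be verified with Python's standard ipaddress module. This is a sketch with hypothetical CIDR values; substitute your cluster's actual pods.cidrBlocks and Cilium pool entries.

```python
import ipaddress

def pool_fits(pool_cidrs, pods_cidr_blocks):
    """Check that every Cilium pool CIDR is contained in some pods.cidrBlocks entry."""
    return all(
        any(ipaddress.ip_network(p).subnet_of(ipaddress.ip_network(b))
            for b in pods_cidr_blocks)
        for p in pool_cidrs
    )

# Hypothetical values: a /17 pool carved out of a /16 pods CIDR
print(pool_fits(["10.243.0.0/17"], ["10.243.0.0/16"]))  # True: pool fits
print(pool_fits(["10.0.0.0/8"], ["10.243.0.0/16"]))     # False: pool too wide
```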
Note
If you later enable kube-proxy replacement in Cilium, use the same choice on the control plane (kube-proxy disabled in standalone disabledComponents or via k0smotron on HCP). A mismatch will break networking.
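The consistency rule in this note can be expressed as a small check. The function below is a hypothetical helper, not part of any k0rdent API: it compares the Cilium kubeProxyReplacement value against the control plane's disabled-components list.

```python
def kube_proxy_consistent(kube_proxy_replacement: str, disabled_components: str) -> bool:
    """kubeProxyReplacement in the Cilium values must mirror whether
    kube-proxy appears in the control plane's disabled components."""
    replacement_on = kube_proxy_replacement.strip().lower() == "true"
    kube_proxy_off = "kube-proxy" in disabled_components.split(",")
    return replacement_on == kube_proxy_off

# The minimal example above: replacement off, kube-proxy still running -- consistent
print(kube_proxy_consistent("false", "konnectivity-server"))                # True
# Mismatch: replacement on but kube-proxy not disabled -- networking breaks
print(kube_proxy_consistent("true", "konnectivity-server"))                 # False
# Consistent again once kube-proxy is disabled on the control plane
print(kube_proxy_consistent("true", "konnectivity-server,kube-proxy"))      # True
```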
Control plane configuration#
Using the Cilium CNI requires setting the k0s network provider to custom so that the default CNI is not installed.
Standalone control plane#
konnectivity-server is disabled because it is not required on standalone clusters. If you enable kube-proxy replacement, disable kube-proxy in the control plane configuration as well.
spec:
  template: capm3-standalone-cp-0-5-0
  config:
    controlPlane:
      disabledComponents: konnectivity-server,kube-proxy
    k0s:
      network:
        provider: custom # Disables default CNI
Hosted control plane (HCP)#
For HCP clusters, if kube-proxy replacement is enabled, pass k0smotron flags to disable kube-proxy.
spec:
  template: capm3-hosted-cp-0-5-0
  config:
    k0smotron:
      controllerPlaneFlags:
        - --disable-components=kube-proxy # for kubeProxyReplacement
    k0s:
      network:
        provider: custom # Disables default CNI
For more information, see Cilium with kube-proxy replacement on Hosted Control Plane in the k0smotron documentation.
Optional configuration#
Cluster-pool IPAM customization#
You may need to tune Cilium IPAM when running on a cloud provider or when nodes or pods need specific address ranges. Adjust the values block for the Cilium service in your ClusterDeployment or MultiClusterService.
cilium:
  ipam:
    mode: cluster-pool # default mode
    operator:
      # adjust the podCIDR allocated for your cluster’s pods
      clusterPoolIPv4PodCIDRList:
        - 10.243.0.0/17 # default is 10.0.0.0/8; must fit into pods.cidrBlocks
      # adjust the CIDR size that should be allocated for each node
      clusterPoolIPv4MaskSize: 25 # optional, 24 by default
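To see what the example pool and mask imply for capacity (a sketch using the sample values above), you can enumerate the per-node blocks with Python's ipaddress module. A /17 pool split into /25 node blocks yields 256 node allocations of 128 addresses each.

```python
import ipaddress

pool = ipaddress.ip_network("10.243.0.0/17")  # clusterPoolIPv4PodCIDRList entry
node_mask = 25                                # clusterPoolIPv4MaskSize

node_blocks = list(pool.subnets(new_prefix=node_mask))
print(len(node_blocks))              # 256 per-node allocations available
print(node_blocks[0].num_addresses)  # 128 addresses in each node's block
```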
For more information, see IP Address Management in the Cilium documentation.
Kube-proxy replacement#
To use Cilium’s eBPF datapath for service load balancing, set kubeProxyReplacement to true. This must match control-plane settings (kube-proxy disabled via disabledComponents or k0smotron, as shown above).
cilium:
  kubeProxyReplacement: "true"
  k8sServiceHost: "10.0.1.80" # must match your API VIP or LoadBalancer IP
  k8sServicePort: "6443" # must match API server port
For more information, see Kube-Proxy Replacement in the Cilium documentation.
Hubble observability#
Enable Hubble relay and UI for flow visibility.
cilium:
  hubble:
    enabled: true
    relay:
      enabled: true
    ui:
      enabled: true
For more information, see Hubble Observability in the Cilium documentation.