Ingress Support for Hosted Control Planes#
Note
Supported k0s Versions: v1.34.1+k0s.0 and later.
Mirantis k0rdent Enterprise enables deployment of hosted control plane clusters with the API server and konnectivity exposed via an ingress controller. This allows cluster access through hostnames instead of direct service endpoints, reducing the number of required load balancers.
k0smotron automatically creates an Ingress resource that routes traffic to the control plane service. Each worker
node runs a local HAProxy sidecar, which proxies pod traffic to the ingress controller. The kubelet connects
directly to the ingress controller for control plane communication, while pods use the HAProxy sidecar.
Prerequisites#
- An ingress controller deployed on the cluster where the hosted control plane components run. The ingress controller must support SSL passthrough, for example HAProxy, NGINX, or Traefik.

  Note

  When deploying a cluster, make sure to use the correct SSL passthrough annotation for your ingress controller implementation. For example, HAProxy Ingress (community) requires `haproxy-ingress.github.io/ssl-passthrough: "true"`.

- A `ClusterTemplate` referenced by the `ClusterDeployment`, which uses a Helm chart configured to set the `K0smotronControlPlane` `spec.ingress` field appropriately.
- DNS properly configured in both the cluster hosting the control plane components and the child cluster. The API server and konnectivity hostnames should resolve to the ingress controller's external IP address. Alternatively, use a dynamic DNS service such as sslip.io or nip.io (for example, `api.<cluster-name>.<ingress-ip>.nip.io` and `konnectivity.<cluster-name>.<ingress-ip>.nip.io`), which automatically resolves each hostname to the IP address embedded in it.
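As a quick illustration, the dynamic-DNS hostname convention described above can be sketched in Python. The cluster name and ingress IP below are placeholder values, not taken from a real deployment:

```python
def ingress_hostnames(cluster_name: str, ingress_ip: str) -> dict:
    """Build nip.io hostnames following the
    api.<cluster-name>.<ingress-ip>.nip.io convention."""
    return {
        "apiHost": f"api.{cluster_name}.{ingress_ip}.nip.io",
        "konnectivityHost": f"konnectivity.{cluster_name}.{ingress_ip}.nip.io",
    }

hosts = ingress_hostnames("test-cluster", "172.19.114.90")
print(hosts["apiHost"])  # api.test-cluster.172.19.114.90.nip.io
```

Both hostnames then resolve to the ingress controller's external IP without any extra DNS records.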
Warning
When deploying a hosted control plane cluster with ingress enabled and using --cloud-provider=external in kubelet
args, the CCM must be configured with the correct API host and port to use the ingress API hostname directly for
in-cluster config, instead of the default Kubernetes Service address. See the
troubleshooting section
below for details.
ClusterTemplate and Helm Chart Configuration#
In Mirantis k0rdent Enterprise v1.3.0, ingress support is enabled in the default openstack-hosted-cp ClusterTemplate (version
1.0.22 and later). When using this ClusterTemplate, you can enable ingress for your OpenStack hosted control plane
cluster by setting the following parameters in the ClusterDeployment:
```yaml
spec:
  config:
    k0smotron:
      service:
        type: ClusterIP
      ingress:
        enabled: true
        className: "haproxy"
        apiHost: api.test-cluster.example.com
        konnectivityHost: konnectivity.test-cluster.example.com
        port: 443
```
Note
- Ensure that `k0smotron.service.type` is configured appropriately for your setup. By default, the `openstack-hosted-cp` ClusterTemplate sets this value to `LoadBalancer`, which provisions an external load balancer for the control plane service. When using ingress, this is unnecessary; to avoid provisioning an unneeded load balancer, set `k0smotron.service.type` to `ClusterIP` or `NodePort` instead.
- By default, the `openstack-hosted-cp` ClusterTemplate is configured to pass the `haproxy.org/ssl-passthrough: "true"` annotation to the `K0smotronControlPlane` `spec.ingress.annotations` field. If you're using a different ingress controller, make sure to set the correct SSL passthrough annotation.
- Set `apiHost` and `konnectivityHost` to hostnames that resolve to the ingress controller's external IP address. Each hostname must be unique per cluster. When using dynamic DNS services such as sslip.io or nip.io, verify that the hostname resolves to the correct IP. For more details, see: Dynamic DNS Services Resolve Incorrect IP for Ingress Hostnames.
When using a custom ClusterTemplate, ensure that the K0smotronControlPlane spec.ingress field is properly configured
in the Helm chart. The K0smotronControlPlane spec.ingress field includes:
- `apiHost`: Hostname for the Kubernetes API (for example, `kube-api.example.com`). Required when ingress is enabled.
- `konnectivityHost`: Hostname for the konnectivity server (for example, `konnectivity.example.com`). Required when ingress is enabled.
- `annotations`: Additional annotations for the ingress controller service. Must include the SSL passthrough annotation.
- `className`: The ingress class name used by the ingress controller (for example, `haproxy`).
- `deploy`: Whether to deploy an Ingress resource for the cluster or let the user create it manually. Defaults to `true`.
- `port`: Port used by the ingress controller. Defaults to `443`.
The Helm chart must be adapted to consume ingress values to properly configure the K0smotronControlPlane object. See the examples below.
Example: K0smotronControlPlane spec.ingress Configuration#
```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: K0smotronControlPlane
metadata:
  name: {{ include "k0smotroncontrolplane.name" . }}
spec:
  {{- if and .Values.ingress.enabled .Values.ingress.apiHost .Values.ingress.konnectivityHost }}
  ingress:
    deploy: true
    className: {{ .Values.ingress.className }}
    {{- with .Values.ingress.annotations }}
    annotations:
      {{- toYaml . | nindent 6 }}
    {{- end }}
    port: {{ .Values.ingress.port }}
    apiHost: {{ .Values.ingress.apiHost }}
    konnectivityHost: {{ .Values.ingress.konnectivityHost }}
  {{- end }}
```
...
Example: ClusterTemplate ingress Configuration in values.yaml#
```yaml
ingress:
  enabled: false
  className: "haproxy"
  apiHost: ""
  konnectivityHost: ""
  port: 443
  annotations: {}
```
Note
The parameter names in values.yaml are not significant, but the values must be correctly mapped to the corresponding
fields in the K0smotronControlPlane spec.ingress.
Example: Hosted ClusterDeployment Configuration with Ingress Enabled#
This example assumes the referenced ClusterTemplate is configured to set the K0smotronControlPlane
spec.ingress field based on values provided in the ClusterDeployment spec.config.ingress field. Ensure your ClusterTemplate is configured accordingly.
When using an ingress controller with SSL passthrough enabled via annotation, make sure to set the correct SSL passthrough annotation for your controller in the spec.config.ingress.annotations field.
```yaml
apiVersion: k0rdent.mirantis.com/v1beta1
kind: ClusterDeployment
metadata:
  name: test-cluster
  namespace: kcm-system
spec:
  template: custom-hosted-cp-1-0-22
  credential: test-credential
  config:
    ingress:
      enabled: true
      className: "haproxy"
      apiHost: api.test-cluster.172.96.1.2.nip.io
      konnectivityHost: konnectivity.test-cluster.172.96.1.2.nip.io
      port: 443
      annotations:
        haproxy-ingress.github.io/ssl-passthrough: "true"
```
Troubleshooting#
ClusterDeployment Stuck Waiting for Worker Nodes to Become Ready When Ingress Is Enabled#
When deploying a hosted control plane cluster with ingress enabled and using the external cloud provider in kubelet
args, the ClusterDeployment may get stuck waiting for the worker nodes to become ready.
The ingress architecture uses a local HAProxy sidecar on each worker to proxy pod-to-API traffic. However, with
--cloud-provider=external, a deadlock can occur: the HAProxy sidecar cannot be configured until the CCM reports
worker node addresses, but the CCM cannot reach the API server because the HAProxy is not yet set up.
Workaround#
Override the in-cluster config used by the CCM by explicitly setting KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT
in the CCM pod spec, pointing directly to the ingress hostname:
```yaml
env:
  - name: KUBERNETES_SERVICE_HOST
    value: "api.test-cluster.example.com"
  - name: KUBERNETES_SERVICE_PORT
    value: "443"
```
This can be done either by modifying the CCM deployment directly after the API server becomes reachable or, if the CCM configuration is managed by a Helm chart, by setting these environment variables in the CCM Helm chart values.
Note
When using the default openstack-hosted-cp ClusterTemplate, this workaround is already implemented in the CCM Helm chart, and the environment variables are set automatically when ingress is enabled in the ClusterDeployment.
See details:
- Cloud Controller Manager fails to start with `--cloud-provider=external` when using Ingress support (GitHub issue)
Dynamic DNS Services Resolve Incorrect IP for Ingress Hostnames#
In certain cluster configurations, hostnames generated for ingress resources may be incorrectly resolved by dynamic DNS services like sslip.io or nip.io. This typically happens when the hostname includes multiple consecutive numeric components, which can cause the DNS service to misinterpret the intended IP address.
For example, `api.test-cluster-1.172.19.114.90.sslip.io` resolves to `1.172.19.114` instead of the expected `172.19.114.90`, because of the way the DNS service parses the hostname.
Symptoms:
- Ingress endpoints are unreachable.
- DNS resolution returns an unexpected IP address.
Resolution#
To avoid ambiguity, ensure that the hostname structure clearly separates the cluster identifier from the IP address.
Best Practices:
- Ensure hostnames include a non-numeric separator before the IP portion (e.g., `-api`, `-konnectivity`), for example: `test-cluster-1-api.172.19.114.90.sslip.io`.
- Avoid consecutive dot-separated numeric segments outside the intended IP address.
- Validate DNS resolution (`dig`, `nslookup`) when troubleshooting ingress connectivity.
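The misparse can be modeled as a first-match search for a dotted quad in the hostname; this is a simplified sketch, not the actual sslip.io/nip.io parser, but it reproduces the resolution behavior described in this section:

```python
import re

# Simplified model: take the first dotted-quad pattern found in the hostname.
IP_RE = re.compile(r"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}")

def embedded_ip(hostname: str) -> str:
    """Return the first dotted quad found in the hostname, or '' if none."""
    match = IP_RE.search(hostname)
    return match.group(0) if match else ""

# The trailing digit of the cluster label bleeds into the parsed address:
print(embedded_ip("api.test-cluster-1.172.19.114.90.sslip.io"))   # 1.172.19.114
# A non-numeric separator before the IP keeps the parse unambiguous:
print(embedded_ip("test-cluster-1-api.172.19.114.90.sslip.io"))   # 172.19.114.90
```

This is why placing a suffix such as `-api` between the cluster identifier and the IP portion avoids the ambiguity.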