Built-in load balancing¶
Caution
Since Container Cloud 2.27.3 (Cluster release 16.2.3), support for vSphere-based clusters is suspended. For details, see Deprecation notes.
Mirantis Container Cloud managed clusters based on vSphere or bare metal use MetalLB for load balancing of services, and HAProxy with a virtual IP (VIP) managed by the Virtual Router Redundancy Protocol (VRRP) through Keepalived for load balancing of the Kubernetes API.
Kubernetes API load balancing¶
Every control plane node of each Kubernetes cluster runs the kube-api service in a container. This service provides a Kubernetes API endpoint. Every control plane node also runs the haproxy server that provides load balancing with backend health checking for all kube-api endpoints as backends.
The default load balancing method is least_conn. With this method, a request is sent to the server with the least number of active connections. The default load balancing method cannot be changed using the Container Cloud API.
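For illustration only, an haproxy.cfg fragment implementing this scheme could look similar to the following sketch. The frontend name, backend name, node addresses, and ports are hypothetical and do not reflect the exact configuration that Container Cloud generates; note that HAProxy spells the method leastconn:

```cfg
# Hypothetical sketch; Container Cloud generates the actual configuration.
frontend kube-api-frontend
    bind *:443
    mode tcp
    default_backend kube-api-backend

backend kube-api-backend
    mode tcp
    balance leastconn          # route each request to the server with the fewest active connections
    option tcp-check           # health checking of the kube-api backends
    server master-0 10.0.0.10:6443 check
    server master-1 10.0.0.11:6443 check
    server master-2 10.0.0.12:6443 check
```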
Only one of the control plane nodes at any given time serves as a front end for Kubernetes API. To ensure this, the Kubernetes clients use a virtual IP (VIP) address for accessing Kubernetes API. This VIP is assigned to one node at a time using VRRP. Keepalived running on each control plane node provides health checking and failover of the VIP.
Keepalived is configured in multicast mode.
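A minimal keepalived.conf sketch of such a VRRP instance is shown below. The instance name, interface, router ID, priority, and VIP address are example values, not the configuration that Container Cloud applies; Keepalived uses multicast VRRP advertisements by default, which matches the multicast mode described above:

```cfg
# Hypothetical sketch; all values are examples.
vrrp_instance kube-api-vip {
    state BACKUP               # every node starts as BACKUP; VRRP elects the MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1               # send multicast VRRP advertisements every second
    virtual_ipaddress {
        10.0.0.100/24          # the VIP that Kubernetes clients use to reach the API
    }
}
```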
Note
The use of a VIP address for load balancing of the Kubernetes API requires that all control plane nodes of a Kubernetes cluster are connected to a shared L2 segment. This limitation prevents installing full L3 topologies where control plane nodes are split between different L2 segments and L3 networks.
Caution
External load balancers for services are not supported by the current version of the Container Cloud vSphere provider. The built-in load balancing described in this section is the only supported option and cannot be disabled.
Services load balancing¶
The services provided by the Kubernetes clusters, including Container Cloud and user services, are balanced by MetalLB. The metallb-speaker service runs on every worker node in the cluster and handles connections to the service IP addresses.
MetalLB runs in the MAC-based (L2) mode. This means that all nodes that run the metallb-speaker service must be connected to a shared L2 segment. This limitation does not allow installing full L3 cluster topologies.
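For reference, MetalLB's L2 mode is typically configured with resources similar to the following sketch, which uses the upstream metallb.io v1beta1 API. The pool name, namespace, and address range are assumed example values; Container Cloud manages the actual MetalLB configuration through its own API:

```yaml
# Hypothetical sketch; the address range and names are examples.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: services-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.1.100-10.0.1.120   # IPs assigned to LoadBalancer services
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: services-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - services-pool           # announce these IPs via ARP from one node at a time
```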