Mirantis Container Cloud deploys Ceph on bare metal and Equinix Metal based managed clusters using Helm charts with the following components:
Note: Ceph changes in Container Cloud 2.20
Since Container Cloud 2.20.0, the Ceph cluster is not deployed on management and regional clusters to reduce resource consumption. The Ceph cluster is automatically removed from existing management and regional clusters during the update to Container Cloud 2.20.0. Managed clusters continue using Ceph as a distributed storage system.
- Rook Ceph Operator
A storage orchestrator that deploys Ceph on top of a Kubernetes cluster. Also known as
Rook Operator. Rook operations include:
Deploying and managing a Ceph cluster based on provided Rook CRs such as
CephCluster, CephBlockPool, CephObjectStore, and so on.
Orchestrating the state of the Ceph cluster and all its daemons.
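For illustration, the following is a minimal sketch of a Rook CephCluster CR of the kind that Rook Operator reconciles. Node and device names are placeholders, and the exact values that Container Cloud generates may differ:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3          # three Ceph Monitors, as in a typical cluster
  mgr:
    count: 1          # one Ceph Manager in a regular cluster
  storage:
    useAllNodes: false
    nodes:
    - name: worker-1  # placeholder node name
      devices:
      - name: sdb     # placeholder device
```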
- KaaSCephCluster custom resource (CR)
Represents the customization of a Kubernetes installation and allows you to define the required Ceph configuration through the Container Cloud web UI before deployment. For example, you can define the failure domain, Ceph pools, Ceph node roles, number of Ceph components such as Ceph OSDs, and so on. The
ceph-kcc-controller controller on the Container Cloud management cluster manages the
KaaSCephCluster CR.
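A minimal KaaSCephCluster sketch might look like the following. Treat the field names as an assumption for illustration; machine names, namespaces, and device names are placeholders:

```yaml
apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephCluster
metadata:
  name: ceph-cluster
  namespace: managed-ns        # namespace of the managed cluster
spec:
  k8sCluster:
    name: managed-cluster
    namespace: managed-ns
  cephClusterSpec:
    nodes:
      machine-1:
        roles: [mon, mgr]      # Ceph node roles
      machine-2:
        storageDevices:
        - name: sdb            # placeholder device for a Ceph OSD
          config:
            deviceClass: hdd
    pools:
    - name: kubernetes
      role: kubernetes
      deviceClass: hdd
      replicated:
        size: 3
      default: true
```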
- Ceph Controller
A Kubernetes controller that obtains the parameters from Container Cloud through a CR, creates CRs for Rook, and updates its CR status based on the Ceph cluster deployment progress. It creates users, pools, and keys for OpenStack and Kubernetes and provides Ceph configurations and keys to access them. Also, Ceph Controller eventually obtains the data from the OpenStack Controller for the Keystone integration and updates the RADOS Gateway service configuration to use Kubernetes for user authentication. Ceph Controller operations include:
Transforming user parameters from the Container Cloud Ceph CR into Rook CRs and deploying a Ceph cluster using Rook.
Providing integration of the Ceph cluster with Kubernetes.
Providing data for OpenStack to integrate with the deployed Ceph cluster.
- Ceph Status Controller
A Kubernetes controller that collects all valuable parameters from the current Ceph cluster, its daemons, and entities and exposes them in the
KaaSCephCluster status. Ceph Status Controller operations include:
Collecting all statuses from a Ceph cluster and corresponding Rook CRs.
Collecting additional information on the health of Ceph daemons.
Providing information for the
status section of the KaaSCephCluster CR.
- Ceph Request Controller
A Kubernetes controller that obtains the parameters from Container Cloud through a CR and handles Ceph OSD lifecycle management (LCM) operations. It allows for a safe Ceph OSD removal from the Ceph cluster. Ceph Request Controller operations include:
Providing an ability to perform Ceph OSD LCM operations.
Obtaining specific CRs to remove Ceph OSDs and executing them.
Pausing the regular Ceph Controller reconcile until all requests are completed.
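As a sketch of such a request, a Ceph OSD removal CR might look like the following. The kind and field names here are assumptions based on the pattern described above, and the node and device names are placeholders:

```yaml
apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephOperationRequest   # assumed kind for an LCM request
metadata:
  name: remove-osd-request
  namespace: managed-ns
spec:
  kaasCephCluster:
    name: ceph-cluster           # target KaaSCephCluster
    namespace: managed-ns
  osdRemove:
    nodes:
      worker-3:                  # placeholder node name
        cleanupByDevice:
        - device: sdb            # device backing the Ceph OSD to remove
```

While such a request is in progress, the regular Ceph Controller reconcile is paused until the request completes.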
A typical Ceph cluster consists of the following components:
Ceph Monitors - three or, in rare cases, five Ceph Monitors.
Ceph Managers - one Ceph Manager in a regular cluster.
RADOS Gateway services - Mirantis recommends having three or more RADOS Gateway instances for HA.
Ceph OSDs - the number of Ceph OSDs may vary according to the deployment needs.
A Ceph cluster with 3 Ceph nodes does not provide hardware fault tolerance and is not eligible for recovery operations, such as a disk or an entire Ceph node replacement.
A Ceph cluster uses a replication factor of 3. If the number of Ceph OSDs is less than 3, the Ceph cluster moves to a degraded state with restricted write operations until the number of alive Ceph OSDs equals the replication factor again.
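The replication factor is defined per Ceph pool. In a KaaSCephCluster pool definition, it might be expressed as in the following sketch (field names assumed for illustration):

```yaml
pools:
- name: kubernetes
  role: kubernetes
  deviceClass: hdd
  replicated:
    size: 3        # each object is stored on 3 different Ceph OSDs
  default: true
```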
The placement of Ceph Monitors and Ceph Managers is defined in the KaaSCephCluster CR.
The following diagram illustrates the way a Ceph cluster is deployed in Container Cloud:
The following diagram illustrates the processes within a deployed Ceph cluster: