Overview

Mirantis Container Cloud deploys Ceph on the baremetal-based management and managed clusters using Helm charts with the following components:

  • Ceph controller - a Kubernetes controller that obtains the parameters from Container Cloud through a custom resource (CR), creates CRs for Rook, and updates its CR status based on the Ceph cluster deployment progress. It creates users, pools, and keys for OpenStack and Kubernetes and provides the Ceph configurations and keys required to access them. Ceph controller also obtains data from the OpenStack Controller for the Keystone integration and updates the RADOS Gateway services configuration to use Kubernetes for user authentication.

  • Ceph operator

    • Transforms user parameters from the Container Cloud web UI into Rook credentials and deploys a Ceph cluster using Rook.

    • Provides integration of the Ceph cluster with Kubernetes.

    • Provides data for OpenStack to integrate with the deployed Ceph cluster.

  • Custom resource (CR) - represents the customization of a Kubernetes installation and allows you to define the required Ceph configuration through the Container Cloud web UI before deployment. For example, you can define the failure domain, pools, Ceph node roles, the number of Ceph components such as Ceph OSDs, and so on. A sketch of such a resource is shown after this list.

  • Rook - a storage orchestrator that deploys Ceph on top of a Kubernetes cluster.
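
For illustration only, the following sketch shows how such a custom resource could be defined and applied with the Kubernetes Python client. The KaaSCephCluster kind, the kaas.mirantis.com API group and version, and all field names are assumptions made for this example; refer to the Container Cloud API reference for the exact schema of the Ceph custom resource.

  from kubernetes import client, config

  # Hypothetical Ceph cluster specification: pools, failure domain, and the
  # replication factor are examples of parameters the custom resource exposes.
  ceph_cluster = {
      "apiVersion": "kaas.mirantis.com/v1alpha1",   # assumed API group/version
      "kind": "KaaSCephCluster",                    # assumed kind name
      "metadata": {"name": "ceph-cluster", "namespace": "managed-ns"},
      "spec": {
          "cephClusterSpec": {                      # illustrative field names
              "pools": [
                  {
                      "name": "kubernetes",
                      "role": "kubernetes",
                      "replicated": {"size": 3},    # replication factor 3
                      "failureDomain": "host",      # failure domain per node
                  },
              ],
          },
      },
  }

  config.load_kube_config()
  api = client.CustomObjectsApi()

  # Create the custom resource; Ceph controller picks it up, generates the
  # corresponding Rook CRs, and reports progress through the CR status.
  api.create_namespaced_custom_object(
      group="kaas.mirantis.com",
      version="v1alpha1",
      namespace="managed-ns",
      plural="kaascephclusters",
      body=ceph_cluster,
  )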

A typical Ceph cluster consists of the following components:

Ceph Monitors

Three or, in rare cases, five Ceph Monitors.

Ceph Managers

Mirantis recommends having three Ceph Managers in every cluster.

RADOS Gateway services

Mirantis recommends having three or more RADOS Gateway services for high availability (HA).

Ceph OSDs

The number of Ceph OSDs may vary according to the deployment needs.

Warning

  • A Ceph cluster with 3 Ceph nodes does not provide hardware fault tolerance and is not eligible for recovery operations such as replacement of a disk or an entire Ceph node.

  • A Ceph cluster uses a replication factor of 3. If the number of Ceph OSDs falls below 3, the cluster moves to the degraded state and restricts write operations until the number of alive Ceph OSDs again equals the replication factor. A quick check of these constraints is sketched below.
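
As a simple illustration of these constraints, the following sketch validates a planned layout before deployment. The helper function and its messages merely restate the warning above; it is not part of Container Cloud.

  REPLICATION_FACTOR = 3  # replication factor used by the Ceph cluster

  def check_ceph_layout(ceph_nodes: int, ceph_osds: int) -> list:
      """Return warnings for a planned Ceph layout (illustrative helper)."""
      issues = []
      if ceph_nodes <= 3:
          issues.append("3 Ceph nodes or fewer: no hardware fault tolerance, "
                        "disk or Ceph node replacement is not supported")
      if ceph_osds < REPLICATION_FACTOR:
          issues.append("fewer Ceph OSDs than the replication factor: "
                        "the cluster degrades and restricts write operations")
      return issues

  # Example: a 3-node layout with 2 Ceph OSDs triggers both warnings.
  print(check_ceph_layout(ceph_nodes=3, ceph_osds=2))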

The placement of Ceph Monitors and Ceph Managers is defined in the custom resource.
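
For example, placement can be expressed by assigning roles to specific machines in the nodes section of the custom resource. The fragment below is a sketch only; the machine names and field names (roles, storageDevices, deviceClass) are illustrative assumptions, not the exact Container Cloud schema.

  # Illustrative "nodes" fragment of the Ceph cluster specification:
  # three machines carry the mon and mgr roles, another machine provides
  # devices for Ceph OSDs. This fragment would sit under the cluster
  # specification shown earlier (assumed location).
  nodes_spec = {
      "machine-1": {"roles": ["mon", "mgr"]},
      "machine-2": {"roles": ["mon", "mgr"]},
      "machine-3": {"roles": ["mon", "mgr"]},
      "machine-4": {
          "storageDevices": [            # devices used for Ceph OSDs
              {"name": "sdb", "config": {"deviceClass": "hdd"}},
          ],
      },
  }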

The following diagram illustrates the way a Ceph cluster is deployed in Container Cloud:

../../_images/ceph-deployment.png

The following diagram illustrates the processes within a deployed Ceph cluster:

../../_images/ceph-data-flow.png