Configure Ceph Shared File System (CephFS)
Warning
This procedure is valid for MOSK clusters that use the unsupported KaaSCephCluster custom resource (CR) instead of the
MiraCeph CR, which is available since MOSK 25.2 as the new Ceph configuration
entrypoint.
For the equivalent procedure with the MiraCeph CR, refer to the following
section:
Caution
Since Ceph Pacific, the Ceph CSI driver does not propagate the
777 permission on the mount point of persistent volumes based on any
StorageClass of the CephFS data pool.
The Ceph Shared File System, or CephFS, provides the capability to create
read/write shared file system Persistent Volumes (PVs). These PVs support the
ReadWriteMany access mode with the Filesystem volume mode.
CephFS deploys its own daemons called MetaData Servers or Ceph MDS. For
details, see Ceph Documentation: Ceph File System.
Note
By design, the CephFS data pool and metadata pool must be replicated
only.
Limitations
- CephFS is supported as a Kubernetes CSI plugin, which only supports creating Kubernetes Persistent Volumes based on the Filesystem volume mode. For a complete volume mode support matrix, see Ceph CSI: Support Matrix.
- Re-creating a CephFS instance in a cluster requires a different value for the name parameter.
CephFS specification
The KaaSCephCluster CR includes the
spec.cephClusterSpec.sharedFilesystem.cephFS section with the following
CephFS parameters:
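As a quick orientation, the overall layout of the sharedFilesystem.cephFS section is sketched below. The field names follow the configuration example used later in this procedure; the placeholder values in angle brackets are illustrative, not an exhaustive reference:

```yaml
spec:
  cephClusterSpec:
    sharedFilesystem:
      cephFS:
      - name: <CephFS instance name>       # must change if CephFS is re-created
        dataPools:                         # one or more replicated data pools
        - name: <pool name>
          deviceClass: <hdd|ssd|nvme>
          replicated:
            size: <number of replicas>
          failureDomain: <failure domain, for example, host>
        metadataPool:                      # replicated metadata pool
          deviceClass: <hdd|ssd|nvme>
          replicated:
            size: <number of replicas>
          failureDomain: <failure domain, for example, host>
        metadataServer:
          activeCount: <number of active MDS daemons>
          activeStandby: <true|false>
```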
Configure CephFS
1. Optional. Override the CSI CephFS gRPC and liveness metrics ports, for example, if an application is already using the default CephFS ports 9092 and 9082, which may cause conflicts on the node:

   1. Open the Cluster CR of a MOSK cluster for editing:

      ```
      kubectl -n <moskClusterProjectName> edit cluster
      ```

      Substitute <moskClusterProjectName> with the corresponding value.

   2. In the spec.providerSpec.helmReleases section, configure csiCephFsGPCMetricsPort and csiCephFsLivenessMetricsPort as required. For example:

      ```yaml
      spec:
        providerSpec:
          helmReleases:
          ...
          - name: ceph-controller
            ...
            values:
              ...
              rookExtraConfig:
                csiCephFsEnabled: true
                csiCephFsGPCMetricsPort: "9092" # should be a string
                csiCephFsLivenessMetricsPort: "9082" # should be a string
      ```

   Rook will enable the CephFS CSI plugin and provisioner.

2. Open the KaaSCephCluster CR of a MOSK cluster for editing:

   ```
   kubectl edit kaascephcluster -n <moskClusterProjectName>
   ```

   Substitute <moskClusterProjectName> with the corresponding value.

3. In the sharedFilesystem section, specify parameters according to the CephFS specification. For example:

   ```yaml
   spec:
     cephClusterSpec:
       sharedFilesystem:
         cephFS:
         - name: cephfs-store
           dataPools:
           - name: cephfs-pool-1
             deviceClass: hdd
             replicated:
               size: 3
             failureDomain: host
           metadataPool:
             deviceClass: nvme
             replicated:
               size: 3
             failureDomain: host
           metadataServer:
             activeCount: 1
             activeStandby: false
   ```

4. Define the mds role for the corresponding nodes where Ceph MDS daemons should be deployed. Mirantis recommends labeling only one node with the mds role. For example:

   ```yaml
   spec:
     cephClusterSpec:
       nodes:
         ...
         worker-1:
           roles:
           ...
           - mds
   ```
Once CephFS is specified in the KaaSCephCluster CR, Ceph Controller
validates it and requests Rook to create CephFS. Ceph Controller then creates
a Kubernetes StorageClass, required to start provisioning the storage,
which uses the CephFS CSI driver to create Kubernetes PVs.
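To consume the storage, a workload can then request a PV through a PersistentVolumeClaim against the created StorageClass. The following is a minimal sketch; the StorageClass name cephfs-store-cephfs-pool-1 is a hypothetical placeholder, so verify the actual name with kubectl get storageclass:

```yaml
# Hypothetical example: the StorageClass name depends on the actual
# CephFS and data pool names; verify it with `kubectl get storageclass`.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
  - ReadWriteMany             # shared read/write access provided by CephFS
  volumeMode: Filesystem      # the only volume mode supported by the CephFS CSI plugin
  resources:
    requests:
      storage: 10Gi
  storageClassName: cephfs-store-cephfs-pool-1  # placeholder name
```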