Configure Ceph Shared File System (CephFS)¶
Available since 2.23.1 (Cluster release 12.7.0)
Caution
Since Ceph Pacific, the Ceph CSI driver does not propagate the 777 permission on the mount point of persistent volumes based on any StorageClass of the CephFS data pool.
The Ceph Shared File System, or CephFS, provides the capability to create read/write shared file system Persistent Volumes (PVs). These PVs support the ReadWriteMany access mode for the Filesystem volume mode. CephFS deploys its own daemons called MetaData Servers, or Ceph MDS. For details, see Ceph Documentation: Ceph File System.
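For illustration, a PersistentVolumeClaim that requests such a volume could look as follows. This is a minimal sketch: the storageClassName value is an assumption, use the name of the StorageClass that Ceph Controller creates for your CephFS instance as described at the end of this section.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany        # shared read/write access provided by CephFS
  volumeMode: Filesystem # CephFS supports only the Filesystem volume mode
  resources:
    requests:
      storage: 10Gi
  storageClassName: cephfs-store # assumption: the actual name depends on your configuration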
Note
By design, CephFS data pool and metadata pool must be replicated only.
Limitations
- CephFS is supported as a Kubernetes CSI plugin. The CSI plugin only supports creating Kubernetes Persistent Volumes based on the Filesystem volume mode. For a complete support matrix of modes, see Ceph CSI: Support Matrix.
- Ceph Controller supports only one CephFS installation per Ceph cluster.
- Re-creating the CephFS instance in a cluster requires a different value for the name parameter.
CephFS specification¶
The KaaSCephCluster CR includes the spec.cephClusterSpec.sharedFilesystem.cephFS section with the following CephFS parameters:
name
CephFS instance name.

dataPools
A list of CephFS data pool specifications. Each spec contains the pool name, the deviceClass, the failureDomain, and either a replicated or an erasureCoded data protection scheme. For example:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      dataPools:
      - name: default-pool
        deviceClass: ssd
        replicated:
          size: 3
        failureDomain: host
      - name: second-pool
        deviceClass: hdd
        erasureCoded:
          dataChunks: 2
          codingChunks: 1

Warning
Modifying dataPools of an already deployed CephFS has no effect.
metadataPool
CephFS metadata pool spec that should only contain replicated settings. For example:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataPool:
        deviceClass: nvme
        replicated:
          size: 3
        failureDomain: host

Warning
Modifying metadataPool of an already deployed CephFS has no effect.
preserveFilesystemOnDelete
Defines whether to delete the data and metadata pools if CephFS is deleted. Set to true to avoid occasional data loss.
metadataServer
Metadata Server settings that correspond to the Ceph MDS daemon settings. Contains the following fields:

- activeCount - the number of active MDS daemons
- activeStandby - defines whether extra MDS daemons run in active standby mode, keeping a warm metadata cache for faster failover
- resources - Kubernetes resource requests and limits for the Ceph MDS pods

For example:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataServer:
        activeCount: 1
        activeStandby: false
        resources: # example, non-prod values
          requests:
            memory: 1Gi
            cpu: 1
          limits:
            memory: 2Gi
            cpu: 2
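Once CephFS is deployed using the procedure below, you can cross-check the resulting MDS and pool layout with the ceph fs status command. The following sketch assumes a default Rook installation where the Ceph toolbox runs as the rook-ceph-tools deployment in the rook-ceph namespace:

# Inspect the CephFS instance from the cephfs-store examples above
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph fs status cephfs-store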
Configure CephFS¶
1. Optional. Override the CSI CephFS gRPC and liveness metrics ports, for example, if an application already uses the default CephFS ports 9092 and 9082, which may cause conflicts on the node.

   Open the Cluster CR of a managed cluster for editing:

   kubectl -n <managedClusterProjectName> edit cluster

   Substitute <managedClusterProjectName> with the corresponding value.

   In the spec.providerSpec.helmReleases section, configure csiCephFsGPCMetricsPort and csiCephFsLivenessMetricsPort as required. For example:

   spec:
     providerSpec:
       helmReleases:
       ...
       - name: ceph-controller
         ...
         values:
           ...
           rookExtraConfig:
             csiCephFsEnabled: true
             csiCephFsGPCMetricsPort: "9092" # should be a string
             csiCephFsLivenessMetricsPort: "9082" # should be a string

   Rook will enable the CephFS CSI plugin and provisioner.
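   To verify that the CephFS CSI plugin and provisioner pods started, you can list them by their Rook labels. The label values and the rook-ceph namespace below are assumptions based on a default Rook installation:

   # CephFS CSI plugin pods
   kubectl -n rook-ceph get pods -l app=csi-cephfsplugin
   # CephFS CSI provisioner pods
   kubectl -n rook-ceph get pods -l app=csi-cephfsplugin-provisioner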
2. Open the KaaSCephCluster CR of a managed cluster for editing:

   kubectl edit kaascephcluster -n <managedClusterProjectName>

   Substitute <managedClusterProjectName> with the corresponding value.

3. In the sharedFilesystem section, specify parameters according to the CephFS specification above. For example:

   spec:
     cephClusterSpec:
       sharedFilesystem:
         cephFS:
         - name: cephfs-store
           dataPools:
           - name: cephfs-pool-1
             deviceClass: hdd
             replicated:
               size: 3
             failureDomain: host
           metadataPool:
             deviceClass: nvme
             replicated:
               size: 3
             failureDomain: host
           metadataServer:
             activeCount: 1
             activeStandby: false
4. Define the mds role for the corresponding nodes where Ceph MDS daemons should be deployed. Mirantis recommends labeling only one node with the mds role. For example:

   spec:
     cephClusterSpec:
       nodes:
         ...
         worker-1:
           roles:
           ...
           - mds
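   To verify that the Ceph MDS daemons were scheduled on the labeled node, you can check the MDS pods and their placement. The app=rook-ceph-mds label and the rook-ceph namespace are assumptions based on a default Rook installation:

   # List Ceph MDS pods and the nodes they run on
   kubectl -n rook-ceph get pods -l app=rook-ceph-mds -o wide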
Once CephFS is specified in the KaaSCephCluster CR, Ceph Controller will validate it and request Rook to create CephFS. Then Ceph Controller will create a Kubernetes StorageClass, required to start provisioning the storage, which will operate the CephFS CSI driver to create Kubernetes PVs.
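As a final check, you can confirm that the underlying Rook CephFilesystem resource and the StorageClass exist. A minimal sketch, assuming Rook runs in the rook-ceph namespace; the exact StorageClass name depends on your configuration:

# Rook CephFilesystem resource created for the CephFS instance
kubectl -n rook-ceph get cephfilesystem
# StorageClass created by Ceph Controller for CephFS provisioning
kubectl get storageclass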