Enable Ceph Shared File System (CephFS)¶
TechPreview
The Ceph Shared File System, or CephFS, provides the capability to create read/write shared file system Persistent Volumes (PVs). These PVs support the ReadWriteMany access mode for the Filesystem volume mode. CephFS deploys its own daemons called Metadata Servers, or Ceph MDS. For details, see Ceph Documentation: Ceph File System.
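Once CephFS is enabled as described below, you can check that the Ceph MDS daemons are running on the managed cluster. The following command is a sketch that assumes the default Rook namespace rook-ceph and the standard Rook label for MDS pods:

# Verify the Ceph MDS pods; the namespace and label are assumptions about a default Rook setup
kubectl -n rook-ceph get pods -l app=rook-ceph-mds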
Note
By design, the CephFS data pool and metadata pool must be replicated only.
Note
Due to the Technology Preview status of the feature, the following restrictions apply:

- CephFS is supported as a Kubernetes CSI plugin that only supports creating Kubernetes Persistent Volumes based on the Filesystem volume mode. For a complete modes support matrix, see Ceph CSI: Support Matrix.
- Ceph Controller supports only one CephFS installation per Ceph cluster.
- Prior to Container Cloud 2.19.0, Ceph Controller supports only one data pool per CephFS installation.
- Since Container Cloud 2.19.0, for non-MOSK-based clusters, Ceph Controller supports multiple data pools per CephFS installation.
CephFS specification¶
The KaaSCephCluster CR includes the spec.cephClusterSpec.sharedFilesystem.cephFS section with the following CephFS parameters:
name
CephFS instance name.
dataPool
CephFS data pool spec that should only contain the replicated settings. For example:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      dataPool:
        replicated:
          size: 3
        failureDomain: host

where replicated.size is the number of pool replicas and failureDomain is the failure domain across which the data will be spread, for example, host or rack.
dataPools
A list of CephFS data pool specifications. Each spec contains the pool name and either a replicated or an erasureCoded definition. For example:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      dataPools:
      - name: default-pool
        replicated:
          size: 3
        failureDomain: host
      - name: second-pool
        erasureCoded:
          dataChunks: 2
          codingChunks: 1
metadataPool
CephFS metadata pool spec that should only contain the replicated settings. For example:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataPool:
        replicated:
          size: 3
        failureDomain: host

where replicated.size and failureDomain have the same meaning as in dataPool.
preserveFilesystemOnDelete
Defines whether to delete the data and metadata pools if CephFS is deleted. Set to true to preserve the pools when CephFS is deleted.
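As an illustration, a minimal sketch of setting this parameter, assuming it is defined per CephFS instance at the same level as dataPool and metadataPool:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      # assumption: true keeps the pools if this CephFS is deleted
      preserveFilesystemOnDelete: true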
metadataServer
Metadata Server settings that correspond to the Ceph MDS daemon settings. Contains the following fields:

- activeCount - the number of active Ceph MDS daemons.
- activeStandby - defines whether the standby Ceph MDS daemons keep a warm metadata cache for faster failover.
- resources - Kubernetes resource requests and limits for the Ceph MDS pods.

For example:

cephClusterSpec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      metadataServer:
        activeCount: 1
        activeStandby: false
        resources: # example, non-prod values
          requests:
            memory: 1Gi
            cpu: 1
          limits:
            memory: 2Gi
            cpu: 2
Enable CephFS¶
1. Open the corresponding Cluster resource for editing:

   kubectl -n <managedClusterProjectName> edit cluster

   Substitute <managedClusterProjectName> with the corresponding value.

2. In the spec.providerSpec.helmReleases section, enable the CephFS CSI plugin installation:

   spec:
     providerSpec:
       helmReleases:
       ...
       - name: ceph-controller
         ...
         values:
           ...
           rookExtraConfig:
             csiCephFsEnabled: true
   You can also override the CSI CephFS gRPC and liveness metrics ports, for example, if an application already uses the default CephFS ports 9092 and 9082, which may cause conflicts on the node:

   spec:
     providerSpec:
       helmReleases:
       ...
       - name: ceph-controller
         ...
         values:
           ...
           rookExtraConfig:
             csiCephFsEnabled: true
             csiCephFsGPCMetricsPort: "9092" # should be a string
             csiCephFsLivenessMetricsPort: "9082" # should be a string

   Rook will enable the CephFS CSI plugin and provisioner. For a quick way to verify the plugin pods, see the sketch after this procedure.
3. Save the Cluster resource and close the editor.

4. Open the KaaSCephCluster CR of a managed cluster for editing:

   kubectl edit kaascephcluster -n <managedClusterProjectName>
   Substitute <managedClusterProjectName> with the corresponding value.

5. In the sharedFilesystem section, specify parameters according to CephFS specification. For example:

   Prior to Container Cloud 2.19.0:

   spec:
     cephClusterSpec:
       sharedFilesystem:
         cephFS:
         - name: cephfs-store
           dataPool:
             replicated:
               size: 3
             failureDomain: host
           metadataPool:
             replicated:
               size: 3
             failureDomain: host
           metadataServer:
             activeCount: 1
             activeStandby: false
   Since Container Cloud 2.19.0, for non-MOSK-based clusters:

   spec:
     cephClusterSpec:
       sharedFilesystem:
         cephFS:
         - name: cephfs-store
           dataPools:
           - name: cephfs-pool-1
             replicated:
               size: 3
             failureDomain: host
           metadataPool:
             replicated:
               size: 3
             failureDomain: host
           metadataServer:
             activeCount: 1
             activeStandby: false
6. Define the mds role for the corresponding nodes where Ceph MDS daemons should be deployed. Mirantis recommends labeling only one node with the mds role. For example:

   spec:
     cephClusterSpec:
       nodes:
         ...
         worker-1:
           roles:
           ...
           - mds
7. Save KaaSCephCluster and close the editor.
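After completing the procedure, you can verify on the managed cluster that the CephFS CSI plugin pods mentioned above are running and that Rook has created the CephFS resource. The commands below are a sketch that assumes the default Rook namespace rook-ceph and the standard Rook labels:

# CephFS CSI plugin and provisioner pods; namespace and labels are assumptions about a default Rook setup
kubectl -n rook-ceph get pods -l app=csi-cephfsplugin
kubectl -n rook-ceph get pods -l app=csi-cephfsplugin-provisioner

# The CephFilesystem resource created by Rook
kubectl -n rook-ceph get cephfilesystem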
Once CephFS is specified in the KaaSCephCluster CR, Ceph Controller will validate it and request Rook to create CephFS. Then Ceph Controller will create a Kubernetes StorageClass, required to start provisioning the storage, which will operate the CephFS CSI driver to create Kubernetes PVs.
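For example, with the CephFS instance named cephfs-store as in the examples above, you can verify that the StorageClass has been created. The resource name follows the <cephfs-name>-cephfs pattern described in the note below:

# Verify the automatically created StorageClass; the name is derived from the example CephFS name
kubectl get storageclass cephfs-store-cephfs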
Note
The StorageClass will be named <cephfs-name>-cephfs. Also, the provisioner will be set to rook-ceph.cephfs.csi.ceph.com. To use CephFS for provisioning volumes, storageClassName in the PersistentVolumeClaim (PVC) must match <cephfs-name>-cephfs. For example:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-example
  namespace: some-namespace
spec:
  storageClassName: <cephfs-name>-cephfs
  ...
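The following is an additional usage sketch, not part of the product example above: a complete PVC that requests the ReadWriteMany access mode from the cephfs-store-cephfs StorageClass (assuming the CephFS instance is named cephfs-store) and a Pod that mounts it. All names, the namespace, the image, and the storage size are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-example
  namespace: some-namespace
spec:
  accessModes:
  - ReadWriteMany                        # shared read/write access
  storageClassName: cephfs-store-cephfs  # <cephfs-name>-cephfs
  resources:
    requests:
      storage: 10Gi                      # illustrative size
---
apiVersion: v1
kind: Pod
metadata:
  name: cephfs-consumer-example
  namespace: some-namespace
spec:
  containers:
  - name: app
    image: busybox                       # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /mnt/shared
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: cephfs-pvc-example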