Configure Ceph Shared File System (CephFS)
Warning
This procedure is valid for MOSK clusters that use the MiraCeph custom
resource (CR), which is available since MOSK 25.2 and replaces the deprecated
KaaSCephCluster CR. For the equivalent procedure with the KaaSCephCluster
CR, refer to the corresponding section of the documentation.
Caution
Since Ceph Pacific, the Ceph CSI driver does not propagate the
777 permission on the mount point of persistent volumes based on any
StorageClass of the CephFS data pool.
The Ceph Shared File System, or CephFS, provides the capability to create
read/write shared file system Persistent Volumes (PVs). These PVs support the
ReadWriteMany access mode for the FileSystem volume mode.
CephFS deploys its own daemons called MetaData Servers or Ceph MDS. For
details, see Ceph Documentation: Ceph File System.
Note
By design, the CephFS data pool and metadata pool must be replicated only.
Limitations
- CephFS is supported as a Kubernetes CSI plugin that only supports creating
  Kubernetes Persistent Volumes based on the FileSystem volume mode. For a
  complete modes support matrix, see Ceph CSI: Support Matrix.
- Re-creating a CephFS instance in a cluster requires a different value for
  the name parameter.
CephFS specification
The MiraCeph CR spec includes the sharedFilesystem.cephFS section
with the following CephFS parameters:
name
  CephFS instance name.
dataPools
  A list of CephFS data pool specifications. Each spec contains the name,
  deviceClass, and a replicated or erasureCoded definition, and can also
  specify a failureDomain. For example:

    sharedFilesystem:
      cephFS:
      - name: cephfs-store
        dataPools:
        - name: default-pool
          deviceClass: ssd
          replicated:
            size: 3
          failureDomain: host
        - name: second-pool
          deviceClass: hdd
          erasureCoded:
            dataChunks: 2
            codingChunks: 1

  Where replicated.size is the number of pool replicas and erasureCoded
  defines the number of data (dataChunks) and coding (codingChunks) chunks.

  Warning
  Mirantis recommends a replicated.size of 3 for each Ceph pool. Smaller,
  non-recommended values may prevent operations such as Ceph OSD removal.

  Warning
  Modifying dataPools of an already deployed CephFS has no effect. For any
  changes in dataPools, re-create the CephFS instance with a different name.
metadataPool
  The CephFS metadata pool specification, which must only contain replicated
  settings. For example:

    sharedFilesystem:
      cephFS:
      - name: cephfs-store
        metadataPool:
          deviceClass: nvme
          replicated:
            size: 3
          failureDomain: host

  Where replicated.size is the number of pool replicas.

  Warning
  Modifying metadataPool of an already deployed CephFS has no effect. For any
  changes in metadataPool, re-create the CephFS instance with a different
  name.
preserveFilesystemOnDelete
  Defines whether to delete the data and metadata pools if CephFS is deleted.
  Set to true to avoid occasional data loss in case of human error.
metadataServer
  Metadata Server settings that correspond to the Ceph MDS daemon settings.
  Contains the following fields:

  - activeCount - the number of active MDS daemons.
  - activeStandby - defines whether each active MDS daemon has a standby MDS
    daemon ready to take over in case of a failure.
  - resources - Kubernetes resource requests and limits for the MDS pods.

  For example:

    sharedFilesystem:
      cephFS:
      - name: cephfs-store
        metadataServer:
          activeCount: 1
          activeStandby: false
          resources: # example, non-prod values
            requests:
              memory: 1Gi
              cpu: 1
            limits:
              memory: 2Gi
              cpu: 2
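For reference, the following sketch combines the snippets above into a single
sharedFilesystem section based on the parameters described in this
specification. The pool names, device classes, sizes, and resource values are
illustrative only; adjust them to your environment.

    sharedFilesystem:
      cephFS:
      - name: cephfs-store               # CephFS instance name
        dataPools:
        - name: default-pool             # replicated data pool
          deviceClass: ssd
          replicated:
            size: 3
          failureDomain: host
        - name: second-pool              # additional erasure-coded data pool
          deviceClass: hdd
          erasureCoded:
            dataChunks: 2
            codingChunks: 1
        metadataPool:                    # metadata pool, replicated only
          deviceClass: nvme
          replicated:
            size: 3
          failureDomain: host
        preserveFilesystemOnDelete: true # keep the pools if CephFS is deleted
        metadataServer:
          activeCount: 1
          activeStandby: false
          resources:                     # example, non-prod values
            requests:
              memory: 1Gi
              cpu: 1
            limits:
              memory: 2Gi
              cpu: 2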
Configure CephFS
Optional. Override the CSI CephFS gRPC and liveness metrics ports, for
example, if an application already uses the default CephFS ports 9092 and
9082, which may cause conflicts on the node. Perform the following substeps
on a management cluster:
Open the Cluster CR of a MOSK cluster for editing:

  kubectl -n <moskClusterProjectName> edit cluster
Substitute <moskClusterProjectName> with the corresponding value.

In the spec.providerSpec.helmReleases section, configure
csiCephFsGPCMetricsPort and csiCephFsLivenessMetricsPort as required. For
example:

  spec:
    providerSpec:
      helmReleases:
      ...
      - name: ceph-controller
        ...
        values:
          ...
          rookExtraConfig:
            csiCephFsEnabled: true
            csiCephFsGPCMetricsPort: "9092" # should be a string
            csiCephFsLivenessMetricsPort: "9082" # should be a string
Rook will enable the CephFS CSI plugin and provisioner.
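On the MOSK cluster, you can verify that the CSI CephFS plugin and
provisioner pods are running after the change. The namespace and labels
below are the Rook defaults and may differ in your deployment:

  kubectl -n rook-ceph get pods -l app=csi-cephfsplugin
  kubectl -n rook-ceph get pods -l app=csi-cephfsplugin-provisioner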
Open the MiraCeph CR on a MOSK cluster for editing:

  kubectl -n ceph-lcm-mirantis edit miraceph
In the sharedFilesystem section, specify the parameters according to the
CephFS specification. For example:

  spec:
    sharedFilesystem:
      cephFS:
      - name: cephfs-store
        dataPools:
        - name: cephfs-pool-1
          deviceClass: hdd
          replicated:
            size: 3
          failureDomain: host
        metadataPool:
          deviceClass: nvme
          replicated:
            size: 3
          failureDomain: host
        metadataServer:
          activeCount: 1
          activeStandby: false
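Optionally, verify that Rook has created the corresponding CephFilesystem
resource. The command below assumes the default rook-ceph namespace used by
Rook:

  kubectl -n rook-ceph get cephfilesystem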
Define the mds role for the corresponding nodes where Ceph MDS daemons should
be deployed. Mirantis recommends labeling only one node with the mds role.
For example:

  spec:
    nodes:
      ...
      worker-1:
        roles:
        ...
        - mds
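After the mds role is assigned, you can check that the Ceph MDS daemons have
started. The namespace and label below are the Rook defaults and may differ
in your deployment:

  kubectl -n rook-ceph get pods -l app=rook-ceph-mds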
Once CephFS is specified in the MiraCeph CR, the Ceph Controller validates it
and requests Rook to create CephFS. The Ceph Controller then creates a
Kubernetes StorageClass, required to start provisioning the storage, which
uses the CephFS CSI driver to create Kubernetes PVs.
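Once the StorageClass is available, shared volumes can be requested through a
regular PersistentVolumeClaim. The following is a minimal sketch; the
StorageClass name is a placeholder, list the available classes with
kubectl get storageclass and substitute the CephFS one:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: cephfs-shared-pvc
  spec:
    accessModes:
    - ReadWriteMany                # shared read/write access provided by CephFS
    volumeMode: Filesystem
    resources:
      requests:
        storage: 10Gi
    storageClassName: <cephfs-storageclass-name> # substitute the CephFS StorageClass name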