For MKE clusters that are part of MOSK infrastructure, the feature
is not supported yet.
This section describes how to share a Ceph cluster with another managed
cluster of the same management cluster and how to manage such a shared
Ceph cluster.
A shared Ceph cluster allows connecting a consumer cluster to a producer
cluster. The consumer cluster uses the Ceph cluster deployed on the producer
cluster to store the necessary data. In other words, the producer cluster
contains the Ceph cluster with the mon, mgr, osd, and mds daemons, while the
consumer cluster contains clients that require access to the Ceph storage.
For example, an NGINX application that runs in a cluster without storage
requires a persistent volume to store data. In this case, such a cluster can
connect to a Ceph cluster and use it as a block or file storage.
Limitations
Before Container Cloud 2.24.0, connection to a shared Ceph cluster is
possible only through the client.admin user.
The producer and consumer clusters must be located in the same
management cluster.
The LCM network of the producer cluster must be available in the
consumer cluster.
Ceph requires a non-admin client to share the producer cluster resources with
the consumer cluster. To connect the consumer cluster with the producer
cluster, the Ceph client requires the following caps (permissions):
Read-write access to Ceph Managers
Read and role-definer access to Ceph Monitors
Read-write access to Ceph Metadata servers if CephFS pools must be shared
Profile access to the shared RBD/CephFS pools for Ceph OSDs
To create a Ceph non-admin client, add the following snippet to the
clients section of the KaaSCephCluster object:
spec:
  cephClusterSpec:
    clients:
    - name: <nonAdminClientName>
      caps:
        mgr: "allow rw"
        mon: "allow r, profile role-definer"
        mds: "allow rw" # if CephFS must be shared
        osd: <poolsProfileCaps>
Substitute <nonAdminClientName> with a Ceph non-admin client name and
<poolsProfileCaps> with a comma-separated profile list of RBD and CephFS
pools in the following format:
profile rbd pool=<rbdPoolName> for each RBD pool
allow rw tag cephfs data=<cephFsName> for each CephFS pool
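For illustration, a non-admin client that shares one RBD pool and one CephFS could look as follows. The client name `consumer-client`, the pool name `kubernetes-hdd`, and the CephFS name `cephfs-store` are hypothetical placeholders, not values from this document:

```yaml
spec:
  cephClusterSpec:
    clients:
    - name: consumer-client  # hypothetical client name
      caps:
        mgr: "allow rw"
        mon: "allow r, profile role-definer"
        mds: "allow rw"      # CephFS is shared in this example
        # One profile per shared RBD pool, one tag entry per shared CephFS:
        osd: "profile rbd pool=kubernetes-hdd, allow rw tag cephfs data=cephfs-store"
```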
For backward compatibility, the Ceph client.admin client is
available as <clientName>. However, Mirantis does not recommend
using client.admin for security reasons.
Connect to the producer cluster and generate connectionString.
Proceed according to the Container Cloud version used:
Since Container Cloud 2.25.0
Create a KaaSCephOperationRequest resource in a managed cluster
namespace of the management cluster:
apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephOperationRequest
metadata:
  name: test-share-request
  namespace: <managedClusterProject>
spec:
  k8sCluster:
    name: <managedClusterName>
    namespace: <managedClusterProject>
  kaasCephCluster:
    name: <managedKaaSCephClusterName>
    namespace: <managedClusterProject>
  share:
    clientName: <clientName>
    clusterID: <namespace/name>
    opts:
      cephFS: true # if the consumer cluster will use the CephFS storage
After KaaSCephOperationRequest is applied, wait until the
Prepared state displays in the status.shareStatus section.
Obtain connectionString from the status.shareStatus section.
Example of the status section:
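As an illustration, the status.shareStatus section may look similar to the following sketch once the request reaches the Prepared state. The exact field layout is an assumption; rely on the actual object output:

```yaml
status:
  shareStatus:
    state: Prepared
    connectionString: <generatedConnectionString>
```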
In the consumer KaaSCephCluster template, substitute the following parameters:
<consumerClusterProjectName> is the project name of the consumer
managed cluster on the management cluster.
<clusterName> is the consumer managed cluster name.
<generatedConnectionString> is the connection string generated in
the previous step.
<clusterNetCIDR> and <publicNetCIDR> are values that must match
the same values in the producer KaaSCephCluster object.
Note
The spec.cephClusterSpec.network and
spec.cephClusterSpec.nodes parameters are mandatory.
The connectionString parameter is specified in the
spec.cephClusterSpec.external section of the KaaSCephCluster CR.
The parameter contains an encrypted string with all the configurations
needed to connect the consumer cluster to the shared Ceph cluster.
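Putting the parameters above together, a minimal sketch of consumer-kcc.yaml could look as follows. This is an illustration under the constraints stated above, not a complete template; in particular, the nodes content is cluster-specific and is shown only as a placeholder:

```yaml
apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephCluster
metadata:
  name: <clusterName>
  namespace: <consumerClusterProjectName>
spec:
  k8sCluster:
    name: <clusterName>
    namespace: <consumerClusterProjectName>
  cephClusterSpec:
    external:
      connectionString: <generatedConnectionString>
    network:   # mandatory; must match the producer KaaSCephCluster values
      clusterNet: <clusterNetCIDR>
      publicNet: <publicNetCIDR>
    nodes: {}  # mandatory; fill in according to the consumer cluster
```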
Apply consumer-kcc.yaml on the management cluster:
kubectl apply -f consumer-kcc.yaml
Once the Ceph cluster is specified in the KaaSCephCluster CR of the
consumer cluster, Ceph Controller validates it and requests Rook to connect
the consumer and producer.
Substitute <managedClusterProjectName> with the corresponding value.
In the spec.cephClusterSpec.pools section, specify the pools from the
producer cluster to be used by the consumer cluster. For example:
Caution
Each name in the pools section must match the
corresponding full pool name of the
producer cluster. You can find full pool names in the
KaaSCephCluster CR by the following path:
status.fullClusterInfo.blockStorageStatus.poolsStatus.
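A hedged sketch of the pools section, assuming the producer exposes a pool whose full name is kubernetes-hdd (a hypothetical name). Only the name field is shown; the remaining pool parameters must mirror the producer's pool definitions:

```yaml
spec:
  cephClusterSpec:
    pools:
    # Must match the full pool name reported in the producer's
    # status.fullClusterInfo.blockStorageStatus.poolsStatus
    - name: kubernetes-hdd
```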
After specifying pools in the consumer KaaSCephCluster CR, Ceph Controller
creates a corresponding StorageClass for each specified pool, which can be
used for creating ReadWriteOnce persistent volumes (PVs) in the consumer
cluster.
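For example, an application in the consumer cluster could then claim a ReadWriteOnce volume through the generated StorageClass. The class name below is hypothetical; use the name of the StorageClass created for your shared pool:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: kubernetes-hdd  # hypothetical generated StorageClass
  resources:
    requests:
      storage: 10Gi
```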
Substitute <managedClusterProjectName> with the corresponding value.
In the sharedFilesystem section of the consumer cluster, specify
the dataPools to share.
Note
Sharing CephFS also requires specifying the metadataPool and
metadataServer sections similarly to the corresponding sections of the
producer cluster. For details, see CephFS specification.
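As an illustration only, a sharedFilesystem section could be sketched as follows. All names are hypothetical, and the metadataPool and metadataServer contents must be copied from the corresponding sections of the producer cluster as described in the CephFS specification:

```yaml
spec:
  cephClusterSpec:
    sharedFilesystem:
      cephFS:
      - name: cephfs-store      # hypothetical CephFS name from the producer
        dataPools:
        - name: cephfs-pool-1   # hypothetical data pool to share
        metadataPool: {}        # mirror the producer metadataPool section
        metadataServer: {}      # mirror the producer metadataServer section
```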
After specifying CephFS in the KaaSCephCluster CR of the consumer
cluster, Ceph Controller creates a corresponding StorageClass that allows
creating ReadWriteMany (RWX) PVs in the consumer cluster.