Share Ceph across two MOSK clusters
TechPreview
Warning
This procedure applies to MOSK clusters that use the deprecated KaaSCephCluster custom resource (CR) instead of the MiraCeph CR, which is available since MOSK 25.2 as the new Ceph configuration entry point. For the equivalent procedure based on the MiraCeph CR, refer to the corresponding MiraCeph section of this documentation set.
This section describes how to share a Ceph cluster with another MOSK cluster within the same management cluster and how to manage such a shared Ceph cluster.
A shared Ceph cluster allows connecting a consumer cluster to a producer cluster. The consumer cluster uses the Ceph cluster deployed on the producer cluster to store the necessary data. In other words, the producer cluster contains the Ceph cluster with the mon, mgr, osd, and mds daemons, while the consumer cluster contains the clients that require access to the Ceph storage.
For example, an NGINX application that runs in a cluster without storage requires a persistent volume to store its data. In this case, such a cluster can connect to a Ceph cluster and use it as block or file storage.
Limitations
The producer and consumer clusters must be located in the same management cluster.
The LCM network of the producer cluster must be available in the consumer cluster.
Plan a shared Ceph cluster
To plan a shared Ceph cluster, select resources to share on the producer Ceph cluster:
Select the RADOS Block Device (RBD) pools to share from the Ceph cluster
Select the CephFS name to share from the Ceph cluster
To obtain resources to share on the producer Ceph cluster:
Open the KaaSCephCluster object.
In spec.cephClusterSpec.pools, identify the Ceph cluster pools assigned to RBD pools.
To obtain full names of RBD pools:
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd lspools
Example of system response:
...
2 kubernetes-hdd
3 anotherpool-hdd
...
In the example above, kubernetes-hdd and anotherpool-hdd are RBD pools.
In spec.cephClusterSpec.sharedFilesystem, identify the CephFS name, for example:
spec:
  cephClusterSpec:
    sharedFilesystem:
      cephFS:
      - name: cephfs-store
        dataPools:
        - name: cephfs-pool-1
          deviceClass: hdd
          replicated:
            size: 3
          failureDomain: host
        metadataPool:
          deviceClass: nvme
          replicated:
            size: 3
          failureDomain: host
        metadataServer:
          activeCount: 1
          activeStandby: false
In the example above, the CephFS name is cephfs-store.
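If you prefer not to open the object in an editor, the following sketch prints both sections from the management cluster. The <producerClusterProjectName> and <producerKaaSCephClusterName> placeholders are assumptions for your producer cluster project and KaaSCephCluster object name:
kubectl -n <producerClusterProjectName> get kaascephcluster <producerKaaSCephClusterName> -o jsonpath='{.spec.cephClusterSpec.pools}{"\n"}{.spec.cephClusterSpec.sharedFilesystem}{"\n"}'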
Create a Ceph non-admin client for a shared Ceph cluster
Ceph requires a non-admin client to share the producer cluster resources with
the consumer cluster. To connect the consumer cluster with the producer
cluster, the Ceph client requires the following caps (permissions):
Read-write access to Ceph Managers
Read and role-definer access to Ceph Monitors
Read-write access to Ceph Metadata servers if CephFS pools must be shared
Profile access to the shared RBD/CephFS pools for Ceph OSDs
To create a Ceph non-admin client, add the following snippet to the
clients section of the KaaSCephCluster object:
spec:
  cephClusterSpec:
    clients:
    - name: <nonAdminClientName>
      caps:
        mgr: "allow rw"
        mon: "allow r, profile role-definer"
        mds: "allow rw" # if CephFS must be shared
        osd: <poolsProfileCaps>
Substitute <nonAdminClientName> with a Ceph non-admin client name and
<poolsProfileCaps> with a comma-separated profile list of RBD and CephFS
pools in the following format:
profile rbd pool=<rbdPoolName> for each RBD pool
allow rw tag cephfs data=<cephFsName> for each CephFS pool
For example:
spec:
  cephClusterSpec:
    clients:
    - name: non-admin-client
      caps:
        mgr: "allow rw"
        mon: "allow r, profile role-definer"
        mds: "allow rw"
        osd: "profile rbd pool=kubernetes-hdd,profile rbd pool=anotherpool-hdd,allow rw tag cephfs data=cephfs-store"
To verify the status of the created Ceph client, inspect the status
section of the KaaSCephCluster object. For example:
status:
  fullClusterInfo:
    blockStorageStatus:
      clientsStatus:
        non-admin-client:
          present: true
          status: Ready
    ...
    miraCephSecretsInfo:
      lastSecretCheck: "2023-05-19T12:18:16Z"
      lastSecretUpdate: "2023-05-19T12:18:16Z"
      secretInfo:
        clientSecrets:
        ...
        - name: client.non-admin-client
          secretName: rook-ceph-client-non-admin-client
          secretNamespace: rook-ceph
          state: Ready
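Additionally, the client keyring secret listed in the status above can be checked on the producer MOSK cluster itself, for example:
kubectl -n rook-ceph get secret rook-ceph-client-non-admin-client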
Connect the producer to the consumer
Enable the ceph-controller Helm release in the consumer cluster:
Open the Cluster object for editing:
kubectl -n <consumerClusterProjectName> edit cluster <consumerClusterName>
In the spec section, add the ceph-controller Helm release:
spec:
  providerSpec:
    value:
      helmReleases:
      - name: ceph-controller
        values: {}
Obtain namespace/name of the consumer cluster:
kubectl -n <consumerClusterProjectName> get cluster -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{"\n"}{end}'
Example output:
mosk-ns/mosk-cluster
Obtain the previously created Ceph non-admin client as described in Create a Ceph non-admin client for a shared Ceph cluster to use it as <clientName> in the following step.
Note
For backward compatibility, the Ceph client.admin client is available as <clientName>. However, Mirantis does not recommend using client.admin for security reasons.
Connect to the producer cluster and generate connectionString:
Create a KaaSCephOperationRequest resource in a MOSK cluster namespace of the management cluster:
apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephOperationRequest
metadata:
  name: test-share-request
  namespace: <moskClusterProject>
spec:
  k8sCluster:
    name: <moskClusterName>
    namespace: <moskClusterProject>
  kaasCephCluster:
    name: <moskKaaSCephClusterName>
    namespace: <moskClusterProject>
  share:
    clientName: <clientName>
    clusterID: <namespace/name>
    opts:
      cephFS: true # if the consumer cluster will use the CephFS storage
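For example, assuming you saved the resource above as test-share-request.yaml (the file name is an assumption), apply it to the management cluster:
kubectl apply -f test-share-request.yaml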
After KaaSCephOperationRequest is applied, wait until the Prepared state displays in the status.shareStatus section.
Obtain connectionString from the status.shareStatus section. Example of the status section:
status:
  kaasRequestState: ok
  phase: Completed
  shareStatus:
    connectionString: |
      674a68494da7d135e5416f6566818c0b5da72e5cc44127308ba670a591db30824e814aa9cc45b6f07176d3f907de4f89292587cbd0e8f8fd71ec508dc9ed9ee36a8b87db3e3aa9c0688af916091b938ac0bd825d18fbcd548adb8821859c1d3edaf5f4a37ad93891a294fbcc39e3dc40e281ba19548f5b751fab2023a8e1a340d6e884514b478832880766e80ab047bf07e69f9c598b43820cc5d9874790e0f526851d3d2f3ce1897d98b02d560180f6214164aee04f20286d595cec0c54a2a7bd0437e906fc9019ab06b00e1ba1b1c47fe611bb759c0e0ff251181cb57672dd76c2bf3ca6dd0e8625c84102eeb88769a86d712eb1a989a5c895bd42d47107bc8105588d34860fadaa71a927329fc961f82e2737fe07b68d7239b3a9817014337096bcb076051c5e2a0ee83bf6c1cc2cb494f57fef9c5306361b6c0143501467f0ec14e4f58167a2d97f2efcb0a49630c2f1a066fe4796b41ae73fe8df4213de3a39b7049e6a186dda0866d2535bbf943cb7d7bb178ad3f5f12e3351194808af687de79986c137d245ceeb4fbc3af1b625aa83e2b269f24b56bc100c0890c7c9a4e02cf1aa9565b64e86a038af2b0b9d2eeaac1f9e5e2daa086c00bf404e5a4a5c0aeb6e91fe983efda54a6aa983f50b94e181f88577f6a8029250f6f884658ceafbc915f54efc8fd3db993a51ea5a094a5d7db71ae556b8fa6864682baccc2118f3971e8c4010f6f23cc7b727f569d0
    state: Prepared
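One possible way to poll the request state from the management cluster, sketched below with the request name and namespace from the manifest above (field paths may vary by release):
kubectl -n <moskClusterProject> get kaascephoperationrequest test-share-request -o jsonpath='{.status.shareStatus.state}{"\n"}'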
Create the consumer KaaSCephCluster object file, for example, consumer-kcc.yaml, with the following content:
apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephCluster
metadata:
  name: <clusterName>
  namespace: <consumerClusterProjectName>
spec:
  cephClusterSpec:
    external:
      enable: true
      connectionString: <generatedConnectionString>
    network:
      clusterNet: <clusterNetCIDR>
      publicNet: <publicNetCIDR>
    nodes: {}
  k8sCluster:
    name: <clusterName>
    namespace: <consumerClusterProjectName>
Specify the following values:
<consumerClusterProjectName> is the project name of the consumer MOSK cluster on the management cluster.
<clusterName> is the consumer MOSK cluster name.
<generatedConnectionString> is the connection string generated in the previous step.
<clusterNetCIDR> and <publicNetCIDR> must match the corresponding values in the producer KaaSCephCluster object.
Note
The spec.cephClusterSpec.network and spec.cephClusterSpec.nodes parameters are mandatory.
The connectionString parameter is specified in the spec.cephClusterSpec.external section of the KaaSCephCluster CR. The parameter contains an encrypted string with all the configurations needed to connect the consumer cluster to the shared Ceph cluster.
Apply consumer-kcc.yaml on the management cluster:
kubectl apply -f consumer-kcc.yaml
Once the Ceph cluster is specified in the KaaSCephCluster CR of the
consumer cluster, Ceph Controller validates it and requests Rook to connect
the consumer and producer.
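To verify that the connection was established, you can inspect the status section of the consumer KaaSCephCluster object on the management cluster, assuming the object is named <clusterName> as in the example above:
kubectl -n <consumerClusterProjectName> get kaascephcluster <clusterName> -o yaml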
Consume pools from the Ceph cluster
Open the KaaSCephCluster CR of the consumer cluster for editing:
kubectl -n <moskClusterProjectName> edit kaascephcluster
Substitute <moskClusterProjectName> with the corresponding value.
In spec.cephClusterSpec.pools, specify pools from the producer cluster to be used by the consumer cluster. For example:
Caution
Each name in the pools section must match the corresponding full pool name of the producer cluster. You can find full pool names in the KaaSCephCluster CR by the following path: status.fullClusterInfo.blockStorageStatus.poolsStatus.
spec:
  cephClusterSpec:
    pools:
    - default: true
      deviceClass: ssd
      useAsFullName: true
      name: kubernetes-ssd
      role: kubernetes-ssd
    - default: false
      deviceClass: hdd
      useAsFullName: true
      name: volumes-hdd
      role: volumes
After specifying pools in the consumer KaaSCephCluster CR, Ceph Controller
creates a corresponding StorageClass for each specified pool, which can be
used for creating ReadWriteOnce persistent volumes (PVs) in the consumer
cluster.
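As an illustration only, a ReadWriteOnce PersistentVolumeClaim that consumes one of the resulting StorageClasses might look as follows. The claim name and namespace are assumptions, and the StorageClass name kubernetes-ssd is taken from the pools example above; verify the actual StorageClass names with kubectl get storageclass in the consumer cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical claim name
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce           # RBD-backed StorageClasses provide RWO volumes
  resources:
    requests:
      storage: 10Gi
  storageClassName: kubernetes-ssd   # assumed StorageClass created for the shared pool; verify in your cluster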
Enable CephFS on a consumer Ceph cluster
Open the KaaSCephCluster CR of the consumer cluster for editing:
kubectl -n <moskClusterProjectName> edit kaascephcluster
Substitute <moskClusterProjectName> with the corresponding value.
In the sharedFilesystem section of the consumer cluster, specify the dataPools to share.
Note
Sharing CephFS also requires specifying the metadataPool and metadataServer sections similarly to the corresponding sections of the producer cluster. For details, see CephFS specification.
For example:
spec:
  cephClusterSpec:
    sharedFilesystem:
      cephFS:
      - name: cephfs-store
        dataPools:
        - name: cephfs-pool-1
          replicated:
            size: 3
          failureDomain: host
        metadataPool:
          replicated:
            size: 3
          failureDomain: host
        metadataServer:
          activeCount: 1
          activeStandby: false
After specifying CephFS in the KaaSCephCluster CR of the consumer
cluster, Ceph Controller creates a corresponding StorageClass that allows
creating ReadWriteMany (RWX) PVs in the consumer cluster.
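For illustration, a ReadWriteMany claim against the CephFS-backed StorageClass might look as follows. The claim name and namespace are assumptions, and the StorageClass name is assumed to be derived from the CephFS example above; verify it with kubectl get storageclass in the consumer cluster.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data          # hypothetical claim name
  namespace: default
spec:
  accessModes:
  - ReadWriteMany            # CephFS-backed StorageClass supports RWX volumes
  resources:
    requests:
      storage: 20Gi
  storageClassName: cephfs-store   # assumed CephFS StorageClass name; verify in your cluster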