Share Ceph across two MOSK clusters

TechPreview

Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom resource (CR), which is available since MOSK 25.2 to replace the deprecated KaaSCephCluster. For the equivalent procedure with the KaaSCephCluster CR, refer to the following section:

Share Ceph across two managed clusters

This section describes how to share a Ceph cluster with another MOSK cluster within the same management cluster and how to manage such a shared Ceph cluster.

A shared Ceph cluster allows connecting a consumer cluster to a producer cluster. The consumer cluster uses the Ceph cluster deployed on the producer cluster to store the necessary data. In other words, the producer cluster contains the Ceph cluster with the mon, mgr, osd, and mds daemons, while the consumer cluster contains the clients that require access to the Ceph storage.

For example, an NGINX application that runs in a cluster without storage requires a persistent volume to store data. In this case, such a cluster can connect to a Ceph cluster and use it as a block or file storage.
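
For illustration, the following sketch shows how such an NGINX workload could request block storage backed by the shared Ceph cluster. The StorageClass name shared-ceph-rbd is hypothetical; the actual StorageClasses are created later, as described in Consume pools from the Ceph cluster:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: shared-ceph-rbd  # hypothetical StorageClass backed by the shared Ceph cluster
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nginx-data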

Limitations

  • The producer and consumer clusters must be located in the same management cluster.

  • The LCM network of the producer cluster must be available in the consumer cluster.

Plan a shared Ceph cluster

To plan a shared Ceph cluster, select resources to share on the producer Ceph cluster:

  • Select the RADOS Block Device (RBD) pools to share from the Ceph cluster

  • Select the CephFS name to share from the Ceph cluster

To obtain resources to share on the producer Ceph cluster:

  1. Open the MiraCeph object.

  2. In the pools section, identify the Ceph cluster pools assigned to RBD pools.

    To obtain the full names of the RBD pools, run:

    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd lspools
    

    Example of system response:

    ...
    2 kubernetes-hdd
    3 anotherpool-hdd
    ...
    

    In the example above, kubernetes-hdd and anotherpool-hdd are RBD pools.
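
    To check which application a pool serves, for example rbd or cephfs, you can additionally run:

    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd pool application get kubernetes-hdd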

  3. In the sharedFilesystem section, identify the CephFS name. For example:

    sharedFilesystem:
      cephFS:
      - name: cephfs-store
        dataPools:
        - name: cephfs-pool-1
          deviceClass: hdd
          replicated:
            size: 3
          failureDomain: host
        metadataPool:
          deviceClass: nvme
          replicated:
            size: 3
          failureDomain: host
        metadataServer:
          activeCount: 1
          activeStandby: false
    

    In the example above, the CephFS name is cephfs-store.
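
    You can also list the existing CephFS file systems and their pools directly through the Ceph toolbox:

    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph fs ls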

Create a Ceph non-admin client for a shared Ceph cluster

Ceph requires a non-admin client to share the producer cluster resources with the consumer cluster. To connect the consumer cluster with the producer cluster, the Ceph client requires the following caps (permissions):

  • Read-write access to Ceph Managers

  • Read and role-definer access to Ceph Monitors

  • Read-write access to Ceph Metadata servers if CephFS pools must be shared

  • Profile access to the shared RBD and CephFS pools for Ceph OSDs

To create a Ceph non-admin client, add the following snippet to the clients section of the MiraCeph object:

spec:
  clients:
  - name: <nonAdminClientName>
    caps:
      mgr: "allow rw"
      mon: "allow r, profile role-definer"
      mds: "allow rw" # if CephFS must be shared
      osd: <poolsProfileCaps>

Substitute <nonAdminClientName> with a Ceph non-admin client name and <poolsProfileCaps> with a comma-separated profile list of RBD and CephFS pools in the following format:

  • profile rbd pool=<rbdPoolName> for each RBD pool

  • allow rw tag cephfs data=<cephFsName> for each CephFS pool

For example:

spec:
  clients:
  - name: non-admin-client
    caps:
      mgr: "allow rw"
      mon: "allow r, profile role-definer"
      mds: "allow rw"
      osd: "profile rbd pool=kubernetes-hdd,profile rbd pool=anotherpool-hdd,allow rw tag cephfs data=cephfs-store"

To verify the status of the created Ceph client, inspect the status section of the MiraCephLog object. For example:

status:
  fullClusterInfo:
    blockStorageStatus:
      clientsStatus:
        non-admin-client:
          present: true
          status: Ready
  ...
  miraCephSecretsInfo:
    lastSecretCheck: "2023-05-19T12:18:16Z"
    lastSecretUpdate: "2023-05-19T12:18:16Z"
    secretInfo:
      clientSecrets:
      ...
      - name: client.non-admin-client
        secretName: rook-ceph-client-non-admin-client
        secretNamespace: rook-ceph
    state: Ready
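
You can additionally inspect the created client and its caps through the Ceph toolbox on the producer cluster. The following command is a sketch that uses the non-admin-client name from the example above:

kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph auth get client.non-admin-client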

Connect the producer to the consumer

  1. Enable the ceph-controller Helm release in the consumer cluster:

    1. Open the Cluster object for editing:

      kubectl -n <consumerClusterProjectName> edit cluster <consumerClusterName>
      
    2. In the spec section, add the ceph-controller Helm release:

      spec:
        providerSpec:
          value:
            helmReleases:
            - name: ceph-controller
              values: {}
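
    To verify that the release has been deployed, you can check the Ceph Controller pods on the consumer cluster. This sketch assumes that the controller runs in the ceph-lcm-mirantis namespace, as on the producer cluster:

    kubectl get pods -n ceph-lcm-mirantis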
      
  2. Obtain the namespace/name of the consumer cluster:

    kubectl -n <consumerClusterProjectName> get cluster -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{"\n"}{end}'
    

    Example output:

    mosk-ns/mosk-cluster
    
  3. On the consumer cluster, add the cluster parameter to metadata.annotations of the MiraCeph CR:

    metadata:
      annotations:
        cluster: "<namespace/name>"
    

    Substitute <namespace/name> with the namespace/name value obtained in the previous step, for example, mosk-ns/mosk-cluster.

  4. Obtain the previously created Ceph non-admin client, as described in Create a Ceph non-admin client for a shared Ceph cluster, to use it as <clientName> in the following step.

    Note

    For backward compatibility, you can use the Ceph client.admin client as <clientName>. However, Mirantis does not recommend using client.admin for security reasons.

  5. Connect to the producer cluster and generate connectionString:

    1. Create a CephShareRequest resource on the producer MOSK cluster:

      apiVersion: lcm.mirantis.com/v1alpha1
      kind: CephShareRequest
      metadata:
        name: test-share-request
        namespace: ceph-lcm-mirantis
      spec:
        share:
          clientName: <clientName>
          clusterID: <namespace/name>
          opts:
            cephFS: true # if the consumer cluster will use the CephFS storage
      
    2. After the CephShareRequest resource is applied, wait until the Prepared state appears in the status.shareStatus section.

    3. Obtain connectionString from the status.shareStatus section. Example of the status section:

    status:
      kaasRequestState: ok
      phase: Completed
      shareStatus:
        connectionString: |
          674a68494da7d135e5416f6566818c0b5da72e5cc44127308ba670a591db30824e814aa9cc45b6f07176d3f907de4f89292587cbd0e8f8fd71ec508dc9ed9ee36a8b87db3e3aa9c0688af916091b938ac0bd825d18fbcd548adb8821859c1d3edaf5f4a37ad93891a294fbcc39e3dc40e281ba19548f5b751fab2023a8e1a340d6e884514b478832880766e80ab047bf07e69f9c598b43820cc5d9874790e0f526851d3d2f3ce1897d98b02d560180f6214164aee04f20286d595cec0c54a2a7bd0437e906fc9019ab06b00e1ba1b1c47fe611bb759c0e0ff251181cb57672dd76c2bf3ca6dd0e8625c84102eeb88769a86d712eb1a989a5c895bd42d47107bc8105588d34860fadaa71a927329fc961f82e2737fe07b68d7239b3a9817014337096bcb076051c5e2a0ee83bf6c1cc2cb494f57fef9c5306361b6c0143501467f0ec14e4f58167a2d97f2efcb0a49630c2f1a066fe4796b41ae73fe8df4213de3a39b7049e6a186dda0866d2535bbf943cb7d7bb178ad3f5f12e3351194808af687de79986c137d245ceeb4fbc3af1b625aa83e2b269f24b56bc100c0890c7c9a4e02cf1aa9565b64e86a038af2b0b9d2eeaac1f9e5e2daa086c00bf404e5a4a5c0aeb6e91fe983efda54a6aa983f50b94e181f88577f6a8029250f6f884658ceafbc915f54efc8fd3db993a51ea5a094a5d7db71ae556b8fa6864682baccc2118f3971e8c4010f6f23cc7b727f569d0
        state: Prepared
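
    You can also extract the connection string directly with kubectl. The following command is a sketch that assumes the CephShareRequest resources are exposed under the cephsharerequests resource name:

    kubectl -n ceph-lcm-mirantis get cephsharerequests test-share-request -o jsonpath='{.status.shareStatus.connectionString}'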
      
  6. Create the consumer MiraCeph object file, for example, consumer-mc.yaml with the following content:

    apiVersion: lcm.mirantis.com/v1alpha1
    kind: MiraCeph
    metadata:
      name: rook-ceph
      namespace: ceph-lcm-mirantis
    spec:
      external:
        enable: true
        connectionString: <generatedConnectionString>
      network:
        clusterNet: <clusterNetCIDR>
        publicNet: <publicNetCIDR>
      nodes: {}
    

    Specify the following values:

    • <generatedConnectionString> is the connection string generated in the previous step.

    • <clusterNetCIDR> and <publicNetCIDR> must match the corresponding values in the producer MiraCeph object.

    Note

    The spec.network and spec.nodes parameters are mandatory.

    The connectionString parameter is specified in the spec.external section of the MiraCeph CR. The parameter contains an encrypted string with all the configurations needed to connect the consumer cluster to the shared Ceph cluster.

  7. Apply consumer-mc.yaml on the consumer MOSK cluster:

    kubectl apply -f consumer-mc.yaml
    

Once the Ceph cluster is specified in the MiraCeph CR of the consumer cluster, Ceph Controller validates it and requests Rook to connect the consumer and producer.
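
To verify that the connection has been established, you can check the Rook CephCluster object on the consumer cluster:

kubectl -n rook-ceph get cephcluster

For an external Ceph cluster, the PHASE column is expected to report Connected once Rook has reached the producer cluster.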

Consume pools from the Ceph cluster

  1. Open the MiraCeph CR on the consumer cluster for editing:

    kubectl -n ceph-lcm-mirantis edit miraceph
    
  2. In the spec.pools section, specify the pools of the producer cluster to be used by the consumer cluster. For example:

    Caution

    Each name in the pools section must match the corresponding full pool name of the producer cluster. You can find the full pool names in the MiraCephLog CR under status.fullClusterInfo.blockStorageStatus.poolsStatus.

    pools:
    - default: true
      deviceClass: ssd
      useAsFullName: true
      name: kubernetes-ssd
      role: kubernetes-ssd
    - default: false
      deviceClass: hdd
      useAsFullName: true
      name: volumes-hdd
      role: volumes
    

After specifying pools in the consumer MiraCeph CR, Ceph Controller creates a corresponding StorageClass for each specified pool, which can be used for creating ReadWriteOnce persistent volumes (PVs) in the consumer cluster.
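
To verify the result, list the StorageClasses on the consumer cluster and check that an entry exists for each consumed pool:

kubectl get storageclass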

Enable CephFS on a consumer Ceph cluster

  1. Open the MiraCeph CR of the consumer cluster for editing:

    kubectl -n ceph-lcm-mirantis edit miraceph
    
  2. In the sharedFilesystem section of the consumer cluster, specify the dataPools to share.

    Note

    Sharing CephFS also requires specifying the metadataPool and metadataServer sections similarly to the corresponding sections of the producer cluster. For details, see CephFS specification.

    For example:

    sharedFilesystem:
      cephFS:
      - name: cephfs-store
        dataPools:
        - name: cephfs-pool-1
          replicated:
            size: 3
          failureDomain: host
        metadataPool:
          replicated:
            size: 3
          failureDomain: host
        metadataServer:
          activeCount: 1
          activeStandby: false
    

After specifying CephFS in the MiraCeph CR of the consumer cluster, Ceph Controller creates a corresponding StorageClass that allows creating ReadWriteMany (RWX) PVs in the consumer cluster.
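
To verify access, you can create a ReadWriteMany PersistentVolumeClaim against the new StorageClass. The following is a sketch; the StorageClass name cephfs-store is an assumption based on the CephFS name from the example above, so first confirm the actual name with kubectl get storageclass:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: cephfs-store  # assumed name, confirm with kubectl get storageclass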