Share Ceph across two managed clusters

TechPreview Available since 2.22.0

Caution

For MKE clusters that are part of MOSK infrastructure, the feature is not supported yet.

This section describes how to share a Ceph cluster with another managed cluster of the same management cluster and how to manage such a shared Ceph cluster.

A shared Ceph cluster allows a consumer cluster to connect to a producer cluster. The consumer cluster uses the Ceph cluster deployed on the producer cluster to store the necessary data. In other words, the producer cluster contains the Ceph cluster with the mon, mgr, osd, and mds daemons, while the consumer cluster contains the clients that require access to the Ceph storage.

For example, an NGINX application that runs in a cluster without its own storage requires a persistent volume to store data. In this case, such a cluster can connect to a Ceph cluster and use it as block or file storage.

Limitations

  • Before Container Cloud 2.24.0, connection to a shared Ceph cluster is possible only through the client.admin user.

  • The producer and consumer clusters must be located in the same management cluster.

  • The LCM network of the producer cluster must be available in the consumer cluster.

Plan a shared Ceph cluster

To plan a shared Ceph cluster, select resources to share on the producer Ceph cluster:

  • Select the RADOS Block Device (RBD) pools to share from the Ceph cluster

  • Select the CephFS name to share from the Ceph cluster

To obtain resources to share on the producer Ceph cluster:

  1. Open the KaaSCephCluster object.

  2. In spec.cephClusterSpec.pools, identify the Ceph cluster pools assigned to RBD pools.

    To obtain full names of RBD pools:

    kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd lspools
    

    Example of system response:

    ...
    2 kubernetes-hdd
    3 anotherpool-hdd
    ...
    

    In the example above, kubernetes-hdd and anotherpool-hdd are RBD pools.
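    For reference, full pool names like these typically derive from the producer KaaSCephCluster spec, where each pool is defined by a short name and a device class. The following is a sketch with assumed values; verify against your actual spec:

    ```yaml
    spec:
      cephClusterSpec:
        pools:
        # Unless useAsFullName is set, the full pool name reported by
        # 'ceph osd lspools' combines name and deviceClass: kubernetes-hdd.
        - name: kubernetes
          deviceClass: hdd
          default: true
          role: kubernetes
          replicated:
            size: 3
    ```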

  3. In spec.cephClusterSpec.sharedFilesystem, identify the CephFS name, for example:

    spec:
     cephClusterSpec:
       sharedFilesystem:
         cephFS:
         - name: cephfs-store
           dataPools:
           - name: cephfs-pool-1
             deviceClass: hdd
             replicated:
               size: 3
             failureDomain: host
           metadataPool:
             deviceClass: nvme
             replicated:
               size: 3
             failureDomain: host
           metadataServer:
             activeCount: 1
             activeStandby: false
    

    In the example above, the CephFS name is cephfs-store.

Create a Ceph non-admin client for a shared Ceph cluster

Available since 2.24.0

Note

Before Container Cloud 2.24.0, skip this section and proceed to Connect the producer to the consumer.

Ceph requires a non-admin client to share the producer cluster resources with the consumer cluster. To connect the consumer cluster with the producer cluster, the Ceph client requires the following caps (permissions):

  • Read-write access to Ceph Managers

  • Read and role-definer access to Ceph Monitors

  • Read-write access to Ceph Metadata servers if CephFS pools must be shared

  • Profile access to the shared RBD and CephFS pools for Ceph OSDs

To create a Ceph non-admin client, add the following snippet to the clients section of the KaaSCephCluster object:

spec:
  cephClusterSpec:
    clients:
    - name: <nonAdminClientName>
      caps:
        mgr: "allow rw"
        mon: "allow r, profile role-definer"
        mds: "allow rw" # if CephFS must be shared
        osd: <poolsProfileCaps>

Substitute <nonAdminClientName> with a Ceph non-admin client name and <poolsProfileCaps> with a comma-separated profile list of RBD and CephFS pools in the following format:

  • profile rbd pool=<rbdPoolName> for each RBD pool

  • allow rw tag cephfs data=<cephFsName> for each CephFS pool

For example:

spec:
  cephClusterSpec:
    clients:
    - name: non-admin-client
      caps:
        mgr: "allow rw"
        mon: "allow r, profile role-definer"
        mds: "allow rw"
        osd: "profile rbd pool=kubernetes-hdd,profile rbd pool=anotherpool-hdd,allow rw tag cephfs data=cephfs-store"

To verify the status of the created Ceph client, inspect the status section of the KaaSCephCluster object. For example:

status:
  fullClusterInfo:
    blockStorageStatus:
      clientsStatus:
        non-admin-client:
          present: true
          status: Ready
  ...
  miraCephSecretsInfo:
     lastSecretCheck: "2023-05-19T12:18:16Z"
     lastSecretUpdate: "2023-05-19T12:18:16Z"
     secretInfo:
       clientSecrets:
       ...
       - name: client.non-admin-client
         secretName: rook-ceph-client-non-admin-client
         secretNamespace: rook-ceph
     state: Ready

Connect the producer to the consumer

  1. Enable the ceph-controller Helm release in the consumer cluster:

    1. Open the Cluster object for editing:

      kubectl -n <consumerClusterProjectName> edit cluster <consumerClusterName>
      
    2. In the spec section, add the ceph-controller Helm release:

      spec:
        providerSpec:
          value:
            helmReleases:
            - name: ceph-controller
              values: {}
      
  2. Obtain namespace/name of the consumer cluster:

    kubectl -n <consumerClusterProjectName> get cluster -o jsonpath='{range .items[*]}{@.metadata.namespace}{"/"}{@.metadata.name}{"\n"}{end}'
    

    Example output:

    managed-ns/managed-cluster
    
  3. Since Container Cloud 2.24.0, obtain the previously created Ceph non-admin client as described in Create a Ceph non-admin client for a shared Ceph cluster to use it as <clientName> in the following step.

    Note

    For backward compatibility, you can use the default Ceph client.admin client as <clientName>. However, Mirantis does not recommend using client.admin for security reasons.

  4. Connect to the producer cluster and generate connectionString. Proceed according to the Container Cloud version used:

    1. Create a KaaSCephOperationRequest resource in a managed cluster namespace of the management cluster:

      apiVersion: kaas.mirantis.com/v1alpha1
      kind: KaaSCephOperationRequest
      metadata:
        name: test-share-request
        namespace: <managedClusterProject>
      spec:
        k8sCluster:
          name: <managedClusterName>
          namespace: <managedClusterProject>
        kaasCephCluster:
          name: <managedKaaSCephClusterName>
          namespace: <managedClusterProject>
        share:
          clientName: <clientName>
          clusterID: <namespace/name>
          opts:
            cephFS: true # if the consumer cluster will use the CephFS storage
      
    2. After KaaSCephOperationRequest is applied, wait until the Prepared state displays in the status.shareStatus section.

    3. Obtain connectionString from the status.shareStatus section. Example of the status section:

      status:
        kaasRequestState: ok
        phase: Completed
        shareStatus:
          connectionString: |
            674a68494da7d135e5416f6566818c0b5da72e5cc44127308ba670a591db30824e814aa9cc45b6f07176d3f907de4f89292587cbd0e8f8fd71ec508dc9ed9ee36a8b87db3e3aa9c0688af916091b938ac0bd825d18fbcd548adb8821859c1d3edaf5f4a37ad93891a294fbcc39e3dc40e281ba19548f5b751fab2023a8e1a340d6e884514b478832880766e80ab047bf07e69f9c598b43820cc5d9874790e0f526851d3d2f3ce1897d98b02d560180f6214164aee04f20286d595cec0c54a2a7bd0437e906fc9019ab06b00e1ba1b1c47fe611bb759c0e0ff251181cb57672dd76c2bf3ca6dd0e8625c84102eeb88769a86d712eb1a989a5c895bd42d47107bc8105588d34860fadaa71a927329fc961f82e2737fe07b68d7239b3a9817014337096bcb076051c5e2a0ee83bf6c1cc2cb494f57fef9c5306361b6c0143501467f0ec14e4f58167a2d97f2efcb0a49630c2f1a066fe4796b41ae73fe8df4213de3a39b7049e6a186dda0866d2535bbf943cb7d7bb178ad3f5f12e3351194808af687de79986c137d245ceeb4fbc3af1b625aa83e2b269f24b56bc100c0890c7c9a4e02cf1aa9565b64e86a038af2b0b9d2eeaac1f9e5e2daa086c00bf404e5a4a5c0aeb6e91fe983efda54a6aa983f50b94e181f88577f6a8029250f6f884658ceafbc915f54efc8fd3db993a51ea5a094a5d7db71ae556b8fa6864682baccc2118f3971e8c4010f6f23cc7b727f569d0
          state: Prepared
      

    If your Container Cloud version supports Ceph non-admin clients, connect to the producer cluster and generate connectionString in the ceph-controller Pod:

    Note

    If the consumer cluster will use the CephFS storage, add the --cephfs-enabled flag to the ceph-cluster-connector command.

    kubectl -n ceph-lcm-mirantis exec -it deploy/ceph-controller -c ceph-controller -- sh
    ceph-cluster-connector --cluster-id <clusterNamespacedName> --client-name <clientName> --verbose
    

    Substitute the following parameters:

    • <clusterNamespacedName> with namespace/name of the consumer cluster

    • <clientName> with the Ceph client name from the previous step in the client.<name> format. For example, client.non-admin-client.

    Example of a positive system response:

    I1221 14:20:29.921024     139 main.go:17] Connector code version: 1.0.0-mcc-dev-ebcd6677
    I1221 14:20:29.921085     139 main.go:18] Go Version: go1.18.8
    I1221 14:20:29.921097     139 main.go:19] Go OS/Arch: linux/amd64
    I1221 14:20:30.801832     139 connector.go:71] Your connection string is:
    d0e64654d0551e7c3a940b8f460838261248193365a7115e54a3424aa2ad122e9a85bd12ec453ca5a092c37f6238e81142cf839fd15a4cd6aafa1238358cb50133d21b1656641541bd6c3bbcad220e8a959512ef11461d14fb11fd0c6110a54ed7e9a5f61eb677771cd5c8e6a6275eb7185e0b3e49e934c0ee08c6c2f37a669fc1754570cfdf893d0918fa91d802c2d36045dfc898803e423639994c2f21b03880202dfb9ed6e784f058ccf172d1bee78d7b20674652132886a80b0a8c806e23d9f69e9d0c7473d8caf24aaf014625727cbe08146e744bf0cf8f37825521d038
    

    Before Container Cloud 2.24.0, when only the client.admin client is available, connect to the producer cluster and generate connectionString in the ceph-controller Pod:

    Note

    If the consumer cluster will use the CephFS storage, add the --cephfs-enabled flag to the ceph-cluster-connector command.

    kubectl -n ceph-lcm-mirantis exec -it deploy/ceph-controller -c ceph-controller -- sh
    ceph-cluster-connector --cluster-id <clusterNamespacedName>
    

    Substitute <clusterNamespacedName> with namespace/name of the consumer cluster.

    Example of a positive system response:

    I1221 14:20:29.921024     139 main.go:17] Connector code version: 1.0.0-mcc-dev-ebcd6677
    I1221 14:20:29.921085     139 main.go:18] Go Version: go1.18.8
    I1221 14:20:29.921097     139 main.go:19] Go OS/Arch: linux/amd64
    I1221 14:20:30.801832     139 connector.go:71] Your connection string is:
    d0e64654d0551e7c3a940b8f460838261248193365a7115e54a3424aa2ad122e9a85bd12ec453ca5a092c37f6238e81142cf839fd15a4cd6aafa1238358cb50133d21b1656641541bd6c3bbcad220e8a959512ef11461d14fb11fd0c6110a54ed7e9a5f61eb677771cd5c8e6a6275eb7185e0b3e49e934c0ee08c6c2f37a669fc1754570cfdf893d0918fa91d802c2d36045dfc898803e423639994c2f21b03880202dfb9ed6e784f058ccf172d1bee78d7b20674652132886a80b0a8c806e23d9f69e9d0c7473d8caf24aaf014625727cbe08146e744bf0cf8f37825521d038
    
  5. Create the consumer KaaSCephCluster object file, for example, consumer-kcc.yaml with the following content:

    apiVersion: kaas.mirantis.com/v1alpha1
    kind: KaaSCephCluster
    metadata:
      name: <clusterName>
      namespace: <consumerClusterProjectName>
    spec:
      cephClusterSpec:
        external:
          enable: true
          connectionString: <generatedConnectionString>
        network:
          clusterNet: <clusterNetCIDR>
          publicNet: <publicNetCIDR>
        nodes: {}
      k8sCluster:
        name: <clusterName>
        namespace: <consumerClusterProjectName>
    

    Specify the following values:

    • <consumerClusterProjectName> is the project name of the consumer managed cluster on the management cluster.

    • <clusterName> is the consumer managed cluster name.

    • <generatedConnectionString> is the connection string generated in the previous step.

    • <clusterNetCIDR> and <publicNetCIDR> are values that must match the same values in the producer KaaSCephCluster object.

    Note

    The spec.cephClusterSpec.network and spec.cephClusterSpec.nodes parameters are mandatory.

    The connectionString parameter is specified in the spec.cephClusterSpec.external section of the KaaSCephCluster CR. The parameter contains an encrypted string with all the configurations needed to connect the consumer cluster to the shared Ceph cluster.
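    Put together with the managed-ns/managed-cluster example output obtained earlier, a filled-in consumer object might look as follows. The network CIDRs below are placeholders for illustration; copy the real values from the producer KaaSCephCluster object:

    ```yaml
    apiVersion: kaas.mirantis.com/v1alpha1
    kind: KaaSCephCluster
    metadata:
      name: managed-cluster
      namespace: managed-ns
    spec:
      cephClusterSpec:
        external:
          enable: true
          # The string generated on the producer cluster in the previous step
          connectionString: <generatedConnectionString>
        network:
          clusterNet: 10.10.0.0/24  # must match the producer value
          publicNet: 10.11.0.0/24   # must match the producer value
        nodes: {}
      k8sCluster:
        name: managed-cluster
        namespace: managed-ns
    ```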

  6. Apply consumer-kcc.yaml on the management cluster:

    kubectl apply -f consumer-kcc.yaml
    

Once the Ceph cluster is specified in the KaaSCephCluster CR of the consumer cluster, Ceph Controller validates it and requests Rook to connect the consumer and producer.

Consume pools from the Ceph cluster

  1. Open the KaaSCephCluster CR of the consumer cluster for editing:

    kubectl -n <managedClusterProjectName> edit kaascephcluster
    

    Substitute <managedClusterProjectName> with the corresponding value.

  2. In the spec.cephClusterSpec.pools section, specify the pools from the producer cluster to be used by the consumer cluster. For example:

    Caution

    Each name in the pools section must match the corresponding full pool name of the producer cluster. You can find full pool names in the KaaSCephCluster CR by the following path: status.fullClusterInfo.blockStorageStatus.poolsStatus.

    spec:
      cephClusterSpec:
        pools:
        - default: true
          deviceClass: ssd
          useAsFullName: true
          name: kubernetes-ssd
          role: kubernetes-ssd
        - default: false
          deviceClass: hdd
          useAsFullName: true
          name: volumes-hdd
          role: volumes
    

After specifying pools in the consumer KaaSCephCluster CR, Ceph Controller creates a corresponding StorageClass for each specified pool, which can be used for creating ReadWriteOnce persistent volumes (PVs) in the consumer cluster.
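The resulting StorageClass can then be consumed through a regular PersistentVolumeClaim. The following is a minimal sketch, assuming that the StorageClass name matches the pool name kubernetes-ssd from the example above; verify the actual name with kubectl get storageclass:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce               # shared RBD pools back RWO volumes
  storageClassName: kubernetes-ssd
  resources:
    requests:
      storage: 5Gi
```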

Enable CephFS on a consumer Ceph cluster

  1. Open the KaaSCephCluster CR of the consumer cluster for editing:

    kubectl -n <managedClusterProjectName> edit kaascephcluster
    

    Substitute <managedClusterProjectName> with the corresponding value.

  2. In the sharedFilesystem section of the consumer cluster, specify the dataPools to share.

    Note

    Sharing CephFS also requires specifying the metadataPool and metadataServer sections similarly to the corresponding sections of the producer cluster. For details, see CephFS specification.

    For example:

    spec:
      cephClusterSpec:
        sharedFilesystem:
          cephFS:
          - name: cephfs-store
            dataPools:
            - name: cephfs-pool-1
              replicated:
                size: 3
              failureDomain: host
            metadataPool:
              replicated:
                size: 3
              failureDomain: host
            metadataServer:
              activeCount: 1
              activeStandby: false
    

After specifying CephFS in the KaaSCephCluster CR of the consumer cluster, Ceph Controller creates a corresponding StorageClass that allows creating ReadWriteMany (RWX) PVs in the consumer cluster.
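Such volumes can be requested with the ReadWriteMany access mode. The following is a minimal sketch, assuming that the StorageClass name follows the CephFS name cephfs-store from the example above; verify the actual name with kubectl get storageclass:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany               # CephFS backs RWX volumes
  storageClassName: cephfs-store
  resources:
    requests:
      storage: 10Gi
```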