Connect to and manage a shared Ceph cluster

TechPreview Available since 2.22.0

Caution

For MOSK-based deployments, the feature is not supported yet.

A shared Ceph cluster allows connecting a consumer cluster to a producer cluster. The consumer cluster uses the Ceph cluster deployed on the producer cluster to store the necessary data. In other words, the producer cluster runs the Ceph cluster with the mon, mgr, osd, and mds daemons, while the consumer cluster runs the clients that require access to the Ceph storage.

For example, an NGINX application that runs in a cluster without storage requires a persistent volume to store its data. In this case, such a cluster can connect to a Ceph cluster and use it as block or file storage.

Limitations

  • Connection to a shared Ceph cluster is possible only through the client.admin user.

  • The producer and consumer clusters must be located in the same region.

  • The producer cluster LCM network must be accessible from the consumer cluster.

Connect the producer to the consumer

  1. From the producer KaaSCephCluster custom resource (CR), obtain the producer cluster name:

    kubectl -n <managedClusterProjectName> get kaascephcluster -o jsonpath='{.items[0].spec.k8sCluster.namespace}/{.items[0].spec.k8sCluster.name}{"\n"}'
    

    Substitute <managedClusterProjectName> with the producer cluster project name.
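
    Example of a system response, where the project and cluster names are hypothetical:

    managed-ns/ceph-producer-cluster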

  2. Connect to the producer cluster and generate connectionString in the ceph-controller Pod:

    kubectl -n ceph-lcm-mirantis exec -it $(kubectl -n ceph-lcm-mirantis get pod -l "app=ceph-controller" -o name | head -1) -c ceph-controller -- ceph-cluster-connector --cluster-id <clusterName>
    

    Substitute <clusterName> with the producer cluster name obtained in the previous step.

    Example of a positive system response:

    I1221 14:20:29.921024     139 main.go:17] Connector code version: 1.0.0-mcc-dev-ebcd6677
    I1221 14:20:29.921085     139 main.go:18] Go Version: go1.18.8
    I1221 14:20:29.921097     139 main.go:19] Go OS/Arch: linux/amd64
    I1221 14:20:30.801832     139 connector.go:71] Your connection string is:
    d0e64654d0551e7c3a940b8f460838261248193365a7115e54a3424aa2ad122e9a85bd12ec453ca5a092c37f6238e81142cf839fd15a4cd6aafa1238358cb50133d21b1656641541bd6c3bbcad220e8a959512ef11461d14fb11fd0c6110a54ed7e9a5f61eb677771cd5c8e6a6275eb7185e0b3e49e934c0ee08c6c2f37a669fc1754570cfdf893d0918fa91d802c2d36045dfc898803e423639994c2f21b03880202dfb9ed6e784f058ccf172d1bee78d7b20674652132886a80b0a8c806e23d9f69e9d0c7473d8caf24aaf014625727cbe08146e744bf0cf8f37825521d038
    
  3. Open the KaaSCephCluster CR of the consumer cluster:

    kubectl -n <managedClusterProjectName> edit kaascephcluster
    

    Substitute <managedClusterProjectName> with the consumer cluster project name.

  4. In the spec.cephClusterSpec section, specify the following parameters:

    • spec.cephClusterSpec.network

    • spec.cephClusterSpec.nodes

    • connectionString

    Note

    The spec.cephClusterSpec.network and spec.cephClusterSpec.nodes parameters are mandatory.

    The connectionString parameter is specified in the spec.cephClusterSpec.external section of the KaaSCephCluster CR. The parameter contains an encrypted string with all the configurations needed to connect the consumer cluster to the shared Ceph cluster.

    spec:
      cephClusterSpec:
        external:
          enable: true
          connectionString: <generatedConnectionString>
        network:
          clusterNet: <CIDR>
          publicNet: <CIDR>
        nodes: {}
    

    Substitute <generatedConnectionString> with the string generated in the producer cluster and <CIDR> with the corresponding networks of the producer Ceph cluster.

  5. Save the KaaSCephCluster CR and close the editor.

Once the Ceph cluster is specified in the KaaSCephCluster CR of the consumer cluster, Ceph Controller validates the specification and requests Rook to connect the consumer and producer.
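
To verify that the connection has been established, inspect the status section of the consumer KaaSCephCluster CR. For example:

    kubectl -n <managedClusterProjectName> get kaascephcluster -o yaml

Substitute <managedClusterProjectName> with the consumer cluster project name and review the status.fullClusterInfo section once it is populated.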

Consume pools from the Ceph cluster

  1. Open the KaaSCephCluster CR of the consumer cluster:

    kubectl -n <managedClusterProjectName> edit kaascephcluster
    

    Substitute <managedClusterProjectName> with the consumer cluster project name.

  2. In the spec.cephClusterSpec.pools section, specify the pools from the producer cluster to be used by the consumer cluster. For example:

    Caution

    Each name in the pools section must match the corresponding full pool name of the producer cluster. You can find full pool names in the KaaSCephCluster CR by the following path: status.fullClusterInfo.blockStorageStatus.poolsStatus. See the lookup example after this procedure.

    spec:
      cephClusterSpec:
        pools:
        - default: true
          deviceClass: ssd
          useAsFullName: true
          name: kubernetes-ssd
          role: kubernetes-ssd
        - default: false
          deviceClass: hdd
          useAsFullName: true
          name: volumes-hdd
          role: volumes
    
  3. Save the KaaSCephCluster CR and close the editor.
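
To list the full pool names referenced in the caution above, you can query the documented status path directly. For example:

    kubectl -n <managedClusterProjectName> get kaascephcluster -o jsonpath='{.items[0].status.fullClusterInfo.blockStorageStatus.poolsStatus}'

Substitute <managedClusterProjectName> with the corresponding cluster project name.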

After specifying pools in the consumer KaaSCephCluster CR, Ceph Controller creates a corresponding StorageClass for each specified pool, which can be used for creating ReadWriteOnce persistent volumes (PVs) in the consumer cluster.
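
For example, a PersistentVolumeClaim that consumes one of the created StorageClasses may look as follows. The StorageClass name below is an assumption based on the pool name from the example above; verify the actual name with kubectl get storageclass on the consumer cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: kubernetes-ssd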

Enable CephFS on a consumer Ceph cluster

  1. Applies before Container Cloud 2.22.0. Enable the CephFS feature on the consumer cluster using steps 1-3 of Enable CephFS.

    Note

    Since Container Cloud 2.22.0, CephFS is enabled by default.

  2. In the sharedFilesystem section of the consumer cluster KaaSCephCluster CR, specify the dataPools to share.

    Note

    Sharing CephFS also requires specifying the metadataPool and metadataServer sections similarly to the corresponding sections of the producer cluster. For details, see CephFS specification.

    For example:

    spec:
      cephClusterSpec:
        sharedFilesystem:
          cephFS:
          - name: cephfs-store
            dataPools:
            - name: cephfs-pool-1
              replicated:
                size: 3
              failureDomain: host
            metadataPool:
              replicated:
                size: 3
              failureDomain: host
            metadataServer:
              activeCount: 1
              activeStandby: false
    

After specifying CephFS in the KaaSCephCluster CR of the consumer cluster, Ceph Controller creates a corresponding StorageClass that allows creating ReadWriteMany (RWX) PVs in the consumer cluster.
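
For example, a PersistentVolumeClaim that requests a ReadWriteMany volume from the CephFS-backed StorageClass may look as follows. The StorageClass name below is an assumption based on the CephFS name from the example above; verify the actual name with kubectl get storageclass on the consumer cluster:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
      storageClassName: cephfs-store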