Enable Multisite for Ceph RGW Object Storage

Caution

This feature is available as Technology Preview. Use this configuration for testing and evaluation purposes only. For the Technology Preview feature definition, refer to Technology Preview support scope.

The Ceph Multisite feature allows object storage to replicate its data over multiple Ceph clusters. With Multisite, each object storage instance is independent and isolated from other object storage instances in the cluster. For more details, see Ceph documentation: Multisite.

Warning

Rook does not handle Multisite configuration changes and removal. Therefore, once you enable Multisite for Ceph RGW Object Storage, perform these operations manually through the ceph-tools pod. For details, see Rook documentation: Multisite cleanup.

To enable Multisite for Ceph RGW Object Storage:

  1. Select from the following options:

    • If you do not have a Container cloud cluster yet, open kaascephcluster.yaml.template for editing.

    • If the Container cloud cluster is already deployed, open the KaaSCephCluster CR of a managed cluster for editing:

      kubectl edit kaascephcluster -n <managedClusterProjectName>
      

      Substitute <managedClusterProjectName> with a corresponding value.

  2. Using the following table, update the cephClusterSpec.objectStorage.multiSite section specification as required:

    Multisite parameters

    realms (Technology Preview)

    The list of realms to use; each realm represents a realm namespace. Includes the following parameters:

    • name - the realm name.

    • pullEndpoint - optional, required only when the master zone is in a different storage cluster. The endpoint, access key, and secret key of the system user of the realm to pull from. Includes the following parameters:

      • endpoint - the endpoint of the master zone in the master zone group.

      • accessKey - the access key of the system user of the realm to pull from.

      • secretKey - the secret key of the system user of the realm to pull from.

    zoneGroups (Technology Preview)

    The list of zone groups for realms. Includes the following parameters:

    • name - the zone group name.

    • realmName - the name of the realm namespace to which the zone group belongs.

    zones (Technology Preview)

    The list of zones used within one zone group. Includes the following parameters:

    • name - the zone name.

    • metadataPool - the settings used to create the Object Storage metadata pools. Must use replication. For details, see Pool parameters.

    • dataPool - the settings to create the Object Storage data pool. Can use replication or erasure coding. For details, see Pool parameters.

    • zoneGroupName - the zone group name.

    For example:

    objectStorage:
      multiSite:
        realms:
        - name: realm_from_cluster
        zoneGroups:
        - name: zonegroup_from_cluster
          realmName: realm_from_cluster
        zones:
        - name: secondary-zone
          zoneGroupName: zonegroup_from_cluster
          metadataPool:
            failureDomain: host
            replicated:
              size: 3
          dataPool:
            erasureCoded:
              codingChunks: 1
              dataChunks: 2
            failureDomain: host
    
  3. Select from the following options:

    • If you do not need to replicate data from a different storage cluster, do not specify the pullEndpoint parameter. In this case, the zone defined for the Ceph RGW Object Storage in KaaSCephCluster becomes the master zone.
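      For illustration, a master-zone configuration without pullEndpoint may look as follows. The realm, zone group, and zone names below are only examples:

      ```yaml
      objectStorage:
        multiSite:
          realms:
          - name: realm_from_cluster      # no pullEndpoint: this cluster hosts the master zone
          zoneGroups:
          - name: zonegroup_from_cluster
            realmName: realm_from_cluster
          zones:
          - name: master-zone             # example zone name
            zoneGroupName: zonegroup_from_cluster
            metadataPool:
              failureDomain: host
              replicated:
                size: 3
            dataPool:
              failureDomain: host
              erasureCoded:
                codingChunks: 1
                dataChunks: 2
      ```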

    • If a different storage cluster exists and its object storage data must be replicated, specify the same realm and zone group names along with the pullEndpoint parameter: the endpoint, access key, and secret key of the system user of the realm from which you need to replicate data. For details, see step 2.

      1. To obtain the endpoint of the cluster zone that must be replicated, run the following command specifying the realm and zone group names of the required master zone:

        radosgw-admin zonegroup get --rgw-realm=<REALM_NAME> --rgw-zonegroup=<ZONE_GROUP_NAME>
        
      2. To obtain the access key and the secret key of the system user, run the following command on the required Ceph cluster:

        radosgw-admin user info --uid="<USER_NAME>"
        

      For example:

      objectStorage:
        multiSite:
          realms:
          - name: realm_from_cluster
            pullEndpoint:
              endpoint: http://10.11.0.75:8080
              accessKey: DRND5J2SVC9O6FQGEJJF
              secretKey: qpjIjY4lRFOWh5IAnbrgL5O6RTA1rigvmsqRGSJk
          zoneGroups:
          - name: zonegroup_from_cluster
            realmName: realm_from_cluster
          zones:
          - name: secondary-zone
            zoneGroupName: zonegroup_from_cluster
            metadataPool:
              failureDomain: host
              replicated:
                size: 3
            dataPool:
              erasureCoded:
                codingChunks: 1
                dataChunks: 2
              failureDomain: host
      
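      As a sketch, the accessKey and secretKey values can be extracted from the JSON output of radosgw-admin user info. The sample JSON below stands in for the real command output; the user name and key values are only the illustrative ones from the example above:

      ```shell
      # Sample JSON mimicking the relevant fields of
      # `radosgw-admin user info --uid="<USER_NAME>"` (illustrative values).
      user_info='{"user_id":"sysuser","keys":[{"user":"sysuser","access_key":"DRND5J2SVC9O6FQGEJJF","secret_key":"qpjIjY4lRFOWh5IAnbrgL5O6RTA1rigvmsqRGSJk"}]}'
      # On a real cluster, capture the command output instead:
      # user_info="$(radosgw-admin user info --uid="<USER_NAME>")"

      # Extract the first key pair with python3 (jq works equally well if installed).
      access_key="$(printf '%s' "$user_info" | python3 -c 'import json,sys; print(json.load(sys.stdin)["keys"][0]["access_key"])')"
      secret_key="$(printf '%s' "$user_info" | python3 -c 'import json,sys; print(json.load(sys.stdin)["keys"][0]["secret_key"])')"
      echo "accessKey: $access_key"
      echo "secretKey: $secret_key"
      ```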
  4. Configure the zone RADOS Gateway parameter as described in Enable Ceph RGW Object Storage. Leave dataPool and metadataPool empty; these parameters are ignored because the zones section of the Multisite configuration defines the pool parameters.

    Note

    If Ceph RGW Object Storage in your cluster is not set up for Multisite, see Ceph documentation: Migrating a single site system to multi-site.

    For example:

    rgw:
      dataPool: {}
      gateway:
        allNodes: false
        instances: 2
        port: 80
        securePort: 8443
      healthCheck:
        bucket:
          disabled: true
      metadataPool: {}
      name: store-test-pull
      preservePoolsOnDelete: false
      zone:
        name: "secondary-zone"
    

Once done, ceph-operator creates the required resources and Rook handles the Multisite configuration. For details, see Rook documentation: Object Multisite.