Configure Ceph Object Gateway TLS

Once you enable Ceph Object Gateway (radosgw) as described in Mirantis Container Cloud: Enable Ceph RGW Object Storage, you can configure the Transport Layer Security (TLS) protocol for a Ceph Object Gateway public endpoint using the following options:

  • Using MOSK TLS, if it is enabled and exposes its certificates and domain for Ceph. In this case, Ceph Object Gateway automatically creates an ingress rule with MOSK certificates and domain to access the Ceph Object Gateway public endpoint. Therefore, you only need to obtain the Ceph Object Gateway public and internal endpoints and set the CA certificates for a trusted TLS connection.

  • Using custom ingress specified in the KaaSCephCluster CR. In this case, Ceph Object Gateway public endpoint will use the public domain specified using the ingress parameters.

Caution

  • External Ceph Object Gateway service is not supported and will be deleted during update. If your system already uses endpoints of an external Ceph Object Gateway service, reconfigure them to the ingress endpoints.

  • When using a custom or OpenStack ingress, configure the RGW DNS name to resolve to the external IP address of that ingress. If you do not have an OpenStack or custom ingress, point the DNS name to the external load balancer of RGW.

This section also describes how to specify a custom public endpoint for the Object Storage service.

To configure Ceph Object Gateway TLS:

  1. Verify whether MOSK TLS is enabled. The spec.features.ssl.public_endpoints section should be specified in the OpenStackDeployment CR.
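
     A quick way to check (a sketch; the osdpl short resource name and the openstack namespace are common defaults and may differ in your environment):

     ```shell
     # Print the TLS section of the OpenStackDeployment CR.
     # A non-empty result means MOSK TLS is enabled for public endpoints.
     kubectl -n openstack get osdpl -o jsonpath='{.items[0].spec.features.ssl.public_endpoints}'
     ```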

  2. To generate an SSL certificate for internal usage, verify that the gateway securePort parameter is specified in the KaaSCephCluster CR. For details, see Mirantis Container Cloud: Enable Ceph RGW Object Storage.
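
     To check the current value (a sketch; substitute <project> with the project namespace that contains the KaaSCephCluster CR, and note that the jsonpath assumes the objectStorage.rgw layout used elsewhere in this section):

     ```shell
     # Print the configured RGW secure port; an empty result means it is not set.
     kubectl -n <project> get kaascephcluster \
       -o jsonpath='{.items[0].spec.cephClusterSpec.objectStorage.rgw.gateway.securePort}'
     ```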

  3. Select from the following options:

    • If MOSK TLS is enabled, obtain the MOSK CA certificate for a trusted connection:

      kubectl -n openstack-ceph-shared get secret openstack-rgw-creds -o jsonpath="{.data.ca_cert}" | base64 -d
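
      For example, to save the certificate to a file and use it for a trusted TLS connection (a sketch; <public-endpoint> is a placeholder for the Ceph Object Gateway public endpoint):

      ```shell
      # Save the MOSK CA certificate to a local file.
      kubectl -n openstack-ceph-shared get secret openstack-rgw-creds \
        -o jsonpath="{.data.ca_cert}" | base64 -d > rgw-ca.crt

      # Verify TLS against the public endpoint using the saved CA certificate.
      # Substitute <public-endpoint> with the actual endpoint.
      curl --cacert rgw-ca.crt "https://<public-endpoint>"
      ```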
      
    • Configure Ceph Object Gateway TLS using a custom ingress:

      Warning

      The rgw section is deprecated and the ingress parameters are moved under cephClusterSpec.ingress. If you continue using rgw.ingress, it will be automatically translated into cephClusterSpec.ingress during the MOSK cluster release update.

      1. Open the KaaSCephCluster CR for editing.

      2. Specify the ingress parameters:

        • publicDomain - domain name to use for the external service.

        • cacert - Certificate Authority (CA) certificate, used for the ingress rule TLS support.

        • tlsCert - TLS certificate, used for the ingress rule TLS support.

        • tlsKey - TLS private key, used for the ingress rule TLS support.

        • customIngress - Optional. Includes the following custom Ingress Controller parameters:

          • className - the custom Ingress Controller class name. If not specified, the openstack-ingress-nginx class name is used by default.

          • annotations - extra annotations for the ingress proxy. For details, see NGINX Ingress Controller: Annotations.

            By default, the following annotations are set:

            • nginx.ingress.kubernetes.io/rewrite-target is set to /

            • nginx.ingress.kubernetes.io/upstream-vhost is set to <rgwName>.rook-ceph.svc.

              The value for <rgwName> is spec.cephClusterSpec.objectStorage.rgw.name.

            Optional annotations:

            • nginx.ingress.kubernetes.io/proxy-request-buffering: "off" - disables request buffering for the ingress to prevent the 413 (Request Entity Too Large) error when uploading large files using radosgw.

            • nginx.ingress.kubernetes.io/proxy-body-size: <size> - increases the default upload size limit to prevent the 413 (Request Entity Too Large) error when uploading large files using radosgw. Set the value in MB (m) or KB (k). For example, 100m.

          For example:

          customIngress:
            className: openstack-ingress-nginx
            annotations:
              nginx.ingress.kubernetes.io/rewrite-target: /
              nginx.ingress.kubernetes.io/upstream-vhost: openstack-store.rook-ceph.svc
              nginx.ingress.kubernetes.io/proxy-body-size: 100m
          

          Note

          An ingress rule is created by default with the internal Ceph Object Gateway service endpoint as a back end. Also, the rgw dns name parameter is specified in the Ceph configuration and is set to <rgwName>.rook-ceph.svc by default. You can override this option using the spec.cephClusterSpec.rookConfig key-value parameter. In this case, also change the corresponding ingress annotation.

        For example:

        spec:
          cephClusterSpec:
            objectStorage:
              rgw:
                name: rgw-store
            ingress:
              publicDomain: public.domain.name
              cacert: |
                -----BEGIN CERTIFICATE-----
                ...
                -----END CERTIFICATE-----
              tlsCert: |
                -----BEGIN CERTIFICATE-----
                ...
                -----END CERTIFICATE-----
              tlsKey: |
                -----BEGIN RSA PRIVATE KEY-----
                ...
                -----END RSA PRIVATE KEY-----
              customIngress:
                annotations:
                  "nginx.ingress.kubernetes.io/upstream-vhost": rgw-store.public.domain.name
            rookConfig:
              "rgw dns name": rgw-store.public.domain.name
        

        Warning

        • For clouds with the publicDomain parameter specified, align the upstream-vhost ingress annotation with the name of the Ceph Object Storage and the specified public domain.

        • Ceph Object Storage requires the upstream-vhost and rgw dns name parameters to be equal. Therefore, override the default rgw dns name to the corresponding ingress annotation value.

  4. To access internal and public Ceph Object Gateway endpoints:

    1. Obtain the Ceph Object Gateway public endpoint:

      kubectl -n rook-ceph get ingress
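
      For example, to extract only the public host name from the ingress rule (a sketch; if several ingress rules exist in rook-ceph, select the Ceph Object Gateway one from the previous command output):

      ```shell
      # Print the host name served by the first ingress rule in rook-ceph.
      kubectl -n rook-ceph get ingress -o jsonpath='{.items[0].spec.rules[0].host}'
      ```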
      
    2. To use the Ceph Object Gateway internal endpoint with TLS, configure a trusted connection for the required CA certificate:

      kubectl -n rook-ceph get secret <rgwCacertSecretName> -o jsonpath="{.data.cacert}" | base64 -d
      

      Substitute <rgwCacertSecretName> with rgw-ssl-certificate.

    3. Obtain the internal endpoint name for Ceph Object Gateway:

      kubectl -n rook-ceph get svc -l app=rook-ceph-rgw
      

      The internal endpoint for Ceph Object Gateway has the https://<internal-svc-name>.rook-ceph.svc:<rgw-secure-port>/ format, where <rgw-secure-port> is the spec.cephClusterSpec.objectStorage.rgw.gateway.securePort value specified in the KaaSCephCluster CR.
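
      Putting the previous two steps together, you can verify a trusted TLS connection to the internal endpoint as follows (a sketch; substitute the placeholders with the values obtained above):

      ```shell
      # Save the internal CA certificate to a local file.
      kubectl -n rook-ceph get secret rgw-ssl-certificate \
        -o jsonpath="{.data.cacert}" | base64 -d > rgw-internal-ca.crt

      # Verify TLS against the internal endpoint.
      curl --cacert rgw-internal-ca.crt "https://<internal-svc-name>.rook-ceph.svc:<rgw-secure-port>/"
      ```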

  5. Verify Ceph Object Storage host names:

    1. Enter the rook-ceph-tools pod:

      kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- bash
      
    2. Obtain Ceph Object Storage default zone group configuration:

      radosgw-admin zonegroup get --rgw-zonegroup=<objectStorageName> --rgw-zone=<objectStorageName> | tee zonegroup.json
      

      Substitute <objectStorageName> with the Ceph Object Storage name from spec.cephClusterSpec.objectStorage.rgw.name.

    3. Inspect zonegroup.json and verify that the hostnames key is a list that contains two endpoints: an internal endpoint and a custom public endpoint:

      "hostnames": ["rook-ceph-rgw-<objectStorageName>.rook-ceph.svc", <customPublicEndpoint>]
      

      Substitute <objectStorageName> with the Ceph Object Storage name and <customPublicEndpoint> with the public endpoint with a custom public domain.

    4. If one or both endpoints are omitted in the list, add the missing endpoints to the hostnames list in the zonegroup.json file and update Ceph Object Storage zone group configuration:

      radosgw-admin zonegroup set --rgw-zonegroup=<objectStorageName> --rgw-zone=<objectStorageName> --infile zonegroup.json
      radosgw-admin period update --commit
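
      The edit of zonegroup.json can also be scripted with jq, which this procedure already uses for verification. The following sketch operates on a sample file with only the internal endpoint present; the host names are illustrative and match the example system response in the next step:

      ```shell
      # Sample zonegroup configuration missing the custom public endpoint.
      echo '{"hostnames": ["rook-ceph-rgw-obj-store.rook-ceph.svc"]}' > zonegroup.json

      # Append the custom public endpoint, deduplicating existing entries.
      jq '.hostnames = (.hostnames + ["obj-store.mcc1.cluster1.example.com"] | unique)' \
        zonegroup.json > zonegroup-updated.json && mv zonegroup-updated.json zonegroup.json

      # Show the resulting hostnames list.
      cat zonegroup.json
      ```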
      
    5. Verify that the hostnames list contains both the internal and custom public endpoint:

      radosgw-admin --rgw-zonegroup=<objectStorageName> --rgw-zone=<objectStorageName> zonegroup get | jq -r ".hostnames"
      

      Example of system response:

      [
        "rook-ceph-rgw-obj-store.rook-ceph.svc",
        "obj-store.mcc1.cluster1.example.com"
      ]
      
    6. Exit the rook-ceph-tools pod:

      exit
      

Once done, Ceph Object Storage becomes available through the custom public endpoint using an S3 API client, the OpenStack Swift CLI, or the OpenStack Horizon Containers plugin.
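
For example, with the AWS CLI as an S3 API client (a sketch; the credentials and endpoint are placeholders, and valid Ceph Object Gateway user credentials must be obtained separately):

```shell
# List buckets through the custom public endpoint with an S3 API client.
# Substitute the placeholders with valid RGW user credentials and the
# custom public endpoint configured above.
AWS_ACCESS_KEY_ID=<accessKey> AWS_SECRET_ACCESS_KEY=<secretKey> \
  aws s3 ls --endpoint-url "https://<customPublicEndpoint>"
```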