# Configure bucket replication with CLI
- Create the TLS certificates that both MinIO deployments will use for replication:

  ```shell
  openssl req -new -x509 -days 365 -nodes \
    -out fullchain.pem -keyout privkey.pem \
    -subj "/CN=*<domain>" \
    -addext "subjectAltName = DNS:<FQDN MinIO source instance>,DNS:<FQDN MinIO destination instance>"
  ```

  **Note:** Include every DNS name used to access MinIO in the certificate Subject Alternative Names (SAN). This can include Ingress hostnames, LoadBalancer addresses, and host FQDNs.

  For more information about the available options for exposing a MinIO instance, refer to the official MinIO documentation.
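Before distributing the certificate, it is worth confirming that it actually carries all the required SANs. A minimal sketch using `openssl` to inspect the generated certificate; the hostnames below (`minio-source.example.com`, `minio-dest.example.com`) are placeholders for your real FQDNs, and the `-ext` flag requires OpenSSL 1.1.1 or later:

```shell
# Generate a test certificate with two placeholder SANs (substitute your FQDNs).
openssl req -new -x509 -days 365 -nodes \
  -out fullchain.pem -keyout privkey.pem \
  -subj "/CN=*.example.com" \
  -addext "subjectAltName = DNS:minio-source.example.com,DNS:minio-dest.example.com"

# Print only the SAN extension; every hostname used to reach MinIO must be listed.
openssl x509 -in fullchain.pem -noout -ext subjectAltName
```

If a hostname is missing from the printed list, clients connecting through that name will fail TLS verification.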
- Create a secret on both clusters from the generated `.pem` files. Use the same secret name on both clusters:

  ```shell
  kubectl create secret generic <secret name> \
    --from-file=public.crt=fullchain.pem \
    --from-file=private.key=privkey.pem \
    -n minio-tenant
  ```

  **Note:** The MinIO Operator expects the secret keys to be named `public.crt` and `private.key`, not the standard `tls.crt` and `tls.key`.

- Deploy the MinIO Operator and MinIO Tenant on both clusters by following the Install MinIO and Velero guide.
- Modify the MinIO Tenant `values.yml` file to include the TLS secret created earlier:

  ```yaml
  externalCertSecret:
    - name: myminio-external-tls
  requestAutoCert: false
  ```

- Install the MinIO Client (`mc`) utility.

- Add the source and destination MinIO instances as aliases:

  ```shell
  mc alias set <source alias> --insecure https://<source MinIO instance API URL> <accessKey> <secretKey>
  mc alias set <destination alias> --insecure https://<destination MinIO instance API URL> <accessKey> <secretKey>
  ```
- Verify that both MinIO deployments are online:

  ```shell
  mc admin info <source alias> --insecure
  mc admin info <destination alias> --insecure
  ```

  Example output:

  ```
  $ mc admin info source-alias --insecure
  ●  <MinIO instance API URL>
     Uptime: 8 minutes
     Version: 2025-04-08T15:41:24Z
     Network: 1/1 OK
     Drives: 4/4 OK
     Pool: 1

  ┌──────┬───────────────────────┬─────────────────────┬──────────────┐
  │ Pool │ Drives Usage          │ Erasure stripe size │ Erasure sets │
  │ 1st  │ 1.6% (total: 193 GiB) │ 4                   │ 1            │
  └──────┴───────────────────────┴─────────────────────┴──────────────┘

  58 B Used, 1 Bucket, 1 Object, 1 Version, 1 Delete Marker
  4 drives online, 0 drives offline, EC:2
  ```
- On both clusters, create a bucket with versioning enabled using the MinIO Client:

  ```shell
  mc mb <source alias>/<bucket name> --with-versioning --insecure
  mc mb <destination alias>/<bucket name> --with-versioning --insecure
  ```
- Create a replication rule on the source bucket:

  ```shell
  mc replicate add <source alias>/<bucket name> \
    --remote-bucket "https://<accessKey>:<secretKey>@<destination MinIO instance API URL>/<bucket name>" \
    --replicate "delete,delete-marker,existing-objects" \
    --insecure
  ```

  Refer to the official MinIO documentation for more details on Bucket Replication.
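Note that `mc replicate add` embeds the destination credentials directly in the `--remote-bucket` URL. A small sketch of how that URL is assembled, using entirely hypothetical credentials, hostname, and bucket name:

```shell
# All values below are placeholders; substitute your own.
ACCESS_KEY="replication-user"
SECRET_KEY="replication-secret"
DEST_API_URL="minio-dest.example.com"
BUCKET="velero-backups"

# Credentials are embedded as userinfo in the target URL.
REMOTE_BUCKET="https://${ACCESS_KEY}:${SECRET_KEY}@${DEST_API_URL}/${BUCKET}"
echo "${REMOTE_BUCKET}"
# → https://replication-user:replication-secret@minio-dest.example.com/velero-backups
```

Keep the URL quoted when passing it to `mc`, so that any shell metacharacters in the secret key are not interpreted.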
- Verify the replication status:

  ```shell
  mc replicate status <source alias>/<bucket name> --insecure
  ```

  Example output:

  ```
  $ mc replicate status source-alias/source-1 --insecure
  Replication status since 15 minutes
  Summary:
  Replicated:    0 objects (0 B)
  Queued:        ● 0 objects, 0 B (avg: 0 objects, 0 B ; max: 0 objects, 0 B)
  Workers:       0 (avg: 0; max: 0)
  Received:      0 objects (0 B)
  Transfer Rate: 0 B/s (avg: 0 B/s; max: 0 B/s)
  Errors:        0 in last 1 minute; 0 in last 1hr; 0 since uptime
  ```
- Start the replication manually:

  - Retrieve the replication rule ARN:

    ```shell
    mc replicate ls <source alias>/<bucket name> --insecure
    ```

    Example output:

    ```
    $ mc replicate ls source-alias/source-1 --insecure
    Rules:
    Remote Bucket: <MinIO instance API URL>/destination-1
       Rule ID: d5eerna08m4cuktdg8mg
       Priority: 0
       ARN: arn:minio:replication::39db5882-148c-4e9c-b20e-34d509e99f64:destination-1
    ```

  - Start a replication job:

    ```shell
    mc replicate resync start <source alias>/<bucket name> --remote-bucket "<ARN value>" --insecure
    ```

  - Check the replication job status:

    ```shell
    mc replicate resync status <source alias>/<bucket name> --remote-bucket "<ARN value>" --insecure
    ```

    Example output:

    ```
    $ mc replicate resync status source-alias/source-1 --remote-bucket "arn:minio:replication::39db5882-148c-4e9c-b20e-34d509e99f64:destination-1" --insecure
    Resync status summary:
    ● arn:minio:replication::39db5882-148c-4e9c-b20e-34d509e99f64:destination-1
       Status: Completed
       Replication Status | Size (Bytes) | Count
       Replicated         | 0 B          | 1
       Failed             | 0 B          | 0
    ```
- To restore data back to the original cluster, create a replication rule from the destination bucket to the source bucket, following the same steps described above.
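The failback rule mirrors the forward rule with the endpoints swapped. A sketch that assembles the reverse command from placeholder values; every name below is hypothetical, and the script only prints the command (drop the `echo` to run it on a host where `mc` is configured):

```shell
# Placeholders; substitute your own alias, credentials, URL, and bucket name.
DEST_ALIAS="destination-alias"
SOURCE_API_URL="minio-source.example.com"
ACCESS_KEY="replication-user"
SECRET_KEY="replication-secret"
BUCKET="velero-backups"

# Same flags as the forward rule, but pointing from the destination bucket
# back at the source bucket.
FAILBACK_CMD="mc replicate add ${DEST_ALIAS}/${BUCKET} --remote-bucket https://${ACCESS_KEY}:${SECRET_KEY}@${SOURCE_API_URL}/${BUCKET} --replicate delete,delete-marker,existing-objects --insecure"
echo "${FAILBACK_CMD}"
```

After creating the reverse rule, the same `mc replicate resync start` procedure described above can be run from the destination side to push existing objects back.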