Backup and Restore
Important
The examples herein are illustrative. Always review and adjust all deployment details to align with your actual MSR 4 instance before you execute any commands.
Set MSR 4 to Repository Read-Only mode.
Before initiating the backup, set MSR 4 to Repository Read-Only mode to minimize inconsistencies. This prevents new data from being written during the backup process.
Log in to MSR 4 as an administrator.
Navigate to Administration > Configuration.
Under System Settings, enable the Repository Read-Only option.
Click Save to apply the changes.
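To confirm that read-only mode is active, you can attempt to push an image; the registry should reject the write until read-only mode is disabled again after the backup completes. The URL, project, and image names below are placeholders:

docker push <MSR4 URL>/<project>/<image>:<tag>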
Exclude the Redis Persistent Volume Claims (PVCs), Persistent Volumes (PVs), and Pods from the backup:
kubectl -n <msr4 namespace> label pod <msr4 redis pod> velero.io/exclude-from-backup=true
kubectl -n <msr4 namespace> label pvc <msr4 redis pvc> velero.io/exclude-from-backup=true
kubectl label pv/$(kubectl -n <msr4 namespace> get pvc <msr4 redis pvc> --template={{.spec.volumeName}}) velero.io/exclude-from-backup=true
Exclude the Registry Pod, Persistent Volume Claims (PVCs), and Persistent Volumes (PVs) from the backup:
kubectl label pod <msr4 registry pod> velero.io/exclude-from-backup=true
kubectl label pvc <msr4 registry pvc> velero.io/exclude-from-backup=true
kubectl label pv/$(kubectl get pvc <msr4 registry pvc> --template={{.spec.volumeName}}) velero.io/exclude-from-backup=true
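Before creating the backup, you can optionally confirm that the exclusion labels from the previous two steps were applied by listing the labeled resources; the namespace is a placeholder:

kubectl -n <msr4 namespace> get pods,pvc -l velero.io/exclude-from-backup=true
kubectl get pv -l velero.io/exclude-from-backup=true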
Create the MSR 4 backup:
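For a single-namespace deployment, a typical invocation looks like the following; the backup name is a placeholder, and velero backup describe lets you confirm that the backup completed:

velero backup create <backup name> --include-namespaces <msr4 namespace>
velero backup describe <backup name>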
Note
If Redis and/or Postgres are in different namespaces than MSR 4, include all of the namespaces in the backup command:
velero backup create <backup name> --include-namespaces postgres,msr4 [..]
Save the Registry’s Persistent Volume Claim (PVC) and Persistent Volume (PV) specifications to YAML files:
kubectl get pvc <MSR4 registry PVC> -o yaml > msr4-pvc-registry.yml
kubectl get pv <MSR4 registry PV> -o yaml > msr4-pv-registry.yaml
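If you are unsure of the PVC and PV names, you can list the PVCs in the MSR 4 namespace and resolve the name of the bound PV; the namespace and PVC names are placeholders:

kubectl -n <MSR4 namespace> get pvc
kubectl -n <MSR4 namespace> get pvc <MSR4 registry PVC> -o jsonpath={.spec.volumeName}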
Modify the YAML files that will be applied to the target (recovery) Kubernetes cluster:
In the msr4-pvc-registry.yml file, ensure that:

- storageClassName is empty.
- volumeName matches the PV name.
- spec.capacity.storage: 5Gi is the same as the original.
- accessModes: ReadWriteMany is the same as the original.
$ cat msr4-pvc-registry.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: msr4-harbor-registry
  namespace: <MSR4 namespace>
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName:
  volumeName: <PV name>
In the msr4-pv-registry.yaml file, ensure that:

- spec.nfs.server contains the IP or hostname of the NFS server (in the NFS CSI example below, this is the server volume attribute).
- spec.nfs.path contains the exact export path on the NFS server where the blobs are stored (in the example below, the share volume attribute).
- spec.capacity.storage: 5Gi is the same as the original.
- spec.accessModes: ReadWriteMany is the same as the original.
- persistentVolumeReclaimPolicy is set to Retain to avoid data loss.
- The entire status section is removed.
- The entire spec.claimRef section is removed.
$ cat msr4-pv-registry.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: nfs.csi.k8s.io
    volume.kubernetes.io/provisioner-deletion-secret-name: ""
    volume.kubernetes.io/provisioner-deletion-secret-namespace: ""
  finalizers:
  - external-provisioner.volume.kubernetes.io/finalizer
  - kubernetes.io/pv-protection
  labels:
  name: <PV name>
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  csi:
    driver: nfs.csi.k8s.io
    volumeAttributes:
      csi.storage.k8s.io/pv/name: <PV name>
      csi.storage.k8s.io/pvc/name: msr4-harbor-registry
      csi.storage.k8s.io/pvc/namespace: <MSR4 namespace>
      mountPermissions: "0"
      server: <NFS URL>
      share: <NFS path>
      subdir: <PV name>
    volumeHandle: <NFS URL>#var/nfs/general#<NFS path>##
  mountOptions:
  - nfsvers=4.1
  persistentVolumeReclaimPolicy: Retain
  storageClassName: <NFS CSI storage class>
  volumeMode: Filesystem
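After editing both files, a client-side dry run is a quick way to verify that the manifests still parse and pass schema validation before applying them to the recovery cluster:

kubectl apply --dry-run=client -f msr4-pv-registry.yaml -f msr4-pvc-registry.yml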
Apply the Persistent Volume (PV) to the target (recovery) Kubernetes cluster, and verify that it is in Available status:

kubectl apply -f msr4-pv-registry.yaml
kubectl get pv <PV name>
Example output:
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
pvc-c9383d47-74b6-4c04-857c-6b5f05164171   5Gi        RWX            Retain           Available           nfs-csi        <unset>                          5s
Apply the Persistent Volume Claim (PVC) and wait until it is in Bound status:

kubectl apply -f msr4-pvc-registry.yml
kubectl get pvc <PVC name>
Example output:
kubectl get pvc
NAME                   STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
msr4-harbor-registry   Pending   pvc-c9383d47-74b6-4c04-857c-6b5f05164171   0                         nfs-csi        <unset>                 6s

kubectl get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
msr4-harbor-registry   Bound    pvc-c9383d47-74b6-4c04-857c-6b5f05164171   5Gi        RWX            nfs-csi        <unset>                 15s
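Instead of polling manually, you can block until the PVC binds. One option is kubectl wait with a JSONPath condition (available in kubectl 1.23 and later); the PVC name and timeout below are placeholders:

kubectl wait --for=jsonpath='{.status.phase}'=Bound pvc/<PVC name> --timeout=120s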
If Postgres Operator is used as the database provider for MSR 4, verify that the target (recovery) Kubernetes cluster includes the ClusterRole postgres-pod:

Verify that the ClusterRole exists:

kubectl get clusterrole postgres-pod
If the command reports that the ClusterRole is not found, create it manually:
$ cat postgres-pod.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-name: postgres-operator
    meta.helm.sh/release-namespace: <msr4 namespace>
  labels:
    app.kubernetes.io/instance: postgres-operator
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: postgres-operator
    helm.sh/chart: postgres-operator-1.14.0
  name: postgres-pod
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - create

$ kubectl apply -f postgres-pod.yml
Restore MSR 4 on the target (recovery) Kubernetes cluster:
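Assuming the backup created earlier, a minimal Velero restore invocation looks like the following; the restore name is a placeholder, and velero restore describe reports progress and any warnings:

velero restore create <restore name> --from-backup <backup name>
velero restore describe <restore name>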
Reconfigure the restored instance of MSR 4 on the target (recovery) Kubernetes cluster:
Locate the Postgres Database service IP:
kubectl get svc \
  -l application=spilo,cluster-name=msr-postgres,spilo-role=master \
  -o jsonpath={.items..spec.clusterIP} \
  -n <postgres or msr4 namespace>
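For convenience, you can capture the service IP in a shell variable and reuse it in the helm upgrade command in the next step; the variable name is illustrative:

POSTGRES_IP=$(kubectl get svc \
  -l application=spilo,cluster-name=msr-postgres,spilo-role=master \
  -o jsonpath={.items..spec.clusterIP} \
  -n <postgres or msr4 namespace>)
echo ${POSTGRES_IP}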
Reconfigure MSR 4:
Note
Do not include the URL port in expose.tls.auto.commonName and expose.ingress.hosts.core if either is configured. Use only the IP or DNS name.

helm upgrade <MSR4 Helm deployment name> \
  oci://registry.mirantis.com/harbor/helm/msr \
  --debug \
  --set externalURL=https://<Restored MSR4 URL> \
  -n <MSR4 namespace> \
  --reuse-values \
  --set expose.tls.auto.commonName=<Restored MSR4 URL> \
  --set expose.ingress.hosts.core=<Restored MSR4 URL> \
  --set database.external.host=<Postgres Database's service IP>
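After the upgrade, you can verify that the Helm release was updated and that the MSR 4 pods are running; the release and namespace names are placeholders:

helm status <MSR4 Helm deployment name> -n <MSR4 namespace>
kubectl -n <MSR4 namespace> get pods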