Configure caches for high availability

To ensure that your MSR cache is always available to users and is highly performant, configure it for high availability.

You will require the following to deploy MSR caches with high availability:

  • Multiple nodes, one for each cache replica

  • A load balancer

  • A shared storage system with read-after-write consistency

For high availability, Mirantis recommends that you configure the replicas to store data in a shared storage system. Otherwise, the MSR cache deployment procedure is the same whether you deploy a single replica or multiple replicas.

When using a shared storage system, once an image layer is cached, any replica is able to serve it to users without having to fetch a new copy from MSR.

MSR caches support the following storage systems:

  • Alibaba Cloud Object Storage Service

  • Amazon S3

  • Azure Blob Storage

  • Google Cloud Storage

  • NFS

  • OpenStack Swift

Note

If you are using NFS as the shared storage system, ensure read-after-write consistency by verifying that the shared directory is exported with the following options:

/dtr-cache *(rw,root_squash,no_wdelay)

In addition, mount the NFS directory on each node where you will deploy an MSR cache replica.
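
For example, assuming the export above is served by a host referred to here as <nfs-server>, and using /mnt/dtr-cache as a hypothetical mount point, the mount command on each node might look like the following:

    sudo mount -t nfs <nfs-server>:/dtr-cache /mnt/dtr-cache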

To configure caches for high availability:

  1. Use SSH to log in to a manager node of the cluster on which you want to deploy the MSR cache. If you are using MKE to manage that cluster, you can also use a client bundle to configure your Docker CLI client to connect to the cluster.

  2. Label each node that is going to run a cache replica:

    docker node update --label-add dtr.cache=true <node-hostname>
    
  3. Create the cache configuration files by following the instructions for deploying a single cache replica. Be sure to adapt the storage object, using the configuration options for the shared storage system of your choice (see the example storage configuration after this procedure).

  4. Deploy a load balancer of your choice to balance requests across your set of cache replicas (see the example load balancer configuration after this procedure).
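
As an illustration of the storage object in step 3, the following sketch assumes Amazon S3 as the shared storage backend. The region, bucket, and credential values are placeholders, and the other backends listed above take their own driver-specific options.

    # Storage section of the cache configuration file, assuming Amazon S3.
    # Replace the placeholder values with your own bucket and credentials.
    storage:
      s3:
        region: us-east-1
        bucket: <bucket-name>
        accesskey: <aws-access-key>
        secretkey: <aws-secret-key>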
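
For the load balancer in step 4, any solution that can distribute TCP connections across the cache replicas is suitable. The following minimal NGINX stream configuration is one example; the replica addresses are placeholders, and port 443 is an assumption that must match the port your cache replicas publish.

    # Minimal NGINX TCP pass-through across the cache replicas (stream module).
    # Replace the placeholder addresses with your cache node IPs or hostnames.
    stream {
        upstream msr_cache_replicas {
            server <replica-1-ip>:443;
            server <replica-2-ip>:443;
            server <replica-3-ip>:443;
        }
        server {
            listen 443;
            proxy_pass msr_cache_replicas;
        }
    }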