Use NFS Storage

You can provide persistent storage for MKE workloads by using NFS storage. When mounted into a running container, an NFS share provides the application with persistent state, as the data is managed outside of the container lifecycle.

Note

The following subjects are out of the scope of this topic:

  • Provisioning an NFS server

  • Exporting an NFS share

  • Using external Kubernetes plugins to dynamically provision NFS shares

There are two different ways to mount existing NFS shares within Kubernetes Pods:

  • Define NFS shares within the Pod definitions. NFS shares are defined manually by each tenant when creating a workload.

  • Define NFS shares as a cluster object through PersistentVolumes, with the cluster object lifecycle handled separately from the workload. This is common for operators who want to define a range of NFS shares for tenants to request and consume.

Define NFS shares in the Pod definition

While defining workloads in Kubernetes manifest files, users can reference the NFS shares that they want to mount within the Pod specification for each Pod. This can be a standalone Pod or it can be wrapped in a higher-level object like a Deployment, DaemonSet, or StatefulSet.

The following example assumes a running MKE cluster and a downloaded client bundle with permission to schedule Pods in a namespace.

  1. Create nfs-in-a-pod.yaml with the following content:

    kind: Pod
    apiVersion: v1
    metadata:
      name: nfs-in-a-pod
    spec:
      containers:
        - name: app
          image: alpine
          volumeMounts:
            - name: nfs-volume
              mountPath: /var/nfs
          command: ["/bin/sh"]
          args: ["-c", "sleep 500000"]
      volumes:
        - name: nfs-volume
          nfs:
            server: nfs.example.com
            path: /share1
    
    • Change the value of mountPath to the location where you want the share to be mounted.

    • Change the value of server to your NFS server.

    • Change the value of path to the relevant share.

  2. Create the Pod:

    kubectl create -f nfs-in-a-pod.yaml
    
  3. Verify that the Pod is created successfully:

    kubectl get pods
    

    Example output:

    NAME                     READY     STATUS      RESTARTS   AGE
    nfs-in-a-pod             1/1       Running     0          6m
    
  4. Access a shell prompt within the container:

    kubectl exec -it nfs-in-a-pod -- sh
    
  5. Verify that the share is correctly mounted by searching for your mount:

    mount | grep nfs.example.com
    

Note

MKE and Kubernetes are unaware of the NFS share because it is defined as part of the Pod specification. As such, when you delete the Pod, the NFS share detaches from the cluster, though the data remains in the NFS share.
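
The Pod specification above can also be wrapped in a higher-level controller. The following is a minimal sketch of a Deployment that mounts the same share inline; the Deployment name and labels are illustrative only:

# Illustrative Deployment that declares the same inline NFS volume as the
# standalone Pod example above.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-in-a-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-example
  template:
    metadata:
      labels:
        app: nfs-example
    spec:
      containers:
        - name: app
          image: alpine
          volumeMounts:
            - name: nfs-volume
              mountPath: /var/nfs
          command: ["/bin/sh"]
          args: ["-c", "sleep 500000"]
      volumes:
        - name: nfs-volume
          nfs:
            server: nfs.example.com
            path: /share1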

Expose NFS shares as a cluster object

This method uses the Kubernetes PersistentVolume (PV) and PersistentVolumeClaim (PVC) objects to manage NFS share lifecycle and access.

You can define multiple shares for a tenant to use within the cluster. The PV is a cluster-wide object, so it can be pre-provisioned. A PVC is a claim by a tenant to use a PV within the tenant namespace.

To create PV objects at the cluster level, you will need a ClusterRoleBinding grant.

Note

The “NFS share lifecycle” refers to granting and removing the end user's ability to consume NFS storage, rather than to the lifecycle of the NFS server.
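
The following is a minimal sketch of the Kubernetes RBAC objects behind the ClusterRoleBinding grant mentioned above: a ClusterRole that allows PersistentVolume management and a ClusterRoleBinding that assigns it to a user. The role, binding, and user names are illustrative only; in MKE, such permissions are typically applied through a grant:

# Illustrative only: cluster-scoped permissions for managing PVs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pv-manager            # illustrative name
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pv-manager-binding    # illustrative name
subjects:
  - kind: User
    name: jane                # hypothetical operator account
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pv-manager
  apiGroup: rbac.authorization.k8s.io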


To define the PersistentVolume at the cluster level:

  1. Create pvwithnfs.yaml with the following content:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: my-nfs-share
    spec:
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Recycle
      nfs:
        server: nfs.example.com
        path: /share1
    
    • The 5Gi storage size is used to match the volume to the tenant claim.

    • The valid accessModes values for an NFS PV are:

      • ReadOnlyMany: the volume can be mounted as read-only by many nodes.

      • ReadWriteOnce: the volume can be mounted as read-write by a single node.

      • ReadWriteMany: the volume can be mounted as read-write by many nodes.

      Kubernetes uses the access mode in the PV definition to match a PV to a claim. Defining and creating a PV does not itself mount the volume. Refer to Access Modes in the official Kubernetes documentation for more information, including any changes to the valid accessModes values.

    • The valid persistentVolumeReclaimPolicy values are:

      • Retain

      • Recycle

      • Delete

      MKE uses the reclaim policy to define what the cluster does after a PV is released from a claim. Refer to Reclaiming in the official Kubernetes documentation for more information, including any changes to the valid persistentVolumeReclaimPolicy values.

    • Change the value of server to your NFS server.

    • Change the value of path to the relevant share.

  2. Create the volume:

    kubectl create -f pvwithnfs.yaml
    
  3. Verify that the volume is created successfully:

    kubectl get pv
    

    Example output:

    NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
    my-nfs-share   5Gi        RWO            Recycle          Available                                      7s
    

To define a PersistentVolumeClaim:

A tenant can now “claim” a PV for use within their workloads by using a Kubernetes PVC. A PVC exists within a namespace and it attempts to match available PVs to the tenant request.

Create myapp-claim.yaml with the following content:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-nfs
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

To deploy this PVC, the tenant must have a RoleBinding that permits the creation of PVCs, as sketched below. If a PV meets the tenant's criteria, Kubernetes binds the PV to the claim. This does not, however, mount the share.
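
The following is a minimal sketch of such a RoleBinding, assuming a hypothetical tenant user named jane and the built-in edit ClusterRole, which includes permission to create PVCs; in MKE, this permission is typically applied through a grant:

# Illustrative only: grants PVC permissions in the default namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: myapp-pvc-access      # illustrative name
  namespace: default
subjects:
  - kind: User
    name: jane                # hypothetical tenant user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                  # built-in role that allows managing PVCs
  apiGroup: rbac.authorization.k8s.io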

  1. Create the PVC:

    kubectl create -f myapp-claim.yaml
    

    Expected output:

    persistentvolumeclaim "myapp-nfs" created
    
  2. Verify that the claim is created successfully:

    kubectl get pvc
    

    Example output:

    NAME        STATUS    VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    myapp-nfs   Bound     my-nfs-share   5Gi        RWO                           2s
    
  3. Verify that the claim is associated with the PV:

    kubectl get pv
    

    Example output:

    NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM              STORAGECLASS   REASON    AGE
    my-nfs-share   5Gi        RWO            Recycle          Bound     default/myapp-nfs                            4m
    

To define a workload:

The final task is to deploy a workload to consume the PVC. The PVC is referenced in the Pod specification, which can be a standalone Pod or can be wrapped in a higher-level object such as a Deployment, DaemonSet, or StatefulSet.

Create myapp-pod.yaml with the following content:

kind: Pod
apiVersion: v1
metadata:
  name: pod-using-nfs
spec:
  containers:
    - name: app
      image: alpine
      volumeMounts:
        - name: data
          mountPath: /var/nfs
      command: ["/bin/sh"]
      args: ["-c", "sleep 500000"]
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: myapp-nfs

Change the value of mountPath to the location where you want the share mounted.

  1. Deploy the Pod:

    kubectl create -f myapp-pod.yaml
    
  2. Verify that the Pod is created successfully:

    kubectl get pod
    

    Example output:

    NAME                     READY     STATUS      RESTARTS   AGE
    pod-using-nfs            1/1       Running     0          1m
    
  3. Access a shell prompt within the container:

    kubectl exec -it pod-using-nfs -- sh
    
  4. Verify that everything is correctly mounted by searching for your mount:

    mount | grep nfs.example.com
    
