Use Azure Disk Storage

You can provide persistent storage for MKE workloads on Microsoft Azure by using Azure Disk Storage. You can either pre-provision Azure Disk Storage to be consumed by Kubernetes Pods, or you can use the Azure Kubernetes integration to dynamically provision Azure Disks as needed.

This guide assumes that you have already provisioned an MKE cluster on Microsoft Azure and that you have met all of the prerequisites listed in Install MKE on Azure.

To complete the steps in this topic, you must download and configure the client bundle.
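If you have not yet configured the client bundle, the following sketch shows one way to download and load it through the MKE API. The `<mke-host>`, `<username>`, and `<password>` placeholders are assumptions that you must replace with your own values, and the sketch assumes `curl`, `jq`, and `unzip` are installed:

```shell
# Authenticate to the MKE API and retrieve an auth token
# (<mke-host>, <username>, and <password> are placeholders)
AUTHTOKEN=$(curl -sk -d '{"username":"<username>","password":"<password>"}' \
  https://<mke-host>/auth/login | jq -r .auth_token)

# Download and unpack the client bundle
curl -sk -H "Authorization: Bearer $AUTHTOKEN" \
  https://<mke-host>/api/clientbundle -o bundle.zip
unzip bundle.zip -d bundle

# Load the bundle environment so kubectl targets your MKE cluster
cd bundle && source env.sh
```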

Manually provision Azure Disks

You can use existing Azure Disks or manually provision new ones to provide persistent storage for Kubernetes Pods. You can manually provision Azure Disks in the Azure Portal, with ARM templates, or with the Azure CLI. The following example uses the Azure CLI to manually provision an Azure Disk.

  1. Create an environment variable that holds the name of your resource group (myresourcegroup in this example):

    RG=myresourcegroup
    
  2. Provision an Azure Disk:

    az disk create \
    --resource-group $RG \
    --name k8s_volume_1  \
    --size-gb 20 \
    --query id \
    --output tsv
    

    This command returns the Azure ID of the Azure Disk Object.

    Example output:

    /subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Compute/disks/<diskname>
    
  3. Make note of the Azure ID of the Azure Disk Object returned by the previous step.
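Rather than copying the ID by hand, you can, for example, capture it in a shell variable at creation time; this sketch reuses the same flags as the command above:

```shell
# Capture the disk ID at creation time for later use
DISKID=$(az disk create \
  --resource-group $RG \
  --name k8s_volume_1 \
  --size-gb 20 \
  --query id \
  --output tsv)
echo $DISKID
```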

You can now create Kubernetes Objects that refer to this Azure Disk. The following example uses a Kubernetes Pod, though the same Azure Disk syntax can be used for DaemonSets, Deployments, and StatefulSets. In the example, the Azure diskName and diskURI refer to the manually created Azure Disk:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod-azuredisk
spec:
  containers:
  - image: nginx
    name: mypod
    volumeMounts:
      - name: mystorage
        mountPath: /data
  volumes:
      - name: mystorage
        azureDisk:
          kind: Managed
          diskName: k8s_volume_1
          diskURI: /subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Compute/disks/<diskname>
EOF
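To confirm that the Azure Disk is mounted, you can, for example, wait for the Pod to become ready and then check the filesystem at the mount path:

```shell
# Wait for the Pod to become ready, then inspect the mount at /data
kubectl wait --for=condition=Ready pod/mypod-azuredisk --timeout=300s
kubectl exec mypod-azuredisk -- df -h /data
```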

Dynamically provision Azure Disks

Kubernetes can dynamically provision Azure Disks using the Azure Kubernetes integration, configured at the time of your MKE installation. For Kubernetes to determine which APIs to use when provisioning storage, you must create Kubernetes StorageClass objects specific to each storage back end.

There are two different Azure Disk types that can be consumed by Kubernetes: Azure Disk Standard Volumes and Azure Disk Premium Volumes.

Depending on your use case, you can deploy one or both of the Azure Disk storage classes.


To define the Azure Disk storage class:

  1. Create the storage class:

    cat <<EOF | kubectl create -f -
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: standard
    provisioner: kubernetes.io/azure-disk
    parameters:
      storageaccounttype: <disk-type>
      kind: Managed
    EOF
    

    For storageaccounttype, enter Standard_LRS for the standard storage class or Premium_LRS for the premium storage class.

  2. Verify which storage classes have been provisioned:

    kubectl get storageclasses
    

    Example output:

    NAME       PROVISIONER                AGE
    premium    kubernetes.io/azure-disk   1m
    standard   kubernetes.io/azure-disk   1m
    
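For example, the premium storage class shown in the output above can be created by setting storageaccounttype to Premium_LRS; this sketch assumes your worker VM sizes support premium storage:

```shell
# Create a premium storage class backed by Premium_LRS disks
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: premium
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
EOF
```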

To create an Azure Disk with a PersistentVolumeClaim:

After you create a storage class, you can use Kubernetes Objects to dynamically provision Azure Disks. This is done using Kubernetes PersistentVolumeClaims.

The following example uses the standard storage class and creates a 5 GiB Azure Disk. Alter these values to fit your use case.

  1. Create a PersistentVolumeClaim:

    cat <<EOF | kubectl create -f -
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: azure-disk-pvc
    spec:
      storageClassName: standard
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    EOF
    
  2. Verify the creation of the PersistentVolumeClaim:

    kubectl get persistentvolumeclaim
    

    Example output:

    NAME              STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    azure-disk-pvc    Bound     pvc-587deeb6-6ad6-11e9-9509-0242ac11000b   5Gi        RWO            standard       1m
    
  3. Verify the creation of the PersistentVolume:

    kubectl get persistentvolume
    

    Example output:

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                     STORAGECLASS   REASON    AGE
    pvc-587deeb6-6ad6-11e9-9509-0242ac11000b   5Gi        RWO            Delete           Bound     default/azure-disk-pvc    standard                 3m
    
  4. Verify the creation of a new Azure Disk in the Azure Portal.
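Alternatively, you can verify the disk from the Azure CLI. Dynamically provisioned disks are typically named kubernetes-dynamic-pvc-<uid>; this sketch assumes the disks are created in the same resource group ($RG) used earlier in this topic:

```shell
# List dynamically provisioned disks in the resource group
az disk list --resource-group $RG \
  --query "[?contains(name, 'kubernetes-dynamic')].{Name:name, SizeGb:diskSizeGb, State:diskState}" \
  --output table
```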


To attach the new Azure Disk to a Kubernetes Pod:

You can now mount the Kubernetes PersistentVolume into a Kubernetes Pod. The disk can be consumed by any Kubernetes object type, including a Deployment, DaemonSet, or StatefulSet. The following example mounts the PersistentVolume into a standalone Pod.

Attach the new Azure Disk to a Kubernetes Pod:

cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
  name: mypod-dynamic-azuredisk
spec:
  containers:
    - name: mypod
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: azure-disk-pvc
EOF
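To confirm that the claim is mounted and writable, you can, for example, write a file through the Pod and read it back:

```shell
# Write a test file to the mounted volume, then read it back
kubectl wait --for=condition=Ready pod/mypod-dynamic-azuredisk --timeout=300s
kubectl exec mypod-dynamic-azuredisk -- sh -c 'echo "disk test" > /usr/share/nginx/html/index.html'
kubectl exec mypod-dynamic-azuredisk -- cat /usr/share/nginx/html/index.html
```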

Data disk capacity of an Azure Virtual Machine

Azure limits the number of data disks that can be attached to each Virtual Machine; refer to Azure Virtual Machine Sizes for the limits. Kubernetes prevents Pods from deploying on Nodes that have reached their maximum Azure Disk capacity. In such cases, Pods remain stuck in the ContainerCreating status, as demonstrated in the following example:

  1. Review Pods:

    kubectl get pods
    

    Example output:

    NAME                  READY     STATUS              RESTARTS   AGE
    mypod-azure-disk      0/1       ContainerCreating   0          4m
    
  2. Describe the Pod to display troubleshooting logs, which indicate the node has reached its capacity:

    kubectl describe pods mypod-azure-disk
    

    Example output:

    Warning  FailedAttachVolume  7s (x11 over 6m)  attachdetach-controller  \
    AttachVolume.Attach failed for volume "pvc-6b09dae3-6ad6-11e9-9509-0242ac11000b" : \
    Attach volume "kubernetes-dynamic-pvc-6b09dae3-6ad6-11e9-9509-0242ac11000b" to instance \
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/worker-03" \
    failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: \
    StatusCode=409 -- Original Error: failed request: autorest/azure: \
    Service returned an error. Status=<nil> Code="OperationNotAllowed" \
    Message="The maximum number of data disks allowed to be attached to a VM of this size is 4." \
    Target="dataDisks"
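To see how many data disks each Node can attach, you can inspect the attachable-volumes-azure-disk allocatable resource on each Node. This is a sketch; the field is only present when the Azure cloud provider populates it on your cluster:

```shell
# Show the per-Node Azure Disk attach limit reported by the cloud provider
kubectl get nodes -o custom-columns=NAME:.metadata.name,MAX_AZURE_DISKS:.status.allocatable.attachable-volumes-azure-disk
```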
    
