Platform operators can provide persistent storage for workloads running on Docker Enterprise and Microsoft Azure by using Azure Disks. Operators can either pre-provision Azure Disks to be consumed by Kubernetes Pods, or use the Azure Kubernetes integration to dynamically provision Azure Disks on demand.
This guide assumes you have already provisioned a UCP environment on Microsoft Azure, and that the cluster was provisioned after meeting all of the prerequisites listed in Install UCP on Azure.
Additionally, this guide uses the Kubernetes command-line tool, kubectl, to provision Kubernetes objects within a UCP cluster. You must therefore download this tool along with a UCP client bundle.
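For example, after downloading a client bundle from the UCP web interface and extracting it, you can load it into your shell and verify connectivity. This sketch assumes the bundle was extracted into the current directory; env.sh is the environment script shipped inside the bundle.
# Load the client bundle so kubectl targets the UCP cluster
$ eval "$(<env.sh)"
# Confirm that kubectl can reach the UCP Kubernetes API
$ kubectl get nodes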
An operator can use existing Azure Disks or manually provision new ones to provide persistent storage for Kubernetes Pods. Azure Disks can be manually provisioned in the Azure Portal, with ARM templates, or with the Azure CLI. The following example uses the Azure CLI to manually provision an Azure Disk.
$ RG=myresourcegroup
$ az disk create \
--resource-group $RG \
--name k8s_volume_1 \
--size-gb 20 \
--query id \
--output tsv
The Azure CLI command in the previous example returns the Azure ID of the Azure Disk object. If you provision Azure resources using an alternative method, make sure you retrieve the Azure ID of the Azure Disk, because it is needed in a later step.
/subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Compute/disks/<diskname>
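If you need to look up this ID again for a disk that already exists, one option is to query it with the Azure CLI. The following sketch assumes the resource group and disk name used in the earlier example.
# Retrieve the Azure ID of an existing managed disk
$ az disk show \
    --resource-group $RG \
    --name k8s_volume_1 \
    --query id \
    --output tsv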
You can now create Kubernetes objects that refer to this Azure Disk. The following example uses a Kubernetes Pod; however, the same Azure Disk syntax can be used for DaemonSets, Deployments, and StatefulSets. In the following example, the Azure Disk name and ID reflect the manually created Azure Disk.
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: mypod-azuredisk
spec:
  containers:
    - image: nginx
      name: mypod
      volumeMounts:
        - name: mystorage
          mountPath: /data
  volumes:
    - name: mystorage
      azureDisk:
        kind: Managed
        diskName: k8s_volume_1
        diskURI: /subscriptions/<subscriptionID>/resourceGroups/<resourcegroup>/providers/Microsoft.Compute/disks/<diskname>
EOF
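Once the Pod has started, you can verify that the Azure Disk was attached and mounted. A minimal check, assuming the Pod name used above:
# Check that the Pod scheduled and is running
$ kubectl get pod mypod-azuredisk
# Confirm the Azure Disk is mounted at /data inside the container
$ kubectl exec mypod-azuredisk -- df -h /data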
Kubernetes can dynamically provision Azure Disks using the Azure Kubernetes integration, which was configured when UCP was installed. For Kubernetes to determine which APIs to use when provisioning storage, you must create Kubernetes Storage Classes specific to each storage backend.
In Azure, there are two different Azure Disk types that can be consumed by Kubernetes: Azure Disk Standard Volumes and Azure Disk Premium Volumes.
Depending on your use case, you can deploy one or both of the Azure Disk Storage Classes (Standard and Premium).
To create a Standard Storage Class:
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: Managed
EOF
To create a Premium Storage Class:
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: premium
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
EOF
To determine which Storage Classes have been provisioned:
$ kubectl get storageclasses
NAME       PROVISIONER                AGE
premium    kubernetes.io/azure-disk   1m
standard   kubernetes.io/azure-disk   1m
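Optionally, you can mark one of these Storage Classes as the cluster default, so that Persistent Volume Claims that omit a storageClassName still provision Azure Disks. This uses the standard Kubernetes default-class annotation; the class name below assumes the Standard class created above.
# Mark the "standard" Storage Class as the cluster default
$ kubectl patch storageclass standard \
    --patch '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'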
After you create a Storage Class, you can use Kubernetes objects to dynamically provision Azure Disks. This is done using Kubernetes Persistent Volume Claims.
The following example uses the standard storage class and creates a 5 GiB Azure Disk. Alter these values to fit your use case.
$ cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: azure-disk-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF
At this point, you should see a new Persistent Volume Claim and Persistent Volume inside of Kubernetes. You should also see a new Azure Disk created in the Azure Portal.
$ kubectl get persistentvolumeclaim
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
azure-disk-pvc   Bound    pvc-587deeb6-6ad6-11e9-9509-0242ac11000b   5Gi        RWO            standard       1m

$ kubectl get persistentvolume
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
pvc-587deeb6-6ad6-11e9-9509-0242ac11000b   5Gi        RWO            Delete           Bound    default/azure-disk-pvc   standard                3m
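You can also confirm the new disk from the Azure CLI instead of the Azure Portal. Note that dynamically provisioned disks are created in the resource group that the Azure Kubernetes integration was configured with, which is not necessarily the $RG value from the earlier example; adjust the resource group accordingly.
# List managed disks; dynamically provisioned disks are named after the
# Persistent Volume (kubernetes-dynamic-pvc-...)
$ az disk list --resource-group $RG --query "[].name" --output tsv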
Now that a Kubernetes Persistent Volume has been created, you can mount it into a Kubernetes Pod. The disk can be consumed by any Kubernetes object type, including a Deployment, DaemonSet, or StatefulSet. However, the following example simply mounts the Persistent Volume Claim into a standalone Pod.
$ cat <<EOF | kubectl create -f -
kind: Pod
apiVersion: v1
metadata:
  name: mypod-dynamic-azuredisk
spec:
  containers:
    - name: mypod
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: storage
  volumes:
    - name: storage
      persistentVolumeClaim:
        claimName: azure-disk-pvc
EOF
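To confirm that the dynamically provisioned disk is writable, you can create a file through the Pod and read it back. A minimal check, assuming the Pod above is running:
# Write a test page onto the mounted Azure Disk
$ kubectl exec mypod-dynamic-azuredisk -- sh -c 'echo "hello from azure disk" > /usr/share/nginx/html/index.html'
# Read the file back from the volume
$ kubectl exec mypod-dynamic-azuredisk -- cat /usr/share/nginx/html/index.html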
In Azure, there is a limit to the number of data disks that can be attached to each virtual machine; these limits are documented in Azure Virtual Machine Sizes. Kubernetes is aware of these restrictions and prevents Pods from being scheduled on nodes that have reached their maximum Azure Disk capacity.
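You can check the data disk limit for a given VM size with the Azure CLI. The region and size name below are examples; substitute your own.
# maxDataDiskCount reports how many data disks the VM size supports
$ az vm list-sizes \
    --location westeurope \
    --query "[?name=='Standard_DS2_v2'].maxDataDiskCount" \
    --output tsv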
This can be seen when a Pod is stuck in the ContainerCreating stage:
$ kubectl get pods
NAME               READY   STATUS              RESTARTS   AGE
mypod-azure-disk   0/1     ContainerCreating   0          4m
Describing the Pod displays troubleshooting events, showing that the node has reached its data disk capacity:
$ kubectl describe pods mypod-azure-disk
<...>
Warning FailedAttachVolume 7s (x11 over 6m) attachdetach-controller AttachVolume.Attach failed for volume "pvc-6b09dae3-6ad6-11e9-9509-0242ac11000b" : Attach volume "kubernetes-dynamic-pvc-6b09dae3-6ad6-11e9-9509-0242ac11000b" to instance "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/worker-03" failed with compute.VirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=409 -- Original Error: failed request: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="The maximum number of data disks allowed to be attached to a VM of this size is 4." Target="dataDisks"
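One way to recover, sketched below, is to prevent further scheduling onto the saturated node and let the scheduler place the Pod on a node with free data disk slots. The node name comes from the error message above; adapt it to your cluster.
# Prevent new Pods from being scheduled onto the node that has no free data disk slots
$ kubectl cordon worker-03
# Delete the stuck Pod and recreate it (or let its controller recreate it)
# so it is scheduled onto another node
$ kubectl delete pod mypod-azure-disk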