Use vSphere Volumes

The vSphere Storage for Kubernetes driver enables you to satisfy the persistent storage requirements of Kubernetes Pods in vSphere environments. The driver allows you to create a PersistentVolume (PV) on a Virtual Machine File System (VMFS) and use it to manage persistent storage independently of the Pod and VM life cycle. The vSphere Cloud Provider supports volumes, dynamic volume provisioning, PVs, and StorageClasses.

Note

VMFS is the only Kubernetes storage back end offered by vSphere that Mirantis supports.

Prerequisites

Complete the following prerequisite steps prior to configuring vSphere Storage for use with MKE:

  1. Populate vsphere.conf. Refer to Create a Kubernetes Secret for vSphere Container Storage Plug-in for more information. A configuration sketch follows this list.

  2. Set the disk.EnableUUID value on the worker VMs to True (see the govc example after this list).
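
For reference, a minimal vsphere.conf sketch for the vSphere Cloud Provider follows. All values shown, including the vCenter address, credentials, datacenter, datastore, and folder names, are placeholders that you must replace with values from your own environment:

    [Global]
    user = "administrator@vsphere.local"
    password = "<password>"
    port = "443"
    insecure-flag = "1"

    [VirtualCenter "203.0.113.10"]
    datacenters = "Datacenter1"

    [Workspace]
    server = "203.0.113.10"
    datacenter = "Datacenter1"
    default-datastore = "Datastore1"
    folder = "kubernetes"

    [Disk]
    scsicontrollertype = pvscsi

If you manage the worker VMs with the govc CLI, one way to set disk.EnableUUID is shown in the following sketch, which assumes a hypothetical VM inventory path. You can also set the value through the VM advanced configuration settings in the vSphere Client:

    # Set the disk.EnableUUID advanced setting on a worker VM.
    govc vm.change -vm "/Datacenter1/vm/worker-node-1" -e="disk.EnableUUID=TRUE"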

Configure vSphere Storage

  1. Use the --cloud-provider option so that no workloads are scheduled until ucp-kube-controller-manager initializes the kubelet:

    docker container run --rm -it --name ucp \
    -e REGISTRY_USERNAME=$REGISTRY_USERNAME \
    -e REGISTRY_PASSWORD=$REGISTRY_PASSWORD \
    -v /var/run/docker.sock:/var/run/docker.sock \
    "dockereng/ucp:3.1.0-tp2" \
    install \
    --host-address <HOST_ADDR> \
    --admin-username admin \
    --admin-password XXXXXXXX \
    --cloud-provider=vsphere \
    --image-version latest
    
  2. Create a StorageClass with a user-specified disk format. Specifying a datastore is optional; if you do not specify one, the volume is created on the datastore specified in the vSphere configuration file used to initialize the vSphere Cloud Provider. Both options follow, and a sketch of how to apply and verify the StorageClass appears after this list:

    • Create a StorageClass without a user-specified datastore:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: fast
      provisioner: kubernetes.io/vsphere-volume
      parameters:
        diskformat: zeroedthick
      
    • Create a StorageClass with a user-specified datastore:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: fast
      provisioner: kubernetes.io/vsphere-volume
      parameters:
        diskformat: zeroedthick
        datastore: VSANDatastore
      

    The valid values for diskformat are: thin, zeroedthick, and eagerzeroedthick. The default is thin.
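
After completing the two steps above, you can confirm that the cloud provider registered the nodes and that the StorageClass exists. The following is a sketch that assumes kubectl access to the cluster and that you saved the chosen manifest as fast-storageclass.yaml, a hypothetical file name:

    # Create the StorageClass from the saved manifest and confirm that it exists.
    kubectl create -f fast-storageclass.yaml
    kubectl get storageclass fast

    # List each node with its providerID; nodes registered by the vSphere
    # Cloud Provider typically report a providerID prefixed with vsphere://.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'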

Deploy vSphere Volumes

You can now create PVs that deploy volumes, which are attached to hosts and mounted inside Pods. A PersistentVolumeClaim (PVC) is a request for storage resources that is bound to a PV when the requested storage is granted.

Mirantis recommends that you use the StorageClass and PVC resources, as these abstraction layers provide more portability and cross-environmental control over the storage layer.

  1. Create a PVC. Defining a PVC that uses the StorageClass automatically provisions and binds a PV through the vSphere plugin:

    cat << EOF | kubectl create -f -
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: fast-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: fast
    EOF
    
  2. Start a Pod that references the new PVC (a verification sketch follows this step):

    cat << EOF | kubectl create -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: "/var/www/html"
              name: pd
      volumes:
        - name: pd
          persistentVolumeClaim:
            claimName: fast-pvc
    EOF
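
To verify the deployment, you can check that the PVC is bound to a dynamically provisioned PV and that the Pod is running. This is a sketch that assumes the fast-pvc and nginx names from the examples above:

    # The PVC STATUS should report Bound once the volume is provisioned.
    kubectl get pvc fast-pvc

    # The bound PV appears in the cluster-wide PV list.
    kubectl get pv

    # The Pod STATUS should report Running after the volume is attached and mounted.
    kubectl get pod nginx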