VMware vSphere 6.7 CSI Implementation Guide for MKE, MSR, and MCR

Overview

MKE, MSR, and MCR Solution Guides enable you to integrate our container platform with popular third-party ecosystem solutions for networking, load balancing, storage, logging and monitoring, access management, and more.

This Solution Guide explains how to install the VMware vSphere CSI driver on an MKE 3.2 installation for the Docker Kubernetes Service.

vSphere CSI Driver Overview

The vSphere CSI driver enables customers to address persistent storage requirements for MKE (Kubernetes orchestration) in vSphere environments. Mirantis users can now consume vSphere storage (vSAN, VMFS, NFS) to support persistent storage for the Docker Kubernetes Service.

The vSphere CSI driver is Docker Certified for use with MKE, MSR, and MCR, and is available on Docker Hub.

Prerequisites

  • Installations of MKE, MSR, and MCR (this guide was tested using MKE 3.2.4, MSR 2.7.2, MCR 19.03)

  • VMware vSphere 6.7 Update 3

Installation and Configuration

Change the worker node orchestrator type to Kubernetes.

MKE Configuration
  1. Log in to the MKE cluster web UI using your Docker username and password.

  2. Double-click the node you want to change.

  3. Click the gear icon in the upper right, and change the Orchestrator type to Kubernetes.
  4. Repeat the foregoing steps for all worker nodes.

When you are finished editing the worker nodes, the dashboard displays their Type as Kubernetes.
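
You can also make the same change from the command line. The following is a sketch, assuming the com.docker.ucp.orchestrator.kubernetes node label that MKE (UCP 3.x) uses to select the orchestrator; substitute your own worker node name:

$ docker node update --label-add com.docker.ucp.orchestrator.kubernetes=true <worker-node-name>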

Generate and download an MKE client bundle from the MKE web UI

  1. Click your account name, and then click My Profile.

  2. Click the New Client Bundle button, and then the Generate Client Bundle button.

  3. Locate the generated client bundle archive file and unzip it.

    Note

    The client bundle archive file is downloaded to your browser's configured download folder. If your Docker client machine is not the machine you downloaded it to, move the archive file to the client machine.

    Run the following command from the MKE client command shell to unzip the client bundle archive file.

    $ unzip ucp-bundle-admin.zip
    

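    The unzipped archive typically contains TLS material and environment scripts similar to the following illustrative listing (exact contents vary by MKE version):

    $ ls
    ca.pem  cert.pem  cert.pub  env.cmd  env.ps1  env.sh  key.pem  kube.yml
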
  4. Configure the MKE client command shell.

    Run the following command from the MKE command shell.

    $ eval "$(<env.sh)"
    

  5. Test the MKE client bundle and configuration.

    Run the docker version command from the Docker client command shell. (Download Docker for Mac or Docker for Windows on your laptop, or install Docker on your Linux client.)

    $ docker version --format '{{println .Server.Platform.Name}}Client: {{.Client.Version}}{{range .Server.Components}}{{println}}{{.Name}}: {{.Version}}{{end}}'
    

Kubernetes kubectl command

The Kubernetes kubectl command must be installed on the Docker client machine. Refer to Install and Set Up kubectl to download and install the version of the kubectl command that matches the version of Kubernetes included with the MKE version you are running. You can run the docker version command to display which version of Kubernetes is installed.
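
A minimal sketch for a Linux client follows, assuming Kubernetes v1.14.7 (the version bundled with MKE 3.2, as reported by docker version); the URL is the upstream Kubernetes release download location:

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.7/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
$ kubectl version --short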

Prerequisites for installing the vSphere CSI driver

  1. Get cluster node information

$ kubectl get nodes -o wide

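Illustrative output (abridged; the node names match the taint commands in the following steps, while addresses, ages, and the trailing columns will differ in your environment):

$ kubectl get nodes -o wide
NAME                      STATUS   ROLES    AGE   VERSION            INTERNAL-IP      EXTERNAL-IP   ...
5bd6e49773ee-managers-1   Ready    master   14d   v1.14.7-docker-1   10.156.129.101   <none>        ...
5bd6e49773ee-workers-1    Ready    <none>   14d   v1.14.7-docker-1   10.156.129.102   <none>        ...
5bd6e49773ee-workers-2    Ready    <none>   14d   v1.14.7-docker-1   10.156.129.103   <none>        ...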

  2. Make sure the manager (master) nodes are tainted; two taints are required, as shown below (node-role.kubernetes.io/master and node.cloudprovider.kubernetes.io/uninitialized=true):

$ kubectl taint node 5bd6e49773ee-managers-1 node-role.kubernetes.io/master=:NoSchedule
$ kubectl taint node 5bd6e49773ee-managers-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
  3. Make sure all the worker nodes are tainted with node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule:

$ kubectl taint node 5bd6e49773ee-workers-1 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
$ kubectl taint node 5bd6e49773ee-workers-2 node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule

Verify that the nodes are tainted:

$ kubectl describe nodes | egrep "Taints:|Name:"

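Illustrative output. Because the filter prints only lines containing Name: or Taints:, only the first taint on each node is shown; run kubectl describe node <name> to see all taints (including the manager's uninitialized taint):

$ kubectl describe nodes | egrep "Taints:|Name:"
Name:               5bd6e49773ee-managers-1
Taints:             node-role.kubernetes.io/master=:NoSchedule
Name:               5bd6e49773ee-workers-1
Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
Name:               5bd6e49773ee-workers-2
Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule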

Install vSphere CSI driver

For details on how to install the vSphere CSI driver, refer to the vSphere documentation, Install vSphere Container Storage Interface Driver.

Note

Because MKE does not currently support the cloud-provider option, you must rely on the vSphere Cloud Provider (VCP) instead of the Cloud Controller Manager (CCM) to discover the UUID of each VM.

The uninitialized taint must be removed manually from all nodes after the CSI driver is installed, as shown below.
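
A taint is removed with kubectl by appending a trailing hyphen to the taint key. For example (a sketch; substitute your own node names):

$ kubectl taint node 5bd6e49773ee-managers-1 node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-
$ kubectl taint node 5bd6e49773ee-workers-1 node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-
$ kubectl taint node 5bd6e49773ee-workers-2 node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-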

Certification Test

The fastest way to verify the installation is to run the certification tests at https://github.com/docker/vol-test on Linux hosts.

To install the vol-test tools:

  1. Clone https://github.com/docker/vol-test/

    $ git clone https://github.com/docker/vol-test.git
    $ cd vol-test
    $ cd kubernetes
    
  2. Run the tests as per the readme:

    Refer to the readme for the voltest package to tailor the configuration variables for your environment. The following is the output from a sample test run.

    Note

    Update the storageClassName in testapp.yaml so that it maps to an existing storage class that references an existing storage policy in vSphere (a sketch of such a storage class follows the test output below).

    $ ./voltestkube -podurl http://10.156.129.220:33208
    Pod voltest-0 is Running
    http://10.156.129.220:33208/status
    Reset test data for clean run
    Shutting down container
    Waiting for container restart - we wait up to 10 minutes
    Should be pulling status from http://10.156.129.220:33208/status
    .Get http://10.156.129.220:33208/status: dial tcp 10.156.129.220:33208: connect: connection refused
    .Get http://10.156.129.220:33208/status: dial tcp 10.156.129.220:33208: connect: connection refused
    
    Container restarted successfully, moving on
    Pod node voltest-0 is docker-2
    Pod was running on docker-2
    Shutting down container for forced reschedule
    http error okay here
    Waiting for container rechedule - we wait up to 10 minutes
    ......Container rescheduled successfully, moving on
    Pod is now running on docker-3
    Going into cleanup...
    Cleaning up taint on docker-2
    Test results:
    +-------------------------------------------------------+
    Kubernetes Version: v1.14.7-docker-1        OK
    Test Pod Existence: Found pod voltest-0 in namespace default        OK
    Confirm Running Pod: Pod running        OK
    Initial Textfile Content Confirmation: Textcheck passes as expected     OK
    Initial Binary Content Confirmation: Bincheck passes as expected        OK
    Post-restart Textfile Content Confirmation: Textcheck passes as expected        OK
    Post-restart Binaryfile Content Confirmation: Bincheck passes as expected       OK
    Rescheduled Textfile Content Confirmation: Textcheck passes as expected     OK
    Rescheduled Binaryfile Content Confirmation: Bincheck passes as expected        OK
    All tests passed.
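
If you do not yet have a suitable storage class, the following is a minimal sketch; the class name vsphere-sc and the policy name MyStoragePolicy are illustrative placeholders, while csi.vsphere.vmware.com is the vSphere CSI provisioner name:

$ cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-sc
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "MyStoragePolicy"
EOF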
    

Monitoring and Troubleshooting

Monitor the vSphere CSI node driver pods (one per node) to make sure they are active and running.

$ kubectl -n kube-system get pod -l app=vsphere-csi-node

Example Output:

$ kubectl -n kube-system get pod -l app=vsphere-csi-node
NAME                     READY   STATUS    RESTARTS   AGE
vsphere-csi-node-8wm48   3/3     Running   0          14d
vsphere-csi-node-tstqr   3/3     Running   0          14d
vsphere-csi-node-wnn7r   3/3     Running   0          14d
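
You can check the controller component in the same way. The app=vsphere-csi-controller label and the pod and container names below follow the upstream driver manifests and may vary with the driver version:

$ kubectl -n kube-system get pod -l app=vsphere-csi-controller
$ kubectl -n kube-system logs vsphere-csi-controller-0 -c vsphere-csi-controller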