Configuring iSCSI

Internet Small Computer System Interface (iSCSI) is an IP-based standard that provides block-level access to storage devices. iSCSI takes requests from clients and fulfills these requests on remote SCSI devices. iSCSI support in MKE enables Kubernetes workloads to consume persistent storage from iSCSI targets.

iSCSI components

The iSCSI Initiator is any client that consumes storage and sends iSCSI commands. In an MKE cluster, the iSCSI initiator must be installed and running on every node where pods can be scheduled. Configuration, target discovery, and login/logout to a target are primarily performed by two software components: iscsid (service) and iscsiadm (CLI tool).

These two components are typically packaged as part of open-iscsi on Debian systems and iscsi-initiator-utils on RHEL/CentOS/Fedora systems.

  • iscsid is the iSCSI initiator daemon and implements the control path of the iSCSI protocol. It communicates with iscsiadm and kernel modules.

  • iscsiadm is a CLI tool that allows discovery of and login to iSCSI targets, session management, and access and management of the open-iscsi database. A brief usage sketch follows this list.
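
For illustration, a typical discovery-and-login flow with iscsiadm looks like the following; the portal address and target IQN are placeholders for values from your environment:

# Discover targets available at a portal:
sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.10:3260

# Log in to a discovered target:
sudo iscsiadm -m node -T iqn.2017-10.local.example.server:disk1 -p 192.0.2.10:3260 --login

# List active iSCSI sessions:
sudo iscsiadm -m session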

The iSCSI Target is any server that shares storage and receives iSCSI commands from an initiator.


iSCSI kernel modules implement the data path. The most common modules used across Linux distributions are scsi_transport_iscsi.ko, libiscsi.ko, and iscsi_tcp.ko. These modules must be loaded on the host for the iSCSI initiator to function properly.
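
To confirm that the modules are available on a node, check the loaded module list and load any that are missing, for example:

# Check which iSCSI modules are currently loaded:
lsmod | grep -E 'iscsi_tcp|libiscsi|scsi_transport_iscsi'

# Load the TCP transport module if it is missing:
sudo modprobe iscsi_tcp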


Prerequisites

  • Basic Kubernetes and iSCSI knowledge is assumed.

  • iSCSI storage provider hardware and software setup is complete. Running external provisioners in MKE clusters places no significant demand on RAM or disk. For setup information specific to a storage vendor, refer to the vendor documentation.

  • Kubectl must be set up on clients.

  • The iSCSI server must be accessible to MKE worker nodes.


Limitations

  • Not supported on Windows.


The following steps are required for configuring iSCSI in Kubernetes via MKE:

  1. Configure iSCSI target.

  2. Configure generic iSCSI initiator.

  3. Configure MKE.

Configure iSCSI target

An iSCSI target can run on dedicated/stand-alone hardware, or can be configured in a hyper-converged manner to run alongside container workloads on MKE nodes. To provide access to the storage device, each target is configured with one or more logical unit numbers (LUNs).

iSCSI targets are specific to the storage vendor. Refer to the vendor documentation for setup instructions, including applicable RAM and disk space requirements, and for how to expose targets to the MKE cluster.

Exposing iSCSI targets to the MKE cluster involves the following steps:

  1. Configure the target with client IQNs, if required for access control.

  2. Configure Challenge-Handshake Authentication Protocol (CHAP) secrets for authentication.

  3. Make each iSCSI LUN accessible to all nodes in the cluster. Configure the iSCSI service to expose storage as an iSCSI LUN to all nodes in the cluster. You can do this by allowing all MKE nodes, and essentially their IQNs, to be part of the target's ACL list. A sketch of this workflow appears after this list.
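
As a purely illustrative sketch, the following targetcli commands create a block-backed LUN and add a worker node's IQN to the target ACL on a Linux LIO-based target; the device path, target IQN, and initiator IQN are placeholders, and vendor tooling will differ:

# Create a block backstore from a local disk:
sudo targetcli /backstores/block create name=disk1 dev=/dev/sdb

# Create the iSCSI target:
sudo targetcli /iscsi create iqn.2017-10.local.example.server:disk1

# Export the backstore as a LUN on the target:
sudo targetcli /iscsi/iqn.2017-10.local.example.server:disk1/tpg1/luns create /backstores/block/disk1

# Add a worker node's initiator IQN to the target ACL:
sudo targetcli /iscsi/iqn.2017-10.local.example.server:disk1/tpg1/acls create iqn.2017-10.local.example.worker:node1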

Configure generic iSCSI initiator

Every Linux distribution packages the iSCSI initiator software in a particular way. Follow the instructions specific to the storage provider, using the following steps as a guideline.

First, prepare all MKE nodes by installing OS-specific iSCSI packages and loading the necessary iSCSI kernel modules. In the following examples, scsi_transport_iscsi.ko and libiscsi.ko are pre-loaded by the Linux distribution; the iscsi_tcp kernel module must be loaded with a separate command.

For CentOS/Red Hat systems:

sudo yum install -y iscsi-initiator-utils
sudo modprobe iscsi_tcp

For Ubuntu systems:

sudo apt install open-iscsi
sudo modprobe iscsi_tcp

Next, set up MKE nodes as iSCSI initiators. Configure initiator names for each node as follows:

sudo sh -c 'echo "InitiatorName=iqn.<YYYY-MM>.<reversed.domain.name>:<uniqueID>" > /etc/iscsi/initiatorname.iscsi'
sudo systemctl restart iscsid

The IQN must be in the following format:

iqn.YYYY-MM.reversed.domain.name:OptionalIdentifier
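
To generate a unique, correctly formatted IQN on each node, you can use the iscsi-iname utility that ships with the initiator packages (a minimal sketch, assuming the tool is installed at /sbin/iscsi-iname):

# Generate and install a unique initiator name, then restart the daemon:
sudo sh -c 'echo "InitiatorName=$(/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi'
sudo systemctl restart iscsid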

Configure MKE

Update the MKE configuration file with the following options (an illustrative example follows the list):

  • Configure --storage-iscsi=true to enable iSCSI-based PVs in Kubernetes.

  • Configure --iscsiadm-path=<path> to specify the absolute path of the iscsiadm binary on the host. The default value is /usr/sbin/iscsiadm.

  • Configure --iscsidb-path=<path> to specify the path of the iSCSI database on the host. The default value is /etc/iscsi.
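
The following sketch shows one way these options can be supplied when installing MKE; the image tag and the surrounding install flags are assumptions, so adapt them to your MKE version and environment:

# Illustrative only: supply the iSCSI options at MKE install time.
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:3.3.0 install \
  --storage-iscsi=true \
  --iscsiadm-path=/usr/sbin/iscsiadm \
  --iscsidb-path=/etc/iscsi \
  --interactive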

In-tree iSCSI volumes

The Kubernetes in-tree iSCSI plugin only supports static provisioning. For static provisioning:

  1. You must ensure the desired iSCSI LUNs are pre-provisioned in the iSCSI targets.

  2. You must create iSCSI PV objects, which correspond to the pre-provisioned LUNs, with the appropriate iSCSI configuration.

  3. As PVCs are created to consume storage, the iSCSI PVs bind to the PVCs and satisfy the request for persistent storage.

To configure and create a PersistentVolume object:

  1. Create a YAML file for the PersistentVolume object:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: iscsi-pv
    spec:
      capacity:
        storage: 12Gi
      accessModes:
        - ReadWriteOnce
      iscsi:
        targetPortal: <ip-address>:<port>
        iqn: iqn.2017-10.local.example.server:disk1
        lun: 0
        fsType: 'ext4'
        readOnly: false
  2. Replace the following values with information appropriate for your environment:

    • 12Gi with the size of the storage available.

    • <ip-address>:<port> with the IP address and port number of the iSCSI target in your environment. Refer to the storage provider documentation for port information.

    • iqn.2017-10.local.example.server:disk1 with the IQN of the iSCSI target that exposes the LUN. More than one IQN can be specified, but each must be in the following format: iqn.YYYY-MM.reversed.domain.name:OptionalIdentifier.

  3. Create the PersistentVolume using your YAML file by running the following command on the master node:

    kubectl create -f pv-iscsi.yml
    persistentvolume/iscsi-pv created
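
You can verify the result with kubectl; output along the following lines is typical, though the columns vary by Kubernetes version:

    $ kubectl get pv iscsi-pv
    NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   AGE
    iscsi-pv   12Gi       RWO            Retain           Available                          5s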

External provisioner and Kubernetes objects

An external provisioner is a piece of software that runs out of process from Kubernetes and is responsible for creating and deleting PVs. External provisioners monitor the Kubernetes API server for PersistentVolumeClaims and create PVs accordingly.

When using an external provisioner, you must perform the following additional steps:

  1. Configure external provisioning based on your storage provider. Refer to your storage provider documentation for deployment information.

  2. Define storage classes. Refer to your storage provider dynamic provisioning documentation for configuration information.

  3. Define PVC and Pod. When you define a PVC to use the storage class, a PV is created and bound.

  4. Start a Pod using the PVC that you defined.


Some on-premises storage providers have external provisioners for PV provisioning to backend storage.


CHAP secrets are supported for both iSCSI discovery and session management.
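
With the in-tree plugin, CHAP credentials are supplied through a Kubernetes secret of type kubernetes.io/iscsi-chap, which the PersistentVolume then references via secretRef. A minimal sketch, with placeholder name and credentials:

kubectl create secret generic iscsi-chap-secret \
  --type=kubernetes.io/iscsi-chap \
  --from-literal=discovery.sendtargets.auth.username=<user> \
  --from-literal=discovery.sendtargets.auth.password=<password> \
  --from-literal=node.session.auth.username=<user> \
  --from-literal=node.session.auth.password=<password>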


Troubleshooting

Frequently encountered issues are highlighted in the following list:

  • Hosts might not have the iSCSI kernel modules loaded. To avoid this, always prepare your MKE worker nodes by installing the iSCSI packages and loading the iSCSI kernel modules prior to installing MKE. If worker nodes were not prepared correctly prior to MKE install, prepare the nodes and restart the ucp-kubelet container for changes to take effect (see the snippet after this list).

  • Some hosts have depmod confusion. On some Linux distributions, the kernel modules cannot be loaded until the kernel sources are installed and depmod is run. If you experience problems loading kernel modules, make sure you run depmod after kernel module installation.
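
A minimal recovery sequence for a node that was prepared only after MKE was installed might look like the following; the container name ucp-kubelet is the one referenced above:

# Rebuild module dependency metadata, then load the iSCSI transport module:
sudo depmod -a
sudo modprobe iscsi_tcp

# Restart the kubelet container so MKE picks up the change:
docker restart ucp-kubelet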


Usage example

  1. See iSCSI-targetd provisioner for a reference external provisioner implementation using a target-based external provisioner.

  2. On your client machine with kubectl installed and the configuration specifying the IP address of a master node, perform the following steps:

    1. Create a StorageClass object in a YAML file named iscsi-storageclass.yaml, as shown in the following example:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: iscsi-targetd-vg-targetd
      provisioner: iscsi-targetd
      parameters:
        iscsiInterface: default
        volumeGroup: vg-targetd
        chapAuthDiscovery: "false"
        chapAuthSession: "false"
    2. Use the StorageClass YAML file and run the following command:

      $ kubectl apply -f iscsi-storageclass.yaml
      storageclass "iscsi-targetd-vg-targetd" created
      $ kubectl get sc
      NAME                       PROVISIONER     AGE
      iscsi-targetd-vg-targetd   iscsi-targetd   30s
    3. Create a PersistentVolumeClaim object in a YAML file named pvc-iscsi.yml on the master node, open it in an editor, and include the following content:

      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: iscsi-claim
      spec:
        storageClassName: "iscsi-targetd-vg-targetd"
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi

      Supported accessModes values for iSCSI include ReadWriteOnce and ReadOnlyMany. You can also change the requested storage size by changing the storage value to a different value.

      Note that the scheduler automatically ensures that pods with the same PVC run on the same worker node.

    4. Apply the PersistentVolumeClaim YAML file by running the following command on the master node:

      kubectl apply -f pvc-iscsi.yml -n $NS
      persistentvolumeclaim "iscsi-claim" created
  3. Verify that the PersistentVolume and PersistentVolumeClaim were created successfully and that the PersistentVolumeClaim is bound to the correct volume:

    $ kubectl get pv,pvc
    NAME          STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
    iscsi-claim   Bound     pvc-b9560992-24df-11e9-9f09-0242ac11000e   100Mi      RWO            iscsi-targetd-vg-targetd   1m

    NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS    CLAIM                 STORAGECLASS               AGE
    pvc-b9560992-24df-11e9-9f09-0242ac11000e   100Mi      RWO            Delete           Bound     default/iscsi-claim   iscsi-targetd-vg-targetd   36s
  4. Set up pods to use the PersistentVolumeClaim when binding to the PersistentVolume. Here, a ReplicationController is created and used to set up two replica pods running web servers that use the PersistentVolumeClaim to mount the PersistentVolume onto a mountpath containing shared resources.

  5. Create a ReplicationController object in a YAML file named rc-iscsi.yml and open it in an editor to include the following content:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: rc-iscsi-test
    spec:
      replicas: 2
      selector:
        app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
            ports:
            - name: nginx
              containerPort: 80
            volumeMounts:
            - name: iscsi
              mountPath: "/usr/share/nginx/html"
          volumes:
          - name: iscsi
            persistentVolumeClaim:
              claimName: iscsi-claim
  6. Use the ReplicationController YAML file and run the following command on the master node:

    $ kubectl create -f rc-iscsi.yml
    replicationcontroller "rc-iscsi-test" created
  7. Verify that the pods were created:

    $ kubectl get pods
    NAME                  READY     STATUS    RESTARTS   AGE
    rc-iscsi-test-05kdr   1/1       Running   0          9m
    rc-iscsi-test-wv4p5   1/1       Running   0          9m
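
To confirm that the replicas share the iSCSI-backed volume, write a file through one pod and read it through the other. The pod names come from the output above, and this assumes both replicas were scheduled onto the same node, as noted earlier:

    $ kubectl exec rc-iscsi-test-05kdr -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
    $ kubectl exec rc-iscsi-test-wv4p5 -- cat /usr/share/nginx/html/index.html
    hello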
