Internet Small Computer System Interface (iSCSI) is an IP-based standard that provides block-level access to storage devices. iSCSI takes requests from clients and fulfills these requests on remote SCSI devices. iSCSI support in MKE enables Kubernetes workloads to consume persistent storage from iSCSI targets.
The iSCSI initiator is any client that consumes storage and sends iSCSI
commands. In an MKE cluster, the iSCSI initiator must be installed and
running on every node where pods can be scheduled. Configuration, target
discovery, and login/logout to a target are primarily performed by two
components: iscsid (the service daemon) and
iscsiadm (the CLI tool).
These two components are typically packaged as part of
open-iscsi on Debian-based systems and
iscsi-initiator-utils on RHEL/CentOS/Fedora systems.
iscsid is the iSCSI initiator daemon and implements the control path
of the iSCSI protocol. It communicates with
iscsiadm and the iSCSI kernel modules.
iscsiadm is a CLI tool that performs discovery, login and logout to iSCSI
targets, session management, and access and management of the
open-iscsi database.
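For illustration, the following commands show typical iscsiadm usage for target discovery, login, and session inspection. The portal address and IQN are placeholders; substitute values from your environment. In an MKE cluster these operations are normally driven by the kubelet, so running them manually is useful mainly for verification and troubleshooting.

# Discover targets exposed by a portal
sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.100:3260

# Log in to a discovered target
sudo iscsiadm -m node -T iqn.2017-10.local.example.server:disk1 -p 192.0.2.100:3260 --login

# List active iSCSI sessions
sudo iscsiadm -m session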
The iSCSI Target is any server that shares storage and receives iSCSI commands from an initiator.
iSCSI kernel modules implement the data path. The most common modules used
across Linux distributions are scsi_transport_iscsi.ko, libiscsi.ko, and
iscsi_tcp.ko. These modules must be loaded on the host for
proper functioning of the iSCSI initiator.
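As a quick check, you can verify which iSCSI modules are loaded and load the TCP transport manually if it is missing:

# List currently loaded iSCSI modules
lsmod | grep -E 'iscsi_tcp|libiscsi|scsi_transport_iscsi'

# Load the iSCSI TCP transport module if absent
sudo modprobe iscsi_tcp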
Prerequisites¶
Basic Kubernetes and iSCSI knowledge is assumed.
iSCSI storage provider hardware and software setup is complete. There is no significant demand for RAM or disk when running external provisioners in MKE clusters. For setup information specific to a storage vendor, refer to the vendor documentation.
Kubectl must be set up on clients.
The iSCSI server must be accessible to MKE worker nodes.
Not supported on Windows.
The following steps are required for configuring iSCSI in Kubernetes via MKE:
Configure iSCSI target.
Configure generic iSCSI initiator.
Configure iSCSI target¶
An iSCSI target can run on dedicated/stand-alone hardware, or can be configured in a hyper-converged manner to run alongside container workloads on MKE nodes. To provide access to the storage device, each target is configured with one or more logical unit numbers (LUNs).
iSCSI targets are specific to the storage vendor. Refer to the vendor documentation for setup instructions, including applicable RAM and disk space requirements, and for details on exposing targets to the MKE cluster.
Exposing iSCSI targets to the MKE cluster involves the following steps:
The target is configured with client IQNs, if required for access control.
Challenge-Handshake Authentication Protocol (CHAP) secrets must be configured for authentication.
Each iSCSI LUN must be accessible by all nodes in the cluster. Configure the iSCSI service to expose storage as an iSCSI LUN to all nodes in the cluster. This can be done by allowing all MKE nodes, and essentially their IQNs, to be part of the target’s ACL list.
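As an illustrative sketch only: on a Linux LIO target managed with targetcli, adding node IQNs to a target's ACL might look like the following. The target and initiator IQNs are placeholders, and vendor-specific arrays use their own tooling instead.

# Add each MKE node initiator IQN to the target ACL (LIO/targetcli)
sudo targetcli /iscsi/iqn.2019-01.com.example:target1/tpg1/acls create iqn.2019-01.com.example:node1
sudo targetcli /iscsi/iqn.2019-01.com.example:target1/tpg1/acls create iqn.2019-01.com.example:node2

# Persist the configuration
sudo targetcli saveconfig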
Configure generic iSCSI initiator¶
Every Linux distribution packages the iSCSI initiator software in a particular way. Follow the instructions specific to the storage provider, using the following steps as a guideline.
First, prepare all MKE nodes by installing OS-specific iSCSI packages and
loading the necessary iSCSI kernel modules. In the following examples,
scsi_transport_iscsi.ko and libiscsi.ko are pre-loaded by the
Linux distribution, while the
iscsi_tcp kernel module must be loaded with a separate command.
For CentOS/Red Hat systems:
sudo yum install -y iscsi-initiator-utils
sudo modprobe iscsi_tcp
For Ubuntu systems:
sudo apt install open-iscsi
sudo modprobe iscsi_tcp
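After installing the packages, confirm that the iscsid service is enabled and running. This is a generic systemd check; the unit name can vary slightly by distribution.

sudo systemctl enable --now iscsid
systemctl status iscsid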
Next, set up MKE nodes as iSCSI initiators. Configure initiator names for each node as follows:
sudo sh -c 'echo "InitiatorName=iqn.<2019-01.com.example>:<uniqueID>" > /etc/iscsi/initiatorname.iscsi'
sudo systemctl restart iscsid
The IQN must be in the following format: iqn.YYYY-MM.<reversed domain name>:<unique identifier>.
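To confirm the initiator name that a node will present to targets, read the configuration back:

cat /etc/iscsi/initiatorname.iscsi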
Update the MKE configuration file with the following options:
--storage-iscsi=true to enable iSCSI-based PVs in Kubernetes.
--iscsiadm-path=<path> to specify the absolute path of the iscsiadm binary on the host. The default value is /usr/sbin/iscsiadm.
--iscsidb-path=<path> to specify the path of the iSCSI database on the host. The default value is /etc/iscsi.
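As a hypothetical sketch only, the corresponding settings in the TOML configuration file might look like the following. The exact key names and section depend on your MKE version; verify them against the MKE configuration file reference before use.

[cluster_config]
  # Hypothetical keys mirroring the options above; confirm against your MKE version.
  storage-iscsi = true
  iscsiadm-path = "/usr/sbin/iscsiadm"
  iscsidb-path = "/etc/iscsi"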
In-tree iSCSI volumes¶
The Kubernetes in-tree iSCSI plugin only supports static provisioning. For static provisioning:
You must ensure the desired iSCSI LUNs are pre-provisioned in the iSCSI targets.
You must create iSCSI PV objects, which correspond to the pre-provisioned LUNs, with the appropriate iSCSI configuration.
As PVCs are created to consume storage, the iSCSI PVs bind to the PVCs and satisfy the request for persistent storage.
To configure and create a PersistentVolume object:
Create a YAML file named pv-iscsi.yml for the PersistentVolume object:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 192.0.2.100:3260
    iqn: iqn.2017-10.local.example.server:disk1
    lun: 0
    fsType: 'ext4'
    readOnly: false
Replace the following values with information appropriate for your environment:
12Gi with the size of the storage available.
192.0.2.100:3260 with the IP address and port number of the iSCSI target in your environment. Refer to the storage provider documentation for port information.
iqn.2017-10.local.example.server:disk1 with the IQN of the iSCSI target that exposes the LUN. The IQN must be in the format iqn.YYYY-MM.<reversed domain name>:<unique identifier>. Note that each MKE worker node, acting as an iSCSI initiator, must also have its own unique IQN, as configured in the previous section.
Create the PersistentVolume using your YAML file by running the following command on the master node:
kubectl create -f pv-iscsi.yml
persistentvolume/iscsi-pv created
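To confirm that the volume is registered and available for binding, list the PVs. The output should resemble the following (abbreviated):

$ kubectl get pv iscsi-pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      ...
iscsi-pv   12Gi       RWO            Retain           Available   ...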
External provisioner and Kubernetes objects¶
An external provisioner is a piece of software running out of process from Kubernetes that is responsible for creating and deleting PVs. External provisioners monitor the Kubernetes API server for PV claims and create PVs accordingly.
When using an external provisioner, you must perform the following additional steps:
Configure external provisioning based on your storage provider. Refer to your storage provider documentation for deployment information.
Define storage classes. Refer to your storage provider dynamic provisioning documentation for configuration information.
Define PVC and Pod. When you define a PVC to use the storage class, a PV is created and bound.
Start a Pod using the PVC that you defined.
Some on-premises storage providers have external provisioners for PV provisioning to backend storage.
CHAP secrets are supported for both iSCSI discovery and session management.
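For reference, a minimal sketch of a CHAP secret for the in-tree iSCSI plugin follows. The user names and passwords are placeholders; the key names belong to the kubernetes.io/iscsi-chap secret type. Reference the secret from the volume definition through secretRef and enable chapAuthDiscovery or chapAuthSession as needed.

apiVersion: v1
kind: Secret
metadata:
  name: chap-secret
type: "kubernetes.io/iscsi-chap"
stringData:
  discovery.sendtargets.auth.username: <discovery-user>
  discovery.sendtargets.auth.password: <discovery-password>
  node.session.auth.username: <session-user>
  node.session.auth.password: <session-password>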
Frequently encountered issues are highlighted in the following list:
Hosts might not have the iSCSI kernel modules loaded. To avoid this, always prepare your MKE worker nodes by installing the iSCSI packages and loading the iSCSI kernel modules prior to installing MKE. If worker nodes were not prepared correctly prior to MKE install, prepare the nodes and restart the ucp-kubelet container for the changes to take effect.
Some hosts have depmod confusion. On some Linux distributions, kernel modules cannot be loaded until the kernel sources are installed and depmod is run. If you experience problems loading kernel modules, make sure you run depmod after kernel module installation.
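A quick remediation for this case:

# Rebuild the module dependency map, then retry loading the module
sudo depmod -a
sudo modprobe iscsi_tcp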
See iSCSI-targetd provisioner for a reference implementation of a target-based external provisioner.
On your client machine with kubectl installed and the configuration specifying the IP address of a master node, perform the following steps:
Create a StorageClass object in a YAML file named iscsi-storageclass.yaml, as shown in the following example:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi-targetd-vg-targetd
provisioner: iscsi-targetd
parameters:
  targetPortal: 172.31.8.88
  iqn: iqn.2019-01.org.iscsi.docker:targetd
  iscsiInterface: default
  volumeGroup: vg-targetd
  initiators: iqn.2019-01.com.example:node1, iqn.2019-01.com.example:node2
  chapAuthDiscovery: "false"
  chapAuthSession: "false"
Use the StorageClass YAML file and run the following command:
$ kubectl apply -f iscsi-storageclass.yaml
storageclass "iscsi-targetd-vg-targetd" created

$ kubectl get sc
NAME                       PROVISIONER     AGE
iscsi-targetd-vg-targetd   iscsi-targetd   30s
Create a PersistentVolumeClaim object in a YAML file named
pvc-iscsi.yml on the master node, open it in an editor, and include the following content:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: iscsi-claim
spec:
  storageClassName: "iscsi-targetd-vg-targetd"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Valid accessModes values for iSCSI include ReadWriteOnce and ReadOnlyMany. You can also change the requested storage size by changing the storage value to a different value.
Note that the scheduler automatically ensures that pods with the same PVC run on the same worker node.
Use the PersistentVolumeClaim YAML file and run the following command on the master node:
kubectl apply -f pvc-iscsi.yml -n $NS
persistentvolumeclaim "iscsi-claim" created
Verify that the PersistentVolume and PersistentVolumeClaim were created successfully and that the
PersistentVolumeClaim is bound to the correct volume:
$ kubectl get pv,pvc

NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
iscsi-claim   Bound    pvc-b9560992-24df-11e9-9f09-0242ac11000e   100Mi      RWO            iscsi-targetd-vg-targetd   1m

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS               REASON   AGE
pvc-b9560992-24df-11e9-9f09-0242ac11000e   100Mi      RWO            Delete           Bound    default/iscsi-claim   iscsi-targetd-vg-targetd            36s
Set up pods to use the PersistentVolumeClaim when binding to the PersistentVolume. Here a ReplicationController is created and used to set up two replica pods running web servers that use the PersistentVolumeClaim to mount the PersistentVolume onto a mountpath containing shared resources.
Create a ReplicationController object in a YAML file named
rc-iscsi.yml and open it in an editor to include the following content:
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-iscsi-test
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
        - name: iscsi
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: iscsi
        persistentVolumeClaim:
          claimName: iscsi-claim
Use the ReplicationController YAML file and run the following command on the master node:
$ kubectl create -f rc-iscsi.yml
replicationcontroller "rc-iscsi-test" created
Verify that the pods were created:
$ kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
rc-iscsi-test-05kdr   1/1     Running   0          9m
rc-iscsi-test-wv4p5   1/1     Running   0          9m
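As an optional sanity check, write a file through one replica and read it back from the other to confirm that both pods mount the same volume. The pod names below come from the sample output above and will differ in your cluster.

$ kubectl exec rc-iscsi-test-05kdr -- sh -c 'echo hello > /usr/share/nginx/html/index.html'
$ kubectl exec rc-iscsi-test-wv4p5 -- cat /usr/share/nginx/html/index.html
hello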