Integration of iSCSI Storage with OpenShift | Part-2
Kritik Sachdeva
Technical Support Professional at IBM | RHCA-XII | OpenShift | Ceph | Satellite | 3Scale | Gluster | Ansible | Red Hatter
Hi guys, in this post I will cover the iSCSI server setup, how to automate it with Ansible, and some use cases of integrating block storage with OpenShift.
iSCSI is a protocol that allows us to send SCSI commands over TCP/IP networks. It is most frequently used when we want to access remote storage as if it were physically attached to our own system. In other words, it provides us with a block storage service.
iSCSI has two components: the server side (called the Target) and the client side (called the Initiator), which communicate over port 3260/TCP. Using the iSCSI protocol we can also set up redundant access paths to storage arrays.
Set up the iSCSI Target:
To set up the target, we need to add some configuration elements, and to do so we use the targetcli command, which comes from the targetcli software package.
After this, we need to identify what kind of storage we want to share with the initiators (aka clients). iSCSI supports sharing the following storage types:
- File
- Block Device
- Logical Volume
- Disk Partition
- PSCSI ( physical SCSI pass-through )
- RAM Disk ( really fast, but the data is not persistent )
Then, each of these backstores is exported as a LUN (Logical Unit Number), which is the final element that we share with the initiators. Next, to make the LUNs accessible we must add them to a TPG (Target Portal Group).
In the TPG, we can also add ACLs for the users or initiators allowed to access the LUN. To do so, we have to assign an IQN (iSCSI Qualified Name) to both the target and the initiator. The format of the IQN is:
iqn.YYYY-MM.<domain_in_reverse_order>:<friendly_name>
For the demonstration, the final configuration looks like this:
# Here in my case, I have attached the block device /dev/sdb to share it with the initiators, and the IQNs for the target and the initiator are iqn.2020-12.com.redhat:server and iqn.2020-12.com.redhat:client respectively.
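The commands below are a rough sketch of how this configuration can be built with targetcli in one-shot mode; the backstore name block1 is an arbitrary placeholder, while the device and the IQNs are the ones from the demonstration.
# create a block backstore from /dev/sdb ("block1" is just a label)
targetcli /backstores/block create name=block1 dev=/dev/sdb
# create the target with its IQN (a default TPG, tpg1, is created along with it)
targetcli /iscsi create iqn.2020-12.com.redhat:server
# export the backstore as a LUN under the target's TPG
targetcli /iscsi/iqn.2020-12.com.redhat:server/tpg1/luns create /backstores/block/block1
# restrict access to the client's IQN through an ACL
targetcli /iscsi/iqn.2020-12.com.redhat:server/tpg1/acls create iqn.2020-12.com.redhat:client
# persist the configuration
targetcli saveconfig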
After configuring, we have to start the target service and open the firewall for the iSCSI target service:
systemctl enable --now target.service
firewall-cmd --add-service=iscsi-target --permanent
firewall-cmd --reload
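As an optional sanity check, the target can be discovered from any host that has the iSCSI initiator utilities (iscsiadm) installed; the IP address below is a placeholder for the target server.
iscsiadm --mode discovery --type sendtargets --portal 192.168.122.10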
Set up the iSCSI client:
Here, in our case, the client is the OpenShift pod or container that is going to use this block storage. But first, let's talk about some of the use cases of block storage.
Block storage is best suited for, and most commonly used by, database servers and message servers, where data integrity is required and not just persistence. For a simple application running in a pod where the data only needs to be persistent, a file-based storage type such as NFS (or GlusterFS) is enough.
# To consume a statically provisioned volume in OpenShift, we use a PersistentVolume and a PersistentVolumeClaim, aka PV and PVC.
# PersistentVolume.yaml
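Below is a minimal sketch of what the iSCSI PersistentVolume can look like; the portal address 192.168.122.10:3260, the LUN number 0, the PV name iscsipv and the 10Gi size are placeholder assumptions, while the IQNs are the ones used above.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsipv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    # use the IP address of the target here, not its hostname
    targetPortal: 192.168.122.10:3260
    iqn: iqn.2020-12.com.redhat:server
    lun: 0
    fsType: ext4
    readOnly: false
    # needed when the LUN is protected by an ACL on the target
    initiatorName: iqn.2020-12.com.redhat:client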
# Important Note: Make sure you use the IP address instead of the hostname, otherwise the volume won't mount inside the pod. Also make sure you specify the initiator name (if you are adding ACLs); otherwise set the authentication attribute of the target to zero.
# PersistentVolumeClaim.yaml
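A matching PersistentVolumeClaim sketch; the claim name iscsipvc is the one referenced by the pod below, the 10Gi request is a placeholder, and the empty storageClassName makes the claim bind to the statically created PV instead of asking a storage class for dynamic provisioning.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsipvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi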
# Pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: iscsipod
spec:
  containers:
    - name: iscsi
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: iscsivol
  volumes:
    - name: iscsivol
      persistentVolumeClaim:
        claimName: iscsipvc
The output for the PersistentVolume, the PersistentVolumeClaim, and the Pod would look like the following:
# In the above output, the iSCSI device (here /dev/sdc) is mounted at /var/www/html, as specified in the Pod.yaml file.
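For reference, a possible way to create the objects and verify the mount from the command line (the file names match the headings above):
oc create -f PersistentVolume.yaml
oc create -f PersistentVolumeClaim.yaml
oc create -f Pod.yaml
# the PV and PVC should report the status Bound and the pod should be Running
oc get pv,pvc,pod
# confirm that the iSCSI device is mounted inside the pod
oc exec iscsipod -- df -h /var/www/html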
Now for the last part: how to set up the iSCSI server using Ansible.
(In this setup the client is an OpenShift pod, so we don't configure it with Ansible; for an ordinary Linux client, Ansible could be used on that side as well.)
There is no module available in Ansible that we can use directly to set up the iSCSI target. However, there are a couple of open-source and widely used roles available to do so, and the one I have used is OndrejHome.targetcli.
To use it, we first need to download and install this role. To do that, use the following commands:
# ansible-galaxy role search targetcli
# ansible-galaxy role install OndrejHome.targetcli
# ansible-galaxy role list
Now the Ansible playbook looks like this:
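What follows is a rough sketch of such a playbook, assuming a host group named iscsi_servers and using the targetcli_* modules that ship with the OndrejHome.targetcli role; the module and parameter names are my reading of that role and should be checked against its README, while the device and IQNs are the ones from the demonstration.
---
- name: Configure the iSCSI target with the OndrejHome.targetcli role
  hosts: iscsi_servers
  become: true
  roles:
    - OndrejHome.targetcli
  tasks:
    - name: Create a block backstore from /dev/sdb
      targetcli_backstore:
        backstore_type: block
        backstore_name: block1
        device_path: /dev/sdb
        state: present

    - name: Create the iSCSI target
      targetcli_iscsi:
        wwn: iqn.2020-12.com.redhat:server
        state: present

    - name: Export the backstore as a LUN
      targetcli_iscsi_lun:
        wwn: iqn.2020-12.com.redhat:server
        backstore_type: block
        backstore_name: block1
        state: present

    - name: Allow the client initiator through an ACL
      targetcli_iscsi_acl:
        wwn: iqn.2020-12.com.redhat:server
        initiator_wwn: iqn.2020-12.com.redhat:client
        state: present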
That's all for this post. If you have any doubts, drop me a message and I will try to answer your questions. Thank you!
Link for the code: Click here