Integration of iSCSI Storage with OpenShift | Part-2
#redhat #openshift #container #blockstorage #storage #iSCSI #ansible #automation #devops #linux #centos #rhel


Hi guys, in this post I will cover the iSCSI server setup, how to automate it with Ansible, and the use cases of integrating block storage with OpenShift.

iSCSI is a protocol that allows us to use SCSI commands over TCP/IP networks. It is most frequently used to access remote storage as if it were physically attached to the local system. In other words, it provides a block storage service.

iSCSI has two components: the server side (called the Target) and the client side (called the Initiator), which communicate over port 3260/TCP. Using the iSCSI protocol we can also set up redundant access to storage arrays.

Set up the iSCSI Target:

To set up the target, we need to add some configuration elements, and to do so we use the targetcli command, which comes from the software package of the same name.

After this, we need to identify what kind of storage we want to share with the initiators (aka clients). iSCSI provides the sharing of the following storage types:

  1. File
  2. Block Device
  3. Logical Volume
  4. Partition Disk
  5. PSCSI or Physical SCSI
  6. RAM Disk (really fast, but they do not provide persistent data)

Then, each of these backstores is exported as a LUN (Logical Unit Number), which is the final element we share with the initiators. Next, to make them accessible we must add them to a TPG (Target Portal Group).

In the TPG, we can also add ACLs for the users or initiators allowed to access the LUN. To do so, we assign an IQN (iSCSI Qualified Name) to both the target and the initiator. The format of the IQN is:

iqn.yyyy-mm.<domain_in_reverse_order>:<friendly_name>

For the demonstration, the final picture of the configuration will look like this:

[Image: final targetcli configuration]

# Here in my case, I have attached the block device /dev/sdb to share with the initiators, giving the target and initiator the IQNs iqn.2020-12.com.redhat:server and iqn.2020-12.com.redhat:client respectively.
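The configuration above can be sketched as non-interactive targetcli invocations (run as root; the /dev/sdb device, the backstore name block1, and both IQNs follow the example above and should be adjusted to your setup):

```shell
# Create a block backstore from the local disk /dev/sdb
targetcli /backstores/block create name=block1 dev=/dev/sdb

# Create the iSCSI target with its IQN
targetcli /iscsi create iqn.2020-12.com.redhat:server

# Export the backstore as a LUN under the target's TPG
targetcli /iscsi/iqn.2020-12.com.redhat:server/tpg1/luns create /backstores/block/block1

# Allow the client's IQN via an ACL
targetcli /iscsi/iqn.2020-12.com.redhat:server/tpg1/acls create iqn.2020-12.com.redhat:client

# Persist the configuration
targetcli saveconfig
```

The same steps can also be done interactively inside the targetcli shell.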

After configuring, we have to start the service and open the firewall (note the --reload so the permanent rule takes effect immediately):

systemctl enable --now target.service
firewall-cmd --add-service=iscsi-target --permanent
firewall-cmd --reload

Set up the iSCSI client:

Here, in our case, the client is an OpenShift pod or container that is going to use this block storage. But first, let's talk about some use cases for block storage.

Block storage is best suited to, and most commonly used by, database and message servers, where data integrity matters more than merely having persistent storage. For a simple application on a pod where data only needs to persist, file-based storage such as NFS (or GlusterFS) is sufficient.

# To provision a static persistent volume in OpenShift, we use a PersistentVolume and a PersistentVolumeClaim, aka PV and PVC.

# PersistentVolume.yaml

[Image: PersistentVolume.yaml]

# Important note: make sure you use the IP address instead of the hostname, otherwise the volume won't mount inside the pod. Also, specify the initiator name if you are adding ACLs; otherwise set the authentication attribute parameter on the target to zero.
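For reference, the PersistentVolume could look like the following sketch (the PV name, portal IP, LUN number, and capacity are illustrative assumptions; the IQN matches the target defined earlier):

```yaml
# PersistentVolume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsipv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 192.168.1.10:3260   # use the IP address, not the hostname
    iqn: iqn.2020-12.com.redhat:server
    lun: 0
    fsType: ext4
    readOnly: false
```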

# PersistentVolumeClaim.yaml

[Image: PersistentVolumeClaim.yaml]
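A matching claim could look like this sketch (the claim name iscsipvc is the one referenced by the Pod manifest; the requested size is an illustrative assumption):

```yaml
# PersistentVolumeClaim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsipvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```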

# Pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: iscsipod
spec:
  containers:
  - name: iscsi
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: iscsivol
  volumes:
  - name: iscsivol
    persistentVolumeClaim:
      claimName: iscsipvc

The output would look like the following:

PersistentVolume

[Image: PersistentVolume output]

PersistentVolumeClaim

[Image: PersistentVolumeClaim output]

Pod

[Image: Pod output]

# In the above image, /dev/sdc is mounted at /var/www/html as specified in the Pod.yaml file.
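To reproduce this output, the manifests can be created and the mount verified with oc (assuming a logged-in OpenShift session with sufficient privileges; file and object names follow the example above):

```shell
# Create the PV, PVC, and pod from the manifests
oc create -f PersistentVolume.yaml
oc create -f PersistentVolumeClaim.yaml
oc create -f Pod.yaml

# Check that the PV is Bound to the PVC and the pod is Running
oc get pv,pvc,pod

# Confirm the iSCSI device is mounted inside the pod
oc exec iscsipod -- df -h /var/www/html
```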

Now for the last part: how to set up the iSCSI server using Ansible.

(For the OpenShift client in this setup we can't use Ansible, but for a regular Linux client we could use Ansible as well.)

There is no module available in Ansible that we can use directly to set up an iSCSI target. However, there is a widely used open-source role that does it, and the one I have used is OndrejHome.targetcli.

To use it, we first need to install (download) the role:

# ansible-galaxy role search targetcli
# ansible-galaxy role install OndrejHome.targetcli
# ansible-galaxy role list

Now the Ansible playbook looks like this:

[Image: Ansible playbook]
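A minimal playbook applying the role could look like the sketch below. The host group name is an illustrative assumption, and I have deliberately not guessed the role's variable names: consult the OndrejHome.targetcli README for the exact variables that describe backstores, LUNs, and ACLs.

```yaml
# playbook.yml - minimal sketch; set the role's variables per its README
- hosts: iscsi_target
  become: true
  roles:
    - role: OndrejHome.targetcli
```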

That's all for this post. If you have any doubts, drop me a message and I will try to answer your question. Thank you!

Link for the code: Click here
