Storage Integration with OpenShift
#storage #nfs #glusterfs #ubuntu #openshift #docker #kubernetes #minishift #static_provisioning #redhat #servers


In this post, I will show you how to integrate some common storage solutions with OpenShift. For the demonstration, I will use OpenShift v3.11 and the static method of provisioning storage in OpenShift.

I will take up three types of storage solutions as examples for now: GlusterFS, NFS, and iSCSI. In the next post, I will try to add Ceph and other cloud storage integrations as well.

GlusterFS

GlusterFS is an open-source, enterprise-grade storage solution that provides shared file system and object storage services. It exposes a POSIX-compliant file system, which removes the need to rewrite applications to use the storage service.

It is based on the concept of a decentralized cluster, which ultimately provides the benefit of highly available storage services.

Some of the use cases of this storage solution are:

  1. One of the most common use cases is nearline and archival storage, i.e. storing big data
  2. Another interesting use case is high-performance computing, where fast retrieval of data is needed

For setting up the Gluster cluster, I have used Ubuntu 20.04 on the AWS cloud as an EC2 instance. For the purpose of the demonstration, I have used a single-node cluster.

On Ubuntu Server 20.04, follow these steps and commands to set up GlusterFS:

If you want to build a multi-node cluster, run the following commands on all GlusterFS cluster nodes, and make sure /etc/hosts has an entry for every GlusterFS node, like the following:

. . .

127.0.0.1       localhost

first_ip_address gluster0.example.com gluster0

second_ip_address gluster1.example.com gluster1

third_ip_address gluster2.example.com gluster2


. . .

But if you have created this cluster on AWS, the IPs must be public IPs. Alternatively, if you want only one node to be reachable publicly, add an entry with that node's public IP and use private IPs for the rest (but then you lose the high-availability benefit of GlusterFS).

sudo add-apt-repository ppa:gluster/glusterfs-7

sudo apt update

sudo apt install glusterfs-server

sudo systemctl start glusterd.service

sudo systemctl enable glusterd.service

sudo systemctl status glusterd.service


If a firewall is enabled, use the following commands to allow inter-node communication (if using AWS, add rules to the security groups for port 24007 instead):

sudo ufw allow from gluster1_ip_address to any port 24007

sudo ufw allow from gluster2_ip_address to any port 24007

.
.
.

sudo ufw allow from client_ip_address to any port 24007


# Executed from the gluster1 system for all the other nodes in the cluster


sudo gluster peer probe gluster2 

sudo gluster peer probe gluster3

.
.
.

To see the peer status, use the command:

sudo gluster peer status

In this post, I have created a distributed GlusterFS volume:

sudo gluster volume create volume_name <public-ip1>:/path/to/data/directory <public-ip2>:/path/to/data/directory force

sudo gluster volume start volume_name

sudo gluster volume status

## Make sure you use the public IPs, otherwise the client won't be able to access the volumes

Now add a firewall rule for the client (if using AWS, allow port 49152):

sudo ufw allow from client_ip_address to any port 49152

To configure the glusterfs client:

sudo apt install glusterfs-client

sudo mkdir /storage-pool

sudo mount -t glusterfs public-ip:/volume_name /storage-pool

In our case, the client is OpenShift, but run the above commands on a test client first to confirm the mount works.

Now, to integrate GlusterFS with OpenShift, proceed with the following steps. First, create an Endpoints object that points to the GlusterFS nodes:

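A minimal sketch of such an Endpoints object; the name glusterfs-cluster and the IPs are placeholders to adapt:

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster          # referenced by the persistent volume below
subsets:
  - addresses:
      - ip: <gluster-node1-public-ip>
      - ip: <gluster-node2-public-ip>
    ports:
      - port: 1                    # any valid port number; the value itself is not used for the mount

A headless Service with the same name can also be created so the endpoints persist, but it is not strictly required for the mount to work.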

Here, addresses is a list of the IP addresses of all the GlusterFS nodes.

Next, create a persistent volume for this GlusterFS volume:

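A minimal sketch of the persistent volume, assuming the Endpoints object above is named glusterfs-cluster and the Gluster volume created earlier is called volume_name; the PV name and capacity are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: glusterfs-pv
spec:
  capacity:
    storage: 1Gi                   # adjust to the size you want to expose
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  glusterfs:
    endpoints: glusterfs-cluster   # must match the Endpoints name
    path: volume_name              # the Gluster volume name, not a filesystem path
    readOnly: false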

Next, create a persistent volume claim:

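The claim can be as simple as the sketch below; the name glusterfs-pvc is a placeholder, and the requested size must not exceed the PV capacity:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

If the cluster has a default storage class, you may also want to set storageClassName: "" on the claim so it binds to the statically created PV instead of triggering dynamic provisioning.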

Now add this PVC in the pod definition file:

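A sketch of a pod definition that mounts the claim; the busybox image, pod name, and mount path are purely illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]       # keep the container running for the demo
      volumeMounts:
        - name: gluster-vol
          mountPath: /data             # where the Gluster volume appears inside the container
  volumes:
    - name: gluster-vol
      persistentVolumeClaim:
        claimName: glusterfs-pvc       # must match the PVC name above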

Before creating the pod, there is one more thing we need to do: add the anyuid security context constraint to the default, builder, or deployer service account, depending on the requirement.

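For example, to let pods running under the default service account of the current project use any UID, something like the following should work; swap default for builder or deployer (and add -n <project> for a different project) as needed:

oc adm policy add-scc-to-user anyuid -z default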

Now create the PV, then the PVC, and finally the pod, using the command:

oc create -f <file-name>
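To verify the result, the usual commands can be used (the pod name here comes from the illustrative sketch above, and the output will vary with your environment):

oc get pv
oc get pvc
oc get pods
oc describe pod glusterfs-pod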


NFS

The next storage solution I am using is NFS.

To configure NFS, I have used a CentOS machine; the client is again OpenShift. Proceed with the following steps:

  • Add firewall rules for the NFS server:
firewall-cmd --permanent --zone=public --add-service=ssh

firewall-cmd --permanent --zone=public --add-service=nfs

firewall-cmd --reload


  • Install the software and enable the service
yum -y install nfs-utils


systemctl enable nfs-server.service

systemctl start nfs-server.service


  • Create the directory to export and add an entry for it in the /etc/exports file
mkdir /var/nfs

chown nfsnobody:nfsnobody /var/nfs
chmod 755 /var/nfs

echo -e "/var/nfs        *(rw,sync,no_root_squash)\n" >> /etc/exports

exportfs -a

(The UID of the nfsnobody user is 65534. The export could also be owned by any other user; based on that, we have to set the matching permissions in the pod definition file.)

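To confirm the UID on the NFS server, you can check it directly; on CentOS 7 this typically reports 65534:

id nfsnobody
# typically: uid=65534(nfsnobody) gid=65534(nfsnobody) groups=65534(nfsnobody)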

Now create a persistent volume for this NFS server, with server set to the NFS server's IP and path set to the exported directory on the server:

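A minimal sketch of the PV, assuming the server exports /var/nfs as configured above; the PV name and capacity are placeholders:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: <nfs-server-ip>    # IP or resolvable hostname of the NFS server
    path: /var/nfs             # the exported directory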

Next, create a persistent volume claim for the PV:

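The claim follows the same pattern as before (names and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi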

Next, create a pod that uses this NFS storage:

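A sketch of such a pod; the supplementalGroups value of 65534 matches the nfsnobody group that owns the exported directory, while the image, pod name, and mount path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  securityContext:
    supplementalGroups: [65534]    # group owning the exported directory on the server
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: nfs-vol
          mountPath: /data
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-pvc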

Once these objects are created, the pod is deployed and the NFS share is mounted successfully.
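Assuming the pod name from the sketch above, something like this confirms the mount from inside the container:

oc get pods
oc exec nfs-pod -- df -h /data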



In the next post, I will cover the iSCSI server integration with OpenShift.
