Storage Integration with OpenShift
Kritik Sachdeva
Technical Support Professional at IBM | RHCA-XII | OpenShift | Ceph | Satellite | 3Scale | Gluster | Ansible | Red Hatter
In this post, I am going to show you how to integrate some storage solutions with OpenShift. For the demonstration, I am using OpenShift v3.11 and the static method of provisioning storage in OpenShift.
I will take up three storage solutions as examples for now: GlusterFS, NFS, and iSCSI. In upcoming posts, I will try to add Ceph and other cloud storage integrations as well.
GlusterFS
GlusterFS is an open-source, enterprise-grade storage solution that provides shared file system and object storage services. It mainly exposes a POSIX-compliant file system, which removes the need to rewrite applications to use the storage service.
It is based on the concept of a decentralized cluster, which ultimately provides the benefit of highly available storage services.
Some of the use cases of this storage solution are:
- One of the most common use cases is nearline and archival storage, that is, storing big data
- Another interesting use case is high-performance computing, where fast retrieval of data is needed
To set up the Gluster cluster, I have used Ubuntu 20.04 on the AWS cloud as an EC2 instance. For the purpose of the demonstration, I have used a single-node cluster.
On Ubuntu Server 20.04, follow these steps and commands to set up GlusterFS.
If you want to build a multi-node cluster, the following commands should be run on all GlusterFS cluster nodes, and make sure /etc/hosts has an entry for all the GlusterFS nodes, like the following:
. . .
127.0.0.1 localhost
first_ip_address gluster0.example.com gluster0
second_ip_address gluster1.example.com gluster1
third_ip_address gluster2.example.com gluster2
. . .
If you have created this cluster on AWS, these IPs must be the public IPs. If you want only one node to be reachable publicly, add the public IP for that node and use private IPs for the others. (But then you won't get the high-availability feature of GlusterFS from the client's side.)
sudo add-apt-repository ppa:gluster/glusterfs-7
sudo apt update
sudo apt install glusterfs-server
sudo systemctl start glusterd.service
sudo systemctl enable glusterd.service
sudo systemctl status glusterd.service
If a firewall is enabled, use the following commands to allow inter-node communication (if using AWS, add rules to the security groups for port 24007):
sudo ufw allow from gluster1_ip_address to any port 24007
sudo ufw allow from gluster2_ip_address to any port 24007
. . .
sudo ufw allow from client_ip_address to any port 24007

# Executed from the gluster1 system for all the other nodes in the cluster
sudo gluster peer probe gluster2
sudo gluster peer probe gluster3
. . .
To see the peer status, use the command:
sudo gluster peer status
In this post, I have created a distributed type of GlusterFS volume:
sudo gluster volume create volume_name <public-ip1>:/path/to/data/directory <public-ip2>:/path/to/data/directory force
sudo gluster volume start volume_name
sudo gluster volume status
# Make sure you use the public IPs, else the client won't be able to access the volumes
Now add a firewall rule for the client (if using AWS, allow port 49152):
sudo ufw allow from client_ip_address to any port 49152
To configure the glusterfs client:
sudo apt install glusterfs-client
sudo mkdir /storage-pool
sudo mount -t glusterfs public-ip:/volume_name /path/to/mount/point
In our case, the client is OpenShift, but performing the above commands is a good way to confirm that the volume can be mounted.
Now, to integrate GlusterFS with OpenShift, proceed with the following steps.
The first step is to create an endpoints object for the GlusterFS cluster. Here, addresses is a list of the IP addresses of all the GlusterFS nodes.
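A minimal sketch of what such an endpoints object could look like (the name glusterfs-cluster, the placeholder IPs, and port 1 are my assumptions; replace them with your own values):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster     # referenced later by the persistent volume
subsets:
  - addresses:
      - ip: <public-ip1>      # IP of the first GlusterFS node (must be an IP, not a hostname)
    ports:
      - port: 1               # placeholder port, the value itself is not used
  - addresses:
      - ip: <public-ip2>      # IP of the second GlusterFS node
    ports:
      - port: 1

The OpenShift 3.x examples usually also create a Service with the same name so that the endpoints persist.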
Next, create a persistent volume for this glusterFS:
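As a rough example, a PV definition for the gluster volume created above might look like this (the PV name, the 5Gi size, and the Retain policy are assumptions; path must match the gluster volume name and endpoints must match the endpoints object created earlier):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv
spec:
  capacity:
    storage: 5Gi                   # assumed size, adjust to your volume
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster   # name of the endpoints object
    path: volume_name              # name of the gluster volume
    readOnly: false
  persistentVolumeReclaimPolicy: Retain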
Next, create a persistent volume claim:
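A matching claim could be as simple as the following sketch (the name and size are assumptions; the access mode and requested size should line up with the PV so that the claim binds to it):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi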
Now add this PVC in the pod definition file as:
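For illustration, a minimal pod that mounts the claim might look like this (the pod name, the nginx image, and the mount path are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: gluster-pod
spec:
  containers:
    - name: app
      image: nginx                  # assumed test image
      volumeMounts:
        - name: gluster-vol
          mountPath: /mnt/gluster   # where the gluster volume appears inside the container
  volumes:
    - name: gluster-vol
      persistentVolumeClaim:
        claimName: gluster-pvc      # the PVC created above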
Before creating the pod, there is one more thing we need to do: add the anyuid security context constraint to the default, builder, or deployer service account, depending on the requirement.
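For example, to grant anyuid to the default service account of a project, something like the following should work (run as a cluster admin; the project name is a placeholder):

oc adm policy add-scc-to-user anyuid -z default -n <project-name>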
Now create the PV, then the PVC, and finally the pod definition file using the command:
oc create -f <file-name>
NFS
The next storage solution I am using is NFS.
To configure the NFS server, I have used CentOS, and the client is OpenShift. Proceed with the following steps:
- Add the firewall services for the NFS server:
firewall-cmd --permanent --zone=public --add-service=ssh
firewall-cmd --permanent --zone=public --add-service=nfs
firewall-cmd --reload
- Install the software and enable the service:
yum -y install nfs-utils
systemctl enable nfs-server.service
systemctl start nfs-server.service
- Create a directory to export and add an entry for it in the /etc/exports file:
mkdir /var/nfs
chown nfsnobody:nfsnobody /var/nfs
chmod 755 /var/nfs
echo -e "/var/nfs *(rw,sync,no_root_squash)\n" >> /etc/exports
exportfs -a
(The UID of the nfsnobody user is 65534. It could also be any other user; based on that user, we have to set the permissions within the pod definition file.)
Now create a persistent volume for this NFS server, with the IP of the NFS server and the path set to the exported directory on the server:
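A rough sketch of such a PV (the name, size, and reclaim policy are assumptions; server and path point to the NFS server configured above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi              # assumed size
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-server-ip>   # IP of the CentOS NFS server
    path: /var/nfs            # exported directory from /etc/exports
  persistentVolumeReclaimPolicy: Retain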
Next, create a persistent volume claim for the PV:
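Again, a minimal claim sketch (the name and size are assumptions and should match the PV):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi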
Next, create a pod that uses this NFS storage:
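A minimal pod sketch that mounts the claim (the pod name, image, mount path, and the supplementalGroups value are assumptions; 65534 corresponds to the nfsnobody owner of the export mentioned above):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-pod
spec:
  securityContext:
    supplementalGroups: [65534]   # assumed: group that owns the exported directory
  containers:
    - name: app
      image: nginx                # assumed test image
      volumeMounts:
        - name: nfs-vol
          mountPath: /mnt/nfs
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-pvc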
Output for the pod being deployed and the NFS volume being mounted successfully:
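To check this yourself, something like the following should be enough (the pod name and mount path match the sketch above):

oc get pod nfs-pod
oc exec nfs-pod -- df -h /mnt/nfs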
In the next post, I will cover integrating an iSCSI server with OpenShift.