INTEGRATION OF LVM PARTITION WITH HADOOP CLUSTER

WHAT IS LVM ?

  • LVM stands for Logical Volume Management. It is a system for managing logical volumes, or file systems, that is more advanced than the traditional method of partitioning a disk into one or more segments.
  • It enables combining multiple individual hard drives or disk partitions into a single volume group (VG). That volume group can then be subdivided into logical volumes (LV) or used as a single large volume. Regular file systems, such as EXT3 or EXT4, can then be created on a logical volume.
  • The EXT2, 3, and 4 filesystems all allow both offline (unmounted) and online (mounted) resizing when increasing the size of a filesystem, and offline resizing when reducing the size.
  • LVM provides elasticity to storage devices and is a more flexible, advanced alternative to static partitioning (see the short sketch below).
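
As a quick illustration of that layering (purely a sketch; the device names /dev/sdb and /dev/sdc, the VG name datavg, the LV name datalv and the mount point /data are placeholder examples, not the ones used later in this task):

# turn two raw disks into physical volumes, pool them into one volume group,
# carve a logical volume out of the pool, put a filesystem on it and mount it
pvcreate /dev/sdb /dev/sdc
vgcreate datavg /dev/sdb /dev/sdc
lvcreate --size 10G --name datalv datavg
mkfs.ext4 /dev/datavg/datalv
mount /dev/datavg/datalv /data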

WHAT IS THE USE OF LVM ?

Whenever a company needs to change the size of a hard disk in its server on the fly/online, i.e. without stopping any services, it uses the concept of LVM.

STEPS TO FOLLOW :-

  • Add Virtual Hard Disks
  • Create Physical Volumes(PV)
  • Create Volume Group(VG)
  • Create Logical Volume(LV)
  • Format the Logical Volume
  • Mount the Logical Volume
  • Start Datanode & Namenode and Check Report
  • Extend Size of Logical Volume on the Fly
  • Check Report

Task 7.1 Description :-

ELASTICITY TASK :

Integrating LVM with Hadoop and providing Elasticity to DataNode Storage.

Increase or Decrease the Size of a Static Partition in Linux.

Let's get started…

STEP 1 : Add virtual hard disks to the datanode; here I have added two hard disks:

/dev/xvdd (20 GiB) and

/dev/xvde (10 GiB)

To check whether the disks are attached, use the command:

fdisk -l
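
As an optional extra check, lsblk gives a more compact view of the attached disks than fdisk -l:

# list all block devices with their sizes; the new disks should appear with no partitions or mount points
lsblk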

STEP 2 : Convert these hard disks into Physical Volumes (PV).

To convert the first and second hard disks into physical volumes, use the command:

pvcreate /dev/xvdd /dev/xvde
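
To confirm that the physical volumes were created, pvdisplay (detailed) or pvs (summary) can be run as an optional check:

# show the new physical volumes and their sizes
pvdisplay /dev/xvdd /dev/xvde
pvs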

STEP 3 : Create Volume Group (VG) with physical volumes.

To create the VG (here named taskarth) from the physical volumes, use the command:

vgcreate taskarth /dev/xvdd /dev/xvde

To verify that the VG was created, use the command:

vgdisplay taskarth

STEP 4 : Create a Logical Volume (LV) from the volume group, of the size you want this datanode to contribute to the namenode. Here I am contributing 15 GB.

To create the LV (here named mylv) in the taskarth VG, use the command:

lvcreate --size 15G --name mylv taskarth
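
As an optional check that the LV exists with the requested size, lvdisplay or lvs can be used (the LV path follows the /dev/<vg_name>/<lv_name> pattern, here /dev/taskarth/mylv):

# show the new logical volume and its size
lvdisplay /dev/taskarth/mylv
lvs taskarth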

We know that before the new volume can be used to store any data, it has to be formatted with a filesystem first.

STEP 5 : Format the partition using the command:

mkfs.ext4  /dev/vg_name/LV_name
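
With the names used in this article, the same command would look like this (a sketch, assuming the VG taskarth and LV mylv created above):

# create an ext4 filesystem on the new logical volume
mkfs.ext4 /dev/taskarth/mylv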

STEP 6 : Mount the formatted volume on the datanode directory (/dn1) using the command:

mount /dev/vg_name/LV_name /dn1
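
With the concrete names from this article, and creating the mount point first in case it does not exist yet (a sketch, assuming /dn1 is the DataNode directory):

# create the mount point if needed, mount the LV on it, and verify the mount
mkdir -p /dn1
mount /dev/taskarth/mylv /dn1
df -h /dn1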

STEP 7 : Start the namenode and datanode daemon services and check the volume contributed to the namenode in the report.
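
This walkthrough assumes the DataNode has already been configured to use /dn1 as its storage directory; as a minimal hdfs-site.xml sketch of that assumption (on Hadoop 1.x the property is dfs.data.dir, on Hadoop 2.x and later it is dfs.datanode.data.dir):

<!-- hdfs-site.xml on the datanode: point DataNode storage at the LVM-backed mount -->
<property>
    <name>dfs.data.dir</name>
    <value>/dn1</value>
</property>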

To start the namenode service, first format the namenode (required only before the very first start):

hadoop namenode -format

Then start the daemon:
hadoop-daemon.sh start namenode

To start the datanode service, make sure the mounted /dn1 directory is configured as the DataNode data directory (see the hdfs-site.xml sketch above), then start the daemon:
hadoop-daemon.sh start datanode
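
As an optional check that both daemons are actually running, the Java process listing can be inspected:

# jps lists the running Hadoop JVM processes; NameNode and DataNode should both appear
jps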

To check the storage contribution report, use the command:

hadoop dfsadmin -report
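
On newer Hadoop releases the same report is also available through the hdfs client (equivalent output, not a separate step):

# cluster storage report on Hadoop 2.x and later
hdfs dfsadmin -report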

On the fly, we can increase or decrease the storage contributed to the namenode without unmounting the volume or stopping any services.

We can only increase the size up to the space currently available in the volume group (here 30 GB in total), so first check how much free space the VG has.
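
To check how much free space is left in the volume group before extending, vgdisplay or vgs can be run (here against the taskarth VG):

# the "Free  PE / Size" line of vgdisplay (or the VFree column of vgs) shows how much the LV can still grow
vgdisplay taskarth
vgs taskarth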

STEP 8 : Extend the logical volume and then resize the filesystem over the extended part, using the commands:

lvextend --size +5G /dev/vg_name/LV_name

resize2fs /dev/vg_name/LV_name
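
With the concrete names used in this article, and as a sketch of the equivalent single-step form (the -r / --resizefs option tells lvextend to run the filesystem resize itself), the volume stays mounted on /dn1 throughout:

# grow the LV by 5 GiB, then grow the ext4 filesystem to fill it
lvextend --size +5G /dev/taskarth/mylv
resize2fs /dev/taskarth/mylv

# or, as a single step:
lvextend -r --size +5G /dev/taskarth/mylv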

STEP 9 : Now check the size of the datanode's volume contribution to the namenode again.

For this, run the report command again:

hadoop dfsadmin -report
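
The growth is also visible at the filesystem level, as an optional check:

# the filesystem mounted on /dn1 should now show roughly 20 GB
df -h /dn1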

We can clearly see that, on the fly, we have increased the storage from 15 GB to 20 GB, i.e. using the elasticity of LVM we increased the storage by 5 GB and turned a static partition into a dynamic one.

TASK COMPLETED !!

Thanks for Reading.

Keep Learning. Keep Sharing.


