Integrating LVM with Hadoop and providing Elasticity to DataNode Storage


Task 7.1: Elasticity Task

Integrating LVM with Hadoop and providing Elasticity to DataNode Storage

Increase or Decrease the Size of a Static Partition in Linux.

Automating LVM Partitioning using a Python Script.

Step 1: I have created two hard disks, of 25 GiB and 50 GiB.


Check that both hard disks are detected using

fdisk -l

Step 2: Create a physical volume on each hard disk using

pvcreate /dev/sdc


pvcreate /dev/sdb

To verify that the physical volumes were created, use

pvdisplay /dev/sdb

pvdisplay /dev/sdc

Step 3: Create the volume group (VG) using

vgcreate <vg_name> <HD1_name> <HD2_name>


vgcreate lvmhadoop /dev/sdb /dev/sdc

Step 4: Create a logical volume of 10 GiB from the lvmhadoop volume group (VG) using

lvcreate --size 10G --name <LV_name> <VG_name>

lvcreate --size 10G --name mylv1 lvmhadoop

Format the logical volume using

mkfs.ext4 /dev/lvmhadoop/mylv1


Make a new directory and mount the logical volume on it, as shown below.
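The exact directory name used in the screenshots isn't visible; a typical sequence, assuming a hypothetical mount point /dn1 for the DataNode storage, would be:

mkdir /dn1
mount /dev/lvmhadoop/mylv1 /dn1
df -h    # confirm the 10 GiB volume is mounted on /dn1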


Step 5: Start the Hadoop NameNode service.
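The commands in the screenshots aren't visible; a minimal sketch, assuming a Hadoop 1.x-style installation with hadoop-daemon.sh on the PATH, would be:

hadoop-daemon.sh start namenode
jps    # confirm the NameNode process is running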


Step 6: Start the Hadoop DataNode service, with the mounted directory used as its storage directory.
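A minimal sketch, again assuming a Hadoop 1.x-style setup and the hypothetical /dn1 mount point from earlier: point the DataNode's storage directory at the mounted LV in hdfs-site.xml, then start the daemon and check the reported capacity.

<property>
    <name>dfs.data.dir</name>
    <value>/dn1</value>
</property>

hadoop-daemon.sh start datanode
hadoop dfsadmin -report    # the DataNode should now contribute roughly 10 GiB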


Step 7: Now I want to increase the LV from 10 GiB to 30 GiB.
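The exact command from the screenshots isn't visible; the standard way to grow this LV to 30 GiB is:

lvextend --size 30G /dev/lvmhadoop/mylv1
# or, equivalently, grow it by 20 GiB:
lvextend --size +20G /dev/lvmhadoop/mylv1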


After extending the LV to 30 GiB, the filesystem must also be resized to cover the new space. Reformatting with mkfs.ext4 would wipe the existing data, so we use resize2fs instead, which grows the ext4 filesystem without data loss.
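When growing an ext4 filesystem, resize2fs can run while the volume stays mounted; run without a size argument it expands the filesystem to fill the LV:

resize2fs /dev/lvmhadoop/mylv1
df -h    # the mounted filesystem should now show roughly 30 GiB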


Step 8: To reduce the LV, we must follow these five steps (a command sketch follows the list):

a) Take the partition offline, i.e. unmount the drive

b) Check the filesystem using e2fsck

c) Shrink the filesystem using resize2fs

d) Reduce the logical volume with lvreduce

e) Bring the partition back online, i.e. mount the drive
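A minimal sketch of those five steps, assuming the hypothetical /dn1 mount point and an illustrative target size of 15 GiB (the size used in the screenshots isn't visible):

umount /dn1
e2fsck -f /dev/lvmhadoop/mylv1
resize2fs /dev/lvmhadoop/mylv1 15G
lvreduce --size 15G /dev/lvmhadoop/mylv1
mount /dev/lvmhadoop/mylv1 /dn1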


Note: any data stored in the portion of the LV that is being reduced will be lost.


Let's start with automating the LVM integration using Python.
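The full script is in the GitHub repository linked below; a minimal sketch of the idea, assuming the automation simply shells out to the LVM tools (the function names, sizes, and the /dn1 mount point here are illustrative, not taken from the repo):

import subprocess

def run(cmd):
    # Run a shell command and raise if it fails
    print(f"$ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

def create_lv(vg, lv, size, mount_point):
    # Create, format, and mount a new logical volume
    run(f"lvcreate --size {size} --name {lv} {vg}")
    run(f"mkfs.ext4 /dev/{vg}/{lv}")
    run(f"mkdir -p {mount_point}")
    run(f"mount /dev/{vg}/{lv} {mount_point}")

def extend_lv(vg, lv, size):
    # Grow the LV, then grow the ext4 filesystem online
    run(f"lvextend --size {size} /dev/{vg}/{lv}")
    run(f"resize2fs /dev/{vg}/{lv}")

if __name__ == "__main__":
    create_lv("lvmhadoop", "mylv1", "10G", "/dn1")  # /dn1 is an assumed mount point
    extend_lv("lvmhadoop", "mylv1", "30G")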

Check out the video of the automation of LVM with Hadoop.

GitHub: https://github.com/Anuddeeph/Task7.1-Automation-of-Hadoop-Datanode-Using-LVM-.git

