Integrating LVM with Hadoop & providing Elasticity to Data-Node Storage

LVM (Logical Volume Management) provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes. In particular, a volume manager can concatenate, stripe together, or otherwise combine partitions (or block devices in general) into larger virtual partitions that administrators can resize or move, potentially without interrupting system use.

For this task, I am using two AWS EC2 instances and two EBS volumes.

Prerequisites: concepts of Linux partitioning.

You need to attach the EBS volumes to the slave (DataNode) instance.
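If you prefer the AWS CLI to the web console, the volume can be attached from the command line as well; the volume ID, instance ID, and device name below are placeholders for illustration:

Command: aws ec2 attach-volume --volume-id <ebs-volume-id> --instance-id <slave-instance-id> --device /dev/sdf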

Here, PV = physical volume, VG = volume group, LV = logical volume.

Run lsblk to list all attached block devices and confirm the EBS volumes are visible.

If your instance does not have LVM installed, it can be installed with yum.

Command: yum install lvm2

Ways to provide ELASTICITY to Data-Node storage:

PART 1: TO CREATE PV

Command: pvcreate <hard-disk_name>
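For example, assuming the attached EBS volume shows up as /dev/xvdf (the device name is illustrative; confirm it with lsblk):

Command: pvcreate /dev/xvdf

You can verify the physical volume with pvdisplay.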

PART 2: TO CREATE VG

Command: vgcreate <VG_name> <hard-disk_names>
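For example, to pool both EBS volumes into a volume group named myvg, assuming they appear as /dev/xvdf and /dev/xvdg (names are illustrative):

Command: vgcreate myvg /dev/xvdf /dev/xvdg

vgdisplay myvg shows the combined size of the pooled storage.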

PART 3: TO CREATE LV

Command: lvcreate --size <size> --name <name_of_LV> <name_of_VG>
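For example, to carve out a 10 GiB logical volume named mylv from myvg (the size is illustrative; the names match the ones used in the mount step below):

Command: lvcreate --size 10G --name mylv myvg

lvdisplay shows the new logical volume under /dev/myvg/mylv.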

PART 4: TO FORMAT
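The logical volume must be formatted before it can be mounted. A common choice is ext4 (the filesystem type here is an assumption; any filesystem supported by the DataNode OS will work):

Command: mkfs.ext4 /dev/myvg/mylv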

PART 5: TO MOUNT IT TO THE SLAVE NODE DIRECTORY

Command: mount /dev/myvg/mylv /nn
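Create the mount directory first if it does not already exist (mkdir /nn). Once the logical volume is mounted on the DataNode directory, the elasticity comes from LVM itself: when the DataNode starts running short of space, the logical volume can be extended online, without unmounting it or stopping Hadoop. A minimal sketch, assuming an ext4 filesystem and free space left in the volume group (the +5G figure is illustrative):

Command: lvextend --size +5G /dev/myvg/mylv

Command: resize2fs /dev/myvg/mylv

After resize2fs finishes, the DataNode's contribution to the cluster grows accordingly, which can be confirmed with hadoop dfsadmin -report on the NameNode.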

THANK YOU FOR READING!!!!!


