Integrating LVM with Hadoop & providing Elasticity to Data-Node Storage
LVM (Logical Volume Management) provides a method of allocating space on mass-storage devices that is more flexible than conventional partitioning schemes. In particular, a volume manager can concatenate, stripe, or otherwise combine partitions (or block devices in general) into larger virtual volumes that administrators can resize or move, potentially without interrupting system use.
For this task I'm using two AWS EC2 instances and two EBS volumes.
Prerequisites : familiarity with Linux partitioning concepts
First, attach an EBS volume to the slave (data) node.
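Attaching can be done from the AWS console, or from the AWS CLI; a minimal sketch, where the volume ID, instance ID, and device name are placeholders you must replace with your own:

```shell
# Attach an EBS volume to the slave node's instance.
# vol-XXXX and i-XXXX are hypothetical IDs; /dev/sdf is the
# device name the volume will be exposed as inside the instance.
aws ec2 attach-volume \
    --volume-id vol-XXXX \
    --instance-id i-XXXX \
    --device /dev/sdf
```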
Here, PV = physical volume, VG = volume group, LV = logical volume.
Type lsblk to list all attached block devices.
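The freshly attached EBS volume should show up in the listing with no mount point; the device name below is an assumption (on Xen-based instances it is typically /dev/xvdf, on Nitro instances /dev/nvme1n1):

```shell
# List all block devices with their sizes and mount points;
# the new EBS volume appears with an empty MOUNTPOINT column.
lsblk
```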
If your instance does not already have LVM on it, it can be installed with the yum package manager.
command: yum install lvm2
Ways to provide ELASTICITY to Data-Node storage:
PART 1: TO CREATE PV
Command: pvcreate <hard-disk_name>
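For example, assuming the disk showed up as /dev/xvdf in lsblk (a hypothetical name, substitute your own):

```shell
# Initialize the raw disk as an LVM physical volume
pvcreate /dev/xvdf

# Verify that the PV was created
pvdisplay /dev/xvdf
```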
PART 2: TO CREATE VG
Command : vgcreate <VG_name> <hard-disk_names>
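A volume group can span several PVs, which is what lets us pool and later grow storage. A sketch with a hypothetical VG name "myvg" and the two assumed device names:

```shell
# Pool both physical volumes into one volume group named "myvg"
vgcreate myvg /dev/xvdf /dev/xvdg

# Verify the VG and check its total/free size
vgdisplay myvg
```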
PART 3 : TO CREATE LV
Command : lvcreate --size <size> --name <LV_name> <VG_name>
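For instance, carving a 5 GiB logical volume named "mylv" out of "myvg" (size and names are assumptions for illustration):

```shell
# Create a 5 GiB LV; it becomes accessible at /dev/myvg/mylv
lvcreate --size 5G --name mylv myvg

# Verify the LV
lvdisplay /dev/myvg/mylv
```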
PART 4 : TO FORMAT THE LV
Command : mkfs.ext4 /dev/<VG_name>/<LV_name>
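Continuing with the hypothetical names from above, the LV is formatted with an ext4 filesystem before it can be mounted:

```shell
# Put an ext4 filesystem on the new logical volume
mkfs.ext4 /dev/myvg/mylv
```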
PART 5 : TO MOUNT IT TO THE SLAVE NODE DIRECTORY
Command : mount /dev/myvg/mylv /nn
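This is where the elasticity comes in: because the datanode directory now sits on an LV, its size can be increased on the fly. A sketch, assuming the VG still has free extents and using the hypothetical names from above:

```shell
# Grow the LV by 2 GiB while it stays mounted
lvextend --size +2G /dev/myvg/mylv

# Grow the ext4 filesystem to fill the enlarged LV, online
resize2fs /dev/myvg/mylv

# Confirm the new size reported to Hadoop's datanode directory
df -h /nn
```

After this, `hadoop dfsadmin -report` on the cluster should reflect the increased datanode capacity without any downtime.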
THANK YOU FOR READING!