7.1-A: Elasticity Task

  • Integrating LVM with Hadoop and providing Elasticity to DataNode Storage
  • Increase or Decrease the Size of a Static Partition in Linux
  • Automating LVM Partition using a Python Script

1. Integrating LVM with Hadoop and providing Elasticity to DataNode Storage

STEP 1:- First, we need to attach at least two hard disks to perform this task with LVM, so we add two hard disks to the DataNode system.

STEP 2:- Next, start the NameNode and DataNode daemons.

STEP 3:- The newly added hard disks are now visible on the DataNode system via the command "fdisk -l".

STEP 4:- Create a physical volume on each hard disk with "pvcreate /dev/sdc" and "pvcreate /dev/sdd". The details of the created physical volumes can then be viewed with "pvdisplay /dev/sdc" and "pvdisplay /dev/sdd".

STEP 5:- Create a volume group with "vgcreate LVM_task /dev/sdc /dev/sdd" and view its details with "vgdisplay LVM_task". This step pools the two added hard disks into one group, which we named "LVM_task"; its size is 20 + 40 = 60 GB.

STEP 6:- Create a logical volume from the volume group. This decides how much of the volume group we want to use — it works like a partition. Here we take 50 GB of the 60 GB with "lvcreate --size 50G --name LVM_1 LVM_task" (note that lvcreate expects a single-letter unit suffix such as "G"), and we can view the details of this logical volume with "lvdisplay LVM_task/LVM_1".

STEP 7:- Format this volume with "mkfs.ext4 /dev/LVM_task/LVM_1", then mount it on the folder used by the DataNode with "mount /dev/LVM_task/LVM_1 /DataNode".

STEP 8:- Finally, check the storage that the DataNode contributes to the Hadoop cluster: it has increased from 10 GB to 50 GB.
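
Putting steps 4–7 together, the whole sequence can be sketched as one small script. This is a dry-run sketch, not the author's script: the device names /dev/sdc and /dev/sdd, the LVM_task/LVM_1 names, and the /DataNode mount point come from the steps above, and actually running the commands requires root on a system with those disks.

```python
# Dry-run sketch of steps 4-7: build the exact commands, print them,
# and leave real execution (which needs root) commented out.
def lvm_setup_commands(disks, vg, lv, size_gb, mount_point):
    cmds = [["pvcreate", d] for d in disks]               # step 4: one PV per disk
    cmds.append(["vgcreate", vg] + list(disks))           # step 5: pool PVs into a VG
    cmds.append(["lvcreate", "--size", f"{size_gb}G",
                 "--name", lv, vg])                       # step 6: carve out the LV
    cmds.append(["mkfs.ext4", f"/dev/{vg}/{lv}"])         # step 7: format as ext4
    cmds.append(["mount", f"/dev/{vg}/{lv}", mount_point])  # step 7: mount on DataNode dir
    return cmds

if __name__ == "__main__":
    for cmd in lvm_setup_commands(["/dev/sdc", "/dev/sdd"],
                                  "LVM_task", "LVM_1", 50, "/DataNode"):
        print(" ".join(cmd))
        # import subprocess; subprocess.run(cmd, check=True)  # real run (root only)
```

Keeping the command construction in a pure function makes the sequence easy to inspect before anything touches the disks.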

2. Increase or Decrease the Size of a Static Partition in Linux.

STEP 1:- We need a base hard disk as static storage, so in the first step we create a partition on an added hard disk. Here I have an added 20 GB hard disk, and I created a 5 GB partition on it.

STEP 2:- As usual, format the partition, create a directory, and mount the new partition on that directory.

STEP 3:- Create a file in the mounted directory, then resize the partition — both increasing and decreasing it — and check whether the created file is still intact.

STEP 4:- We can increase and decrease the size of this static partition without affecting the file created above. We use the "parted" tool on Linux to perform this task.

  • Install parted if it is not already on the system, using "yum install parted".
  • Enter the command "parted /dev/sdc" and then increase the size of the partition.
  • Inside parted, use the "resizepart" command; it asks for the partition number and the new end position of the partition.
  • Afterwards, check the size of the partition: it is larger, and the previously created file is still safe on it.
  • A video clip of this task will be uploaded in another post.
THIS IS THE ONLY WAY I FOUND TO INCREASE OR DECREASE THE SIZE OF AN EXISTING PARTITION THAT HOLDS DATA WHILE KEEPING THAT DATA SAFE.
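
One caveat worth noting alongside the steps above: parted's "resizepart" changes only the partition table, while the ext4 filesystem inside the partition must be resized separately with "resize2fs" — otherwise grown space stays unusable, and shrinking can cut off data. The sketch below (my own hedged illustration, using the /dev/sdc disk from the steps above and example sizes) builds the safe command order for each direction without executing anything:

```python
# Sketch of a safe parted resize flow. Grow: enlarge the partition first,
# then grow the filesystem into it. Shrink: check and shrink the filesystem
# first, then shrink the partition down to it.
def grow_commands(disk, part_num, new_end):
    part = f"{disk}{part_num}"
    return [
        ["parted", disk, "resizepart", str(part_num), new_end],  # enlarge partition
        ["resize2fs", part],             # then grow ext4 to fill the new size
    ]

def shrink_commands(disk, part_num, new_fs_size, new_end):
    part = f"{disk}{part_num}"
    return [
        ["umount", part],                # ext4 cannot be shrunk while mounted
        ["e2fsck", "-f", part],          # force a filesystem check first
        ["resize2fs", part, new_fs_size],  # shrink ext4 before the partition
        ["parted", disk, "resizepart", str(part_num), new_end],
    ]

if __name__ == "__main__":
    for cmd in grow_commands("/dev/sdc", 1, "10GB"):
        print(" ".join(cmd))  # dry run; real execution needs root
```

The example sizes ("10GB", and a filesystem size like "4G" when shrinking) are placeholders — substitute the values for your own partition.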

3. Automating LVM Partition using a Python Script.

  • This part is covered entirely in the video attached to the post.
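
The author's actual script is only in the video and is not reproduced here. As a minimal sketch of what such automation might look like — assuming the LVM_task/LVM_1 names from part 1 and an illustrative dry-run helper of my own — a script can extend a mounted logical volume online by pairing "lvextend" (add space from the volume group) with "resize2fs" (grow the ext4 filesystem to match):

```python
import subprocess

# Hypothetical sketch of LVM elasticity automation. extend_lv_commands builds
# the two commands that grow a mounted LV online: lvextend adds space from
# the VG, resize2fs grows the ext4 filesystem to fill it.
def extend_lv_commands(vg, lv, add_gb):
    dev = f"/dev/{vg}/{lv}"
    return [
        ["lvextend", "--size", f"+{add_gb}G", dev],
        ["resize2fs", dev],
    ]

def run(cmds, dry_run=True):
    for cmd in cmds:
        print(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)  # needs root and real devices

if __name__ == "__main__":
    # Example: grow LVM_1 by the remaining 10 GB of the 60 GB VG from part 1.
    run(extend_lv_commands("LVM_task", "LVM_1", 10))
```

This online-growth path is what makes the DataNode storage elastic: the Hadoop cluster's reported capacity grows without unmounting or restarting anything.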
HOPE YOU WILL LIKE IT
THANK YOU
