Increase/Decrease Storage shared by Data Nodes in Hadoop Cluster on the Fly
Vaibhav Shah
Associate Consultant at T-Systems ICT India | Red Hat Ansible | OpenShift | DevOps | AWS | Azure | Cloud | Docker | K8s | Terraform | Observability | OpenTelemetry | Istio | Kiali | GitLab | Podman | Connected Vehicles
Task 7.1: Elasticity Task
(A) - Integrating LVM with Hadoop and providing Elasticity to Data Node Storage
(B) - Increase or Decrease the Size of a Static Partition in Linux
In this article, we will see how we can increase or decrease the storage shared by Data Nodes in a Hadoop cluster, both statically and dynamically.
Note: The whole setup for this task is done on top of the AWS Cloud.
STATIC WAY :
We have created a Hadoop cluster with 1 Master Node and 2 Data Nodes (Slave 1 and Slave 2).
Now we are going to attach extra storage to Slave 1 and Slave 2. We will use the AWS EBS service here.
You can see, I have attached 2 EBS volumes of 10 GiB each.
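The same attachment can be done from the AWS CLI instead of the console. A minimal sketch is below; the availability zone, volume ID, instance ID, and device name are all placeholders, so substitute your own values:

```
# Create a 10 GiB EBS volume in the instance's availability zone
# (zone and IDs below are illustrative placeholders):
aws ec2 create-volume --size 10 --availability-zone ap-south-1a --volume-type gp2
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf
# Inside the instance, verify that the new disk is visible:
lsblk
```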
Now, we need to: Create a partition > Format > Mount
1. Creating partition in both Slaves :
So we created a partition of 8 GiB in Slave 1.
Similarly, we created a partition of 5 GiB in Slave 2.
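The partition step above can be sketched as an interactive fdisk session. This assumes the attached EBS disk appears as /dev/xvdf inside the instance (device names vary by instance type, so check with lsblk first):

```
# On Slave 1 (disk assumed to be /dev/xvdf):
fdisk /dev/xvdf
#   n    -> create a new partition
#   p    -> primary partition
#   +8G  -> size of 8 GiB (use +5G on Slave 2)
#   w    -> write the partition table and exit
# Re-read the partition table and confirm the new partition:
partprobe /dev/xvdf
lsblk /dev/xvdf
```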
2. Formatting and Mounting :
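A minimal sketch of the formatting and mounting step follows. The partition and mount-point paths are illustrative; the mount point should be the directory configured as the DataNode's storage directory (dfs.datanode.data.dir / dfs.data.dir in hdfs-site.xml):

```
# Format the new partition with ext4:
mkfs.ext4 /dev/xvdf1
# Mount it on the DataNode's storage directory (path is illustrative):
mkdir -p /dn1
mount /dev/xvdf1 /dn1
df -h /dn1    # confirm the mount and its size
```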
Result:
Hence you can see that the partitions we created are successfully shared in the cluster.
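To confirm this from the cluster side, the standard admin report can be used (on newer Hadoop versions the command is hdfs dfsadmin -report):

```
# From the master node, check the storage each DataNode contributes:
hadoop dfsadmin -report
# "Configured Capacity" for each DataNode should now reflect the
# mounted partition sizes (roughly 8 GiB and 5 GiB in this setup).
```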
DYNAMIC WAY (LVM) :
For LVM, we are going to use the following commands on both slaves.
# pvcreate /slave1_1partition_path
# pvcreate /slave1_2partition_path
# vgcreate &lt;vgname&gt; /slave1_1partition_path /slave1_2partition_path
# lvcreate --size &lt;size&gt; --name &lt;lvname&gt; &lt;vgname&gt;
# mkfs.ext4 &lt;lvpath&gt;
# mount &lt;slave1_lvpath&gt; &lt;datanode folder&gt;
After this, you can observe the output.
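Each layer of the LVM stack can be inspected after running those commands. The volume-group name below is a placeholder matching the &lt;vgname&gt; used above:

```
# Verify each layer of the LVM stack:
pvdisplay              # physical volumes created from the partitions
vgdisplay <vgname>     # volume group size = sum of both PVs
lvdisplay              # logical volume carved out of the VG
df -h                  # the LV mounted on the DataNode directory
```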
Now I am trying to increase the shared space on the fly.
Before running the command:
After running the command:
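The on-the-fly resize itself is a two-step operation: grow the logical volume, then grow the filesystem inside it. A sketch, assuming an ext4 LV that is currently mounted (the +2G size and the &lt;vgname&gt;/&lt;lvname&gt; path are placeholders):

```
# Grow the LV by 2 GiB while it stays mounted:
lvextend --size +2G /dev/<vgname>/<lvname>
# Grow the ext4 filesystem online to fill the enlarged LV:
resize2fs /dev/<vgname>/<lvname>
df -h    # new size is visible without restarting the DataNode
```

Note that shrinking works in the opposite order and cannot be done online with ext4: unmount, run e2fsck -f, shrink with resize2fs, then lvreduce, and remount.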
You can see we were successfully able to achieve the aim of this article.
Please Like and Share if you like the article!