Contributing a Limited Amount of DataNode Storage to a Hadoop Cluster
Nikhil Suryawanshi
MLOps Engineer | 7x GCP | Kubernetes | Terraform | AWS | DevOps | Java | Python
Task 4.1 Description :-
In a Hadoop cluster, how do you contribute a limited/specific amount of storage as a slave (DataNode) to the cluster?
Let's begin :-
Configure NameNode -
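For reference, here is a minimal sketch of what the NameNode configuration can look like, assuming Hadoop 1.x with its config files in /etc/hadoop; the metadata directory /nn, the IP 192.168.1.10, and the port 9001 are placeholders, so substitute your own values:

# Create the directory that will hold the NameNode metadata (placeholder name)
mkdir /nn

# hdfs-site.xml: point the NameNode at its metadata directory
cat > /etc/hadoop/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/nn</value>
  </property>
</configuration>
EOF

# core-site.xml: the address DataNodes and clients use to reach the NameNode
cat > /etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.10:9001</value>
  </property>
</configuration>
EOF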
Starting NameNode -
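A sketch of the commands, again assuming Hadoop 1.x (the format step wipes the metadata directory, so run it only once, on first setup):

# One-time format of the NameNode metadata directory
hadoop namenode -format
# Start the NameNode daemon
hadoop-daemon.sh start namenode
# jps lists running Java processes; "NameNode" should appear
jps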
Checking if any DataNode is connected -
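The admin report shows the cluster capacity and every connected DataNode; at this point it reports zero capacity and no DataNodes:

# Run on the NameNode (or any node configured to reach it)
hadoop dfsadmin -report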
So far, no DataNode is connected.
To connect a DataNode to the NameNode, we first have to configure the DataNode.
Configuring DataNode -
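Again a minimal sketch, assuming Hadoop 1.x; /dn1 is the data directory we will later mount the new disk on, and 192.168.1.10:9001 is the placeholder NameNode address from before:

# Create the directory the DataNode will store HDFS blocks in
mkdir /dn1

# hdfs-site.xml: point the DataNode at its data directory
cat > /etc/hadoop/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/dn1</value>
  </property>
</configuration>
EOF

# core-site.xml: where to find the NameNode
cat > /etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.1.10:9001</value>
  </property>
</configuration>
EOF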
Starting DataNode -
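Assuming the same Hadoop 1.x setup:

# Start the DataNode daemon and confirm it is running
hadoop-daemon.sh start datanode
jps    # "DataNode" should appear in the list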
After starting, the DataNode connects to the NameNode.
Now the DataNode is connected to the NameNode and contributes all of its storage.
But can we also contribute only a limited amount of storage?
SO LET'S SEE HOW TO CONTRIBUTE LIMITED STORAGE -
Add a New Hard Disk to the DataNode -
Here I am using Oracle VirtualBox.
To add a new hard disk, the DataNode VM must be in the "Stopped" state. Then follow these steps -
Go to Storage in the Settings of the DataNode VM -
Now click the "+" icon to the right of "Controller: SATA" -
Now click on "Create".
Now click on "Next".
Click "Next" again.
Give your hard disk a name, choose its size, and then click on "Create".
In my case, the hard disk size is 8 GiB.
Choose your hard disk and then click on "Create".
Now our newly created hard disk is attached -
To check whether the hard disk is attached, run the "fdisk -l" command -
You will see "/dev/sdb : 8 GiB"
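The exact output varies by distribution, but it should look roughly like this:

fdisk -l
# ... entries for the existing disk, then:
# Disk /dev/sdb: 8 GiB, 8589934592 bytes, 16777216 sectors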
To use the new hard disk, we have to create a partition, then format and mount it.
Now create a partition on the newly added device at the DataNode.
Run the "fdisk /dev/sdb" command -
Format & Mount Partition the DataNode -
Run this command to format "mkfs.ext4 /dev/sdb1" -
And we want to mount Partition in the directory in which Hadoop Cluster Distributed File Storage -
Run this command to mount "mount /dev/sdb /dn1"
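Putting the two steps together (note that the mount point must exist before mounting):

# Format the new partition with an ext4 filesystem
mkfs.ext4 /dev/sdb1
# Create the mount point if it does not exist yet, then mount
mkdir -p /dn1
mount /dev/sdb1 /dn1
# Confirm the mount and its usable size
df -h /dn1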
Our hard disk is mounted. Start the DataNode daemon again ("hadoop-daemon.sh start datanode") so it contributes storage from /dn1.
As you can see, the DataNode is connected and contributes around 7.81 GiB.
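This can be confirmed from the NameNode with the same admin report as before; the Configured Capacity should now be roughly the usable size of the mounted ext4 filesystem (about 7.81 GiB of the 8 GiB disk, after filesystem overhead) instead of the DataNode's whole root disk:

hadoop dfsadmin -report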
Thus, by mounting a disk or partition of a specific size on the DataNode's data directory, we can limit how much storage a DataNode contributes to the Hadoop cluster.
TASK COMPLETED !!!!!!
Thank you for reading.