Limiting the Storage Contributed by a Data Node in a Hadoop Cluster
TASK DESCRIPTION:
In a Hadoop cluster, how do we contribute only a limited/specific amount of storage from a slave node to the cluster?
Hint: Linux partitions
TASK COMPLETION:
1. First, we create the Hadoop cluster on AWS and start the Name Node and Data Node services.
By default, the data node contributes the storage of its entire '/' drive (8 GB) to the cluster. Let's check the storage shared by the data node:
hadoop dfsadmin -report
2. To share only a limited amount of the slave node's storage with the cluster, we use the concept of Linux partitions: create an EBS volume and attach it to the data node instance, as sketched below.
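A minimal sketch of creating and attaching the 5 GB EBS volume with the AWS CLI; the availability zone, volume ID, and instance ID are placeholders for your own values (the same steps can also be done from the AWS web console):
# create a 5 GB volume in the data node's availability zone
aws ec2 create-volume --size 5 --availability-zone ap-south-1a --volume-type gp2
# attach it to the data node instance as /dev/xvdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdf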
Check the disk partitions on the data node:
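The standard Linux way to list the attached disks and their partitions (run as root):
fdisk -l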
3. Create a new 2 GB partition on the attached volume:
fdisk <device_name>
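Inside the interactive fdisk prompt, the dialog for carving out a 2 GB primary partition looks roughly like this (assuming the attached volume appears as /dev/xvdf):
fdisk /dev/xvdf
# n   -> new partition
# p   -> primary partition type
# 1   -> partition number
#        accept the default first sector
# +2G -> last sector: make the partition 2 GB
# w   -> write the partition table and exit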
4. Check that the new 2 GB partition /dev/xvdf1 has been created:
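lsblk should now show the 2 GB /dev/xvdf1 under the 5 GB /dev/xvdf:
lsblk /dev/xvdf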
5. Format the new partition with an ext4 filesystem:
mkfs.ext4 <device_name>
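For example, on the partition created above:
mkfs.ext4 /dev/xvdf1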
6. Mount the new 2 GB partition /dev/xvdf1 on the data node's HDFS storage directory /dn1:
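A sketch of the mount, assuming /dn1 is the directory where the data node stores HDFS blocks:
mkdir /dn1
mount /dev/xvdf1 /dn1
For this to take effect, /dn1 must be the directory configured as the data node's storage directory in hdfs-site.xml (the property is dfs.data.dir in Hadoop 1.x, dfs.datanode.data.dir in later releases), for example:
<property>
    <name>dfs.data.dir</name>
    <value>/dn1</value>
</property>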
7. Finally, verify that the slave node now contributes only the limited/specific amount of storage (/dn1) to the cluster: total size of the EBS volume = 5 GB, storage shared/contributed = 2 GB.
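After restarting the data node daemon, the report should show roughly 2 GB of configured capacity coming from this node (hadoop-daemon.sh is the standard Hadoop 1.x control script; adjust for your version):
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode
hadoop dfsadmin -report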
Here we have successfully performed the task!
Thanks to Mr. Vimal Daga (mentor) for giving such a research-oriented task, which helped me explore the core concepts of Big Data Hadoop.