Task 4.1:- Hadoop Task: In a Hadoop cluster, how do you contribute a limited/specific amount of storage as a slave/datanode to the cluster?
Nitin Tilwani
We can think of it this way: suppose you have a hard disk of size 2GiB and you only want to contribute 1GiB of that disk to the cluster.
We will do this task in several steps:
STEP1:- First, as I said, we attach a hard disk of size 2GiB to the machine that will run the datanode.
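If you are on a virtual machine, the disk is attached through the hypervisor settings; either way, you can confirm the kernel sees it. A quick check (the device name /dev/sdb matches what we use in the next steps, but it may differ on your system):

lsblk    # the new 2GiB disk should appear in the list, e.g. as sdb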
STEP2:- Next, create a partition of the size you want to contribute to the cluster; here I want to give 1GiB out of the 2GiB hard disk, as sketched below.
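A minimal fdisk session that carves out the 1GiB partition looks like this (you type the answers as fdisk prompts for them):

fdisk /dev/sdb
#   n        -> create a new partition
#   p        -> primary
#   1        -> partition number
#   <Enter>  -> accept the default first sector
#   +1G      -> last sector: make the partition 1GiB
#   w        -> write the partition table and exit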
STEP3:- Running the "fdisk -l" command, you can see that I have created a 1GiB partition on the device /dev/sdb1.
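The listing will look roughly like this (exact sector numbers depend on where the partition starts):

fdisk -l /dev/sdb
# Device     Boot Start     End Sectors Size Id Type
# /dev/sdb1        2048 2099199 2097152   1G 83 Linux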
STEP4:- Raw space can only hold files once it has been partitioned, then formatted, and finally mounted (connected) to a folder. We did the partitioning in the previous step; now we format and mount. There are many format/filesystem types to choose from depending on what you need, such as ext4, ext3, vfat, and fat, but here we choose ext4.
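Formatting the new partition as ext4 is a single command (it erases whatever was on the partition, so double-check the device name first):

mkfs.ext4 /dev/sdb1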
STEP5:- After formatting, you have to mount the partition to a folder; inside that folder you can then store your data by creating directories and files. For mounting we use the mount command with whichever folder you want; I have mounted it to the folder tp in the / directory.
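The folder must exist before mounting, and df confirms that roughly 1GiB is now available there:

mkdir /tp
mount /dev/sdb1 /tp
df -h /tp    # should show a ~1GiB filesystem mounted on /tp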
STEP6:- Next, open Hadoop's hdfs-site.xml file and point the datanode at that mounted folder, tp, inside the / directory.
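A minimal entry for this goes inside the <configuration> block of hdfs-site.xml. Note that dfs.datanode.data.dir is the property name in Hadoop 2.x and later; Hadoop 1.x uses dfs.data.dir instead, so match your version:

<property>
  <name>dfs.datanode.data.dir</name>
  <value>/tp</value>
</property>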
STEP7:- Finally, start the datanode so that it connects to our master (namenode) and contributes only that restricted space. You can see that the allocated space is 975.9MB, which is approximately 1GiB; the small shortfall is filesystem overhead.
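Starting the daemon and checking the capacity the master now sees (script names vary slightly by Hadoop version; hadoop-daemon.sh is the classic form):

hadoop-daemon.sh start datanode
# then, on the master:
hadoop dfsadmin -report    # Configured Capacity for this node shows ~975.9MB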
So this is how you can limit the storage a datanode contributes to your Hadoop cluster.