Hadoop | Static Partition | LVM

Hey guys, back with another article!! In this article you will see how to create a static partition and how to achieve elasticity using LVM in Hadoop.

So let's start. I assume you have already created an HDFS cluster; the first half of that setup is covered in my earlier article.


The first step is to connect the datanodes with the namenode to form an HDFS cluster.



On the NameNode, core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://0.0.0.0:9001</value>
    </property>
</configuration>




------------------------------------------------------------------------


On the NameNode, hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.name.dir</name>
        <value>/nn</value>
    </property>
</configuration>

On DataNode 1, core-site.xml (pointing to the NameNode's IP):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://13.232.90.242:9001</value>
    </property>
</configuration>




------------------------------------------------------------------------


On DataNode 1, hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.data.dir</name>
        <value>/dn1</value>
    </property>
</configuration>

On DataNode 2, core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://13.232.90.242:9001</value>
    </property>
</configuration>




------------------------------------------------------------------------


On DataNode 2, hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.data.dir</name>
        <value>/dn2</value>
    </property>
</configuration>



hadoop namenode -format            -----> formats the namenode
hadoop-daemon.sh start namenode    -----> starts the namenode

hadoop-daemon.sh start datanode    -----> starts the datanode (run this on each datanode)

hadoop dfsadmin -report            -----> gives the report of the HDFS cluster

Now we can check the report of the HDFS cluster.


Now we can proceed with the partition. First, we need to attach a volume to both of the datanodes.
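As a rough sketch, the steps on slave 1 look like this (the device name /dev/xvdf is an assumption — use whatever name fdisk -l shows for the volume you attached):

fdisk -l                      -----> list the disks and confirm the newly attached volume
fdisk /dev/xvdf               -----> create a new partition (n -> p -> 1 -> accept defaults -> w)
mkfs.ext4 /dev/xvdf1          -----> format the new partition with ext4
mount /dev/xvdf1 /dn1         -----> mount it on the dfs.data.dir directory of datanode 1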


Now we have to create the same partition on slave 2 as well.
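The commands are the same; only the mount point changes to the directory configured as dfs.data.dir on slave 2 (the device name is again an assumption):

fdisk /dev/xvdf               -----> create the partition as before
mkfs.ext4 /dev/xvdf1          -----> format it with ext4
mount /dev/xvdf1 /dn2         -----> mount it on the dfs.data.dir directory of datanode 2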


Now we can check the report of the HDFS cluster again.


A static partition will not grow dynamically, so we need to use LVM. Below is the architecture of LVM.

[Image: LVM architecture diagram]
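In practice, the LVM setup on slave 1 looks roughly like the sketch below. The device name /dev/xvdf, the 10 GB size, and the volume group and logical volume names (hadoopSlave1 / hadoopSlave1LV) are assumptions, mirroring the naming used for slave 2 later in the article:

pvcreate /dev/xvdf                                        -----> initialise the disk as an LVM physical volume
vgcreate hadoopSlave1 /dev/xvdf                           -----> create a volume group from it
lvcreate --size 10G --name hadoopSlave1LV hadoopSlave1    -----> carve out a logical volume
mkfs.ext4 /dev/hadoopSlave1/hadoopSlave1LV                -----> format the logical volume
mount /dev/hadoopSlave1/hadoopSlave1LV /dn1               -----> mount it on the datanode directory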



Similarly, you need to do the same on slave node 2 to achieve elasticity.
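The flow on slave 2 is the same; the volume group and logical volume names below match the ones used in the lvextend command that follows (the device name and initial size are assumptions). Once the logical volume is mounted, it can be extended on the fly:

pvcreate /dev/xvdf
vgcreate hadoopSlave2 /dev/xvdf
lvcreate --size 10G --name hadoopSlave2LV hadoopSlave2
mkfs.ext4 /dev/hadoopSlave2/hadoopSlave2LV
mount /dev/hadoopSlave2/hadoopSlave2LV /dn2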

lvextend   --size  +5G   /dev/hadoopSlave2/hadoopSlave2LV

The above command extends the logical volume by 5 GB.
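Note that lvextend only grows the logical volume itself; for the datanode to actually see the extra space, the filesystem on it has to be grown as well. Assuming the LV was formatted with ext4 as sketched above, it can be resized online while still mounted:

resize2fs /dev/hadoopSlave2/hadoopSlave2LV    -----> grow the ext4 filesystem to fill the extended logical volume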



Now the storage contributed by the datanodes can be increased dynamically. Hope you liked this article!! Thank you :-)


