Hadoop | Static Partition | LVM
Amit Sharma
Hey guys, back with another article! In this article you will see how to perform static partitioning and how to achieve elasticity using LVM in Hadoop.
So let's start. I assume you have already created an HDFS cluster; you can follow the first half of this setup using the link below.
The first step is to connect the DataNodes with the NameNode to form an HDFS cluster.
core-site.xml (NameNode):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://0.0.0.0:9001</value>
  </property>
</configuration>

hdfs-site.xml (NameNode):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/nn</value>
  </property>
</configuration>
core-site.xml (DataNode 1):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://13.232.90.242:9001</value>
  </property>
</configuration>

hdfs-site.xml (DataNode 1):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/dn1</value>
  </property>
</configuration>
core-site.xml (DataNode 2):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://13.232.90.242:9001</value>
  </property>
</configuration>

hdfs-site.xml (DataNode 2):

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/dn2</value>
  </property>
</configuration>
hadoop namenode -format              # formats the NameNode
hadoop-daemon.sh start namenode      # starts the NameNode
hadoop-daemon.sh start datanode      # starts the DataNode (run on each DataNode)
hadoop dfsadmin -report              # prints the report of the HDFS cluster
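To quickly confirm that the daemons actually came up, you can also use jps (a standard JDK tool, not specific to this setup) on each machine:

jps    # on the NameNode it should list a NameNode process;
       # on each DataNode it should list a DataNode process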
Now we can check the report of the HDFS cluster.
Now we can proceed with the partition. We first need to attach a volume to both of the DataNodes, then create, format, and mount a partition on it as shown below.
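A minimal sketch, assuming the attached volume shows up as /dev/xvdf (the device name depends on your cloud/VM setup) and using the /dn1 directory configured in hdfs-site.xml above:

fdisk /dev/xvdf          # interactive: n (new partition), p (primary), w (write changes)
mkfs.ext4 /dev/xvdf1     # format the new partition with ext4
mkdir -p /dn1            # the directory used as dfs.data.dir
mount /dev/xvdf1 /dn1    # mount the partition on the DataNode directory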
Now we have to create the same partition on slave 2 as well, repeating the same fdisk, mkfs, and mount steps with /dn2 as the mount point.
Now we can check the report of the HDFS cluster again.
A static partition cannot be grown dynamically, so we need to use LVM. Below is the architecture of LVM.
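A minimal sketch of the LVM setup on slave 1, assuming the raw volume is again /dev/xvdf; the volume group and logical volume names follow the same style as the slave 2 names used later, and the 10G size is illustrative:

pvcreate /dev/xvdf                                         # initialize the disk as a physical volume
vgcreate hadoopSlave1 /dev/xvdf                            # create a volume group from it
lvcreate --size 10G --name hadoopSlave1LV hadoopSlave1     # carve out a logical volume
mkfs.ext4 /dev/hadoopSlave1/hadoopSlave1LV                 # format the logical volume
mount /dev/hadoopSlave1/hadoopSlave1LV /dn1                # mount it on the DataNode directory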
Similarly, you need to repeat the same LVM steps on slave node 2 to achieve elasticity there as well.
lvextend --size +5G /dev/hadoopSlave2/hadoopSlave2LV

The above command extends the logical volume by 5 GB.
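Note that lvextend alone only grows the logical volume; the filesystem on top must also be resized before HDFS can use the new space. Assuming the LV is formatted with ext4 as in the sketch above, it can be grown online:

resize2fs /dev/hadoopSlave2/hadoopSlave2LV    # grow the ext4 filesystem to fill the extended LV

Alternatively, lvextend -r (--resizefs) performs the LV extension and the filesystem resize in one step.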
Now the storage of the DataNodes can be increased dynamically. Hope you liked this article!! Thank you :-)