Contributing a Specific Amount of Storage to the Hadoop Master

Saurav Rana
3 min read · Oct 18, 2020

Hi there! Today we are going to see how we can contribute a limited, specific amount of storage from a datanode to the Hadoop namenode. This is a pretty simple task if you know how to create partitions.

So let's get into it. I used:

  • A local VM as one datanode
  • An EC2 instance as another datanode
  • An EC2 instance as the master (namenode)

So let's see how it's done. First, I added an additional 12 GiB hard disk to my local VM, then created a 2 GiB partition on it, formatted it, and mounted it to a folder.
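A rough sketch of those steps, assuming the new disk shows up as /dev/sdb and using a hypothetical mount point /datanode_storage (check your actual device name with lsblk first):

```shell
# List block devices to find the newly added disk (assumed /dev/sdb here)
lsblk

# Create a 2 GiB partition interactively:
# inside fdisk type: n -> p -> <enter> -> <enter> -> +2G -> w
fdisk /dev/sdb

# Format the new partition with ext4
mkfs.ext4 /dev/sdb1

# Mount it on a folder that the datanode will use
mkdir -p /datanode_storage
mount /dev/sdb1 /datanode_storage

# Confirm roughly 2 GiB is available on the mount
df -h /datanode_storage
```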

Now we have mounted the partition, but there is one small issue: when we reboot the system, the partition will be unmounted. To avoid that, we add the same mount command to rc.local and make that file executable using chmod.
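On a RHEL/CentOS-style system that would look something like this (device and mount point are the assumed names from above; the rc.local path can differ by distribution):

```shell
# Re-mount the partition on every boot by appending the command to rc.local
echo "mount /dev/sdb1 /datanode_storage" >> /etc/rc.d/rc.local

# rc.local only runs at boot if it is executable
chmod +x /etc/rc.d/rc.local
```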

After doing that, we are ready to go. Now we just need to set up the datanode.
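The key part of the datanode setup is pointing its storage directory at the mounted partition, so only that partition's space is contributed. A minimal hdfs-site.xml sketch, assuming the partition is mounted at /datanode_storage (on Hadoop 2+ the property is dfs.datanode.data.dir; older Hadoop 1.x uses dfs.data.dir):

```xml
<!-- hdfs-site.xml on the datanode -->
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/datanode_storage</value>
  </property>
</configuration>
```

core-site.xml on the datanode must also point at the namenode's address (fs.defaultFS on Hadoop 2+, fs.default.name on 1.x) so it knows which master to join.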

This completes the datanode setup on the local VM, and you can verify it by running the jps command. Now let's set up the same on EC2.
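Starting the daemon and verifying it with jps looks roughly like this (the daemon script name varies slightly between Hadoop versions):

```shell
# Start the datanode daemon (hdfs --daemon start datanode on newer Hadoop)
hadoop-daemon.sh start datanode

# jps lists running JVM processes; a "DataNode" entry confirms it is up
jps
```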

For the EC2 datanode, I created an extra 1 GiB volume, attached it to my EC2 instance, and made a 500 MiB partition on it. All other steps are the same as on the local VM.
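If you prefer the AWS CLI over the console, creating and attaching the volume can be sketched like this (volume and instance IDs and the availability zone are placeholders; the volume must be in the same AZ as the instance):

```shell
# Create a 1 GiB EBS volume in the instance's availability zone
aws ec2 create-volume --size 1 --availability-zone us-east-1a

# Attach it to the instance; it typically appears as /dev/xvdf inside the VM
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/xvdf

# On the instance: partition 500 MiB (n -> p -> +500M -> w), format, mount
fdisk /dev/xvdf
mkfs.ext4 /dev/xvdf1
mkdir -p /datanode_storage
mount /dev/xvdf1 /datanode_storage
```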

Now we have mounted the partition and it is ready to go. Again, we just need to set up the datanode the same way.

Both datanodes have now been set up and connected to the master successfully. Let's check the results.
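The contributed capacity can be checked from the namenode with the dfsadmin report (on newer Hadoop the command is `hdfs dfsadmin -report`):

```shell
# Run on the namenode: shows total capacity plus a per-datanode breakdown
hadoop dfsadmin -report
```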

Here you can see that the local VM (dns-srv) has contributed around 2 GiB and the EC2 instance has contributed around 500 MiB. This completes our Hadoop setup.

Thanks for reading…!!!

