Painlessly setting up a Hadoop cluster and running the WordCount program


Preparation

First, start your virtual machine. I use CentOS 7, but the procedure differs little on other systems.

View local network information

Enter the virtual network editor

Enter NAT settings and view the following information

View network connection status

You can see the network successfully connected

Running ifconfig, we find there is no eth0, which is not what we are used to (if you already see eth0, you can skip this step). Remote SSH connections also do not work.

cd /etc/sysconfig/network-scripts/
mv ifcfg-ens33 ifcfg-eth0

Change network information

If you have eth0, you can execute it from here
Switch to root first; otherwise the editor will report that the file cannot be saved.


vim /etc/sysconfig/network-scripts/ifcfg-eth0

Edit the following information. Note that the IP and gateway must be the ones you recorded earlier from the NAT settings.
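The screenshot with the exact values is missing from the original; a typical static configuration looks like the following, where every address is a placeholder that must match the NAT information you recorded:

```ini
TYPE=Ethernet
BOOTPROTO=static
DEVICE=eth0
NAME=eth0
ONBOOT=yes
IPADDR=192.168.128.100
NETMASK=255.255.255.0
GATEWAY=192.168.128.2
DNS1=192.168.128.2
```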

Restart the network card, and you can see that the changes take effect

service network restart

If the network card fails to restart

vim /etc/default/grub

Add the following
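The screenshot is missing; the usual change, assuming the goal is to restore the legacy eth0 naming, is to append `net.ifnames=0 biosdevname=0` to the existing GRUB_CMDLINE_LINUX line, for example:

```ini
# keep your existing parameters and append the last two:
GRUB_CMDLINE_LINUX="rhgb quiet net.ifnames=0 biosdevname=0"
```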

Execute the following command to change our configuration

grub2-mkconfig -o /boot/grub2/grub.cfg

If it still doesn't work, reboot and try again.


Change host name

vim /etc/hostname

Reboot the computer by executing the reboot command

Finally, check whether the configuration is correct:

Clone virtual machine to obtain slave1 and slave2 nodes

Shut down the virtual machine and enter the virtual machine clone

Click Next

Go to the next step again

Complete cloning

Select installation location

Click finish and wait.

Configure parameter information of slave1 and slave2

Using the same method as above, configure the parameter information of slave1 and slave2. Remember to choose different IP addresses and set the host names to slave1 and slave2 respectively.

vim /etc/sysconfig/network-scripts/ifcfg-eth0

vim /etc/hostname

Reboot, then check that the configuration took effect.

Map host names to IP addresses

vim /etc/hosts

Add the following
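The added lines are not shown in the original; they map the three host names to the static IPs you chose (the addresses below are placeholders that must match your ifcfg-eth0 settings):

```text
192.168.128.100 master
192.168.128.101 slave1
192.168.128.102 slave2
```

Afterwards, `ping slave1` should reach the node by name.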

Check whether the configuration is successful

Configure passwordless SSH login

For convenience, I have combined the steps into a single command; just press Enter at every prompt during execution.

ssh-keygen -t rsa&&cd ~/.ssh/&&cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys&&chmod 600 ~/.ssh/authorized_keys&&cat ~/.ssh/authorized_keys&&ls

The results are as follows

Run this command on master, slave1, and slave2, and copy each node's public key (the red part in the screenshot above) into the master node's authorized_keys.
The final results are as follows:

Pass these public keys to the child nodes, then test whether passwordless login works.

scp ~/.ssh/authorized_keys root@slave1:~/.ssh/
scp ~/.ssh/authorized_keys root@slave2:~/.ssh/

Turn off firewall and SELinux

Do the following on all nodes:

 yum install iptables-services
 systemctl stop firewalld

On the master node, do the following:

vim /etc/selinux/config
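The edited line is not shown in the original; the standard change to disable SELinux, which takes effect after a reboot, is:

```ini
# /etc/selinux/config
SELINUX=disabled
```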

Install JDK

Download address:

Enter the folder where the jdk is placed and execute (remember to change to the name of your own compressed package)

mkdir -p /usr/local/java # Create the folder you want
tar -vzxf jdk-8u251-linux-x64.tar.gz -C /usr/local/java/ # Extract to the specified location

Check the name of the extracted directory.

At the bottom of /etc/profile (or the file you specify), add:

export JAVA_HOME=/usr/local/java/jdk1.8.0_251
export PATH=$PATH:$JAVA_HOME/bin

Apply the environment variables:

source /etc/profile

To see if the installation was successful:

java -version

Create a new user

adduser hadoop

Do the following

If you see the following information, the user was created successfully.

Give the hadoop user superuser privileges:

vim /etc/sudoers

Change to the following form
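The screenshot is missing; the usual edit adds a line for the hadoop user next to the existing root entry:

```text
root    ALL=(ALL)       ALL
hadoop  ALL=(ALL)       ALL
```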

Hadoop environment configuration

Download and install

Download website:
Place hadoop in the folder you specify, enter the directory, and execute the following command.

tar -vzxf hadoop-3.1.3.tar.gz -C /usr/&&cd /usr&&cd ./hadoop-3.1.3&&mkdir -p dfs/name&&mkdir -p dfs/data&&mkdir temp&&ls

Environment configuration

cd ./etc/hadoop/&&vim

Add the following environment variables

export JAVA_HOME=/usr/local/java/jdk1.8.0_251/


Add the following:

if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=/usr/local/java/jdk1.8.0_251  # path assumed to match the JDK installed above
fi

Open the slaves (Hadoop 2.x) or workers (Hadoop 3.x) file in the current folder

vim workers

Delete the hostname and add your own node name.
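The node list itself is not shown in the original; assuming the three host names used throughout this guide, with DataNodes running on the two slaves, workers would contain:

```text
slave1
slave2
```

(Some setups also list master so that it runs a DataNode as well.)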

vim /etc/profile

Add the following environment variables

export HADOOP_HOME=/usr/hadoop-3.1.3
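The original shows only the HADOOP_HOME export; the PATH line below is an assumed companion, but without it the hadoop, hdfs, and sbin scripts will not resolve from the shell:

```shell
export HADOOP_HOME=/usr/hadoop-3.1.3
# Assumed addition: put the Hadoop binaries and cluster scripts on the PATH
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

Remember to run `source /etc/profile` afterwards, as with the JDK.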

Change profile

Change sh file

cd /usr/hadoop-3.1.3/sbin/

In start-dfs.sh and stop-dfs.sh, add the HDFS user parameters at the top of both files, for example:

HDFS_SECONDARYNAMENODE_USER=root

The corresponding YARN user parameters should also be added at the top of start-yarn.sh and stop-yarn.sh.
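The full parameter lists are missing from the original; only HDFS_SECONDARYNAMENODE_USER=root survived in the text. The rest below are the standard Hadoop 3.x companions when running the daemons as root, stated here as assumptions:

```shell
# At the top of start-dfs.sh and stop-dfs.sh
# (all values except HDFS_SECONDARYNAMENODE_USER are assumed):
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root

# At the top of start-yarn.sh and stop-yarn.sh:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
```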


Change xml file

cd /usr/hadoop-3.1.3/etc/hadoop/
vim core-site.xml

Add the following information

               <description>Abase for other temporary   directories.</description>
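Only the description line above survived from the original screenshot. A typical core-site.xml for this layout is sketched below; the hostname master, port 9000, and the temp directory created during extraction are assumptions that must match your own setup:

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/hadoop-3.1.3/temp</value>
        <description>Abase for other temporary directories.</description>
    </property>
</configuration>
```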

vim hdfs-site.xml

Add the following information
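The original screenshot is missing; a typical hdfs-site.xml, assuming the dfs/name and dfs/data directories created during extraction and one replica per slave, is:

```xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/hadoop-3.1.3/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/hadoop-3.1.3/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>
```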

vim mapred-site.xml

Add the following information
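The screenshot is missing; the minimal mapred-site.xml for running MapReduce on YARN is:

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```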

vim yarn-site.xml

Add the following information
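The screenshot is missing; a minimal yarn-site.xml, assuming master hosts the ResourceManager, is:

```xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```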


Check success

hadoop version


hadoop classpath

Copy the printed information and add it to yarn-site.xml, as I do below.
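The screenshot of the added property is missing; the usual way to do this (the property name below is the standard one for this fix, the value is a placeholder) is:

```xml
<property>
    <name>yarn.application.classpath</name>
    <value>PASTE THE OUTPUT OF `hadoop classpath` HERE</value>
</property>
```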


Transfer and connect

Transfer to two child nodes

scp -r /usr/hadoop-3.1.3/ root@slave1:/usr/&&scp -r /usr/hadoop-3.1.3/ root@slave2:/usr/

Format namenode

/usr/hadoop-3.1.3/bin/hdfs namenode -format

Start the cluster
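The start command itself is not shown in the original; with this install location, the usual scripts (assumed here) are:

```
/usr/hadoop-3.1.3/sbin/start-dfs.sh
/usr/hadoop-3.1.3/sbin/start-yarn.sh
```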


Check whether it is opened successfully

hdfs dfsadmin -report

If live datanodes is not 0, it is successful

If unsuccessful, solution 1
If executing jps on a slave node shows no DataNode process, it is caused by running the /usr/hadoop-3.1.3/bin/hdfs namenode -format command multiple times:

Open hdfs-site.xml, find the two directory paths configured there, and delete everything under them on both the master and slave nodes.

Execute the following command again

/usr/hadoop-3.1.3/bin/hdfs namenode -format
hdfs dfsadmin -report

If unsuccessful, solution 2
If the report lists datanodes like the following

This is probably because the firewall was not turned off.
Execute the following command on all nodes:

systemctl stop firewalld

Execute the following command again

hdfs dfsadmin -report

Run the WordCount program

Randomly find several txt files to place in the specified path

hadoop dfs -mkdir -p /usr/hadoop-3.1.3/input&&hadoop dfs -put /path/to/your/txt/files/* /usr/hadoop-3.1.3/input&&hadoop dfs -ls /usr/hadoop-3.1.3/input

Note that the output path cannot exist in advance. If it exists, delete it with the following command:

hadoop dfs -rmr /usr/hadoop-3.1.3/output

Run the WordCount program:

hadoop jar /usr/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /usr/hadoop-3.1.3/input /usr/hadoop-3.1.3/output

The following results show that the operation is successful

View output folder

hadoop dfs -ls /usr/hadoop-3.1.3/output

Print the results

hadoop dfs -cat /usr/hadoop-3.1.3/output/part-r-00000
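As a side note, the counting that the WordCount example performs can be sketched locally in plain shell. This is only an illustration of the output format, not part of the cluster setup:

```shell
# Local sketch of what the WordCount job computes: split text into words,
# count occurrences, and print "word<TAB>count" pairs like part-r-00000.
printf 'hello world\nhello hadoop\n' > /tmp/wc_demo.txt
tr -s ' ' '\n' < /tmp/wc_demo.txt | sort | uniq -c | awk '{print $2 "\t" $1}'
# → hadoop  1
#   hello   2
#   world   1
```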

Tags: Hadoop vim network ssh

Posted on Sun, 07 Jun 2020 06:55:31 -0400 by jrolands