Hadoop High Availability (HA) Installation

1. Distribute the JDK
2. Time synchronization
3. Configuration file check before installation
4. Passwordless SSH from the NameNode to the other three machines
5. Passwordless SSH between the two NameNodes
6. Modify the NameNode configuration
7. ZooKeeper installation
8. Start the JournalNodes
9. MapReduce configuration


Before we begin

This article assumes that a Hadoop pseudo-distributed installation has already been completed.
It is a beginner-friendly, step-by-step guide to Hadoop high availability (spelling out most of the details), written to complete a college assignment.

Let's first look at what a high-availability Hadoop setup generally requires.

A high-availability Hadoop cluster here uses four Linux virtual machines, which together host all of the HA roles:
a three-node ZooKeeper cluster, a three-node JournalNode cluster, two ZKFCs, two NameNodes, three DataNodes, two ResourceManagers, and three NodeManagers.
Each of the four virtual machines therefore takes on several roles.

For convenience, you can optionally download Xshell and use it to connect to the virtual machines directly.

We set up four Linux virtual machines named node01, node02, node03, and node04 (this was already done during the Hadoop pseudo-distributed installation).

Installation procedure

1. Distribute the JDK

The four virtual machines were already prepared during the pseudo-distributed installation. First, distribute the JDK package on node01 to the other three machines.

//` is a backtick, not a single quote; it is the key to the left of the number 1 key
scp jdk-7u67-linux-x64.rpm node02:`pwd`
scp jdk-7u67-linux-x64.rpm node03:`pwd`
scp jdk-7u67-linux-x64.rpm node04:`pwd`

Then enter the command ll (two lowercase Ls) on each virtual machine to check that the JDK file is present in the same path on every machine. A successful result looks like this:

If Xshell is installed, you can click the three horizontal bars in the lower-right corner of the window, click "All sessions", and then type ll in the command bar at the bottom; the command will be executed on every connected machine at once.

1.1 Install the JDK

Run the rpm installation command on node02, node03, and node04:

rpm -i jdk-7u67-linux-x64.rpm

On node01, enter:

cd /etc

From this directory, distribute the profile file to node02, node03, and node04:

scp profile node02:`pwd`
scp profile node03:`pwd`
scp profile node04:`pwd`
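
For reference, the Java- and Hadoop-related lines in /etc/profile (set up during the pseudo-distributed installation) look roughly like the following; the exact paths are an assumption based on the JDK version and directories used in this article:

export JAVA_HOME=/usr/java/jdk1.7.0_67
export HADOOP_HOME=/opt/ldy/hadoop-2.6.5
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin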

Reload the file using the Xshell "All sessions" bar, or by entering the command on each machine:

source /etc/profile

Then, using the Xshell "All sessions" bar, or on each machine individually, run:

jps

to check whether the JDK is installed on node02, node03, and node04.
A successful result looks like this:

2. Time synchronization

Using the Xshell "All sessions" bar, or on each machine individually, run:

date

to view the current time on each machine. If the clocks differ too much, some processes in the cluster will not run.
If the times are not synchronized, use the following commands to synchronize them.
First, using the Xshell "All sessions" bar, or on each machine, run:

yum -y install ntp

to install the NTP time-synchronization tool. Then run the synchronization command on each machine:

ntpdate time1.aliyun.com

This command synchronizes the time with an Alibaba Cloud time server.

3. Configuration file check before installation

Using the Xshell "All sessions" bar, or on each machine individually, run:

cat /etc/sysconfig/network

Check that HOSTNAME is correct.
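For example (this is only an illustration for a CentOS 6 style system; the exact contents may vary), on node01 the file would typically contain something like:

NETWORKING=yes
HOSTNAME=node01
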
Next, using the Xshell "All sessions" bar, or on each machine individually, run:

cat /etc/hosts

Check whether the IP-to-hostname mappings are correct. If not, edit the file.
Success looks like this:
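
As a rough example (the IP addresses below are placeholders; substitute the addresses of your own virtual machines), the mapping lines in /etc/hosts should look like:

192.168.1.101 node01
192.168.1.102 node02
192.168.1.103 node03
192.168.1.104 node04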

Next, using the Xshell "All sessions" bar, or on each machine individually, run:

cat /etc/sysconfig/selinux

Check that SELINUX=disabled is set.
Finally, using the Xshell "All sessions" bar, or on each machine individually, run:

service iptables status

to confirm that the firewall is turned off.
Success looks like this:

4. Passwordless SSH from the NameNode to the other three machines

1. In the home directory, run:

ll -a

Check whether there is a .ssh directory. If not, enter the command:

ssh localhost

Remember! Remember to enter the command exit after ssh localhost, because the node0X you are in after ssh is not the node0X where you typed the command; they are two different sessions.
To get to the home directory, just enter cd; in other words, when your prompt shows [node02 ~], you are in the home directory.

The figure is as follows:

2. Once that is done and the .ssh directory exists, enter:

cd .ssh
ll
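
If node01 does not yet have an id_dsa.pub here (it normally will after the pseudo-distributed installation), it can be generated with the same command this article later uses for node02:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys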


3. There will be several files here. Send node01's public key to the other three machines:

scp id_dsa.pub node02:`pwd`/node01.pub
scp id_dsa.pub node03:`pwd`/node01.pub
scp id_dsa.pub node04:`pwd`/node01.pub

The schematic diagram is as follows:

If there is no .ssh directory on node02, node03, or node04, remember to run ssh localhost there first, and don't forget to exit.

4. In node02's .ssh directory you can now see node01.pub. Append the public key to authorized_keys:

cat node01.pub >> authorized_keys

Then, on node01, run ssh node02 to check that it is passwordless. Remember to exit afterwards.
After this test succeeds, append node01.pub in the same way on node03 and node04.
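
Concretely, on node03 and on node04 this is the same append as on node02 (shown here for completeness):

cd ~/.ssh
cat node01.pub >> authorized_keys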

5. Passwordless SSH between the two NameNodes

1. node01 and node02 must be passwordless to each other: node01 can already log in to node02 without a password, so now make node02 able to log in to node01 without a password. On node02, run:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Then try ssh localhost (and exit).
Next, distribute the key to node01:

scp id_dsa.pub node01:`pwd`/node02.pub

Then, in node01's .ssh directory:

cat node02.pub >> authorized_keys

Finally, verify that node02 can log in to node01 without a password.

6. Modify the NameNode configuration

Go to node01 first.
Start with the hdfs-site.xml file. The commands to reach the folder are:

cd /opt/ldy/hadoop-2.6.5/etc/hadoop/
ll

You can see the configuration files listed in this folder.

Enter the command:

vi hdfs-site.xml

The configuration file opens.

Remove any SecondaryNameNode (SNN) configuration left over in the file (ignore this if there is none), and then configure the file as follows:

//Press the i key on the keyboard to start editing the document
//When the edits are complete, press the Esc key, then type :wq to save and exit
//Then add the following inside <configuration>:
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>node01:50090</value>
</property>
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>node01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>node02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>node01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>node02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://node01:8485;node02:8485;node03:8485/mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/var/ldy/hadoop/ha/jn</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_dsa</value>
</property>

You can copy and paste this directly; if your setup differs, adjust the corresponding values.
Next, configure core-site.xml, still in the same folder.

vi core-site.xml
//Then add the following inside <configuration>:
<!-- Cluster name: mycluster -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/var/ldy/hadoop/pseudo</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>node02:2181,node03:2181,node04:2181</value>
</property>

Then edit the slaves file:

vi slaves
//Change the contents to the DataNode hosts:
node02
node03
node04

Hadoop was installed on node01 during the pseudo-distributed setup, and its file configuration is now finished here. Next, distribute Hadoop to the other three machines.

cd /opt

Go into this folder and distribute the ldy folder (the self-chosen directory that holds Hadoop) to node02, node03, and node04:

scp -r ldy/ node02:`pwd`
scp -r ldy/ node03:`pwd`
scp -r ldy/ node04:`pwd`

Then send hdfs-site.xml and core-site.xml to node02, node03, and node04:

cd /opt/ldy/hadoop-2.6.5/etc/hadoop/
scp hdfs-site.xml core-site.xml node02:`pwd`
scp hdfs-site.xml core-site.xml node03:`pwd`
scp hdfs-site.xml core-site.xml node04:`pwd`

This completes the configuration and installation of Hadoop.
Next, install ZooKeeper.

7. ZooKeeper installation

First go to node02, put the ZooKeeper installation package in the home directory, and extract it:

tar xf zookeeper-3.4.6.tar.gz -C /opt/ldy

Then modify ZooKeeper's configuration file:

cd /opt/ldy/zookeeper-3.4.6/conf
ll

You will see a zoo_sample.cfg file. ZooKeeper needs this file renamed to zoo.cfg. It is recommended to copy zoo_sample.cfg rather than rename it, so that if the configuration ever goes wrong the original file can be reused. So copy the file first:

cp zoo_sample.cfg zoo.cfg    //copy zoo_sample.cfg to zoo.cfg

Then modify zoo.cfg:

vi zoo.cfg
//Change dataDir to:
dataDir=/var/ldy/zk
//And add at the end:
server.1=node02:2888:3888
server.2=node03:2888:3888
server.3=node04:2888:3888
//2888 is the port for communication between the leader and the followers; 3888 is the port used for the election when the leader goes down

Then distribute ZooKeeper to the other nodes (run this from /opt/ldy on node02):

scp -r zookeeper-3.4.6/ node03:`pwd`
scp -r zookeeper-3.4.6/ node04:`pwd`

Then, on each machine, go to /opt/ldy and run ll to check that the distribution succeeded.
Next, create the dataDir path from the configuration file on each machine, and write each machine's myid (the number must match its server.N entry in zoo.cfg):

mkdir -p /var/ldy/zk

For node02:

echo 1 > /var/ldy/zk/myid
cat /var/ldy/zk/myid

For node03:

echo 2 > /var/ldy/zk/myid
cat /var/ldy/zk/myid

For node04:

echo 3 > /var/ldy/zk/myid
cat /var/ldy/zk/myid

Then configure /etc/profile:

vi /etc/profile
//Add at the end:
export ZOOKEEPER_HOME=/opt/ldy/zookeeper-3.4.6
export PATH=$PATH:/usr/java/jdk1.7.0_67/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin

Then distribute /etc/profile to node03 and node04:

scp /etc/profile node03:/etc
scp /etc/profile node04:/etc

On node02, node03, and node04, run source /etc/profile to reload the file.
To verify, type zkCli.s and press the Tab key; if it auto-completes to zkCli.sh, the PATH is set correctly.
Then start ZooKeeper. Using the Xshell "All sessions" bar (ZooKeeper is installed on node02, node03, and node04), run:

zkServer.sh start

Then use

zkServer.sh status

to check the status of each ZooKeeper node. The result looks like this:
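
Roughly speaking (the exact output formatting may vary between versions), one of the three nodes should report itself as the leader and the other two as followers, for example:

Mode: leader      (on one node)
Mode: follower    (on the other two)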

If ZooKeeper fails to start, change JAVA_HOME in /etc/profile to an absolute path:

cd /etc
vi profile


All three machines need this change.

8. Start the JournalNodes

The JournalNodes are started so that edit data can be synchronized between the two NameNodes.
Start the JournalNode on node01, node02, and node03:

hadoop-daemon.sh start journalnode

You can use the command jps to check whether it started.

Choose either one of the NameNodes and run:

hdfs namenode -format

Do not run this on the other NameNode, otherwise the clusterID will change and the two NameNodes will no longer belong to the same cluster. Success looks like this:

Then start the NameNode that was just formatted:

hadoop-daemon.sh start namenode

Enter jps and the NameNode process will appear.
Then the other NameNode needs to synchronize the metadata. On it, enter:

hdfs namenode -bootstrapStandby

Enter jps there, and the NameNode process appears as well.
Then format ZKFC:

hdfs zkfc -formatZK

On node02, run zkCli.sh to open the ZooKeeper client and check whether the Hadoop HA znode has been created:

zkCli.sh
ls /
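
If the ZKFC formatting worked, the listing should include a hadoop-ha znode, roughly like this (the zookeeper znode is always present):

[hadoop-ha, zookeeper]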


Then start the HDFS cluster from node01:

start-dfs.sh

If a daemon does not start, check the logs on that node.

If everything is up, enter jps; the results on the four machines look like this:
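
Based on the role layout in this article (and with the slaves file listing node02, node03, and node04), the processes you would roughly expect at this point are:

node01: NameNode, JournalNode, DFSZKFailoverController
node02: NameNode, DataNode, JournalNode, DFSZKFailoverController, QuorumPeerMain
node03: DataNode, JournalNode, QuorumPeerMain
node04: DataNode, QuorumPeerMain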




Access node01:50070 and node02:50070 in a browser. One should be in the active state and the other in the standby state.


At this point, the Hadoop high-availability installation is basically complete.
Hadoop shutdown commands:

//Shut down the cluster:
stop-dfs.sh
//Shut down ZooKeeper:
zkServer.sh stop

Next, prepare for MapReduce.

9. MapReduce configuration

Go to the Hadoop configuration directory on node01:

cd /opt/ldy/hadoop-2.6.5/etc/hadoop/
ll

You will see a mapred-site.xml.template file. To be safe, copy it as a backup rather than renaming it, and name the copy mapred-site.xml:

cp mapred-site.xml.template mapred-site.xml

Then add the following property in mapred-site.xml:

vi mapred-site.xml
<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

In the same directory, add the following properties to yarn-site.xml:

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>cluster1</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>node03</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>node04</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>node02:2181,node03:2181,node04:2181</value>
</property>

Then send these two modified configuration files to node02, node03, and node04:

scp mapred-site.xml yarn-site.xml node02:`pwd`
scp mapred-site.xml yarn-site.xml node03:`pwd`
scp mapred-site.xml yarn-site.xml node04:`pwd`

Since node03 and node04 are the ResourceManagers, they need passwordless SSH to each other. First make node03 able to log in to node04.
Generate a key in node03's .ssh directory:

cd .ssh
ssh-keygen -t dsa -P '' -f ./id_dsa
//And append it to authorized_keys
cat id_dsa.pub >> authorized_keys

Use ssh localhost to verify that no password is required, and don't forget to exit.
Then distribute node03's public key to node04:

scp id_dsa.pub node04:`pwd`/node03.pub

In node04's .ssh directory, append node03.pub to authorized_keys:

cat node03.pub >> authorized_keys

From node03, run ssh node04 to confirm it is passwordless.
Then do the same in the other direction on node04:

//Generate the key in node04's .ssh directory
ssh-keygen -t dsa -P '' -f ./id_dsa
//And append it to authorized_keys
cat id_dsa.pub >> authorized_keys
//Use ssh localhost to verify that no password is needed; don't forget to exit
//Distribute node04's public key to node03
scp id_dsa.pub node03:`pwd`/node04.pub
//In node03's .ssh directory, append node04.pub
cat node04.pub >> authorized_keys
//From node04, ssh node03 to check that it is passwordless

1. Start ZooKeeper: in all sessions (node02, node03, node04), run zkServer.sh start
2. Start HDFS: on node01, run start-dfs.sh
3. Start YARN: on node01, run start-yarn.sh
4. Start the ResourceManager on node03 and node04 respectively:

yarn-daemon.sh start resourcemanager

5. Run jps in all sessions to check that every process is present.
The result looks like this:
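
Based on the role layout in this article, the full set of processes after startup is roughly:

node01: NameNode, JournalNode, DFSZKFailoverController
node02: NameNode, DataNode, JournalNode, DFSZKFailoverController, QuorumPeerMain, NodeManager
node03: DataNode, JournalNode, QuorumPeerMain, NodeManager, ResourceManager
node04: DataNode, QuorumPeerMain, NodeManager, ResourceManager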




Visit node03:8088 in a browser to view what the ResourceManager is managing.

With this, the installation is basically complete.
Finally, shut down the cluster:

//Shut down the cluster:
//On node01:
stop-dfs.sh
//On node01 (stops the NodeManagers):
stop-yarn.sh
//On node03 and node04:
yarn-daemon.sh stop resourcemanager
//On node02, node03, and node04:
zkServer.sh stop
