1, Prepare the virtual machines
Change the IP address of the first virtual machine (hadoop01) to 192.168.65.101
Change the IP address of the second virtual machine (hadoop02) to 192.168.65.102
Change the IP address of the third virtual machine (hadoop03) to 192.168.65.103
hadoop01, hadoop02 and hadoop03 all use NAT network mode. Set hadoop01's memory to 1 GB (2 GB is recommended if the host has 16 GB of memory), hadoop02 and hadoop03 to 512 MB, and give each machine 2 CPU cores.
cd /etc/sysconfig/network-scripts    # enter the network configuration directory
dir ifcfg*                           # list the network card config files; the one to edit here is ifcfg-ens33
vim ifcfg-ens33
# or equivalently:
vim /etc/sysconfig/network-scripts/ifcfg-ens33
2, Configuration file content
TYPE=Ethernet
BOOTPROTO=static          # change to static for NAT
NAME=ens33
UUID=4cc9c89b-cf9e-4847-b9ea-ac713baf4cc8
DEVICE=ens33
DNS1=114.114.114.114      # DNS server (setting it to the gateway address also works)
ONBOOT=yes                # bring this network card up at boot
IPADDR=192.168.65.101     # fixed IP address
NETMASK=255.255.255.0     # subnet mask
GATEWAY=192.168.65.2      # must match the gateway configured for NAT in the VM software; if they differ you cannot connect
3, Restart the network
# reload the configuration files
sudo nmcli c reload
# restart the network (the equivalent of the old `service network restart`)
nmcli c up ens33
ens33 is the name of the network card. On the virtual machine it should be this by default, but a physical machine may use the more common eth0.
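To confirm that the static address took effect after the restart, a quick check (not part of the original steps) is:
ip addr show ens33    # should show inet 192.168.65.101/24 on hadoop01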
4, Turn off the firewall
systemctl stop firewalld.service      # stop the firewall service
systemctl disable firewalld.service   # keep the firewall from starting at boot
systemctl restart firewalld.service   # restart the firewall so configuration changes take effect
systemctl enable firewalld.service    # start the firewall at boot
firewall-cmd --state                  # check the firewall status
5, Modify the host names
vi /etc/hostname
Set the three host names to hadoop01, hadoop02 and hadoop03 respectively.
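As an alternative to editing /etc/hostname directly, the host name can also be set with hostnamectl (a sketch for CentOS 7 and later; run the matching command on each machine):
hostnamectl set-hostname hadoop01    # on the first machine
hostnamectl set-hostname hadoop02    # on the second machine
hostnamectl set-hostname hadoop03    # on the third machine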
6, Modify the hosts file
vi /etc/hosts
Add the following IP address mappings to the hosts file on each of the three machines in turn
192.168.65.101 hadoop01
192.168.65.102 hadoop02
192.168.65.103 hadoop03
7, Restart the three machines
reboot
On the 192.168.65.101 machine, ping hadoop02 to verify that the name resolves.
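For example, on hadoop01 (the -c 3 option just limits the ping to three packets):
ping -c 3 hadoop02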
8, Set up passwordless SSH login
1 Generate public and private keys on the three machines
All three machines generate a public/private key pair by executing ssh-keygen
After executing the command, press Enter three times
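In command form (using the defaults; pressing Enter accepts the default key file location and an empty passphrase):
ssh-keygen    # press Enter three times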
2 Copy the public keys to the same machine
All three machines execute the command ssh-copy-id hadoop01, which copies each machine's public key to hadoop01.
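In command form (run on hadoop01, hadoop02 and hadoop03 in turn; being prompted for hadoop01's root password on the first run is expected):
ssh-copy-id hadoop01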
3 Copy the authentication file from the first machine to the other machines
Copy the first machine's collected public keys (its authorized_keys file) to the other machines
Execute the following commands on the first machine
scp /root/.ssh/authorized_keys hadoop02:/root/.ssh
scp /root/.ssh/authorized_keys hadoop03:/root/.ssh
4 Test remote login
ssh hadoop02
If you are logged in directly without entering a password, the setup succeeded; run exit to log out.
9, Clock synchronization of the three machines
Clock synchronization over the network
The virtual machines must be connected to the external network for clock synchronization to work.
Synchronize against the Alibaba Cloud clock synchronization server:
ntpdate ntp4.aliyun.com
If the following error is reported
-bash: ntpdate: command not found
execute the following commands in order:
# add the wlnmp yum repository
rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
# install the time synchronization software
yum install wntp
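After the installation, running the synchronization again and checking the system time is a quick sanity check (not part of the original steps):
ntpdate ntp4.aliyun.com
date    # the time should now be correct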
Configure a scheduled task (cron job) on each of the three machines in turn so that the clock is synchronized directly with the Alibaba Cloud server:
crontab -e
# the five fields are minute, hour, day of month, month and day of week; */1 in the minute field means every minute
*/1 * * * * /usr/sbin/ntpdate ntp4.aliyun.com;
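To confirm the entry was saved, the current user's cron jobs can be listed (a simple check, not part of the original steps):
crontab -l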
10, Install the JDK on all three machines
Check for the built-in OpenJDK:
rpm -qa | grep java
If any packages are listed, uninstall the OpenJDK that comes with the system.
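One common way to remove everything found by the check above (a sketch; --nodeps forces removal without dependency checks, so review the list printed by rpm -qa first):
rpm -qa | grep java | xargs rpm -e --nodeps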
Create directories on all three machines
mkdir -p /opt/servers     # installation path for all software
mkdir -p /opt/software    # storage path for all software archives
Upload the JDK archive to /opt/software and extract it:
tar -xvzf jdk-8u65-linux-x64.tar.gz -C ../servers/    # -C specifies the extraction target directory
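To confirm the extraction worked (a quick check, not in the original steps):
ls /opt/servers/    # should now contain jdk1.8.0_65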
JDK download address:
Link: https://pan.baidu.com/s/1b0l0HsVVEnCvfXHPcjgmYg
Extraction code: 1bre
Configure the environment variables:
vim /etc/profile
Press "G" to jump to the end of the file, then press "o" to open a new line, and add the following two lines at the end of the file
export JAVA_HOME=/opt/servers/jdk1.8.0_65
export PATH=$JAVA_HOME/bin:$PATH
To make the configuration file take effect, execute source /etc/profile
Test with java -version
Send the files to hadoop02 and hadoop03
# -r means recursive copy
scp -r /opt/servers/jdk1.8.0_65/ hadoop02:/opt/servers/
scp -r /opt/servers/jdk1.8.0_65/ hadoop03:/opt/servers/
scp /etc/profile hadoop02:/etc/
scp /etc/profile hadoop03:/etc/
After the files have been sent, execute source /etc/profile on each node to make the configuration file take effect
Test with java -version
11, Modify the hosts file on Windows
Add the following mappings to the Windows hosts file (C:\Windows\System32\drivers\etc\hosts)
192.168.65.101 hadoop01
192.168.65.102 hadoop02
192.168.65.103 hadoop03
After the change, open a DOS (cmd) window and execute the command ping hadoop01 to test