Building a Hadoop environment in HA mode

1, Overview of Hadoop HA

1. HA (High Availability) means the service stays continuously available; interruptions of service are not allowed.

2. The key strategy for achieving high availability is to eliminate single points of failure. Strictly speaking, HA should be considered per component: HA of HDFS and HA of YARN.

3. Before Hadoop 2.0, the NameNode was a single point of failure (SPOF) in an HDFS cluster.

4. The NameNode affects HDFS cluster availability mainly in the following two ways
(1) If the NameNode machine crashes, the cluster is unavailable until an administrator restarts it.
(2) When the NameNode machine needs a software or hardware upgrade, the cluster cannot be used during the upgrade.

5. The HDFS HA feature solves the above problems by configuring two NameNodes, one Active and one Standby, which provides a hot standby for the NameNode inside the cluster. If the Active NameNode fails, for example because the machine crashes or has to be taken down for maintenance, the NameNode role can quickly be switched to the other machine.

2, HDFS-HA introduction

2.1 key points of HDFS-HA

1. The metadata management mode must be changed
Each NameNode keeps a copy of the metadata in memory. Only the Active NameNode may write to the edits log, while both NameNodes can read the edits. The shared edits are kept in shared storage (managed in this deployment by a JournalNode quorum).

2. A state management module is required
A ZKFailoverController (ZKFC) runs on each NameNode host. Each ZKFC monitors its own NameNode and records its state in ZooKeeper. When a state switch is required, the ZKFC performs the switch, and split-brain must be prevented during the switch.

3. Passwordless ssh login must be possible between the two NameNodes.

4. Fencing must be enforced, i.e. only one NameNode may provide service to the outside at any given time.

2.2 HDFS-HA automatic failover mechanism

The command hdfs haadmin can be used for manual failover, but in that mode the system will not automatically transfer the Active role to the Standby NameNode even when the Active NameNode has failed, so automatic failover needs to be configured. Automatic failover adds two new components to the HDFS deployment: ZooKeeper and the ZKFailoverController (ZKFC) process. ZooKeeper is a highly available service that maintains a small amount of coordination data, notifies clients when that data changes, and monitors clients for failure. HA automatic failover relies on the following ZooKeeper features:

1. Failure detection: each NameNode in the cluster maintains a persistent session in ZooKeeper. If the machine crashes, its session in ZooKeeper expires and ZooKeeper notifies the other NameNode that a failover needs to be triggered.

2. Active NameNode election: ZooKeeper provides a simple mechanism to elect exactly one node as Active. If the current Active NameNode crashes, another node can acquire a special exclusive lock in ZooKeeper indicating that it should become the Active NameNode.

3. ZKFC is the other new component in automatic failover. It is a ZooKeeper client that also monitors and manages the state of the NameNode. Every host that runs a NameNode also runs a ZKFC process, which is responsible for:
(1) Health monitoring: ZKFC periodically pings its local NameNode with a health-check command. As long as the NameNode responds with a healthy status in time, ZKFC considers the node healthy. If the node crashes, freezes, or otherwise enters an unhealthy state, the health monitor marks it as unhealthy.

(2) ZooKeeper session management: while the local NameNode is healthy, ZKFC keeps a session open in ZooKeeper. If the local NameNode is Active, ZKFC also holds a special lock znode, which relies on ZooKeeper's support for ephemeral nodes: if the session expires, the lock znode is deleted automatically.

(3) ZooKeeper-based election: if the local NameNode is healthy and ZKFC finds that no other node currently holds the lock znode, it tries to acquire the lock itself. If it succeeds, it has won the election and is responsible for running a failover so that its local NameNode becomes Active. The failover process is similar to the manual failover described above: the previously Active NameNode is first fenced and put into the standby state if necessary, and then the local NameNode is transitioned to Active.
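
Once the ZooKeeper cluster and automatic failover configured later in this article are running, this election state can be observed directly with the ZooKeeper client. The znode names below are the ones Hadoop conventionally creates for a nameservice named mycluster; treat the exact paths as an assumption to verify on your own cluster.

# Connect with the ZooKeeper command-line client from any cluster node
zkCli.sh -server hadoop101:2181
# Inside the client, list the HA znodes for the nameservice:
#   ls /hadoop-ha/mycluster
# This normally shows ActiveStandbyElectorLock (the ephemeral lock held by the
# Active NameNode's ZKFC) and ActiveBreadCrumb (a record of the last Active,
# used when fencing). Killing the Active NameNode makes the ephemeral lock
# disappear, which is what triggers the automatic failover.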

3, HDFS-HA deployment

3.1 environmental preparation

1. Configure a static IP
2. Configure the host name and its IP mapping
3. Turn off the firewall
4. Set up passwordless ssh login
5. Install the JDK and Hadoop and configure environment variables (a sketch of steps 3 to 5 follows below)
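
A minimal sketch of steps 3 to 5, assuming CentOS 7 and the /opt/module install paths used later in this article; adapt the commands to your own distribution.

# 3. turn off the firewall
systemctl stop firewalld
systemctl disable firewalld

# 4. passwordless ssh: run on each NameNode host, then copy the key to every node
ssh-keygen -t rsa
ssh-copy-id hadoop101
ssh-copy-id hadoop102
ssh-copy-id hadoop103

# 5. environment variables, for example appended to /etc/profile
export JAVA_HOME=/opt/module/jdk1.8.0_144
export HADOOP_HOME=/opt/module/hadoop-2.7.2
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin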

3.2 planning cluster

hadoop101          hadoop102          hadoop103
NameNode           NameNode
JournalNode        JournalNode        JournalNode
DataNode           DataNode           DataNode
ZK                 ZK                 ZK
ResourceManager
NodeManager        NodeManager        NodeManager

3.3 configure Zookeeper cluster

Please refer to another article: ZooKeeper distributed installation and deployment.

3.4 configure HDFS-HA cluster

1. Configure hadoop-env.sh

export JAVA_HOME=/opt/module/jdk1.8.0_144

2. Configure core-site.xml

<configuration>
		<!-- Assemble the addresses of the two NameNodes into one nameservice, mycluster -->
		<property>
			<name>fs.defaultFS</name>
        	<value>hdfs://mycluster</value>
		</property>

		<!-- Specify the directory where Hadoop stores files generated at runtime -->
		<property>
			<name>hadoop.tmp.dir</name>
			<value>/opt/module/hadoop-2.7.2/data/tmp</value>
		</property>
</configuration>

3. Configure hdfs-site.xml

<configuration>
	<!-- Fully distributed cluster name -->
	<property>
		<name>dfs.nameservices</name>
		<value>mycluster</value>
	</property>

	<!-- The NameNodes in the nameservice -->
	<property>
		<name>dfs.ha.namenodes.mycluster</name>
		<value>nn1,nn2</value>
	</property>

	<!-- RPC address of nn1 -->
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn1</name>
		<value>hadoop101:9000</value>
	</property>

	<!-- RPC address of nn2 -->
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn2</name>
		<value>hadoop102:9000</value>
	</property>

	<!-- HTTP address of nn1 -->
	<property>
		<name>dfs.namenode.http-address.mycluster.nn1</name>
		<value>hadoop101:50070</value>
	</property>

	<!-- HTTP address of nn2 -->
	<property>
		<name>dfs.namenode.http-address.mycluster.nn2</name>
		<value>hadoop102:50070</value>
	</property>

	<!-- Specify where the NameNode metadata (edits) is stored on the JournalNodes -->
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
	<value>qjournal://hadoop101:8485;hadoop102:8485;hadoop103:8485/mycluster</value>
	</property>

	<!-- Configure the fencing (isolation) mechanism, i.e. only one NameNode may serve clients at any time -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>
	</property>

	<!-- Passwordless ssh login is required when using the sshfence isolation mechanism -->
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/test/.ssh/id_rsa</value>
	</property>

	<!-- Declare the JournalNode server storage directory -->
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/opt/module/hadoop-2.7.2/data/jn</value>
	</property>

	<!-- Turn off permission checking -->
	<property>
		<name>dfs.permissions.enabled</name>
		<value>false</value>
	</property>

	<!-- Access proxy class: how clients determine which NameNode in mycluster is Active (failover proxy provider) -->
	<property>
  		<name>dfs.client.failover.proxy.provider.mycluster</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>
</configuration>

4. Copy the configured hadoop environment to other machines
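
For example, assuming the same /opt/module layout on every host, the configuration directory can be copied with scp (a distribution script based on rsync works equally well):

[test@hadoop101 module]$ scp -r /opt/module/hadoop-2.7.2/etc/hadoop test@hadoop102:/opt/module/hadoop-2.7.2/etc/
[test@hadoop101 module]$ scp -r /opt/module/hadoop-2.7.2/etc/hadoop test@hadoop103:/opt/module/hadoop-2.7.2/etc/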

3.5 start HDFS-HA cluster

1. On each journalnode node, start the journalnode service

hadoop101:

[test@hadoop101 module]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-journalnode-hadoop101.out
[test@hadoop101 module]$ jps
4000 Jps
3946 JournalNode

hadoop102:

[test@hadoop102 module]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-journalnode-hadoop102.out
[test@hadoop102 module]$ jps
3952 Jps
3895 JournalNode

hadoop103:

[test@hadoop103 module]$ hadoop-daemon.sh start journalnode
starting journalnode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-journalnode-hadoop103.out
[test@hadoop103 module]$ jps
3960 Jps
3903 JournalNode

2. On hadoop101, format the NameNode and start nn1

[test@hadoop101 module]$ hdfs namenode -format
[test@hadoop101 module]$ hadoop-daemon.sh start namenode
starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-namenode-hadoop101.out
[test@hadoop101 module]$ jps
4167 Jps
4090 NameNode
3946 JournalNode

3. On hadoop102, synchronize nn1's metadata to nn2

[test@hadoop102 module]$ hdfs namenode -bootstrapStandby

4. Start nn2 on hadoop102

[test@hadoop102 module]$ hadoop-daemon.sh start namenode
starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-namenode-hadoop102.out
[test@hadoop102 module]$ jps
4112 Jps
4035 NameNode
3895 JournalNode

5. View the web pages of the two NameNodes
Open http://hadoop101:50070 and http://hadoop102:50070 in a browser; at this point both NameNodes should report the standby state.

6. Start datanode on each machine
hadoop101:

[test@hadoop101 module]$ hadoop-daemons.sh start datanode
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 71:37:62:fc:6e:8d:cd:d5:85:74:c6:6a:bf:e0:31:69.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop101.out
[test@hadoop101 module]$ jps
4486 Jps
4407 DataNode
4090 NameNode
3946 JournalNode

hadoop102:

[test@hadoop102 module]$ hadoop-daemons.sh start datanode
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 71:37:62:fc:6e:8d:cd:d5:85:74:c6:6a:bf:e0:31:69.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop102.out
[test@hadoop102 module]$ jps
4035 NameNode
4421 Jps
3895 JournalNode
4330 DataNode

hadoop103:

[test@hadoop103 jn]$ hadoop-daemons.sh start datanode
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 71:37:62:fc:6e:8d:cd:d5:85:74:c6:6a:bf:e0:31:69.
Are you sure you want to continue connecting (yes/no)? yes
localhost: Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
localhost: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop103.out
[test@hadoop103 jn]$ jps
4092 DataNode
4173 Jps
3903 JournalNode

7. Switch nn1 on hadoop101 to Active

[test@hadoop101 module]$ hdfs haadmin -transitionToActive nn1

8. Check whether it is Active

[test@hadoop101 module]$ hdfs haadmin -getServiceState nn1
active
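
For comparison, nn2 can be checked the same way; since it was never transitioned, it should still report standby:

[test@hadoop101 module]$ hdfs haadmin -getServiceState nn2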

3.6 HDFS-HA manual failover

1. View current cluster status
hadoop101:

[test@hadoop101 module]$ jps
4407 DataNode
4090 NameNode
3946 JournalNode
4974 Jps

hadoop102:

[test@hadoop102 module]$ jps
4035 NameNode
3895 JournalNode
4330 DataNode
5165 Jps

hadoop103:

[test@hadoop103 jn]$ jps
4092 DataNode
4383 Jps
3903 JournalNode

2. Kill the Active NameNode process (nn1) on hadoop101
This step simulates a crash of the nn1 server.

[test@hadoop101 module]$ kill -9 4090
[test@hadoop101 module]$ jps
5012 Jps
4407 DataNode
3946 JournalNode

3. View the web pages
http://hadoop101:50070 is now unreachable, while http://hadoop102:50070 still shows nn2 in the standby state.

4. Try to transition the NameNode on hadoop102 (nn2) to Active

[test@hadoop102 module]$ hdfs haadmin -transitionToActive  nn2
20/03/11 22:37:04 INFO ipc.Client: Retrying connect to server: hadoop101/192.168.2.101:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Unexpected error occurred  Call From hadoop102/192.168.2.102 to hadoop101:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
Usage: haadmin [-transitionToActive [--forceactive] <serviceId>]

The activation command reports an error: to prevent split-brain, the manual transition first contacts the other NameNode (nn1) to confirm that it is not Active, and since nn1 is down that call fails. In other words, manual failover requires both NameNodes to be running.

5. Manually start the NameNode on hadoop101 again, and then transition nn2 to Active

[test@hadoop101 module]$ hadoop-daemon.sh start namenode
starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-namenode-hadoop101.out
[test@hadoop102 module]$ hdfs haadmin -transitionToActive  nn2
[test@hadoop102 module]$ hdfs haadmin -getServiceState nn2
active

At this point, nn2 has switched to the Active state.

6. Summary
Manual failover requires both NameNode processes to be alive, so it cannot handle problems such as disk corruption. It only covers the case where a process dies and can be restarted manually within a short time, which makes it of limited practical use.

3.7 configure HDFS-HA automatic failover

1. Add the following to hdfs-site.xml

<property>
	<name>dfs.ha.automatic-failover.enabled</name>
	<value>true</value>
</property>

2. Add the following to core-site.xml

<property>
	<name>ha.zookeeper.quorum</name>
	<value>hadoop101:2181,hadoop102:2181,hadoop103:2181</value>
</property>

3. Stop all HDFS services

4. Distribute the modified Hadoop configuration to the other machines (a sketch of steps 3 and 4 follows below)
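
Steps 3 and 4 might look like the following; the rsync invocation only ships the two files modified above and assumes the same directory layout on every host:

[test@hadoop101 hadoop]$ stop-dfs.sh
[test@hadoop101 hadoop]$ cd /opt/module/hadoop-2.7.2/etc/hadoop
[test@hadoop101 hadoop]$ rsync -av core-site.xml hdfs-site.xml test@hadoop102:/opt/module/hadoop-2.7.2/etc/hadoop/
[test@hadoop101 hadoop]$ rsync -av core-site.xml hdfs-site.xml test@hadoop103:/opt/module/hadoop-2.7.2/etc/hadoop/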

5. Start Zookeeper cluster

[test@hadoop101 hadoop]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[test@hadoop102 module]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

[test@hadoop103 jn]$ zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

6. Initialize the HA state in ZooKeeper
This only needs to be done on hadoop101:

[test@hadoop101 hadoop]$ hdfs zkfc -formatZK
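
To verify the initialization, you can check from the ZooKeeper client that the HA parent znode was created; the path below is derived from the nameservice name, so confirm it on your own cluster:

zkCli.sh -server hadoop101:2181
# then, inside the ZooKeeper shell:
#   ls /hadoop-ha        -> should now contain the nameservice, mycluster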

7. Start the HDFS services
Start everything directly with the cluster group start script:

[test@hadoop101 .ssh]$ start-dfs.sh 
Starting namenodes on [hadoop101 hadoop102]
hadoop101: starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-namenode-hadoop101.out
hadoop102: starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-namenode-hadoop102.out
hadoop101: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop101.out
hadoop102: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop102.out
hadoop103: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop103.out
Starting journal nodes [hadoop101 hadoop102 hadoop103]
hadoop101: starting journalnode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-journalnode-hadoop101.out
hadoop103: starting journalnode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-journalnode-hadoop103.out
hadoop102: starting journalnode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-journalnode-hadoop102.out
Starting ZK Failover Controllers on NN hosts [hadoop101 hadoop102]
hadoop101: starting zkfc, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-zkfc-hadoop101.out
hadoop102: starting zkfc, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-zkfc-hadoop102.out

8. View jps process
hadoop101:

[test@hadoop101 .ssh]$ jps
5137 DataNode
4979 NameNode
5526 JournalNode
5817 Jps
5676 DFSZKFailoverController
3102 QuorumPeerMain

hadoop102:

[test@hadoop102 ~]$ jps
4243 JournalNode
4325 DFSZKFailoverController
4390 Jps
4023 NameNode
3112 QuorumPeerMain
4104 DataNode

hadoop103:

[test@hadoop103 .ssh]$ jps
3107 QuorumPeerMain
3672 Jps
3497 DataNode
3611 JournalNode

9. View the status of nn1 and nn2 in the web UI
On http://hadoop101:50070 and http://hadoop102:50070, nn1 (hadoop101) now shows active and nn2 (hadoop102) shows standby.
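
If you prefer the command line to the web UI, the same information is available through haadmin; one of the two should report active and the other standby:

[test@hadoop101 .ssh]$ hdfs haadmin -getServiceState nn1
[test@hadoop101 .ssh]$ hdfs haadmin -getServiceState nn2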


10. Kill nn1
Simulate a crash of the nn1 server:

[test@hadoop101 .ssh]$ kill -9 4979
[test@hadoop101 .ssh]$ jps
5137 DataNode
5526 JournalNode
5676 DFSZKFailoverController
3102 QuorumPeerMain
5967 Jps

11. View the web UI
http://hadoop101:50070 is unreachable, and http://hadoop102:50070 now shows nn2 as active.

So far, the automatic failover of HDFS has been realized.

12. When nn1 is brought back up, it returns in the standby state

[test@hadoop101 .ssh]$ hadoop-daemon.sh start namenode
starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-namenode-hadoop101.out
[test@hadoop101 .ssh]$ jps
6128 Jps
5137 DataNode
5526 JournalNode
6043 NameNode
5676 DFSZKFailoverController
3102 QuorumPeerMain

In the web UI, hadoop101 now shows nn1 as standby, while hadoop102 keeps nn2 as active.
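
The same can be confirmed from the command line; nn1 should now report standby and nn2 active:

[test@hadoop101 .ssh]$ hdfs haadmin -getServiceState nn1
[test@hadoop101 .ssh]$ hdfs haadmin -getServiceState nn2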

4, YARN-HA configuration

4.1 working mechanism of YARN-HA

1. Official documents
http://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html

2. YARN-HA working mechanism
Similar to HDFS-HA, multiple ResourceManagers (one Active, the others Standby) register with ZooKeeper, and the Active ResourceManager is elected through ZooKeeper. With recovery enabled, the ResourceManager state is stored in ZooKeeper (ZKRMStateStore), so the newly elected Active can take over running applications.

4.2 configure YARN-HA cluster

1. Environmental preparation
(1) Configure a static IP
(2) Configure the host name and its IP mapping
(3) Turn off the firewall
(4) Set up passwordless ssh login
(5) Install the JDK and Hadoop and configure environment variables
(6) Configure the ZooKeeper cluster

2. Planning cluster

hadoop101          hadoop102          hadoop103
NameNode           NameNode
JournalNode        JournalNode        JournalNode
DataNode           DataNode           DataNode
ZK                 ZK                 ZK
ResourceManager    ResourceManager
NodeManager        NodeManager        NodeManager

3. Specific configuration
(1) yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <!--Enable resourcemanager ha-->
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Declare the cluster id and the ids/hostnames of the two ResourceManagers -->
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>cluster-yarn1</value>
    </property>

    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop101</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>hadoop102</value>
    </property>
 
    <!-- Specify the address of the ZooKeeper cluster -->
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>hadoop101:2181,hadoop102:2181,hadoop103:2181</value>
    </property>

    <!--Enable automatic recovery--> 
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
 
    <!-- Store the ResourceManager state information in the ZooKeeper cluster -->
    <property>
        <name>yarn.resourcemanager.store.class</name>     
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
	</property>
</configuration>

(2) Distribute the configured files to other machines

4. Start hdfs

[test@hadoop101 hadoop]$ start-dfs.sh 
Starting namenodes on [hadoop101 hadoop102]
hadoop101: starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-namenode-hadoop101.out
hadoop102: starting namenode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-namenode-hadoop102.out
hadoop103: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop103.out
hadoop101: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop101.out
hadoop102: starting datanode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-datanode-hadoop102.out
Starting journal nodes [hadoop101 hadoop102 hadoop103]
hadoop101: starting journalnode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-journalnode-hadoop101.out
hadoop103: starting journalnode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-journalnode-hadoop103.out
hadoop102: starting journalnode, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-journalnode-hadoop102.out
Starting ZK Failover Controllers on NN hosts [hadoop101 hadoop102]
hadoop101: starting zkfc, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-zkfc-hadoop101.out
hadoop102: starting zkfc, logging to /opt/module/hadoop-2.7.2/logs/hadoop-test-zkfc-hadoop102.out

5. Start YARN on hadoop101

[test@hadoop101 hadoop]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-resourcemanager-hadoop101.out
hadoop101: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-nodemanager-hadoop101.out
hadoop103: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-nodemanager-hadoop103.out
hadoop102: starting nodemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-nodemanager-hadoop102.out

6. Execute the following command on hadoop102

[test@hadoop102 ~]$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-resourcemanager-hadoop102.out

This is a difference between the HDFS and YARN cluster start scripts: once HDFS is configured for HA, start-dfs.sh starts both NameNodes automatically, whereas start-yarn.sh only starts the local ResourceManager, so the second ResourceManager has to be started manually.
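
A quick way to confirm which ResourceManager is Active is yarn rmadmin (rm1 and rm2 are the ids configured in yarn-site.xml above); one should report active and the other standby:

[test@hadoop101 hadoop]$ yarn rmadmin -getServiceState rm1
[test@hadoop101 hadoop]$ yarn rmadmin -getServiceState rm2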

7. View jps process
hadoop101:

[test@hadoop101 hadoop]$ jps
7617 JournalNode
8405 Jps
8056 NodeManager
7289 NameNode
7802 DFSZKFailoverController
7947 ResourceManager
7405 DataNode
3102 QuorumPeerMain

hadoop102:

[test@hadoop102 ~]$ jps
5728 ResourceManager
5154 NameNode
5236 DataNode
3112 QuorumPeerMain
5592 NodeManager
5338 JournalNode
5453 DFSZKFailoverController
5821 Jps

hadoop103:

[test@hadoop103 .ssh]$ jps
4371 JournalNode
3107 QuorumPeerMain
4605 Jps
4477 NodeManager
4269 DataNode

8. View the web UI
Open http://hadoop101:8088, the page of the Active ResourceManager.
Note: when you open 192.168.2.102:8088, the standby ResourceManager automatically redirects you to hadoop101:8088. For the redirect to work in a browser on Windows, the host name to IP mappings must be configured in the Windows hosts file; otherwise the redirected host name cannot be resolved and the page fails to load. Once the mapping is configured, the browser follows the redirect to the Active ResourceManager's page.
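
A sketch of the corresponding entries in the Windows hosts file (C:\Windows\System32\drivers\etc\hosts); the addresses for hadoop101 and hadoop102 appear in the logs above, while the hadoop103 address is assumed to follow the same pattern, so adjust them to your own network:

192.168.2.101  hadoop101
192.168.2.102  hadoop102
192.168.2.103  hadoop103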

9. Kill the ResourceManager on hadoop101
Simulate a crash of the Active ResourceManager:

[test@hadoop101 hadoop]$ kill -9 7947
[test@hadoop101 hadoop]$ jps
7617 JournalNode
8760 Jps
8056 NodeManager
7289 NameNode
7802 DFSZKFailoverController
7405 DataNode
3102 QuorumPeerMain

At this point the ResourceManager on hadoop102 has become Active; its web page at hadoop102:8088 now serves requests directly, while hadoop101:8088 is unreachable.

10. Restart the ResourceManager on hadoop101

[test@hadoop101 hadoop]$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/module/hadoop-2.7.2/logs/yarn-test-resourcemanager-hadoop101.out
[test@hadoop101 hadoop]$ jps
7617 JournalNode
9030 Jps
8056 NodeManager
7289 NameNode
7802 DFSZKFailoverController
8986 ResourceManager
7405 DataNode
3102 QuorumPeerMain

Opening hadoop101:8088 again in the browser now redirects to the Active ResourceManager page on hadoop102.
