Using IDEA to submit MapReduce jobs to pseudo distributed Hadoop remotely

Environment

  1. VirtualBox 6.1
  2. IntelliJ IDEA 2020.1.1
  3. Ubuntu-18.04.4-live-server-amd64
  4. jdk-8u251-linux-x64
  5. hadoop-2.7.7

Install pseudo distributed Hadoop

For the pseudo-distributed installation, see the tutorial "Hadoop installation tutorial: standalone/pseudo-distributed configuration, Hadoop 2.6.0 (2.7.1) / Ubuntu 14.04 (16.04)".

The steps are not repeated here; note that you also need to configure and start YARN.

Note that the VirtualBox VM uses host-only network mode, so the Windows host can reach the server directly.

After a successful start, running jps should show the following processes: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, and JobHistoryServer.

Modify configuration

First, run ifconfig on the server to find its IP address; that address is used in the examples below.

Modify core-site.xml to change localhost to server IP
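The XML snippet was lost in extraction; a minimal sketch of the change, assuming the host-only address of the server is 192.168.56.101 (substitute your own IP):

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <!-- use the server IP instead of localhost so remote clients can connect -->
        <value>hdfs://192.168.56.101:9000</value>
    </property>
</configuration>
```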


Modify mapred-site.xml and add mapreduce.jobhistory.address
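The file content was stripped from the post; a sketch of the relevant properties, again assuming the example server IP 192.168.56.101 (10020 is the default job history server RPC port):

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <!-- without this, remote job submission retries the connection and fails -->
        <name>mapreduce.jobhistory.address</name>
        <value>192.168.56.101:10020</value>
    </property>
</configuration>
```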


If this item is not added, the following error will be reported

[main] INFO  org.apache.hadoop.ipc.Client - Retrying connect to server: Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)  

Modify yarn-site.xml and add the following
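The stripped block presumably contained the standard YARN settings; a sketch, with 192.168.56.101 as the assumed server IP:

```xml
<configuration>
    <property>
        <!-- let remote clients find the ResourceManager by IP -->
        <name>yarn.resourcemanager.hostname</name>
        <value>192.168.56.101</value>
    </property>
    <property>
        <!-- required for the MapReduce shuffle phase -->
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
```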


If it is not added, an error will be reported

INFO ipc.Client: Retrying connect to server: Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)

After configuration, you need to restart HDFS, YARN, and the job history server.
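For example (run on the Linux server; the path assumes Hadoop is installed in /usr/local/hadoop, adjust to your setup):

```shell
cd /usr/local/hadoop
sbin/stop-dfs.sh  && sbin/start-dfs.sh
sbin/stop-yarn.sh && sbin/start-yarn.sh
sbin/mr-jobhistory-daemon.sh stop  historyserver
sbin/mr-jobhistory-daemon.sh start historyserver
```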

Configure Hadoop running environment of Windows

First, copy hadoop-2.7.7.tar.gz from Linux and extract it to a directory on Windows; this article uses D:\ProgramData\hadoop.

Then configure the environment variables: create HADOOP_HOME pointing to D:\ProgramData\hadoop, and append %HADOOP_HOME%\bin to the PATH variable.

Then download winutils, choosing the build that matches your Hadoop version; here, version 2.7.7.

Copy winutils.exe to the %HADOOP_HOME%\bin directory and hadoop.dll to the C:\Windows\System32 directory.

Write WordCount

First create the data file wc.txt

hello world
dog fish
hello world
dog fish
hello world
dog fish
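Before running the cluster job, the expected result can be sanity-checked with a tiny in-memory simulation of the map and reduce phases (plain Python for illustration only; no Hadoop involved):

```python
from collections import defaultdict

wc_txt = """hello world
dog fish
hello world
dog fish
hello world
dog fish"""

# map phase: emit a (word, 1) pair for every token
pairs = [(word, 1) for line in wc_txt.splitlines() for word in line.split()]

# shuffle + reduce phase: sum the counts per word
counts = defaultdict(int)
for word, one in pairs:
    counts[word] += one

print(dict(counts))  # each of the four words appears 3 times
```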

Then copy the file to Linux, and run `hdfs dfs -put /path/wc.txt ./input` to put the data file into HDFS.
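The full sequence on the server would look roughly like this (paths are examples):

```shell
hdfs dfs -mkdir -p ./input          # resolves to /user/hadoop/input for user hadoop
hdfs dfs -put /path/wc.txt ./input  # /path/wc.txt is wherever you saved the file
hdfs dfs -ls ./input                # verify the upload
```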

Then use IDEA to create a Maven project and modify the pom.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <!-- groupId/artifactId are examples; the original values were lost -->
    <groupId>cabbage</groupId>
    <artifactId>wordcount</artifactId>
    <version>1.0-SNAPSHOT</version>
    <dependencies>
        <!-- hadoop-client pulls in the HDFS, MapReduce, and YARN client libraries -->
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.7</version>
        </dependency>
    </dependencies>
</project>

The next step is to write the WordCount program, based on a reference implementation.
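The reference link was lost in extraction; a standard Hadoop WordCount mapper and reducer, matching the class names `cabbage.WordcountMapper` and `cabbage.WordcountReducer` that the driver below expects, would be a sketch like this (two separate source files):

```java
// ---- WordcountMapper.java ----
package cabbage;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordcountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // split each input line into tokens and emit (word, 1)
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
        }
    }
}

// ---- WordcountReducer.java ----
package cabbage;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordcountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        // sum the counts emitted for each word
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
```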

Then modify WordcountDriver

package cabbage;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Equivalent to a client of the YARN cluster:
 * it encapsulates the run parameters of our MR program, specifies the jar package,
 * and finally submits the job to YARN.
 */
public class WordcountDriver {

    /**
     * Remove the specified directory so the job can recreate it.
     * @param conf Hadoop configuration
     * @param dirPath directory to delete
     * @throws IOException
     */
    private static void deleteDir(Configuration conf, String dirPath) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path targetPath = new Path(dirPath);
        if (fs.exists(targetPath)) {
            boolean delResult = fs.delete(targetPath, true);
            if (delResult) {
                System.out.println(targetPath + " has been deleted successfully.");
            } else {
                System.out.println(targetPath + " deletion failed.");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.setProperty("HADOOP_USER_NAME", "hadoop");
        // 1 get configuration information and create the job instance
        Configuration configuration = new Configuration();
        System.setProperty("hadoop.home.dir", "D:\\ProgramData\\hadoop");
        configuration.set("mapreduce.framework.name", "yarn");
        configuration.set("fs.defaultFS", "hdfs://<server-ip>:9000"); // replace <server-ip> with your server's IP
        configuration.set("mapreduce.app-submission.cross-platform", "true"); // cross-platform submission
        // 8. When submitting to YARN from Windows, environment variables differ from Linux
//        configuration.set("yarn.resourcemanager.hostname", "node22");
        // Delete the output directory first
        deleteDir(configuration, args[args.length - 1]);

        Job job = Job.getInstance(configuration);

        // 6 specify the local path of this program's jar package
//        job.setJar("/home/admin/wc.jar");
        job.setJarByClass(WordcountDriver.class);

        // 2 specify the Mapper/Reducer business classes this job uses
        job.setMapperClass(WordcountMapper.class);
        job.setReducerClass(WordcountReducer.class);

        // 3 specify the kv types of the mapper output data
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 4 specify the kv types of the final output data
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 5 specify the input and output directories of the job
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 7 submit the configured job and its jar to YARN and wait for completion
//        job.submit();
        boolean result = job.waitForCompletion(true);
        System.exit(result ? 0 : 1);
    }
}
Key code

System.setProperty("HADOOP_USER_NAME", "hadoop");

If you do not add this line, you will get a permission error:

org.apache.hadoop.ipc.RemoteException: Permission denied: user=administration, access=WRITE, inode="/":root:supergroup:drwxr-xr-x

If the error persists after this change, consider setting the permissions of the HDFS directory to 777.

This fix is based on several reference articles.

System.setProperty("hadoop.home.dir", "D:\\ProgramData\\hadoop");
configuration.set("mapreduce.framework.name", "yarn");
configuration.set("fs.defaultFS", "hdfs://<server-ip>:9000");
configuration.set("mapreduce.app-submission.cross-platform", "true"); // cross-platform submission

If these settings are missing, the job fails with an error:

Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class cabbage.WordcountMapper not found

This setting also follows a reference article.

//Delete the output directory first
deleteDir(configuration, args[args.length - 1]);

Hadoop does not overwrite an existing output directory between runs; if it is not deleted first, the job fails with a FileAlreadyExistsException, which is worth knowing here.

Add dependency

Then add the dependent library references. Right-click the project -> Open Module Settings (or press F4) to open the module properties.

Then click the plus sign on the right of Dependencies -> Library.

Then import all the corresponding jar packages under %HADOOP_HOME%.

Then import %HADOOP_HOME%\share\hadoop\tools\lib.

Then build the jar with Maven's `mvn package`.

Add resources

Create log4j.properties in resources and add the following (the appender definition lines are required for the pattern to take effect):

log4j.rootLogger=INFO, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d{ABSOLUTE} | %-5.5p | %-16.16t | %-32.32c{1} | %-32.32C %4L | %m%n

Then copy the core-site.xml, hdfs-site.xml, mapred-site.xml, and yarn-site.xml from the Linux server into the resources directory as well.
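Assuming the configs live in the usual /usr/local/hadoop/etc/hadoop directory on the server (adjust the user, IP, and paths to your setup), they can be pulled over with scp:

```shell
scp hadoop@192.168.56.101:/usr/local/hadoop/etc/hadoop/core-site.xml   src/main/resources/
scp hadoop@192.168.56.101:/usr/local/hadoop/etc/hadoop/hdfs-site.xml   src/main/resources/
scp hadoop@192.168.56.101:/usr/local/hadoop/etc/hadoop/mapred-site.xml src/main/resources/
scp hadoop@192.168.56.101:/usr/local/hadoop/etc/hadoop/yarn-site.xml   src/main/resources/
```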

Configure IDEA

After the above configuration, you can set the run parameters.

Pay attention to two places

  1. Program arguments: specify the input file and output folder; note the form hdfs://ip:9000/user/hadoop/xxx

  2. Working Directory: set it to the %HADOOP_HOME% directory


Click Run. If an error reports a missing dependency (in my case, the slf4j logging package), add it to the dependencies.

After running, IDEA is shown as follows:

Then check the results in the output directory: run `hdfs dfs -cat ./output/*`; the displayed word counts should match the input, confirming the job ran correctly.

If you have any questions, you can put them up in the comment area and discuss them together.

Tags: Java Hadoop Apache log4j xml

Posted on Thu, 14 May 2020 03:50:06 -0400 by Helios