Java Spark series 6 - Spark SQL programming practice

1, Spark DataFrame overview

In Spark, a DataFrame is a distributed collection of rows that can be pictured as a table in a relational database or an Excel sheet with column headers. Like an RDD, it has the following characteristics (see the sketch after this list):

  1. Immutable: once an RDD or DataFrame is created it cannot be changed; transformations only produce new RDDs or DataFrames.
  2. Lazy evaluation: transformations are not executed until an action triggers them.
  3. Distributed: like an RDD, a DataFrame is partitioned across the cluster.
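To make points 1 and 2 concrete, here is a minimal sketch (it reuses the test.json file from section 1.1.1, so the path is an assumption): the filter transformation only returns a new DataFrame and extends the query plan, and the data is processed only when the show() action runs.

package org.example;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class LazyEvaluationSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("LazyEvaluationSketch")
                .getOrCreate();

        Dataset<Row> df = spark.read().json("file:///home/pyspark/test.json");

        // Transformation: returns a NEW DataFrame; df itself is never modified (immutability)
        Dataset<Row> adults = df.filter("age > 10");

        // Action: only now is the plan executed and the data actually processed (lazy evaluation)
        adults.show();

        spark.stop();
    }
}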

1.1 create DataFrame

Supported data sources:

  1. Parquet Files
  2. ORC Files
  3. JSON Files
  4. Hive Tables
  5. JDBC
  6. Avro Files

Syntax for creating a DataFrame:

Dataset<Row> df = spark.read().json("examples/src/main/resources/people.json");
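The same read() entry point covers the other formats in the list above; a hedged sketch (the file paths are placeholders, and Avro additionally needs the external spark-avro package on the classpath):

// Generic reader for the other supported data sources
Dataset<Row> parquetDF = spark.read().parquet("examples/src/main/resources/users.parquet");
Dataset<Row> orcDF = spark.read().orc("examples/src/main/resources/users.orc");
Dataset<Row> csvDF = spark.read().option("header", "true").csv("examples/src/main/resources/people.csv");
Dataset<Row> avroDF = spark.read().format("avro").load("examples/src/main/resources/users.avro");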

Starting point of Spark SQL: SparkSession
code:

import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession
  .builder()
  .appName("Java Spark SQL basic example")
  .config("spark.some.config.option", "some-value")
  .getOrCreate();

With a SparkSession, applications can create DataFrames from existing RDDs, Hive tables, or Spark data sources.
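As a minimal sketch of the first option, here is how a DataFrame can be built from an existing RDD of JavaBeans; the Person bean and the sample values are assumptions for illustration only:

package org.example;

import java.util.Arrays;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkSQLRddExample {

    // Simple JavaBean; Spark infers the DataFrame schema from its getters/setters
    public static class Person implements java.io.Serializable {
        private String name;
        private int age;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLRddExample")
                .getOrCreate();

        JavaSparkContext jsc = JavaSparkContext.fromSparkContext(spark.sparkContext());

        Person p1 = new Person(); p1.setName("Michael"); p1.setAge(12);
        Person p2 = new Person(); p2.setName("Andy");    p2.setAge(13);

        // An existing RDD of JavaBeans...
        JavaRDD<Person> rdd = jsc.parallelize(Arrays.asList(p1, p2));

        // ...converted to a DataFrame, with the schema inferred from the bean class
        Dataset<Row> df = spark.createDataFrame(rdd, Person.class);
        df.show();

        spark.stop();
    }
}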

1.1.1 creating a DataFrame from a JSON file

JSON test file:

{"name": "Michael",  "age": 12}
{"name": "Andy",  "age": 13}
{"name": "Justin",  "age": 8}

code:

package org.example;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkSQLTest4 {
    public static void main(String[] args){
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest4")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        Dataset<Row> df = spark.read().json("file:///home/pyspark/test.json");
        df.show();

        spark.stop();
    }

}

Test record:

1.1.2 creating a DataFrame from a CSV file

CSV test file:

code:

package org.example;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkSQLTest5 {
    public static void main(String[] args){
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest4")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        Dataset<Row> df = spark.read().format("csv").option("header", "true").load("file:///home/pyspark/emp.csv");
        df.show();

        spark.stop();
    }

}


Test record:

1.1.3 creating a DataFrame from a Hive table

code:

package org.example;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkSQLTest2 {
    public static void main(String[] args){
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest2")
                .config("spark.some.config.option", "some-value")
                .enableHiveSupport()   // make Hive metastore tables visible (may already be implied by the cluster configuration)
                .getOrCreate();

        Dataset<Row> sqlDF = spark.sql("SELECT * FROM test.ods_fact_sale limit 100");
        sqlDF.show();

        spark.stop();
    }

}

Test record:

1.1.4 creating a DataFrame from a JDBC data source

code:

package org.example;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkSQLTest3 {
    public static void main(String[] args){
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest3")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        Dataset<Row> jdbcDF = spark.read()
                .format("jdbc")
                .option("url", "jdbc:mysql://10.31.1.123:3306/test")
                .option("dbtable", "(SELECT * FROM EMP) tmp")
                .option("user", "root")
                .option("password", "abc123")
                .load();

        jdbcDF.printSchema();
        jdbcDF.show();

        spark.stop();
    }

}

Test record:

2, Spark SQL practice

We use the four tables of the classic Oracle SCOTT schema to simulate Spark SQL practice:

emp
dept
bonus
salgrade

2.1 DataFrame statistics

A DataFrame can report summary statistics for its columns via describe(), somewhat like the column statistics a relational database keeps for its tables.

code:

package org.example;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkSQLTest7 {
    public static void main(String[] args){
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest7")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        spark.sql("use test");
        Dataset<Row> sqlDF = spark.sql("SELECT * FROM emp");

        sqlDF.describe().show();

        spark.stop();

    }


}

Test record:
As the test record shows, describe() computes statistics for each column:

  1. count: number of non-null values
  2. mean: average value
  3. stddev: standard deviation
  4. min: minimum value
  5. max: maximum value
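describe() can also be limited to the columns of interest; a minimal sketch on the same sqlDF:

sqlDF.describe("sal", "comm").show();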

2.2 DataFrame select operation

In some application scenarios we only need a subset of the DataFrame's columns; this can be done with select:

code:

package org.example;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkSQLTest8 {
    public static void main(String[] args){
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest8")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        spark.sql("use test");
        Dataset<Row> sqlDF = spark.sql("SELECT * FROM emp");
        sqlDF.select("ename","hiredate").show();

        spark.stop();
    }
}

Test record:
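Note that select() also accepts Column expressions, which allows simple derived columns; a minimal sketch against the same emp DataFrame (the alias name is just an example):

sqlDF.select(sqlDF.col("ename"), sqlDF.col("sal").plus(100).alias("sal_plus_100")).show();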

2.3 DataFrame operations on columns

In some application scenarios we need to rename, add, or drop columns.

code:

package org.example;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkSQLTest9 {
    public static void main(String[] args){
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest8")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        spark.sql("use test");
        Dataset<Row> sqlDF = spark.sql("SELECT * FROM emp");

        //Output to see which columns are available
        System.out.println("\n" + "\n" + "\n");
        for ( String col:sqlDF.columns() ){
            System.out.println(col);
        }
        System.out.println("\n" + "\n" + "\n");

        //Delete a column
        sqlDF.drop("comm").show();

        //Add (or replace) a column derived from an existing one
        //sqlDF.withColumn("new_comm", sqlDF.col("sal")).show();

        //Rename column
        sqlDF.withColumnRenamed("comm","comm_new").show();


        spark.stop();
    }
}

Test record:
Display column information:

Delete a column:

Replace column name:

2.4 filtering data

filter() is used to filter rows; where() can be used as well, since where() is simply an alias for filter().

code:

package org.example;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkSQLTest10 {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest10")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        spark.sql("use test");
        Dataset<Row> sqlDF = spark.sql("SELECT * FROM emp");

        sqlDF.where("comm is not null").show();
        spark.stop();
    }
}

Test record:
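The same predicate can equally be written with filter() and the Column API; a minimal sketch on the same emp DataFrame:

// where(...) and filter(...) are interchangeable
sqlDF.filter("comm is not null").show();
sqlDF.filter(sqlDF.col("comm").isNotNull()).show();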

2.5 simple aggregation operations

Common aggregation operations:

Operation        Description
avg / mean       average value
count            number of values
countDistinct    number of distinct values
max              maximum value
min              minimum value
sum              sum of values
sumDistinct      sum of distinct values
skewness         skewness
stddev           standard deviation

2.5.1 simple aggregation

code:

package org.example;

import org.apache.spark.sql.*;

public class SparkSQLTest11 {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest11")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        spark.sql("use test");
        Dataset<Row> sqlDF = spark.sql("SELECT * FROM emp");

        sqlDF.groupBy("deptno").agg(functions.avg("sal").alias("avg_sal"),
                                          functions.max("comm").alias("max_comm")).show();
        spark.stop();

    }

}


Test record:
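The functions in the table above can also be applied to the DataFrame as a whole, without a groupBy; a hedged sketch reusing the same sqlDF (the alias names are just examples):

sqlDF.agg(functions.countDistinct("deptno").alias("dept_cnt"),
          functions.sum("sal").alias("total_sal")).show();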

2.6 user-defined functions

We can use user-defined functions (UDFs) to implement logic that the built-in functions do not cover.

code:

package org.example;

import org.apache.spark.sql.*;
import org.apache.spark.sql.api.java.UDF1;
import org.apache.spark.sql.types.DataTypes;

public class SparkSQLTest12 {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest12")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        spark.udf().register("plusOne", new UDF1<Integer, Integer>() {
            @Override
            public Integer call(Integer x) {
                return x + 1;
            }
        }, DataTypes.IntegerType);
        spark.sql("SELECT plusOne(5)").show();
        spark.stop();

    }


}

Test record:
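Once registered, the UDF can be used in any SQL expression run through the same SparkSession, for example against the emp table from the earlier sections (a sketch, assuming the same test database is available and that deptno is stored as an integer):

spark.sql("use test");
spark.sql("SELECT ename, plusOne(deptno) AS deptno_plus_one FROM emp").show();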

2.7 table joins

Syntax (Java API):

Dataset<Row> joined = left.join(right, joinExprs, joinType);

right            the Dataset to join with
joinExprs        Column, the join condition, e.g. left.col("deptno").equalTo(right.col("deptno"))
joinType         String, optional
                 default inner. Must be one of: inner, cross, outer, full, fullouter, full_outer, left, leftouter, left_outer, right, rightouter, right_outer, semi, leftsemi, left_semi, anti, leftanti and left_anti

2.7.1 inner join

code:

package org.example;

import org.apache.spark.sql.*;

public class SparkSQLTest13 {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest13")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        spark.sql("use test");
        Dataset<Row> df1 = spark.sql("SELECT * FROM emp");
        Dataset<Row> df2 = spark.sql("SELECT * FROM dept");

        Dataset<Row> df3 = df1.join(df2, df1.col("deptno").equalTo(df2.col("deptno")) ,"inner").select(df1.col("empno"),df1.col("ename"),df2.col("dname"),df2.col("loc"));
        df3.show();
        spark.stop();
    }


}

Test record:
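In the example above, select() keeps a single deptno by listing the output columns explicitly; a plain column-equality join would otherwise carry deptno from both tables. When the join key has the same name on both sides, an alternative is to join on the column name, which keeps only one deptno. A minimal sketch:

// inner join using the shared deptno column; the result contains deptno only once
Dataset<Row> joined = df1.join(df2, "deptno");
joined.show();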

2.7.2 outer join

Here we use a right outer join.

code:

package org.example;

import org.apache.spark.sql.*;

public class SparkSQLTest14 {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest14")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        spark.sql("use test");
        Dataset<Row> df1 = spark.sql("SELECT * FROM emp");
        Dataset<Row> df2 = spark.sql("SELECT * FROM dept");

        Dataset<Row> df3 = df1.join(df2, df1.col("deptno").equalTo(df2.col("deptno")) ,"right").select(df1.col("empno"),df1.col("ename"),df2.col("dname"),df2.col("loc"));
        df3.show();
        spark.stop();
    }


}

Test record:

2.8 sorting

Syntax (Java API):

Dataset<Row> sorted = df.orderBy(col1, col2, ...);
-- returns a new Dataset sorted by the specified columns

Parameters:      Column... sortExprs (or String sortCol, String... sortCols)
                 The sort is ascending by default; call Column.asc() or Column.desc() on each column to control the direction, as in the example below.

code:

package org.example;

import org.apache.spark.sql.*;

public class SparkSQLTest15 {
    public static void main(String[] args) {
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest15")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        spark.sql("use test");
        Dataset<Row> df1 = spark.sql("SELECT * FROM emp");
        Dataset<Row> df2 = spark.sql("SELECT * FROM dept");

        Dataset<Row> df3 = df1.join(df2, df1.col("deptno").equalTo(df2.col("deptno")) ,"right").select(df1.col("empno"),df1.col("ename"),df2.col("dname"),df2.col("loc"));
        Dataset<Row> df4 = df3.orderBy(df3.col("dname").desc(),df3.col("ename").asc() );
        df4.show();
        spark.stop();

    }
}

Test record:

2.9 running SQL over files via temporary views

The sql function on SparkSession allows applications to run SQL queries programmatically and returns the result as a Dataset<Row>. Here we read a JSON file, register it as a temporary view, and query the view with SQL.

code:

package org.example;

import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class SparkSQLTest16 {
    public static void main(String[] args){
        SparkSession spark = SparkSession
                .builder()
                .appName("SparkSQLTest16")
                .config("spark.some.config.option", "some-value")
                .getOrCreate();

        Dataset<Row> df = spark.read().json("file:///home/pyspark/test.json");
        df.createOrReplaceTempView("people");

        spark.sql("select * from people where age = 12").show();

        spark.stop();
    }

}

Test record:

References:

  1. http://spark.apache.org/docs/2.4.2/sql-getting-started.html
  2. http://spark.apache.org/docs/latest/sql-ref-functions-udf-scalar.html
  3. http://spark.apache.org/docs/latest/api/java/index.html
