New features of Java syntax: Java 5 to Java 11

1, Foreword

I never thought that here in 2020, with Sun long gone, I would have to start from the new features of Java 5 and work my way up to the new features of Java 8...

In fact, there is already plenty of material like this on the Internet. So why write it again?

  1. Because the leaders think we are too busy (read: lazy) to study in our spare time, so let's focus on the essentials and get up to speed quickly.
  2. The material on the Internet is good and comprehensive, but for our purposes we should focus on the new features that actually improve productivity.
  3. Starting from the new features of Java 5 ostensibly lets us see the evolution of Java more clearly. The real reason is left for you to discover yourself...

This quick-reference guide has the following characteristics:

  1. It mainly covers new features with a large impact on day-to-day development, such as additions or enhancements to the Java class library APIs and syntactic sugar introduced at the compiler (javac) level. Improvements at the bytecode level, the overall Java architecture level, or inside the virtual machine have little impact on development and are not covered here. (I couldn't explain them well anyway...)
  2. The new features from Java 5 to Java 7 are only skimmed; the focus is on the new features of Java 8.
  3. Java 9 and Java 10 are interim releases, so their new features are discussed together with Java 11.
  4. All sample code runs on JDK 11 with IDEA 2019.1.

Of course, theory is nothing without practice. All the sample code for this guide lives in the GitHub or Gitee repository below; please download it yourself. Once you have Java and an IDE set up locally, you can follow along and experiment (the code is under src/test/java):

https://github.com/zhaochuninhefei/study-czhao/tree/master/jdk11-test
or : https://gitee.com/XiaTangShaoBing/study/tree/master/jdk11-test

2, New features from Java 5 to Java 7

This chapter covers some important new syntax features of Java 5 to Java 7, as well as some important new class library APIs.

2.1 new features of Java 5

Java 5 introduced many new features, but most of us are already familiar with them, so let's go through them briefly:

  • Generics:

Generics are parameterized types. Their introduction makes it possible to specify the element type of a collection, which avoids forced type casts and enables type checking at compile time. Generics are also the foundation for variable-length argument lists (varargs), annotations, enumerations, and the collections framework.

List<String> lst01 = new ArrayList<String>();

// Use ? to accept elements of any type and avoid unchecked-type warnings at call sites.
private void test01(List<?> list) {
    for (Iterator<?> i = list.iterator(); i.hasNext(); ) {
        System.out.println((i.next().toString()));
    }
}

// Bounded type parameter: the argument type must extend TestCase01Generic
private <T extends TestCase01Generic> void test02(T t) {
    t.doSomething();
}
  • Enumeration:

An enumeration class is a special kind of class: it can have its own fields, methods, and constructors (constructors may only be private, so they cannot be called from outside; they run only when the enum constants are created). A class defined with enum implicitly extends java.lang.Enum and implements the java.io.Serializable and java.lang.Comparable interfaces. All enum constants are implicitly public static final (no need to write it explicitly), and a non-abstract enum class cannot be subclassed. All instances (enum constants) must be listed explicitly in the first line of the enum body; otherwise the enum can never produce instances. When these constants are listed, the compiler adds the public static final modifiers automatically.

enum Color {
    black, white, red, yellow
}

// Enumerations are often used in switch statements
private void test01(Color color) {
    switch (color) {
        case red:
            System.out.println("Frost leaves red in February flowers");
            break;
        case black:
            System.out.println("Black clouds crush the city");
            break;
        case white:
            System.out.println("A line of egrets in the sky");
            break;
        case yellow:
            System.out.println("The old man said goodbye to the Yellow Crane Tower in the West");
            break;
    }

    System.out.println(Color.black.compareTo(color));
    System.out.println(Color.white.compareTo(color));
    System.out.println(Color.red.compareTo(color));
    System.out.println(Color.yellow.compareTo(color));
}
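The Color example above only shows bare constants. Since the paragraph mentions that enums can carry fields, constructors, and methods, here is a small sketch of that (the Season enum and its values are invented for illustration):

```java
// A sketch of an enum with its own fields, constructor and methods.
// The constants and the "feel" values are illustrative, not from the original text.
enum Season {
    SPRING("warm"), SUMMER("hot"), AUTUMN("cool"), WINTER("cold");

    // Each constant carries its own state
    private final String feel;

    // Enum constructors are implicitly private and run once per constant
    Season(String feel) {
        this.feel = feel;
    }

    public String describe() {
        return name() + " feels " + feel;
    }
}

public class SeasonDemo {
    public static void main(String[] args) {
        for (Season s : Season.values()) {
            System.out.println(s.describe());
        }
    }
}
```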
  • Autoboxing & unboxing:

Automatic boxing and unboxing converts between the eight primitive types and their wrapper reference types: Boolean, Byte, Short, Character, Integer, Long, Float, Double.

List<Integer> lstInt = new ArrayList<Integer>();
lstInt.add(1);
lstInt.add(2);
lstInt.add(3);

for (int i = 0; i < lstInt.size(); i++) {
    System.out.println(lstInt.get(i).toString());
    System.out.println(lstInt.get(i) + 1);
}
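Autoboxing is convenient but has two classic pitfalls worth remembering: == compares references for wrapper types (only small values come from the Integer cache), and unboxing null throws a NullPointerException. A minimal sketch:

```java
public class BoxingPitfalls {
    public static boolean sameSmall() {
        Integer a = 127, b = 127;
        return a == b;  // true: values in [-128, 127] come from the Integer cache
    }

    public static boolean sameLarge() {
        Integer a = 128, b = 128;
        return a == b;  // false: distinct objects outside the cache range
    }

    public static void main(String[] args) {
        System.out.println(sameSmall());  // true
        System.out.println(sameLarge());  // false
        Integer n = null;
        try {
            int x = n;  // unboxing null throws NullPointerException
            System.out.println(x);
        } catch (NullPointerException e) {
            System.out.println("NPE on unboxing null");
        }
    }
}
```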
  • Variable-length argument lists (varargs): when the parameter types are the same, a single varargs method can take the place of a family of fixed-arity overloads.
me.test01("One ring to rule them all,");
me.test01("one ring to find them,", "One ring to bring them all ", "and in the darkness bind them.");

private void test01(String ... args) {
    for (String s : args) {
        System.out.println(s);
    }
}
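To make the overload point above concrete, here is a sketch of how the compiler chooses between a varargs method and a fixed-arity overload with the same parameter type (the method names are invented):

```java
public class VarargsOverload {
    static String greet(String a) {
        return "fixed: " + a;
    }

    static String greet(String... args) {
        return "varargs: " + args.length;
    }

    public static void main(String[] args) {
        // The fixed-arity overload is more specific, so it wins for one argument
        System.out.println(greet("one"));         // fixed: one
        // Only the varargs overload matches two arguments
        System.out.println(greet("one", "two"))   // varargs: 2
        ;
        // Zero arguments can only mean the varargs overload
        System.out.println(greet());              // varargs: 0
    }
}
```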
  • Annotations:

Annotations provide metadata for Java code. Generally speaking, annotations do not directly affect code execution; many are used to declare constraints and standard definitions, and can be understood as code conventions (code templates). However, some annotations survive into the JVM at runtime, so they can be combined with other mechanisms (such as reflection) to influence the actual program logic. Annotations therefore serve two general purposes: standardizing code, and enabling dynamic injection (together with other mechanisms).

Generally, annotations can be divided into four categories:

  1. Java's built-in standard annotations, such as @Override, @Deprecated, and @SuppressWarnings, usually used for compile-time checks;
  2. Meta-annotations, used to define other annotations, including @Retention, @Target, @Inherited, and @Documented;
  3. Third-party annotations, such as those provided by Spring, MyBatis, and Lombok;
  4. Custom annotations, defined with @interface plus meta-annotations.

// When the compiler sees the @Override annotation, it knows this method must override a method of the parent class,
// so it strictly checks that the method declaration matches the corresponding parent method
// (return type, parameter list, and so on).
@Override
public String toString() {
    return "Untie the three autumn leaves, you can bloom in February.";
}

// An example of a custom annotation used for null/empty checks on method parameters in AOP
@Target(ElementType.PARAMETER)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface ParamNotEmpty {
}
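To show the "dynamic injection" side mentioned above, here is a sketch of reading a RUNTIME-retained annotation through reflection (the @Audited annotation and the method names are invented for this demo):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationReflectionDemo {
    // A hypothetical runtime-retained annotation
    @Target(ElementType.METHOD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface Audited {
        String value();
    }

    @Audited("important")
    public void doWork() {
    }

    public static String findAuditTag() {
        try {
            // RUNTIME retention means the annotation survives into the running JVM,
            // so reflection can see it and frameworks can act on it
            Method m = AnnotationReflectionDemo.class.getMethod("doWork");
            Audited audited = m.getAnnotation(Audited.class);
            return audited == null ? "none" : audited.value();
        } catch (NoSuchMethodException e) {
            return "error";
        }
    }

    public static void main(String[] args) {
        System.out.println(findAuditTag());  // important
    }
}
```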
  • foreach loop: syntactic sugar over an iterator loop.
List<Integer> numbers = new ArrayList<Integer>();
for (int i = 0; i < 10; i++) {
    numbers.add(i + 1);
}

for(Integer number : numbers) {
    System.out.println(number);
}
  • Static import (import static): nothing much to say; just look at the code. Not recommended.
package java5;

import static java5.TestCase07ImportStatic.TestInner.test;
import static java.lang.System.out;
import static java.lang.Integer.*;

/**
 * @author zhaochun
 */
public class TestCase07ImportStatic {
    public static void main(String[] args) {
        test();
        out.println(MIN_VALUE);
        out.println(toBinaryString(100));
    }

    static class TestInner {
        public static void test() {
            System.out.println("TestInner");
        }
    }
}
  • Formatting: Java 5 added an interpreter for printf-style format strings.
private void test01_formatter() {
    StringBuilder sb = new StringBuilder();
    Formatter formatter = new Formatter(sb);
    // "I don't see the ancients before, I don't see the newcomers after. Read the long world, only Pathetique and tears. "
    formatter.format("%4$7s,%3$7s. %2$7s,%1$7s. %n", "Alone and pathetic", "Read the world", "No one to come", "No ancients before");
    // "Zu Chongzhi's pi: +3.1415927"
    formatter.format("Zu Chongzhi's pi: %+5.7f %n", Math.PI);
    // "Price of a mobile phone: ¥ 5988.00"
    formatter.format("Price of a mobile phone : ¥ %(,.2f", 5988.0);
    System.out.println(formatter.toString());
    formatter.close();
}

private void test02_printf() {
    List<String> lines = new ArrayList<>();
    lines.add("Sweet scented osmanthus falls at leisure,");
    lines.add("The night is still and the spring is empty.");
    lines.add("The rising of the moon startles the birds,");
    lines.add("In the spring stream.");
    for (int i = 0; i < lines.size(); i++) {
        System.out.printf("Line %d: %s%n", i + 1, lines.get(i));
    }
}

private void test03_stringFormat() {
    Calendar c = new GregorianCalendar(2020, Calendar.MAY, 28);
    System.out.println(String.format("Today is a good day: %1$tY-%1$tm-%1$te", c));
}

private void test04_messageFormat() {
    String msg = "Hello, {0}! You have a package! Please take it from locker No. {2} before {1} o'clock; overtime is charged at {3} yuan per hour~~~";
    MessageFormat mf = new MessageFormat(msg);
    String fmsg = mf.format(new Object[]{"Zhang San", 3, 8, 2});
    System.out.println(fmsg);
}

private void test05_dateFormat() {
    String str = "2020-05-28 14:55:21";
    SimpleDateFormat format1 = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    SimpleDateFormat format2 = new SimpleDateFormat("yyyyMMddHHmmss");
    try {
        System.out.println(format2.format(format1.parse(str)));
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Other additions include ProcessBuilder, Scanner, enhanced reflection, an enhanced collections framework, StringBuilder, the concurrency utilities, and so on. Because they are either rarely used or already familiar, we will not introduce them one by one here.

2.2 new features of Java 6

Java 6 has few new features that significantly affect development. A quick look:

  • WebService annotation support
  • A scripting engine that can run JavaScript, Python, and other scripting languages
  • Compiler API, for compiling Java source code dynamically at runtime
  • HTTP Server API
  • General annotations support
  • JDBC 4.0
  • An enhanced collections framework, adding some rarely used interfaces, classes, and methods.

There are a few others, not listed here.
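Of the items above, the Compiler API is perhaps the most interesting in practice. A minimal sketch that compiles a generated source file at runtime (the temp-file setup is invented for the demo; the system compiler is only available when running on a JDK, not a bare JRE):

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class CompileAtRuntime {
    public static int compileTrivialClass() {
        try {
            // Write a throwaway source file into a temp directory
            Path dir = Files.createTempDirectory("javac-demo");
            Path src = dir.resolve("Hello.java");
            Files.write(src, "public class Hello {}".getBytes(StandardCharsets.UTF_8));

            // getSystemJavaCompiler() returns null on a plain JRE
            JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
            // run() returns 0 on successful compilation
            return compiler.run(null, null, null, src.toString());
        } catch (IOException e) {
            return -1;
        }
    }

    public static void main(String[] args) {
        System.out.println(compileTrivialClass() == 0 ? "compiled OK" : "compile failed");
    }
}
```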

2.3 new features of Java 7

Java 7 does not have many new features either, but compared with Java 6 it added several pieces of new syntax and new class library APIs that improve development efficiency. Let's take a look.

  • switch supports String
private String test01_switch(String title) {
    switch (title) {
        case "Deer firewood":
            return "There are no people in the empty mountain, but people speak loudly. Return to the deep forest and take a look at the moss.";
        case "Farewell in the mountains":
            return "Send each other off in the mountains, and cover the wood gate at dusk. Spring grass will be green next year, but Wang sun will not return.";
        case "Weichengqu":
            return "Weicheng Dynasty rain light dust, green willow new guest houses. I would like to persuade you to make a glass of wine even more. There is no one in Yangguan.";
        default:
            return "";
    }
}
  • Automatic inference of generic type arguments on instantiation (the diamond operator)
List<String> tempList = new ArrayList<>();
  • AutoCloseable interface: resource-management classes such as file IO streams and JDBC Connection implement the AutoCloseable interface and can be used with the new try-with-resources syntax.
String filePath = "/home/work/sources/jdk11-test/src/test/java/java7/TestCaseForJava7.java";
try (FileInputStream fis = new FileInputStream(filePath);
        InputStreamReader isr = new InputStreamReader(fis, StandardCharsets.UTF_8);
        BufferedReader br = new BufferedReader(isr)) {
    String line;
    while ((line = br.readLine()) != null) {
        System.out.println(line);
    }
} catch (IOException e) {
    e.printStackTrace();
}
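try-with-resources works with any class implementing AutoCloseable, not only the JDK's IO classes. A sketch with a homemade resource (class names invented) that also shows resources being closed automatically in reverse declaration order:

```java
public class TryWithResourcesDemo {
    static final StringBuilder LOG = new StringBuilder();

    // Any class implementing AutoCloseable can appear in a try-with-resources header
    static class DemoResource implements AutoCloseable {
        private final String name;

        DemoResource(String name) {
            this.name = name;
            LOG.append("open ").append(name).append("; ");
        }

        @Override
        public void close() {
            LOG.append("close ").append(name).append("; ");
        }
    }

    public static String run() {
        LOG.setLength(0);
        // Resources are closed automatically, in reverse declaration order
        try (DemoResource a = new DemoResource("a");
             DemoResource b = new DemoResource("b")) {
            LOG.append("work; ");
        }
        return LOG.toString();
    }

    public static void main(String[] args) {
        System.out.println(run());
        // open a; open b; work; close b; close a;
    }
}
```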
  • Catch multiple exceptions
try {
    if (n < 0) {
        throw new FileNotFoundException();
    }
    if (n > 0) {
        throw new SQLException();
    }
    System.out.println("No Exceptions.");
} catch (FileNotFoundException | SQLException e) {
    e.printStackTrace();
}
  • Numeric literal enhancements: Java 7 supports underscores to separate digits in long numeric literals, and the 0b prefix for writing binary numbers directly.
int num1 = 1_000_000;
System.out.println(num1);

int num2 = 0b11;
System.out.println(num2);
  • New IO 2.0 (NIO.2): Java 7 provides new file-operation APIs such as Path, and a WatchService that monitors a given directory, listening for file creation, deletion, and modification events in it. (It cannot watch the contents of a file directly, only directory-level events.)
private void test06_newIO2() {
    Path path = Paths.get("/home/zhaochun/test");
    System.out.printf("Number of nodes: %s %n", path.getNameCount());
    System.out.printf("File name: %s %n", path.getFileName());
    System.out.printf("File root: %s %n", path.getRoot());
    System.out.printf("File parent: %s %n", path.getParent());

    try {
        Files.deleteIfExists(path);
        Files.createDirectory(path);
        watchFile(path);
    } catch (IOException | InterruptedException e) {
        e.printStackTrace();
    }
}

private void watchFile(Path path) throws IOException, InterruptedException {
    WatchService service = FileSystems.getDefault().newWatchService();
    Path pathAbs = path.toAbsolutePath();
    pathAbs.register(service,
            StandardWatchEventKinds.ENTRY_CREATE,
            StandardWatchEventKinds.ENTRY_MODIFY,
            StandardWatchEventKinds.ENTRY_DELETE);
    while (true) {
        WatchKey key = service.take();
        for (WatchEvent<?> event : key.pollEvents()) {
            String fileName = event.context().toString();
            String kind = event.kind().name();

            System.out.println(String.format("%s : %s", fileName, kind));
            if ("end".equals(fileName) && "ENTRY_DELETE".equals(kind)) {
                return;
            }
        }
        key.reset();
    }
}
  • JDBC 4.1: some methods were added to the Connection interface. If you have an old implementation or wrapper of JDBC Connection, it will no longer compile after upgrading to Java 7. If you have always used the JDBC driver packages provided by each database vendor, you only need to confirm that the version supports JDBC 4.1 or above.
  • fork/join framework: Java 7 added a new multi-threaded programming framework, fork/join. It is rarely used directly, and Java 8 later built the parallel mode of collection operations on top of it, so we will briefly cover the fork/join mechanism when we study the new features of Java 8, not here.

There are other new features in Java 7 that have little impact on development, so I won't cover them here.

3, What's new in Java 8

Java 8 is another milestone version of Java after Java 5, with many revolutionary new features.

Although Java 8 has many new features, we will mainly discuss the syntax-level ones that have the greatest impact on development:

  • lambda expressions
  • Stream API
  • Interface default method
  • Optional
  • Map operation and HashMap performance optimization
  • Date API
  • CompletableFuture

3.1 lambda expression

The most important new feature of Java 8 is support for lambda expressions, which enables functional programming in Java.

3.1.1 what is a lambda expression

A lambda expression is a block of code that can be passed around, similar to the concept of a closure in other languages: it is code implementing a function, which can accept one or more input parameters and can return a result value. A closure is defined in a context and can access values from that context.

In Java 8, a lambda expression serves as a concrete implementation of a functional interface. A functional interface is an interface that defines exactly one abstract method. (A functional interface can be annotated with @FunctionalInterface to make the compiler check that it has only one abstract method, but the annotation is not required.)

Let's look at a specific example:

Suppose we have such an interface, which has only one abstract method and is a functional interface:

@FunctionalInterface
interface TestLambda {
    String join(String a, String b);
}

And a method that uses it (obviously this method does not need to know which class implements the TestLambda interface):

private String joinStr(TestLambda testLambda, String a, String b) {
    return testLambda.join(a, b);
}

Next, we try to concatenate two strings using the joinStr method. Before Java 8, we used an anonymous inner class to implement the TestLambda interface on the spot:

String s1 = joinStr(new TestLambda() {
    @Override
    public String join(String a, String b) {
        return a + ", " + b;
    }
}, "How worried can you be", "Like a river flowing eastward in spring");
System.out.println(s1);

Obviously, anonymous inner classes are bloated and semantically unintuitive. Fed up with them yet?

Starting from Java 8, you can replace the anonymous inner class with a lambda expression, which in the following code is (a, b) -> a + ", " + b. This form is concise, semantically clear, and closer to natural language:

TestLambda simpleJoin = (a, b) -> a + ", " + b;
String s2 = joinStr(simpleJoin, "High hall mirror sad white hair", "Though silken-black at morning, have changed by night to snow");
System.out.println(s2);

Or write directly as:

String s3 = joinStr((a, b) -> a + ", " + b, "High hall mirror sad white hair", "Though silken-black at morning, have changed by night to snow");
System.out.println(s3);

When the interface logic you want to implement is more complex, you can use {} to wrap the code block; you can also declare the type of each parameter explicitly:

TestLambda joinWithCheck = (String a, String b) -> {
    if (a != null && b != null) {
        return a + ", " + b;
    } else {
        return "absolutely empty";
    }
};
String s4 = joinStr(joinWithCheck, null, null);
System.out.println(s4);

Now we can know:

  • For a method whose parameter is a functional interface, a lambda expression can be passed in at the call site; the lambda expression is a concrete implementation of that interface.
  • A lambda expression takes the form (parameter list) -> { function body }.
  • The {} wrapping the body can be omitted when the body is a single line.
  • When a single-line body has no {}, the value of that line is returned implicitly (if the function has a return value).
  • When a multi-line body uses {}, a result of the appropriate type must be returned explicitly (if the function has a return value).
  • A lambda expression is equivalent in effect to an anonymous inner class. (However, their implementation mechanisms differ; a lambda cannot simply be regarded as advanced syntactic sugar for an anonymous inner class.)
  • A lambda expression can be written inline, assigned to a variable, or replaced by a method reference.
  • Why must a functional interface implemented by a lambda define only one abstract method? Because a lambda expression doesn't mention a method name... with several methods, the compiler wouldn't know which one to call...
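One concrete difference behind the "not just syntactic sugar" point above: inside a lambda, this refers to the enclosing instance, while inside an anonymous inner class it refers to the anonymous object itself. A small sketch (class names invented):

```java
import java.util.function.Supplier;

public class ThisDemo {
    public String fromLambda() {
        // In a lambda, 'this' is the enclosing ThisDemo instance
        Supplier<String> s = () -> this.getClass().getSimpleName();
        return s.get();  // "ThisDemo"
    }

    public String fromAnonymous() {
        Supplier<String> s = new Supplier<String>() {
            @Override
            public String get() {
                // In an anonymous class, 'this' is the anonymous object itself,
                // whose class has an empty simple name
                return this.getClass().getSimpleName();
            }
        };
        return s.get();  // ""
    }

    public static void main(String[] args) {
        ThisDemo d = new ThisDemo();
        System.out.println(d.fromLambda());
        System.out.println(d.fromAnonymous().isEmpty() ? "anonymous" : "enclosing");
    }
}
```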

3.1.2 access restriction of lambda expression to context

Inside a lambda expression, external variables are accessible. Note, however, that if the external variable is a local variable, it must be effectively final: it need not be declared final, but it cannot be assigned a second time.

private void test02_finalVars() {
    String a = "Wang Wei";
    new Thread(() -> {
        // External final local variables can be used in lambda expressions (final is not explicitly declared)
        System.out.println(a);
        // However, the following sentence cannot be re assigned to "external local variables used in lambda expressions".
        // That is, the external local variables used inside the lambda are implicit final.
//            a = "Li Bai";
    }).start();
    // a cannot be reassigned outside the lambda either: since it is used inside the lambda expression, a must be effectively final.
//        a = "Li Bai";
}

Note that only local variables cannot be reassigned. Instance variables and static variables can be accessed freely inside a lambda expression, including being reassigned.

3.1.3 method reference

In addition to standard lambda expressions (comparable to those in other languages), Java 8 provides a simpler form: the method reference.

  • Method reference of object instance instance::method
new Thread(this::test02_finalVars).start();
// The above sentence is equivalent to the following sentence:
new Thread(() -> this.test02_finalVars()).start();

test02_finalVars is the instance method from the previous example.

  • Static method reference: Class::staticMethod
new Thread(TestCase01Lambda::printSomething).start();
// Equivalent to:
new Thread(() -> TestCase01Lambda.printSomething()).start();
...
private static void printSomething() {
    System.out.println("Desert smoke straight, long river yen.");
}
  • Instance method reference of a class: Class::method
List<String> lines = new ArrayList<>();
lines.add("a005");
lines.add("a001");
lines.add("a003");
Collections.sort(lines, String::compareTo);
// Equivalent to:
Collections.sort(lines, (o1, o2) -> o1.compareTo(o2));
System.out.println(lines);
  • Constructor reference: Class<T>::new
Set<String> lineSet = transferElements(lines, HashSet::new);
// Equivalent to
lineSet = transferElements(lines, () -> new HashSet<>());
System.out.println(lineSet);
...
private static <T, SOURCE extends Collection<T>, DEST extends Collection<T>> DEST transferElements(
        SOURCE sourceCollection,
        Supplier<DEST> collectionFactory) {

    DEST result = collectionFactory.get();
    result.addAll(sourceCollection);
    return result;
}

3.1.4 standard functional interface

As mentioned before, a lambda expression can only implement a functional interface, that is, an interface defined with exactly one abstract method. Java 8 also adds the new java.util.function package, which defines a set of functional interfaces that can be widely used with lambdas.

  • Function: accepts one argument and returns a result based on it
  • Predicate: accepts one argument and returns a boolean based on it
  • BiFunction: accepts two arguments and returns a result based on them
  • Supplier: accepts no arguments and returns a result
  • Consumer: accepts one argument and returns no result (void)

These standard functional interfaces are widely used in Stream operation. We will see them everywhere when we talk about Stream later.
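A quick sketch exercising all five interfaces listed above (the sample values are invented):

```java
import java.util.function.BiFunction;
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;

public class FunctionalInterfacesDemo {
    public static String demo() {
        StringBuilder sb = new StringBuilder();

        Function<String, Integer> length = String::length;        // one arg -> result
        Predicate<String> isEmpty = String::isEmpty;              // one arg -> boolean
        BiFunction<Integer, Integer, Integer> add = Integer::sum; // two args -> result
        Supplier<String> greet = () -> "hi";                      // no args -> result
        Consumer<String> record = sb::append;                     // one arg -> void

        record.accept(greet.get());                 // "hi"
        record.accept("," + length.apply("abcd"));  // ",4"
        record.accept("," + isEmpty.test(""));      // ",true"
        record.accept("," + add.apply(2, 3));       // ",5"
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());  // hi,4,true,5
    }
}
```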

If you look at the source code of these interfaces now, you will find that although each defines only one abstract method, they often contain some default instance methods as well. A bit confused? Don't interfaces have no instance methods? We will cover that with another new feature of Java 8 (interface default methods) later.

3.2 Stream API

The new Stream API in Java 8 is an enhancement of the Collection classes. It focuses on convenient, efficient aggregate operations (bulk data operations) over collections. Together with the new lambda expressions, the Stream API greatly improves programming efficiency and readability. It also provides two execution modes: serial and parallel. The parallel mode takes full advantage of multi-core processors, using the fork/join framework (a Java 7 feature we skipped because it is rarely used directly) to split tasks and speed up processing. Writing parallel code is usually difficult and error-prone, but with the Stream API you can easily write high-performance concurrent programs without writing a single line of multi-threaded code. The java.util.stream package that first appeared in Java 8 is a product of the combined influence of functional languages and the multi-core era.

A so-called aggregate operation is a statistical operation over a data set, such as average, sum, min, max, or count. In the information systems we build, these aggregations are usually done through SQL queries against a relational database. To perform them in a Java application, we previously had to hand-write collection logic, explicitly iterating the collection and repeatedly executing the operation. Such code is tedious to write, hard to maintain, and can easily develop performance problems.

The Stream API provided by Java 8 makes aggregate operations very simple to develop and more readable, and for time-consuming aggregations on multi-core machines, the parallel mode can perform better.
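Switching between serial and parallel execution is a single method call. A sketch summing squares both ways (the workload here is trivial, so parallel will not actually be faster; it only demonstrates the API):

```java
import java.util.stream.IntStream;

public class ParallelSumDemo {
    public static long serialSum(int n) {
        return IntStream.rangeClosed(1, n).mapToLong(i -> (long) i * i).sum();
    }

    public static long parallelSum(int n) {
        // .parallel() switches the same pipeline onto the fork/join common pool
        return IntStream.rangeClosed(1, n).parallel().mapToLong(i -> (long) i * i).sum();
    }

    public static void main(String[] args) {
        // Both modes produce the same result; only the execution strategy differs
        System.out.println(serialSum(100));    // 338350
        System.out.println(parallelSum(100));  // 338350
    }
}
```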

3.2.1 Stream overview

Now that we have a rough idea that Stream is for aggregating over data sets, let's first look at a typical example of a Stream completing an aggregation:

int sum = Stream.of("", "1", null, "2", " ", "3")
        .filter(s -> s != null && s.trim().length() > 0)
        .map(s -> Integer.parseInt(s))
        .reduce((left, right) -> right += left)
        .orElse(0);

This example computes the sum of all the numbers in a collection.

First, briefly explain the process of the above Stream operation:

  1. Stream.of("", "1", null, "2", " ", "3"): obtains a Stream object over the data source;
  2. .filter(s -> s != null && s.trim().length() > 0): filters the Stream returned by the previous step and returns a new, filtered Stream object;
  3. .map(s -> Integer.parseInt(s)): converts each string in the previous Stream to a number and returns a new Stream object;
  4. .reduce((left, right) -> right += left): aggregates the previous Stream and returns the total. (The result is an Optional, hence the final orElse; Optional is another new feature of Java 8 that we will cover later, so we ignore it here.)

Next, let's talk about the basic flow of a Stream operation.

From the classic example above, we can see that a Stream operation can be divided into three basic steps:

1. Acquire the data source -> 2. Transform the data -> 3. Perform the operation

In more detail, it can be regarded as a pipeline:

Data set -> stream() | filter -> Stream | map -> Stream | reduce

Here, filter and map are data transformations, while reduce is the executed operation. Each transform leaves the original Stream object unchanged and returns a new Stream object, so operations can be chained into a pipeline.

The main ways to obtain data sources are:

1. From Collection and array

Collection.stream()
Collection.parallelStream()
Arrays.stream(T array) or Stream.of()

2. From BufferedReader

java.io.BufferedReader.lines()

3. Static factory

java.util.stream.IntStream.range()
java.nio.file.Files.walk()

4. Build by yourself

java.util.Spliterator

5. Others

Random.ints()
BitSet.stream()
Pattern.splitAsStream(java.lang.CharSequence)
JarFile.stream()
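A few of the source-acquisition routes listed above, combined in one sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class StreamSourcesDemo {
    public static List<String> fromCollection() {
        // Any Collection can produce a stream via stream()
        return Arrays.asList("a", "b").stream().collect(Collectors.toList());
    }

    public static long fromArray() {
        // Arrays.stream wraps an existing array
        return Arrays.stream(new int[]{1, 2, 3}).count();
    }

    public static int fromFactory() {
        // IntStream.range is a static factory producing 0..4
        return IntStream.range(0, 5).sum();
    }

    public static long fromOf() {
        // Stream.of builds a stream from listed values
        return Stream.of("x", "y", "z").count();
    }

    public static void main(String[] args) {
        System.out.println(fromCollection()); // [a, b]
        System.out.println(fromArray());      // 3
        System.out.println(fromFactory());    // 10
        System.out.println(fromOf());         // 3
    }
}
```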

I'll talk about the Stream operation example later. Don't worry.

Stream operation types:

  • Intermediate: an intermediate operation corresponds to the Transform step above. It opens a Stream, defines a transformation such as mapping or filtering, and returns a new Stream object for the next operation. Syntactically, multiple intermediate operations can be chained. But these operations are lazy: merely calling such a method does not actually start traversing the Stream.

Common Intermediate operations: map (mapToInt, flatMap, etc.), filter, distinct, sorted, peek, limit, skip, parallel, sequential, unordered

  • Terminal: a terminal operation corresponds to the Operation step above. A Stream pipeline can have only one terminal operation. When it executes, the Stream returned by the last chained intermediate operation (or the source Stream itself, if there are no intermediate operations) actually begins traversing the data set, after which the Stream can no longer be operated on. A terminal operation must therefore be the last one; executing it starts the real traversal and produces the result.

Common Terminal operations: forEach, forEachOrdered, toArray, reduce, collect, min, max, count, anyMatch, allMatch, noneMatch, findFirst, findAny, iterator

  • Short-circuiting: short-circuiting operations do not conflict with the two kinds above; a short-circuiting operation is itself either intermediate or terminal. When processing an infinite Stream, it must return a finite Stream (intermediate) or a finite result (terminal). Short-circuiting operations can also be used on finite Streams.

Common short-circuiting operations: anyMatch, allMatch, noneMatch, findFirst, findAny, limit

Multiple intermediate operations do not cause multiple traversals of the data set, because intermediate operations are lazy: the transformations are fused together during the terminal operation, and the traversal happens only once.

As for which Stream operations are intermediate and which are terminal, a simple rule of thumb is to look at whether the method returns a Stream.
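The laziness of intermediate operations is easy to observe with peek and a counter (the counter is our own addition for the demo):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyStreamDemo {
    public static int visitedWithoutTerminal() {
        AtomicInteger visited = new AtomicInteger();
        List<String> data = Arrays.asList("a", "b", "c");
        // Only intermediate operations here: nothing is traversed yet
        Stream<String> s = data.stream()
                .peek(x -> visited.incrementAndGet())
                .map(String::toUpperCase);
        return visited.get();  // 0: the pipeline has not run
    }

    public static int visitedWithShortCircuit() {
        AtomicInteger visited = new AtomicInteger();
        // findFirst is a short-circuiting terminal op: only one element is visited
        Arrays.asList("a", "b", "c").stream()
                .peek(x -> visited.incrementAndGet())
                .findFirst();
        return visited.get();  // 1
    }

    public static void main(String[] args) {
        System.out.println(visitedWithoutTerminal());   // 0
        System.out.println(visitedWithShortCircuit());  // 1
    }
}
```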

3.2.2 use of stream common operations

If you haven't used Stream before, the introduction so far may still be a blur. Come on, young coder, let's roll up our sleeves and write some code.

First, prepare a data set whose elements look like this (Poet):

class Poet {
    private String name;
    private int age;
    private int evaluation;

    public Poet() {
    }

    public Poet(String name, int age, int evaluation) {
        this.name = name;
        this.age = age;
        this.evaluation = evaluation;
    }

    @Override
    public String toString() {
        return "Poet{" +
                "name='" + name + '\'' +
                ", age=" + age +
                ", evaluation=" + evaluation +
                '}';
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public int getEvaluation() {
        return evaluation;
    }

    public void setEvaluation(int evaluation) {
        this.evaluation = evaluation;
    }
}

Then prepare a collection of famous poets of Tang Dynasty:

List<Poet> poets = preparePoets();
...
private List<Poet> preparePoets() {
    List<Poet> poets = new ArrayList<>();
    // Age may not be accurate, evaluation can not be taken seriously
    poets.add(new Poet("Wang Wei", 61, 4));
    poets.add(new Poet("Li Bai", 61, 5));
    poets.add(new Poet("Du Fu", 58, 5));
    poets.add(new Poet("Bai Juyi", 74, 4));
    poets.add(new Poet("Li Shangyin", 45, 4));
    poets.add(new Poet("Du Mu", 50, 4));
    poets.add(new Poet("Li He", 26, 4));
    return poets;
}
  • foreach:
// foreach is equivalent to poets.stream().forEach(System.out::println);
poets.forEach(System.out::println);

Note that the same Stream cannot be operated repeatedly, as shown below:

Stream<Poet> poetStream = poets.stream();
poetStream.forEach(System.out::println);
try {
    // The same stream object cannot be operated on twice: a stream flows one way, and once consumed it is done.
    poetStream.forEach(System.out::println);
} catch (Throwable t) {
    System.out.println("stream has already been operated upon or closed. Don't chew the sugarcane that others have chewed...");
}
// But getting stream from the collection again is repeatable because it is a new stream object.
poets.stream().forEach(System.out::println);
  • map -> Collectors
String strPoets = poets.stream()
        .map(poet -> poet.getName() + " Great poets of Tang Dynasty")
        .collect(Collectors.joining(","));
System.out.println(strPoets);

Collectors provides many operations, such as joining elements into a string, or collecting them into other collections (a List or Set), and so on.
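Two more Collectors beyond joining can be sketched like this (the class and example data here are made up for illustration): partitioningBy splits the elements into two groups by a predicate, and toMap builds a Map from the elements.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectorsDemo {
    // partitioningBy groups elements under the keys false/true by a predicate
    static Map<Boolean, List<String>> partitionByLength(List<String> words) {
        return words.stream().collect(Collectors.partitioningBy(w -> w.length() > 5));
    }

    // toMap builds a Map; here each word maps to its length (keys must be unique)
    static Map<String, Integer> toLengthMap(List<String> words) {
        return words.stream().collect(Collectors.toMap(w -> w, String::length));
    }

    public static void main(String[] args) {
        List<String> names = List.of("Li Bai", "Du Fu", "Wang Wei");
        System.out.println(partitionByLength(names)); // {false=[Du Fu], true=[Li Bai, Wang Wei]}
        System.out.println(toLengthMap(names).get("Du Fu")); // 5
    }
}
```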

  • filter + map + collect into set collection
Set<String> poetsLi = poets.stream()
        .filter(poet -> poet.getName().startsWith("Li"))
        .map(poet -> "One of the three Tang poets surnamed Li: " + poet.getName())
        .collect(Collectors.toSet());
System.out.println(poetsLi);

It was said earlier that the same stream object can only be operated on once, so why can multiple operations be chained here?
Because map and filter are Intermediate operations: each returns a new stream object.

  • filter + findAny/findFirst to find an element satisfying the condition
Poet topPoet = poets.stream()
        .filter(poet -> poet.getEvaluation() > 4)
        .findAny()
//      .findFirst()
        // About orElse, I'll explain later when I talk about Optional
        .orElse(new Poet("Du Fu", 58, 5));
System.out.println("One of the best poets:" + topPoet.getName());
  • allMatch and anyMatch
boolean all50plus = poets.stream()
        .allMatch(poet -> poet.getAge() > 50);
System.out.println("Did all the great poets live past 50?" + (all50plus ? " Yes" : " Not all of them"));

boolean any50plus = poets.stream()
        .anyMatch(poet -> poet.getAge() > 50);
System.out.println("Did any great poet live past 50?" + (any50plus ? " Yes indeed" : " Not really"));
  • count max min sum
// 5-star poet count
System.out.println("Number of 5-star poets:" + poets.stream()
        .filter(poet -> poet.getEvaluation() == 5)
        .count());
// The oldest poet
System.out.println("The oldest poet:" + poets.stream()
        .max(Comparator.comparingInt(Poet::getAge))
        .orElse(null));
// The youngest poet
System.out.println("The youngest poet:" + poets.stream()
        .min(Comparator.comparingInt(Poet::getAge))
        .orElse(null));
// Total age
System.out.println("Total age of poets:" + poets.stream()
        .mapToInt(Poet::getAge)
        .sum());

The Stream API of Java 8 provides three primitive-specialized methods: mapToInt(), mapToLong(), and mapToDouble(). Semantically you could just as well write a map operation that yields a Stream<Integer>/Stream<Long>/Stream<Double> and continue from there, but using mapToInt() directly performs better, because it eliminates the auto-boxing and unboxing inside the loop of the subsequent operations.
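A minimal sketch of the difference (class name and data are illustrative): both methods compute the same sum, but the mapToInt pipeline stays in primitive ints throughout, and an IntStream also offers one-pass statistics for free.

```java
import java.util.IntSummaryStatistics;
import java.util.List;

public class MapToIntDemo {
    // map(...) yields a Stream<Integer>: every length is boxed, then unboxed again by the reducer
    static int boxedSum(List<String> words) {
        return words.stream().map(String::length).reduce(0, Integer::sum);
    }

    // mapToInt(...) yields an IntStream: no boxing inside the loop
    static int primitiveSum(List<String> words) {
        return words.stream().mapToInt(String::length).sum();
    }

    public static void main(String[] args) {
        List<String> words = List.of("Li Bai", "Du Fu");
        System.out.println(boxedSum(words) + " == " + primitiveSum(words)); // 11 == 11
        // IntStream also gives min/max/avg/count in a single pass:
        IntSummaryStatistics stats = words.stream().mapToInt(String::length).summaryStatistics();
        System.out.println("min=" + stats.getMin() + " max=" + stats.getMax()); // min=5 max=6
    }
}
```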

  • reduce is a generalized aggregation operation. For example, we can use reduce to compute the total age:
int sumAge = poets.stream()
        .mapToInt(Poet::getAge)
        .reduce((sum, age) -> sum + age)
//      .reduce(Integer::sum)
        .orElse(0);
System.out.println("Total age calculated by reduce:" + sumAge);

Note that reduce can take a starting value, for example:

// Suppose the evaluations of the other Tang poets have already been totaled to 100 (not including the seven above); here we continue accumulating the evaluation total from 100
int sumEvaluation = poets.stream()
        .mapToInt(Poet::getEvaluation)
        .reduce(100, (left, right) -> left + right);
//      .reduce(100, Integer::sum);
System.out.println("Evaluation total calculated by reduce with a starting value:" + sumEvaluation);
  • limit
System.out.println("Generate an arithmetic sequence, limited to 10 terms:");
Stream.iterate(1, n -> n + 3).limit(10).forEach(x -> System.out.print(x + " "));
  • distinct
String distinctEvaluation = poets.stream()
        .map(poet -> String.valueOf(poet.getEvaluation()))
        .distinct()
        .collect(Collectors.joining(","));
System.out.println("Poets' evaluation scores (deduplicated): " + distinctEvaluation);
  • sorted
System.out.println("Poets by age:");
poets.stream()
        .sorted(Comparator.comparingInt(Poet::getAge))
        .forEach(System.out::println);
  • group
Map<String, List<Poet>> poetsByAge = poets.stream()
        .collect(Collectors.groupingBy(poet -> {
            int age = poet.getAge();
            if (age < 20) {
                return "1~19";
            } else if (age < 30) {
                return "20~29";
            } else if (age < 40) {
                return "30~39";
            } else if (age < 50) {
                return "40~49";
            } else if (age < 60) {
                return "50~59";
            } else if (age < 70) {
                return "60~69";
            } else {
                return "70~";
            }
        }));
System.out.println("Group poets by age:");
poetsByAge.keySet().stream()
        .sorted(String::compareTo)
        .forEach(s -> System.out.println(
                String.format("%s : %s", s, poetsByAge.get(s).stream().map(Poet::getName).collect(Collectors.joining(",")))));
  • flatmap [(poet1, poet2, poet3),(poet4,poet5)] --> [poet1, poet2, poet3, poet4, poet5]
System.out.println("Flatten the grouped poet collections via flatMap:");
List<Poet> lstFromGroup = poetsByAge.values().stream()
        .flatMap(poets1 -> poets1.stream())
        .collect(Collectors.toList());
lstFromGroup.forEach(System.out::println);

3.2.3 parallel mode of stream

The examples so far have all used the Stream in serial mode. Now let's obtain a parallel-mode Stream via parallelStream(). Note that parallel mode and serial mode sometimes produce different results for the same operation:

System.out.println("findAny:");
for (int i = 0; i < 10; i++) {
    Poet topPoet1 = poets.parallelStream()
            .filter(poet -> poet.getEvaluation() > 4)
            .findAny()
            .orElse(new Poet("XX", 50, 5));
    System.out.println("One of the best poets:" + topPoet1.getName());
}

System.out.println("findFirst:");
for (int i = 0; i < 10; i++) {
    Poet topPoet2 = poets.parallelStream()
            .filter(poet -> poet.getEvaluation() > 4)
            .findFirst()
            .orElse(new Poet("XX", 50, 5));
    System.out.println("One of the best poets:" + topPoet2.getName());
}

In the output of the above code, findFirst behaves the same as in serial mode, but findAny sometimes differs. Think about why.

Be careful when using parallel streams: not every operation behaves correctly in parallel.

int sumEvaluation = poets.parallelStream()
        .mapToInt(Poet::getEvaluation)
        .reduce(100, Integer::sum);
System.out.println("A reduce with an initial value must not be run in parallel:" + sumEvaluation);

Parallel mode is attractive, but only if you know when to use it. This example shows that a reduce with an initial value is not suited to parallel mode: in parallel, the initial value is applied once per partition rather than once in total, so unless it is a true identity (0 for sum), the result is wrong.
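The pitfall and its fix can be sketched like this (the helper names are made up for illustration): with 100 as a pseudo-identity the parallel result is typically wrong, while summing with the true identity 0 and adding the offset once afterwards is parallel-safe.

```java
import java.util.stream.IntStream;

public class ParallelReduceDemo {
    // WRONG in parallel: 100 is not an identity for sum, so it is applied once per partition
    static int wrongTotal(int n) {
        return IntStream.rangeClosed(1, n).parallel().reduce(100, Integer::sum);
    }

    // SAFE: reduce with the true identity (0 for sum), then add the offset exactly once
    static int safeTotal(int n) {
        return 100 + IntStream.rangeClosed(1, n).parallel().sum();
    }

    public static void main(String[] args) {
        System.out.println("safe:  " + safeTotal(1000));  // always 100 + 500500 = 500600
        System.out.println("wrong: " + wrongTotal(1000)); // usually larger: one extra 100 per partition
    }
}
```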

  • The parallel stream mechanism is based on the Fork/Join framework introduced in Java 7. A general understanding is enough.

The essence of Fork/Join is the same as Hadoop MapReduce: divide and conquer. A task is split into several small tasks that can run in parallel (Map / fork), and the results are finally merged (Reduce / join). Hadoop is of course more complex, coordinating distributed processes across nodes, while Fork/Join coordinates multiple threads within one process (JVM).

Why do we rarely use Fork/Join directly? Because it's a hassle to use... Roughly speaking:

  1. First you need to define a ForkJoinPool (like a thread pool), then define ForkJoinTasks to carry out the work, and submit the ForkJoinTasks to the ForkJoinPool;
  2. Then, inside your ForkJoinTask, you implement some condition or threshold for splitting the data set to be processed into several new ForkJoinTasks, call fork() on these subtasks, and then call their join() methods (divide and conquer).
  3. The key mechanism in Fork/Join is the work-stealing strategy: subtasks are placed into separate double-ended queues (deques), each served by its own worker thread. A thread normally takes the next subtask from one end of its own queue; when it runs idle, it "steals" subtasks from the other end of another thread's queue. The advantage of work stealing is full thread utilization for parallel computation; the disadvantage is that when a queue holds few tasks, a synchronization mechanism is needed to keep threads from competing for the same subtask, which costs extra performance. (So when we verify Stream performance later, we will find that with small data volumes the parallel Stream is sometimes slower. That's why.)
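The steps above can be sketched as a minimal RecursiveTask (the class name, threshold, and data are illustrative, not from this article): below the threshold the task computes directly, above it the task forks into two halves and joins the results.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ForkJoinSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000; // below this, compute directly
    private final long[] data;
    private final int from, to; // half-open range [from, to)

    ForkJoinSum(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) { // small enough: just loop
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;
        }
        int mid = (from + to) / 2; // otherwise split in two (fork) and merge (join)
        ForkJoinSum left = new ForkJoinSum(data, from, mid);
        ForkJoinSum right = new ForkJoinSum(data, mid, to);
        left.fork(); // schedule the left half asynchronously
        return right.compute() + left.join();
    }

    // Sum 1..n via the Fork/Join framework
    static long sumTo(int n) {
        long[] data = new long[n];
        for (int i = 0; i < n; i++) data[i] = i + 1;
        return new ForkJoinPool().invoke(new ForkJoinSum(data, 0, n));
    }

    public static void main(String[] args) {
        System.out.println(sumTo(10_000)); // 50005000
    }
}
```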

3.2.4 refactor lambda expression

In Stream operation, sometimes we need to write a long lambda function. At this time, we can flexibly use IDE's refactoring function to refactor a long lambda expression into a variable or method.

Predicate<Poet> poetPredicate = poet -> poet.getEvaluation() < 5;
Consumer<Poet> poetConsumer = poet -> System.out.println(poet.getName());
poets.stream()
        .filter(poetPredicate)
        .forEach(poetConsumer);

Function<Poet, String> poetStringFunction = poet -> {
    int age = poet.getAge();
    if (age < 20) {
        return "1~19";
    } else if (age < 30) {
        return "20~29";
    } else if (age < 40) {
        return "30~39";
    } else if (age < 50) {
        return "40~49";
    } else if (age < 60) {
        return "50~59";
    } else if (age < 70) {
        return "60~69";
    } else {
        return "70~";
    }
};
Map<String, List<Poet>> poetsByAge = poets.stream()
        .collect(Collectors.groupingBy(poetStringFunction));
System.out.println("Group poets by age:");
Consumer<String> stringConsumer = s -> System.out.println(
        String.format("%s : %s", s, poetsByAge.get(s).stream().map(Poet::getName).collect(Collectors.joining(","))));
poetsByAge.keySet().stream()
        .sorted(String::compareTo)
        .forEach(stringConsumer);

3.2.5 Stream performance

Stream performance cannot simply be described as faster or slower than the older collection-traversal idioms; it must be judged against the specific performance constraints of each scenario.

Three scenarios are briefly considered here:

  1. Simple traversal of a single data set;
  2. join operation of two data sets;
  3. Complex conversion operations for a single dataset.

The following code was run in this hardware environment:

CPU resources available locally to my program: 6 cores (an i7 with 4 cores / 8 threads, but two logical cores are permanently occupied by virtual machines, leaving 6 usable).

Simple traversal of a single dataset

For simple traversal of a single data set, Stream's serial performance generally falls between the fori loop and the iterator loop, while Stream's parallel mode can effectively improve performance (beating fori, iterator, and serial Stream) provided the platform has multiple cores and each iteration of the loop is relatively time-consuming.

From the example code below, we can see that the constraints affecting traversal performance include at least the following:

  1. Machine hardware, such as whether it is multi-core and how many cores there are. (With only two cores, parallel may not beat serial, because the cost of thread context switching must be considered.)
  2. The size of the data set: at different scales (100, 1,000, 10,000, 100,000, millions...) the relative performance of the traversal methods differs significantly.
  3. The cost of a single iteration: if it is on the nanosecond level, Stream's parallel mode has no advantage (again because of thread-context-switching overhead), but when a single iteration takes hundreds of milliseconds, the advantage of parallel mode is quite obvious (on a multi-core machine, of course).

With the code below, try varying the constraints, for example:

  1. Adjust the sleep time, say from no sleep up to 500 ms;
  2. Adjust the data set size, say from 100 to 1,000, 10,000, 100,000, millions... (with large counts, reduce or even remove the sleep so the run doesn't take too long);
  3. If you have access to machines with very different CPU core counts, try running the parallel mode on each of them.

In addition, the LocalDateTime and Duration in the code are another Java 8 new feature, to be introduced later; you don't need to worry about them for now.

List<String> numbers = new ArrayList<>();
for (int i = 0; i < 100; i++) {
    numbers.add("a" + i);
}

System.out.println("=== loop with fori ===");
LocalDateTime startTime = LocalDateTime.now();
for (int i = 0; i < numbers.size(); i++) {
    String whatever = numbers.get(i) + "b";
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
LocalDateTime stopTime = LocalDateTime.now();
System.out.println("loop with fori time(millis):" + Duration.between(startTime, stopTime).toMillis());

System.out.println("=== loop with Iterator ===");
startTime = LocalDateTime.now();
for (String num : numbers) {
    String whatever = num + "b";
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
}
stopTime = LocalDateTime.now();
System.out.println("loop with Iterator time(millis):" + Duration.between(startTime, stopTime).toMillis());

System.out.println("=== loop with stream ===");
startTime = LocalDateTime.now();
numbers.stream().forEach(num -> {
    String whatever = num + "b";
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
});
stopTime = LocalDateTime.now();
System.out.println("loop with stream time(millis):" + Duration.between(startTime, stopTime).toMillis());

System.out.println("=== loop with parallelStream ===");
startTime = LocalDateTime.now();
numbers.parallelStream().forEach(num -> {
    String whatever = num + "b";
    try {
        Thread.sleep(500);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
});
stopTime = LocalDateTime.now();
System.out.println("loop with parallelStream time(millis):" + Duration.between(startTime, stopTime).toMillis());

When running the above code locally, remember to reduce the sleep or even comment it out when the element count is large; otherwise you'll be waiting half a day for results...

join of two datasets

The example above is just a single-data-set traversal, but in real development we often meet more complex data set operations, the most typical being a join of two data sets.

First, we define two more classes, Evaluation and PoetExt:

class Evaluation {
    private int evaluation;
    private String description;

    public Evaluation() {
    }

    public Evaluation(int evaluation, String description) {
        this.evaluation = evaluation;
        this.description = description;
    }

    public int getEvaluation() {
        return evaluation;
    }

    public void setEvaluation(int evaluation) {
        this.evaluation = evaluation;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}

class PoetExt extends Poet {
    private String description;

    public PoetExt(String name, int age, int evaluation, String description) {
        super(name, age, evaluation);
        this.description = description;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }

    @Override
    public String toString() {
        return "PoetExt{" +
                "name='" + this.getName() + '\'' +
                ", description='" + description + '\'' +
                '}';
    }
}

Obviously, Poet is the element type of poets, and Evaluation the element type of evaluations. The requirement to implement: join poets with evaluations to get a PoetExt collection. In relational-database SQL terms, poets is the primary table, evaluations is the secondary table, and the join condition is Poet.evaluation = Evaluation.evaluation.

Before Java 8, implementing such a two-data-set join in a Java application usually meant an explicit nested double loop of iterators. From Java 8 on, we can implement the join with Stream operations and, if the scenario calls for it, in Stream's parallel mode.

The code below compares the performance of the three approaches (explicit nested iterator loops, Stream, parallel Stream):

// Number of poets
int n = 100000;
// Number of evaluations
int m = 100000;
List<Poet> poets = new ArrayList<>();
for (int i = 0; i < n; i++) {
    String name = String.format("poet%010d", i + 1);
    poets.add(new Poet(name, (int) (80 * Math.random()) + 10, (int) (m * Math.random()) + 1));
}
List<Evaluation> evaluations = new ArrayList<>();
for (int i = 0; i < m; i++) {
    evaluations.add(new Evaluation(i + 1, (i + 1) + "Star"));
}

// The logic to be implemented is to join poets and evaluations to get the PoetExt set

// The expression of loop nesting of explicit double-layer iterator:
List<PoetExt> poetExts = new ArrayList<>();
System.out.println("=== Explicit double iterator loop ===");
LocalDateTime startTime = LocalDateTime.now();
for(Poet poet : poets) {
    int eva = poet.getEvaluation();
    for(Evaluation evaluation : evaluations) {
        if (eva == evaluation.getEvaluation()) {
            PoetExt poetExt = new PoetExt(poet.getName(), poet.getAge(), eva, evaluation.getDescription());
            poetExts.add(poetExt);
            break;
        }
    }
}
LocalDateTime stopTime = LocalDateTime.now();
System.out.println("Explicit double iterator loop time(millis):" + Duration.between(startTime, stopTime).toMillis());
System.out.printf("%s count: %d, first result: %s%n", "Explicit double iterator loop", poetExts.size(), poetExts.get(0).toString());

// Stream:
System.out.println("=== Stream ===");
startTime = LocalDateTime.now();
poetExts = poets.stream()
        .map(poet -> {
            Evaluation eva = evaluations.stream()
                    .filter(evaluation -> evaluation.getEvaluation() == poet.getEvaluation())
                    .findAny()
                    .orElseThrow();
            return new PoetExt(poet.getName(), poet.getAge(), poet.getEvaluation(), eva.getDescription());
        })
        .collect(Collectors.toList());
stopTime = LocalDateTime.now();
System.out.println("Stream time(millis):" + Duration.between(startTime, stopTime).toMillis());
System.out.printf("%s count: %d, first result: %s%n", "Stream", poetExts.size(), poetExts.get(0).toString());

// parallelStream
System.out.println("=== parallelStream ===");
startTime = LocalDateTime.now();
poetExts = poets.parallelStream()
        .map(poet -> {
            Evaluation eva = evaluations.parallelStream()
                    .filter(evaluation -> evaluation.getEvaluation() == poet.getEvaluation())
                    .findAny()
                    .orElseThrow();
            return new PoetExt(poet.getName(), poet.getAge(), poet.getEvaluation(), eva.getDescription());
        })
        .collect(Collectors.toList());
stopTime = LocalDateTime.now();
System.out.println("parallelStream time(millis):" + Duration.between(startTime, stopTime).toMillis());
System.out.printf("%s count: %d, first result: %s%n", "parallelStream", poetExts.size(), poetExts.get(0).toString());

Running results under different local constraints (time unit: ms):

poets     evaluations  explicit double loop  Stream   parallelStream
1000      1000         53                    44       145
10000     10000        772                   603      520
100000    100000       27500                 48351    11958
10000     100000       4375                  4965     1510
100000    10000        3078                  5053     1915
100000    1000000      421999                787188   186758
1000000   100000       278927                497239   122923
100000    100          140                   306      895
100       100000       111                   110      111

As we can see, in this old local hardware environment (6 cores available), when the data volume is small (both sides of the join under 10,000 elements) the three approaches differ little: the explicit double loop and the Stream are close, and the parallel Stream is even slower at 1,000 elements. Once the data volume reaches 100,000 or more, the gap becomes significant: the parallel Stream is clearly fastest, the explicit double loop second, and the serial Stream slowest.

  • When both data sets are small, serial and parallel Stream perform within the same order of magnitude as the explicit double loop.
  • When both data sets are large: parallel Stream > explicit double loop > serial Stream.
  • When the primary data set is large and the secondary one small: explicit double loop > serial Stream > parallel Stream.
  • When the secondary data set is large and the primary one small, the three are close.

Note:

  1. None of the three join implementations applies the space-for-time optimization of first converting evaluations into a HashMap and then fetching the target evaluation directly while traversing poets. That optimization is omitted because what we are comparing here is Stream's implicit double traversal against the older explicit double traversal; the HashMap optimization could be applied to all three alike...
  2. The explicit double traversal does not include a fori version, because fori performs worse than the iterator loop, so there is no need to embarrass it here...
  3. What counts as a "large" or "small" data set depends on the hardware environment and cannot be generalized.
  4. These tests are fairly rough: each case was run only once. If you have time, run each data-volume case ten or more times and take the average.
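The space-for-time optimization mentioned in note 1 can be sketched like this, with simplified stand-ins for Poet and Evaluation (all names here are illustrative): indexing the secondary data set once turns the O(n*m) join into O(n + m).

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class HashJoinDemo {
    static class Poet {
        final String name; final int evaluation;
        Poet(String name, int evaluation) { this.name = name; this.evaluation = evaluation; }
    }
    static class Evaluation {
        final int evaluation; final String description;
        Evaluation(int evaluation, String description) { this.evaluation = evaluation; this.description = description; }
    }

    static List<String> join(List<Poet> poets, List<Evaluation> evaluations) {
        // Space for time: index the evaluations once, O(m)...
        Map<Integer, String> byId = evaluations.stream()
                .collect(Collectors.toMap(e -> e.evaluation, e -> e.description));
        // ...then each poet lookup costs O(1), so the whole join is O(n + m), not O(n * m)
        return poets.stream()
                .map(p -> p.name + ":" + byId.get(p.evaluation))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Poet> poets = List.of(new Poet("Li Bai", 5), new Poet("Du Mu", 4));
        List<Evaluation> evas = List.of(new Evaluation(4, "4Star"), new Evaluation(5, "5Star"));
        System.out.println(join(poets, evas)); // [Li Bai:5Star, Du Mu:4Star]
    }
}
```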

Complex conversion operations for a single dataset

Having compared the two scenarios above, we can form a rough impression:

  1. When the data volume is small, the approaches perform about the same;
  2. When the data volume is large, as long as the business allows and the hardware suffices, try parallel mode;
  3. When only serial execution is possible and performance matters, the explicit iterator loop is still somewhat faster.

But I still want to say: unless you need the last drop of performance, prefer Stream operations.

Let's look at an example: multiple data conversion operations for a single dataset.

First, the poets and evaluations collections again:

// Number of poets
int n = 100000;
// Number of evaluations
int m = 1000;
List<Poet> poets = new ArrayList<>();
for (int i = 0; i < n; i++) {
    String name = String.format("poet%010d", i + 1);
    poets.add(new Poet(name, (int) (80 * Math.random()) + 10, (int) (m * Math.random()) + 1));
}
List<Evaluation> evaluations = new ArrayList<>();
for (int i = 0; i < m; i++) {
    evaluations.add(new Evaluation(i + 1, (i + 1) + "Star"));
}

To avoid a nested double traversal, we convert the evaluations collection into a HashMap:

Map<Integer, String> evaluationMap = evaluations.stream()
        .collect(Collectors.toMap(Evaluation::getEvaluation, Evaluation::getDescription));

The logic we'll simulate: from poets, find all poets with evaluation > m / 2, splice each into a "poetName:evaluationDescription" string, and then keep only the strings that contain a "0".

Although this logic could be implemented in a single loop, in real development more complex business logic often pushes us to split it into several loops. So the simulation code below is deliberately left unmerged.

System.out.println("=== Data conversion logic implemented with multiple loops ===");
LocalDateTime startTime = LocalDateTime.now();
List<Poet> betterPoets = new ArrayList<>();
for(Poet poet : poets) {
    if (poet.getEvaluation() > m / 2) {
        betterPoets.add(poet);
    }
}
List<String> poetWithEva2 = new ArrayList<>();
for(Poet poet : betterPoets) {
    poetWithEva2.add(poet.getName() + ":" + evaluationMap.get(poet.getEvaluation()));
}
List<String> poetWithEva3 = new ArrayList<>();
for(String s : poetWithEva2) {
    if (s != null && s.contains("0")) {
        poetWithEva3.add(s);
    }
}
LocalDateTime stopTime = LocalDateTime.now();
System.out.println("Data conversion with multiple loops time(millis):" + Duration.between(startTime, stopTime).toMillis());

Then we use Stream to implement the same logic:

System.out.println("=== Data conversion logic implemented with Stream ===");
startTime = LocalDateTime.now();
List<String> poetWithEva = poets.stream()
        .filter(poet -> poet.getEvaluation() > m / 2)
        .map(poet -> poet.getName() + ":" + evaluationMap.get(poet.getEvaluation()))
        .filter(s -> s.contains("0"))
        .collect(Collectors.toList());
stopTime = LocalDateTime.now();
System.out.println("Data conversion with Stream time(millis):" + Duration.between(startTime, stopTime).toMillis());

Then the three explicit iterator loops are optimized into a single loop:

System.out.println("=== Data conversion logic implemented with a single loop ===");
startTime = LocalDateTime.now();
List<String> lastLst = new ArrayList<>();
for(Poet poet : poets) {
    if (poet.getEvaluation() > m / 2) {
        String tmp = poet.getName() + ":" + evaluationMap.get(poet.getEvaluation());
        if (tmp.contains("0")) {
            lastLst.add(tmp);
        }
    }
}
stopTime = LocalDateTime.now();
System.out.println("Data conversion with a single loop time(millis):" + Duration.between(startTime, stopTime).toMillis());

Judging from the results, the gap between the Stream version and the single-loop (iterator) version is very small, and both clearly beat the multi-loop version. The reason is obvious: the Stream pipeline, too, traverses the data only once.

But Stream has a huge advantage in development efficiency: its semantics are simple and clear, and developers don't have to first write the logic as multiple loops and then optimize them into one.

Of course, a skilled programmer can write the optimized single loop straight away, but compare the two versions and ask: which is more elegant? Which reveals the code's intent more readily? Obviously, Stream is far more readable and maintainable than the explicit loop.

So, once again: unless you need the last drop of performance, prefer Stream operations.

Suggestions on the use of Stream and parallel Stream

Direct conclusion:

  1. Wherever Stream fits, prefer Stream (high development efficiency, readable and maintainable code, performance close to an iterator loop);
  2. Do not use parallel Stream unless serial Stream cannot meet a real performance requirement. First, not every data set operation can run correctly in parallel. Second, parallel execution leans heavily on hardware, especially CPU cores; in a complex application handling concurrent requests, other business requests may fail to get enough CPU...

As for CPU consumption in parallel mode: when you run the earlier performance test code locally, open the system resource monitor and watch CPU utilization in serial versus parallel mode. You will find that serial Stream and explicit iterator loops basically saturate only one core, while parallel mode drives every core to 100%. If your application serves other concurrent, CPU-hungry requests at the same time, will they get slower? And if it is a high-concurrency system, can you guarantee that such CPU-heavy parallel operations run only during off-peak periods? (Assuming, of course, that your system even has an off-peak period...)
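One mitigation worth knowing is to confine a parallel pipeline to a dedicated, smaller pool instead of the JVM-wide common pool. This is only a sketch: it relies on the undocumented but widely used behavior that a parallel stream executes its tasks in the ForkJoinPool from which the pipeline is invoked, and the names below are illustrative.

```java
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class BoundedParallelDemo {
    // Run a parallel-stream pipeline inside a dedicated pool of the given size,
    // so it cannot monopolize the common pool shared by the whole JVM.
    static long sumInPool(List<Long> nums, int parallelism) throws Exception {
        ForkJoinPool pool = new ForkJoinPool(parallelism);
        try {
            return pool.submit(() -> nums.parallelStream().mapToLong(Long::longValue).sum()).get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<Long> nums = LongStream.rangeClosed(1, 1000).boxed().collect(Collectors.toList());
        System.out.println(sumInPool(nums, 2)); // 500500, computed with at most 2 worker threads
    }
}
```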

3.2.6 force to summarize a Stream

  • What is Stream?

Stream is not a collection or an element of a collection; it is not a data structure and holds no data. It is really an operation framework over a collection, more like an advanced Iterator. Unlike an Iterator, which can only traverse explicitly, one element at a time, a Stream traverses implicitly on the inside: the developer only states the operation intent and supplies the function (i.e., what to do and how to do it), such as "filter out numbers less than 0" or "left-pad each string to 10 characters".

"What to do" is which Stream method you call; "how to do it" is the function you pass to that method, namely a lambda expression!

  • So why is it called Stream?

First, Stream operations form a pipeline. As the earlier code examples show, the whole thing is a pipeline flow: the source and intermediate operations always return a new Stream object, which the next operation continues on, like a relay, until the final operation produces the result.

Second, like an Iterator, the final Terminal operation traverses the data set in one direction, without backtracking; the data can be traversed only once, and once traversed it is finished, irreversibly, like the waters of the Yellow River descending from the sky, rushing to the sea never to return.

Hence the name Stream.

  • What are the characteristics of Stream compared with previous collection operations?

Compared with earlier collection operations (including Iterator), which could only be imperative and serial, Stream has the following characteristics:

  1. Support for functional programming via lambda expressions: the semantics are closer to natural language and the code is easier to read;
  2. Support for chained pipeline operations, which express a lot of traversal logic far more concisely;
  3. Support for a parallel mode, which splits the data into segments, executes them on different threads and merges the outputs, without any explicit multithreaded code.

With so many benefits, aren't you excited? Ecstatic, even?

  • The parallel mode of Stream has its own development history.

Looking at Java's parallel (or multithreaded) programming APIs, their evolution across the major Java versions is roughly:

  1. Java 1 to Java 4: java.lang.Thread
  2. java.util.concurrent, introduced in Java 5 and further enhanced in Java 6
  3. The Fork/Join framework introduced in Java 7
  4. The new Stream parallel mode in Java 8

3.3 interface default method

When we discussed the standard functional interfaces for Lambda expressions, you probably noticed that these interfaces contain implemented methods... What is going on? Doesn't that violate Java's own rule that interfaces have no method implementations?

emmm, yes, it does. Of course there is a reason, which we'll get to later... First, let's see how method implementations in interfaces work.

3.3.1 add default method to interface

Starting with Java 8, you can add a default method to the interface. As follows:

public interface Printer {
    default void print() {
        System.out.println("all birds fly high");
    }

    default void printAnathor() {
        System.out.println("Lonely clouds drift away alone");
    }
}

Classes implementing the interface do not need to override these default implementations:

PrintClass printClass = new PrintClass();
printClass.print();
printClass.printAnathor();
...
class PrintClass implements Printer {
}

Of course, there is no problem if you do want to override the interface's default method.
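A minimal sketch of that: overriding an interface default method works just like overriding any inherited method, and the class's version wins. The names below (Greeter, LoudGreeter) are illustrative, not from the original examples.

```java
// An interface default method can be overridden by an implementing class,
// exactly like an inherited method from a superclass.
interface Greeter {
    default String greet() { return "hello from interface"; }
}

class LoudGreeter implements Greeter {
    @Override
    public String greet() { return "HELLO FROM CLASS"; }
}

public class DefaultOverrideDemo {
    public static void main(String[] args) {
        System.out.println(new LoudGreeter().greet()); // prints "HELLO FROM CLASS"
        System.out.println(new Greeter() {}.greet());  // prints "hello from interface"
    }
}
```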

3.3.2 how to avoid default method conflicts

An interface differs from an abstract class. An abstract class uses inheritance, and Java is single-inheritance, so inherited methods cannot conflict. But once interfaces can carry default methods, conflicts become possible: a Java class can implement multiple interfaces, and when those interfaces declare the same default method, you get a default method conflict.

For example, the Printer2 interface also implements the method print:

public interface Printer2 {
    default void print() {
        System.out.println("Only Jingting mountain");
    }
}

At this time, if a class implements the interface Printer and Printer2 at the same time:

class PrintClass2 implements Printer, Printer2 {
}

A compilation error will occur due to a default method conflict.

How to solve it? We can override the print method in PrintClass2:

class PrintClass2 implements Printer, Printer2 {
    @Override
    public void print() {
        System.out.println("Look at each other");
    }
}

But what if you still want to call the default method of one of the interfaces? For that there is the special syntax Printer2.super.print():

class PrintClass2 implements Printer, Printer2 {
    @Override
    public void print() {
        System.out.println("Look at each other");
        Printer2.super.print();
    }
}

The general rules are as follows:

  1. Classes take precedence over interfaces. If there is a method body or abstract method declaration anywhere in the class inheritance chain, default methods defined in interfaces are ignored.
  2. Subtypes take precedence over supertypes. If one interface extends another and both define the same default method, the sub-interface's version wins.
  3. If neither rule applies, the implementing class must either implement the method itself or declare it abstract.
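Rule 2 can be sketched in a few lines. The names below (Animal, Dog, Puppy) are illustrative: both interfaces define the same default method, yet no conflict arises because the sub-interface's version shadows the parent's.

```java
// Rule 2: when a sub-interface redefines a default method,
// the sub-interface's version wins and no conflict occurs.
interface Animal {
    default String sound() { return "..."; }
}

interface Dog extends Animal {
    @Override
    default String sound() { return "woof"; }
}

// Puppy inherits both defaults, but Dog's shadows Animal's.
class Puppy implements Dog {}

public class DefaultRuleDemo {
    public static void main(String[] args) {
        System.out.println(new Puppy().sound()); // prints "woof"
    }
}
```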

3.3.3 static method implementation

In a Java 8 interface you can write not only default methods but also static methods:

public interface Printer2 {
    default void print() {
        System.out.println("Only Jingting mountain");
    }

    static void printHello(String name) {
        System.out.println("Hello " + name);
    }

    static void printBye(String name) {
        System.out.println("Goodbye " + name);
    }
}

To call one, use the InterfaceName.staticMethod() form:

class PrintClass2 implements Printer, Printer2 {
    @Override
    public void print() {
        System.out.println("Look at each other");
        Printer2.super.print();
    }

    public void helloAndBye() {
        Printer2.printHello("Java8");
        Printer2.printBye("Java8");
    }
}

3.3.4 discussion of interface default method

The default method Java 8 added to interfaces is a feature that drew plenty of conflicting opinions. Critics feel it breaks Java's purity as an object-oriented language and invites confusing, hard-to-maintain method references (like yours truly, before); fans feel it makes Java more flexible and, used within a controlled scope, is genuinely handy (like yours truly now, who finds it actually pretty great...).

Why does Java add default methods for interfaces?

  • One reason Java 8 added default methods to interfaces is to support the Stream API and lambda expressions, for example Collection's stream() method. Imagine if interfaces could not provide defaults: every Collection implementation class would have had to implement stream() itself...
  • The other benefit of default methods is that when extending an interface with a simple new capability, you can add a default method directly instead of adding a new implementation class. New implementation classes can damage the existing inheritance hierarchy, and piling them up can even cause class explosion.

That said, neither approach should be abused, whether adding new implementation classes or adding default methods directly to interfaces. The former can wreck the code's class hierarchy and even cause class explosion; the latter can make method resolution confusing. Both make code hard to maintain. Use, but don't abuse.

At present, I have a little suggestion:

  1. For most application-layer developers, where code changes frequently and people rotate through projects quickly (the so-called assembly-line programmers), don't use interface default methods. Don't dig holes for whoever comes after you!
  2. But if you are doing common-library or framework development, try it when you genuinely need it. After all, we should trust the basic competence of programmers who build shared libraries and frameworks.

3.4 Optional

When we talked about Stream earlier, we saw that some Terminal operations would return an Optional object, and we performed operations like orElse on it.

Optional is a container class new in Java 8, designed to combat NullPointerException; it holds a reference to another object.

It sounds a bit niche, and if you are not used to it, it doesn't seem very useful... until you get the hang of it, and then you'll find it's surprisingly good...

Enough talk, just look at the code.

Before Java 8, our code always needed lots of null checks:

private void printLineOld(String line) {
    if (line != null) {
        System.out.println(line.trim());
    }
}

From Java 8 on, you can use Optional to handle those inelegant null checks gracefully...

First, wrap any object that may or may not be null in an Optional:

// If you are sure that line is not null
Optional<String> line1 = Optional.of(line);
// If line is null, ofNullable is required
Optional<String> empty = Optional.ofNullable(line);

There are other ways to create Optional objects, not covered here. For a variable of unknown nullness, I suggest wrapping it with Optional.ofNullable.

Then, where variables are used, use the Optional object instead:

// Suppose line is an object of type optional < string >
try {
    System.out.println(line.get().trim());
} catch (NoSuchElementException e) {
    System.out.println("Optional.get: if line is null, get() throws NoSuchElementException!");
}
// Execute the incoming lambda expression only when the original object is not null
line.ifPresent(s -> System.out.println(s.trim()));
// With orElse, when the original object is null, the default value passed in by orElse is used
System.out.println(line.orElse(""));
// Using orElseGet, when the original object is null, use the lambda expression passed in by orElseGet
System.out.println(line.orElseGet(() -> "It's natural that I'm useful," + "When the gold is gone, it will come again."));
// With orElseThrow, when the original object is null, an exception defined by itself is thrown
System.out.println(line.orElseThrow(() -> new RuntimeException("You can also throw your own defined exception!")));

Among them:

  • ifPresent: the following lambda expression will be executed only when the object is not null;
  • orElse: if the object is null, return the default value passed in later;
  • orElseGet: if the object is null, execute the lambda expression passed in later to get the return value;
  • orElseThrow: throws a self-defined exception if the object is null.
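Beyond these orElse-style methods, Optional also supports map and filter, which chain naturally. A small sketch under illustrative names (normalize and its inputs are not from the original examples):

```java
import java.util.Optional;

public class OptionalChainDemo {
    // Trim the input and fall back to a default when it is null or blank.
    static String normalize(String raw) {
        return Optional.ofNullable(raw)
                .map(String::trim)          // transform only if a value is present
                .filter(s -> !s.isEmpty())  // treat a blank string as absent
                .orElse("<empty>");         // default for null or blank input
    }

    public static void main(String[] args) {
        System.out.println(normalize("  hello  ")); // prints "hello"
        System.out.println(normalize(null));        // prints "<empty>"
        System.out.println(normalize("   "));       // prints "<empty>"
    }
}
```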

But it should be noted that using Optional requires a proper opening posture...

First look at an incorrect posture:

// Not recommended: Optional as a parameter type; Optional is suited to return types
public void printLine(Optional<String> line) {
    ...
}

Designing method parameters as Optional is not recommended. As the method's provider, how can you guarantee that every caller, of unbounded IQ, will dutifully pass arguments through Optional.ofNullable? You can't stop people from handing you a raw null...

In addition, Optional should not be used for instance fields: it is not serializable and causes problems when used as a field type.

Let's look at the right open position:

private void test02_returnOptional(String line) {
    Optional<String> lineOpt = createLineOptional(line);

    // Execute the lambda expression passed in only if the original object is not null
    lineOpt.ifPresent(s -> System.out.println(s.trim()));
    // With orElse, when the original object is null, the default value passed in by orElse is used
    System.out.println(lineOpt.orElse(""));
    // Using orElseGet, when the original object is null, use the lambda expression passed in by orElseGet
    System.out.println(lineOpt.orElseGet(() -> "It's natural that I'm useful," + "When the gold is gone, it will come again."));
    // With orElseThrow, when the original object is null, an exception defined by itself is thrown
    System.out.println(lineOpt.orElseThrow(() -> new RuntimeException("You can also throw your own defined exception!")));
}

private Optional<String> createLineOptional(String line) {
    // In actual development, there may be more complex logic here to return an object, and the method does not guarantee that the returned object is not null;
    // Therefore, where this method is used, it is necessary to determine whether the return value is null...
    // But if we wrap the return value in Optional, non null judgment can be elegant for the place where the method is called.
    return Optional.ofNullable(line);
}

3.5 Map operation and HashMap performance optimization

When we covered Stream, careful readers will have noticed there is no operation that produces a Stream from a Map. Right: Map has no stream() method, so you cannot obtain a Stream over a Map directly. Since Java still has no tuples, not even pairs, a Map's key-value pairs cannot serve as the element type T of a Stream<T>...

Of course, Java 8's Map provides some new methods to meet the needs of our daily operations.

3.5.1 enhanced Map operation

Let's first see how a set (List or Set) is converted to a Map.

// A collection of poets again
List<Poet> poets = Poet.preparePoets();
// Use Collectors.toMap to collect the Stream's data into a Map
Map<String, Poet> poetMap = poets.stream().collect(Collectors.toMap(Poet::getName, poet -> poet));

There were similar examples in the earlier Stream sample code... Collectors is powerful; get familiar with it...

Next, let's take a look at the new ways Map can be used:

  • forEach
poetMap.forEach((s, poet) -> {
    System.out.printf("%s Live %s Years old. %n", s, poet.getAge());
    System.out.printf("%s evaluate : %s .  %n", s, poet.getEvaluation());
});

This does look a bit like a two-dimensional tuple...

  • putIfAbsent: checks whether the key already exists in the map; if it is absent or mapped to null, puts the given value.
Poet censhen = poetMap.get("Censhen");
if (censhen == null) {
    censhen = new Poet("Censhen", 51, 4);
    poetMap.put("Censhen", censhen);
}
System.out.println(censhen);
// The above code can now use putIfAbsent directly.
poetMap.putIfAbsent("Censhen", new Poet("Censhen", 51, 5));
// Results the evaluation of Cen Shen is still 4 instead of 5, because putIfAbsent will not replace the existing value.
System.out.println(poetMap.get("Censhen"));

Compare the old style with the new one: much more elegant, right? Elegance is combat power; elegance is justice...

  • computeIfPresent: if a non-null value exists for the given key, attempts to compute a new mapping from the key and its current value.
// CEN Shen has joined poetMap
poetMap.computeIfPresent("Censhen", (s, poet) -> new Poet(s, 51,4));
// computeIfPresent will replace the existing value
System.out.println(poetMap.get("Censhen"));
// Meng Haoran has not joined poetMap
poetMap.computeIfPresent("meng haoran", (s, poet) -> new Poet(s, 51,3));
// computeIfPresent only replaces value when the key already exists
System.out.println(poetMap.containsKey("meng haoran"));
  • computeIfAbsent: puts a computed non-null value only when the key is absent
poetMap.computeIfAbsent("meng haoran", s -> new Poet(s, 51,3));
System.out.println(poetMap.get("meng haoran"));

The difference between computeIfAbsent and putIfAbsent lies in the kind of argument they take: one takes a lambda expression that computes the value on demand, the other takes the value itself, evaluated up front.
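The practical consequence of that difference can be sketched as follows: putIfAbsent always evaluates its value argument, even when it ends up unused, while computeIfAbsent only invokes the lambda when the key is actually missing. The names below (expensiveValue, the "k" key) are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class ComputeVsPutDemo {
    static int constructed = 0;

    static String expensiveValue() {
        constructed++;                 // count how often the value is actually built
        return "expensive";
    }

    static int run() {
        Map<String, String> map = new HashMap<>();
        map.put("k", "already here");

        map.putIfAbsent("k", expensiveValue());            // argument evaluated eagerly, then discarded
        map.computeIfAbsent("k", key -> expensiveValue()); // lambda never invoked: key is present

        return constructed;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 1: only putIfAbsent paid the construction cost
    }
}
```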

  • remove(key, value): removes the entry only when both key and value match
poetMap.remove("meng haoran", new Poet("meng haoran", 51,3));
// Removal fails: the newly created Poet is not equal to the stored value
System.out.println(poetMap.containsKey("meng haoran"));
poetMap.remove("meng haoran", poetMap.get("meng haoran"));
// Delete succeeded
System.out.println(poetMap.containsKey("meng haoran"));
  • getOrDefault
System.out.println(poetMap.getOrDefault("meng haoran", new Poet("XX", 20, 1)));
  • merge: inserts the value when the key is absent; when the key exists, merges the old and new values with the given lambda
Map<String, String> lines = new HashMap<>();
lines.merge("Du Fu's famous sentence", "The stars hang flat and the fields are wide,", (value, newValue) -> value.concat(newValue));
System.out.println(lines.get("Du Fu's famous sentence"));
lines.merge("Du Fu's famous sentence", "The moon flows into the river.", String::concat);
System.out.println(lines.get("Du Fu's famous sentence"));

3.5.2 performance optimization of HashMap

Java 8 also optimizes the performance of HashMap.

This section is all about theory. If you are not familiar with the HashMap mechanism, you should go back and make up your own lessons...

Whether we truly know or just pretend to know how HashMap works, here is a brief review (pre-Java 8):

  1. A HashMap in Java stores entries in a Node array. Each key's index in the array is computed from the key's hash value bitwise-ANDed with (array length - 1);
  2. When different keys compute the same index (commonly called a hash collision, although usually the hash values themselves do not collide), their key-value pairs are placed into the linked list of Nodes at that index (yes, Node is a linked-list structure).
  3. The array length is not fixed. The default initial capacity is 16 (2 to the 4th power) and the default load factor is 0.75. When the number of entries exceeds capacity × load factor, HashMap doubles the capacity, i.e. the exponent of 2 goes up by 1, and every entry is re-seated (its index recalculated).
  4. To look up a key's value, HashMap computes the index from the key's hash, then traverses the Node list at that index until it finds the entry for the key.
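The index computation in step 1 can be sketched directly. This mirrors what HashMap does internally (including spreading the high hash bits, as in HashMap's hash() helper), but the method and key names here are illustrative:

```java
public class HashIndexDemo {
    // For a table length n that is a power of two, (hash & (n - 1))
    // keeps only the low bits of the hash: equivalent to hash % n, but cheaper.
    static int indexFor(String key, int capacity) {
        int h = key.hashCode();
        int hash = h ^ (h >>> 16);   // spread high bits into the low bits, as HashMap does
        return hash & (capacity - 1);
    }

    public static void main(String[] args) {
        int capacity = 16;           // HashMap's default initial capacity
        int index = indexFor("Censhen", capacity);
        System.out.println(index >= 0 && index < capacity); // always inside the table
    }
}
```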

Before Java8, there were two main performance bottlenecks of HashMap:

  1. With many index collisions, a lookup must compute the key's index and then walk the entire Node list at that position until the value for the key is found;
  2. Every time the HashMap is resized, the indexes of all elements are recalculated.

In Java 8, these two points are optimized to some extent:

  1. Nodes are no longer always linked lists. When a list grows beyond 8 elements and the Node array's capacity exceeds 64, the list is converted into a red-black tree, a special self-balancing binary tree with balanced read and write performance. (Interested readers can look up binary trees, balanced binary trees and red-black trees...) Why require a capacity above 64? Because with a small capacity, index collisions are likely anyway, so growing the array takes priority over treeifying.
  2. When resizing, the hash of each element is no longer recomputed; because the capacity always doubles, the new index can be derived from the original index with a simple bit operation.

Note that resizing may split a red-black tree back into two linked lists.
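Point 2 above rests on a simple bit identity: when the capacity doubles from n to 2n, an element's new index is either its old index or its old index plus n, depending on one extra hash bit. A small sketch (the key is arbitrary):

```java
public class ResizeIndexDemo {
    // The same high-bit spreading HashMap applies before indexing.
    static int spread(int h) { return h ^ (h >>> 16); }

    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int hash = spread("Du Fu".hashCode());

        int oldIndex = hash & (oldCap - 1);
        int newIndex = hash & (newCap - 1);

        // The new index keeps the old one, or adds exactly oldCap,
        // depending on the single extra hash bit now taken into account.
        System.out.println(newIndex == oldIndex || newIndex == oldIndex + oldCap);
    }
}
```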

3.6 Date API

Java 8's java.time package contains a new set of date and time APIs that are more powerful and safer.

3.6.1 Clock and time zone

  1. The Clock class provides access to the current date and time. Clock is time-zone aware and can replace System.currentTimeMillis() for obtaining the current time in milliseconds. A specific point in time can also be represented by an Instant, which in turn can create a legacy java.util.Date object.
  2. In the new API, time zones are represented by ZoneId, obtained conveniently through its static factory methods. A time zone defines the offset from UTC, which is crucial when converting an Instant to a local date object.
// System Clock object adopts system default time zone
Clock clock = Clock.systemDefaultZone();
System.out.println(clock);

// Current system time in milliseconds
long millis = clock.millis();
System.out.println(millis);

// Get a legacy Date object
Instant instant = clock.instant();
Date legacyDate = Date.from(instant);
System.out.println(legacyDate);

// Get available time zones
System.out.println(ZoneId.getAvailableZoneIds());
// Get the specified time zone
ZoneId zoneSh = ZoneId.of("Asia/Shanghai");
System.out.println(zoneSh.getRules());
ZoneId zoneTk = ZoneId.of("Asia/Tokyo");
System.out.println(zoneTk.getRules());
ZoneId zoneNy = ZoneId.of("America/New_York");
System.out.println(zoneNy.getRules());

3.6.2 LocalTime, LocalDate and LocalDateTime

LocalTime, LocalDate and LocalDateTime are new date APIs in Java 8. They have the following characteristics:

  1. Immutable, and therefore thread-safe;
  2. Formatting via DateTimeFormatter, which is also thread-safe;
  3. More convenient time-difference arithmetic with Duration, ChronoUnit and friends;
  4. More convenient access to the system's current time;
  5. And so on...
  • LocalTime represents a time without time-zone information, such as 10 p.m. or 17:30:15.
// LocalTime has no date, month or time-zone information, only hours, minutes, seconds and smaller units
LocalTime localTimeNowDefault = LocalTime.now(ZoneId.systemDefault());
System.out.println(localTimeNowDefault);
LocalTime localTimeNowTk = LocalTime.now(ZoneId.of("Asia/Tokyo"));
System.out.println(localTimeNowTk);
// Calculate time difference
long hoursBetween = ChronoUnit.HOURS.between(localTimeNowDefault, localTimeNowTk);
System.out.println(hoursBetween);
long minutesBetween = ChronoUnit.MINUTES.between(localTimeNowDefault, localTimeNowTk);
System.out.println(minutesBetween);
// Get a LocalTime at any time
LocalTime late = LocalTime.of(23, 59, 59);
System.out.println(late);
// Parse a string into a LocalTime using a formatter (LocalTime only holds hour-and-below fields, so the localized style is limited to FormatStyle.SHORT)
DateTimeFormatter dtf_localtime = DateTimeFormatter.ofLocalizedTime(FormatStyle.SHORT)
        .withLocale(Locale.GERMAN);
LocalTime leetTime = LocalTime.parse("13:37", dtf_localtime);
System.out.println(leetTime);
  • LocalDate represents an exact date, such as 2014-03-11.
// LocalDate
LocalDate today = LocalDate.now();
LocalDate tomorrow = today.plus(1, ChronoUnit.DAYS);
LocalDate yesterday = tomorrow.minusDays(2);
LocalDate new_year_day = LocalDate.of(2020, Month.JANUARY, 1);
DayOfWeek dayOfWeek = new_year_day.getDayOfWeek();
System.out.printf("Today is%s,Tomorrow is%s,Yesterday was%s,New year's Day is%s,%s.  %n", today, tomorrow, yesterday, new_year_day, dayOfWeek);
// format
DateTimeFormatter dtf_localdate = DateTimeFormatter.ofLocalizedDate(FormatStyle.MEDIUM).withLocale(Locale.GERMAN);
LocalDate children_day = LocalDate.parse("01.06.2020", dtf_localdate);
System.out.println(children_day);
  • LocalDateTime represents both time and date
// LocalDateTime date plus time
LocalDateTime now = LocalDateTime.now();
System.out.println(now);
LocalDateTime laborDay = LocalDateTime.of(2020, Month.MAY, 1, 14, 41, 3);
System.out.println(laborDay);
System.out.println(laborDay.getDayOfWeek());
System.out.println(laborDay.getMonth());
System.out.println(laborDay.getLong(ChronoField.MINUTE_OF_DAY));
// Convert to a Date via an Instant point in time
Instant laborInstant = laborDay.atZone(ZoneId.systemDefault()).toInstant();
Date laborDate = Date.from(laborInstant);
System.out.println(laborDate);

// Custom formatting
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
String strNow = formatter.format(LocalDateTime.now());
System.out.println(strNow);
LocalDateTime ldtNow = LocalDateTime.parse(strNow, formatter);
System.out.println(ldtNow);

// Calculate time difference
System.out.println(ChronoUnit.DAYS.between(ldtNow, laborDay));
System.out.println(Duration.between(ldtNow, laborDay).toDays());

3.7 CompletableFuture

Before Java 8, in multithreaded development, if the main thread needed a child thread to finish before moving on to the next step, it could only block and wait, whether by calling the child thread's join method or by calling Future's get method.

Java 8 adds the new CompletableFuture class. Combined with lambda expressions, it lets you hand a callback function to the child thread to be invoked after the child thread finishes.

Let's take a simple example:

CompletableFuture<String> completableFuture = CompletableFuture.supplyAsync(() -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return "The moon rises from the Tianshan Mountains and the vast sea of clouds.";
});
completableFuture.thenApply(s -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return s.concat("\n").concat("The wind blows for tens of thousands of miles to pass Yumen pass.");
}).thenApply(s -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return s.concat("\n").concat("In the Han Dynasty, you can go down Baideng road and have a glimpse of Qinghai Bay.");
}).thenApply(s -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return s.concat("\n").concat("There is no one to return it.");
}).thenApply(s -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return s.concat("\n").concat("When a garrison looks at a border town, it is hard for him to return home.");
}).thenApply(s -> {
    try {
        Thread.sleep(1000);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return s.concat("\n").concat("When this night is high, sigh is not idle.");
}).thenAccept(System.out::println);

System.out.println("GuanShanYue Tang Libai");
try {
    Thread.sleep(8000);
} catch (InterruptedException e) {
    e.printStackTrace();
}
System.out.println("==================");

In this example, CompletableFuture.supplyAsync starts a child thread that asynchronously executes the lambda expression passed in, and returns a CompletableFuture object. supplyAsync has two overloads: one takes a single parameter, as in the example above; the other takes two, the child thread's logic (a lambda expression) and a thread pool object. When no thread pool is supplied, the default pool is used (on multi-core machines, the common Fork/Join pool).

The CompletableFuture object's thenApply method registers a callback function that is invoked after the child thread finishes. The callback takes the child thread's return value as input and returns its own processing result. As you can see, when several callbacks are chained through consecutive thenApply calls, they are invoked serially.

The callback passed to the CompletableFuture's thenAccept only receives the child thread's result and returns nothing.

Ending a chain of thenApply calls with a thenAccept is a common usage.

Let's take another example:

CompletableFuture<Double> futurePrice = CompletableFuture.supplyAsync(() -> {
    try {
        Thread.sleep((long) (Math.random() * 1000));
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    double price = Math.random() * 100;
    System.out.println("Price is " + price);
    return price;
});
CompletableFuture<Integer> futureCount = CompletableFuture.supplyAsync(() -> {
    try {
        Thread.sleep((long) (Math.random() * 1000));
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    int count = (int) (Math.random() * 100);
    System.out.println("Count is " + count);
    return count;
});
CompletableFuture<Double> futureTotal = futurePrice.thenCombine(futureCount, (price, count) -> price * count);
futureTotal.thenAccept(total -> System.out.println("Total is " + total));

System.out.println("How long does it take... What should I do...");
try {
    Thread.sleep(3000);
} catch (InterruptedException e) {
    e.printStackTrace();
}

In this example, we need to compute price and count first, and then multiply them to get the total price. Assume that computing each of price and count takes some time and the two are independent of each other.

So we first run the price and count child threads asynchronously, and then use the thenCombine method to express this logic: once both child threads have finished, invoke the callback with their return values as parameters.

In this way, we can achieve the logic of calling back after the two sub threads are finished, and at the same time, the main thread still does what it should do without blocking.

CompletableFuture provides many more methods. Having understood the examples above, you can explore what the rest do and which scenarios they suit:

  • Static methods for creating a CompletableFuture: supplyAsync and runAsync. We used supplyAsync above, which runs a child thread with a return value; runAsync runs a child thread without one. Both have overloads taking a thread-pool parameter.
  • Instance methods on a CompletableFuture for registering callbacks: thenAccept, thenApply, thenRun; thenCombine, thenAcceptBoth, runAfterBoth; applyToEither, acceptEither, runAfterEither; exceptionally; whenComplete, handle, and so on.
  • Static methods for combining CompletableFuture objects: allOf, anyOf.
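As a taste of the combining methods, here is a minimal sketch of allOf: wait until several futures are all done, then read each result. join() is used for brevity; real code would handle CompletionException. The names and values are illustrative.

```java
import java.util.concurrent.CompletableFuture;

public class AllOfDemo {
    static int sumOfThree() {
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 1);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 2);
        CompletableFuture<Integer> c = CompletableFuture.supplyAsync(() -> 3);

        // allOf yields a CompletableFuture<Void>; once it completes,
        // each individual join() returns immediately with its value.
        return CompletableFuture.allOf(a, b, c)
                .thenApply(v -> a.join() + b.join() + c.join())
                .join();
    }

    public static void main(String[] args) {
        System.out.println(sumOfThree()); // prints 6
    }
}
```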

3.8 other new features

Java 8 has many other new features, such as repeatable annotations, Arrays.parallelSort, StampedLock and so on. They are not introduced one by one here; look them up as needed.

4, New features of Java9~Java11

Because Java 9 and Java 10 are both transitional versions, we take Java 11 (the first LTS version after Java 8) as the baseline and discuss together the new features from 9 through 11 that affect development.

Compared with Java 8, Java 11 has few new features in syntax. It mainly includes:

  • Local variable type inference
  • HttpClient
  • Collection enhancements
  • Stream enhancement
  • Optional enhancements
  • String enhancement
  • InputStream enhancements

4.1 local variable type inference

Starting with Java 10, you can define a local variable with var, without writing its type explicitly. Note, however, that variables defined with var are still statically typed; the compiler infers their types.

String strBeforeJava10 = "strBeforeJava10";
var strFromJava10 = "strFromJava10";
System.out.println(strBeforeJava10);
System.out.println(strFromJava10);

Therefore, pay attention to:

  • A var variable cannot be reassigned a value of an incompatible type!
// For example, the following statement will fail to compile, "InCompatible types."
strFromJava10 = 10;
  • Whenever the compiler cannot infer the variable's type, compilation fails!
// For example, none of the following can be compiled:
var testVarWithoutInitial;
var testNull = null;
var testLamda = () -> System.out.println("test");
var testMethodByLamda = () -> giveMeString();
var testMethod2 = this::giveMeString;

The recommended scenarios for type inference are:

  • Simplify generic declaration
// As shown below, the ArrayList<Map<String, List<Integer>>> declaration collapses to a single var keyword
var testList = new ArrayList<Map<String, List<Integer>>>();
for (var curEle : testList) {
    // curEle's type is inferred as Map<String, List<Integer>>
    if (curEle != null) {
        curEle.put("test", new ArrayList<>());
    }
}
  • lambda parameter
// Starting with Java 11, the lambda parameter also allows the var keyword:
Predicate<String> predNotNull = (var a) -> a != null && a.trim().length() > 0;
String strAfterFilter = Arrays.stream((new String[]{"a", "", null, "x"}))
        .filter(predNotNull)
        .collect(Collectors.joining(","));
System.out.println(strAfterFilter);

4.2 HttpClient

Java 9 introduced the HttpClient API for handling HTTP requests; starting with Java 11 the API officially entered the standard library. Reference: http://openjdk.java.net/groups/net/httpclient/intro.html

HttpClient has the following features:

  1. Supports both HTTP/1.1 and HTTP/2, as well as WebSocket
  2. Supports both synchronous and asynchronous programming models
  3. Treats request and response bodies as reactive streams, and uses the builder pattern

HttpClient

To send an HTTP request, first create an HttpClient using its builder. The builder can configure per-client state:

  • Preferred protocol version (HTTP/1.1 or HTTP/2)
  • Whether to follow redirects
  • Proxy
  • Authentication

Once the build is complete, you can use HttpClient to send multiple requests.
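The configuration points above correspond to methods on HttpClient.Builder. A minimal sketch (the client is only built, nothing is sent, so it runs offline; the proxy and authenticator lines are commented out since they need real infrastructure):

```java
import java.net.http.HttpClient;
import java.time.Duration;

public class HttpClientBuilderDemo {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_1_1)        // preferred protocol version
                .followRedirects(HttpClient.Redirect.NORMAL) // follow redirects (except https -> http)
                .connectTimeout(Duration.ofSeconds(5))       // connection timeout
                // .proxy(ProxySelector.of(...))             // proxy, if one is needed
                // .authenticator(Authenticator.getDefault())// authentication, if needed
                .build();
        System.out.println(client.version());         // HTTP_1_1
        System.out.println(client.followRedirects()); // NORMAL
    }
}
```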

HttpRequest

HttpRequest is also created via a builder, which can set:

  • Request URI
  • Request method (GET, PUT, POST)
  • Request body (if any)
  • Timeout
  • Request headers

Once built, an HttpRequest is immutable, but it can be sent multiple times.
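A small sketch of the request builder covering the settings above; the localhost URL is just a placeholder, and since the request is only built (not sent), it does not need to exist:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class HttpRequestBuilderDemo {
    public static void main(String[] args) {
        // Only builds the request; nothing is sent over the network.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:30001/jdk11/test/hello")) // request URI
                .timeout(Duration.ofSeconds(10))                            // timeout
                .header("Accept", "text/plain")                             // request header
                .POST(HttpRequest.BodyPublishers.ofString("zhangsan"))      // method + body
                .build();
        System.out.println(request.method());              // POST
        System.out.println(request.timeout().isPresent()); // true
    }
}
```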

Synchronous or Asynchronous

Requests can be sent synchronously or asynchronously. The synchronous API blocks the calling thread until the HttpResponse is available. The asynchronous API returns a CompletableFuture immediately; when the HttpResponse becomes available, the future completes and subsequent processing runs.

CompletableFuture was added in Java 8 for composable asynchronous programming.

Data as reactive-streams

The request and response bodies are exposed as reactive streams (asynchronous streams of data with non-blocking backpressure). The HttpClient is effectively a subscriber of the request body and a publisher of the response body bytes. The BodyHandler interface lets you inspect the response code and headers before the actual response body arrives, and is responsible for creating the response's BodySubscriber.

HttpRequest and HttpResponse provide many convenient factory methods for creating request publishers and response subscribers that handle common body types such as files, strings, and byte arrays. These convenience implementations either accumulate data until a higher-level Java type (such as String) can be created, or stream the data onward, for example into a file. The BodySubscriber and BodyPublisher interfaces can be implemented to process the data as a custom reactive stream.
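A small offline sketch of a few of these convenience factories on the request side; the response-side BodyHandlers (ofString, ofFile, ofByteArray, ...) follow the same pattern:

```java
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;

public class BodyPublisherDemo {
    public static void main(String[] args) {
        // Convenience factories for common body types: strings, byte arrays, no body.
        HttpRequest.BodyPublisher fromString = HttpRequest.BodyPublishers.ofString("hello");
        HttpRequest.BodyPublisher fromBytes  = HttpRequest.BodyPublishers
                .ofByteArray("hello".getBytes(StandardCharsets.UTF_8));
        HttpRequest.BodyPublisher empty      = HttpRequest.BodyPublishers.noBody();
        // A BodyPublisher knows its content length (-1 would mean "unknown").
        System.out.println(fromString.contentLength()); // 5
        System.out.println(fromBytes.contentLength());  // 5
        System.out.println(empty.contentLength());      // 0
    }
}
```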

HttpRequest and HttpResponse also provide adapters for converting between java.util.concurrent.Flow's Publisher/Subscriber types and the HTTP Client's BodyPublisher/BodySubscriber types.

HTTP/2

The Java HTTP Client supports both HTTP/1.1 and HTTP/2. By default, the client sends requests using HTTP/2; requests to servers that do not yet support HTTP/2 are automatically downgraded to HTTP/1.1. The main improvements HTTP/2 brings:

  • Header compression. HTTP/2 uses HPACK compression to reduce overhead.
  • A single connection to the server reduces the number of round trips required to establish multiple TCP connections.
  • Multiplexing. Multiple requests are allowed at the same time on the same connection.
  • Server push. Other resources needed in the future can be sent to the client.
  • Binary format. More compact.

Because HTTP/2 is the default preferred protocol and fallback to HTTP/1.1 happens seamlessly where needed, applications using the Java HTTP Client will not need code changes as HTTP/2 deployment becomes more widespread.

API documentation

https://docs.oracle.com/en/ja...

Demo code

The URIs under localhost:30001 requested in the code below are served by the sample project at https://github.com/zhaochuninhefei/study-czhao/tree/master/jdk11-test .

package jdk11;

import com.fasterxml.jackson.databind.ObjectMapper;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.WebSocket;
import java.time.LocalDateTime;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;

/**
 * HttpClient
 *
 * @author zhaochun
 */
public class TestCase02HttpClient {
    public static void main(String[] args) throws Exception {
        TestCase02HttpClient me = new TestCase02HttpClient();
        me.testHttpClientGetSync();
        me.testHttpClientGetAsync();
        me.testHttpClientPost();

        // The same HttpClient first logs in to the website to obtain the token, and then requests the restricted resources, so as to crawl the resources requiring authentication
        me.testLogin();

        // HttpClient supports websocket
        me.testWebsocket();
    }

    private void testHttpClientGetSync() {
        var url = "https://openjdk.java.net/";
        var request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .GET()
                .build();
        var client = HttpClient.newHttpClient();
        try {
            System.out.println(String.format("send begin at %s", LocalDateTime.now()));
            // Synchronous request
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(String.format("send end at %s", LocalDateTime.now()));
            System.out.println(String.format("receive response : %s", response.body().substring(0, 10)));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void testHttpClientGetAsync() {
        var url = "https://openjdk.java.net/";
        var request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .GET()
                .build();
        var client = HttpClient.newHttpClient();
        try {
            System.out.println(String.format("sendAsync begin at %s", LocalDateTime.now()));
            // Asynchronous request
            client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                    .thenApply(stringHttpResponse -> {
                        System.out.println(String.format("receive response at %s", LocalDateTime.now()));
                        return stringHttpResponse.body();
                    })
                    .thenAccept(s -> System.out.println(String.format("receive response : %s at %s", s.substring(0, 10), LocalDateTime.now())));
            System.out.println(String.format("sendAsync end at %s", LocalDateTime.now()));

            // To prevent asynchronous requests from ending before returning to the main thread (jvm will exit), let the main thread sleep for 10 seconds
            System.out.println("Main Thread sleep 10 seconds start...");
            Thread.sleep(10000);
            System.out.println("Main Thread sleep 10 seconds stop...");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void testHttpClientPost() {
        var url = "http://localhost:30001/jdk11/test/helloByPost";
        var request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString("zhangsan"))
                .build();
        var client = HttpClient.newHttpClient();
        try {
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private void testLogin() throws Exception {
        var client = HttpClient.newHttpClient();
        // A test environment user login URL
        var urlLogin = "http://x.x.x.x:xxxx/xxx/login";
        var requestObj = new HashMap<String, Object>();
        requestObj.put("username", "xxxxxx");
        requestObj.put("password", "xxxxxxxxxxxxxxxx");
        var objectMapper = new ObjectMapper();
        var requestBodyJson = objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(requestObj);
        var requestLogin = HttpRequest.newBuilder()
                .uri(URI.create(urlLogin))
                .header("Content-Type", "application/json;charset=UTF-8")
                .POST(HttpRequest.BodyPublishers.ofString(requestBodyJson))
                .build();
        HttpResponse<String> responseLogin = client.send(requestLogin, HttpResponse.BodyHandlers.ofString());
        // The login site here uses token instead of session, so we need to find token information from the returned message body;
        // If you are a website using session, you need to find "set Cookie" from the headers of the response to get the session id, and set sid to the Cookie of the header in the subsequent request.
        // For example: responseLogin.headers (). Map(). Get ("set cookie") gets cookies and looks for sid from them.
        var loginResponse = responseLogin.body();
        var mpLoginResponse = objectMapper.readValue(loginResponse, Map.class);
        var dataLogin = (Map<String, Object>) mpLoginResponse.get("data");
        var token = dataLogin.get("token").toString();
        // Test environment get the URL of a resource
        var urlGetResource = "http://xxxx:xxxx/xxx/resource";
        var requestRes = HttpRequest.newBuilder()
                .uri(URI.create(urlGetResource))
                .header("Content-Type", "application/json;charset=UTF-8")
                // Note that the token is not always set in the Authorization of the header, which depends on the way the website is verified. It is also possible that the token is also put in the cookie.
                // But for websites using session, sid is set in the cookie. For example,. header("Cookie", "JSESSIONID=" + sid)
                .header("Authorization", token)
                .GET()
                .build();
        HttpResponse<String> responseResource = client.send(requestRes, HttpResponse.BodyHandlers.ofString());
        var response = responseResource.body();
        System.out.println(response);
    }

    private void testWebsocket() {
        var wsUrl = "ws://localhost:30001/ws/test";
        var httpClient = HttpClient.newHttpClient();
        WebSocket websocketClient = httpClient.newWebSocketBuilder()
                .buildAsync(URI.create(wsUrl), new WebSocket.Listener() {
                    @Override
                    public void onOpen(WebSocket webSocket) {
                        System.out.println("onOpen : webSocket opened.");
                        webSocket.request(1);
                    }

                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        System.out.println("onText");
                        webSocket.request(1);
                        return CompletableFuture.completedFuture(data)
                                .thenAccept(System.out::println);
                    }

                    @Override
                    public CompletionStage<?> onClose(WebSocket webSocket, int statusCode, String reason) {
                        System.out.println("ws closed with status(" + statusCode + "). cause:" + reason);
                        webSocket.sendClose(statusCode, reason);
                        return null;
                    }

                    @Override
                    public void onError(WebSocket webSocket, Throwable error) {
                        System.out.println("error: " + error.getLocalizedMessage());
                        webSocket.abort();
                    }
                }).join();

        try {
            TimeUnit.SECONDS.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        // The last parameter indicates whether this call carries the final part of a complete message.
        // With false, the text is sent as a partial message and the receiving side typically buffers it;
        // once a call with true arrives, the buffered parts and the current data are joined into one
        // complete message and delivered to the peer's listener.
        websocketClient.sendText("test1", false);
        websocketClient.sendText("test2", true);

        try {
            TimeUnit.SECONDS.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        websocketClient.sendText("org_all_request", true);

        try {
            TimeUnit.SECONDS.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        websocketClient.sendText("employee_all_request", true);

        try {
            TimeUnit.SECONDS.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }

        websocketClient.sendClose(WebSocket.NORMAL_CLOSURE, "Happy ending.");
    }
}

4.3 Collection enhancement

List, Set, and Map gained new static factory methods: of (Java 9) and copyOf (Java 10).

Of and copyOf of List

List.of creates a new immutable List from the given elements; List.copyOf creates an immutable copy of the List passed in.

var listImmutable = List.of("a", "b", "c");
var listImmutableCopy = List.copyOf(listImmutable);

Because the source collection is itself immutable, copyOf does not actually create a new object; it simply returns the original immutable instance.

// Result is true
System.out.println(listImmutable == listImmutableCopy);
// Immutable objects cannot be modified
try {
    listImmutable.add("d");
} catch (Throwable t) {
    System.out.println("listImmutable can not be modified!");
}
try {
    listImmutableCopy.add("d");
} catch (Throwable t) {
    System.out.println("listImmutableCopy can not be modified!");
}

If you want a mutable collection with the same contents, pass the immutable collection directly to the constructor of a mutable collection class.

var listVariable = new ArrayList<>(listImmutable);
var listVariableCopy = List.copyOf(listVariable);

The newly created mutable list is of course a new object, and an immutable copy taken from it is also a new object, not the earlier immutable list.

System.out.println(listVariable == listImmutable); // false
System.out.println(listVariable == listVariableCopy); // false
System.out.println(listImmutable == listVariableCopy); // false
// Of course, the new mutable list can be modified
try {
    listVariable.add("d");
} catch (Throwable t) {
    System.out.println("listVariable can not be modified!");
}
// A copy of a mutable list is still immutable
try {
    listVariableCopy.add("d");
} catch (Throwable t) {
    System.out.println("listVariableCopy can not be modified!");
}

Of and copyOf of Set

Set's of and copyOf are similar to List.

var set = Set.of("a", "c", "r", "e");
var setCopy = Set.copyOf(set);
System.out.println(set == setCopy);

Note, however, that when you create an immutable Set with of, the elements must not repeat, or the runtime throws "java.lang.IllegalArgumentException: duplicate element":

try {
    var setErr = Set.of("a", "b", "a");
} catch (Throwable t) {
    t.printStackTrace();
}

Of course, adding a duplicate element to a mutable Set does not throw an exception; the duplicate is simply discarded:

var setNew = new HashSet<>(set);
setNew.add("c");
System.out.println(setNew.toString());

Of and copyOf of Map

Map's of and copyOf are similar to those of List and Set. Note that the parameter list of the of method alternates keys and values:

var map = Map.of("a", 1, "b", 2);
var mapCopy = Map.copyOf(map);
System.out.println(map == mapCopy);

Of course, when creating an immutable Map, keys cannot repeat:

try {
    var mapErr = Map.of("a", 1, "b", 2, "a", 3);
} catch (Throwable t) {
    t.printStackTrace();
}
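One practical limit worth knowing (not shown above): the fixed-arity Map.of overloads stop at 10 key-value pairs; beyond that, Map.ofEntries with Map.entry is the intended alternative. A small sketch:

```java
import java.util.Map;

public class MapOfEntriesDemo {
    public static void main(String[] args) {
        // Map.of has fixed-arity overloads for at most 10 pairs;
        // Map.ofEntries accepts any number of Map.entry(...) arguments.
        var bigMap = Map.ofEntries(
                Map.entry("a", 1),
                Map.entry("b", 2),
                Map.entry("c", 3));
        System.out.println(bigMap.size()); // 3
    }
}
```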

4.4 Stream enhancement

Java 8 introduced Stream; later releases (mainly Java 9) added some extensions:

  • Stream.ofNullable: build a Stream directly from a single, possibly null element
  • dropWhile and takeWhile
  • An overloaded iterate method that can bound an otherwise infinite stream

Build a Stream directly from a single element

Note the difference between null and '':

long size1 = Stream.ofNullable(null).count();
System.out.println(size1); // 0
long size2 = Stream.ofNullable("").count();
System.out.println(size2); // 1
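A typical use of ofNullable is wrapping calls that may return null, such as Map.get, so that absent values simply vanish from the stream instead of causing a NullPointerException. A small sketch (the config map and keys are made up for illustration):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class OfNullableDemo {
    public static void main(String[] args) {
        Map<String, String> config = Map.of("host", "localhost", "port", "30001");
        // Map.get returns null for missing keys; ofNullable turns null into an
        // empty stream, so the missing "user" key just disappears.
        List<String> values = Stream.of("host", "user", "port")
                .flatMap(key -> Stream.ofNullable(config.get(key)))
                .collect(Collectors.toList());
        System.out.println(values); // [localhost, 30001]
    }
}
```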

dropWhile and takeWhile

dropWhile: on an ordered stream, drops elements from the beginning while they match the predicate, and stops dropping at the first element that does not match:

List<Integer> lst1 = Stream.of(1, 2, 3, 4, 5, 4, 3, 2, 1)
        .dropWhile(e -> e < 3)
        .collect(Collectors.toList());
System.out.println(lst1); // [3, 4, 5, 4, 3, 2, 1]

takeWhile: on an ordered stream, keeps elements from the beginning while they match the predicate, and stops at the first element that does not match:

List<Integer> lst2 = Stream.of(1, 2, 3, 4, 5, 4, 3, 2, 1)
        .takeWhile(e -> e < 3)
        .collect(Collectors.toList());
System.out.println(lst2); // [1, 2]
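It helps to contrast takeWhile with filter: filter tests every element wherever it occurs, while takeWhile stops at the first failing element. A quick comparison on the same data:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WhileVsFilterDemo {
    public static void main(String[] args) {
        // filter removes every element that fails the predicate, wherever it occurs;
        // takeWhile stops at the first failing element and ignores everything after it.
        List<Integer> filtered = Stream.of(1, 2, 3, 4, 5, 4, 3, 2, 1)
                .filter(e -> e < 3)
                .collect(Collectors.toList());
        List<Integer> taken = Stream.of(1, 2, 3, 4, 5, 4, 3, 2, 1)
                .takeWhile(e -> e < 3)
                .collect(Collectors.toList());
        System.out.println(filtered); // [1, 2, 2, 1]
        System.out.println(taken);    // [1, 2]
    }
}
```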

Even if the remaining elements are collected into an unordered Set, the stream itself is still ordered when dropWhile runs, so the result includes the trailing "a2" and "a1" from the original stream:

Set<String> set1 = Stream.of("a1", "a2", "a3", "a4", "a5", "a4", "a3", "a2", "a1")
        .dropWhile(e -> "a3".compareTo(e) > 0)
        .collect(Collectors.toSet());
System.out.println(set1); // [a1, a2, a3, a4, a5]

If you instead start from a HashSet (unordered, no duplicates), note that "unordered" really means the iteration order is not guaranteed; at any given moment there still is some order.
So dropWhile is evaluated against the set's current iteration order: it drops elements until the first one that fails the predicate, then stops.

Set<String> set = new HashSet<>();
for (int i = 1; i <= 100 ; i++) {
    set.add("test" + i);
}
System.out.println(set);
Set<String> setNew = set.stream()
        .dropWhile(s -> "test60".compareTo(s) > 0)
        .collect(Collectors.toSet());
System.out.println(setNew);

The overload iterate method is used to limit the infinite flow range

Java 8 lets you create an infinite stream, such as the following sequence: the first term is 1, and each subsequent term is the previous term * 2 + 1. Use limit to bound the stream's length:

Stream<Integer> streamInJava8 = Stream.iterate(1, t -> 2 * t + 1);
// Print the first ten terms of the sequence: 1,3,7,15,31,63,127,255,511,1023
System.out.println(streamInJava8.limit(10).map(Object::toString).collect(Collectors.joining(",")));

Starting with Java 9, the iterate method accepts a predicate, for example to stop before reaching 1000:

Stream<Integer> streamFromJava9 = Stream.iterate(1, t -> t < 1000, t -> 2 * t + 1);
// The result printed here is 1,3,7,15,31,63,127,255,511
System.out.println(streamFromJava9.map(Objects::toString).collect(Collectors.joining(",")));
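The three-argument iterate is essentially the two-argument form bounded by takeWhile; the following sketch checks that both produce the same sequence:

```java
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class IterateDemo {
    public static void main(String[] args) {
        // Java 9's three-argument iterate: seed, hasNext predicate, next function.
        String withPredicate = Stream.iterate(1, t -> t < 1000, t -> 2 * t + 1)
                .map(String::valueOf)
                .collect(Collectors.joining(","));
        // Roughly equivalent: an infinite iterate bounded by takeWhile.
        String withTakeWhile = Stream.iterate(1, t -> 2 * t + 1)
                .takeWhile(t -> t < 1000)
                .map(String::valueOf)
                .collect(Collectors.joining(","));
        System.out.println(withPredicate);                       // 1,3,7,15,31,63,127,255,511
        System.out.println(withPredicate.equals(withTakeWhile)); // true
    }
}
```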

4.5 Optional enhancements

You can convert an Optional object directly to a stream

Optional.of("Hello openJDK11").stream()
        .flatMap(s -> Arrays.stream(s.split(" ")))
        .forEach(System.out::println);

You can provide a fallback Optional for an empty Optional:

System.out.println(Optional.empty()
        .or(() -> Optional.of("default"))
        .get());
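Unlike orElse, or stays inside Optional, so the chain can continue with map, filter, and so on; Java 11 also added isEmpty as the counterpart of isPresent. A small sketch:

```java
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        Optional<String> empty = Optional.empty();
        // orElse unwraps to a plain value; or stays inside Optional,
        // which keeps further chaining possible.
        String value = empty.or(() -> Optional.of("default"))
                .map(String::toUpperCase)
                .orElse("none");
        System.out.println(value); // DEFAULT
        // Java 11 added isEmpty as the counterpart of isPresent.
        System.out.println(empty.isEmpty()); // true
    }
}
```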

4.6 String enhancement

For String, several new methods were added, mostly for handling whitespace characters (spaces, tabs, carriage returns, line feeds, etc.).

isBlank

Determines whether the string is empty or consists only of whitespace. All of the following print true:

// Half-width space
System.out.println(" ".isBlank());
// Full-width (ideographic) space
System.out.println("　".isBlank());
// Unicode escape for the half-width space
System.out.println("\u0020".isBlank());
// Unicode escape for the full-width space
System.out.println("\u3000".isBlank());
// Tab
System.out.println("\t".isBlank());
// Carriage return
System.out.println("\r".isBlank());
// Line feed
System.out.println("\n".isBlank());
// Several whitespace characters concatenated
System.out.println(" \t\r\n ".isBlank());
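Note the difference from the older isEmpty, which only checks the length:

```java
public class IsBlankDemo {
    public static void main(String[] args) {
        // isEmpty only checks length() == 0; isBlank also treats
        // whitespace-only strings as "blank".
        System.out.println("".isEmpty());     // true
        System.out.println("".isBlank());     // true
        System.out.println(" \t ".isEmpty()); // false
        System.out.println(" \t ".isBlank()); // true
    }
}
```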

strip, stripLeading and stripTrailing

Remove leading and trailing whitespace:

// space + tab + carriage return + line feed + space + <content> + space + tab + carriage return + line feed + space
var strTest = " \t\r\n Hello jdk11 \t\r\n ";

// strip removes whitespace from both ends
System.out.println("[" + strTest.strip() + "]");
// stripLeading removes leading whitespace
System.out.println("[" + strTest.stripLeading() + "]");
// stripTrailing removes trailing whitespace
System.out.println("[" + strTest.stripTrailing() + "]");
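How does strip differ from the old trim? trim only removes characters up to U+0020, while strip uses Character.isWhitespace and therefore also handles Unicode whitespace such as the full-width (ideographic) space U+3000:

```java
public class StripVsTrimDemo {
    public static void main(String[] args) {
        // trim only removes characters <= U+0020; strip uses
        // Character.isWhitespace, which also matches U+3000.
        var str = "\u3000 hello \u3000";
        System.out.println("[" + str.trim() + "]");  // full-width spaces survive trim
        System.out.println("[" + str.strip() + "]"); // [hello]
    }
}
```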

repeat

Repeats the string's content and returns the concatenated new string:

var strOri = "jdk11";
var str1 = strOri.repeat(1);
var str2 = strOri.repeat(3);
System.out.println(str1);
System.out.println(str2);
// When repeat is called with 1, no new String object is created; the original String is returned directly.
System.out.println(str1 == strOri);

lines

The lines method splits a string at \r, \n, or \r\n and returns a Stream of the resulting lines:

var strContent = "hello java\rhello jdk11\nhello world\r\nhello everyone";
// lines splits the string at \r, \n, or \r\n and returns a Stream<String>
strContent.lines().forEach(System.out::println);
System.out.println(strContent.lines().count());

4.7 InputStream enhancements

InputStream provides a new method (added in Java 9), transferTo, which copies the input stream directly to an output stream:

inputStream.transferTo(outputStream);

Full sample code

package jdk11;

import java.io.*;

/**
 * InputStream enhance
 *
 * @author zhaochun
 */
public class TestCase07InputStream {
    public static void main(String[] args) {
        TestCase07InputStream me = new TestCase07InputStream();
        me.test01_transferTo();
    }

    private void test01_transferTo() {
        var filePath = "/home/work/sources/test/jdk11-test/src/main/resources/application.yml";
        var tmpFilePath = "/home/work/sources/test/jdk11-test/src/main/resources/application.yml.bk";

        File tmpFile = new File(tmpFilePath);
        if (tmpFile.exists() && tmpFile.isFile()) {
            tmpFile.delete();
        }

        try(InputStream inputStream = new FileInputStream(filePath);
            OutputStream outputStream = new FileOutputStream(tmpFilePath)) {
            // transferTo transfers data from InputStream directly to OutputStream
            inputStream.transferTo(outputStream);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

4.8 other new features

There are other new features from Java 9 to Java 11, such as modular development, REPL interactive programming (JShell), direct execution of single-file source programs, and new garbage collectors. Their impact on everyday development is relatively small; interested readers can refer to the author's other articles.

Tags: Java Lambda Programming JDBC

Posted on Wed, 24 Jun 2020 04:07:41 -0400 by fuzz01