A complete guide to Java 8 Stream features

Streams

Key points:

  1. The traditional iterator is hard to parallelize efficiently, which is where Stream improves on it

  2. Stream streams can be created from collections, arrays, generators, iterators

  3. You can change the Stream with limit, distinct and sorted

  4. You can use reduction operations to obtain results from the Stream, such as count(), max(), min(), findFirst() and findAny(), some of which return an Optional value

  5. The purpose of Optional is to safely replace the use of null values. You can use ifPresent or orElse methods

  6. You can collect Stream results into collections, arrays, strings, and maps

  7. The groupingBy and partitioningBy methods of the Collectors class can group the contents of the Stream and collect a result for each group

  8. Each primitive type has a specialized stream and specialized functional interfaces

The difference between Stream and collection

  1. It is difficult to parallelize operations while iterating over a collection

  2. Stream superficially resembles a collection, but:

    1. Stream does not change its source data

    2. Stream itself does not store elements; they live in the underlying collection and are produced on demand

    3. Stream may execute lazily, doing work only when results are needed. For example, if only five results are required, execution stops once they have been found.

    4. The Stream principle is "what to do, not how to do it"

    5. A stream pipeline has three steps:

      1. Create Stream

      2. Convert Stream

      3. Collect results
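The three steps above can be sketched in one small pipeline. This is a minimal example with assumed sample data; the class and method names are illustrative only:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ThreeStepDemo {
    static List<Integer> squaresOfEvens(List<Integer> input) {
        return                       // 1. create the stream
                .filter(n -> n % 2 == 0)         // 2. transform: keep even numbers
                .map(n -> n * n)                 // 2. transform: square them
                .collect(Collectors.toList());   // 3. collect the result
    }

    public static void main(String[] args) {
        System.out.println(squaresOfEvens(Arrays.asList(1, 2, 3, 4, 5)));
    }
}
```

Nothing happens until the terminal `collect` call; `filter` and `map` only describe the pipeline.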

1. Create Stream

1. All stream creation ultimately relies on StreamSupport

2. Classes that implement the Collection interface can use the new default stream() method

3. For arrays or individual objects, you can use Stream.of() to create a stream

4. The static methods Stream.iterate() and Stream.generate() of the Stream class can create an infinite Stream

1.1 The default stream() method of Collection

Classes that implement the Collection interface can use its new default methods, which create the stream from a spliterator

//Create a sequential stream
default Stream<E> stream() {
   return, false);
}

//Create a (possibly) parallel stream; a sequential stream may be returned
default Stream<E> parallelStream() {
  return, true);
}

1.2 The static Stream.of() method

    //1. Create a single-element Stream
    public static<T> Stream<T> of(T t) {
        return Streams.StreamBuilderImpl<>(t), false);
    }

    //2. Create a Stream from a varargs array
    public static<T> Stream<T> of(T... values) {
        return;
    }

1.3 Create a stream from an array with[] array)

	//, which has overloads that can stream just a slice of the array
	public static <T> Stream<T> stream(T[] array) {
        return stream(array, 0, array.length);
	}

	//Overload with a start index (inclusive) and end index (exclusive)
	public static <T> Stream<T> stream(T[] array, int startInclusive, int endExclusive) {
        return, startInclusive, endExclusive), false);
	}

1.4 Generate an infinite Stream

 * 1. Stream.iterate generates an infinite Stream of regular data:
 * 	each generated value is passed as the argument that produces the next one. The first element of the sequence is seed, followed by f(seed), f(f(seed)), ...
 *   public static<T> Stream<T> iterate(final T seed, final UnaryOperator<T> f)
 *   UnaryOperator extends Function<T, T>; it only requires that the return value and the input value have the same type.
 *   The following code generates an infinite incrementing sequence starting from 1
Stream<Integer> infiniteStreamIterate=Stream.iterate(1,a->a+1);

 * 2. public static<T> Stream<T> generate(Supplier<T> s)
 * generates an infinite sequence of any kind of (random/repeated) data
 * The following code generates an infinite Stream of 1s
Stream<Integer> infiniteStreamGenerate=Stream.generate(()->1);
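An infinite stream must be cut down before a terminal operation can finish; `limit` (covered in 2.2) is the usual tool. A minimal sketch, with an illustrative class name:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class InfiniteStreamDemo {
    static List<Integer> firstFive() {
        // iterate builds 1, 2, 3, ...; limit makes the infinite stream finite
        return Stream.iterate(1, a -> a + 1)
                     .limit(5)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(firstFive());
    }
}
```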

2. Convert a Stream: filter, map, flatMap

  1. Converting a Stream means processing or filtering it to produce a new Stream
  2. The parameter of the filter method is a Predicate object, a function from T to boolean
  3. The map method takes a conversion function

2.1 Transforming streams with filter, map and flatMap

filter and map

//filter keeps elements less than 100
Stream<Integer> infiniteStreamIterateFilter=infiniteStreamIterate.filter(a->a<100);

//map converts each element, for example from Integer to String

flatMap: flattening conversion

flatMap takes a function that returns a Stream for each element, and flattens all of those returned streams into a single Stream, instead of producing a Stream of separate Streams


  • Collecting with map directly: {"Xiao Liu", "Xiao Li", "Xiao Wang", "Xiao Zhang"} -> [[Xiao, Liu], [Xiao, Li]...]
  • Collecting with flatMap: {"Xiao Liu", "Xiao Li", "Xiao Wang", "Xiao Zhang"} -> [Xiao, Liu, Xiao, Li...]
//The difference between map and flatMap
//Cut a String into a Stream of Characters
Function<String, Stream<Character>> function = a -> {
    List<Character> characterList = new ArrayList<>();
    for (char c : a.toCharArray()) {
        characterList.add(c);
    }
    return;
};
String[] stringArray = {"Xiao Liu", "Xiao Li", "Xiao Wang", "Xiao Zhang"};
Stream<String> nameStream =;
//Using map directly collects {"Xiao Liu", "Xiao Li", "Xiao Wang", "Xiao Zhang"} -> [[Xiao, Liu], [Xiao, Li]...]
Stream<Stream<Character>> nameCharacterStream =;
//Using flatMap collects {"Xiao Liu", "Xiao Li", "Xiao Wang", "Xiao Zhang"} -> [Xiao, Liu, Xiao, Li...]
//(note: a stream can only be consumed once, so recreate nameStream before this second use)
Stream<Character> charStream = nameStream.flatMap(function);
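The same map-versus-flatMap contrast can be made runnable end to end. This sketch uses assumed ASCII sample words and an illustrative class name:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FlatMapDemo {
    // split a word into a stream of one-letter strings
    static Stream<String> letters(String word) {
        return word.chars().mapToObj(c -> String.valueOf((char) c));
    }

    static long mapCount() {
        // map yields a Stream of Streams: one inner stream per word
        Stream<Stream<String>> nested = Stream.of("ab", "cd").map(FlatMapDemo::letters);
        return nested.count();
    }

    static List<String> flatMapped() {
        // flatMap flattens all the letters into a single stream
        return Stream.of("ab", "cd").flatMap(FlatMapDemo::letters).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(mapCount() + " " + flatMapped());
    }
}
```

With `map` we end up with two inner streams; with `flatMap` we get one flat stream of four letters.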

2.2 Extracting sub-streams and composing streams

  1. Stream.limit(n) keeps the first n elements. If the stream has fewer than n elements, all of them are returned

  2. Stream.skip(m) discards the first m elements

  3. Stream.peek(Consumer) produces a stream with the same elements, but invokes the consumer each time an element is fetched, which is useful for debugging
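The three methods compose naturally on an infinite stream; a minimal sketch (class name illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SliceDemo {
    static List<Integer> skipThenLimit() {
        // skip discards the first 2 elements; limit keeps the next 3
        return Stream.iterate(1, n -> n + 1)
                     .skip(2)
                     .limit(3)
                     .peek(n -> System.out.println("fetched: " + n)) // debug hook per element
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(skipThenLimit());
    }
}
```

The `peek` output also demonstrates laziness: only the three surviving elements are ever printed.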

2.3 Stateful transformations: deduplication and sorting

  1. **Stateless:** the elements in the stream are unrelated to each other; the result of filtering a single element is independent of the other elements

  2. **Stream.distinct() deduplication:** Stream<String> uniqueWord = Stream.of("Xiao Liu", "Xiao Liu").distinct();

  3. **Stream.sorted() sorting:** Stream.sorted(Comparator.comparing(String::length).reversed());
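Both stateful operations can be combined in one pipeline. A sketch with assumed sample strings:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StatefulDemo {
    static List<String> dedupAndSort() {
        // distinct removes duplicates; sorted orders by descending length
        return Stream.of("bb", "a", "bb", "ccc")
                     .distinct()
                     .sorted(Comparator.comparing(String::length).reversed())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(dedupAndSort());
    }
}
```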

3. Aggregating results (reducing to a single value)

3.1 Basic aggregation methods: findFirst, findAny, anyMatch, allMatch and noneMatch

  1. findFirst retrieves the first value in the stream. It is usually combined with filter to fetch the first matching value

  2. findAny also takes a value from the stream, but in a parallel stream it is not necessarily the first one in encounter order; it is simply the first matching value found in time

  3. anyMatch takes a Predicate and checks whether the stream contains any matching element; the corresponding allMatch and noneMatch check whether all elements match, or none do
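All five methods in one place, with assumed sample data and an illustrative class name:

```java
import java.util.Optional;
import java.util.stream.Stream;

public class MatchDemo {
    static boolean demo() {
        // findFirst after filter: the first even number
        Optional<Integer> first = Stream.of(1, 2, 3, 4)
                                        .filter(n -> n % 2 == 0)
                                        .findFirst();
        boolean anyEven = Stream.of(1, 2, 3, 4).anyMatch(n -> n % 2 == 0);   // some even
        boolean allEven = Stream.of(1, 2, 3, 4).allMatch(n -> n % 2 == 0);   // not all even
        boolean noneNegative = Stream.of(1, 2, 3, 4).noneMatch(n -> n < 0);  // none negative
        return first.get() == 2 && anyEven && !allEven && noneNegative;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```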

3.2 The aggregation operation Stream.reduce

Note: the result of the previous operation is used as the parameter of the next operation, until the whole stream has been consumed — somewhat like function composition with lambdas.

Except for case 1, which returns an Optional, the other two cases directly return an object of the corresponding type.

Case 1:

Optional<T> reduce(BinaryOperator<T> accumulator);

//Source code description:
//Performs a reduction on the elements of this stream using an associative accumulation function, and returns an Optional describing the reduced value, if any.
//This is equivalent to:
//     boolean foundAny = false;
//     T result = null;
//     for (T element : this stream) {
//         if (!foundAny) {
//             foundAny = true;
//             result = element;
//         }
//         else
//             result = accumulator.apply(result, element);
//     }
//     return foundAny ? Optional.of(result) : Optional.empty();

//The accumulator function must be associative, but execution is not constrained to be sequential.
//This is a terminal operation.
//accumulator – an associative, non-interfering, stateless function for combining two values
//Returns: an Optional describing the result of the reduction
//Throws: NullPointerException – if the result of the reduction is null
//See also: reduce(Object, BinaryOperator), min(Comparator), max(Comparator)

Case 2:

T reduce(T identity, BinaryOperator<T> accumulator);
Similar to case 1, but with an additional initial value of type T; when the Stream is empty, the identity value is returned.
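The contrast between cases 1 and 2 on an empty stream is worth seeing concretely. A sketch with an illustrative class name:

```java
import java.util.Optional;
import java.util.stream.Stream;

public class ReduceDemo {
    static int sumWithIdentity() {
        // case 2: identity 0 plus the elements
        return Stream.of(1, 2, 3).reduce(0, Integer::sum);
    }

    static int emptySum() {
        // case 2 on an empty stream: the identity itself comes back
        return Stream.<Integer>empty().reduce(0, Integer::sum);
    }

    static boolean emptyHasValue() {
        // case 1 on an empty stream: Optional.empty()
        Optional<Integer> r = Stream.<Integer>empty().reduce(Integer::sum);
        return r.isPresent();
    }

    public static void main(String[] args) {
        System.out.println(sumWithIdentity() + " " + emptySum() + " " + emptyHasValue());
    }
}
```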

Case 3:
The return type differs from the stream's element type, e.g. (U, T) -> U. In this case two extra functions must be provided:

1. An accumulator that combines the intermediate result with the next element; the result type is the return type
2. During parallel computation the accumulator produces multiple partial results, so a combiner must also be provided to merge values of the result type

For example, summing the lengths of the words in a Stream<String>:

int result = words.reduce(0,
                 (total, word) -> total + word.length(),
                 Integer::sum);

<U> U reduce(U identity,
                 BiFunction<U, ? super T, U> accumulator,
                 BinaryOperator<U> combiner);

4. Collect results

4.0 Stream's iterator() method for accessing elements

  • Generates an ordinary iterator to fetch the elements
 * Returns an iterator for the elements of this stream.
 * <p>This is a <a href="package-summary.html#StreamOps">terminal
 * operation</a>.
 * @return the element iterator for this stream
Iterator<T> iterator();

4.1 collect streams into arrays

  • Use the Stream.toArray() method to collect into an Object[]

  • Use Stream.toArray(IntFunction<T[]> generator) to produce a result array of the specified type

//String Stream
Stream<String> nameStream=Stream.of("Da Liu","Xiao Liu","Xiao Wang","Xiao Wang 2","petty thief","Xiao Zhang");
//Default build object array
Object[] objectArray=nameStream.toArray();
//Generate an array of the specified type. Java cannot instantiate a generic array,
//so the functional interface IntFunction (R apply(int value)) is used to create an array of the right type
//(r) -> new String[r] can also be written as the method reference String[]::new
String[] stringArray=nameStream.toArray((r)->new String[r]);

4.2 The collect method

4.2.1 The collect method takes three parameters

1. A method that creates an instance of the target object type (the target is not necessarily a collection; it can be a StringBuilder, for example)

2. A method that adds an element to the target object

3. A method that merges two target objects together, such as addAll

4.2.2 The Collectors class provides factory methods for common collection types

1. For example, to collect a stream into a List or Set, use Collectors.toList(), Collectors.toSet(), or a custom type via Collectors.toCollection(TreeSet::new)

ArrayList<String> arrayNameList=nameStream.collect(Collectors.toCollection(ArrayList::new));
List<String> nameList=nameStream.collect(Collectors.toList());
Set<String> nameSet=nameStream.collect(Collectors.toSet());
//If you need to return another type, use Collectors.toCollection(constructor)
TreeSet<String> nameTreeSet=nameStream.collect(Collectors.toCollection(TreeSet::new));

2. Join the stream into a string with Collectors.joining()

//Joining a stream of strings
String allString =nameStream.collect(Collectors.joining(","));
//For streams of other object types, convert to String first, e.g. with map(Object::toString);

3. Compute the sum, average, maximum and minimum of a numeric stream with Collectors.summarizing(Int|Long|Double)

IntSummaryStatistics intSummaryStatistics=nameStream.collect(Collectors.summarizingInt(String::length));
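The returned IntSummaryStatistics exposes all the aggregates at once. A sketch with assumed sample strings and an illustrative class name:

```java
import java.util.IntSummaryStatistics;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class SummaryDemo {
    static IntSummaryStatistics lengthStats() {
        // summarizingInt computes count, sum, min, average and max in one pass
        return Stream.of("a", "bb", "ccc")
                     .collect(Collectors.summarizingInt(String::length));
    }

    public static void main(String[] args) {
        IntSummaryStatistics s = lengthStats();
        System.out.println(s.getSum() + " " + s.getMax() + " " + s.getAverage());
    }
}
```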

4. Traversing a stream

1. Use forEach; for parallel streams, this method does not guarantee traversal order

2. Use forEachOrdered to guarantee traversal order

5. Collect results into a Map with Collectors.toMap

Collectors.toMap takes up to four function parameters:

    1. The first generates the key

    2. The second generates the value

    3. The third handles the value when keys collide; **it can implement simple or full grouping — see section 5 below**

    4. The fourth is the Map constructor, for when a specific Map type is required
//(each example below assumes a fresh personStream; a stream can only be consumed once)
//1. Generate values of different types
Map<String,Person> personMap=personStream.collect(Collectors.toMap(Person::getIdNumber,s->s));
personMap=personStream.collect(Collectors.toMap(Person::getIdNumber, Function.identity()));
Map<String,String> personIdNameMap=personStream.collect(Collectors.toMap(Person::getIdNumber,Person::getName));
//2. Handling duplicate keys - 1: without a merge function an IllegalStateException is thrown; here the newer value is kept
personIdNameMap=personStream.collect(Collectors.toMap(Person::getIdNumber,Person::getName,(existValue,newValue)->newValue));

//3. Handling duplicate keys - 2: merge values into a Map with a Set or List as the value
//(Simple grouping) Collections.singleton wraps a single object in a set
Map<String,Set<String>> personIdNameList=personStream.collect(Collectors.toMap(Person::getIdNumber, l->Collections.singleton(l.getName()),(existValue,newValue)->{
    Set<String> s=new HashSet<>(existValue);
    s.addAll(newValue);
    return s;
}));
//4. The fourth parameter customizes the returned Map type
HashMap<String,String> personIDMameMap2=personStream.collect(Collectors.toMap(Person::getIdNumber,Person::getName,(existValue,newValue)->newValue,HashMap::new));

//Collectors.toMap method (JDK source)
public static <T, K, U>
Collector<T, ?, Map<K,U>> toMap(Function<? super T, ? extends K> keyMapper,
                                Function<? super T, ? extends U> valueMapper,
                                BinaryOperator<U> mergeFunction) {
    return toMap(keyMapper, valueMapper, mergeFunction, HashMap::new);
}

5. Grouping and partitioning: Collectors.groupingBy()

5.1 Collectors.groupingBy grouping

Collectors.groupingBy() has three overloads, taking up to three parameters:

Parameter 1: extracts the grouping key from each element,
Parameter 2: a factory for the Map that holds the groups, e.g. TreeMap::new,
Parameter 3: a Collector that processes each group's result — effectively a collector for each sub-stream

  1. The data is split into a Map<key (grouping field), List<grouped elements>> according to the classifier
  2. When grouping by a boolean value, partitioningBy is more efficient than groupingBy
  3. Collectors.groupingByConcurrent() returns a concurrent Map; with a parallel stream it inserts values concurrently, similar to toConcurrentMap
  4. The grouping methods accept a second (downstream) collector applied to each group's elements
  5. Common downstream collectors: Collectors.toSet(), Collectors.counting(), Collectors.summarizingInt(), Collectors.summingInt(), Collectors.maxBy(), Collectors.minBy()
// Parameter 1: extracts the grouping key from each element
// Parameter 2: a factory for the Map that holds the groups, e.g. TreeMap::new
// Parameter 3: a Collector that processes each group (a collector for each sub-stream)

//1. The first parameter of groupingBy(): group by gender
Map<String, List<Person>> genderGroupListMap = -> p.getGender()));
genderGroupListMap =;
//2. When grouping by a boolean, partitioningBy is more efficient than groupingBy
Map<Boolean, List<Person>> malePersonListMap = -> "male".equals(s.getGender())));
//3. Collectors.groupingByConcurrent() returns a concurrent Map; with a parallel stream it inserts values concurrently, similar to toConcurrentMap
Map<Boolean, List<Person>> concurrentMap = personList.parallelStream().collect(Collectors.groupingByConcurrent(s -> "male".equals(s.getGender())));
//4. Counting, summing, maximum, minimum, etc.
//4.1 Count the entries per group
Map<String, Long> genderCountMap =, Collectors.counting()));
//4.2 Total name length per group with summingInt
Map<String, Integer> nameLengthMap =, Collectors.summingInt(a -> a.getName().length())));
//4.3 Find the longest name per group
Map<String, Optional<Person>> maxNameLengthMap =, Collectors.maxBy(Comparator.comparing((Person s) -> s.getName().length()))));
maxNameLengthMap =, Collectors.minBy(Comparator.comparing((Person s) -> s.getName().length()))));

5.2 Transforming grouped elements with Collectors.mapping()

Collectors.mapping generates the value function for each group:
mapping(parameter 1 transforms each element of the group, parameter 2 is the downstream collector applied to the transformed group as a whole)

//groupingBy with mapping: transform the elements inside each group, then apply the downstream collector
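A runnable sketch of groupingBy combined with mapping, using assumed sample words (class name illustrative): words are grouped by length, and each word is mapped to its first letter before being collected into a Set.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

public class MappingDemo {
    static Map<Integer, Set<String>> firstLettersByLength() {
        // group words by length, then map each word to its first letter
        return Arrays.asList("apple", "avocado", "banana", "cherry").stream()
                .collect(Collectors.groupingBy(
                        String::length,
                        Collectors.mapping(w -> w.substring(0, 1), Collectors.toSet())));
    }

    public static void main(String[] args) {
        System.out.println(firstLettersByLength());
    }
}
```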

**5.3 Collecting each group into a collection: downstream = Collectors.toSet()**

//Collect each group into a Set

5.4 Summary statistics for grouped results — usable when the grouped elements can be mapped to an int|long|double

//For double, int and long values, the summarizing collectors compute aggregate analysis results in one pass: maximum, minimum, average, count and sum

5.5 The downstream aggregation method Collectors.reducing

Three forms, analogous to section 3.2:

  1. reducing(binaryOperator)
  2. reducing(identity, binaryOperator)
  3. reducing(identity, mapper, binaryOperator)
    In general, Stream.reduce from section 3.2 suffices, and Collectors.reducing() is rarely needed
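The three-argument form is the one that genuinely needs a downstream position. A sketch with assumed sample words (class name illustrative): per length group, each word is mapped to its length and the lengths are summed.

```java
import java.util.Arrays;
import java.util.Map;
import java.util.stream.Collectors;

public class ReducingDemo {
    static Map<Integer, Integer> totalLengthByLength() {
        // reducing(identity, mapper, binaryOperator) as a downstream collector
        return Arrays.asList("a", "bb", "cc", "ddd").stream()
                .collect(Collectors.groupingBy(
                        String::length,
                        Collectors.reducing(0, String::length, Integer::sum)));
    }

    public static void main(String[] args) {
        System.out.println(totalLengthByLength());
    }
}
```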

5.6 Summary of downstream collectors

Downstream collectors can form very complex expressions. In general they are only needed as the downstream of groupingBy or partitioningBy; otherwise it is simpler to apply map, reduce, count, max or min directly to the stream

6. Primitive-type streams

Just as there are specialized lambda functional interfaces, the Stream API also provides primitive-type streams, which avoid the inefficiency of repeatedly boxing and unboxing primitive values.

For primitive types, the Stream API provides the stream types IntStream, LongStream and DoubleStream

  1. To store values of type short, char, byte or boolean, use IntStream directly

  2. To store floats, use DoubleStream directly.
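A minimal IntStream sketch: the whole pipeline operates on unboxed ints, and `sum()` is available directly without a collector (class name illustrative).

```java
import java.util.stream.IntStream;

public class PrimitiveStreamDemo {
    static int sumOneToTen() {
        // IntStream works on unboxed ints; rangeClosed covers 1..10 inclusive
        return IntStream.rangeClosed(1, 10).sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOneToTen());
    }
}
```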

7. Parallel streams

Stream makes parallel computation easy; the process is almost automatic.

**Usage:** streams are sequential by default. To obtain a parallel stream, either 1. create one with Collection.parallelStream(), or 2. convert a sequential stream with the parallel() method


  1. A correct parallel stream operation must return the same result as the sequential stream, and its operations must be stateless.

  2. The methods passed to a parallel stream for execution must be thread safe

  3. By default, streams generated from ordered collections, ranges, generators, iterators, or by Stream.sorted are ordered. Some operations run more efficiently when order can be ignored; the Stream.unordered method drops the ordering constraint, and Stream.distinct, for example, can benefit from it.

  4. Merging maps is expensive, so Collectors.groupingByConcurrent uses a shared concurrent map; marking the stream as parallel can improve execution efficiency.
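A sketch of point 1: a stateless, associative pipeline gives the same result whether run sequentially or in parallel (class name illustrative).

```java
import java.util.stream.IntStream;

public class ParallelDemo {
    static long countEvens(boolean parallel) {
        IntStream s = IntStream.rangeClosed(1, 1_000);
        if (parallel) {
            s = s.parallel(); // switch the stream to parallel mode
        }
        // filter and count are stateless, so both modes agree
        return s.filter(n -> n % 2 == 0).count();
    }

    public static void main(String[] args) {
        System.out.println(countEvens(false) + " " + countEvens(true));
    }
}
```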

Tags: Java stream

Posted on Sun, 12 Sep 2021 21:47:24 -0400 by why not