Simple use of Elasticsearch 7

brief introduction

  • ES works out of the box

  • ES queries are fast, while indexing (writing) is comparatively slow

  • ES only accepts documents in JSON format

Install using YUM

1. Download and install the YUM public key of ES

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

2. Configure the YUM repository for Elasticsearch

vim /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-7.x] 
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1 

3. Install ELASTICSEARCH

yum install -y elasticsearch

4. The configuration files are in the /etc/elasticsearch/ directory

vim /etc/elasticsearch/elasticsearch.yml

Set the bind address and port so the node is reachable from outside the host

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
cluster.name: my-application
node.name: node-1

discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["node-1"]

http.cors.allow-origin: "*"
http.cors.enabled: true
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.cors.allow-credentials: true
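After restarting the service with this configuration, a quick sanity check (in Kibana Dev Tools, or via curl against port 9200) is a request to the root endpoint, which returns the node name, cluster name, and version:

```
GET /
```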

Offline package installation

download

https://www.elastic.co/cn/downloads/elasticsearch

The package used here is elasticsearch-7.1.1-linux-x86_64.tar.gz

Unzip it, enter the directory, and start it:

./bin/elasticsearch     # start up (add -d to run as a daemon)

Problem:

Caused by: java.lang.RuntimeException: can not run elasticsearch as root
	at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:102) ~[elasticsearch-7.1.1.jar:7.1.1]
	at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:169) ~[elasticsearch-7.1.1.jar:7.1.1]
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:325) ~[elasticsearch-7.1.1.jar:7.1.1]
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159) ~[elasticsearch-7.1.1.jar:7.1.1]
	... 6 more

Solution:

groupadd elsearch
useradd elsearch -g elsearch -p root
chown -R elsearch:elsearch elasticsearch-7.1.1    # recursively change the owner and group of the elasticsearch-7.1.1 directory to elsearch

problem

java.io.FileNotFoundException: /root/elasticsearch-7.1.1/logs/elasticsearch.

elsearch is not authorized to access elasticsearch-7.1.1

chown -R elsearch /root/elasticsearch-7.1.1

problem

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Solution

vim /etc/sysctl.conf
vm.max_map_count=262144   # append at the end of the file
sysctl -p    # reload the setting

start-up

su elsearch		# switch to the elsearch user
./bin/elasticsearch

# start the service
systemctl start elasticsearch.service

# enable start on boot
systemctl enable elasticsearch.service

# check the service status
systemctl status elasticsearch.service

IK word segmentation plugin

install

https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.11.1/elasticsearch-analysis-ik-7.11.1.zip

Download it, upload it into the plugins directory of the Elasticsearch installation, and unzip it there. The plugin version must exactly match your Elasticsearch version.

Configure your own dictionary

vim ik/config/IKAnalyzer.cfg.xml
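A rough sketch of what that file looks like; the dictionary file names (custom/my.dic, custom/stopwords.dic) are illustrative and must point to files you create under the plugin's config directory:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- custom extension dictionary (illustrative path) -->
    <entry key="ext_dict">custom/my.dic</entry>
    <!-- custom stop-word dictionary (illustrative path) -->
    <entry key="ext_stopwords">custom/stopwords.dic</entry>
</properties>
```

After editing the dictionary configuration, restart Elasticsearch so the plugin reloads it.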

IK word segmentation algorithm

  • ik_smart: coarsest-grained segmentation (fewest tokens)

  • ik_max_word: finest-grained segmentation (most tokens)

POST _analyze
{
  "analyzer": "ik_smart",
  "text": "Luo Jiankang, will you marry me today"
}

POST _analyze
{
  "analyzer": "ik_max_word",
  "text": "Luo Jiankang, will you marry me today"
}

Basic class

SearchSourceBuilder

Class used to build query criteria

QueryBuilders

Tool class for creating search query

HighlightBuilder

Highlight constructor

/**
 * Set encoder for the highlighting. Allowed values
 * are {@code html} and {@code default}.
 *
 * @param encoder name
 */
public HighlightBuilder encoder(String encoder) {
    this.encoder = encoder;
    return this;
}
	
/**
 * Adds a field to be highlighted with default fragment size of 100 characters, and
 * default number of fragments of 5 using the default encoder
 *
 * @param name The field to highlight
 */
public HighlightBuilder field(String name) {
    return field(new Field(name));
}

/**
 * Set a tag scheme that encapsulates a built in pre and post tags. The allowed schemes
 * are {@code styled} and {@code default}.
 *
 * @param schemaName The tag scheme name
 */
public HighlightBuilder tagsSchema(String schemaName) {
    switch (schemaName) {
    case "default":
        preTags(DEFAULT_PRE_TAGS);
        postTags(DEFAULT_POST_TAGS);
        break;
    case "styled":
        preTags(DEFAULT_STYLED_PRE_TAG);
        postTags(DEFAULT_STYLED_POST_TAGS);
        break;
    default:
        throw new IllegalArgumentException("Unknown tag schema [" + schemaName + "]");
    }
    return this;
}

Hands-on practice

http://121.43.135.181:9200/_cat/nodes  # view node information over HTTP
GET /_cat/nodes    # view all nodes; the master node is marked with *
GET /_cat/health   # view the cluster health status
GET /_cat/master   # view the master node
GET /_cat/indices  # view all indices (analogous to MySQL's SHOW DATABASES)

Document operations: repeating the same operation acts as an update

If the index does not exist, it is created automatically. Note that mapping types are deprecated in 7.x; use _doc as the type name.

# PUT requires a document id
PUT /index_name/_doc/docid
{

}
# POST auto-generates an id
POST /index_name/_doc/
{

}
# With an id: updates if the document exists, otherwise creates it
POST /index_name/_doc/2
{

}

Get data: GET

Format: GET /indexname/_doc/docid
GET /test/_doc/1

Result

{
	"_index": "test",
	"_type": "_doc",
	"_id": "1",
	"_version": 2,
	"_seq_no": 1,   # concurrency-control field, incremented on every update; used as an optimistic lock via ?if_seq_no=1&if_primary_term=1
	"_primary_term": 1,    # similar; changes when the primary shard is reallocated
	"found": true,
	"_source": {  #This is the data
		"name": "Ant tooth",
		"age": 23,
		"title": "How's Pikachu"
	}
}
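Building on the _seq_no and _primary_term fields in the response above, an optimistic-lock update can be sketched like this (the values come from that response); if another writer has already bumped the sequence number, ES rejects the request with a version-conflict error:

```
PUT /test/_doc/1?if_seq_no=1&if_primary_term=1
{
  "name": "Ant tooth",
  "age": 24
}
```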

_update

With _update, the body must be wrapped in a "doc" object. If the submitted data is identical to the stored document, the version number and sequence number stay unchanged and no operation is performed.

Without _update, a repeated write always rewrites the document and bumps the version number and sequence number.

# with _update: requires "doc"; identical data is a no-op (version and seq_no unchanged)
POST /test/_update/1
{
	"doc":{
	
	}
}


# without _update: identical data still bumps the version and sequence number
POST /test/_doc/1
{

}

DELETE

Remove a document or an index:

DELETE /test/_doc/1     # delete a document
DELETE /test	 # delete the index

_bulk

Bulk operations must use a POST request

POST /test/_bulk
{"index":{"_id":"1"}}
{"name":"data"}
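The bulk body is newline-delimited JSON: each action line is followed, where applicable, by a source line. A slightly fuller sketch mixing index, update, and delete actions (document ids are illustrative):

```
POST /test/_bulk
{"index":{"_id":"1"}}
{"name":"data"}
{"update":{"_id":"1"}}
{"doc":{"name":"new data"}}
{"delete":{"_id":"2"}}
```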

query

_search

  • match_all: query everything

  • match: the input is analyzed; documents containing any of the resulting terms match

  • match_phrase: matches the input as a whole phrase instead of splitting it into independent terms

  • bool: combines must, must_not, and should clauses

  • sort: sorts the results
GET /test/_search
{
  "query": { 
    "match_all": {}   #Query all
  },
  "sort": [
    {
      "age": {
        "order": "desc"   #Sorting fields and rules
      }
    }
  ],
  "from": 0,   #Paging start position
  "size": 2		#Size per page
}

result

{
  "took" : 1,  # query time in milliseconds
  "timed_out" : false,	# whether the request timed out
  "_shards" : {		# shard statistics
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {	# hit metadata
    "total" : {
      "value" : 4,	#Number of query results
      "relation" : "eq"		#Query relation
    },
    "max_score" : null,	#Maximum score
    "hits" : [	# the matching documents
      {
        "_index" : "test", #Index name
        "_type" : "_doc",	#type
        "_id" : "3",	#Document id
        "_score" : null,	#score
        "_source" : {	#The data we store
          "name" : "storage box",
          "age" : 23,
          "title" : "Television"
        },
        "sort" : [
          23
        ]
      }
    ]
  }
}

bool query

GET /test/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "match": {
            "name": "java"
          }
        }
      ],
      "must_not": [
        {"match": {
          "age": 23
        }}
      ],
      "should": [
        {"match": {
          "title": "Television"
        }}
      ],
       "filter": {    #Filter results
        "range": {    #Range
          "age": {
            "gte": 30,    #Greater than or equal to
            "lte": 40	#Less than or equal to
          }
        }
      }
    }
  },
  "from": 0,
  "size": 10
}

match_phrase

Phrase match: the input is matched as a whole phrase rather than being split into independent terms

GET /test/_search
{
  "query": { 
    "match_phrase": {   # the field must contain the phrase "Luo Luo said java"
      "name": "Luo Luo said java"
    }
    
  },
  "from": 0,
  "size": 10
}


.keyword

Exact match

GET /test/_search
{
  "query": { 
    "match": {
      "name.keyword": "Luo Luo said java"   # the field must exactly equal "Luo Luo said java"
    }
    
  },
  "from": 0,
  "size": 10
}

term

Similar in effect to match, except that term does not analyze the input, so it is best used for exact values (numbers, dates, keyword fields) rather than full-text fields

Note: use match for full-text search fields and term for exact-value (non-text) fields
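For example, a term query on the numeric age field (for strings you would target the .keyword sub-field) might look like:

```
GET /test/_search
{
  "query": {
    "term": {
      "age": { "value": 23 }
    }
  }
}
```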

Aggregations

GET /test/_search
{
  "query": { 
    "match_all": {}
  },
  "from": 0,
  "size": 10,
  "aggs": {   # aggregations
    "ageAgg": {   # aggregation name
      "terms": {	# aggregation type
        "field": "age",  # field to aggregate on
        "size": 10    # return the top n buckets
      },
      "aggs": {   # nested sub-aggregation
        "avgAgg": {
          "avg": {   #Aggregation type
            "field": "age"
          }
        }
      }
    }
  }
}

The result is as follows

{
  "took" : 6,
  "timed_out" : false,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 5,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "ageAgg" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : 23,    # bucket for age 23
          "doc_count" : 4,   # four documents in this bucket
          "avgAgg" : {    
            "value" : 23.0     # average age within the bucket is 23.0
          }
        },
        {
          "key" : 35,
          "doc_count" : 1,
          "avgAgg" : {
            "value" : 35.0
          }
        }
      ]
    }
  }
}

Mappings

Field type mapping

PUT myindex
{
  "mappings": {
    "properties": {
      "name":{
        "type": "text"
      },
      "age":{
        "type": "integer"
      },
      "title":{
        "type": "keyword"
      }
    }
  }
}

Add a field

PUT myindex/_mapping
{
  
    "properties": {
      "sex":{
        "type": "keyword",
        "index": true
      },
        "time":{
        "type": "date",
        "format": "yyyy-MM-dd"
      }
    }
}
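A document that conforms to the extended mapping can then be indexed as usual (the field values are illustrative); note the time value must match the declared yyyy-MM-dd format:

```
POST /myindex/_doc
{
  "sex": "male",
  "time": "2021-03-12"
}
```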

_mapping: view the mappings

GET myindex/_mapping

_reindex: data migration

Migrate myindex data to newindex

POST _reindex
{
  "source": {"index": "myindex"},
  "dest": {"index":  "newindex"}
}

If the source index has a mapping type

POST _reindex
{
  "source": {
      "index": "myindex",
      "type":"bank"  #Specify type
  },
  "dest": {"index":  "newindex"}
}

Kibana visualization

install

https://www.elastic.co/cn/downloads/kibana

Note: the version number must be consistent with the es version number

Unzip the archive

Configure kibana

vim config/kibana.yml 

The configuration is as follows

i18n.locale: "zh-CN"
server.port: 5601 # listening port
server.name: "kibana-server"
server.host: "0.0.0.0" # allow access from outside the host
elasticsearch.hosts: ["http://192.168.52.129:9200"] # URL of the Elasticsearch instance Kibana connects to
# Kibana writes some of its own data to ES; this is the name of that index in ES
kibana.index: ".kibana"

Change Kibana's run-as user to the same user as ES

chown -R elsearch:elsearch kibana-7.1.1-linux-x86_64

start-up

./bin/kibana

Integrating with Spring Boot

Reference documents: https://www.elastic.co/guide/en/elasticsearch/client/java-rest/7.x/index.html

maven dependency

<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>7.1.1</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.elasticsearch.client/elasticsearch-rest-high-level-client -->
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-high-level-client</artifactId>
    <version>7.1.1</version>
    <!-- Exclude the transitive elasticsearch artifact so its version cannot diverge from the server version -->
    <exclusions>
        <exclusion>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
        </exclusion>
    </exclusions>
</dependency>

Configuration class

package com.luo.search.config;

import org.apache.http.HttpHost;
import org.elasticsearch.client.*;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

/**
 * @Auther: Luo
 * @Date: 2021/3/12 14:28
 * @Description:
 */
@Configuration
public class ElasticSearchConfig {

    public static final RequestOptions COMMON_OPTIONS;
    static {
        RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
        /*builder.addHeader("Authorization", "Bearer " + TOKEN);
        builder.setHttpAsyncResponseConsumerFactory(
                new HttpAsyncResponseConsumerFactory
                        .HeapBufferedResponseConsumerFactory(30 * 1024 * 1024 * 1024));*/
        COMMON_OPTIONS = builder.build();
    }
    @Bean
    public RestHighLevelClient restHighLevelClient(){
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost("121.43.135.181", 9200, "http")));
        return client;
    }
}

Create index and add data

 /*
     * @Author Luo
     * @Description  Create index
     **/
    @Test
    void index1() throws IOException {
        ObjectMapper mapper = new ObjectMapper();
        IndexRequest indexRequest = new IndexRequest("user");//Index library
        indexRequest.id("1");//Document id
        User user = new User(1, "Luo Jiankang", new Date(), 8000.0, "Will tomorrow be better, Amen");
        String jsonString =mapper.writeValueAsString(user);
        indexRequest.source(jsonString, XContentType.JSON);
        restHighLevelClient.index(indexRequest,ElasticSearchConfig.COMMON_OPTIONS);
    }

    @Test
    void index2() throws IOException {
        Map<String, Object> jsonMap = new HashMap<>(8);
        jsonMap.put("id", "2");
        jsonMap.put("name", "kimchy");
        jsonMap.put("date", new Date());
        jsonMap.put("money", 34.34);
        jsonMap.put("content", "trying out Elasticsearch");
        IndexRequest indexRequest = new IndexRequest("user")
                .id("2").source(jsonMap);
        restHighLevelClient.index(indexRequest,ElasticSearchConfig.COMMON_OPTIONS);
    }

Batch add data

/*
     * @Author Luo
     * @Description Batch data
     * @Date 2021/3/16 9:22
     * @Param []
     * @return void
     **/
    public void bulk() throws IOException {
        ArrayList list = new ArrayList(16);
        BulkRequest bulkRequest = new BulkRequest("user");
        list.forEach(item -> {
            // a fresh IndexRequest must be created for each item
            IndexRequest indexRequest = new IndexRequest();
            indexRequest.source(JSON.toJSON(item), XContentType.JSON);
            bulkRequest.add(indexRequest);
        });
        BulkResponse bulkResponse = restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
    }

Get document information

//get data
    @Test
    void getdata() throws IOException {
        GetRequest getRequest = new GetRequest("user","id");
        try {
            GetResponse getResponse = restHighLevelClient.get(getRequest, ElasticSearchConfig.COMMON_OPTIONS);
            String index = getResponse.getIndex();  //Get index name
            String id = getResponse.getId();//Get document id
            if (getResponse.isExists()) {
                long version = getResponse.getVersion();//Get version number
                String sourceAsString = getResponse.getSourceAsString();//Get the data and convert it into a string
                ObjectMapper mapper = new ObjectMapper();
                User user = mapper.readValue(sourceAsString, User.class);
                System.out.println(user);
                Map<String, Object> sourceAsMap = getResponse.getSourceAsMap();//Get data and convert it into map
                System.out.println(sourceAsMap);
                byte[] sourceAsBytes = getResponse.getSourceAsBytes();//Get the data and convert it into a byte array
            } else {

            }
        } catch (ElasticsearchException  e) {
            if (e.status() == RestStatus.NOT_FOUND) {

            }
            if (e.status() == RestStatus.CONFLICT) {

            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

Determine whether the document exists

public void documentIsExits() throws IOException {
        GetRequest getRequest = new GetRequest(
                "user",
                "1");
        getRequest.fetchSourceContext(new FetchSourceContext(false)); // skip fetching _source for better performance
        getRequest.storedFields("_none_"); // do not fetch any stored fields
        boolean exists = restHighLevelClient.exists(getRequest, RequestOptions.DEFAULT);
        System.out.println(exists);
    }

Delete a document

public void d() throws IOException {
        DeleteRequest deleteRequest = new DeleteRequest(
                "user",
                "1");
        DeleteResponse deleteResponse = restHighLevelClient.delete(deleteRequest, RequestOptions.DEFAULT);
        String index = deleteResponse.getIndex();
        String id = deleteResponse.getId();
        long version = deleteResponse.getVersion();
        ReplicationResponse.ShardInfo shardInfo = deleteResponse.getShardInfo(); //Obtain fragment information
        if (shardInfo.getTotal() != shardInfo.getSuccessful()) {

        }
        if (shardInfo.getFailed() > 0) {
            for (ReplicationResponse.ShardInfo.Failure failure :
                    shardInfo.getFailures()) {
                String reason = failure.reason();
            }
        }
    }

Update document

@Test
    public void update() throws IOException {
        UpdateRequest request = new UpdateRequest(
                "user",
                "1");
        ObjectMapper mapper = new ObjectMapper();

        User user = new User(1, "Xiao Wang", new Date(), 8400.0, "I love you, Lao Wang next door");
        String jsonString =mapper.writeValueAsString(user);

        request.doc(jsonString, XContentType.JSON);
        UpdateResponse updateResponse = restHighLevelClient.update(request, RequestOptions.DEFAULT);
    }


-------------------------------
    
      @Test
    public void update2() throws IOException {
        Map<String, Object> jsonMap = new HashMap<>();
        //jsonMap.put("date", new Date());
        jsonMap.put("content", "Get away");
        UpdateRequest request = new UpdateRequest("user", "2")
                .doc(jsonMap);
        UpdateResponse updateResponse = restHighLevelClient.update(request, RequestOptions.DEFAULT);
    }


Batch document operations

@Test
    public void bulkRequest () throws IOException {
        BulkRequest request = new BulkRequest();
        request.add(new IndexRequest("posts").id("1")
                .source(XContentType.JSON,"field", "foo"));
        request.add(new IndexRequest("posts").id("2")
                .source(XContentType.JSON,"field", "bar"));
        request.add(new IndexRequest("posts").id("3")
                .source(XContentType.JSON,"field", "baz"));
        request.add(new DeleteRequest("posts", "3"));
        request.add(new UpdateRequest("posts", "2")
                .doc(XContentType.JSON,"other", "test"));
        request.add(new IndexRequest("posts").id("4")
                .source(XContentType.JSON,"field", "baz"));
        request.timeout(TimeValue.timeValueMinutes(2));
        BulkResponse bulkResponse = restHighLevelClient.bulk(request, RequestOptions.DEFAULT);
        // hasFailures() returns true if any of the operations failed
        if (bulkResponse.hasFailures()) {

        }
        //Traversal result
        for (BulkItemResponse bulkItemResponse : bulkResponse) {
            DocWriteResponse itemResponse = bulkItemResponse.getResponse();
            switch (bulkItemResponse.getOpType()) {
                case INDEX:
                case CREATE:
                    IndexResponse indexResponse = (IndexResponse) itemResponse;
                    break;
                case UPDATE:
                    UpdateResponse updateResponse = (UpdateResponse) itemResponse;
                    break;
                case DELETE:
                    DeleteResponse deleteResponse = (DeleteResponse) itemResponse;
            }
        }
    }

Rebuild an index (including its data)

public void reindex() throws IOException {
        ReindexRequest reindexRequest = new ReindexRequest();
        reindexRequest.setSourceIndices("user");
        reindexRequest.setDestIndex("you_index");
        reindexRequest.setRefresh(true);
        BulkByScrollResponse bulkResponse = restHighLevelClient.reindex(reindexRequest, RequestOptions.DEFAULT);
        TimeValue timeTaken = bulkResponse.getTook(); //Total time consumed
        boolean timedOut = bulkResponse.isTimedOut();//Timeout
        long totalDocs = bulkResponse.getTotal();//Gets the number of documents processed
        long updatedDocs = bulkResponse.getUpdated();//Gets the number of updated documents
        long createdDocs = bulkResponse.getCreated();//Gets the number of new documents
        long deletedDocs = bulkResponse.getDeleted();//Gets the number of deleted documents
        long batches = bulkResponse.getBatches();
        long noops = bulkResponse.getNoops();
        long versionConflicts = bulkResponse.getVersionConflicts();
        long bulkRetries = bulkResponse.getBulkRetries();
        long searchRetries = bulkResponse.getSearchRetries();
        TimeValue throttledMillis = bulkResponse.getStatus().getThrottled();
        TimeValue throttledUntilMillis =
                bulkResponse.getStatus().getThrottledUntil();
        List<ScrollableHitSource.SearchFailure> searchFailures =
                bulkResponse.getSearchFailures();
        List<BulkItemResponse.Failure> bulkFailures =
                bulkResponse.getBulkFailures();
    }

Search documents

Query all data

@Test
    public  void SearchAll() throws IOException {
        SearchRequest searchRequest = new SearchRequest("user");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        searchSourceBuilder.query(QueryBuilders.matchAllQuery());
        searchRequest.source(searchSourceBuilder);
        SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
        RestStatus status = searchResponse.status(); //Get status
        TimeValue took = searchResponse.getTook();  //Time spent
        Boolean terminatedEarly = searchResponse.isTerminatedEarly();
        boolean timedOut = searchResponse.isTimedOut(); //Timeout

        SearchHits hits = searchResponse.getHits();//Outer hits
        TotalHits totalHits = hits.getTotalHits();
        System.out.println(totalHits.value);//Total documents queried

        SearchHit[] hitsHits = hits.getHits();//Inner hits
        for (SearchHit hitsHit : hitsHits) {
            String source = hitsHit.getSourceAsString(); //data
            System.out.println(source);
        }
    }

Conditional queries

Single-condition query

package com.luo.search.web;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import org.elasticsearch.search.sort.SortOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.validation.annotation.Validated;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * @Auther: Luo
 * @Date: 2021/3/15 14:10
 * @Description:
 */
@RestController
public class SearchWeb {

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    @RequestMapping("/search/{keyword}/{page}/{pagesize}")
    public List Searchby(@PathVariable("keyword") String keyword,
                         @PathVariable("page") Integer page,
                         @PathVariable("pagesize") Integer pagesize) throws IOException {
        ArrayList<Map> list = new ArrayList<>(16);
        //Build the search request and specify the index of the operation
        SearchRequest firstSearchRequest = new SearchRequest("user");
        //Search condition constructor
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        //Specify the query type
        //termQuery on a string field must target the .keyword sub-field, otherwise the query matches nothing
        //searchSourceBuilder.query(QueryBuilders.termQuery("content.keyword", keyword));
        searchSourceBuilder.query(QueryBuilders.matchPhraseQuery("content", keyword));
        //Configure highlighting
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        highlightBuilder.encoder("default")// encoder: "html" or "default"
                .field("content")// field to highlight
                .preTags("<span style=\"color:red\">").postTags("</span>");// prefix and suffix tags
        searchSourceBuilder.highlighter(highlightBuilder);
        //paging
        searchSourceBuilder.from(page);
        searchSourceBuilder.size(pagesize);
        //Set timeout
        searchSourceBuilder.timeout(TimeValue.timeValueSeconds(50));
        //Set collation
        searchSourceBuilder.sort("money", SortOrder.DESC);
        //Inject search criteria into the request
        firstSearchRequest.source(searchSourceBuilder);
        //Start search
        SearchResponse search = restHighLevelClient.search(firstSearchRequest, RequestOptions.DEFAULT);

        Map<String,Object> sourceAsMap =null;
        SearchHit[] hits = search.getHits().getHits();
        for (SearchHit hit : hits) {
            Map<String, HighlightField> highlightFields = hit.getHighlightFields();
            HighlightField content = highlightFields.get("content");
            sourceAsMap = hit.getSourceAsMap();
            if (content != null) {
                // StringBuilder is more efficient for repeated string concatenation
                StringBuilder new_content = new StringBuilder(16);
                Text[] fragments = content.getFragments();
                for (Text fragment : fragments) {
                    new_content.append(fragment.string());
                }
                sourceAsMap.put(content.getName(), new_content.toString());
            }
            list.add(sourceAsMap);
        }
        return list;
    }
}
Example request: http://localhost/search/I%20love%20you/0/5

Multi-condition query

package com.luo.search.web;

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.text.Text;
import org.elasticsearch.common.unit.TimeValue;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightBuilder;
import org.elasticsearch.search.fetch.subphase.highlight.HighlightField;
import org.elasticsearch.search.sort.SortOrder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.validation.annotation.Validated;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * @Auther: Luo
 * @Date: 2021/3/15 14:10
 * @Description:
 */
@RestController
public class SearchWeb {

    @Autowired
    private RestHighLevelClient restHighLevelClient;

    @RequestMapping("/search/{keyword}/{page}/{pagesize}")
    public List Searchby(@PathVariable("keyword") String keyword,
                         @PathVariable("page") Integer page,
                         @PathVariable("pagesize") Integer pagesize) throws IOException {
        ArrayList<Map> list = new ArrayList<>(16);
        //Build the search request and specify the index of the operation
        SearchRequest firstSearchRequest = new SearchRequest("user");
        //Search condition constructor
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        //Specify the query type
        //searchSourceBuilder.query(QueryBuilders.matchQuery("name", "Junior 3"));
        //termQuery on a string field must target the .keyword sub-field, otherwise the query matches nothing
//        searchSourceBuilder.query(QueryBuilders.termQuery("content.keyword", keyword));
        //searchSourceBuilder.query(QueryBuilders.matchPhraseQuery("content", keyword));
        BoolQueryBuilder boolQuery = QueryBuilders.boolQuery();
        //boolQuery.must(QueryBuilders.rangeQuery("money").gte(8400).lte(8800));
       
        boolQuery.must(QueryBuilders.matchPhraseQuery("content",keyword));
        boolQuery.must(QueryBuilders.matchPhraseQuery("name","the other woman"));

        // filter clauses do not affect relevance scoring, so they are more efficient
        boolQuery.filter(QueryBuilders.rangeQuery("money").gte(8400).lte(8800));
        
        searchSourceBuilder.query(boolQuery);

        //Configure highlighting
        HighlightBuilder highlightBuilder = new HighlightBuilder();
        highlightBuilder.encoder("default")// encoder: "html" or "default"
                .field("content")// field to highlight
                .preTags("<span style=\"color:red\">").postTags("</span>");// prefix and suffix tags
        searchSourceBuilder.highlighter(highlightBuilder);
        //paging
        searchSourceBuilder.from(page);
        searchSourceBuilder.size(pagesize);
        //Set timeout
        searchSourceBuilder.timeout(TimeValue.timeValueSeconds(50));
        //Set collation
        searchSourceBuilder.sort("money", SortOrder.DESC);
        //Inject search criteria into the request
        firstSearchRequest.source(searchSourceBuilder);
        //Start search
        SearchResponse search = restHighLevelClient.search(firstSearchRequest, RequestOptions.DEFAULT);

        Map<String,Object> sourceAsMap =null;
        SearchHit[] hits = search.getHits().getHits();
        for (SearchHit hit : hits) {
            Map<String, HighlightField> highlightFields = hit.getHighlightFields();
            HighlightField content = highlightFields.get("content");
            sourceAsMap = hit.getSourceAsMap();
            if (content != null) {
                // StringBuilder is more efficient for repeated string concatenation
                StringBuilder new_content = new StringBuilder(16);
                Text[] fragments = content.getFragments();
                for (Text fragment : fragments) {
                    new_content.append(fragment.string());
                }
                sourceAsMap.put(content.getName(), new_content.toString());
            }
            list.add(sourceAsMap);
        }
        return list;
    }
}

Tags: ElasticSearch

Posted on Tue, 21 Sep 2021 17:11:31 -0400 by rdog157h