Elasticsearch Series - Best Practices for Performance Tuning


Performance tuning is an essential topic for every component in a system architecture, and Elasticsearch is no exception. Although Elasticsearch's default configuration is already quite good, that does not mean it is perfect, so some essential practices are worth understanding.

Enable the Slow Query Log

The slow query log is an important tool for performance diagnosis. The usual practice is to set a slow-query threshold and have the operations team patrol the slow log every day: queries that are extremely slow are reported and handled immediately, while the rest are reviewed periodically by taking the top N entries from the slow log and optimizing them.

In Elasticsearch 6.3.1 the slow log is configured through API commands, and read and write operations can be configured separately. The threshold values should be defined according to your actual requirements and performance indicators: some people consider 5 seconds slow, while others find 3 seconds unacceptable. Let's take 3 seconds as an example:

PUT /_all/_settings
{
  "index.search.slowlog.threshold.query.warn": "3s",
  "index.search.slowlog.threshold.query.info": "2s",
  "index.search.slowlog.threshold.query.debug": "1s",
  "index.search.slowlog.threshold.query.trace": "500ms",
  "index.search.slowlog.threshold.fetch.warn": "1s",
  "index.search.slowlog.threshold.fetch.info": "800ms",
  "index.search.slowlog.threshold.fetch.debug": "500ms",
  "index.search.slowlog.threshold.fetch.trace": "200ms",
  "index.indexing.slowlog.threshold.index.warn": "3s",
  "index.indexing.slowlog.threshold.index.info": "2s",
  "index.indexing.slowlog.threshold.index.debug": "1s",
  "index.indexing.slowlog.threshold.index.trace": "500ms",
  "index.indexing.slowlog.level": "info",
  "index.indexing.slowlog.source": "1000"
}

The three groups of settings define slow-log thresholds for the query phase, the fetch phase, and index (write) operations respectively. _all makes the settings effective for all indices; a specific index name can be used instead.

Then add the following configuration to the log4j2.properties file:

# Query operations slow log output
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = $$$_index_search_slowlog.log
appender.index_search_slowlog_rolling.layout.type = PatternLayout
appender.index_search_slowlog_rolling.layout.pattern = [%d][%-5p][%-25c] %.10000m%n
appender.index_search_slowlog_rolling.filePattern = $$$_index_search_slowlog-%d.log
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.time.interval = 1
appender.index_search_slowlog_rolling.policies.time.modulate = true

logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

# Index operations slow log output
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName = $$$_index_indexing_slowlog.log
appender.index_indexing_slowlog_rolling.layout.type = PatternLayout
appender.index_indexing_slowlog_rolling.layout.pattern = [%d][%-5p][%-25c] %marker%.10000m%n
appender.index_indexing_slowlog_rolling.filePattern = $$$_index_indexing_slowlog-%d.log
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.time.interval = 1
appender.index_indexing_slowlog_rolling.policies.time.modulate = true

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false

After restarting the Elasticsearch instance, you will see the two generated log files in the /home/esuser/esdata/log directory.

Recommended Optimization Practices

Basic usage specifications
  1. Search results should not return a large result set

An oversized result set consumes a lot of IO resources and bandwidth, and it certainly will not be fast. Elasticsearch is a search engine, and a good search engine serves precise or near-precise queries: what matters most is the few top-ranked results, not all results. Optimizing the search conditions and limiting the number of results returned is a prerequisite for high performance.

If you really do need to query a large amount of data, the scroll API is recommended.
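As a sketch, a scroll search looks roughly like the following (the index name my_index and the parameter values are illustrative, not from the original article):

```
GET /my_index/_search?scroll=1m
{
  "size": 1000,
  "query": { "match_all": {} }
}

# Each response returns a scroll_id; subsequent pages are fetched with it:
GET /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<scroll_id returned by the previous request>"
}
```

Repeating the second request pages through the full result set without ever materializing it in one response.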

  2. Avoid oversized documents

The default value of http.max_content_length is 100 MB. A single document written to Elasticsearch cannot exceed this size, or ES will refuse the write. Although you can raise this setting, it is not recommended; the underlying Lucene engine still has a hard limit of about 2 GB per document.

Oversized documents consume a lot of resources and are not recommended in any case. If the business genuinely involves very large content, such as searching the text of a book, it is recommended to split the storage by chapter and paragraph.
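For reference, the limit lives in elasticsearch.yml; the value shown is simply the default, and raising it is, again, not recommended:

```yaml
# elasticsearch.yml -- maximum size of an HTTP request body (default 100mb)
http.max_content_length: 100mb
```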

  3. Avoid sparse data

Document design fundamentally affects index performance. Sparse data is a typical bad design: it wastes storage space and hurts read and write performance.

Here are some suggestions for document structure design:

  • Avoid writing unrelated data to the same index

Unrelated data usually means differently structured data, and forcing it into the same index produces very sparse index data. It is recommended to put such data into separate indices.

  • Normalize the document structure

Keep document structures and field naming as uniform as possible. For example, when creating a time field, do not use timestamp in some documents and create_time in others; pick one and stick with it.

  • Disable norms and doc_values for some fields

If a field does not need relevance scoring, norms can be disabled for it; if a field does not need sorting or aggregation, doc_values can be disabled.
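A minimal mapping sketch (index, type, and field names are hypothetical) showing both switches:

```
PUT /my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "content": { "type": "text", "norms": false },
        "trace_id": { "type": "keyword", "doc_values": false }
      }
    }
  }
}
```

Here content is searchable but not length-normalized for scoring, and trace_id is searchable but can no longer be sorted or aggregated on.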

Server Level

Hardware resources are the hard core of performance: good hardware means a high starting point.

  1. Use faster hardware resources

Within the budget, use SSDs instead of mechanical hard disks;

Make the CPU clock speed and core count as strong as the budget allows;

Cap a single machine at about 64 GB of memory, and then add machines until the budget runs out;

Prefer local storage over network storage such as NFS; hard drives are cheap anyway.

  2. Give the filesystem cache more memory

Elasticsearch's search performance relies heavily on the underlying filesystem cache; if all the data fits in the filesystem cache, searches essentially complete within seconds.

Given practical constraints, it is best for your machines' memory to hold at least half of the total data.

There are two ways to achieve this. One is to spend money: buy more machines and more memory. The other is to slim down the documents: store only the fields that need to be searched in ES, so the filesystem cache can hold more documents and memory utilization improves. The remaining fields can be kept in Redis/MySQL/HBase/Hadoop and loaded in a second step.

  3. Prevent memory from being swapped out

Swapping should be forbidden: if the ES JVM's memory is swapped out to disk and then swapped back in, it causes heavy disk IO and poor performance.
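One common way to do this (alongside disabling swap at the OS level) is to have Elasticsearch lock its heap in RAM:

```yaml
# elasticsearch.yml -- ask the JVM to lock its memory so the OS cannot swap it out
bootstrap.memory_lock: true
```

Note that the OS must also allow the elasticsearch user to lock memory (e.g. via ulimit/memlock limits), or the setting will fail at startup.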

Elasticsearch Level
  1. Index buffer

In high-concurrency write scenarios, the index buffer can be enlarged via indices.memory.index_buffer_size. The default is 10% of the JVM heap. This buffer is shared by all shards on the node: dividing it by the number of shards gives the average memory available per shard, and it is generally recommended to give each shard at most 512 MB.
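As a sketch, the setting goes in elasticsearch.yml; 20% is an illustrative value, not a recommendation from the original article:

```yaml
# elasticsearch.yml -- share of JVM heap used for the indexing buffer (default 10%)
indices.memory.index_buffer_size: 20%
```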

  2. Disable the _all field

The _all field indexes all the field values in a document, which takes up a lot of disk space but is rarely useful in practice. It is best to disable the _all field in production.
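For indices on versions where _all is still enabled by default (before 6.0; from 6.0 onward it is deprecated and disabled by default), it can be switched off in the mapping. A sketch, with hypothetical index and type names:

```
PUT /my_index
{
  "mappings": {
    "my_type": {
      "_all": { "enabled": false }
    }
  }
}
```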

  3. Use best_compression

The _source field and other stored fields occupy a lot of disk space; using best_compression to compress them is recommended.
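The codec is an index setting that must be set at index creation time (or while the index is closed); it trades some CPU for disk space. A sketch with a hypothetical index name:

```
PUT /my_index
{
  "settings": {
    "index.codec": "best_compression"
  }
}
```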

  4. Use the smallest appropriate numeric type

ES supports the numeric types byte, short, integer, and long. If the smallest type fits the data, use it to save disk space.
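For example (field names hypothetical), an age never exceeds the range of a byte, while a counter may genuinely need a long:

```
PUT /my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "age":        { "type": "byte" },
        "view_count": { "type": "long" }
      }
    }
  }
}
```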

  5. Disable unnecessary features

For fields that need aggregation and sorting, a forward index (doc values) is built; for fields that need to be searched, an inverted index is built. For fields whose relevance score does not matter, norms can be disabled; for fields that do not need phrase or proximity queries, position information can be disabled.
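A mapping sketch (names hypothetical) disabling both: norms turns off scoring normalization data, and index_options set to freqs stops indexing positions, so phrase queries on that field will no longer work:

```
PUT /my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "tags": { "type": "text", "norms": false, "index_options": "freqs" }
      }
    }
  }
}
```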

  6. Do not rely on the default dynamic string mapping

By default, dynamic mapping maps a string field to both a text type and a keyword sub-field. In most cases only one of them is needed, and the other wastes disk space. For example, an id field may only need keyword, while a body field may only need text.

So keyword and text should be clearly distinguished at design time rather than both being kept by default.
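An explicit mapping sketch (names hypothetical) that declares each string field once, instead of accepting the dynamic text-plus-keyword pair:

```
PUT /my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "id":   { "type": "keyword" },
        "body": { "type": "text" }
      }
    }
  }
}
```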

  7. Preheat the filesystem cache

After an Elasticsearch restart, the filesystem cache is empty, and every query has to load data into it from disk. We can therefore run some queries in advance to load commonly accessed data into memory, so that when real users arrive, the data is already in memory and responses are fast.
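A warm-up could be as simple as replaying a few representative hot queries after startup; the index, field, and value below are hypothetical:

```
# Run after restart to pull hot segments into the filesystem cache
GET /my_index/_search
{
  "size": 100,
  "query": { "term": { "status": "published" } }
}
```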

Code Development Level
  1. Use bulk for writing

When using Java as the client, write operations should all go through the bulk API.
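Whatever the client, a bulk request ultimately becomes a _bulk call whose body is newline-delimited JSON: one action line, then one source line, and a trailing newline at the end. A sketch with hypothetical index and field names:

```
POST /_bulk
{ "index": { "_index": "my_index", "_type": "_doc" } }
{ "title": "doc 1", "create_time": "2020-06-23T00:00:00Z" }
{ "index": { "_index": "my_index", "_type": "_doc" } }
{ "title": "doc 2", "create_time": "2020-06-23T00:00:00Z" }
```

Batch sizes need tuning per workload; a few MB per request is a common starting point.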

  2. Write data using multiple threads

  3. Use automatically generated document IDs

When an ID is set manually, ES has to check whether that ID already exists on every write, which is time-consuming. With automatically generated IDs, ES can skip this check and write performance is better.

IDs from relational database tables can simply be stored as a regular field of the ES document.

  4. Pay attention to document structure design

This is of paramount importance for business development: a good document structure leads to excellent performance.

  5. Avoid scripting

  6. Take full advantage of caching

For time-based queries, instead of using the now function, convert the time to a canonical format on the client side before querying Elasticsearch, which improves cache utilization.
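The reason is that now evaluates differently on every request, so otherwise-identical queries can never share cached results. A sketch with hypothetical names:

```
# Not cache-friendly: "now" changes on every request
GET /my_index/_search
{ "query": { "range": { "create_time": { "gte": "now-1d" } } } }

# Cache-friendly: a rounded timestamp computed by the client repeats across requests
GET /my_index/_search
{ "query": { "range": { "create_time": { "gte": "2020-06-22T00:00:00Z" } } } }
```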

Summary

This article introduced common practices for Elasticsearch performance tuning, from the server and instance levels down to the code level. These can serve as references, but there is no universal recipe for performance tuning; it requires repeated validation. Thank you.

23 June 2020, 19:58
