Solution: Elasticsearch in a virtual machine cannot be accessed from the host

Today I hit a problem: Elasticsearch running in a Linux virtual machine could not be accessed from the host.

System version: CentOS 7

Elasticsearch version: 7.5.2 (the latest at the time of writing)

JDK version: JDK 13

Note: it seems 7.5.2 requires at least JDK 11.
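A quick way to check what the VM is actually running (as far as I know, 7.x also ships with a bundled JDK, so this is just what I had installed):

java -version
# should print something like: openjdk version "13.0.1" ...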

First thought: can the two machines even ping each other?

I pinged in both directions. Both went through, so that wasn't it.
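Something like this, with placeholder addresses standing in for my host and VM IPs:

# from the host to the VM
ping -c 3 192.168.56.101
# from the VM back to the host
ping -c 3 192.168.56.1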

Second thought: maybe port 9200 hasn't been opened?

It was indeed closed, but opening it didn't help. Posts online said to open port 80 as well; I tried, and the virtual machine kept doing its own thing. In the end I shut the firewall down entirely and restarted ES, and it still didn't work.
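For anyone on the same CentOS 7 setup, the firewalld steps look roughly like this (the port 80 detour is left out, and stopping the firewall outright is for test boxes only):

# open port 9200 permanently and reload the rules
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --reload
# what I ended up doing: stop firewalld altogether
systemctl stop firewalld
# then restart ES from its home directory, as a non-root user
./bin/elasticsearch -d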

Finally I found a post online explaining that there are three kinds of address settings in the ES configuration file:

1. network.host: the internal address. You can set it to 127.0.0.1 (which seems to be the default).

2. network.publish_host: the publish address; this one is used only for cluster communication.

3. Here's the key point: network.bind_host is the one we need to change. Setting it lets ES bind multiple IPs, so we can cover both the internal and external addresses: clients on the same cloud platform come in over the internal IP, and clients on other cloud platforms over the external IP.

So, in the end, I set network.bind_host: 0.0.0.0, as sketched below.
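Putting the three settings side by side, a minimal sketch of the Network section (the publish address is a placeholder, not from my setup):

network.host: 127.0.0.1             # internal address; apparently the default
network.publish_host: 192.168.1.10  # advertised to other cluster nodes (placeholder IP)
network.bind_host: 0.0.0.0          # bind every interface, internal and external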

The details are here: https://blog.csdn.net/totally123/article/details/79569537

Friendly reminder: in yml format there must be a space after the colon!
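That is, taking the line from above:

network.bind_host: 0.0.0.0    # good: a space after the colon
network.bind_host:0.0.0.0     # bad: no space; ES fails on startup with a parsing error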

Oh, one more thing: after setting this, ES reported

ERROR: [1] bootstrap checks failed
[1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

In plain terms: the bootstrap check fails because, with the default discovery settings, none of discovery.seed_hosts, discovery.seed_providers, or cluster.initial_master_nodes is configured, and those defaults are unsuitable for production use.

So I set the cluster.initial_master_nodes property in the yml configuration file to the current node name.
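In my case that meant these two lines in elasticsearch.yml agree with each other:

node.name: hadoop-1
cluster.initial_master_nodes: ["hadoop-1"]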

With that, ES could finally be accessed from the host.
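A quick check from the host (placeholder VM IP):

curl http://192.168.56.101:9200
# expect the JSON banner: cluster_name, node name, version 7.5.2, "You Know, for Search"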

Thinking it over, I'll also paste the contents of my configuration file below. If any of the experts out there spot a mistake, please do point it out!

# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#Cluster name
cluster.name: bigdata
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#Node name
node.name: hadoop-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
path.data: /home/es/data/elastic
#
# Path to log files:
#
#path.logs: /path/to/logs
path.logs: /home/es/logs/elastic
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#Memory locking keeps the memory of ES from being swapped out; as I understand it, it stops that memory being used for anything else
#Usually enabled in production; for a test setup it doesn't really matter
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
#network.host: 192.168.0.1
#Internal address
network.host: ES01
#Publish address
#network.publish_host: 
#Bind address
network.bind_host: 0.0.0.0

# Set a custom port for HTTP:
#
#http.port: 9200
#HTTP port; defaults to 9200 if unset. When several ES nodes run on the same machine,
#the first one takes 9200, the next 9201, and so on
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#Initial master node
cluster.initial_master_nodes: ["hadoop-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
# Allow other tools and plug-ins to access ES across origins? This property is not in the
# default 7.5.2 config; I added it myself later (a quick curl check appears after this listing)
http.cors.enabled: true
#The asterisk, as I understand it, means any origin (any plug-in) is allowed
http.cors.allow-origin: "*"
#
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#Blocks recovery after a full cluster restart until this many nodes have started
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
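One extra check for the http.cors settings above (placeholder origin and VM IP); with allow-origin "*", the response headers should carry the matching CORS header:

curl -i -H "Origin: http://example.com" http://192.168.56.101:9200
# look for: access-control-allow-origin: *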


Tags: network Hadoop Linux CentOS

Posted on Sun, 02 Feb 2020 03:51:08 -0500 by zrobin01