ZooKeeper failed to start because the database could not be read

Recently our cluster was in an abnormal state: every attempt to start ZooKeeper failed. Reading the log was genuinely depressing, a wall of output with nothing obviously wrong in it. At first I could not work out why starting ZooKeeper was so hard, but the failure was at least consistent: the service died about 5 seconds after starting. From that window it was clear the process never finished starting up and spent most of its short life still initializing.

[root@ZYC3-AQGK-LJCL-SRV05 deployer]# systemctl status zookeeper
● zookeeper.service - ZooKeeper Service
   Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-03-23 11:29:21 CST; 4s ago
     Docs: http://zookeeper.apache.org
  Process: 31011 ExecStop=/opt/zookeeper/zookeeper-prod/bin/zkServer.sh stop /opt/zookeeper/zookeeper-prod/conf/zoo.cfg (code=exited, status=0/SUCCESS)
  Process: 31129 ExecStart=/opt/zookeeper/zookeeper-prod/bin/zkServer.sh start /opt/zookeeper/zookeeper-prod/conf/zoo.cfg (code=exited, status=0/SUCCESS)
 Main PID: 31138 (java)
   CGroup: /system.slice/zookeeper.service
           └─31138 java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/zookeeper/zookeeper-prod/bin/../build/classes:/opt/zookeeper/zookeeper-prod/bin/../build/lib/*.jar:/opt/zookeeper/zoo...

Mar 23 11:29:20 ZYC3-AQGK-LJCL-SRV05 systemd[1]: Starting ZooKeeper Service...
Mar 23 11:29:20 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31129]: ZooKeeper JMX enabled by default
Mar 23 11:29:20 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31129]: Using config: /opt/zookeeper/zookeeper-prod/conf/zoo.cfg
Mar 23 11:29:21 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31129]: Starting zookeeper ... STARTED
Mar 23 11:29:21 ZYC3-AQGK-LJCL-SRV05 systemd[1]: Started ZooKeeper Service.
[root@ZYC3-AQGK-LJCL-SRV05 deployer]# systemctl status zookeeper
● zookeeper.service - ZooKeeper Service
   Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-03-23 11:29:21 CST; 5s ago
     Docs: http://zookeeper.apache.org
  Process: 31011 ExecStop=/opt/zookeeper/zookeeper-prod/bin/zkServer.sh stop /opt/zookeeper/zookeeper-prod/conf/zoo.cfg (code=exited, status=0/SUCCESS)
  Process: 31129 ExecStart=/opt/zookeeper/zookeeper-prod/bin/zkServer.sh start /opt/zookeeper/zookeeper-prod/conf/zoo.cfg (code=exited, status=0/SUCCESS)
 Main PID: 31138 (java)
   CGroup: /system.slice/zookeeper.service
           └─31138 java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/zookeeper/zookeeper-prod/bin/../build/classes:/opt/zookeeper/zookeeper-prod/bin/../build/lib/*.jar:/opt/zookeeper/zoo...

Mar 23 11:29:20 ZYC3-AQGK-LJCL-SRV05 systemd[1]: Starting ZooKeeper Service...
Mar 23 11:29:20 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31129]: ZooKeeper JMX enabled by default
Mar 23 11:29:20 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31129]: Using config: /opt/zookeeper/zookeeper-prod/conf/zoo.cfg
Mar 23 11:29:21 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31129]: Starting zookeeper ... STARTED
Mar 23 11:29:21 ZYC3-AQGK-LJCL-SRV05 systemd[1]: Started ZooKeeper Service.
[root@ZYC3-AQGK-LJCL-SRV05 deployer]# systemctl status zookeeper
● zookeeper.service - ZooKeeper Service
   Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Mon 2020-03-23 11:29:26 CST; 706ms ago
     Docs: http://zookeeper.apache.org
  Process: 31225 ExecStop=/opt/zookeeper/zookeeper-prod/bin/zkServer.sh stop /opt/zookeeper/zookeeper-prod/conf/zoo.cfg (code=exited, status=0/SUCCESS)
  Process: 31129 ExecStart=/opt/zookeeper/zookeeper-prod/bin/zkServer.sh start /opt/zookeeper/zookeeper-prod/conf/zoo.cfg (code=exited, status=0/SUCCESS)
 Main PID: 31138 (code=exited, status=1/FAILURE)

Mar 23 11:29:20 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31129]: Using config: /opt/zookeeper/zookeeper-prod/conf/zoo.cfg
Mar 23 11:29:21 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31129]: Starting zookeeper ... STARTED
Mar 23 11:29:21 ZYC3-AQGK-LJCL-SRV05 systemd[1]: Started ZooKeeper Service.
Mar 23 11:29:26 ZYC3-AQGK-LJCL-SRV05 systemd[1]: zookeeper.service: main process exited, code=exited, status=1/FAILURE
Mar 23 11:29:26 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31225]: ZooKeeper JMX enabled by default
Mar 23 11:29:26 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31225]: Using config: /opt/zookeeper/zookeeper-prod/conf/zoo.cfg
Mar 23 11:29:26 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31225]: Stopping zookeeper ... /opt/zookeeper/zookeeper-prod/bin/zkServer.sh: line 182: kill: (31138) - No such process
Mar 23 11:29:26 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[31225]: STOPPED
Mar 23 11:29:26 ZYC3-AQGK-LJCL-SRV05 systemd[1]: Unit zookeeper.service entered failed state.
Mar 23 11:29:26 ZYC3-AQGK-LJCL-SRV05 systemd[1]: zookeeper.service failed.

The log mentions /opt/zookeeper/zookeeper-prod/conf/zoo.cfg, so that installation directory seemed worth digging into. After all, conf is presumably the configuration folder, and next to it there ought to be something like a log or output directory holding the runtime logs, in particular the error log. Troubleshooting along those lines, I found a zookeeper.out file in /opt/zookeeper/zookeeper-prod/bin. That file holds the execution details, and one cat later the problem was perfectly clear:
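For future incidents this hunt can be scripted. A minimal sketch (the helper name is my own, not part of ZooKeeper): zkServer.sh redirects its output to zookeeper.out under ZOO_LOG_DIR, which in older releases defaults to the directory the script is launched from, so bin/ is the first place to look.

```shell
#!/bin/sh
# zk_debug_paths: print the two things worth reading when startup fails.
# Helper name and layout assumptions are mine; adjust the path to your install.
zk_debug_paths() {
    zk_home=$1
    # dataDir in zoo.cfg says where snapshots and transaction logs live
    grep '^dataDir' "$zk_home/conf/zoo.cfg"
    # zkServer.sh writes its stdout/stderr here by default
    echo "$zk_home/bin/zookeeper.out"
}

# On this host: zk_debug_paths /opt/zookeeper/zookeeper-prod
```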

2020-03-23 11:36:58,799 [myid:] - INFO  [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper/zookeeper-prod/bin/../conf/zoo.cfg
2020-03-23 11:36:58,814 [myid:] - INFO  [main:QuorumPeer$QuorumServer@184] - Resolved hostname: 10.153.115.26 to address: /10.153.115.26
2020-03-23 11:36:58,815 [myid:] - INFO  [main:QuorumPeer$QuorumServer@184] - Resolved hostname: 10.153.115.25 to address: /10.153.115.25
2020-03-23 11:36:58,816 [myid:] - INFO  [main:QuorumPeer$QuorumServer@184] - Resolved hostname: 10.153.115.24 to address: /10.153.115.24
2020-03-23 11:36:58,816 [myid:] - INFO  [main:QuorumPeer$QuorumServer@184] - Resolved hostname: 10.153.115.29 to address: /10.153.115.29
2020-03-23 11:36:58,816 [myid:] - INFO  [main:QuorumPeer$QuorumServer@184] - Resolved hostname: 10.153.115.28 to address: /10.153.115.28
2020-03-23 11:36:58,816 [myid:] - INFO  [main:QuorumPeer$QuorumServer@184] - Resolved hostname: 10.153.115.27 to address: /10.153.115.27
2020-03-23 11:36:58,816 [myid:] - WARN  [main:QuorumPeerConfig@354] - Non-optimial configuration, consider an odd number of servers.
2020-03-23 11:36:58,816 [myid:] - INFO  [main:QuorumPeerConfig@398] - Defaulting to majority quorums
2020-03-23 11:36:58,821 [myid:5] - INFO  [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3
2020-03-23 11:36:58,821 [myid:5] - INFO  [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 24
2020-03-23 11:36:58,822 [myid:5] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started.
2020-03-23 11:36:58,837 [myid:5] - INFO  [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed.
2020-03-23 11:36:58,839 [myid:5] - INFO  [main:QuorumPeerMain@130] - Starting quorum peer
2020-03-23 11:36:58,849 [myid:5] - INFO  [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory
2020-03-23 11:36:58,856 [myid:5] - INFO  [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181
2020-03-23 11:36:58,861 [myid:5] - INFO  [main:QuorumPeer@1158] - tickTime set to 2000
2020-03-23 11:36:58,861 [myid:5] - INFO  [main:QuorumPeer@1204] - initLimit set to 10
2020-03-23 11:36:58,861 [myid:5] - INFO  [main:QuorumPeer@1178] - minSessionTimeout set to -1
2020-03-23 11:36:58,862 [myid:5] - INFO  [main:QuorumPeer@1189] - maxSessionTimeout set to -1
2020-03-23 11:36:58,871 [myid:5] - INFO  [main:QuorumPeer@1467] - QuorumPeer communication is not secured!
2020-03-23 11:36:58,871 [myid:5] - INFO  [main:QuorumPeer@1496] - quorum.cnxn.threads.size set to 20
2020-03-23 11:36:58,872 [myid:5] - INFO  [main:FileSnap@86] - Reading snapshot /data/zookeeper/data/version-2/snapshot.b91d0000003c
2020-03-23 11:36:59,290 [myid:5] - ERROR [main:QuorumPeer@692] - Unable to load database on disk
java.io.IOException: The accepted epoch, ba86 is less than the current epoch, ba87
    at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:689)
    at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:635)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:170)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:114)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)
2020-03-23 11:36:59,292 [myid:5] - ERROR [main:QuorumPeerMain@92] - Unexpected exception, exiting abnormally
java.lang.RuntimeException: Unable to run quorum server 
    at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:693)
    at org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:635)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:170)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:114)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:81)
Caused by: java.io.IOException: The accepted epoch, ba86 is less than the current epoch, ba87
    at org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:689)
    ... 4 more
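The epochs in that message come from two small bookkeeping files that ZooKeeper keeps next to its snapshots, acceptedEpoch and currentEpoch (per my understanding of the 3.4.x on-disk layout; the exception appears to print the values in hex, while the files store plain text). Comparing them directly shows the same inconsistency the exception complains about. A sketch with an invented helper name:

```shell
#!/bin/sh
# show_epochs: print ZooKeeper's two epoch bookkeeping files.
# File names assume the 3.4.x data directory layout.
show_epochs() {
    version_dir=$1
    echo "acceptedEpoch: $(cat "$version_dir/acceptedEpoch")"
    echo "currentEpoch:  $(cat "$version_dir/currentEpoch")"
}

# On this host: show_epochs /data/zookeeper/data/version-2
```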

The log shows ZooKeeper reading its snapshot and then failing with "Unable to load database on disk". So! I deleted snapshot.b91d0000003c and let ZooKeeper regenerate the snapshot file on its own. Done.

2020-03-23 11:36:58,872 [myid:5] - INFO  [main:FileSnap@86] - Reading snapshot /data/zookeeper/data/version-2/snapshot.b91d0000003c
2020-03-23 11:36:59,290 [myid:5] - ERROR [main:QuorumPeer@692] - Unable to load database on disk
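Spelled out as commands, the fix amounts to stopping the service, backing up and removing the unreadable snapshot, then starting again. A sketch (the helper is mine, and this is only safe because the other quorum members were healthy):

```shell
#!/bin/sh
# remove_snapshot: back up one snapshot file, then delete it so ZooKeeper
# rebuilds its state from the remaining quorum on the next start.
remove_snapshot() {
    datadir=$1; snap=$2
    mkdir -p "$datadir/backup"
    cp "$datadir/$snap" "$datadir/backup/$snap"   # keep a copy, just in case
    rm "$datadir/$snap"
}

# On the broken node (paths from this deployment):
#   systemctl stop zookeeper
#   remove_snapshot /data/zookeeper/data/version-2 snapshot.b91d0000003c
#   systemctl start zookeeper
```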

Nice and clean:

[root@ZYC3-AQGK-LJCL-SRV05 deployer]# systemctl status zookeeper
● zookeeper.service - ZooKeeper Service
   Loaded: loaded (/etc/systemd/system/zookeeper.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-03-23 12:12:08 CST; 5min ago
     Docs: http://zookeeper.apache.org
  Process: 25348 ExecStop=/opt/zookeeper/zookeeper-prod/bin/zkServer.sh stop /opt/zookeeper/zookeeper-prod/conf/zoo.cfg (code=exited, status=0/SUCCESS)
  Process: 25658 ExecStart=/opt/zookeeper/zookeeper-prod/bin/zkServer.sh start /opt/zookeeper/zookeeper-prod/conf/zoo.cfg (code=exited, status=0/SUCCESS)
 Main PID: 25667 (java)
   CGroup: /system.slice/zookeeper.service
           └─25667 java -Dzookeeper.log.dir=. -Dzookeeper.root.logger=INFO,CONSOLE -cp /opt/zookeeper/zookeeper-prod/bin/../build/classes:/opt/zookeeper/zookeeper-prod/bin/../build/lib/*.jar:/opt/zookeeper/zoo...

Mar 23 12:12:07 ZYC3-AQGK-LJCL-SRV05 systemd[1]: Starting ZooKeeper Service...
Mar 23 12:12:07 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[25658]: ZooKeeper JMX enabled by default
Mar 23 12:12:07 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[25658]: Using config: /opt/zookeeper/zookeeper-prod/conf/zoo.cfg
Mar 23 12:12:08 ZYC3-AQGK-LJCL-SRV05 zkServer.sh[25658]: Starting zookeeper ... STARTED
Mar 23 12:12:08 ZYC3-AQGK-LJCL-SRV05 systemd[1]: Started ZooKeeper Service.

Every ZooKeeper run produces snapshot files, which are used for state recovery. The disk on this host had filled up earlier, so ZooKeeper could not write out its data in time, and then the machine was restarted; that is most likely when the snapshot write failed, leaving behind a file that could not be read back. Note that deleting a snapshot is not a cure-all and needs case-by-case analysis: deleting snapshot files can make it very hard to recover the database state later. Fortunately our ensemble has 6 nodes and the other five were healthy, so the cleaned node could simply resync from the quorum, which is what made this operation acceptable.
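Since a full disk was the trigger, a cheap pre-flight check before (re)starting ZooKeeper can catch this earlier next time. A sketch with an invented helper name:

```shell
#!/bin/sh
# datadir_has_space: succeed only if the partition holding dir has at least
# min_free_kb kilobytes free (POSIX df -P keeps the output parseable).
datadir_has_space() {
    dir=$1; min_free_kb=$2
    free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
    [ "$free_kb" -ge "$min_free_kb" ]
}

# e.g. require ~1 GiB free on the snapshot partition before starting:
#   datadir_has_space /data/zookeeper/data 1048576 || echo "disk too full"
```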
And with that, the job is done :)


Posted on Mon, 23 Mar 2020 22:27:12 -0400 by brianlange