Reader's Letter | Right after the HBase cluster was built, starting Phoenix for the first time brought HBase down completely. What was the cause? (Resolved)

Foreword: Some friends have discussed problems with me before, and I think those problems are very valuable, so I want to open a question-and-answer column on this public account to make technical exchange and sharing easier. The column is called "Letter from a Reader". If you run into a problem that is hard to solve and it is beyond my own ability, I will forward it to my network and do my best to get help from experienced people. I also attach the questioner's WeChat QR code, so that this column can serve as a platform for helping each other solve problems. You are also warmly welcome to explore solutions and boldly share your views in the comments section.

Letter: Yu*chao

Ape Question

The first time Phoenix starts after the HBase cluster is built, the HBase RegionServer nodes crash completely. What is the reason?

Ape Analysis

The following error was reported:

java.sql.SQLException: ERROR 2006 (INT08): Incompatible jars detected between client and server. Ensure that phoenix.jar is put on the classpath of HBase in every region server: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered coprocessor service found for name MetaDataService in region SYSTEM.CATALOG,,1421861120199.56856673d5cff02b55b9ff5955485dba.
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5579)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3416)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3398)
    ... more
Caused by: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered coprocessor service found for name MetaDataService in region SYSTEM.CATALOG,,1421861120199.56856673d5cff02b55b9ff5955485dba.
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5579)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3416)
    ... more
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.UnknownProtocolException): org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered coprocessor service found for name MetaDataService in region SYSTEM.CATALOG,,1421861120199.56856673d5cff02b55b9ff5955485dba.
    at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5579)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3416)
    ... 14 more

Let's start by analyzing one point: HBase hangs when Phoenix starts, and Phoenix does much of its work through HBase coprocessors. Starting HBase alone clearly causes no problem; the error appears only when the coprocessor comes into play. So the fault is not on the HBase side itself, and, on top of that, the following parameter had not been set to false.

# hbase-site.xml
<property>
    <name>hbase.coprocessor.abortonerror</name>
    <value>false</value>
</property>

When we use HBase coprocessor technology, the first thing to do is set this parameter to false. What does it mean? It tells HBase not to abort when a coprocessor fails to load on top of HBase. That is clearly what we want: loading one coprocessor must not bring down the entire cluster, since hand-written coprocessor code is inevitably buggy.
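A quick, dependency-free way to sanity-check that the property really is set to false is to grep hbase-site.xml. The sketch below writes a minimal config into a temporary directory so it can run anywhere; in a real deployment the file would live under the HBase conf directory (the path here is only a sandbox stand-in).

```shell
# Sandbox: write a minimal hbase-site.xml into a temp dir and verify the setting.
# In production this check would run against $HBASE_HOME/conf/hbase-site.xml instead.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/hbase-site.xml" <<'EOF'
<configuration>
  <property>
    <name>hbase.coprocessor.abortonerror</name>
    <value>false</value>
  </property>
</configuration>
EOF

# Crude but portable: print the line after the property name and confirm the value.
grep -A1 'hbase.coprocessor.abortonerror' "$CONF_DIR/hbase-site.xml" \
  | grep -q '<value>false</value>' \
  && echo "abortonerror is false"
```

If the second grep finds nothing, the setting is missing or still true, and a failing coprocessor load will abort the RegionServer.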

After setting this parameter, we restart HBase and start Phoenix again. This time HBase stays up, but Phoenix still reports the error above. Why? See the Ape Answer below.

Ape Answer

Actually, at first glance this exception always suggests that Phoenix is incompatible with HBase, and most of the answers online say exactly that. I specifically checked the HBase version, and it really was fine. Many factors can cause this problem, version incompatibility among them, so I won't beat around the bush here; let me state directly what caused it this time.

The cause we found is that almost all of the jar packages from the Phoenix tarball had been copied into the HBase/lib directory, creating jar conflicts. The official website only requires that phoenix-[version]-server.jar be copied into the HBase/lib directory.

To install a pre-built phoenix, use these directions:

  • Download and expand the latest phoenix-[version]-bin.tar.
  • Add the phoenix-[version]-server.jar to the classpath of every HBase region server and master, and remove any previous version. An easy way to do this is to copy it into the HBase lib directory (use phoenix-core-[version].jar for Phoenix 3.x).
  • Restart HBase.
  • Add the phoenix-[version]-client.jar to the classpath of any Phoenix client.
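The steps above can be sketched as a short script. Everything here is illustrative: the version string and directory layout are assumptions, and the script runs against sandbox temp directories so nothing real is touched. On a real cluster you would run the copy on every RegionServer and master node and then restart HBase.

```shell
# Sketch of the official install steps against a sandbox directory.
# PHOENIX_VERSION and the directory layout are hypothetical placeholders.
set -eu

HBASE_HOME=$(mktemp -d)             # stand-in for the real HBase install dir
mkdir -p "$HBASE_HOME/lib"
PHOENIX_VERSION=4.14.0-HBase-1.4    # example version string

# Simulate a previously unpacked Phoenix tarball.
PHOENIX_DIR=$(mktemp -d)
touch "$PHOENIX_DIR/phoenix-$PHOENIX_VERSION-server.jar" \
      "$PHOENIX_DIR/phoenix-$PHOENIX_VERSION-client.jar"

# Simulate an old server jar left over in HBase/lib from a prior install.
touch "$HBASE_HOME/lib/phoenix-4.13.0-HBase-1.4-server.jar"

# Step 2: remove any previous Phoenix jars, then copy ONLY the server jar.
rm -f "$HBASE_HOME"/lib/phoenix-*.jar
cp "$PHOENIX_DIR/phoenix-$PHOENIX_VERSION-server.jar" "$HBASE_HOME/lib/"

# Step 3 would be: restart HBase on every node (not done in this sandbox).
# Step 4: the client jar stays on the client classpath only, never in HBase/lib.

ls "$HBASE_HOME/lib"                # exactly one phoenix jar should remain
```

The key line is the `rm -f` before the `cp`: it is the "remove any previous version" part of step 2 that was skipped in the broken install.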

A simple four-step procedure, much simpler than in older versions. So the recommendation here is to rely primarily on the official tutorials and treat online write-ups as a supplement; besides stepping into fewer pits, you will learn the real thing, which helps everything fit together. This is also an important point this article wants to emphasize: do not blindly follow the crowd, because the same symptom may have different causes.

Tags: Java HBase Apache Hadoop

Posted on Mon, 06 Apr 2020 23:53:14 -0400 by Formula