Implementation of a distributed lock with ZooKeeper

Choosing a ZooKeeper client
  • The native ZooKeeper client has a number of problems, such as one-shot watchers and no reconnection mechanism after a session timeout
  • ZkClient solves some of the native client's problems and is still used in some existing legacy systems
  • Curator provides recipes for many application scenarios (it encapsulates distributed locks, counters, etc.) and is the preferred choice for new projects
Distributed lock usage scenario

In a single-node application, an in-JVM lock is enough. In a microservice or distributed environment, however, the same service may be deployed on multiple servers, and threads running in different JVMs cannot synchronize through an ordinary JVM lock, so a distributed lock is needed to acquire and release the lock across processes. For example, in an order service that generates order numbers from the current date and time, two instances may generate numbers at the same moment and produce duplicate order numbers.
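As an illustration (the class and number format are hypothetical, not from the order service above), the generator below is thread-safe within one JVM thanks to synchronized, yet two service instances can still emit the same order number, which is exactly the gap a distributed lock closes:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class OrderNumberGenerator {
    private static final DateTimeFormatter FORMAT =
            DateTimeFormatter.ofPattern("yyyyMMddHHmmss");
    private int counter = 0;

    // synchronized only coordinates threads inside THIS JVM; a second
    // service instance has its own counter and clock, so duplicates
    // across instances remain possible.
    public synchronized String next() {
        counter = (counter + 1) % 1000;
        return LocalDateTime.now().format(FORMAT) + String.format("%03d", counter);
    }
}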

Implementation principle of the ZooKeeper distributed lock
  • ZooKeeper guarantees that two clients cannot create the same node at the same time; we can use this feature to implement a distributed lock. A ZooKeeper ephemeral node exists only for the lifetime of the client's session and is removed automatically when the session ends.
  • The watcher mechanism can notify blocked clients when the node representing the lock resource is deleted, so they unblock and try to reacquire the lock. This is a major advantage of the ZooKeeper distributed lock over other distributed lock schemes. A minimal sketch of the create-exclusivity guarantee follows this list.
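To make the first point concrete, here is a minimal sketch against the native ZooKeeper API (the connection string and the /demoLock path are placeholders): the second create of the same ephemeral znode fails with NodeExistsException, which is the primitive both schemes below build on.

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class CreateExclusivityDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; the watcher is a no-op for brevity
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 8000, event -> {});
        zk.create("/demoLock", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        try {
            // A second create of the same path fails: only one holder at a time
            zk.create("/demoLock", new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
        } catch (KeeperException.NodeExistsException e) {
            System.out.println("Lock already held: " + e.getPath());
        } finally {
            zk.close(); // session ends, the ephemeral node is removed automatically
        }
    }
}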
1. Scheme based on ephemeral nodes

The first scheme is simple: whichever client creates the node successfully holds the lock, and the others block. Suppose thread A acquires the lock first; thread B fails to create the node, blocks, and registers a watcher on /lockPath. When thread A finishes its work, it deletes the node, which triggers the listener; thread B then unblocks and tries to acquire the lock again.

We model the design on the JDK's native Lock interface and use the template method design pattern to write the distributed lock. The advantage is extensibility: we can quickly switch to a Redis-based, database-based, or other distributed lock implementation.

First, create a Lock interface:

public interface Lock {
    /**
     * Acquire lock
     */
    void getLock() throws Exception;
    /**
     * Release lock
     */
    void unlock() throws Exception;
}

AbstractTemplateLock abstract class:

public abstract class AbstractTemplateLock implements Lock {
    @Override
    public void getLock() {
        if (tryLock()) {
            System.out.println(Thread.currentThread().getName() + " lock acquired successfully");
        } else {
            // Wait: block until the watcher reports that the lock node was deleted
            waitLock();
            // Then try to acquire the lock again
            getLock();
        }
    }
    protected abstract void waitLock();
    protected abstract boolean tryLock();
    protected abstract void releaseLock();
    @Override
    public void unlock() {
        releaseLock();
    }
}

ZooKeeper distributed lock logic, based on ZkClient:

import lombok.extern.slf4j.Slf4j;
import org.I0Itec.zkclient.IZkDataListener;
import org.I0Itec.zkclient.ZkClient;

import java.util.concurrent.CountDownLatch;

@Slf4j
public class ZkTemplateLock extends AbstractTemplateLock {
    private static final String zkServers = "127.0.0.1:2181";
    private static final int sessionTimeout = 8000;
    private static final int connectionTimeout = 5000;
    private static final String lockPath = "/lockPath";
    private ZkClient client;

    public ZkTemplateLock() {
        client = new ZkClient(zkServers, sessionTimeout, connectionTimeout);
        log.info("zk client connected successfully: {}", zkServers);
    }
    @Override
    protected void waitLock() {
        CountDownLatch latch = new CountDownLatch(1);
        IZkDataListener listener = new IZkDataListener() {
            @Override
            public void handleDataDeleted(String dataPath) throws Exception {
                System.out.println("Detected deletion of the lock node");
                latch.countDown();
            }
            @Override
            public void handleDataChange(String dataPath, Object data) throws Exception {}
        };
        // Register the deletion watcher on the lock node
        client.subscribeDataChanges(lockPath, listener);
        // Block until the lock node is deleted
        if (client.exists(lockPath)) {
            try {
                latch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        // Unregister the watcher
        client.unsubscribeDataChanges(lockPath, listener);
    }
    @Override
    protected boolean tryLock() {
        try {
            // Creation succeeds for exactly one client at a time
            client.createEphemeral(lockPath);
            System.out.println(Thread.currentThread().getName() + " acquired the lock");
        } catch (Exception e) {
            log.error("Failed to create the lock node", e);
            return false;
        }
        return true;
    }
    @Override
    public void releaseLock() {
        client.delete(lockPath);
    }
}
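As a usage illustration, here is a minimal contention sketch; the demo class, thread count, and simulated work are assumptions for this article, not the project's test class (linked at the end):

public class ZkTemplateLockDemo {
    public static void main(String[] args) {
        // Five threads contend for the same ZooKeeper lock; each thread
        // opens its own ZkClient session via its own ZkTemplateLock.
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                Lock lock = new ZkTemplateLock();
                try {
                    lock.getLock();
                    Thread.sleep(100); // simulated critical section
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    try {
                        lock.unlock();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }, "worker-" + i).start();
        }
    }
}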

Disadvantages:

Only one thread gets the lock in each round of contention. When the number of threads is large, a "thundering herd" (herd effect) occurs, and the ZooKeeper node may slow down or even go down. This is because every thread that fails to get the lock watches the /lockPath node; when thread A releases the lock, a large number of threads unblock at the same time and scramble for it. This is very resource-intensive and performance drops sharply.

2. Scheme based on ephemeral sequential nodes

An ephemeral sequential node differs from a plain ephemeral node in that the created nodes are ordered: ZooKeeper appends a monotonically increasing sequence number to each node name. We can use this feature so that each thread watches only the node immediately preceding its own, and on each lock attempt checks whether its own sequence number is the smallest. The holder of the smallest number has the lock; when it finishes, it deletes its node, and the check of who now holds the smallest sequence number continues.

Source code of the ephemeral sequential node implementation:

import lombok.extern.slf4j.Slf4j;
import org.I0Itec.zkclient.IZkDataListener;
import org.I0Itec.zkclient.ZkClient;

import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

@Slf4j
public class ZkSequenTemplateLock extends AbstractTemplateLock {
    private static final String zkServers = "127.0.0.1:2181";
    private static final int sessionTimeout = 8000;
    private static final int connectionTimeout = 5000;
    private static final String lockPath = "/lockPath";
    private String beforePath;
    private String currentPath;
    private ZkClient client;

    public ZkSequenTemplateLock() {
        client = new ZkClient(zkServers);
        // The parent node must be persistent; the sequential children are ephemeral
        if (!client.exists(lockPath)) {
            client.createPersistent(lockPath);
        }
        log.info("zk client connected successfully: {}", zkServers);
    }

    @Override
    protected void waitLock() {
        CountDownLatch latch = new CountDownLatch(1);
        IZkDataListener listener = new IZkDataListener() {
            @Override
            public void handleDataDeleted(String dataPath) throws Exception {
                System.out.println("Detected deletion of the watched node");
                latch.countDown();
            }
            @Override
            public void handleDataChange(String dataPath, Object data) throws Exception {}
        };
        // Register a deletion watcher on the node immediately before ours
        client.subscribeDataChanges(beforePath, listener);
        // Block until that node is deleted
        if (client.exists(beforePath)) {
            try {
                System.out.println("blocking on " + currentPath);
                latch.await();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        // Unregister the watcher
        client.unsubscribeDataChanges(beforePath, listener);
    }
    @Override
    protected boolean tryLock() {
        if (currentPath == null) {
            // Create an ephemeral sequential node under the lock path
            currentPath = client.createEphemeralSequential(lockPath + "/", "lock-data");
            System.out.println("current: " + currentPath);
        }
        // Get all children; their names end in a monotonically increasing sequence number
        List<String> children = client.getChildren(lockPath);
        // Sort the list in natural order
        Collections.sort(children);
        if (currentPath.equals(lockPath + "/" + children.get(0))) {
            // Our node has the smallest sequence number: the lock is ours
            return true;
        } else {
            // Otherwise find the node immediately before ours and remember it in beforePath
            int curIndex = children.indexOf(currentPath.substring(lockPath.length() + 1));
            beforePath = lockPath + "/" + children.get(curIndex - 1);
        }
        System.out.println("beforePath: " + beforePath);
        return false;
    }
    @Override
    public void releaseLock() {
        System.out.println("delete: " + currentPath);
        client.delete(currentPath);
    }
}
Curator distributed lock tools

Curator provides the following types of locks:

  • Shared Reentrant Lock: a globally synchronized lock; no two clients hold it at the same time
  • Shared Lock: similar to the Shared Reentrant Lock, but not reentrant (which can sometimes cause deadlock)
  • Shared Reentrant Read/Write Lock
  • Shared Semaphore
  • Multi Shared Lock: a container entity that manages multiple locks

We use InterProcessMutex, the first kind (Shared Reentrant Lock), to acquire and release the lock:
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

import java.util.concurrent.TimeUnit;

public class ZkLockWithCuratorTemplate implements Lock {
    // zk host address
    private String host = "localhost";
    // Root node under which Curator creates the lock's sequential nodes
    private String lockPath = "/curatorLock";
    // Sleep time between retries
    private static final int SLEEP_TIME_MS = 1000;
    // Maximum number of retries
    private static final int MAX_RETRIES = 1000;
    // Session timeout
    private static final int SESSION_TIMEOUT = 30 * 1000;
    // Connection timeout
    private static final int CONNECTION_TIMEOUT = 3 * 1000;
    // Curator core operation class
    private CuratorFramework curatorFramework;

    InterProcessMutex lock;

    public ZkLockWithCuratorTemplate() {
        curatorFramework = CuratorFrameworkFactory.builder()
                .connectString(host)
                .connectionTimeoutMs(CONNECTION_TIMEOUT)
                .sessionTimeoutMs(SESSION_TIMEOUT)
                .retryPolicy(new ExponentialBackoffRetry(SLEEP_TIME_MS, MAX_RETRIES))
                .build();
        curatorFramework.start();
        lock = new InterProcessMutex(curatorFramework, lockPath);
    }

    @Override
    public void getLock() throws Exception {
        // Wait at most 5 seconds to acquire the lock. acquire returns false
        // on timeout; it does NOT auto-release the lock after 5 seconds.
        if (!lock.acquire(5, TimeUnit.SECONDS)) {
            throw new IllegalStateException("Failed to acquire the lock within 5 seconds");
        }
    }

    @Override
    public void unlock() throws Exception {
        lock.release();
    }
}
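As a brief follow-up, this sketch (the class name and path are illustrative) demonstrates the reentrancy that gives the Shared Reentrant Lock its name: the same thread may call acquire() again as long as it balances every acquire with a release:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorReentrancyDemo {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        InterProcessMutex mutex = new InterProcessMutex(client, "/curatorLock");

        mutex.acquire();   // first acquisition blocks until the lock is held
        mutex.acquire();   // reentrant: the owning thread may acquire again
        System.out.println("acquired twice in " + Thread.currentThread().getName());
        mutex.release();   // each acquire must be balanced by a release
        mutex.release();   // the lock is fully released only here
        client.close();
    }
}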
Source code and test class address: https://github.com/Motianshi/distribute-tool