Redis (12): implementation of redis request forwarding

Catalog

  1. How to implement command forwarding in cluster mode?
  2. How to implement command forwarding in master-slave mode?
  3. How to use redis cluster?
  4. How to implement ordinary request forwarding?

Common reasons for forwarding a request are: 1) the receiving server cannot process the request itself and must pass it to the server that can; 2) a routing service selects a target instance and forwards to it in order to achieve load balancing.

In cluster mode, a request can be sent to any redis server. However, not every server will actually handle it: only the instance that owns the key's hash slot, according to the slot rules, will process the request.

So there is a situation where a request lands on a redis instance that should not handle it, and the request must be forwarded.

So, how is this forwarding done?


  1. How to implement command forwarding in cluster mode?

//server.c, all requests are handled uniformly in processCommand, which performs the cluster-mode check here
int processCommand(client *c) {

...
/* If cluster is enabled perform the cluster redirection here.
 * However we don't perform the redirection if:
 * 1) The sender of this command is our master.
 * 2) The command has no key arguments. */
// In cluster mode, find the redis node responsible for the key's hash slot
if (server.cluster_enabled &&
    !(c->flags & CLIENT_MASTER) &&
    !(c->flags & CLIENT_LUA &&
      server.lua_caller->flags & CLIENT_MASTER) &&
    !(c->cmd->getkeys_proc == NULL && c->cmd->firstkey == 0))
{
    int hashslot;

    if (server.cluster->state != CLUSTER_OK) {
        flagTransaction(c);
        clusterRedirectClient(c,NULL,0,CLUSTER_REDIR_DOWN_STATE);
        return C_OK;
    } else {
        int error_code;
        // Find the corresponding redis node
        clusterNode *n = getNodeByQuery(c,c->cmd,c->argv,c->argc,&hashslot,&error_code);
        // Unless the data should be handled by this node itself, this node does not serve the request; it tells the client which node to ask instead
        // That is, the redis node does not forward the request itself, but replies with a redirection hint
        // The client takes the returned information and sends the request to the indicated node for processing
        if (n == NULL || n != server.cluster->myself) {
            flagTransaction(c);
            clusterRedirectClient(c,n,hashslot,error_code);
            return C_OK;
        }
    }
}
...

}

//cluster.c, find the redis node corresponding to the key
/* Return the pointer to the cluster node that is able to serve the command.
 * For the function to succeed the command should only target either:
 *
 * 1) A single key (even multiple times like RPOPLPUSH mylist mylist).
 * 2) Multiple keys in the same hash slot, while the slot is stable (no
 *    resharding in progress).
 *
 * On success the function returns the node that is able to serve the request.
 * If the node is not 'myself' a redirection must be performed. The kind of
 * redirection is specified setting the integer passed by reference
 * 'error_code', which will be set to CLUSTER_REDIR_ASK or
 * CLUSTER_REDIR_MOVED.
 *
 * When the node is 'myself' 'error_code' is set to CLUSTER_REDIR_NONE.
 *
 * If the command fails NULL is returned, and the reason of the failure is
 * provided via 'error_code', which will be set to:
 *
 * CLUSTER_REDIR_CROSS_SLOT if the request contains multiple keys that
 * don't belong to the same hash slot.
 *
 * CLUSTER_REDIR_UNSTABLE if the request contains multiple keys
 * belonging to the same slot, but the slot is not stable (in migration or
 * importing state, likely because a resharding is in progress).
 *
 * CLUSTER_REDIR_DOWN_UNBOUND if the request addresses a slot which is
 * not bound to any node. In this case the cluster global state should be
 * already "down" but it is fragile to rely on the update of the global state,
 * so we also handle it here. */
clusterNode *getNodeByQuery(client *c, struct redisCommand *cmd, robj **argv, int argc, int *hashslot, int *error_code) {
    clusterNode *n = NULL;
    robj *firstkey = NULL;
    int multiple_keys = 0;
    multiState *ms, _ms;
    multiCmd mc;
    int i, slot = 0, migrating_slot = 0, importing_slot = 0, missing_keys = 0;

    /* Set error code optimistically for the base case. */
    if (error_code) *error_code = CLUSTER_REDIR_NONE;

    /* We handle all the cases as if they were EXEC commands, so we have
     * a common code path for everything */
    if (cmd->proc == execCommand) {
        /* If CLIENT_MULTI flag is not set EXEC is just going to return an
         * error. */
        if (!(c->flags & CLIENT_MULTI)) return myself;
        ms = &c->mstate;
    } else {
        /* In order to have a single codepath create a fake Multi State
         * structure if the client is not in MULTI/EXEC state, this way
         * we have a single codepath below. */
        ms = &_ms;
        _ms.commands = &mc;
        _ms.count = 1;
        mc.argv = argv;
        mc.argc = argc;
        mc.cmd = cmd;
    }

    /* Check that all the keys are in the same hash slot, and obtain this
     * slot and the node associated. */
    for (i = 0; i < ms->count; i++) {
    struct redisCommand *mcmd;
    robj **margv;
    int margc, *keyindex, numkeys, j;

    mcmd = ms->commands[i].cmd;
    margc = ms->commands[i].argc;
    margv = ms->commands[i].argv;
    //Obtain all keyindexes for subsequent key retrieval
    keyindex = getKeysFromCommand(mcmd,margv,margc,&numkeys);
    for (j = 0; j < numkeys; j++) {

       robj *thiskey = margv[keyindex[j]];
       // Calculation of hashSlot, crc16 algorithm
       int thisslot = keyHashSlot((char*)thiskey->ptr,
                                  sdslen(thiskey->ptr));
    
       if (firstkey == NULL) {
           /* This is the first key we see. Check what is the slot
            * and node. */
           firstkey = thiskey;
           slot = thisslot;
           n = server.cluster->slots[slot];
    
           /* Error: If a slot is not served, we are in "cluster down"
            * state. However the state is yet to be updated, so this was
            * not trapped earlier in processCommand(). Report the same
            * error to the client. */
           if (n == NULL) {
               getKeysFreeResult(keyindex);
               if (error_code)
                   *error_code = CLUSTER_REDIR_DOWN_UNBOUND;
               return NULL;
           }
    
           /* If we are migrating or importing this slot, we need to check
            * if we have all the keys in the request (the only way we
            * can safely serve the request, otherwise we return a TRYAGAIN
            * error). To do so we set the importing/migrating state and
            * increment a counter for every missing key. */
           if (n == myself &&
               server.cluster->migrating_slots_to[slot] != NULL)
           {
               migrating_slot = 1;
           } else if (server.cluster->importing_slots_from[slot] != NULL) {
               importing_slot = 1;
           }
       } else {
           /* If it is not the first key, make sure it is exactly
            * the same key as the first we saw. */
           if (!equalStringObjects(firstkey,thiskey)) {
               if (slot != thisslot) {
                   /* Error: multiple keys from different slots. */
                   getKeysFreeResult(keyindex);
                   if (error_code)
                       *error_code = CLUSTER_REDIR_CROSS_SLOT;
                   return NULL;
               } else {
                   /* Flag this request as one with multiple different
                    * keys. */
                   multiple_keys = 1;
               }
           }
       }
    
       /* Migrating / Importing slot? Count keys we don't have. */
       // Check whether the key exists in db 0; if not, count it as missing
       if ((migrating_slot || importing_slot) &&
           lookupKeyRead(&server.db[0],thiskey) == NULL)
       {
           missing_keys++;
       }

    }
    getKeysFreeResult(keyindex);
    }

    /* No key at all in command? then we can serve the request
     * without redirections or errors. */
    if (n == NULL) return myself;

    /* Return the hashslot by reference. */
    if (hashslot) *hashslot = slot;

    /* MIGRATE always works in the context of the local node if the slot
     * is open (migrating or importing state). We need to be able to freely
     * move keys among instances in this case. */
    if ((migrating_slot || importing_slot) && cmd->proc == migrateCommand)
        return myself;

    /* If we don't have all the keys and we are migrating the slot, send
     * an ASK redirection. */
    if (migrating_slot && missing_keys) {
        if (error_code) *error_code = CLUSTER_REDIR_ASK;
        return server.cluster->migrating_slots_to[slot];
    }

    /* If we are receiving the slot, and the client correctly flagged the
     * request as "ASKING", we can serve the request. However if the request
     * involves multiple keys and we don't have them all, the only option is
     * to send a TRYAGAIN error. */
    if (importing_slot &&
        (c->flags & CLIENT_ASKING || cmd->flags & CMD_ASKING))
    {
        if (multiple_keys && missing_keys) {
            if (error_code) *error_code = CLUSTER_REDIR_UNSTABLE;
            return NULL;
        } else {
            return myself;
        }
    }

    /* Handle the read-only client case reading from a slave: if this
     * node is a slave and the request is about an hash slot our master
     * is serving, we can reply without redirection. */
    if (c->flags & CLIENT_READONLY &&
        cmd->flags & CMD_READONLY &&
        nodeIsSlave(myself) &&
        myself->slaveof == n)
    {
        return myself;
    }

    /* Base case: just return the right node. However if this node is not
     * myself, set error_code to MOVED since we need to issue a redirection. */
    if (n != myself && error_code) *error_code = CLUSTER_REDIR_MOVED;

    return n;
}

//cluster.c, compute hashSlot, use crc16 algorithm
//Special syntax: in {key_with_hash}key_without_hash, only the part inside {} is hashed
/* We have 16384 hash slots. The hash slot of a given key is obtained
 * as the least significant 14 bits of the crc16 of the key.
 *
 * However if the key contains the {...} pattern, only the part between
 * { and } is hashed. This may be useful in the future to force certain
 * keys to be in the same node (assuming no resharding is in progress). */
unsigned int keyHashSlot(char *key, int keylen) {
    int s, e; /* start-end indexes of { and } */

    for (s = 0; s < keylen; s++)
        if (key[s] == '{') break;

    /* No '{' ? Hash the whole key. This is the base case. */
    if (s == keylen) return crc16(key,keylen) & 0x3FFF;

    /* '{' found? Check if we have the corresponding '}'. */
    for (e = s+1; e < keylen; e++)
        if (key[e] == '}') break;

    /* No '}' or nothing between {} ? Hash the whole key. */
    if (e == keylen || e == s+1) return crc16(key,keylen) & 0x3FFF;

    /* If we are here there is both a { and a } on its right. Hash
     * what is in the middle between { and }. */
    return crc16(key+s+1,e-s-1) & 0x3FFF;
}
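To see the same rule from the client side, here is a minimal Java sketch of the slot computation: CRC16 (XMODEM variant, polynomial 0x1021, as used by Redis Cluster) masked to the low 14 bits, with the hash-tag extraction mirroring keyHashSlot above. The `ClusterSlot` class and the key names are illustrative, not part of any library:

```java
import java.nio.charset.StandardCharsets;

public class ClusterSlot {

    // CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0,
    // the variant specified for Redis Cluster key hashing
    static int crc16(byte[] data, int off, int len) {
        int crc = 0;
        for (int i = off; i < off + len; i++) {
            crc ^= (data[i] & 0xFF) << 8;
            for (int b = 0; b < 8; b++) {
                if ((crc & 0x8000) != 0) crc = ((crc << 1) ^ 0x1021) & 0xFFFF;
                else crc = (crc << 1) & 0xFFFF;
            }
        }
        return crc;
    }

    // Mirror of keyHashSlot: hash only the {...} part if a non-empty tag exists
    static int getSlot(String key) {
        byte[] k = key.getBytes(StandardCharsets.UTF_8);
        int s = -1, e = -1;
        for (int i = 0; i < k.length; i++) if (k[i] == '{') { s = i; break; }
        if (s >= 0) for (int i = s + 1; i < k.length; i++) if (k[i] == '}') { e = i; break; }
        if (s >= 0 && e > s + 1) return crc16(k, s + 1, e - s - 1) & 0x3FFF;
        return crc16(k, 0, k.length) & 0x3FFF;  // no tag: hash the whole key
    }
}
```

Because only the tag is hashed, keys such as `{user1000}.following` and `{user1000}.followers` land in the same slot, which is what makes multi-key commands on them possible in a cluster.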

//According to the error code, tell the client that the data node is not this node
/* Send the client the right redirection code, according to error_code
 * that should be set to one of CLUSTER_REDIR_* macros.
 *
 * If CLUSTER_REDIR_ASK or CLUSTER_REDIR_MOVED error codes
 * are used, then the node 'n' should not be NULL, but should be the
 * node we want to mention in the redirection. Moreover hashslot should
 * be set to the hash slot that caused the redirection. */
void clusterRedirectClient(client *c, clusterNode *n, int hashslot, int error_code) {
    if (error_code == CLUSTER_REDIR_CROSS_SLOT) {
        addReplySds(c,sdsnew("-CROSSSLOT Keys in request don't hash to the same slot\r\n"));
    } else if (error_code == CLUSTER_REDIR_UNSTABLE) {
        /* The request spawns multiple keys in the same slot,
         * but the slot is not "stable" currently as there is
         * a migration or import in progress. */
        addReplySds(c,sdsnew("-TRYAGAIN Multiple keys request during rehashing of slot\r\n"));
    } else if (error_code == CLUSTER_REDIR_DOWN_STATE) {
        addReplySds(c,sdsnew("-CLUSTERDOWN The cluster is down\r\n"));
    } else if (error_code == CLUSTER_REDIR_DOWN_UNBOUND) {
        addReplySds(c,sdsnew("-CLUSTERDOWN Hash slot not served\r\n"));
    } else if (error_code == CLUSTER_REDIR_MOVED ||
               error_code == CLUSTER_REDIR_ASK)
    {
        // When the data node is not this node but the responsible node is known, reply with its location
        // An ASK redirection means the slot is being migrated and it is unknown when migration completes,
        // so the redirection is temporary and the client should not refresh its slot cache
        // A MOVED redirection is (relatively) permanent, so the client should refresh its slot cache
        addReplySds(c,sdscatprintf(sdsempty(),
            "-%s %d %s:%d\r\n",
            (error_code == CLUSTER_REDIR_ASK) ? "ASK" : "MOVED",
            hashslot,n->ip,n->port));
    } else {
        serverPanic("getNodeByQuery() unknown error.");
    }
}

Therefore, request forwarding in redis cluster mode is not the redis server forwarding the request directly; instead, the server replies with a redirection instruction, and the client re-issues the request to the target node. That is how command forwarding is realized.

In fact, such redirection responses should only occur when the set of redis nodes changes, for example when nodes are added or removed and redis rebalances the data. Under normal circumstances, the client is fully responsible for deciding which redis node each piece of data should be sent to. This is precisely the advantage of the cluster: each data node only handles the data in its own slot range. The client therefore needs to cache the server's slot distribution (which can be obtained with the cluster slots command) so that it can send each request to the correct node.
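The redirection the server sends is a plain error line, e.g. `-MOVED 3999 127.0.0.1:6381` or `-ASK 3999 127.0.0.1:6381`, which the client must parse itself. A minimal sketch of that parsing, using a hypothetical `RedirectReply` class (not a jedis API), might look like:

```java
public class RedirectReply {
    public final String type;   // "MOVED" or "ASK"
    public final int slot;
    public final String host;
    public final int port;

    private RedirectReply(String type, int slot, String host, int port) {
        this.type = type; this.slot = slot; this.host = host; this.port = port;
    }

    // Parse a redirection error line such as "MOVED 3999 127.0.0.1:6381"
    // (the leading '-' is stripped by the protocol layer);
    // returns null if the reply is not a redirection.
    public static RedirectReply parse(String error) {
        String[] parts = error.split(" ");
        if (parts.length != 3 || !(parts[0].equals("MOVED") || parts[0].equals("ASK")))
            return null;
        int colon = parts[2].lastIndexOf(':');
        return new RedirectReply(parts[0], Integer.parseInt(parts[1]),
                parts[2].substring(0, colon),
                Integer.parseInt(parts[2].substring(colon + 1)));
    }
}
```

On MOVED the client should refresh its slot cache and retry against the indicated node; on ASK it should retry once against that node (prefixed with ASKING) without touching the cache.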


  2. How to implement command forwarding in master-slave mode?

In master-slave mode, only the master node accepts write requests, while slave nodes are responsible for replicating the master's data. When we implement read-write separation, the slave nodes can absorb read traffic. But if a write request hits a slave node, does that trigger a request forwarding? Let's take a look:

//The master-slave write-command check is also handled in processCommand
int processCommand(client *c) {

...
/* Don't accept write commands if this is a read only slave. But
 * accept write commands if this is our master. */
// A slave node can only accept read requests; a write request is rejected with a direct error reply
if (server.masterhost && server.repl_slave_ro &&
    // Except for the master request, which is used to synchronize data
    !(c->flags & CLIENT_MASTER) &&
    c->cmd->flags & CMD_WRITE)
{
    // -READONLY You can't write against a read only slave.
    addReply(c, shared.roslaveerr);
    return C_OK;
}
...
return C_OK;

}

Therefore, in redis master-slave mode the server does no forwarding at all. To implement read-write separation, the client must handle it itself: it has to locate the master node and send write requests there, while read requests can be load balanced across the slaves. This is also exactly what many database middlewares take care of.
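To illustrate what "the client must handle it itself" means, here is a minimal sketch of such client-side routing: write commands go to the master, read commands are round-robined across slaves. The `ReadWriteRouter` class and its command list are hypothetical, not part of jedis or any middleware:

```java
import java.util.List;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical client-side router for a master-slave setup:
// writes always go to the master, reads are load balanced over the slaves.
public class ReadWriteRouter {
    // Illustrative subset of write commands; a real router would need the full command table
    private static final Set<String> WRITE_COMMANDS =
            Set.of("SET", "DEL", "LPUSH", "HSET", "EXPIRE");

    private final String master;
    private final List<String> slaves;
    private final AtomicInteger next = new AtomicInteger();

    public ReadWriteRouter(String master, List<String> slaves) {
        this.master = master;
        this.slaves = slaves;
    }

    // Return the address the given command should be sent to
    public String route(String command) {
        if (WRITE_COMMANDS.contains(command.toUpperCase())) return master;
        if (slaves.isEmpty()) return master;   // no slaves: master serves reads too
        return slaves.get(Math.floorMod(next.getAndIncrement(), slaves.size()));
    }
}
```

The interesting design point is that the server never participates: if the router sends a write to a slave by mistake, the slave simply replies `-READONLY`, as shown in processCommand above.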

  3. How to use redis cluster?

redis cluster essentially provides partitioned data storage (of course, a lot of work goes into realizing that), but locating the data is left to the client. So let's take jedis as the client and see how a client uses the cluster. The test case is as follows:

@Test
public void testCluster() throws Exception {
    // Add service node Set collection of cluster
    Set<HostAndPort> hostAndPortsSet = new HashSet<HostAndPort>();
    // Add node
    hostAndPortsSet.add(new HostAndPort("192.168.1.103", 7000));
    hostAndPortsSet.add(new HostAndPort("192.168.1.103", 7001));
    hostAndPortsSet.add(new HostAndPort("192.168.1.103", 8000));
    hostAndPortsSet.add(new HostAndPort("192.168.1.103", 8001));
    hostAndPortsSet.add(new HostAndPort("192.168.1.103", 9000));
    hostAndPortsSet.add(new HostAndPort("192.168.1.103", 9001));

    // Jedis connection pool configuration
    JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
    // Maximum number of idle connections, 8 by default
    jedisPoolConfig.setMaxIdle(5);
    // Maximum number of connections, 8 by default
    jedisPoolConfig.setMaxTotal(10);
    //Minimum number of idle connections, default 0
    jedisPoolConfig.setMinIdle(0);
    // Maximum wait in milliseconds when borrowing a connection (blocks when blockWhenExhausted is set); throws an exception on timeout; a value below zero blocks indefinitely; default -1
    jedisPoolConfig.setMaxWaitMillis(2000);
    //Carry out validateObject verification on the obtained connection
    jedisPoolConfig.setTestOnBorrow(true);
    // JedisCluster uses a JedisSlotBasedConnectionHandler internally, that is to say, it takes care of slot location
    JedisCluster jedis = new JedisCluster(hostAndPortsSet, jedisPoolConfig);
    String key = "key1";
    String value = "Value1";
    jedis.set(key, value);
    System.out.println("set a value to Redis over. " + key + "->" + value);
    value = jedis.get("key1");
    System.out.println("get a value from Redis over. " + key + "->" + value);
    jedis.close();
}

The above is how jedis accesses a redis cluster. An sdk-encapsulated API is always simple to use: access goes mainly through JedisCluster, and the big difference from stand-alone redis access lies in locating the node for a data key. Let's look at it in detail.

The following is the class inheritance diagram of JedisCluster:

In contrast, the class inheritance diagram of Jedis:

The interfaces they both implement are: BasicCommands, Closeable, JedisCommands

It can be seen that redis operations in cluster mode differ in many ways from ordinary redis operations. However, since we only want to discuss how a key is located, a set/get is enough.

// When JedisCluster is initialized, slot information will be initialized to the local cache
// redis.clients.jedis.JedisClusterConnectionHandler#JedisClusterConnectionHandler

public JedisClusterConnectionHandler(Set<HostAndPort> nodes,
                                     final GenericObjectPoolConfig poolConfig,
                                     int connectionTimeout, int soTimeout, String password) {
  this.cache = new JedisClusterInfoCache(poolConfig, connectionTimeout, soTimeout, password);
  // When initializing JedisCluster, a pull of slot information is triggered for subsequent use
  initializeSlotsCache(nodes, poolConfig, password);
}
private void initializeSlotsCache(Set<HostAndPort> startNodes, GenericObjectPoolConfig poolConfig, String password) {

for (HostAndPort hostAndPort : startNodes) {
  Jedis jedis = new Jedis(hostAndPort.getHost(), hostAndPort.getPort());
  if (password != null) {
    jedis.auth(password);
  }
  try {
    // As long as a node responds successfully, that's enough
    // The purpose of traversal is to ensure high availability and to avoid some node failures without information
    cache.discoverClusterNodesAndSlots(jedis);
    break;
  } catch (JedisConnectionException e) {
    // try next nodes
  } finally {
    if (jedis != null) {
      jedis.close();
    }
  }
}

}

// The operation of set is to wrap a layer of Jedis with JedisClusterCommand
// redis.clients.jedis.JedisCluster#set(java.lang.String, java.lang.String)

@Override
public String set(final String key, final String value) {

  // connectionHandler is an instance of JedisSlotBasedConnectionHandler
  // Default retries: 5
return new JedisClusterCommand<String>(connectionHandler, maxAttempts) {
  @Override
  public String execute(Jedis connection) {
    return connection.set(key, value);
  }
}.run(key);

}
// redis.clients.jedis.JedisClusterCommand#run(java.lang.String)
public T run(String key) {

if (key == null) {
  throw new JedisClusterException("No way to dispatch this command to Redis Cluster.");
}

return runWithRetries(SafeEncoder.encode(key), this.maxAttempts, false, false);

}
//Access the redis node with retries; retry scenarios include: the data is not on the accessed node; the accessed node is migrating data; the accessed node is unavailable
// redis.clients.jedis.JedisClusterCommand#runWithRetries
private T runWithRetries(byte[] key, int attempts, boolean tryRandomNode, boolean asking) {

if (attempts <= 0) {
  throw new JedisClusterMaxRedirectionsException("Too many Cluster redirections?");
}

Jedis connection = null;
try {

  if (asking) {
    // TODO: Pipeline asking with the original command to make it
    // faster....
    connection = askConnection.get();
    connection.asking();

    // if asking success, reset asking flag
    asking = false;
  } else {
    if (tryRandomNode) {
      connection = connectionHandler.getConnection();
    } else {
        // Directly call connectionHandler.getConnectionFromSlot to get the corresponding redis connection
        // The slot computed here follows the same CRC16 & 0x3FFF rule the redis server implements, so client and server reach the same decision
      connection = connectionHandler.getConnectionFromSlot(JedisClusterCRC16.getSlot(key));
    }
  }

  return execute(connection);

} catch (JedisNoReachableClusterNodeException jnrcne) {
  throw jnrcne;
} catch (JedisConnectionException jce) {
  // release current connection before recursion
  releaseConnection(connection);
  connection = null;

  if (attempts <= 1) {
    //We need this because if node is not reachable anymore - we need to finally initiate slots renewing,
    //or we can stuck with cluster state without one node in opposite case.
    //But now if maxAttempts = 1 or 2 we will do it too often. For each time-outed request.
    //TODO make tracking of successful/unsuccessful operations for node - do renewing only
    //if there were no successful responses from this node last few seconds
    this.connectionHandler.renewSlotCache();

    //no more redirections left, throw original exception, not JedisClusterMaxRedirectionsException, because it's not MOVED situation
    throw jce;
  }
    // Connection exception, request random node again
  return runWithRetries(key, attempts - 1, tryRandomNode, asking);
} catch (JedisRedirectionException jre) {
  // if MOVED redirection occurred,
  if (jre instanceof JedisMovedDataException) {
    // it rebuilds cluster's slot cache
    // recommended by Redis cluster specification
    this.connectionHandler.renewSlotCache(connection);
  }

  // release current connection before recursion or renewing
  releaseConnection(connection);
  connection = null;

  if (jre instanceof JedisAskDataException) {
    asking = true;
    askConnection.set(this.connectionHandler.getConnectionFromNode(jre.getTargetNode()));
  } else if (jre instanceof JedisMovedDataException) {
  } else {
    throw new JedisClusterException(jre);
  }
    // After receiving the MOVED/ASK response, refresh the slot information and visit again
  return runWithRetries(key, attempts - 1, false, asking);
} finally {
  releaseConnection(connection);
}

}
//Calculate the hashSlot value
// redis.clients.util.JedisClusterCRC16#getSlot(byte[])
public static int getSlot(byte[] key) {

int s = -1;
int e = -1;
boolean sFound = false;
for (int i = 0; i < key.length; i++) {
  if (key[i] == '{' && !sFound) {
    s = i;
    sFound = true;
  }
  if (key[i] == '}' && sFound) {
    e = i;
    break;
  }
}
if (s > -1 && e > -1 && e != s + 1) {
  return getCRC16(key, s + 1, e) & (16384 - 1);
}
return getCRC16(key) & (16384 - 1);

}
//According to the hashSlot, get the corresponding redis connection instance
@Override
public Jedis getConnectionFromSlot(int slot) {

  // First, get the connection information corresponding to the slot from the cache, which is naturally empty at the beginning
JedisPool connectionPool = cache.getSlotPool(slot);
if (connectionPool != null) {
  // It can't guaranteed to get valid connection because of node
  // assignment
  return connectionPool.getResource();
} else {
    // Refresh the slot cache: request cluster slots to obtain the slot distribution, then store it in JedisClusterInfoCache
  renewSlotCache(); //It's abnormal situation for cluster mode, that we have just nothing for slot, try to rediscover state
  connectionPool = cache.getSlotPool(slot);
  // If you still can't get it, randomly select a connection
  // At this time, the random node is requested, and the server may respond to the correct node location information
  if (connectionPool != null) {
    return connectionPool.getResource();
  } else {
    //no choice, fallback to new connection to random node
    return getConnection();
  }
}

}

// redis.clients.jedis.JedisClusterConnectionHandler#renewSlotCache()

public void renewSlotCache() {

cache.renewClusterSlots(null);

}
// redis.clients.jedis.JedisClusterInfoCache#renewClusterSlots
public void renewClusterSlots(Jedis jedis) {

//If rediscovering is already in process - no need to start one more same rediscovering, just return
if (!rediscovering) {
  try {
    w.lock();
    rediscovering = true;

    if (jedis != null) {
      try {
        discoverClusterSlots(jedis);
        return;
      } catch (JedisException e) {
        //try nodes from all pools
      }
    }
    // Traverse the cluster nodes in turn until there is a correct response
    for (JedisPool jp : getShuffledNodesPool()) {
      try {
        jedis = jp.getResource();
        discoverClusterSlots(jedis);
        return;
      } catch (JedisConnectionException e) {
        // try next nodes
      } finally {
        if (jedis != null) {
          jedis.close();
        }
      }
    }
  } finally {
    rediscovering = false;
    w.unlock();
  }
}

}

private void discoverClusterSlots(Jedis jedis) {

// Send the cluster slots command to obtain the slot distribution information
List<Object> slots = jedis.clusterSlots();
this.slots.clear();

for (Object slotInfoObj : slots) {
  List<Object> slotInfo = (List<Object>) slotInfoObj;

/* Format: 1) 1) start slot
 *            2) end slot
 *            3) 1) master IP
 *               2) master port
 *               3) node ID
 *            4) 1) replica IP
 *               2) replica port
 *               3) node ID
 *           ... continued until done
 */
  if (slotInfo.size() <= MASTER_NODE_INDEX) {
    continue;
  }

  List<Integer> slotNums = getAssignedSlotArray(slotInfo);

  // hostInfos
  // The third element is the master information
  List<Object> hostInfos = (List<Object>) slotInfo.get(MASTER_NODE_INDEX);
  if (hostInfos.isEmpty()) {
    continue;
  }

  // at this time, we just use master, discard slave information
  HostAndPort targetNode = generateHostAndPort(hostInfos);
  // Store only master information
  assignSlotsToNode(slotNums, targetNode);
}

}

// redis.clients.jedis.JedisClusterInfoCache#getAssignedSlotArray
private List<Integer> getAssignedSlotArray(List<Object> slotInfo) {

List<Integer> slotNums = new ArrayList<Integer>();
// Add the governed slot range to the list in turn
// 0 ~ 5999
for (int slot = ((Long) slotInfo.get(0)).intValue(); slot <= ((Long) slotInfo.get(1))
    .intValue(); slot++) {
  slotNums.add(slot);
}
return slotNums;

}
//Assign all the given slots to targetNode for easy subsequent lookup
// redis.clients.jedis.JedisClusterInfoCache#assignSlotsToNode
public void assignSlotsToNode(List<Integer> targetSlots, HostAndPort targetNode) {

// The lock here is the writeLock in ReentrantReadWriteLock
w.lock();
try {
    // Create a redis connection
  JedisPool targetPool = setupNodeIfNotExist(targetNode);
  // Point the slot in the range to targetNode in turn
  // Normally, the size of slots should be 16384
  for (Integer slot : targetSlots) {
    // slots = new HashMap<Integer, JedisPool>();
    slots.put(slot, targetPool);
  }
} finally {
  w.unlock();
}

}
// redis.clients.jedis.JedisClusterInfoCache#setupNodeIfNotExist(redis.clients.jedis.HostAndPort)
public JedisPool setupNodeIfNotExist(HostAndPort node) {

w.lock();
try {
  String nodeKey = getNodeKey(node);
  JedisPool existingPool = nodes.get(nodeKey);
  if (existingPool != null) return existingPool;

  JedisPool nodePool = new JedisPool(poolConfig, node.getHost(), node.getPort(),
      connectionTimeout, soTimeout, password, 0, null, false, null, null, null);
  nodes.put(nodeKey, nodePool);
  return nodePool;
} finally {
  w.unlock();
}

}
//After refreshing the slot cache information, it is easy to re request the redis connection
// redis.clients.jedis.JedisClusterInfoCache#getSlotPool
public JedisPool getSlotPool(int slot) {

r.lock();
try {
  return slots.get(slot);
} finally {
  r.unlock();
}

}

From the above, we know how the client handles cluster requests. Overall there are two steps: 1. Obtain the slot distribution of the redis cluster via the cluster slots command and cache it locally; 2. Based on that slot distribution, send each request to the corresponding redis node.
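The first step can be sketched as follows: expand each (start slot, end slot, master node) range from a cluster-slots style reply into a local slot-to-node map, the way discoverClusterSlots and assignSlotsToNode do above. The `SlotCache` class here is a simplified stand-in for JedisClusterInfoCache, not its real API:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified local slot cache: each input entry is {startSlot, endSlot, masterAddress},
// mimicking the shape of a CLUSTER SLOTS reply after parsing.
public class SlotCache {
    private final Map<Integer, String> slotToNode = new HashMap<>();

    public void load(List<Object[]> ranges) {
        slotToNode.clear();
        for (Object[] r : ranges) {
            long start = (Long) r[0], end = (Long) r[1];
            String node = (String) r[2];
            // expand the inclusive range into individual slot entries,
            // like getAssignedSlotArray + assignSlotsToNode do
            for (long slot = start; slot <= end; slot++) slotToNode.put((int) slot, node);
        }
    }

    // Step 2 then becomes a plain map lookup by the key's slot
    public String nodeFor(int slot) { return slotToNode.get(slot); }
}
```

With 16384 slots fully assigned, the map has one entry per slot, so routing a request is an O(1) lookup once the cache is warm.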

In addition, there are unexpected situations: what if the slot information held by the client is wrong? How is consistency kept between the client cache and the server?

In fact, the client guarantees neither the accuracy of its slot information nor consistency with the server; instead, it refreshes when an error occurs. Via JedisClusterCommand#runWithRetries, it retries on error and refreshes the slot data.
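The retry-and-refresh idea can be sketched like this; `RetryRunner` and `MovedException` are simplified stand-ins for the jedis classes (the real runWithRetries also handles ASK, connection failures, and random-node fallback), not their actual signatures:

```java
import java.util.function.Function;

// Simplified sketch of the jedis retry loop: on a MOVED reply the client
// refreshes its local slot cache and retries, giving up after maxAttempts.
public class RetryRunner {
    public static class MovedException extends RuntimeException {
        public final String targetNode;
        public MovedException(String targetNode) { this.targetNode = targetNode; }
    }

    // Stand-in for connectionHandler.renewSlotCache()
    public interface SlotCacheRefresher { void renew(); }

    public static String run(Function<Integer, String> call,
                             SlotCacheRefresher cache, int maxAttempts) {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.apply(attempt);          // execute against the cached node
            } catch (MovedException e) {
                cache.renew();   // MOVED is (relatively) permanent: refresh the slot map, then retry
            }
        }
        throw new RuntimeException("Too many Cluster redirections");
    }
}
```

This is the whole consistency story: the cache is allowed to go stale, and the first MOVED reply both heals it and completes the request.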

  4. How to implement ordinary request forwarding?

As you can see, redis itself has actually been avoiding doing any forwarding.

So, in practice, how do we actually implement forwarding?

In the simplest case, after receiving the client's request, repackage the data, build a new request to the destination address, send it, and wait for the result. When the target server responds, relay the result back to the client. Examples: application gateways, proxy servers.

Second, respond to the client with a status code (such as 302) and let the client jump by itself. This is the same as what redis does.

A more involved approach is to connect the two sides with streams directly: after receiving the client's request, pipe the data straight to the target server, and when the target server responds, write the data straight back into the client channel. This avoids a large amount of data re-encapsulation and greatly reduces the performance loss of forwarding, improving response speed. This approach is typically used to transfer large files.
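The stream-docking approach can be sketched with a plain byte pipe. In a real proxy the two streams would be the client and target sockets; here in-memory streams stand in for them, and the `StreamForwarder` class is hypothetical:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Minimal sketch of stream docking: copy bytes straight from one endpoint
// to the other without parsing or re-encapsulating the payload.
public class StreamForwarder {
    public static long pipe(InputStream from, OutputStream to) {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        try {
            while ((n = from.read(buf)) != -1) {   // copy each chunk as-is
                to.write(buf, 0, n);
                total += n;
            }
            to.flush();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return total;
    }
}
```

A full proxy would run one such pipe per direction (client-to-target and target-to-client) on separate threads or with non-blocking I/O; the key point is that the payload is never deserialized along the way.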

Original address https://www.cnblogs.com/yougewe/p/12546817.html

Tags: Database Redis Jedis Java

Posted on Mon, 23 Mar 2020 22:38:35 -0400 by john_wenhold