Redis is single-threaded, so why do I still oversell when I use it?

I focus on PHP, MySQL, Linux and front-end development. Thanks for your attention! My articles are collected on GitHub and Gitee and mainly cover PHP, Redis, MySQL, JavaScript, HTML & CSS, Linux, Java, Golang, tool resources and other related theory, interview questions and practical content.

Background

Recently, in a marketing campaign for one of our projects, a colleague used Redis to manage product inventory. During load testing we found that the product was being oversold. This article summarizes how to use Redis correctly to avoid overselling in a flash sale (seckill) scenario.

Demo steps

Rather than jumping straight to a safe and efficient distributed lock, we will implement the lock in several different ways, step by step, examine the shortcomings of each approach and how to improve it, and finally arrive at an efficient and safe distributed lock.

The first scenario

This scheme stores the product inventory in Redis. A request first reads the inventory and checks it; if it is greater than 0, the inventory is decremented by 1 and the value in Redis is updated. The flow is as follows:

  1. When the first request arrives, it reads the inventory from Redis, checks the quantity, decrements the product inventory by one, and updates the inventory value.

  2. If a second request arrives between the first request's read and its decrement, it runs the same logic: it reads the inventory from Redis, checks the quantity, and then decrements. At that point both requests are actually operating on the same unit of stock.

  3. With such logic, when a flood of flash sale requests arrives, the number of items actually sold can easily end up far greater than the number of items in stock.

    public function demo1(ResponseInterface $response)
    {
        $application = ApplicationContext::getContainer();
        $redisClient = $application->get(Redis::class);
        /** @var int $goodsStock Current inventory of the product */
        $goodsStock = $redisClient->get($this->goodsKey);
        if ($goodsStock > 0) {
            $redisClient->decr($this->goodsKey);
            // TODO perform additional business logic
            return $response->json(['msg' => 'Flash sale succeeded'])->withStatus(200);
        }
        return $response->json(['msg' => 'Flash sale failed, insufficient inventory.'])->withStatus(500);
    }

Problem analysis:

  1. This approach uses Redis to manage the product inventory, which reduces the pressure on MySQL.

  2. Suppose the inventory is 1. While the first request is between checking that the inventory is greater than 0 and decrementing it, a second request reads the data and also finds the inventory greater than 0. Both requests execute the flash sale logic, but with only one item in stock we have oversold.

  3. Now imagine that only one request at a time is allowed to process the inventory, and every other request has to wait for the previous one to finish before it can read the inventory. Would that prevent overselling? That is the locking mechanism used in the following scenarios.

The second scenario

Use a file lock. When the first request arrives, it acquires the file lock; after processing the business logic it releases the lock, then the next request is processed, and so on. This guarantees that among all current requests, only one is processing the inventory at a time, and the lock is released once that request finishes.

  1. With a file lock, a request locks a file and all other requests block. The next request is not executed until the previous request releases the lock on the file.

  2. All requests effectively form a queue and are processed one at a time in FIFO order: first in, first out.

    public function demo3(ResponseInterface $response)
    {
        $fp = fopen("/tmp/lock.txt", "r+");
        try {
            if (flock($fp, LOCK_EX)) { // Exclusive lock; blocks until acquired
                $application = ApplicationContext::getContainer();
                $redisClient = $application->get(Redis::class);
                /** @var int $goodsStock Current inventory of the product */
                $goodsStock = $redisClient->get($this->goodsKey);
                if ($goodsStock > 0) {
                    $redisClient->decr($this->goodsKey);
                    // TODO handle additional business logic
                    $result = true; // Final result of the business logic
                    flock($fp, LOCK_UN); // Release the lock
                    if ($result) {
                        return $response->json(['msg' => 'Flash sale succeeded'])->withStatus(200);
                    }
                    return $response->json(['msg' => 'Flash sale failed'])->withStatus(200);
                }
                flock($fp, LOCK_UN); // Release the lock
                return $response->json(['msg' => 'Insufficient inventory, flash sale failed.'])->withStatus(500);
            }
            return $response->json(['msg' => 'The activity is too popular and there are too many people snapping up. Please try again later.'])->withStatus(500);
        } catch (\Exception $exception) {
            return $response->json(['msg' => 'System exception'])->withStatus(500);
        } finally {
            fclose($fp); // Close the file handle on every path
        }
    }

Problem analysis:

  1. Acquiring and releasing a file lock is time-consuming. In a flash sale scenario with a large number of incoming requests, most users end up waiting on their requests.

  2. A file lock only applies to the current server. If the project is deployed across multiple servers, the locking above only serializes the requests that land on the same server, not all requests; with multiple servers there are effectively multiple independent locks at any time.

The third scenario

In this scheme, Redis holds the product inventory and every request decrements it by 1. If the value returned by Redis after the decrement is less than 0, the flash sale fails. It relies on Redis's single-threaded command execution, which guarantees that writes to Redis are executed one at a time.

    public function demo2(ResponseInterface $response)
    {
        $application = ApplicationContext::getContainer();
        $redisClient = $application->get(Redis::class);
        /** @var int $goodsStock Inventory value in Redis after decrementing by 1 */
        $goodsStock = $redisClient->decr($this->goodsKey);
        if ($goodsStock >= 0) { // A result below 0 means the stock was already gone
            // TODO perform additional business logic
            $result = true; // Result of the business processing
            if ($result) {
                return $response->json(['msg' => 'Flash sale succeeded'])->withStatus(200);
            }
            $redisClient->incr($this->goodsKey); // Give back the decremented unit
            return $response->json(['msg' => 'Flash sale failed'])->withStatus(500);
        }
        return $response->json(['msg' => 'Flash sale failed, insufficient inventory.'])->withStatus(500);
    }

Problem analysis:

  1. Although this scheme leverages Redis's single-threaded model and therefore avoids overselling, every flash sale request still decrements the inventory even when it is already 0, so the cached value in Redis ends up below 0.

  2. In this scheme, the number of successful flash sales can become inconsistent with the actual stock. As in the code above, when the business processing result is false we add 1 back to Redis; if an exception occurs while adding 1 back and the increment does not succeed, the stock count becomes inconsistent.

The fourth scenario

Analyzing the three scenarios above, the file lock comes closest to what we want, but it cannot solve the problem in a distributed deployment. Instead, we can use Redis's setnx and expire commands to implement a distributed lock: setnx sets the lock, and expire adds a timeout to it.

    public function demo4(ResponseInterface $response)
    {
        $application = ApplicationContext::getContainer();
        $redisClient = $application->get(Redis::class);
        if ($redisClient->setnx($this->goodsKey, 1)) {
            // Suppose the server crashes while the user is performing the following operations
            $redisClient->expire($this->goodsKey, 10);
            // TODO process the business logic
            $result = true; // Result of the business logic
            // Delete the lock
            $redisClient->del($this->goodsKey);
            if ($result) {
                return $response->json(['msg' => 'Flash sale succeeded.'])->withStatus(200);
            }
            return $response->json(['msg' => 'Flash sale failed.'])->withStatus(500);
        }
        return $response->json(['msg' => 'System exception, please try again.'])->withStatus(500);
    }

Problem analysis:

  1. From the example code above it looks as if there is nothing wrong with this approach: acquire the lock, release the lock. But think about it carefully: if an exception occurs after acquiring the lock but before setting its expiration time, the expiration is never applied. Does that lock then stay there forever?

  2. So a Redis distributed lock implemented this way is not atomic.

The fifth scenario

In the fourth scenario Redis is used to implement a distributed lock, but acquiring the lock is not atomic. Fortunately, Redis provides a single command that combines the two: set($key, $value, ['nx', 'ex' => $ttl]), which sets the key only if it does not exist and applies the expiration time at the same time.

    public function demo5(ResponseInterface $response)
    {
        $application = ApplicationContext::getContainer();
        $redisClient = $application->get(Redis::class);
        // SET with the NX and EX options acquires the lock and sets the expiration atomically
        if ($redisClient->set($this->goodsKey, 1, ['nx', 'ex' => 10])) {
            try {
                // TODO handle the flash sale business
                $result = true; // Result of the business logic
                if ($result) {
                    return $response->json(['msg' => 'Flash sale succeeded.'])->withStatus(200);
                }
                return $response->json(['msg' => 'Flash sale failed.'])->withStatus(200);
            } catch (\Exception $exception) {
                return $response->json(['msg' => 'System exception'])->withStatus(500);
            } finally {
                // Release the lock on every path
                $redisClient->del($this->goodsKey);
            }
        }
        return $response->json(['msg' => 'System exception, please try again.'])->withStatus(500);
    }

Problem analysis:

  1. Following the progression step by step, you might think this fifth way of implementing a Redis distributed lock is airtight. But look carefully at where the TODO business logic runs: what happens if the business logic takes longer than the 10-second expiration we set on the lock?

  2. If the first request's processing exceeds 10 seconds, its lock expires and a second request can acquire the lock and start processing normally. When the first request finally finishes and deletes "its" Redis lock, it actually deletes the second request's lock, which lets a third request acquire the lock while the second is still running. By this logic, the lock has become effectively useless.

  3. The root cause is that a request can delete a Redis lock that is not its own. If we add a check when deleting, so that a request can only delete its own lock, does that solve the problem? Let's look at the sixth scenario.

The sixth scenario

Building on the fifth scenario, a unique identifier for the request is added and checked on deletion: a request may only delete the lock whose value matches the identifier it set when acquiring the lock.

    public function demo6(ResponseInterface $response)
    {
        $application = ApplicationContext::getContainer();
        $redisClient = $application->get(Redis::class);
        /** @var string $client Unique identifier of the current request */
        $client = md5((string)mt_rand(100000, 100000000000000000) . uniqid());
        if ($redisClient->set($this->goodsKey, $client, ['nx', 'ex' => 10])) {
            try {
                // TODO process the flash sale business logic
                $result = true; // Result of the business logic
                if ($result) {
                    return $response->json(['msg' => 'Flash sale succeeded'])->withStatus(200);
                }
                return $response->json(['msg' => 'Flash sale failed'])->withStatus(500);
            } catch (\Exception $exception) {
                return $response->json(['msg' => 'System exception'])->withStatus(500);
            } finally {
                if ($redisClient->get($this->goodsKey) == $client) { // There is a time difference here
                    $redisClient->del($this->goodsKey); // Only delete the lock we set ourselves
                }
            }
        }
        return $response->json(['msg' => 'Please try again later'])->withStatus(500);
    }

Problem analysis:

  1. After the analysis above, this looks correct. But notice the comment "there is a time difference here": after Redis reads the value and confirms that the unique identifier matches the current request, the process may block or the network may fluctuate before the del is executed. If the lock expires in that gap and is acquired by another request, is the lock we then delete still our own?

  2. It is not: the lock deleted at that point belongs to the next request. In that case the lock is once again rendered ineffective.

Problem summary

The demonstrations above show that the big remaining problem is releasing the Redis lock, because the release is not an atomic operation. Building on the sixth scenario: if we can make both acquiring the lock and releasing the lock atomic, can we be confident that our distributed lock is correct?

  1. For acquiring the lock, Redis's native SET command with the NX and EX options already gives us an atomic operation.

  2. For releasing the lock, we only delete the lock we set ourselves, as introduced in the sixth scenario.

  3. What remains is making the release atomic. Since Redis has no single native command for "compare the value, then delete", we use a Lua script to get atomicity, as sketched below.
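
To make the idea concrete before turning to a library, here is a minimal sketch of such an atomic release, reusing the $redisClient, $this->goodsKey and $client names from the sixth scenario and assuming the underlying phpredis client's eval($script, $args, $numKeys) signature; treat it as an illustration rather than production code:

    // Atomic release: compare the stored token with our own and delete the key
    // in a single Lua script, so nothing can happen between the GET and the DEL.
    // KEYS[1] is the lock key, ARGV[1] is the unique token set when acquiring the lock.
    $releaseScript = 'if redis.call("get", KEYS[1]) == ARGV[1] then return redis.call("del", KEYS[1]) else return 0 end';
    // The last argument of eval() tells the client how many of the passed arguments are KEYS
    $released = $redisClient->eval($releaseScript, [$this->goodsKey, $client], 1);
    if ($released) {
        // We released our own lock
    } else {
        // The lock had already expired or now belongs to another request
    }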

Concrete implementation

The Redis official website lists several clients that already provide a distributed lock implementation, which you can use directly. The client I use here is rtckit/reactphp-redlock; the installation steps are described in its documentation. Below is a brief description of its two ways of being called.

The first way

    public function demo7()
    {
        /** @var Factory $factory Initialize a Redis instance */
        $factory = new \Clue\React\Redis\Factory();
        $client = $factory->createLazyClient('127.0.0.1');
        /** @var Custodian $custodian Initialize a lock custodian */
        $custodian = new \RTCKit\React\Redlock\Custodian($client);
        $custodian->acquire('MyResource', 60, 'r4nd0m_token')
            ->then(function (?Lock $lock) use ($custodian) {
                if (is_null($lock)) {
                    // Failed to acquire the lock
                } else {
                    // A lock with a 60s lifetime has been acquired
                    // TODO process the business logic
                    // Release the lock
                    $custodian->release($lock);
                }
            });
    }

The overall logic of this method is similar to the sixth scenario: the lock is acquired atomically with Redis's SET + NX, and a random token is stored in the lock so that, when releasing it, you cannot release someone else's lock. The important difference is that release executes a Lua script to delete the lock, which makes the release atomic as well. Below is an excerpt of the release code.

    // Lua script
    public const RELEASE_SCRIPT = <<<EOD
    if redis.call("get", KEYS[1]) == ARGV[1] then
        return redis.call("del", KEYS[1])
    else
        return 0
    end
    EOD;

    public function release(Lock $lock): PromiseInterface
    {
        /** @psalm-suppress InvalidScalarArgument */
        return $this->client->eval(self::RELEASE_SCRIPT, 1, $lock->getResource(), $lock->getToken())
            ->then(function (?string $reply): bool {
                return $reply === '1';
            });
    }

The second way

The second method is not much different from the first, except that it adds a spin: it keeps retrying to acquire the lock, and only if the lock is still not obtained after all attempts does it give up on the current request.

    public function demo8()
    {
        /** @var Factory $factory Initialize a Redis instance */
        $factory = new \Clue\React\Redis\Factory();
        $client = $factory->createLazyClient('127.0.0.1');
        /** @var Custodian $custodian Initialize a lock custodian */
        $custodian = new \RTCKit\React\Redlock\Custodian($client);
        $custodian->spin(100, 0.5, 'HotResource', 10, 'r4nd0m_token')
            ->then(function (?Lock $lock) use ($custodian): void {
                if (is_null($lock)) {
                    // The lock was attempted 100 times at 0.5 second intervals;
                    // if it is still not acquired, the request gives up.
                } else {
                    // A lock with a 10s lifetime has been acquired
                    // TODO process the business logic
                    // Release the lock
                    $custodian->release($lock);
                }
            });
    }

Spin lock

A spinlock, or spin lock, is a locking mechanism for protecting shared resources. Spin locks are similar to mutexes: both are used to give mutually exclusive access to a resource.

Whether it is a mutex or a spin lock, at any moment at most one holder, one execution unit, can own the lock. The difference lies in the scheduling behaviour: with a mutex, if the resource is already occupied, the requester goes to sleep; a spin lock never puts the caller to sleep. If the lock is held by another execution unit, the caller keeps looping, checking whether the holder has released the lock, hence the name "spin". A small sketch of this idea on top of Redis follows.
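
As a rough, blocking illustration of spinning (the spin() of rtckit/reactphp-redlock shown above is asynchronous and promise-based, and the function name and parameters here are made up for the example), a spin-style acquire on top of the atomic SET NX EX from the fifth scenario could look like this:

    // Spin-style acquire: retry the atomic SET NX EX a fixed number of times,
    // sleeping between attempts, instead of failing immediately on the first miss.
    function spinAcquire(\Redis $redis, string $lockKey, string $token, int $ttlSeconds, int $attempts, int $intervalMs): bool
    {
        for ($i = 0; $i < $attempts; $i++) {
            if ($redis->set($lockKey, $token, ['nx', 'ex' => $ttlSeconds])) {
                return true; // Lock acquired, the caller can process the business logic
            }
            usleep($intervalMs * 1000); // Spin: wait briefly, then try again
        }
        return false; // Still not acquired after all attempts, give up
    }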

Summary

In fact, if you read the schemes above carefully, you may still spot several problems.

  1. The flash sale traffic is inherently concurrent and could be handled by multiple threads. Once we add a lock, parallel processing becomes serial processing, which reduces the supposed high performance of the flash sale.

  2. Do the schemes above still work when Redis is deployed with master-slave replication, in a cluster, or in other deployment architectures?

  3. Many people say that ZooKeeper is better suited to distributed lock scenarios. Where does ZooKeeper cost more than Redis?

With these questions in mind, see you in the next article. If you liked this or found it interesting, you are welcome to follow my articles; corrections to any shortcomings are also welcome.
