Difference between Mutex and RWMutex

Mutex lock and read-write lock

The definitions of mutex and read-write lock below are quoted from the article "Performance analysis of golang mutual exclusion lock and read-write lock", which states them clearly.

Mutex

1. A mutex supports two operations: acquiring the lock (Lock) and releasing it (Unlock)
2. While one goroutine holds the mutex, no other goroutine can acquire it; they must wait for the holder to release the mutex
3. A mutex suits workloads where read and write operations occur in roughly equal proportions
4. Both read and write operations can be placed under the mutex

Read-write lock

1. A read-write lock supports four operations: read lock (RLock), read unlock (RUnlock), write lock (Lock) and write unlock (Unlock)
2. At most one goroutine can hold the write lock at a time, while many goroutines can hold read locks concurrently
3. The write lock has priority over read locks: once a writer is waiting, new readers block, so the writer is not starved by a continuous stream of readers
4. While a goroutine holds the write lock, no other goroutine can acquire either a read lock or the write lock until the write lock is released
5. While a goroutine holds a read lock, other goroutines can also acquire read locks, but not the write lock. Without writer priority, a goroutine waiting for the write lock could be blocked indefinitely while other goroutines keep acquiring and releasing read locks; giving the write lock higher priority avoids this starvation
6. A read-write lock suits workloads with many reads and few writes
7. Suppose three goroutines G1, G2 and G3 all want to read the same piece of data A. With a mutex the reads are serial: G1 locks, reads A and unlocks; then G2 locks, reads A and unlocks; then G3 does the same. Each goroutine waits in line for the previous one to release the lock, which is clearly inefficient. With a read-write lock, G1, G2 and G3 can read A at the same time, which greatly improves throughput
8. Write operations must be placed under the write lock; read operations can go under either lock, but running them under the write lock forfeits read concurrency

Performance comparison

Test one

Running the original code from that article on my machine produces the opposite result from the author's.

package main

import (
	"fmt"
	"sync"
	"time"
)

const MAXNUM = 1000 //Size of map
const LOCKNUM = 1e7 //Locking times

var lock sync.Mutex        //mutex 
var rwlock sync.RWMutex    //Read write lock
var lock_map map[int]int   //Mutex map
var rwlock_map map[int]int //Read write lock map

func main() {
	var lock_w = &sync.WaitGroup{}
	var rwlock_w = &sync.WaitGroup{}
	lock_w.Add(LOCKNUM)
	rwlock_w.Add(LOCKNUM)
	lock_ch := make(chan int, 10000) // the channels are unused in this test; kept for symmetry with test two
	rwlock_ch := make(chan int, 10000)
	lock_map = make(map[int]int, MAXNUM)
	rwlock_map = make(map[int]int, MAXNUM)
	init_map(lock_map, rwlock_map)
	time1 := time.Now()
	for i := 0; i < LOCKNUM; i++ {
		go test1(lock_ch, i, lock_map, lock_w)
	}
	lock_w.Wait()
	time2 := time.Now()
	for i := 0; i < LOCKNUM; i++ {
		go test2(rwlock_ch, i, rwlock_map, rwlock_w)
	}
	rwlock_w.Wait()
	time3 := time.Now()
	fmt.Println("lock time:", time2.Sub(time1).String())
	fmt.Println("rwlock time:", time3.Sub(time2).String())
}

func init_map(a map[int]int, b map[int]int) { //Initialize map
	for i := 0; i < MAXNUM; i++ {
		a[i] = i
		b[i] = i
	}
}

func test1(ch chan int, i int, mymap map[int]int, w *sync.WaitGroup) int {
	lock.Lock()
	defer lock.Unlock()
	w.Done()
	return mymap[i%MAXNUM]
}

func test2(ch chan int, i int, mymap map[int]int, w *sync.WaitGroup) int {
	rwlock.RLock()
	defer rwlock.RUnlock()
	w.Done()
	return mymap[i%MAXNUM]
}

Out:

lock time: 3.6869219s
rwlock time: 2.7925313s

Even at 1e7 lock acquisitions, however, the two results remain within the same order of magnitude.

Test two

Next, add a channel send (a more realistic scenario) and a small delay inside the critical section; this magnifies the serialization cost of the mutex.

package main

import (
	"fmt"
	"sync"
	"time"
)

const MAXNUM = 1000 //Size of map
const LOCKNUM = 1e5 //Locking times

var lock sync.Mutex        //mutex 
var rwlock sync.RWMutex    //Read write lock
var lock_map map[int]int   //Mutex map
var rwlock_map map[int]int //Read write lock map

func main() {
	var lock_w sync.WaitGroup
	var rwlock_w sync.WaitGroup
	lock_w.Add(LOCKNUM)
	rwlock_w.Add(LOCKNUM)
	lock_ch := make(chan int, 1000) // buffer size matters little: main drains the channel as values arrive
	rwlock_ch := make(chan int, 1000)
	lock_map = make(map[int]int, MAXNUM)
	rwlock_map = make(map[int]int, MAXNUM)
	count1 := 0
	count2 := 0
	init_map(lock_map, rwlock_map)
	time1 := time.Now()
	for i := 0; i < LOCKNUM; i++ {
		go test1(lock_ch, i, lock_map, &lock_w)
	}
	go func() {
		lock_w.Wait()
		close(lock_ch)
	}()
	for i := range lock_ch {
		count1 += i
	}
	fmt.Printf("CHAN ID SUM %d\n", count1)

	time2 := time.Now()
	for i := 0; i < LOCKNUM; i++ {
		go test2(rwlock_ch, i, rwlock_map, &rwlock_w)
	}
	go func() {
		rwlock_w.Wait()
		close(rwlock_ch)
	}()
	for i := range rwlock_ch {
		count2 += i
	}
	fmt.Printf("CHAN ID SUM %d\n", count2)
	time3 := time.Now()
	fmt.Println("lock time:", time2.Sub(time1).String())
	fmt.Println("rwlock time:", time3.Sub(time2).String())
}

func init_map(a map[int]int, b map[int]int) { //Initialize map
	for i := 0; i < MAXNUM; i++ {
		a[i] = i
		b[i] = i
	}
}

func test1(ch chan int, i int, mymap map[int]int, w *sync.WaitGroup) int {
	lock.Lock()
	defer lock.Unlock()
	ch <- i
	time.Sleep(time.Nanosecond)
	w.Done()
	return mymap[i%MAXNUM]
}

func test2(ch chan int, i int, mymap map[int]int, w *sync.WaitGroup) int {
	rwlock.RLock()
	defer rwlock.RUnlock()
	ch <- i
	time.Sleep(time.Nanosecond)
	w.Done()
	return mymap[i%MAXNUM]
}

Out:

CHAN ID SUM 4999950000
CHAN ID SUM 4999950000
lock time: 2m50.2909581s
rwlock time: 124.6928ms

The two locks now clearly differ by several orders of magnitude.

Test three

Replacing the mutex with the read-write lock's write lock (Lock/Unlock instead of Lock on sync.Mutex) gives essentially the same result as the previous test.

Conclusion

In read-heavy workloads the read-write lock performs far better than the mutex, because it lets readers proceed in parallel instead of serializing every access the way a mutex does.


Posted on Mon, 15 Jun 2020 23:14:29 -0400 by doofystyle