Detailed explanation of the sync package in Go concurrency


The Go sync package provides basic synchronization primitives, including Mutex, Once, and WaitGroup.
This article introduces the basic usage of the types provided by the sync package.

  • WaitGroup: wait group for goroutines
  • Mutex: mutual exclusion lock
  • RWMutex: read/write lock
  • Once: one-time execution
  • Cond: condition variable
  • Pool: temporary object pool
  • Map: concurrency-safe map
sync.WaitGroup

sync.WaitGroup is a wait group, which is very common in Go concurrent programming: it waits for one group of goroutines to finish before proceeding to the next piece of work.
sync.WaitGroup has three methods:

func (wg *WaitGroup) Add(delta int) // Add adds delta goroutines to wait for
func (wg *WaitGroup) Done()         // Done marks one goroutine as finished
func (wg *WaitGroup) Wait()         // Wait blocks until all goroutines have finished

The most common use of sync.WaitGroup in Go is waiting for a group of goroutines. The following example starts 1000 goroutines at the same time.

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    wg := &sync.WaitGroup{}
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            time.Sleep(1 * time.Second)
            fmt.Println("hello world ~")
        }()
    }
    // Wait for all goroutines to finish
    wg.Wait()
    fmt.Println("WaitGroup all process done ~")
}

sync.WaitGroup has no way to cap the number of goroutines running at once, which can cause problems in some scenarios. For example, when operating on a database, we do not want a burst of connections to make the database inaccessible.
To control the maximum concurrency, github.com/remeh/sizedwaitgroup is recommended; its usage is very similar to sync.WaitGroup.

In the following example, at most 10 goroutines run at the same time. Once 10 are running, a new one can start only after one of them calls Done.

import "github.com/remeh/sizedwaitgroup" func main() { # Maximum 10 concurrent wg := sizedwaitgroup.New(10) for i = 0; i < 1000; i++ { wg.Add() go func() { defer func() { wg.Done() }() time.Sleep(1 * time.Second) fmt.Println("hello world ~") }() } // Wait for the end of all processes wg.Wait() fmt.Println("WaitGroup all process done ~") }
sync.Mutex

sync.Mutex is a mutual exclusion lock, often used in concurrent programming. A goroutine is a lightweight user-mode thread, so the usual thread-based intuition about mutexes applies to goroutines as well.

The concept of a mutex: it locks shared data so that only one thread or goroutine can operate on it at a time.

Note: multiple threads or goroutines compete for a mutex together. The one that acquires the lock runs first; the others wait. When the mutex is released, the waiting threads or goroutines compete for the lock again.

sync.Mutex has two methods, Lock and Unlock, which acquire and release the lock respectively.

func (m *Mutex) Lock()
func (m *Mutex) Unlock()

The zero value of sync.Mutex is an unlocked mutex, and sync.Mutex is often embedded as an anonymous field of other structs.

For example, consider online payments: the same bank account may see both expenditure and income at a given moment, so the bank has to keep the balance accurate and the data correct.
We can implement the bank's expenditure and income in a simple way to illustrate the use of Mutex.

type Bank struct {
    sync.Mutex
    balance map[string]float64
}

// In records income
func (b *Bank) In(account string, value float64) {
    // Locking ensures that only one goroutine can run this section at a time
    b.Lock()
    defer b.Unlock()
    if _, ok := b.balance[account]; !ok {
        b.balance[account] = 0.0
    }
    b.balance[account] += value
}

// Out records expenditure
func (b *Bank) Out(account string, value float64) error {
    // Locking ensures that only one goroutine can run this section at a time
    b.Lock()
    defer b.Unlock()
    v, ok := b.balance[account]
    if !ok || v < value {
        return errors.New("account not enough balance")
    }
    b.balance[account] -= value
    return nil
}
sync.RWMutex

sync.RWMutex, the read-write lock, is a variant of sync.Mutex. It comes from the well-known readers-writers problem in operating systems.
The purpose of sync.RWMutex is to let multiple goroutines read a resource at the same time while only one goroutine may update it. In other words, reading and writing are mutually exclusive, writing and writing are mutually exclusive, but reading and reading are not.

It can be summarized as follows:

  • When a goroutine holds the read lock, all writers must wait until every reader finishes before acquiring the lock for writing.
  • When a goroutine holds the read lock, other readers are unaffected and can read concurrently.
  • When a goroutine holds the write lock, all readers and writers must wait until the writer finishes before acquiring the lock.

RWMutex has five methods, providing lock operations for reads and writes:
// Write operations
func (rw *RWMutex) Lock()
func (rw *RWMutex) Unlock()

// Read operations
func (rw *RWMutex) RLock()
func (rw *RWMutex) RUnlock()

// RLocker returns a Locker backed by the read lock, which can be passed to other goroutines
func (rw *RWMutex) RLocker() Locker

In the sync.Mutex example above we did not provide a query operation. With a plain Mutex there is no way to support simultaneous queries by multiple goroutines, so we rewrite the code with sync.RWMutex.

type Bank struct {
    sync.RWMutex
    balance map[string]float64
}

func (b *Bank) In(account string, value float64) {
    b.Lock()
    defer b.Unlock()
    if _, ok := b.balance[account]; !ok {
        b.balance[account] = 0.0
    }
    b.balance[account] += value
}

func (b *Bank) Out(account string, value float64) error {
    b.Lock()
    defer b.Unlock()
    v, ok := b.balance[account]
    if !ok || v < value {
        return errors.New("account not enough balance")
    }
    b.balance[account] -= value
    return nil
}

func (b *Bank) Query(account string) float64 {
    // A read lock lets many goroutines query at the same time
    b.RLock()
    defer b.RUnlock()
    v, ok := b.balance[account]
    if !ok {
        return 0.0
    }
    return v
}
sync.Once

sync.Once is an object that performs an action exactly once. It is often used to guarantee that some function is called only once, as in the singleton pattern and system initialization.
For example, closing a channel more than once under concurrency causes a panic. To avoid this, we can use sync.Once to ensure the close is executed only once.

The structure of sync.Once is shown below, and it has only one method, Do. The field done records the execution state, and sync.Mutex together with sync/atomic operations guarantee that done is read in a thread-safe way.

type Once struct {
    m    Mutex  // mutex
    done uint32 // execution status
}

func (o *Once) Do(f func())

For example, with 1000 concurrent goroutines, only one of them will reach fmt.Printf. Running the program several times prints different indexes, because the output depends on which goroutine calls the anonymous function first.

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    once := &sync.Once{}
    for i := 0; i < 1000; i++ {
        go func(idx int) {
            once.Do(func() {
                time.Sleep(1 * time.Second)
                fmt.Printf("hello world index: %d\n", idx)
            })
        }(i)
    }
    time.Sleep(5 * time.Second)
}
sync.Cond

sync.Cond is a condition variable, generally used in combination with a mutex. In essence, it is a synchronization mechanism for goroutines waiting on a condition.

// NewCond returns a new Cond with Locker l.
func NewCond(l Locker) *Cond {
    return &Cond{L: l}
}

// A Locker represents an object that can be locked and unlocked.
type Locker interface {
    Lock()
    Unlock()
}

sync.Cond has three methods: Wait, Signal and Broadcast:

// Wait waits for a notification
func (c *Cond) Wait()

// Signal wakes one waiting goroutine
func (c *Cond) Signal()

// Broadcast wakes all waiting goroutines
func (c *Cond) Broadcast()

For example, sync.Cond can coordinate concurrent goroutines through a condition variable:

package main

import (
    "fmt"
    "sync"
)

var sharedRsc = make(map[string]interface{})

func main() {
    var wg sync.WaitGroup
    wg.Add(2)
    m := sync.Mutex{}
    c := sync.NewCond(&m)
    go func() {
        // this goroutine waits for changes to sharedRsc
        c.L.Lock()
        for len(sharedRsc) == 0 {
            c.Wait()
        }
        fmt.Println(sharedRsc["rsc1"])
        c.L.Unlock()
        wg.Done()
    }()
    go func() {
        // this goroutine waits for changes to sharedRsc
        c.L.Lock()
        for len(sharedRsc) == 0 {
            c.Wait()
        }
        fmt.Println(sharedRsc["rsc2"])
        c.L.Unlock()
        wg.Done()
    }()
    // this one writes changes to sharedRsc
    c.L.Lock()
    sharedRsc["rsc1"] = "foo"
    sharedRsc["rsc2"] = "bar"
    c.Broadcast()
    c.L.Unlock()
    wg.Wait()
}
sync.Pool

sync.Pool is a temporary object pool. Go and Java have a GC mechanism, so most developers rarely think about reclaiming memory, unlike in C++ where objects often have to be freed by hand.
GC is a double-edged sword: it makes programming convenient but adds runtime overhead, and careless allocation can seriously hurt performance. Performance-sensitive code therefore cannot afford to generate garbage freely.
sync.Pool addresses this problem. It serves as a temporary object pool: instead of creating objects from scratch, code takes an object from the pool.

sync.Pool has two methods, Get and Put. Get takes an object from the temporary object pool; Put returns the object to the pool when you are done with it.

func (p *Pool) Get() interface{} func (p *Pool) Put(x interface{})

Official examples are as follows:

package main

import (
    "bytes"
    "io"
    "os"
    "sync"
    "time"
)

var bufPool = sync.Pool{
    New: func() interface{} {
        return new(bytes.Buffer)
    },
}

func timeNow() time.Time {
    return time.Unix(1136214245, 0)
}

func Log(w io.Writer, key, val string) {
    // Get a temporary object; if the pool is empty, New creates one
    b := bufPool.Get().(*bytes.Buffer)
    b.Reset()
    b.WriteString(timeNow().UTC().Format(time.RFC3339))
    b.WriteByte(' ')
    b.WriteString(key)
    b.WriteByte('=')
    b.WriteString(val)
    w.Write(b.Bytes())
    // Put the temporary object back into the pool
    bufPool.Put(b)
}

func main() {
    Log(os.Stdout, "path", "/search?q=flowers")
}

As the example shows, a Pool cannot be created with a fixed size, so the number of cached objects in sync.Pool is unlimited (bounded only by memory).

How does sync.Pool control the number of temporary cache objects?

sync.Pool registers a poolCleanup function during init, which clears all cached objects in all pools. The runtime calls it before each GC, so objects cached in sync.Pool survive only between two GC cycles. Because the cache is cleared at GC time, there is no need to worry about pools growing without bound.

Because of this, sync.Pool is suitable for caching temporary objects, but not for long-lived object pools such as connection pools.

sync.Map

Before Go 1.9, the built-in map was not safe for concurrent use, so we often had to wrap the map in a structure that supports concurrency, as shown below: a map guarded by a read-write lock sync.RWMutex. Go 1.9 added sync.Map for this purpose.

type MapWithLock struct {
    sync.RWMutex
    M map[string]Kline
}

sync.Map has five methods in total, and they are used much like the native map:

// Load returns the value stored for a key
func (m *Map) Load(key interface{}) (value interface{}, ok bool)

// Store sets the value for a key
func (m *Map) Store(key, value interface{})

// LoadOrStore returns the existing value if the key is present; otherwise it stores and returns the given value
func (m *Map) LoadOrStore(key, value interface{}) (actual interface{}, loaded bool)

// Delete removes a key
func (m *Map) Delete(key interface{})

// Range iterates over the map; the order is still unspecified
func (m *Map) Range(f func(key, value interface{}) bool)

25 October 2021, 08:40
