Boost Chapter 12 concurrent programming

All the content of this article comes from Chapter 12 of The Complete Development Guide to the Boost Library: Going Deep into the C++ "Quasi" Standard Library (3rd Edition).

This chapter covers three concurrency components in the Boost library: Atomic, which implements the atomic operation library defined by the C++11 standard; Thread, which is compatible with the C++11 standard and adds portable thread handling to C++; and Asio, a powerful library for synchronous and asynchronous I/O that uses the proactor pattern to handle serial-port and network communication and is expected to become the underlying communication library of the C++ standard.

1. atomic

1.1 functions:

The code in this section is single threaded without concurrency.

1.2 header file:

#include<boost/atomic.hpp>
using namespace boost;

1.3 usage:

#include "/home/fjc / desktop / algorithm book / C++Boost/boost_guide/common/std.hpp"
using namespace std;

#include <boost/atomic.hpp>
using namespace boost;


//
void case1()
{
    atomic<int> a(10);  //Note the direct-initialization construction
    assert(a == 10);
    //A default-constructed atomic (atomic<int> a;) is left uninitialized and is dangerous to read

    atomic<long> l;     //Default-constructed, then assigned
    l = 100L;
    cout << l << endl;

    atomic<double> d(2.414);
    cout << d << endl;
    cout << "--------------"<<endl;
}

void case2()
{
    atomic<bool> b{false};          //An atomic bool
    assert(!b.load());                      //Explicitly call load() to read the value

    b.store(true);                              //Explicitly call store() to write the value
    assert(b);                                       //Implicit conversion, equivalent to load()

    atomic<int> n(100);                                   //An atomic int
    assert(n.exchange(200) == 100);                 //exchange() stores the new value and returns the old one
    assert(n == 200);                                   //Implicit conversion, equivalent to load()

    cout << "--------------"<<endl;
}
//
void case3()
{
    atomic<long> l(100);

    long v = 100;               //The expected value; must be an lvalue
    if (l.compare_exchange_weak(v, 313))            //Compare and exchange
    {
        assert(l == 313 && v == 100);       //On success l becomes 313; v still holds the old value 100
    }

    v = 200;                                                    //Set expected to 200, which does not match l
    auto b = l.compare_exchange_strong(v, 99);          //Compare and exchange
    assert(!b && v == 313);                                                     //Exchange fails; v is updated to the current value 313

    l.compare_exchange_weak(v, 99);         //Exchange again; now v matches, so it succeeds
    assert(l == 99 && v == 313 );
    cout << "--------------"<<endl;
}

//
#include <boost/utility.hpp>
void case4()
{
    atomic<int> n(100);

    assert(n.fetch_add(10) == 100);
    assert(n == 110);

    assert(++n == 111);
    assert(n++ ==111);
    assert(n == 112);

    assert((n -= 10) == 102);

    atomic<int> b{BOOST_BINARY(1101)};      //Binary 1101

    auto x = b.fetch_and(BOOST_BINARY(0110));   //Bitwise AND; fetch_and() returns the original value 1101
    assert(x == BOOST_BINARY(1101) &&
           b == BOOST_BINARY(0100));                        //b is 0100 after the operation
    assert((b |= BOOST_BINARY(1001))                //operator|= is like fetch_or() but returns the new value
            == BOOST_BINARY(1101));
}

//
void case5()
{
    atomic<bool> b{true};
    assert(b);

    b = false;
    assert(!b.load());

    auto x = b.exchange(true);
    assert(b && !x);
}
//
#include <boost/intrusive_ptr.hpp>

template<typename T>
class ref_count			//Generic reference-counting base class
{
private:
    typedef boost::atomic<int> atomic_type; //The atomic counter type
    mutable atomic_type m_count{0};	//mutable so that const objects can still change the count
protected:
    ref_count() {}
    ~ref_count() {}
public:
    typedef boost::intrusive_ptr<T> counted_ptr;
    void add_ref() const            //Increase the reference count
    {
        m_count.fetch_add(1, boost::memory_order_relaxed); //Relaxed ordering is enough for an increment
    }

    void sub_ref() const		//Decrease the reference count
    {
        if (m_count.fetch_sub(1, boost::memory_order_release) == 1)
        {
            boost::atomic_thread_fence(boost::memory_order_acquire);//Acquire fence: synchronizes with the releases above before deleting
            delete static_cast<const T*>(this);								//Delete the derived object; requires a cast
        }
    }

    decltype(m_count.load()) count() const			//Get the reference count; note the use of decltype
    {
        return m_count.load();			//Could also rely on the implicit conversion
    }

public:
    template<typename ... Args>			//Variadic template
    static counted_ptr make_ptr(Args&& ... args)  			//Factory function
    {
        return counted_ptr(new T(std::forward<Args>(args)...));
    }
private:
    friend void intrusive_ptr_add_ref(const T* p)		//Free function required by intrusive_ptr
    {
        p->add_ref();
    }
    friend void intrusive_ptr_release(const T* p)		//Free function required by intrusive_ptr
    {
        p->sub_ref();
    }
};

class demo: public ref_count<demo>	//Add reference counting capability
{
public:
    demo()
    {
        cout << "demo ctor" << endl;
    }
    ~demo()
    {
        cout << "demo dtor" << endl;
    }
    int x;
};
void case6()
{
    //demo::counted_ptr p(new demo);
    auto p = demo::make_ptr();

    p->x = 10;
    assert(p->x == 10);
    assert(p->count() == 1);
}
//

int main()
{
    case1();
    case2();
    case3();
    case4();
    case5();
    case6();
}

2. thread

2.1 functions:

The thread library needs the time facilities provided by the chrono library to perform sleep and wait operations, so the chrono library must be compiled first.

2.2 header file:

#include<boost/thread.hpp>
using namespace boost;

2.3 usage:

2.3.1 mutex:

mutex:
A mutex is a mechanism for thread synchronization that prevents multiple threads from operating on a shared resource at the same time. Once a thread locks the mutex, other threads must wait until it unlocks the mutex before they can access the shared resource.

timed_mutex:
If you do not want a thread to block indefinitely on a mutex, use timed_mutex and call its try_lock_for() or try_lock_until() to wait for a relative duration or until an absolute time point.
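
A minimal sketch of both mutex types (not from the book; it assumes Boost.Thread was built with chrono support as noted in 2.1, and the counters are made up for illustration):

#include <iostream>
#include <boost/thread.hpp>
using namespace boost;

mutex mu;                        //Ordinary mutex guarding counter1
timed_mutex tmu;                 //Mutex that supports timed locking, guarding counter2
int counter1 = 0, counter2 = 0;

void plain_lock()
{
    mu.lock();                   //Block until the mutex is acquired
    ++counter1;
    mu.unlock();                 //Do not forget to unlock
}

void timed_lock()
{
    //Wait at most 100 ms for the lock instead of blocking indefinitely
    if (tmu.try_lock_for(chrono::milliseconds(100)))
    {
        ++counter2;
        tmu.unlock();
    }
}

int main()
{
    thread t1(plain_lock), t2(timed_lock);
    t1.join();
    t2.join();
    std::cout << counter1 + counter2 << std::endl;   //Prints 2
}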

2.3.2 lock_guard:

Used to help lock a mutex: it locks in its constructor and unlocks in its destructor, so you cannot forget to unlock. It works like a smart pointer (RAII).
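
A minimal sketch (shared_value and safe_increase are made-up names for illustration):

#include <boost/thread.hpp>
using namespace boost;

mutex mu;
int shared_value = 0;

void safe_increase()
{
    lock_guard<mutex> guard(mu);   //Locks the mutex in the constructor
    ++shared_value;
}                                  //Unlocks in the destructor, even if an exception is thrown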

2.3.3 unique_lock:

Similar to lock_guard, but its constructor can accept additional locking options and therefore behave differently. unique_lock cannot be copied, but it stores the mutex internally by pointer rather than by reference, which makes it more flexible to transfer and use.
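
A minimal sketch showing the defer_lock option (only one of several options the constructor accepts; the worker function is made up):

#include <boost/thread.hpp>
using namespace boost;

mutex mu;

void worker()
{
    unique_lock<mutex> lk(mu, defer_lock);  //Construct without locking yet
    // ... unlocked preparation work ...
    lk.lock();                              //Lock explicitly when needed
    // ... access the shared data ...
    lk.unlock();                            //May unlock before destruction
}                                           //Destructor unlocks only if the lock is still owned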

2.3.4 lock adapter:

lock_guard and unique_lock are usually used together with a mutex. An adapter class specifies the mutex type to adapt in its template parameter and is used by inheritance: the subclass automatically gains the lock interface and can then be used with lock_guard and unique_lock (see the sketch below).
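
A minimal sketch, assuming the basic_lockable_adapter class template from <boost/thread/lockable_adapter.hpp>; the shared_data class is made up for illustration:

#include <vector>
#include <boost/thread.hpp>
#include <boost/thread/lockable_adapter.hpp>
using namespace boost;

//Inherits lock()/unlock() from the adapter, so the object itself is lockable
class shared_data : public basic_lockable_adapter<mutex>
{
public:
    std::vector<int> v;
};

void append(shared_data& d)
{
    lock_guard<shared_data> guard(d);   //Lock the object directly
    d.v.push_back(1);
}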

2.3.5 lockable concept check class:

The thread library also supplies traits classes that check at compile time whether a type models the lockable concepts, i.e. whether it provides lock()/unlock() and the try_lock family of member functions.

2.3.6 lock function:

In addition to the member functions of mutex or the lock_guard/unique_lock helpers, the two free functions lock() and try_lock() can be used to operate on mutexes. Like make_unique_locks(), they can lock several mutexes at once and guarantee that no deadlock occurs.
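
A minimal sketch of boost::lock() combined with the adopt_lock option (the transfer function is made up):

#include <boost/thread.hpp>
using namespace boost;

mutex m1, m2;

void transfer()
{
    //Lock both mutexes together; boost::lock() avoids deadlock
    //regardless of the order in which other threads lock them
    boost::lock(m1, m2);
    lock_guard<mutex> g1(m1, adopt_lock);   //Adopt the already-held locks so they
    lock_guard<mutex> g2(m2, adopt_lock);   //are released automatically at scope exit
    // ... work on both shared resources ...
}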

2.3.7 thread:

The thread class represents a thread of execution in the operating system and is responsible for starting and managing thread objects. It is similar to POSIX threads in concept and operation.
Four useful functions in the this_thread namespace (see the sketch below):
get_id(): same name as the thread member function; obtains the current thread::id
yield(): the current thread gives up its time slice so that other threads may run
sleep_for(): the thread sleeps and waits for a relative duration
sleep_until(): the thread sleeps and waits until a time point
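
A minimal sketch of these four calls (not from the book):

#include <iostream>
#include <boost/thread.hpp>
using namespace boost;

int main()
{
    std::cout << this_thread::get_id() << std::endl;    //Id of the current (main) thread
    this_thread::yield();                               //Give up the rest of the time slice
    this_thread::sleep_for(chrono::milliseconds(100));  //Sleep for a relative duration
    this_thread::sleep_until(chrono::system_clock::now()
                             + chrono::seconds(1));     //Sleep until an absolute time point
}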

2.3.8 start thread:

A thread can be started with bind or with a lambda expression:

thread t1(bind(dummy,100));
thread t2([]{dummy(500);});

Waiting for a thread with sleep_for() is only a blind wait, so the library provides join() to wait for a thread to finish. The member function joinable() tells whether the thread object identifies an executable thread of execution; if joinable() returns true, we can call join() to block until the thread finishes, as sketched below.
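
A minimal runnable sketch (dummy is a made-up worker function):

#include <iostream>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
using namespace boost;

void dummy(int n)                       //Hypothetical worker: sleep n milliseconds
{
    this_thread::sleep_for(chrono::milliseconds(n));
    std::cout << "dummy slept " << n << " ms" << std::endl;
}

int main()
{
    thread t1(bind(dummy, 100));        //Start with bind
    thread t2([]{ dummy(500); });       //Start with a lambda

    if (t1.joinable())                  //t1 identifies a running thread
        t1.join();                      //Block until t1 finishes
    t2.join();
}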

detach(): separates the thread object from its thread of execution; the thread keeps running on its own
thread_guard: controls what happens to a thread object when the guard is destructed
scoped_thread: like thread_guard, but the destructor action can be customized with a template parameter (see the sketch below)
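
A minimal sketch, assuming thread_guard from <boost/thread/thread_guard.hpp>, scoped_thread from <boost/thread/scoped_thread.hpp> and the join_if_joinable destructor policy; the task function is made up:

#include <boost/thread.hpp>
#include <boost/thread/thread_guard.hpp>
#include <boost/thread/scoped_thread.hpp>
using namespace boost;

void task() { /* ... some work ... */ }

void raii_threads()
{
    thread t1(task);
    t1.detach();                                   //t1 no longer owns the execution; the thread runs on

    thread t2(task);
    thread_guard<join_if_joinable> g(t2);          //Destructor joins t2 automatically

    scoped_thread<join_if_joinable> t3{thread(task)};  //Owns the thread; joins it at scope exit
}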

2.3.9 interrupt thread:

interrupt(): asks the thread to interrupt its execution
interruption_requested(): checks whether the thread has been asked to interrupt

The thread library defines 12 interruption points (such as sleep_for(), join() and condition-variable waits) where a pending interruption request takes effect, and interruption can be enabled or disabled per thread.
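
A minimal sketch of interrupting a thread; the interruption is delivered as a thread_interrupted exception at an interruption point (to_interrupt is a made-up function):

#include <iostream>
#include <boost/thread.hpp>
using namespace boost;

void to_interrupt()
{
    try
    {
        for (int i = 0; i < 10; ++i)
        {
            this_thread::sleep_for(chrono::milliseconds(200));  //sleep_for() is an interruption point
            std::cout << i << std::endl;
        }
    }
    catch (const thread_interrupted&)        //Delivered when the interruption request is honored
    {
        std::cout << "thread interrupted" << std::endl;
    }
}

int main()
{
    thread t(to_interrupt);
    this_thread::sleep_for(chrono::milliseconds(500));
    t.interrupt();                           //Ask the thread to stop at its next interruption point
    t.join();
}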

2.3.10 thread_group:

It is used to manage a group of threads, like a simple thread pool.
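
A minimal sketch (worker is a made-up function; a mutex serializes the output):

#include <iostream>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
using namespace boost;

mutex io_mu;

void worker(int n)
{
    lock_guard<mutex> guard(io_mu);          //Serialize output
    std::cout << "worker " << n << std::endl;
}

int main()
{
    thread_group tg;
    for (int i = 0; i < 4; ++i)
        tg.create_thread(bind(worker, i));   //Create a thread owned by the group
    tg.join_all();                           //Wait for every thread in the group
}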

2.3.11 call_once:

To guarantee that an initialization function is called correctly in a multithreaded environment, the thread library provides a call-once mechanism: when several threads invoke the function, only one of them actually executes it, avoiding the errors that repeated execution would cause. A once_flag object serves as the initialization flag, and call_once() is then used to invoke the function so that the initialization happens exactly once.
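
A minimal sketch of once_flag and call_once() (init and worker are made-up functions):

#include <iostream>
#include <boost/thread.hpp>
using namespace boost;

once_flag flag = BOOST_ONCE_INIT;        //The initialization flag

void init()
{
    std::cout << "initialized exactly once" << std::endl;
}

void worker()
{
    call_once(flag, init);               //Only the first caller actually runs init()
}

int main()
{
    thread_group tg;
    for (int i = 0; i < 4; ++i)
        tg.create_thread(worker);
    tg.join_all();
}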

2.3.12 condition variable: condition

A condition variable is another synchronization mechanism based on waiting; it enables communication between threads. It must be used together with a mutex: a thread waits until an event occurs in another thread (a condition is satisfied) before it continues to execute.
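
A minimal sketch with one waiting thread and one notifying thread; ready is a made-up flag representing the condition:

#include <iostream>
#include <boost/thread.hpp>
using namespace boost;

mutex mu;
condition_variable cond;
bool ready = false;

void waiter()
{
    unique_lock<mutex> lk(mu);
    while (!ready)                 //Guard against spurious wakeups
        cond.wait(lk);             //Atomically unlocks mu and waits
    std::cout << "event arrived" << std::endl;
}

void notifier()
{
    {
        lock_guard<mutex> lk(mu);
        ready = true;              //Change the condition while holding the lock
    }
    cond.notify_one();             //Wake up the waiting thread
}

int main()
{
    thread t1(waiter), t2(notifier);
    t1.join();
    t2.join();
}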

2.3.13 shared_mutex:

It allows multiple threads to acquire shared ownership while only one thread can hold exclusive ownership, implementing a read-write lock: many reader threads, one writer thread.
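
A minimal read-write lock sketch using shared_lock for readers and unique_lock for the writer (reader, writer and data_value are made-up names):

#include <boost/thread.hpp>
#include <boost/thread/shared_mutex.hpp>
using namespace boost;

shared_mutex rw_mu;
int data_value = 0;

int reader()
{
    shared_lock<shared_mutex> lk(rw_mu);   //Shared (read) ownership: many readers at once
    return data_value;
}

void writer(int v)
{
    unique_lock<shared_mutex> lk(rw_mu);   //Exclusive (write) ownership: only one writer
    data_value = v;
}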

2.3.14 future:

In many cases a thread does not only perform some work; it may also produce a result. The thread library uses future to provide access to the return value of a thread running asynchronously, because that value is not yet available when the thread starts: it is a value "expected" in the "future".
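
A minimal promise/future sketch; BOOST_THREAD_PROVIDES_FUTURE is defined here on the assumption that it is needed to expose boost::future under its C++11 name (older Boost versions call it unique_future by default), and compute is a made-up function:

#define BOOST_THREAD_PROVIDES_FUTURE     //Make boost::future available under this name
#include <iostream>
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>
using namespace boost;

int compute()
{
    this_thread::sleep_for(chrono::milliseconds(100));   //Simulate some work
    return 42;
}

int main()
{
    promise<int> p;                       //The thread fulfills the promise...
    future<int> f = p.get_future();       //...and the caller waits on the future

    thread t([&p]{ p.set_value(compute()); });

    std::cout << f.get() << std::endl;    //Blocks until the value is ready, prints 42
    t.join();
}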

2.3.15 shared_future:

Unlike future, whose get() may be called only once, a shared_future can be copied and its result retrieved multiple times, even from several threads.
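
A minimal sketch obtained by calling share() on a future, under the same BOOST_THREAD_PROVIDES_FUTURE assumption as above:

#define BOOST_THREAD_PROVIDES_FUTURE
#include <iostream>
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>
using namespace boost;

int main()
{
    promise<int> p;
    shared_future<int> sf = p.get_future().share();           //Convert the future into a shared_future

    thread t1([sf]{ std::cout << sf.get() << std::endl; });   //Both threads may call get()
    thread t2([sf]{ std::cout << sf.get() << std::endl; });

    p.set_value(7);                                            //Both threads will see the value 7
    t1.join();
    t2.join();
}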
