In the previous article, C++ multithreading learning (I): thread creation and management, we got a first look at the concepts of thread and process, why threads were introduced alongside processes, and the meaning of and differences between them. The example at the end of that article showed how to create and run multiple threads, but its cout output came out jumbled. This article explains why that happens and how to fix it.
To analyze the cause, let's first look at two multithreading concepts:
- Concurrency: multiple operations are processed alternately within the same period of time. The time slice for thread switching is very short (typically on the order of milliseconds), so most of the time a single time slice is not enough to finish accessing a resource.
- Inter-thread communication: a task is split across several threads that process it concurrently, and those threads may have to work on data in shared memory, so they need to access the same shared data accurately and in an orderly way.
Now the reason is clear. The time slice for thread switching is so short that a cout statement may be interrupted by a switch to another thread before it finishes. Moreover, the resource cout writes to was created by the main thread of the same process, so the memory being accessed is shared; without any control, the output naturally becomes scrambled. We can hardly predict when the CPU will switch threads or how fast each thread runs, so how do we keep the reads and writes of multiple threads within the logic we intend? This requirement is thread synchronization and mutual exclusion, and its essence is the orderly use of the memory shared within a process. Forgive me for treating synchronization and mutual exclusion here in the narrow, multithreading sense; that makes them easier to understand and learn. Strictly speaking, these concepts should be generalized to the access of multiple processes (not just multiple threads) to some resource (not just some piece of memory).
Mutual exclusion: only one visitor is allowed to access a resource at a time; access is unique and exclusive. However, mutual exclusion cannot constrain the order in which visitors access the resource, i.e. the access is unordered.
Synchronization: on top of mutual exclusion (in most cases), visitors access the resource in an orderly way through additional mechanisms. In most cases synchronization already implies mutual exclusion, in particular all writes to a resource must be mutually exclusive; in a few cases, multiple visitors may be allowed to access the resource at the same time.
Obviously, synchronization is a more elaborate form of mutual exclusion, and mutual exclusion is a special kind of synchronization.
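To make the distinction concrete, here is a minimal sketch of my own, not taken from the original examples (it borrows the lock()/unlock() calls introduced in the next section; `worker_a`, `worker_b` and `a_done` are names invented for this sketch). The mutex alone guarantees that the two increments of `counter` never interleave, but not which thread goes first; the extra `a_done` flag, checked under the same mutex, forces `worker_b` to run only after `worker_a` has finished, which is the "orderly access" that synchronization adds.

```cpp
// order_sketch.cpp -- illustrative only: mutex alone gives mutual exclusion, the flag adds ordering
#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
int counter = 0;     // shared data protected by m
bool a_done = false; // also protected by m; used only to impose an order

void worker_a() {
    m.lock();                        // mutual exclusion: only one thread runs this section at a time
    ++counter;
    std::cout << "worker_a, counter = " << counter << "\n";
    a_done = true;                   // tell worker_b that A has finished
    m.unlock();
}

void worker_b() {
    while (true) {                   // synchronization: wait (by polling) until worker_a is done
        m.lock();
        if (a_done) {
            ++counter;
            std::cout << "worker_b, counter = " << counter << "\n";
            m.unlock();
            return;
        }
        m.unlock();                  // release the lock before sleeping so worker_a can acquire it
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread t1(worker_a);
    std::thread t2(worker_b);
    t1.join();
    t2.join();
    return 0;
}
```

In real code a std::condition_variable would be the idiomatic way to wait instead of polling; the polling loop just keeps the sketch within what this article covers.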
We can see that the key to orderly access is mutual exclusion. There are many mechanisms that implement mutual exclusion; the simplest one C++ provides is the mutex, and the <mutex> header is dedicated to supporting mutually exclusive access to shared data structures.
lock and unlock protect shared resources
The full name of mutex is mutual exclusion: a mutex is an object used to control concurrent access to a resource in an exclusive manner. The resource here may be a single object or a combination of several objects. To obtain exclusive access, the corresponding thread must lock the mutex, which prevents any other thread from locking it until the first thread unlocks it. The main member functions of the mutex class are shown in the following table:

| Member function | Description |
| --- | --- |
| lock() | Locks the mutex; blocks if another thread already holds it |
| try_lock() | Tries to lock the mutex without blocking; returns true on success, false if it is already locked |
| unlock() | Releases the mutex |
As the table shows, besides the regular blocking lock(), mutex also provides a try-lock for situations where the blocking caused by a regular lock() is unwanted (try-locks with a time limit require the timed_mutex class, described further below). Here is some sample code:
```cpp
// mutex1.cpp  Protect a shared global variable with mutex lock and unlock
#include <chrono>
#include <cstdio>   // for getchar()
#include <iostream>
#include <mutex>
#include <thread>

std::chrono::milliseconds interval(100);

std::mutex mutex;
int job_shared = 0;    // both threads can modify 'job_shared'; the mutex protects this variable
int job_exclusive = 0; // only one thread modifies 'job_exclusive', so no protection is required

// this thread can only modify 'job_shared'
void job_1()
{
    mutex.lock();
    std::this_thread::sleep_for(5 * interval); // make 'job_1' hold the lock for a while
    ++job_shared;
    std::cout << "job_1 shared (" << job_shared << ")\n";
    mutex.unlock();
}

// this thread can modify both 'job_shared' and 'job_exclusive'
void job_2()
{
    while (true) {                 // loop until the lock is acquired and 'job_shared' is modified
        if (mutex.try_lock()) {    // if the try succeeds, modify 'job_shared'
            ++job_shared;
            std::cout << "job_2 shared (" << job_shared << ")\n";
            mutex.unlock();
            return;
        } else {                   // the try failed, so modify 'job_exclusive' instead
            ++job_exclusive;
            std::cout << "job_2 exclusive (" << job_exclusive << ")\n";
            std::this_thread::sleep_for(interval);
        }
    }
}

int main()
{
    std::thread thread_1(job_1);
    std::thread thread_2(job_2);

    thread_1.join();
    thread_2.join();

    getchar();
    return 0;
}
```
A quick walk through the code: thread_1 and thread_2 are created first, and join() only blocks the main thread after both have been created, so thread_1 and thread_2 run concurrently. job_shared is used by both threads, while job_exclusive is used only by thread_2.
As the run results show, the execution order and speed of the two threads are not under our control. Once one thread has locked the mutex, only that thread may access job_shared. The idea behind lock() is that after a mutex has been locked and before it is unlocked, any other thread calling lock() will block and only continue once unlock() has been called (a thread that calls lock() on a std::mutex it already holds gets undefined behavior; that is what recursive_mutex is for). try_lock() checks whether the mutex can be locked right now: if it is currently unlocked, try_lock() locks it and returns true so execution can continue; otherwise it returns false immediately without blocking. Understanding this mechanism, we know that whenever more than one thread accesses the same resource, especially when modifying it, the access must be wrapped in lock()/unlock() so that only one thread at a time executes the code between them.
Every lock() must be followed by a matching unlock(); otherwise other threads will block forever on their own lock() calls. lock() and unlock() must therefore correspond one to one, or a deadlock is very likely. Why does a missing correspondence lead to deadlock? Let's write a deliberately broken example to verify it.
```cpp
// mutex1.cpp (error example)  unlock() is deliberately omitted in job_1 to show the deadlock
#include <chrono>
#include <cstdio>   // for getchar()
#include <iostream>
#include <mutex>
#include <thread>

std::chrono::milliseconds interval(100);

std::mutex mutex;
int job_shared = 0;    // both threads can modify 'job_shared'; the mutex protects this variable
int job_exclusive = 0; // only one thread modifies 'job_exclusive', so no protection is required

// this thread can only modify 'job_shared'
void job_1()
{
    while (true) {
        mutex.lock();
        std::this_thread::sleep_for(interval); // make 'job_1' hold the lock for a while
        ++job_shared;
        std::cout << "job_1 shared (" << job_shared << ")\n";
        //mutex.unlock();  // deliberately do not release the lock, so the two threads block in a deadlock
    }
}

// this thread can modify both 'job_shared' and 'job_exclusive'
void job_2()
{
    while (true) {
        mutex.lock();
        std::this_thread::sleep_for(interval); // make 'job_2' hold the lock for a while
        ++job_shared;
        std::cout << "job_2 shared (" << job_shared << ")\n";
        mutex.unlock();
    }
}

int main()
{
    //std::thread thread_1(job_1);
    std::thread thread_2(job_2);
    std::thread thread_1(job_1);

    thread_1.join();
    thread_2.join();

    getchar();
    return 0;
}
```
Here the sleep inside the locked region is shortened so the deadlock shows up more quickly; you can vary the sleep time and re-test. Looking at the run results, both threads end up blocked and stop making progress, and exactly when they block is not under our control: job_1 never releases the mutex, so job_2 blocks forever on its next lock(), and job_1 itself then tries to re-lock a mutex it already holds, which for std::mutex is undefined behavior and in practice blocks as well.
Forgetting to call unlock(), or pairing it improperly with lock(), has serious consequences. Doesn't this look a lot like new and delete with raw C++ pointers? Just as C++ kindly provides the smart pointers shared_ptr and unique_ptr to guard against human carelessness, it also provides the much more capable smart locks lock_guard and unique_lock. Depending on our needs, there are also timed_mutex, a lock that can be given a time limit, and recursive_mutex, a re-entrant (nested) lock. I won't study them in depth for now; let's just go over what they do.
| Class template | Description |
| --- | --- |
| std::mutex | Only one thread can hold the lock at a time. While it is locked, any other lock() blocks until the mutex becomes available again, and try_lock() fails. |
| std::recursive_mutex | Allows the same thread to acquire its lock several times at once. The typical use is a function that takes the lock and then calls another function that takes the same lock again. |
| std::timed_mutex | Additionally lets you pass a duration or a point in time that defines how long it may try to acquire the lock; it provides try_lock_for(duration) and try_lock_until(timepoint) for this. |
| std::recursive_timed_mutex | Allows the same thread to acquire its lock several times, and a duration can be specified. |
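Before going back to the cout problem, here is a minimal sketch, not taken from the original article, of how two of these helpers are used: std::lock_guard locks in its constructor and unlocks in its destructor, so the unlock can never be forgotten, and std::timed_mutex::try_lock_for gives up after a chosen duration instead of blocking forever. The names `print_with_guard` and `timed_worker` are invented for this sketch.

```cpp
// raii_locks.cpp -- illustrative sketch of lock_guard and timed_mutex
#include <chrono>
#include <iostream>
#include <mutex>
#include <string>
#include <thread>

std::mutex cout_mutex;        // protects std::cout
std::timed_mutex work_mutex;  // protects some longer-running work

void print_with_guard(const std::string& msg) {
    std::lock_guard<std::mutex> guard(cout_mutex); // locks here ...
    std::cout << msg << "\n";
}                                                  // ... unlocks automatically here, even on early return or exception

void timed_worker(int id) {
    // Give up after 50 ms instead of blocking indefinitely.
    if (work_mutex.try_lock_for(std::chrono::milliseconds(50))) {
        print_with_guard("thread " + std::to_string(id) + " got the lock");
        std::this_thread::sleep_for(std::chrono::milliseconds(200)); // pretend to work while holding the lock
        work_mutex.unlock();
    } else {
        print_with_guard("thread " + std::to_string(id) + " timed out waiting for the lock");
    }
}

int main() {
    std::thread t1(timed_worker, 1);
    std::thread t2(timed_worker, 2);
    t1.join();
    t2.join();
    return 0;
}
```

With lock_guard the unlock happens in the destructor, so an early return or an exception between the lock and the unlock can no longer leave the mutex locked, which is exactly the mistake the deadlock example above demonstrated. unique_lock works the same way but additionally allows unlocking and relocking by hand.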
Now let's make a first use of <mutex> to solve the problem of multiple threads accessing the same resource out of order. Going back to the example from C++ multithreading learning (I): thread creation and management, the jumbled output there was caused by several threads calling cout at the same time, so let's lock every place where cout is called.
```cpp
// thread2.cpp  adds mutex protection for concurrent access to the cout display terminal resource
#include <chrono>
#include <cstdio>   // for getchar()
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;

std::mutex mutex1;

void thread_function(int n)
{
    std::thread::id this_id = std::this_thread::get_id(); // get the thread ID
    for (int i = 0; i < 5; i++) {
        mutex1.lock();
        cout << "Child function thread " << this_id << " running : " << i + 1 << endl;
        mutex1.unlock();
        std::this_thread::sleep_for(std::chrono::seconds(n)); // sleep for n seconds
    }
}

class Thread_functor
{
public:
    // A functor behaves like a function: in C++ it is written by overloading operator() in a class,
    // so that objects of the class can be called like a function
    void operator()(int n)
    {
        std::thread::id this_id = std::this_thread::get_id();
        for (int i = 0; i < 5; i++) {
            {
                mutex1.lock();
                cout << "Child functor thread " << this_id << " running: " << i + 1 << endl;
                mutex1.unlock();
            }
            std::this_thread::sleep_for(std::chrono::seconds(n)); // sleep for n seconds
        }
    }
};

int main()
{
    thread mythread1(thread_function, 1); // pass the initial function as an argument to the thread
    if (mythread1.joinable())             // check whether join()/detach() can be called; true means it can
        mythread1.join();                 // join() blocks the main thread until the child thread finishes

    Thread_functor thread_functor;
    thread mythread2(thread_functor, 3);  // pass the functor as an argument to the thread
    if (mythread2.joinable())
        mythread2.detach();               // detach() lets the child thread run alongside the main thread,
                                          // which no longer waits for it

    auto thread_lambda = [](int n) {
        std::thread::id this_id = std::this_thread::get_id();
        for (int i = 0; i < 5; i++) {
            mutex1.lock();
            cout << "Child lambda thread " << this_id << " running: " << i + 1 << endl;
            mutex1.unlock();
            std::this_thread::sleep_for(std::chrono::seconds(n)); // sleep for n seconds
        }
    };
    thread mythread3(thread_lambda, 4);   // pass the lambda as an argument to the thread
    if (mythread3.joinable())
        mythread3.join();                 // join() blocks the main thread until the child thread finishes

    unsigned int n = std::thread::hardware_concurrency(); // number of hardware threads available
    mutex1.lock();
    std::cout << n << " concurrent threads are supported." << endl;
    mutex1.unlock();

    std::thread::id this_id = std::this_thread::get_id();
    for (int i = 0; i < 5; i++) {
        {
            mutex1.lock();
            cout << "Main thread " << this_id << " running: " << i + 1 << endl;
            mutex1.unlock();
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    getchar();
    return 0;
}
```
Execution results:
The output is now displayed normally, with no interleaving.
Going a step further, from our understanding of join and detach() we know that only mythread2 and mythread3 ever print at the same time: mythread1 is joined right after it is created, so it finishes before the others even start, and with these sleep durations the detached mythread2 (5 iterations of 3 s) finishes before the joined mythread3 (5 iterations of 4 s), so by the time the main thread resumes printing, mythread2 is already done (this relies on the timing rather than on any guarantee). So in fact we only need to lock the cout calls in those two threads:
```cpp
// thread2.cpp  mutex protection kept only where cout calls can actually run concurrently
#include <chrono>
#include <cstdio>   // for getchar()
#include <iostream>
#include <mutex>
#include <thread>
using namespace std;

std::mutex mutex1;

void thread_function(int n)
{
    std::thread::id this_id = std::this_thread::get_id(); // get the thread ID
    for (int i = 0; i < 5; i++) {
        cout << "Child function thread " << this_id << " running : " << i + 1 << endl;
        std::this_thread::sleep_for(std::chrono::seconds(n)); // sleep for n seconds
    }
}

class Thread_functor
{
public:
    // A functor behaves like a function: in C++ it is written by overloading operator() in a class,
    // so that objects of the class can be called like a function
    void operator()(int n)
    {
        std::thread::id this_id = std::this_thread::get_id();
        for (int i = 0; i < 5; i++) {
            {
                mutex1.lock();
                cout << "Child functor thread " << this_id << " running: " << i + 1 << endl;
                mutex1.unlock();
            }
            std::this_thread::sleep_for(std::chrono::seconds(n)); // sleep for n seconds
        }
    }
};

int main()
{
    thread mythread1(thread_function, 1); // pass the initial function as an argument to the thread
    if (mythread1.joinable())             // check whether join()/detach() can be called; true means it can
        mythread1.join();                 // join() blocks the main thread until the child thread finishes

    Thread_functor thread_functor;
    thread mythread2(thread_functor, 3);  // pass the functor as an argument to the thread
    if (mythread2.joinable())
        mythread2.detach();               // detach() lets the child thread run alongside the main thread,
                                          // which no longer waits for it

    auto thread_lambda = [](int n) {
        std::thread::id this_id = std::this_thread::get_id();
        for (int i = 0; i < 5; i++) {
            mutex1.lock();
            cout << "Child lambda thread " << this_id << " running: " << i + 1 << endl;
            mutex1.unlock();
            std::this_thread::sleep_for(std::chrono::seconds(n)); // sleep for n seconds
        }
    };
    thread mythread3(thread_lambda, 4);   // pass the lambda as an argument to the thread
    if (mythread3.joinable())
        mythread3.join();                 // join() blocks the main thread until the child thread finishes

    unsigned int n = std::thread::hardware_concurrency(); // number of hardware threads available
    std::cout << n << " concurrent threads are supported." << endl;

    std::thread::id this_id = std::this_thread::get_id();
    for (int i = 0; i < 5; i++) {
        {
            cout << "Main thread " << this_id << " running: " << i + 1 << endl;
        }
        std::this_thread::sleep_for(std::chrono::seconds(1));
    }

    getchar();
    return 0;
}
```
Looking at the results:
Again the output is displayed normally, with nothing out of order.