-
summary
This chapter introduces concurrent programming: parallel computing, threads and how they work, thread management, thread synchronization, and deadlock prevention. It covers the principles and methods of multitasking, thread synchronization, and concurrent programming.
1. Parallel computing
-
Parallel computing
Parallel computing solves problems faster than serial computing by running a parallel algorithm on multiple processors. Modern multi-core processors are well suited to parallel computing, and it is widely regarded as the direction in which computing will continue to develop.
-
Sequential algorithms and parallel algorithms
-
Sequential algorithm
The general code-block format is shown below. A sequential algorithm may contain many steps, which a single task executes one after another, one step at a time; the task ends when all steps are done.
begin
  step_1;
  step_2;
  ...
  step_n;
end
// next step
-
Parallel algorithm
The general code-block structure is shown below. All tasks between cobegin and coend execute in parallel, and the next step runs only after every task has completed.
cobegin
  task_1;
  task_2;
  ...
  task_n;
coend
// next step
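As a rough illustration (not from the textbook; the task names are placeholders), the cobegin/coend pattern maps naturally onto pthread_create() and pthread_join():

/* Sketch: cobegin/coend expressed with POSIX threads.
   task_1 and task_2 are illustrative placeholder tasks. */
#include <pthread.h>
#include <stdio.h>

void *task_1(void *arg) { printf("task_1\n"); return NULL; }
void *task_2(void *arg) { printf("task_2\n"); return NULL; }

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, task_1, NULL);   // cobegin: start all tasks
    pthread_create(&t2, NULL, task_2, NULL);
    pthread_join(t1, NULL);                    // coend: wait for all tasks
    pthread_join(t2, NULL);
    printf("next step\n");                     // runs only after both tasks finish
    return 0;
}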
-
-
Parallelism and concurrency
-
Parallelism
All tasks of a parallel algorithm run at the same time. True parallel operation can only be realized on a system with multiple processors.
-
Concurrency
In a single-CPU system, only one task can execute at any instant, so parallelism is only logical: it is achieved by executing tasks concurrently, i.e. by interleaving them.
-
2. Thread
-
Thread introduction
A thread is an independent execution unit that runs in the address space of its process and shares the process's other resources as well. A thread can create sub-threads that share its address space, and those sub-threads can in turn create sub-threads of their own; a process performs its work through its main thread and these sub-threads. Operating systems implement threads in different ways, but almost all of them support the IEEE POSIX 1003.1c threads standard, Pthreads.
-
Advantages of threads
- Faster to create and to switch between than processes
- Faster response
- More suitable for parallel computing
-
Disadvantages of threads
- The user must provide explicit synchronization
- Some library functions are not thread-safe
- On a single-CPU system, context switching between threads can make a threaded program slower
3. Thread management
Like processes, threads operate in both kernel mode and user mode. In user mode, a thread executes in the address space of its process, with its own execution stack. In kernel mode, a thread executes system calls and is managed by the kernel's scheduling policy, which may suspend and reactivate it, just as with processes. When scheduling, the kernel tends to prefer a thread of the same process that is already running, which reduces switching cost.
Thread management functions
Most operating systems support the POSIX Pthreads standard. Pthreads provides the following programming interfaces for thread management:
pthread_create(thread, attr, function, arg);   -> create a thread
pthread_exit(status);                          -> terminate a thread
pthread_cancel(thread);                        -> cancel a thread
pthread_attr_init(attr);                       -> initialize thread attributes
pthread_attr_destroy(attr);                    -> destroy thread attributes
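As a hedged illustration of how these calls fit together (not from the textbook; work_func and the detached-state setting are assumptions), the attribute functions can be used, for example, to create a detached thread:

/* Sketch: using the attribute functions to create a detached thread.
   work_func is an illustrative thread function. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *work_func(void *arg)
{
    printf("worker running\n");
    pthread_exit(NULL);                        // terminate this thread
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);                  // initialize the attributes object
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_create(&tid, &attr, work_func, NULL);
    pthread_attr_destroy(&attr);               // attributes no longer needed

    sleep(1);                                  // detached threads cannot be joined
    return 0;
}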
-
Create thread
A new thread is created with the pthread_create() function:
int pthread_create(pthread_t *pthread_id, pthread_attr_t *attr, void *(*func)(void *), void *arg);
Returns 0 if the creation is successful and an error code if it fails.
Parameters:
-
pthread_id is a pointer to a variable of type pthread_t. It is filled in with the unique thread ID assigned by the operating system kernel. In POSIX, pthread_t is an opaque type; a thread can obtain its own ID with the pthread_self() function. In Linux, pthread_t is defined as an unsigned long, so a thread ID can be printed with %lu.
-
attr is a pointer to another opaque data type that holds the thread attributes; passing NULL selects the default attributes.
-
func is the entry address of the function that the new thread will execute.
-
arg is a pointer to the argument passed to the thread function, whose prototype is
void *func(void *arg);
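A minimal sketch of these parameters in use (the hello function and the message string are illustrative, not from the textbook):

/* Minimal pthread_create()/pthread_join() sketch. */
#include <pthread.h>
#include <stdio.h>

void *hello(void *arg)                  // func: the new thread's entry point
{
    printf("thread %lu says: %s\n", (unsigned long)pthread_self(), (char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;                      // filled in with the new thread's ID
    char *msg = "hi from main";

    if (pthread_create(&tid, NULL, hello, (void *)msg) != 0) {
        perror("pthread_create");       // non-zero return value means failure
        return 1;
    }
    pthread_join(tid, NULL);            // wait for the new thread to finish
    return 0;
}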
-
4. Thread synchronization
-
Threads execute in the same address space of their process and share all of its global variables and data structures. When different threads operate on the same shared data and the result depends on the order in which the threads execute, a race condition occurs. Race conditions must be eliminated in concurrent programs.
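A hedged sketch of such a race (the counter and loop count are illustrative): two threads increment a shared variable without synchronization, so the final value depends on how their load/add/store steps interleave.

/* Sketch of a race condition: two threads update a shared counter
   without synchronization, so the final value is unpredictable. */
#include <pthread.h>
#include <stdio.h>

#define NLOOP 1000000
long counter = 0;                       // shared data

void *add_loop(void *arg)
{
    for (int i = 0; i < NLOOP; i++)
        counter++;                      // load, add, store: not atomic
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, add_loop, NULL);
    pthread_create(&t2, NULL, add_loop, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected %d)\n", counter, 2 * NLOOP);
    return 0;
}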
-
Mutex
Races between threads can be prevented with a mutex (mutual exclusion lock). A thread must acquire the lock before operating on the shared data and release it afterwards; only the thread holding the lock may proceed.
A mutex can be initialized either statically or dynamically, as sketched below.
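A minimal sketch (not from the textbook; the counter and loop count are illustrative) showing both initialization styles and a lock-protected critical section:

/* Sketch: protecting a shared counter with a mutex.
   Shows both static and dynamic initialization. */
#include <pthread.h>
#include <stdio.h>

long counter = 0;

pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;   // static initialization
pthread_mutex_t lock2;                               // dynamic initialization below

void *add_loop(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock1);     // enter critical section
        counter++;
        pthread_mutex_unlock(&lock1);   // leave critical section
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_mutex_init(&lock2, NULL);   // dynamic init with default attributes

    pthread_create(&t1, NULL, add_loop, NULL);
    pthread_create(&t2, NULL, add_loop, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("counter = %ld\n", counter); // always 2000000 with the mutex
    pthread_mutex_destroy(&lock2);
    return 0;
}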
-
Deadlock prevention
A mutex uses a blocking protocol: a thread that cannot acquire the mutex is blocked and can continue only after the mutex is unlocked. A deadlock occurs when several entities wait for one another in a cycle. Countermeasures include deadlock prevention, deadlock avoidance, and deadlock detection with recovery.
Deadlock prevention can be implemented with the conditional locking function pthread_mutex_trylock(), as in the sketch below.
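A hedged sketch of the try-and-back-off idea (m1, m2 and the retry loop are illustrative): if the second lock cannot be acquired with pthread_mutex_trylock(), the thread releases the first lock and retries, so a circular wait never forms.

/* Sketch: avoiding deadlock with pthread_mutex_trylock().
   If the second mutex is busy, release the first one and retry
   instead of blocking while still holding a lock. */
#include <pthread.h>
#include <sched.h>

pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;

void lock_both(void)
{
    for (;;) {
        pthread_mutex_lock(&m1);            // block on the first lock
        if (pthread_mutex_trylock(&m2) == 0)
            return;                         // got both locks
        pthread_mutex_unlock(&m1);          // back off: release and retry
        sched_yield();                      // give other threads a chance
    }
}

void unlock_both(void)
{
    pthread_mutex_unlock(&m2);
    pthread_mutex_unlock(&m1);
}

int main(void)
{
    lock_both();                            // acquire both locks without deadlock
    /* ... critical section using both resources ... */
    unlock_both();
    return 0;
}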
-
Condition variables
Condition variables enable cooperation between threads: one thread can wait until a condition becomes true, and another thread can signal it when the condition changes.
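A minimal sketch of this cooperation (the ready flag, mutex, and thread names are illustrative): one thread waits on the condition variable, the other sets the shared flag and signals it.

/* Sketch: inter-thread cooperation with a condition variable.
   The waiter sleeps until the signaler sets the shared flag. */
#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
int ready = 0;                              // shared condition

void *waiter(void *arg)
{
    pthread_mutex_lock(&lock);
    while (!ready)                          // re-check: guards against spurious wakeups
        pthread_cond_wait(&cond, &lock);    // atomically release the lock and sleep
    printf("waiter: condition is true\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *signaler(void *arg)
{
    pthread_mutex_lock(&lock);
    ready = 1;                              // change the shared state
    pthread_cond_signal(&cond);             // wake one waiting thread
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, waiter, NULL);
    pthread_create(&t2, NULL, signaler, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}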
5. Practice
Textbook Example 4.2 (p. 127): quicksort with concurrent threads
- code
/*
 * Chapter 4, code example 4.2: QUICK SORT with concurrent threads
 */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

typedef struct {
    int upperbound;
    int lowerbound;
} PARM;

#define N 10
int a[N] = {2,1,3,4,6,5,8,7,9,0};          // unsorted data

void print()                               // print current a[] contents
{
    int i;
    printf("[ ");
    for (i = 0; i < N; i++)
        printf("%d ", a[i]);
    printf("]\n");
}

void *Qsort(void *aptr)
{
    PARM *ap, aleft, aright;
    int pivot, pivotIndex, left, right, temp;
    int upperbound, lowerbound;
    pthread_t me, leftThread, rightThread;

    me = pthread_self();
    ap = (PARM *)aptr;
    upperbound = ap->upperbound;
    lowerbound = ap->lowerbound;
    pivot = a[upperbound];                 // pick the rightmost element as pivot
    left  = lowerbound - 1;                // scan index from left side
    right = upperbound;                    // scan index from right side
    if (lowerbound >= upperbound)
        pthread_exit(NULL);

    while (left < right) {                 // partition loop
        do { left++;  } while (a[left]  < pivot);
        do { right--; } while (a[right] > pivot);
        if (left < right) {
            temp = a[left]; a[left] = a[right]; a[right] = temp;
        }
    }
    print();
    pivotIndex = left;                     // put pivot back
    temp = a[pivotIndex];
    a[pivotIndex] = pivot;
    a[upperbound] = temp;

    // start the "recursive" threads
    aleft.upperbound  = pivotIndex - 1;
    aleft.lowerbound  = lowerbound;
    aright.upperbound = upperbound;
    aright.lowerbound = pivotIndex + 1;
    printf("%lu: create left and right threads\n", me);
    pthread_create(&leftThread,  NULL, Qsort, (void *)&aleft);
    pthread_create(&rightThread, NULL, Qsort, (void *)&aright);

    // wait for left and right threads to finish
    pthread_join(leftThread,  NULL);
    pthread_join(rightThread, NULL);
    printf("%lu: joined with left & right threads\n", me);
    return NULL;
}

int main(int argc, char *argv[])
{
    PARM arg;
    pthread_t me, thread;

    me = pthread_self();
    printf("main %lu: unsorted array = ", me);
    print();
    arg.upperbound = N - 1;
    arg.lowerbound = 0;
    printf("main %lu create a thread to do QS\n", me);
    pthread_create(&thread, NULL, Qsort, (void *)&arg);

    // wait for the QS thread to finish
    pthread_join(thread, NULL);
    printf("main %lu sorted array = ", me);
    print();
    return 0;
}
-
Problems encountered
1. Compiling directly with gcc qs.c reports an error about the quicksort function's name.
solve:
qsort() is a library function declared in the stdlib.h header, so a user-defined function cannot reuse that name with a different signature. Renaming the user-defined function (here to Qsort()) resolves the conflict.
2. After problem 1 is solved, gcc qs.c still fails, this time with undefined references to the pthread functions at link time.
solve:
The program uses the pthread header and library, so the -pthread option must be added when compiling, i.e. compile it as follows:
gcc qs.c -pthread
The compilation then succeeds and produces the executable file.
-
Run results
-
Textbook results