CPU scheduling policy

Preface

Problem introduction:
When thread 1 is blocked and threads 2 and 3 are both in the ready state, which one should run next? Answering this question requires a scheduling policy.


Intuitive ideas for CPU scheduling:

1.FIFO: first in, first out (simple queuing)
2.Priority: processes with higher priority run first

In complex scenarios, neither of these simple approaches is adequate on its own.

CPU scheduling: select a process from the ready queue according to some scheduling algorithm and hand the CPU over to it. If no process is ready, the system runs an idle process.

When CPU scheduling happens: on the path where the kernel returns to user mode after handling an interrupt, exception, or system call, in cases such as:

A process terminates, normally or because of an error;
A new process is created, or a waiting process becomes ready;
A process moves from the running state to the blocked state;
A process moves from the running state to the ready state.

Metrics for CPU scheduling algorithms (related by the formulas after this list):

Throughput: the number of processes completed per unit time;

Turnaround time (TT): the time from a process's arrival to its completion;

Response time (RT): the time from a request to the first response;

CPU utilization: the proportion of time the CPU spends doing useful work;

Waiting time: the time each process spends waiting in the ready queue;

......
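These quantities are related by simple identities (a sketch using common textbook definitions; the symbols are mine, not the article's):

\[
  T_{\text{turnaround}} = t_{\text{completion}} - t_{\text{arrival}}, \qquad
  T_{\text{waiting}} = T_{\text{turnaround}} - T_{\text{execution}}, \qquad
  T_{\text{response}} = t_{\text{first run}} - t_{\text{arrival}} .
\]

Throughput is the number of completed processes divided by elapsed time, and CPU utilization is busy CPU time divided by elapsed time.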

Tip: the following is the main content of this article

1, Scheduling algorithm

The goals when designing a scheduling algorithm:

1.From the user's point of view: keep users satisfied
2.From the process's point of view: CPU scheduling should keep processes satisfied

What "process satisfaction" means:
Finish tasks as soon as possible: short turnaround time (from task entry to task completion)
Respond to user operations quickly: short response time (from operation to response)
Waste as little time as possible on system overhead: high throughput (number of tasks completed)

General principle: the system should focus on executing tasks and allocate the CPU among them reasonably

1.FCFS(First Come, First Served)

First Come, First Served (FCFS):

Processes use the CPU in the order in which they become ready.
Features: non-preemptive, fair, and easy to implement. A short process queued behind a long process waits a long time, which hurts the user experience.

Example: three processes have processing times of 12, 3, and 3. Consider two different arrival orders (compared in the sketch below).
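A minimal sketch of that comparison in ordinary user-space C (not scheduler code; the burst lengths 12, 3, 3 are the ones from the example above, and every job is assumed to arrive at time 0):

/* Compare FCFS average turnaround time for two arrival orders of the same
 * three jobs. Under FCFS, a job's turnaround time equals the sum of the
 * execution times of every job that ran before it plus its own. */
#include <stdio.h>

static double avg_turnaround(const int burst[], int n)
{
	int finish = 0, total = 0;
	for (int i = 0; i < n; i++) {
		finish += burst[i];   /* completion time of job i (all arrive at t = 0) */
		total  += finish;     /* turnaround = completion - arrival = finish */
	}
	return (double)total / n;
}

int main(void)
{
	int long_first[]  = {12, 3, 3};   /* the long job arrives first  */
	int short_first[] = {3, 3, 12};   /* the short jobs arrive first */

	printf("long job first : %.2f\n", avg_turnaround(long_first, 3));   /* (12+15+18)/3 = 15 */
	printf("short jobs first: %.2f\n", avg_turnaround(short_first, 3)); /* (3+6+18)/3   =  9 */
	return 0;
}

Merely changing the arrival order moves the average turnaround time from 15 down to 9, which is exactly the fluctuation listed as disadvantage 1 below.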

Advantages: the algorithm is simple

Disadvantages:
1. The average waiting time fluctuates greatly, because short processes may have to run after long processes

2. I/O and CPU utilization can both be low: a CPU-bound process leaves the I/O devices idle, while an I/O-bound process leaves the CPU idle

2.SJF(Shortest Job First)

Shortest Job First (SJF):
The process with the shortest execution time runs first, without preemption.

If the processes run in the order P1, P2, ..., Pn (where Pi is also the execution time of the i-th process), then:
The turnaround time of P1 is P1
The turnaround time of P2 is P1 + P2
......
The turnaround time of Pn is P1 + P2 + ... + Pn
Summing these, the total turnaround time is n*P1 + (n-1)*P2 + ... + 1*Pn = ∑(n + 1 - i) * Pi, so the average turnaround time is this sum divided by n.
P1 is counted the most times (n times), so the shortest task should be placed first.

Therefore, among non-preemptive orderings, shortest-job-first gives the smallest average turnaround time.
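Written compactly (a sketch of the same argument; Pi is the execution time of the i-th process in the chosen order):

\[
  \bar{T} = \frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{i} P_j
          = \frac{1}{n}\sum_{i=1}^{n} (n + 1 - i)\, P_i .
\]

Because P1 carries the largest coefficient (n) and Pn the smallest (1), the sum is minimized by placing the shortest jobs in the earliest positions, i.e. by sorting jobs in increasing order of execution time (a standard exchange argument makes this precise).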

3.RR(Round Robin)

Round Robin (RR):
Each process is allocated a time slice during which it may run. If the process is still running when its time slice expires, the CPU is taken away and given to another process. If the process blocks or finishes before the time slice ends, the CPU is switched immediately.

Features: fair; good for interactive computing because response times are short; the frequent process switches give the round-robin algorithm a relatively high overhead; it works well when processes differ greatly in length, but less well when all processes are roughly the same length.

Two pitfalls to avoid when choosing the time slice (see the sketch after this list):

1. A time slice that is too large makes waiting times too long; in the extreme case the algorithm degenerates into FCFS

2. A time slice that is too small causes a large number of context switches, and the switching overhead hurts system throughput
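To make the trade-off concrete, here is a minimal user-space simulation (a sketch, not kernel code; the job lengths and the QUANTUM value are chosen only for illustration). Shrinking QUANTUM lets the short jobs finish earlier, but increases the number of slice boundaries, i.e. potential context switches:

/* Minimal round-robin simulation. remaining[] holds the CPU time each job
 * still needs; QUANTUM is the time slice. Each pass of the outer loop hands
 * at most one quantum to every unfinished job and counts slice boundaries
 * (an upper bound on context switches). */
#include <stdio.h>

#define NJOBS   3
#define QUANTUM 2

int main(void)
{
	int remaining[NJOBS] = {12, 3, 3};   /* same jobs as the FCFS example */
	int clock = 0, slices = 0, done = 0;

	while (done < NJOBS) {
		for (int i = 0; i < NJOBS; i++) {
			if (remaining[i] <= 0)
				continue;                              /* job already finished */
			int run = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
			clock += run;                              /* job i runs one slice */
			remaining[i] -= run;
			slices++;                                  /* slice boundary */
			if (remaining[i] == 0) {
				done++;
				printf("job %d finishes at time %d\n", i, clock);
			}
		}
	}
	printf("slice boundaries: %d\n", slices);
	return 0;
}

With QUANTUM = 2 the two short jobs finish at times 9 and 10 while the long job finishes at 18; with a very large quantum the short jobs would wait behind the 12-unit job exactly as in FCFS, and with a very small quantum the number of slice boundaries (and thus switching overhead) grows quickly.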

4. Compromise scheme

We can introduce priorities and divide tasks into foreground and background tasks: foreground tasks get high priority, background tasks get low priority, and each kind has its own queue. Background tasks are scheduled only when no foreground task is ready.

However, if foreground tasks keep arriving, background tasks never get to run (low-priority tasks starve).

Therefore, task priorities should be adjusted dynamically.
Background tasks are usually long. Once a background task is promoted to the foreground and starts running, it may hold the CPU for a long time without releasing it, and the response time of foreground tasks can no longer be guaranteed. So both foreground and background tasks should also be given time slices: after a promoted background task has run for a while, it must release the CPU so that other tasks can run.

Compromise scheme: favor short tasks (to reduce turnaround time), use round-robin scheduling as the core, and add priorities on top. A minimal sketch follows.
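A sketch of this compromise (illustrative only; the struct and the function names pick_next() and boost_starved() are made up for this article, not taken from any real kernel):

/* Each task has a level (foreground or background), a remaining time slice,
 * and a count of how long it has waited. pick_next() prefers foreground
 * tasks and falls back to background tasks; boost_starved(), called from the
 * timer tick, promotes background tasks that have waited too long so that
 * low priority never means "never". */
#include <stddef.h>

enum level { FOREGROUND, BACKGROUND };

struct task {
	enum level level;   /* current queue */
	int ready;          /* 1 if runnable */
	int slice;          /* remaining time slice */
	int waited;         /* ticks spent waiting in the ready queue */
};

#define NTASKS    8
#define MAX_WAIT 100        /* promotion threshold, an arbitrary choice */

struct task tasks[NTASKS];

struct task *pick_next(void)
{
	/* First pass: a ready foreground task that still has time slice left. */
	for (int i = 0; i < NTASKS; i++)
		if (tasks[i].ready && tasks[i].level == FOREGROUND && tasks[i].slice > 0)
			return &tasks[i];

	/* Otherwise fall back to the first ready background task. */
	for (int i = 0; i < NTASKS; i++)
		if (tasks[i].ready && tasks[i].slice > 0)
			return &tasks[i];

	return NULL;        /* nothing runnable: run the idle task */
}

void boost_starved(void)
{
	for (int i = 0; i < NTASKS; i++)
		if (tasks[i].ready && tasks[i].level == BACKGROUND &&
		    ++tasks[i].waited > MAX_WAIT) {
			tasks[i].level  = FOREGROUND;   /* temporary promotion */
			tasks[i].waited = 0;
		}
}

Time slices (the slice field) keep a promoted background task from monopolizing the CPU, which is exactly the point made above about releasing the CPU after running for a while.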

2, Schedule()

The purpose of schedule() is to find the next task to run and switch to it.

Source code (the schedule() of early Linux kernels such as Linux 0.11, with comments translated):

// Task 0 is the idle task; it runs only when no other task can run.
// It cannot be killed and cannot sleep. The 'state' field of task 0 is never used.

void schedule(void)
{
	int i,next,c;
	struct task_struct ** p; // pointer used to walk the task[] array (a pointer to a task_struct pointer)

/* check alarm, wake up any interruptible tasks that have got a signal */
// Check each process's alarm timer and wake up any interruptible task
// that has received a non-blocked signal

	// p starts at the last slot of the task array; scan all tasks in reverse
	// order, skipping empty (NULL) slots
	for(p = &LAST_TASK ; p > &FIRST_TASK ; --p)
		if (*p) { // *p points to the task in this slot
			// jiffies is the number of ticks (one tick every 10 ms) since boot.
			// If the task's alarm timer has expired, set the SIGALRM bit in its
			// signal bitmap and clear the timer.
			if ((*p)->alarm && (*p)->alarm < jiffies) {
				(*p)->signal |= (1<<(SIGALRM-1));
				(*p)->alarm = 0;
			}
			// If the signal bitmap shows a pending signal that is not blocked and
			// the task is in interruptible sleep, make it runnable.
			if (((*p)->signal & ~(_BLOCKABLE & (*p)->blocked)) &&
			(*p)->state==TASK_INTERRUPTIBLE)
				(*p)->state=TASK_RUNNING;   // set to ready (runnable)
		}

/* this is the scheduler proper: */

// Process scheduling: examine the runnable tasks and decide which one runs next.
	while (1) {
		c = -1;                 // largest counter seen so far (-1 = none yet)
		next = 0;               // task number to switch to (0 = the idle task)
		i = NR_TASKS;
		p = &task[NR_TASKS];    // start one past the last slot of the task array

		// Find the runnable task with the largest counter (its remaining time
		// slice, decremented on every timer tick while it runs). The task that
		// has run the least wins; next records its task number.
		while (--i) {
			if (!*--p)
				continue;
			// TASK_RUNNING here means "runnable"; counter is the remaining time slice.
			// If this task's counter beats the best seen so far, record it in c and next.
			if ((*p)->state == TASK_RUNNING && (*p)->counter > c)
				c = (*p)->counter, next = i;
		}

		
		// If some runnable task still has time slice left (c > 0), or no task is
		// runnable at all (c is still -1), leave the loop and perform the switch.
		if (c) break;

		// Every runnable task has used up its time slice (c == 0): recompute the
		// counter of every task (runnable or not) from its priority, then go back
		// to the top of the loop and compare again.
		// Formula: counter = counter/2 + priority
		for(p = &LAST_TASK ; p > &FIRST_TASK ; --p)
			if (*p)
				(*p)->counter = ((*p)->counter >> 1) +
						(*p)->priority;
	}
	// Switch to the task whose number is next and let it run. If no other task
	// is runnable, next is still 0, so the scheduler falls back to the idle task.
	switch_to(next);    // task switch: hand the CPU to task number next
}

1.counter (time slice)

counter behaves as a classic time slice, so this part is round-robin scheduling, which guarantees response time.
When do_timer() decrements the running task's counter to 0, schedule() is called, as in the sketch below.
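A simplified sketch of the relevant part of the timer-interrupt handler (paraphrased from the do_timer() of early Linux kernels; the accounting and timer-list bookkeeping is omitted here):

// Called on every timer tick (roughly every 10 ms).
// cpl is the privilege level the CPU was running at when the tick arrived:
// 0 means kernel mode, non-zero means user mode.
void do_timer(long cpl)
{
	/* ... user/system time accounting and kernel timers omitted ... */

	if ((--current->counter) > 0)
		return;             // the current task still has time slice left
	current->counter = 0;
	if (!cpl)
		return;             // do not preempt a task running in kernel mode
	schedule();                 // time slice used up: pick another task
}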

2.counter (priority)

counter also encodes a priority that is adjusted dynamically: whenever all runnable tasks have used up their time slices, every task gets counter = counter/2 + priority.
A blocked process keeps half of its old counter at each recomputation while adding its priority, so when it becomes ready again it tends to have a larger counter than processes that kept running, and it is scheduled ahead of them. The small derivation below shows that this bonus stays bounded.
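To see that the bonus cannot grow without limit, write the recomputation as a recurrence (a sketch; c_k is the counter of a task that stays blocked across k recomputations and therefore never spends it, and p is its priority):

\[
  c_{k+1} = \frac{c_k}{2} + p
  \quad\Longrightarrow\quad
  c_k = \frac{c_0}{2^{k}} + p \sum_{j=0}^{k-1} \frac{1}{2^{j}}
  \;\longrightarrow\; 2p \quad (k \to \infty).
\]

So a task that sleeps for a long time accumulates at most roughly twice its priority as a time slice: I/O-bound and interactive tasks get a boost over CPU-bound tasks, but the boost is bounded.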

Summary

Tip: here is a summary of the article:

The core of the scheduling function selects the next task according to the process time-slice and priority mechanism.

It loops over every task in the task array, picks the ready task with the largest counter (remaining execution time), and calls switch_to(next) to switch to that task.

If the counter of every ready task is zero, all tasks have used up their time slices, so each task's counter is recomputed from its priority weight, and the loop then re-examines the time slices of all tasks.
