Slow SQL Analysis

Slow Query Log

Understanding slow queries

MySQL's slow query log is a log provided by MySQL that records statements whose execution time exceeds a configurable threshold (10 seconds by default). Statements that cross the threshold are written to the log, and the logged SQL can then be analyzed with EXPLAIN.

By default, the slow query log is disabled and has to be enabled manually. Enabling it is generally not recommended unless you need it for tuning, because writing the slow query log has some performance impact.

Enable temporarily
show variables like '%slow_query_log%';

Use the statement above to check whether the slow query log is enabled and to find the location of the slow SQL log file. slow_query_log: OFF indicates it is disabled.

Use the following statement to enable the slow query log (0 = off, 1 = on). It takes effect immediately for the running instance, but it does not persist: the setting is lost when MySQL restarts.

SET GLOBAL slow_query_log = 1;

Query again and you will see it is now enabled.
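
Something like the following output is typical; the exact file path depends on your data directory and host name, so treat the values as illustrative:

show variables like '%slow_query_log%';
-- Variable_name       | Value
-- slow_query_log      | ON
-- slow_query_log_file | /var/lib/mysql/localhost-slow.log   (illustrative)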

Enable permanently

Keeping this enabled permanently is usually unnecessary. If you do want it to survive restarts, modify the configuration file my.cnf and add or change the following parameters:

slow_query_log = 1
slow_query_log_file = /var/lib/mysql/[host_name]-slow.log

slow_query_log_file specifies the path of the slow query log file. If it is not set, the server uses a default file named [host_name]-slow.log.

Set the slow SQL threshold

MySQL's default slow query threshold is 10 seconds; it can be changed with the following commands:

-- Check the current threshold
show variables like '%long_query_time%';

-- Set the threshold (in seconds)
set global long_query_time = 3;

Because the variable is set with GLOBAL, reconnect the client (open a new session) after setting it for the new value to show up.
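
For example, in the new session (the value 3.000000 assumes the SET above succeeded):

show variables like 'long_query_time';	-- expect 3.000000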

View the slow query log

- Simulate a query that exceeds the threshold so a log entry is generated:

SELECT SLEEP(4);

The statement now appears in the slow query log file.
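
Besides reading the log file, the server keeps a counter of slow statements, which is a quick way to verify that the SLEEP query was captured:

-- Number of statements that exceeded long_query_time since server startup
SHOW GLOBAL STATUS LIKE 'Slow_queries';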

Log Analysis Tool

Manually searching and analyzing the log is tedious in a production environment, so MySQL provides a log analysis tool, mysqldumpslow.

Analysis parameters:
-s: how to sort the output; the sort keys are:
    c: number of times the statement was executed
    l: lock time
    r: rows returned
    t: query time
    al: average lock time
    ar: average rows returned
    at: average query time
-t: return only the first N entries (for example, -t 10 returns the top 10 SQL records)
-g: filter with a regular-expression pattern (case-insensitive)

Common query examples:

-- Top 10 SQL statements by number of rows returned
mysqldumpslow -s r -t 10 /var/lib/mysql/localhost-slow.log

-- Top 10 SQL statements by number of executions
mysqldumpslow -s c -t 10 /var/lib/mysql/localhost-slow.log

-- Top 10 queries, sorted by query time, that contain a join
mysqldumpslow -s t -t 10 -g "join" /var/lib/mysql/localhost-slow.log

-- It is recommended to pipe these commands through more, otherwise a large log can flood the screen
mysqldumpslow -s r -t 10 /var/lib/mysql/localhost-slow.log | more

Analyzing SQL with show profile

Add test data: create two tables and insert simulated test rows.

CREATE TABLE dept
(
	id INT PRIMARY KEY AUTO_INCREMENT,
	deptno MEDIUMINT NOT NULL DEFAULT 0,
	dname VARCHAR(20) NOT NULL DEFAULT '',
	loc  VARCHAR(13) NOT NULL DEFAULT ''
);

CREATE TABLE emp
(
	id INT PRIMARY KEY AUTO_INCREMENT,
	empno  MEDIUMINT NOT NULL DEFAULT 0,	-- employee number
	ename VARCHAR(20) NOT NULL DEFAULT '',	-- name
	job  VARCHAR(9) NOT NULL DEFAULT '',	-- job title
	mgr  MEDIUMINT NOT NULL DEFAULT 0, 	-- manager number
	hiredate DATE NOT NULL,			-- hire date
	sal DECIMAL(7, 2) NOT NULL,		-- salary
	comm DECIMAL(7, 2) NOT NULL,		-- commission
	deptno MEDIUMINT NOT NULL DEFAULT 0	-- department number
);


/**
	If creating the functions below fails with "This function has none of DETERMINISTIC, NO SQL,
	or READS SQL DATA in its declaration and binary logging is enabled", you may want to use the
	less safe log_bin_trust_function_creators variable:
		show variables like 'log_bin_trust_function_creators';	-- check whether it is enabled
		set global log_bin_trust_function_creators = 1;		-- enable it
*/


-- Function that generates a random string of length n
DELIMITER $$
CREATE FUNCTION rand_string( n INT ) RETURNS VARCHAR(255)
BEGIN 
	DECLARE chars_str VARCHAR(100) DEFAULT 'qwertyuiopasdfghjklzxcvbnm';
	DECLARE return_str VARCHAR(100) DEFAULT '';
	DECLARE i INT DEFAULT 0;
	
	WHILE i < n DO
		SET return_str = CONCAT( return_str, SUBSTRING( chars_str, FLOOR( 1 + RAND() * 26 ), 1 ) );	-- append one random character from chars_str (26 letters)
		SET i = i + 1;
	END WHILE;
	
	RETURN return_str;
END $$
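
A quick sanity check of the function, assuming it was created successfully (the delimiter is still $$ at this point, so reset it first; the output is random, the value shown is only illustrative):

DELIMITER ;
SELECT rand_string(6);	-- e.g. 'kqwpto'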


-- Function that generates a random number (used as a department number)
DELIMITER $$
CREATE FUNCTION rand_num() RETURNS INT(5)
BEGIN 
	DECLARE i INT DEFAULT 0;
		SET i = FLOOR( 100 + RAND() * 10 );
	RETURN i;
END$$

-- drop function rand_num;  	-- drop the function if it needs to be recreated


-- Stored procedures for bulk-inserting data
-- Turn off autocommit first so each procedure commits its inserts in one batch
DELIMITER ;
SET autocommit = 0;

DELIMITER $$
CREATE PROCEDURE insert_emp( IN START INT(10), IN max_num INT(10) )
BEGIN 
	DECLARE i INT DEFAULT 0;
	REPEAT
		SET i = i + 1;
		INSERT INTO emp ( empno, ename, job, mgr, hiredate, sal, comm, deptno )
		VALUES ( (START+i), rand_string(6), 'SALESMAN', 0001, CURDATE(), 2000, 400, rand_num() );
		UNTIL i = max_num
	END REPEAT;
	COMMIT;
END $$



DELIMITER $$
CREATE PROCEDURE insert_dept( IN START INT(10), IN max_num INT(10) )
BEGIN 
	DECLARE i INT DEFAULT 0;
	REPEAT
		SET i = i + 1;
		INSERT INTO dept( deptno, dname, loc ) VALUES ( (START+i), rand_string(10), rand_string(8) );
		
		UNTIL i = max_num
	END REPEAT;
	COMMIT;
END $$


DELIMITER ;

-- insert 10 rows into dept
CALL insert_dept( 100, 10 );

-- insert 500,000 rows into emp
CALL insert_emp( 100001, 500000 );
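
To verify that the test data was generated, a couple of quick counts (the expected values assume both calls above completed):

SELECT COUNT(*) FROM dept;	-- expect 10
SELECT COUNT(*) FROM emp;	-- expect 500000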

show profile is a tool provided by MySQL for analyzing the resource consumption of SQL statements executed in the current session, and it can serve as a measuring stick for SQL tuning. It is disabled by default and keeps only the most recent 15 results. Once enabled, the statements you run are recorded in the background, and when you analyze them you pick the statement directly from the recorded list.

Enable profiling

-- Check whether profiling is enabled (it is off by default)
SHOW VARIABLES LIKE 'profiling';

-- Enable it (reconnect the client for the change to take effect)
set global profiling = 1;
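
How many statements are kept is controlled by profiling_history_size (15 by default); it can be checked and, if needed, raised for the session:

show variables like 'profiling_history_size';	-- default 15
-- set profiling_history_size = 50;	-- optional, illustrative value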

Run a test query

-- GROUP BY id%10 is meaningless here; it simply defeats the index and forces extra work
SELECT * FROM emp GROUP BY id%10 LIMIT 150000;	-- 0.482s

Analyze the SQL execution process

show profiles;

The statement you just ran appears in the list.
Query_ID: the query id, used to look up more detailed execution information.
Duration: the execution time.
Query: the SQL statement that was executed.

View the detailed execution information for a given Query_ID

-- Execution Plan
EXPLAIN SELECT * FROM emp GROUP BY id%10 LIMIT 150000;
-- Detailed profile information for the statement with the corresponding Query_ID
SHOW PROFILE cpu, block io FOR QUERY 5;

The output of SHOW PROFILE lists the time spent in each step of the SQL execution.

The execution plan reveals a temporary table and an external sort (filesort). The profile output confirms it: Creating tmp table shows a temporary table being created, Sending data shows the rows being copied into it (and turns out to be very time-consuming), then the data is sorted and the temporary table is dropped.
You can see that show profile is very useful in combination with the execution plan.
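
As a hedged follow-up sketch (not part of the original analysis): after changing the statement you can profile it again and compare. Grouping by the indexed primary key instead of the expression id%10 is not an equivalent query, but it illustrates how an indexed grouping avoids the temporary table and filesort:

EXPLAIN SELECT * FROM emp GROUP BY id LIMIT 150000;	-- no Using temporary / Using filesort expected
show profiles;	-- note the new Query_ID, then:
-- SHOW PROFILE cpu, block io FOR QUERY <new Query_ID>;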

Meaning of each show profile parameter (BLOCK IO and CPU, used in the example above, are the ones to focus on):
ALL: show all cost information
BLOCK IO: show block I/O overhead
CONTEXT SWITCHES: context-switching overhead
CPU: show CPU cost information
IPC: show send and receive overhead
MEMORY: show memory overhead
PAGE FAULTS: show page-fault overhead
SOURCE: show cost information together with Source_function, Source_file, and Source_line
SWAPS: show swap overhead
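
Several of these types can be combined in a single statement; a minimal sketch reusing the Query_ID from the example above:

SHOW PROFILE cpu, memory, block io FOR QUERY 5;
-- or request everything at once:
SHOW PROFILE ALL FOR QUERY 5;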

If any of the following shows up in the execution details (as Creating tmp table did in the example above), the SQL needs optimizing:
Converting HEAP to MyISAM: the result set is too large for memory, so the temporary table is moved to disk.
Creating tmp table: a temporary table is created; data is copied into it and the table is dropped after use.
Copying to tmp table on disk: an in-memory temporary table is being copied to disk, which is dangerous!
locked: the query was blocked by locks.
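
If the problem really is temporary tables spilling to disk rather than the query itself, one mitigation is to raise the in-memory temporary-table limits for the session; this only treats the symptom, and the sizes below are illustrative:

-- MySQL uses the smaller of these two limits for in-memory temporary tables
SET SESSION tmp_table_size = 64 * 1024 * 1024;
SET SESSION max_heap_table_size = 64 * 1024 * 1024;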

Enable the global query log

The global query log keeps a record of every SQL statement executed. It is mainly used in test environments and should never be enabled in production.

Enable via the configuration file
Modify my.cnf:

# enable the general query log
general_log=1
# path to the log file
general_log_file=/path/logfile
# output format (FILE or TABLE)
log_output=FILE

Enable with SQL (recommended)

# enable
set global general_log=1;
set global log_output='TABLE';

After that, every SQL statement you run is recorded in the general_log table of the built-in mysql system database and can be viewed with the following query:

select * from mysql.general_log;
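
When you are done testing, a minimal cleanup sketch: switch the log off again and clear the table so it does not keep growing:

set global general_log = 0;
truncate table mysql.general_log;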

A reference on performance_schema-based performance analysis is also attached: https://www.cnblogs.com/zhoujinyi/p/5236705.html
