Linux system analysis tools: a summary of the sar command


catalog

sar introduction

sar usage

sar installation and startup

sar usage

1. Historical statistics

2. Statistics of CPU usage

sar -u 2 3 # sample every 2 seconds, 3 times in total

sar -q # view load average

3. Check memory usage

4. View system Swap partition information

5. View I/O and transfer rate statistics

6. Statistics of disk usage

7. Statistics of process, inode, file and lock table status

8. Statistical network information

sar -n DEV 1 1 # sample once per second, once in total; the Average line is the mean over all samples

sar -n EDEV 1 1 # statistics on network device communication failures

sar -n SOCK 1 1 # statistics on socket connections

sar -n TCP 1 3 # statistics on TCP connections

Summary of sar -n usage

sar command summary

sar introduction

sar usage

sar is short for System Activity Reporter. It is currently the most comprehensive system performance analysis tool on Linux, and can report system activity from the following aspects:

  • File reading and writing
  • System call usage
  • Disk I/O
  • CPU utilization
  • Memory usage
  • Process activity and IPC

sar installation and startup

yum install sysstat

# Default paths after installation
find / -name sysstat
/etc/sysconfig/sysstat
/etc/cron.d/sysstat
/etc/selinux/targeted/active/modules/100/sysstat

# Start/stop commands
systemctl start sysstat
systemctl status sysstat
systemctl stop sysstat
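On CentOS/RHEL, the historical data collection is driven by cron rather than by a long-running daemon. The exact contents of /etc/cron.d/sysstat vary by distribution and sysstat version; a typical CentOS 7 layout looks roughly like this (paths shown are the CentOS 7 defaults):

```shell
# /etc/cron.d/sysstat (typical CentOS 7 contents; verify on your own system)
# Collect system activity data every 10 minutes into /var/log/sa/saDD
*/10 * * * * root /usr/lib64/sa/sa1 1 1
# Generate the daily summary report shortly before midnight
53 23 * * * root /usr/lib64/sa/sa2 -A
```

Changing the `*/10` interval changes how fine-grained the historical reports in section 1 are.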
sar usage

1. Historical statistics

# By default, logs are saved under /var/log/sa/, one file per day, collected every 10 minutes
[root@node1 ~]# sar
Linux 3.10.0-957.el7.x86_64 (k8s.node2)   06/26/2020   _x86_64_   (1 CPU)
08:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
08:10:01 AM     all      0.07      0.00      0.09      0.06      0.00     99.78
08:20:01 AM     all      0.07      0.00      0.09      0.00      0.00     99.84
08:30:01 AM     all      0.08      0.00      0.08      0.00      0.00     99.84
08:40:01 AM     all      0.11      0.00      0.26      0.06      0.00     99.57
Average:        all      0.08      0.00      0.13      0.03      0.00     99.76

# Or read a specific day's file directly
[root@node1 ~]# sar -r -f /var/log/sa/sa26
Linux 3.10.0-957.el7.x86_64 (k8s.node2)   06/26/2020   _x86_64_   (1 CPU)
08:00:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
08:10:01 AM    427400    568496     57.08      2104    341612    448300     45.01    263976    166348         0
08:20:01 AM    426076    569820     57.22      2104    341620    450800     45.27    264840    166384         0
08:30:01 AM    427080    568816     57.12      2104    341608    448300     45.01    264072    166364         0
08:40:01 AM    400196    595700     59.82      2104    341740    452332     45.42    265584    166348         4
Average:       420188    575708     57.81      2104    341645    449933     45.18    264618    166361         1
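To pull a single figure out of such a report in a script, the Average line can be parsed with awk. A minimal sketch, using the CPU report above as sample input (column layouts can differ between sysstat versions, so verify the field positions on your system):

```shell
# Extract the average %idle (last field of the "Average:" line) from a sar CPU report
sar_output='08:00:01 AM  CPU  %user  %nice  %system  %iowait  %steal  %idle
08:10:01 AM  all  0.07  0.00  0.09  0.06  0.00  99.78
Average:     all  0.08  0.00  0.13  0.03  0.00  99.76'

echo "$sar_output" | awk '/^Average:/ {print $NF}'
# On a live system: sar -f /var/log/sa/sa26 | awk '/^Average:/ {print $NF}'
```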

2. Statistics of CPU usage

sar -u 2 3 # sample every 2 seconds, 3 times in total

[root@node1 ~]# sar -u 2 3
Linux 3.10.0-957.el7.x86_64 (node1)   06/26/2020   _x86_64_   (1 CPU)
09:30:02 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
09:30:04 AM     all      0.00      0.00      0.50      0.00      0.00     99.50
09:30:06 AM     all      0.00      0.00      0.00      0.00      0.00    100.00
09:30:08 AM     all      0.50      0.00      0.00      0.00      0.00     99.50
Average:        all      0.17      0.00      0.17      0.00      0.00     99.67

# View the usage of each CPU
sar -P ALL 2 3
# Also write the results to a file under /tmp/
sar -u -o /tmp/sa 2 3
# Read back from the file
sar -f /tmp/sa

#%user  CPU time spent in user space

#%nice  CPU time spent on processes whose priority has been adjusted (niced)

#%system  CPU time spent in kernel space

#%iowait  percentage of time the CPU was waiting for I/O

#%steal  CPU time stolen by the hypervisor to serve other virtual machines

#%idle  percentage of time the CPU was idle

#In the output above, focus on %iowait and %idle. A high %iowait indicates an I/O bottleneck: disk I/O cannot keep up with the workload. A low %idle means CPU utilization is high; combine this with memory usage to determine whether the CPU is the real bottleneck.
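A check like this is easy to automate. A sketch that flags a high average %iowait; the field position ($6) assumes the column order shown above, and the 10% threshold is an illustrative choice, not a universal rule:

```shell
# Warn when the average %iowait in a sar -u style report exceeds a threshold.
# Field 6 is %iowait given the header: CPU %user %nice %system %iowait %steal %idle
check_iowait() {
  awk -v limit="$1" '$1 == "Average:" {
    if ($6 + 0 > limit) print "WARNING: avg iowait " $6 "% exceeds " limit "%"
    else                print "OK: avg iowait " $6 "%"
  }'
}

printf 'Average: all 0.17 0.00 0.17 25.00 0.00 74.66\n' | check_iowait 10
# On a live system: sar -u 2 3 | check_iowait 10
```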

sar -q # view load average

#runq-sz  run queue length (number of processes waiting to run; as a rule of thumb, no more than 3 per CPU core)

#plist-sz  number of processes and threads in the process list

#ldavg-1  system load average over the last 1 minute (to judge saturation, compare it with the number of CPU cores)

#ldavg-5  system load average over the last 5 minutes

#ldavg-15  system load average over the last 15 minutes
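Since the ldavg columns are system-wide, it can help to normalize them by core count before comparing machines. A small awk sketch (the input numbers are illustrative):

```shell
# Normalize a load average by the number of CPU cores:
# a per-core value above 1.0 means runnable tasks are queueing.
normalize_load() {  # $1 = load average, $2 = core count
  awk -v load="$1" -v cores="$2" 'BEGIN { printf "%.2f\n", load / cores }'
}

normalize_load 2.40 4   # illustrative values: load 2.40 on a 4-core box
# On a live system: normalize_load "$(awk '{print $1}' /proc/loadavg)" "$(nproc)"
```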

3. Check memory usage

[root@node1 ~]# sar -r
Linux 3.10.0-957.el7.x86_64 (k8s.node2)   06/26/2020   _x86_64_   (1 CPU)
08:00:01 AM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
08:10:01 AM    427400    568496     57.08      2104    341612    448300     45.01    263976    166348         0
08:20:01 AM    426076    569820     57.22      2104    341620    450800     45.27    264840    166384         0
08:30:01 AM    427080    568816     57.12      2104    341608    448300     45.01    264072    166364         0
08:40:01 AM    400196    595700     59.82      2104    341740    452332     45.42    265584    166348         4
Average:       420188    575708     57.81      2104    341645    449933     45.18    264618    166361         1

#kbmemfree  free physical memory

#kbmemused  physical memory in use

#%memused  percentage of physical memory used

#kbbuffers  physical memory used by the kernel as buffers (kbbuffers and kbcached correspond to the buffer and cache columns of the free command)

#kbcached  physical memory used to cache file data

#kbcommit  the minimum amount of memory (physical memory + swap) needed to guarantee the current workload cannot run out of memory

#%commit  kbcommit as a percentage of total memory (physical memory + swap)
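%memused is simply kbmemused / (kbmemfree + kbmemused). Recomputing it from the 08:10:01 sample above as a sanity check:

```shell
# %memused = kbmemused / (kbmemfree + kbmemused) * 100
# Values taken from the 08:10:01 row: kbmemfree=427400, kbmemused=568496
awk 'BEGIN { free = 427400; used = 568496; printf "%.2f\n", used / (free + used) * 100 }'
```

This reproduces the 57.08 shown in the report.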

4. View system Swap partition information

Note: the lowercase -w option actually reports task creation and context switching (proc/s and cswch/s), which is what the output below shows; swap activity is reported by the uppercase -W option (pswpin/s and pswpout/s).

[root@node1 ~]# sar -w
Linux 3.10.0-957.el7.x86_64 (k8s.node2)   06/26/2020   _x86_64_   (1 CPU)
08:00:01 AM    proc/s   cswch/s
08:10:01 AM      0.04     62.66
08:20:01 AM      0.02     56.84
08:30:01 AM      0.01     54.77
08:40:01 AM      0.05     64.78
Average:         0.03     59.76

#pswpin/s  pages swapped in from the swap partition per second (sar -W)

#pswpout/s  pages swapped out from the system to the swap partition per second (sar -W)
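The counters behind these per-second rates live in the kernel's /proc/vmstat (Linux-specific; they are cumulative counts since boot, which sar turns into rates by differencing samples):

```shell
# Cumulative pages swapped in/out since boot; sar -W reports deltas of these per second
grep -E '^pswp(in|out) ' /proc/vmstat
```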

5. View I/O and transfer rate statistics

[root@node1 ~]# sar -b
Linux 3.10.0-957.el7.x86_64 (k8s.node2)   06/26/2020   _x86_64_   (1 CPU)
08:00:01 AM       tps      rtps      wtps   bread/s   bwrtn/s
08:10:01 AM      0.15      0.01      0.14      0.16      3.22
08:20:01 AM      0.07      0.00      0.07      0.00      0.76
08:30:01 AM      0.06      0.00      0.06      0.00      0.71
08:40:01 AM      0.60      0.39      0.21     10.82     10.00
Average:         0.22      0.10      0.12      2.74      3.67

#tps  total I/O transfers to the disk per second (same as tps in iostat)

#rtps  total read requests issued to the disk per second

#wtps  total write requests issued to the disk per second

#bread/s  total blocks read from the disk per second

#bwrtn/s  total blocks written to the disk per second
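sar counts blocks as 512-byte sectors, so bread/s converts to a byte rate directly. Checking the 08:40:01 sample above (bread/s = 10.82):

```shell
# 1 block = 512 bytes, so KB/s = blocks/s * 512 / 1024 (i.e. blocks/s divided by 2)
awk 'BEGIN { printf "%.2f KB/s\n", 10.82 * 512 / 1024 }'
```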

6. Statistics of disk usage

[root@node1 ~]# sar -d
Linux 3.10.0-957.el7.x86_64 (k8s.node2)   06/26/2020   _x86_64_   (1 CPU)
08:00:01 AM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
08:10:01 AM    dev8-0      0.15      0.16      3.22     22.49      0.00      7.19      6.20      0.09
08:10:01 AM  dev253-0      0.17      0.16      3.21     19.78      0.00      5.78      4.62      0.08
08:10:01 AM  dev253-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:20:01 AM    dev8-0      0.07      0.00      0.76     10.76      0.00      0.52      0.29      0.00
08:20:01 AM  dev253-0      0.08      0.00      0.76      9.62      0.00      0.55      0.26      0.00
08:20:01 AM  dev253-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:30:01 AM    dev8-0      0.06      0.00      0.71     12.09      0.00     11.03     10.66      0.06
08:30:01 AM  dev253-0      0.07      0.00      0.71     10.57      0.00      9.78      9.33      0.06
08:30:01 AM  dev253-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
08:40:01 AM    dev8-0      0.60     10.82     10.00     34.89      0.00      8.01      7.28      0.43
08:40:01 AM  dev253-0      0.71     10.62     10.00     29.09      0.00      6.70      5.86      0.42
08:40:01 AM  dev253-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:       dev8-0      0.22      2.74      3.67     29.30      0.00      7.47      6.76      0.15
Average:     dev253-0      0.26      2.69      3.67     24.84      0.00      6.28      5.45      0.14
Average:     dev253-1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

#DEV  disk device name; without -p, names like dev253-0 are shown, so adding -p gives more readable device names

#tps  total I/O transfers per second

#rd_sec/s  total sectors read per second

#wr_sec/s  total sectors written per second

#avgrq-sz  average size of each disk I/O request, in sectors

#avgqu-sz  average length of the disk request queue

#await  average time per request from issuing the disk operation to completion, including time spent waiting in the queue, in milliseconds (seek time + queue time + service time)

#svctm  average service time of an I/O request, i.e. excluding time spent in the request queue

#%util  percentage of CPU time during which I/O requests were issued to the device; the closer to 100%, the more saturated the disk
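To spot busy disks in a long sar -d report, the rows can be filtered on the %util column (the last field). A sketch against a few rows from the sample above; the 0.42 threshold is only chosen to make the example selective:

```shell
# Print device rows whose %util (last field) exceeds a threshold
report='08:40:01 AM dev8-0 0.60 10.82 10.00 34.89 0.00 8.01 7.28 0.43
08:40:01 AM dev253-0 0.71 10.62 10.00 29.09 0.00 6.70 5.86 0.42
08:40:01 AM dev253-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00'

echo "$report" | awk '$NF + 0 > 0.42 { print $3, "%util=" $NF }'
# On a live system, a percent threshold makes more sense: sar -d -p | awk '$NF+0 > 80'
```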

7. Statistics of process, inode, file and lock table status

[root@node1 ~]# sar -v
Linux 3.10.0-957.el7.x86_64 (k8s.node2)   06/26/2020   _x86_64_   (1 CPU)
08:00:01 AM dentunusd   file-nr  inode-nr    pty-nr
08:10:01 AM     67612      1152     24497         2
08:20:01 AM     67637      1216     24533         2
08:30:01 AM     67630      1152     24497         2
08:40:01 AM     91215      1216     48189         2
Average:        73524      1184     30429         2

#dentunusd  number of unused entries in the directory entry (dentry) cache

#file-nr  number of file handles in use by the system

#inode-nr  number of inodes in use

#pty-nr  number of pseudo-terminals in use

The inode and file-handle values here are not the ones shown by ulimit -a; they are kernel-level values defined via sysctl.conf. fs.file-max limits the number of file handles that can be opened system-wide, while ulimit -n limits how many a single process can open. They can be inspected with sysctl -a | grep inode and sysctl -a | grep file.

fs.file-max specifies the system-wide (kernel-level) limit on the number of file handles that all processes together may open; the value in file-max denotes the maximum number of file handles the Linux kernel will allocate. It should be raised when the error "Too many open files in system" appears:

# cat /proc/sys/fs/file-max
4096
# echo 100000 > /proc/sys/fs/file-max

or

# echo "fs.file-max=65535" >> /etc/sysctl.conf
# sysctl -p

file-nr shows the number of currently open file handles in the system as three numbers: the first is the number of allocated file descriptors, the second the number of allocated-but-unused handles, and the third the maximum number of handles that can be opened (consistent with file-max). The kernel allocates file handles dynamically but does not release them again (this may no longer apply to recent kernels: in my file-nr output the second column is always 0, and the first column both rises and falls). In man bash, the section describing ulimit covers control over the resources (file handles, number of processes, core file size, etc.) available to the shell and the processes it starts; this is process-level, i.e. it applies to a shell session and the processes it starts.
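The three file-nr fields can be read straight from /proc (Linux-specific):

```shell
# /proc/sys/fs/file-nr: allocated handles, allocated-but-unused handles, system maximum
read -r allocated unused maximum < /proc/sys/fs/file-nr
echo "allocated=$allocated unused=$unused max=$maximum"
```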

8. Statistical network information

The sar -n option accepts a number of keywords: DEV, EDEV, NFS, NFSD, SOCK, IP, EIP, ICMP, EICMP, TCP, ETCP, UDP, SOCK6, IP6, EIP6, ICMP6, EICMP6 and UDP6. DEV shows network interface statistics, EDEV shows network error statistics, NFS shows active NFS client statistics, NFSD shows NFS server statistics, SOCK shows socket information, and FULL reports all of the above. They can be used alone or combined.

sar -n DEV 1 1 # sample once per second, once in total; the Average line below is the mean over all samples

#IFACE  name of the local network interface

#rxpck/s  packets received per second

#txpck/s  packets transmitted per second

#rxKB/s  kilobytes received per second

#txKB/s  kilobytes transmitted per second

#rxcmp/s  compressed packets received per second

#txcmp/s  compressed packets transmitted per second

#rxmcst/s  multicast packets received per second
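rxKB/s and txKB/s are in kilobytes, so comparing against a link's capacity needs a conversion to megabits. The input number below is illustrative:

```shell
# Convert a throughput in KB/s to Mbit/s: KB/s * 8 bits/byte / 1024 Kbit/Mbit
awk 'BEGIN { printf "%.2f Mbit/s\n", 11200 * 8 / 1024 }'
```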

sar -n EDEV 1 1 # statistics on network device communication failures:

#IFACE  network interface name

#rxerr/s  bad packets received per second

#txerr/s  errors per second while transmitting packets

#coll/s  collisions per second while transmitting (only meaningful in half-duplex mode)

#rxdrop/s  received packets dropped per second because the receive buffer was full

#txdrop/s  transmitted packets dropped per second because the transmit buffer was full

#txcarr/s  carrier errors per second while transmitting packets

#rxfram/s  frame alignment errors per second while receiving packets

#rxfifo/s  FIFO overrun errors per second while receiving packets

#txfifo/s  FIFO overrun errors per second while transmitting packets

sar -n SOCK 1 1 # statistics on socket connections

#totsck  total number of sockets currently in use

#tcpsck  total number of TCP sockets currently in use

#udpsck  total number of UDP sockets currently in use

#rawsck  total number of RAW sockets currently in use

#ip-frag  current number of IP fragments

#tcp-tw  number of TCP sockets in the TIME-WAIT state
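The counts that sar -n SOCK reports are maintained by the kernel and also exposed in /proc/net/sockstat (Linux-specific; the exact set of lines varies by kernel and enabled protocols):

```shell
# Total sockets in use, plus per-protocol counts (TCP inuse/tw, UDP inuse, ...)
cat /proc/net/sockstat
```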

# If the FULL keyword is used, it is equivalent to the combination of DEV, EDEV and SOCK

sar -n TCP 1 3 # statistics on TCP connections

#active/s  new active (locally initiated) TCP connections per second

#passive/s  new passive (remotely initiated) TCP connections per second

#iseg/s  TCP segments received per second

#oseg/s  TCP segments sent per second

Summary of sar -n usage

-n DEV: network interface statistics
-n EDEV: network interface errors
-n IP: IP datagram statistics
-n EIP: IP error statistics
-n TCP: TCP statistics
-n ETCP: TCP error statistics
-n SOCK: socket usage

sar command summary

(1)  sar -b 5 5         # I/O transfer rates
(2)  sar -B 5 5         # paging rates
(3)  sar -c 5 5         # process creation rate
(4)  sar -d 5 5         # block device activity
(5)  sar -n DEV 5 5     # network device status
(6)  sar -n SOCK 5 5    # socket usage
(7)  sar -n ALL 5 5     # all network status information
(8)  sar -P ALL 5 5     # per-CPU usage and IOWAIT statistics
(9)  sar -q 5 5         # run queue length (processes waiting to run) and load state
(10) sar -r 5 5         # memory and swap space usage
(11) sar -R 5 5         # memory statistics (pages allocated and freed, pages used as buffer and cache per second)
(12) sar -u 5 5         # CPU usage and IOWAIT (same as the default report)
(13) sar -v 5 5         # status of inode, file and other kernel tables
(14) sar -w 5 5         # context switches per second
(15) sar -W 5 5         # swapping statistics (comparable to vmstat's si/so columns)
(16) sar -x 2906 5 5    # statistics for the specified process (PID 2906): faults, user- and system-level CPU usage, and which CPU it runs on
(17) sar -y 5 5         # TTY device activity
(18) export to a file (-o) and read records back (-f)

25 June 2020, 23:21 | Views: 7835
