nginx default configuration syntax and log format


nginx default configuration

To see which configuration files nginx loads by default, open /etc/nginx/nginx.conf and look at the last line of the file.

By default it includes every other configuration file ending in .conf under /etc/nginx/conf.d/.
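In a stock nginx.conf that last line (inside the http block) is:

include /etc/nginx/conf.d/*.conf;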

See which files exist by default under /etc/nginx/conf.d/:

ls /etc/nginx/conf.d/

That is, there are two configuration files by default: nginx.conf and default.conf.

Let's interpret the nginx.conf configuration file, which is divided into three main sections.

First block (global settings):

  • user - the system user the nginx service runs as
  • worker_processes - the number of worker processes
  • error_log - the nginx error log
  • pid - the pid file written when the nginx service starts

Second block (events):

  • worker_connections - the maximum number of connections allowed per worker process
  • use - the event model used by the worker processes (epoll, select, etc.); a sketch of these first two sections follows the list
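Putting the first two sections together, a stock nginx.conf starts roughly like this (the exact values vary by version and distribution; these are illustrative):

user  nginx;                                # system user the worker processes run as
worker_processes  1;                        # number of worker processes
error_log  /var/log/nginx/error.log warn;   # error log path and level
pid        /var/run/nginx.pid;              # pid file written when the service starts

events {
    #use epoll;                             # event model (epoll on Linux, kqueue on FreeBSD, or select)
    worker_connections  1024;               # maximum connections per worker process
}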

Third block (http):

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

A detailed, annotated version of the file:

######Nginx configuration file nginx.conf Chinese Details#####

#Define the user and group that nginx runs as
user www www;

#Number of nginx worker processes; setting it equal to the total number of CPU cores is recommended.
worker_processes 8;

#Global error log level: [debug | info | notice | warn | error | crit]
error_log /usr/local/nginx/logs/error.log info;

#Process pid file
pid /usr/local/nginx/logs/nginx.pid;

#Maximum number of file descriptors a worker process may open.
#In theory this should be the maximum number of open files (ulimit -n) divided by the number of worker processes, but nginx does not distribute requests that evenly, so it is best to keep it equal to ulimit -n.
#On a Linux 2.6 kernel the open-file limit is 65535, so worker_rlimit_nofile should also be 65535.
#If you set it to 10240 instead and total concurrency reaches 30,000-40,000, a single process may exceed 10240 descriptors and nginx will return 502 errors.
worker_rlimit_nofile 65535;

events {
    #Event model: use [kqueue | rtsig | epoll | /dev/poll | select | poll]
    #epoll is recommended on Linux kernels 2.6 and later; on FreeBSD use the kqueue model.
    #Supplementary notes:
    #Like Apache, nginx offers different event models for different operating systems:
    #A) Standard event models: select and poll; nginx falls back to these if no more efficient method exists on the current system.
    #B) Efficient event models:
    #   kqueue: FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X (dual-processor MacOS X machines may suffer a kernel crash).
    #   epoll: Linux kernel 2.6 and later.
    #   /dev/poll: Solaris 7 11/99+, HP/UX 11.22+ (eventport), IRIX 6.5.15+ and Tru64 UNIX 5.1A+.
    #   eventport: Solaris 10; install the security patches to avoid a kernel crash.
    use epoll;

    #Maximum number of connections per worker process (maximum connections = worker_connections * worker_processes).
    #Tune it according to your hardware together with the worker process settings above: as large as possible, but without driving the CPU to 100%.
    worker_connections 65535;

    #keepalive timeout
    keepalive_timeout 60;

    #Buffer size for the client request header. This can be set according to your system page size: a request header normally does not exceed 1k, but since the system page size is usually larger than 1k, the page size is used here.
    #The page size can be obtained with the command getconf PAGESIZE:
    #[root@web001 ~]# getconf PAGESIZE
    #4096
    #There are cases where client_header_buffer_size exceeds 4k, but it must always be an integer multiple of the system page size.
    client_header_buffer_size 4k;

    #Cache for open file handles; not enabled by default. max sets the number of cache entries (recommended to match the number of open files) and inactive is how long a file may go unrequested before its entry is removed.
    open_file_cache max=65535 inactive=60s;

    #How often to validate the cached entries.
    #Syntax: open_file_cache_valid time  Default: open_file_cache_valid 60  Context: http, server, location. Specifies when the information cached by open_file_cache needs to be re-checked.
    open_file_cache_valid 80s;

    #Minimum number of times a file must be used within the inactive period of open_file_cache for its descriptor to stay open in the cache. If, for example, a file is not used even once within the inactive time, it is removed.
    #Syntax: open_file_cache_min_uses number  Default: open_file_cache_min_uses 1  Context: http, server, location. A larger value keeps file descriptors open in the cache longer.
    open_file_cache_min_uses 1;

    #Syntax: open_file_cache_errors on | off  Default: open_file_cache_errors off  Context: http, server, location. Specifies whether file lookup errors are cached as well.
    open_file_cache_errors on;
}

#Set up the http server and use its reverse proxy capability to provide load balancing
http {
    #Map of file extensions to MIME types
    include /etc/nginx/mime.types;

    #Default MIME type
    default_type application/octet-stream;

    #Default encoding
    #charset utf-8;

    #Hash table size for server names.
    #The hash tables holding server names are controlled by server_names_hash_max_size and server_names_hash_bucket_size. The bucket size is aligned to a multiple of the processor cache line size, which speeds up key lookups by reducing memory accesses: in the worst case a lookup needs two memory accesses, one to determine the address of the storage unit and one to find the key inside it. So if nginx reports that hash max size or hash bucket size needs to be increased, increase hash max size first.
    server_names_hash_bucket_size 128;

    #Buffer size for the client request header. As above, a request header normally does not exceed 1k, but the system page size (getconf PAGESIZE) is usually larger, so the page size is used.
    client_header_buffer_size 32k;

    #Buffers for large client request headers. By default nginx reads the header using client_header_buffer_size; if the header is too large, large_client_header_buffers is used instead.
    large_client_header_buffers 4 64k;

    #Maximum size of a file uploaded through nginx
    client_max_body_size 8m;

    #Enable efficient file transfer mode. The sendfile directive specifies whether nginx calls the sendfile function (zero copy) to output files: set it to on for normal applications, and off for heavy disk-IO applications such as download servers, to balance disk and network I/O and reduce system load. Note: set it to off if images do not display properly.
    sendfile on;

    #Enable directory listings; suitable for download servers, off by default.
    autoindex on;

    #Allow or forbid the TCP_CORK socket option; only used when sendfile is on.
    tcp_nopush on;
    tcp_nodelay on;

    #Keep-alive timeout in seconds
    keepalive_timeout 120;

    #FastCGI parameters that reduce resource usage and improve response speed; the names below can be read literally.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    #gzip module settings
    gzip on;                #enable gzip-compressed output
    gzip_min_length 1k;     #minimum size of a file to compress
    gzip_buffers 4 16k;     #compression buffers
    gzip_http_version 1.0;  #compression protocol version (default 1.1; use 1.0 if squid 2.5 sits in front)
    gzip_comp_level 2;      #compression level
    gzip_types text/plain application/x-javascript text/css application/xml;  #compression types; text/html is already included by default, so it does not need to be listed (listing it only produces a warning)
    gzip_vary on;

    #Used when limiting the number of connections per IP
    #limit_zone crawler $binary_remote_addr 10m;

    #Load balancing configuration
    upstream piao.jd.com {
        #upstream load balancing: weight can be set according to machine configuration; the higher the weight, the greater the probability of being selected.
        server 192.168.80.121:80 weight=3;
        server 192.168.80.122:80 weight=2;
        server 192.168.80.123:80 weight=3;

        #nginx's upstream currently supports the following allocation methods:
        #1. Round robin (default)
        #Each request is assigned to a different back-end server in turn; if a back-end server goes down it is removed automatically.
        #2. weight
        #Specifies the polling probability; the weight is proportional to the access ratio and is used when back-end server performance is uneven. For example:
        #upstream bakend {
        #    server 192.168.0.14 weight=10;
        #    server 192.168.0.15 weight=10;
        #}
        #3. ip_hash
        #Each request is assigned according to a hash of the client IP, so each visitor always reaches the same back-end server, which can solve the session problem. For example:
        #upstream bakend {
        #    ip_hash;
        #    server 192.168.0.14:88;
        #    server 192.168.0.15:80;
        #}
        #4. fair (third party)
        #Requests are assigned according to the back-end server's response time; servers with shorter response times are preferred.
        #upstream backend {
        #    server server1;
        #    server server2;
        #    fair;
        #}
        #5. url_hash (third party)
        #Requests are assigned according to a hash of the requested URL, so each URL is directed to the same back-end server, which is more efficient when the back end is a cache.
        #Example: add a hash statement in the upstream block; the server statements must not carry weight or other parameters, and hash_method selects the hash algorithm:
        #upstream backend {
        #    server squid1:3128;
        #    server squid2:3128;
        #    hash $request_uri;
        #    hash_method crc32;
        #}
        #tips:
        #upstream bakend {   #define the IP addresses and state of the load balancing devices
        #    ip_hash;
        #    server 127.0.0.1:9090 down;
        #    server 127.0.0.1:8080 weight=2;
        #    server 127.0.0.1:6060;
        #    server 127.0.0.1:7070 backup;
        #}
        #Add proxy_pass http://bakend/; to the server blocks that need load balancing.
        #The state of each device can be set to:
        #1. down: the server temporarily does not participate in the load
        #2. weight: the larger the weight, the greater the share of the load
        #3. max_fails: number of allowed failed requests, 1 by default; when it is exceeded, the error defined by proxy_next_upstream is returned
        #4. fail_timeout: how long to pause after max_fails failures
        #5. backup: only receives requests when all other non-backup machines are down or busy, so this machine carries the lightest load
        #nginx supports configuring several upstream groups at the same time, for different servers to use.
        #client_body_in_file_only set to on records the data POSTed by the client to a file, for debugging
        #client_body_temp_path sets the directory for those files; up to three levels of subdirectories can be configured
        #location matches URLs and can redirect requests or hand them to a new proxy / load balancer
    }

Configuration of the virtual host: /etc/nginx/conf.d/default.conf

    server {
        #Listening port
        listen 80;

        #There can be several domain names, separated by spaces
        server_name www.jd.com jd.com;
        index index.html index.htm index.php;
        root /data/www/jd;

        #Load balancing for ****
        location ~ .*.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }

        #Cache time for images
        location ~ .*.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 10d;
        }

        #Cache time for JS and CSS
        location ~ .*.(js|css)?$ {
            expires 1h;
        }

        #Log format
        #$remote_addr and $http_x_forwarded_for record the client's IP address;
        #$remote_user: the user name the client authenticated with;
        #$time_local: access time and time zone;
        #$request: the requested URL and HTTP protocol;
        #$status: the request status; 200 on success;
        #$body_bytes_sent: the size of the response body sent to the client;
        #$http_referer: the page the request was linked from;
        #$http_user_agent: information about the client's browser;
        #Usually the web server sits behind a reverse proxy, so the client's IP address cannot be obtained directly: the IP obtained through $remote_addr is the reverse proxy server's IP address. The reverse proxy adds x_forwarded_for information to the HTTP headers of the forwarded request to record the original client's IP address and the server address the client originally requested.
        log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" $http_x_forwarded_for';

        #Define the access logs for this virtual host
        access_log /usr/local/nginx/logs/host.access.log main;
        access_log /usr/local/nginx/logs/host.access.404.log log404;

        #Enable reverse proxying for "/"
        location / {
            proxy_pass http://127.0.0.1:88;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;

            #The back-end web server can obtain the user's real IP through X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            #The following are optional reverse proxy settings
            proxy_set_header Host $host;

            #Maximum size of a single file the client is allowed to request
            client_max_body_size 10m;

            #Maximum number of bytes of a client request body the buffering proxy keeps in memory.
            #If you set it to a larger value, such as 256k, submitting any image smaller than 256k works in both Firefox and IE. If you comment this directive out and rely on the default client_body_buffer_size (twice the operating system page size, i.e. 8k or 16k), problems appear: with Firefox 4.0 or IE 8.0, submitting a larger image of around 200k returns a 500 Internal Server Error.
            client_body_buffer_size 128k;

            #Have nginx intercept replies from the back end with an HTTP status code of 400 or higher.
            proxy_intercept_errors on;

            #Timeout for establishing a connection with the back-end server (initiating the handshake and waiting for a response)
            proxy_connect_timeout 90;

            #Time for the back-end server to return data (proxy send timeout): the back end must send all data within this time
            proxy_send_timeout 90;

            #Response time of the back-end server after a successful connection (proxy receive timeout): the time spent waiting for the back end to process the request
            proxy_read_timeout 90;

            #Size of the buffer in which the proxy server (nginx) keeps the user's header information.
            #Sets the buffer size for the first part of the reply read from the proxied server, which normally contains a small response header; by default it equals the size of one buffer from proxy_buffers, but it can be set smaller.
            proxy_buffer_size 4k;

            #proxy_buffers: number and size of buffers used for reading the reply from the proxied server; the default size is one memory page, 4k or 8k depending on the operating system. Set for pages under 32k on average.
            proxy_buffers 4 32k;

            #Buffer size under high load (proxy_buffers * 2)
            proxy_busy_buffers_size 64k;

            #Size threshold for writing data to proxy_temp_path, to prevent a single worker process from blocking too long while passing a file. Data from the upstream server larger than this value is written to the temporary folder.
            proxy_temp_file_write_size 64k;
        }

        #Address for viewing Nginx status
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file confpasswd;
            #The contents of the htpasswd file can be generated with the htpasswd tool shipped with Apache.
        }

        #Local dynamic/static separation reverse proxy configuration
        #All jsp pages are handled by tomcat or resin
        location ~ .(jsp|jspx|do)?$ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }

        #All static files are served directly by nginx, without tomcat or resin
        location ~ .*.(htm|html|gif|jpg|jpeg|png|bmp|swf|ioc|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$ {
            expires 15d;
        }

        location ~ .*.(js|css)?$ {
            expires 1h;
        }
    }
}
######Nginx configuration file nginx.conf Chinese Details#####

Log format

1. HTTP requests

nginx works as a web server and an HTTP proxy, so what it handles are HTTP requests.

HTTP requests are carried over TCP.

A complete HTTP exchange consists of a request and a response.

A request consists of: a request line, request headers and the request body (data).

A response consists of: a status line, response headers and the response body.

We can view a complete http request from the command line:

curl -v https://coding.imooc.com > /dev/null

In that output, lines beginning with ">" are the request and lines beginning with "<" are the response.
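Abbreviated and simplified (the exact headers will differ), the interesting part of that output looks something like this:

> GET / HTTP/1.1
> Host: coding.imooc.com
> User-Agent: curl/7.29.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx
< Content-Type: text/html; charset=utf-8
<
(the response body follows, which we discard into /dev/null)

The request line, request headers, status line and response headers from the previous section are all visible here.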

2. Nginx log types

They include error.log and access_log.

error.log records errors encountered while processing HTTP requests as well as errors in nginx's own service state, at different error levels.

access_log mainly records how each HTTP request was processed.
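Both are configured in nginx.conf; a minimal sketch (the paths and the warn level here are just examples):

error_log  /var/log/nginx/error.log  warn;     # levels: debug | info | notice | warn | error | crit
access_log /var/log/nginx/access.log main;     # "main" names a log_format defined in the http block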

Access logging is implemented mainly through log_format.

Every piece of information nginx records can be treated as a variable; log_format combines these variables and writes them to the log.

Let's take a look at the configuration of log_format

Syntax:  log_format name [escape=default|json] string ...;
Default: log_format combined "...";
Context: http (i.e. log_format may only be configured in the http block)
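For example, escape=json can be used to build a JSON-style format (the name json_main and the chosen variables here are only illustrative; escape=json makes nginx escape quotes and control characters inside variable values):

log_format json_main escape=json
    '{"remote_addr":"$remote_addr","time_local":"$time_local",'
    '"request":"$request","status":"$status",'
    '"http_user_agent":"$http_user_agent"}';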

Let's look at an example configuration in nginx.conf.

Nginx has roughly three types of variables that can be recorded in log_format

  • HTTP request variables - arg_PARAMETER, http_HEADER, sent_http_HEADER (response headers sent back by the server)
  • Built-in variable - Nginx built-in
  • Custom variable - self-defined

Example 1: Get the request header information for a request

First let's edit the log_format section of the configuration file
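A minimal sketch of such an edit (the exact format string is up to you; here $http_user_agent carries the request's User-Agent header):

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent"';            # request header we want recorded

    access_log  /var/log/nginx/access.log  main;
    ...
}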

Check the correctness of the configuration file after saving the edits

nginx -t -c /etc/nginx/nginx.conf

Reload service after check is complete

nginx -s reload -c /etc/nginx/nginx.conf

Then we request this machine many times to see the log output.

curl 127.0.0.1
curl 127.0.0.1

We can see that the user agent information has been recorded.

For nginx's built-in variables, we can view the official website information:
https://nginx.org/en/docs/http/ngx_http_core_module.html#var_status

Let's look at the default log_format

log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';

access_log  /var/log/nginx/access.log  main;

We can see that the default format is basically a set of variables enclosed in single quotes, with dashes and square brackets [] printed literally as delimiters. Each variable has the following meaning:

  • remote_addr: the client's IP address
  • remote_user: the user name the client authenticated with; empty if the authentication module is not enabled
  • time_local: the nginx server's local time
  • request: the request line (request method, URL and HTTP protocol)
  • status: the response status code returned, e.g. 200 on success
  • body_bytes_sent: the size of the response body sent from the server to the client
  • http_referer: the page the request was linked from
  • http_user_agent: the client's user agent (browser) information
  • http_x_forwarded_for: the client addresses recorded at each proxy hop of the request (an example log line is shown below)
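Put together, one line in access.log produced by this format looks roughly like the following (the values here are made up for illustration):

192.168.1.10 - - [11/Jun/2019:13:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

Reading left to right: $remote_addr, a literal dash, $remote_user (empty, logged as "-"), [$time_local], "$request", $status, $body_bytes_sent, "$http_referer", "$http_user_agent" and "$http_x_forwarded_for".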

