Common Nginx configuration

1. Configure multiple domain names for one site

server {

listen 80;

server_name aaa.cn bbb.cn;

}

server_name can be followed by multiple domain names, separated by spaces.
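Besides plain names, server_name also accepts wildcards and regular expressions, so one block can cover a whole family of domains. A sketch (the example.com names are placeholders):

```nginx
server {
    listen 80;
    # exact name, wildcard subdomains, and a regex form
    server_name example.com *.example.com ~^static\d+\.example\.com$;
}
```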

2. Configure multiple sites for one service

server {

listen 80;

server_name aaa.cn;

location / {

root /home/project/pa;

index index.html;

}

}

server {

listen 80;

server_name bbb.cn ccc.cn;

location / {

root /home/project/pb;

index index.html;

}

}

server {

listen 80;

server_name ddd.cn;

location / {

root /home/project/pc;

index index.html;

}

}

Nginx supports three types of virtual host, distinguished by how requests are matched:

IP-based virtual host: the server has multiple IP addresses and each site is bound to a different address. This method is rarely used.

Port-based virtual host: each site listens on a different port and is accessed as ip:port. Change the port in the listen directive to use it.

Domain-name-based virtual host: the most widely used approach, and the one shown in the example above. The prerequisite is that you have a domain name for each site; just fill in a different server_name for each server block.
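The example above only shows the domain-name variant; a port-based sketch looks like this (ports and paths are placeholders):

```nginx
server {
    listen 8080;
    root /home/project/pa;
    index index.html;
}

server {
    listen 8081;
    root /home/project/pb;
    index index.html;
}
```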

3. Static resource cache

Filter the list of extensions according to your actual situation.

location ~ .*\.(?:js|css|jpg|jpeg|gif|png|ico|cur|gz|svg|svgz|mp4|ogg|ogv|webm)$ {

expires 7d;

}

location ~ .*\.(?:htm|html)$ {

add_header Cache-Control "private, no-store, no-cache, must-revalidate, proxy-revalidate";

}

Note the difference between no-cache and no-store: with no-cache the resource may still be stored, but the cache must revalidate it with the server before serving it; no-store means the resource is genuinely never cached.

4. Turn on gzip compression

http {

gzip on; #Enable gzip compression

gzip_disable "MSIE [1-6]\.(?!.*SV1)"; #Disable gzip for matching User-Agents (regex supported); this skips IE6 and earlier, which do not handle gzip correctly

gzip_proxied any; #Compress responses to proxied requests unconditionally

gzip_min_length 10k; #Minimum response size to compress; files smaller than 10k are not compressed, since compressing tiny files is pointless

gzip_comp_level 6; #Compression level, 1-9: low levels are fast but compress little, 9 compresses the most but costs the most CPU; higher levels save bandwidth at the expense of CPU, so 6 is the usual compromise

gzip_buffers 16 8k; #Compression buffers: sixteen 8k buffers for the compressed result stream

gzip_http_version 1.1; #Minimum HTTP version to compress for

gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #MIME types to compress; in production, list as many relevant types as possible!

gzip_vary on; #Add a "Vary: Accept-Encoding" response header so front-end cache servers can cache both compressed and uncompressed pages; optional, it tells the client that gzip compression was used

}
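The trade-off behind gzip_comp_level can be seen from the command line: on compressible data, higher levels never produce larger output, they just cost more CPU. A quick sketch using the gzip CLI (exact sizes vary by input):

```shell
# Compress 100 KB of repetitive text at level 1 and level 9 and compare sizes.
input=$(head -c 100000 /dev/zero | tr '\0' 'a')
l1=$(printf '%s' "$input" | gzip -1 | wc -c)
l9=$(printf '%s' "$input" | gzip -9 | wc -c)
echo "level 1: $l1 bytes, level 9: $l9 bytes"
```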

5. cpu affinity optimization

By default, several Nginx worker processes may all run on the same CPU core, using hardware resources unevenly. This optimization pins different worker processes to different CPUs as far as possible.

Two CPU Parameter configuration:

worker_processes 2;

worker_cpu_affinity 01 10;

Four CPU Parameter configuration:

worker_processes 4;

worker_cpu_affinity 0001 0010 0100 1000;

Eight CPU Parameter configuration:

worker_processes 8;

worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

6. Number of worker processes running Nginx

The number of Nginx worker processes is generally set to the number of CPU cores, or cores x 2. If you don't know the core count, press 1 after running the top command, or read /proc/cpuinfo: grep ^processor /proc/cpuinfo | wc -l
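A portable way to get the core count and the matching directive (getconf is an assumption here; the grep shown in the text works on Linux):

```shell
# Count online CPU cores and print a worker_processes line to match.
cores=$(getconf _NPROCESSORS_ONLN)
echo "worker_processes $cores;"
```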

[root@lx ~]# vi /usr/local/nginx1.10/conf/nginx.conf

worker_processes 4;

[root@lx ~]# /usr/local/nginx1.10/sbin/nginx -s reload

[root@lx ~]# ps aux | grep nginx | grep -v grep

root 9834 0.0 0.0 47556 1948 ? Ss 22:36 0:00 nginx: master process nginx

www 10135 0.0 0.0 50088 2004 ? S 22:58 0:00 nginx: worker process

www 10136 0.0 0.0 50088 2004 ? S 22:58 0:00 nginx: worker process

www 10137 0.0 0.0 50088 2004 ? S 22:58 0:00 nginx: worker process

www 10138 0.0 0.0 50088 2004 ? S 22:58 0:00 nginx: worker process

7. Nginx maximum open files

worker_rlimit_nofile 65535;

This directive sets the maximum number of file descriptors a single nginx process may open. In theory it should be the system's maximum open-file limit (ulimit -n) divided by the number of nginx processes, but nginx does not distribute requests that evenly, so it is best to keep it equal to the ulimit -n value.

Note: file resource limits are configured in /etc/security/limits.conf, either per user (such as root) or for all users with *.

The Linux default for open files is 1024. To view the current system value:

# ulimit -n

1024

This means the server allows only 1024 files to be open at the same time.

Use ulimit -a to view all the limit values of the current system, and use ulimit -n to view the current maximum number of open files.

By default a freshly installed Linux allows only 1024. Used as a heavily loaded server, it quickly hits "error: too many open files", so the limit needs to be raised by appending to /etc/security/limits.conf:

* soft nofile 65535

* hard nofile 65535

* soft nproc 65535

* hard nproc 65535

The new limits take effect after the user logs in again (check with ulimit -n).
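The check can be run for both the soft and hard limit at once (a sketch; values differ per system):

```shell
# Show the current soft and hard open-file limits for this shell session.
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```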

8. Nginx event processing model

events {

use epoll;

worker_connections 65535;

multi_accept on;

}

nginx uses the epoll event model, which is highly efficient.

worker_connections is the maximum number of client connections allowed per worker process. The value is generally chosen according to server performance and memory; the actual maximum is the number of worker processes multiplied by worker_connections.

In practice, filling in 65535 is plenty. This is the concurrency figure: if a website actually reaches that level of concurrency, it is a big site indeed!

multi_accept tells a worker that has been woken by a new-connection notification to accept as many pending connections as possible at once, instead of just one. It is off by default. When your server has few connections, turning it on can reduce load somewhat; under heavy throughput it can distribute connections between workers unevenly, so you may keep it off for efficiency.

9. Turn on efficient transmission mode

http {

include mime.types;

default_type application/octet-stream;

......

sendfile on;

tcp_nopush on;

......

}

include mime.types: include simply inlines the contents of another file here; mime.types maps file extensions to media types.

default_type application/octet-stream: the default media type; this value is fine.

sendfile on: enables the efficient file transfer mode, letting nginx use the sendfile system call to output files. Set it to on for common applications; for heavy disk-I/O workloads such as bulk file downloads, set it to off to balance disk and network I/O and reduce system load. Note: if images display abnormally, change this to off.

tcp_nopush on: only works in sendfile mode; it prevents network congestion by reducing the number of network segments (the response header is sent together with the beginning of the body rather than one piece at a time).

10. Connection timeout

The main purpose is to protect server resources, CPU and memory and control the number of connections, because establishing connections also consumes resources.

keepalive_timeout 60;

tcp_nodelay on;

client_header_buffer_size 4k;

open_file_cache max=102400 inactive=20s;

open_file_cache_valid 30s;

open_file_cache_min_uses 1;

client_header_timeout 15;

client_body_timeout 15;

reset_timedout_connection on;

send_timeout 15;

server_tokens off;

client_max_body_size 10m;

keepalive_timeout: how long the server keeps an idle client session open; after this time, the server closes the connection.

tcp_nodelay: also helps against network congestion, but only takes effect on keep-alive connections.

client_header_buffer_size 4k: buffer size for the client request header, which can be set according to your system page size. A request header rarely exceeds 1k, but since the system page size is usually larger than 1k, the page size is used here. The page size can be obtained with the command getconf PAGESIZE.

open_file_cache max=102400 inactive=20s: enables a cache for open files (not enabled by default). max is the number of cache entries, recommended to match the open-file limit; inactive is how long a file may go unrequested before its cache entry is deleted.

open_file_cache_valid 30s: how often to check the validity of cached entries.

open_file_cache_min_uses 1: the minimum number of times a file must be used within the inactive window of the open_file_cache directive for its descriptor to stay open in the cache. For example, with a value of 1, a file not used even once within the inactive time is removed.

client_header_timeout: timeout for reading the request header; this can be set fairly low. If no data is sent within this time, nginx returns a request-time-out error.

client_body_timeout: timeout for reading the request body; likewise, the same error is returned if no data arrives in time.

reset_timedout_connection on: tells nginx to close unresponsive client connections, freeing the memory those clients occupy.

send_timeout: the timeout for responding to the client, measured between two successive activities; if the client shows no activity within this time, nginx closes the connection.

server_tokens off: doesn't make nginx faster, but it hides the nginx version number in error pages, which is good for security.

client_max_body_size: upload file size limit.
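The page-size check mentioned for client_header_buffer_size can be run directly:

```shell
# getconf PAGESIZE reports the system page size; 4096 on most x86_64 Linux systems.
page=$(getconf PAGESIZE)
echo "$page"
```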

11. fastcgi tuning

fastcgi_connect_timeout 600;

fastcgi_send_timeout 600;

fastcgi_read_timeout 600;

fastcgi_buffer_size 64k;

fastcgi_buffers 4 64k;

fastcgi_busy_buffers_size 128k;

fastcgi_temp_file_write_size 128k;

fastcgi_temp_path /usr/local/nginx1.10/nginx_tmp;

fastcgi_intercept_errors on;

fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2

keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;

fastcgi_connect_timeout 600: timeout for connecting to the backend FastCGI server.

fastcgi_send_timeout 600: timeout for transmitting a request to FastCGI.

fastcgi_read_timeout 600: timeout for receiving the FastCGI response.

fastcgi_buffer_size 64k: size of the buffer used to read the first part of the FastCGI response. It defaults to the size of one block in the fastcgi_buffers directive and can be set smaller.

fastcgi_buffers 4 64k: how many buffers of what size to allocate locally for buffering FastCGI responses. If a PHP script generates a 256KB page, four 64KB buffers are allocated; if the page is larger than 256KB, the excess is cached in fastcgi_temp_path, which is not ideal because memory is faster than disk. Generally this value should be around the middle of the page sizes your PHP scripts produce; if most pages are 256KB, you can set "8 32k", "4 64k", and so on.

fastcgi_busy_buffers_size 128k: the amount of buffer space that may be busy sending to the client; recommended to be twice fastcgi_buffers.

fastcgi_temp_file_write_size 128k: how much data is written to the temp file at a time; the default is twice fastcgi_buffers. If set too small, it may report 502 Bad Gateway when the load comes up.

fastcgi_temp_path: temporary cache directory.

fastcgi_intercept_errors on: whether to pass 4xx and 5xx error messages to the client, or let nginx handle them with error_page. Note: with this on, a missing static file returns a 404 page, but a missing php page returns a blank page!

fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g: the FastCGI cache directory. levels sets the directory hierarchy; for example, 1:2 generates 16 * 256 subdirectories. cache_fastcgi is the name of the cache zone, and 128m is the shared memory used for it (nginx keeps popular content in memory to improve access speed). inactive is the expiry time: cached data not accessed within it is deleted. max_size is the maximum amount of disk space used.

fastcgi_cache cache_fastcgi: enables FastCGI caching and names the zone to use. Turning on the cache is very useful: it effectively reduces CPU load and prevents 502 errors. cache_fastcgi is the zone created by the fastcgi_cache_path directive.

fastcgi_cache_valid 200 302 1h: cache time per response code; here 200 and 302 responses are cached for one hour. Used together with fastcgi_cache.

fastcgi_cache_valid 301 1d: cache 301 responses for one day.

fastcgi_cache_valid any 1m: cache all other responses for 1 minute.

fastcgi_cache_min_uses 1: how many times the same URL must be requested before it is cached.

fastcgi_cache_key http://$host$request_uri: the Key for the web cache; nginx hashes the Key with md5 for storage. It is generally built from variables such as $host (domain name) and $request_uri (request path).

fastcgi_pass: the address and port the FastCGI server listens on, local or remote.
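The snippet at the top of this section declares the cache zone, but the fastcgi_cache, fastcgi_cache_key, and fastcgi_cache_valid directives discussed here are not shown in context. A minimal sketch of how they might sit inside a PHP location (the backend address is a placeholder):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;

    # use the zone declared by fastcgi_cache_path
    fastcgi_cache cache_fastcgi;
    fastcgi_cache_key http://$host$request_uri;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_min_uses 1;
}
```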

12. Enable pathinfo mode

When using frameworks such as ThinkPHP or CodeIgniter, addresses look like /index.php/group/controller?*** and every request goes through the index.php entry file. This is pathinfo mode: URLs of the form index.php/index/index, which nginx does not support by default, so we need to configure it.

location ~ \.php {

include fastcgi_params;

fastcgi_pass php-fpm:9000;

fastcgi_index index.php;

fastcgi_param SCRIPT_FILENAME /data/www/$fastcgi_script_name;

# Add the following lines for pathinfo support

fastcgi_split_path_info ^(.+\.php)(.*)$;

fastcgi_param PATH_INFO $fastcgi_path_info;

include fastcgi_params;

}

13. Configure default site

server {

listen 80 default;

}

When multiple virtual hosts are defined in one nginx service, requests are matched from top to bottom by default; if no virtual host matches, the content of the first one is returned. To designate a default site explicitly, either put its server block first in the configuration file, or add default to its listen directive as above.

14. nginx add account password verification

server {

location / {

auth_basic "please input user&passwd";

auth_basic_user_file key/auth.key;

}

}

Many services exposed through nginx provide no account authentication of their own. nginx's auth_basic username/password authentication can add it. The following script generates the password entry:

# cat pwd.pl

#!/usr/bin/perl

use strict;

my $pw=$ARGV[0];

print crypt($pw,$pw)."\n";

Usage:

# perl pwd.pl opf8BImqCAXww

# echo "admin:opf8BImqCAXww" > key/auth.key
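If Perl isn't handy, an equivalent auth_basic entry can be generated with openssl instead (this uses the Apache MD5 scheme, which nginx also accepts; the username admin and password secret are placeholders):

```shell
# Build an auth_basic user file entry: username, colon, hashed password.
entry="admin:$(openssl passwd -apr1 secret)"
echo "$entry"
# append it to the file named by auth_basic_user_file, e.g. key/auth.key
```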

15. nginx open column directory

When you want nginx to serve as a file download server, you need to enable directory listing:

server {

location /download {

autoindex on;

autoindex_exact_size off;

autoindex_localtime on;

}

}

autoindex_exact_size: when it is on (default), the exact size of the file is displayed, and the unit is byte; Change to off to display the approximate size of the file, in KB or MB or GB

autoindex_localtime: when it is off (default), the displayed file time is GMT time; When it is changed to on, the displayed file time is the server time

By default, accessing a listed txt or similar file displays its contents in the browser. To make the browser download it directly, add the configuration below:

if ($request_filename ~* ^.*?\.(txt|pdf|jpg|png)$) {

add_header Content-Disposition 'attachment';

}

16. IP access is not allowed

server {

listen 80 default;

server_name _;

return 404;

}

Some unregistered domains, or domains you don't control, may be pointed at your server's address, which can affect your site. It is worth blocking access by raw IP or by unconfigured domains; the default rule above turns that traffic into a 404.

That approach is blunt. Alternatively, you can 301-redirect all unconfigured addresses to your own website, which may even bring it some traffic:

server {

rewrite ^/(.*)$ https://baidu.com/$1 permanent;

}

17. Direct return to validation file

location = /XDFyle6tNA.txt {

default_type text/plain;

return 200 'd6296a84657eb275c05c31b10924f6ea';

}

Services such as WeChat often require placing a txt file in the project to verify ownership. With the configuration above, nginx can answer directly without actually putting the file on the server.

18. nginx configuring upstream reverse proxy

http {

...

upstream tomcats {

server 192.168.106.176 weight=1;

server 192.168.106.177 weight=1;

}

server {

location /ops-coffee/ {

proxy_pass http://tomcats; proxy_set_header Host $host;

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_set_header X-Forwarded-Proto $scheme;

}

}

}

If you're not careful, you can fall into the trap of proxy_pass with or without a trailing slash. Here is the difference between proxy_pass http://tomcats and proxy_pass http://tomcats/ in detail:

Although the difference is a single /, the results are very different. There are two cases:

1. The destination address contains no URI (proxy_pass http://tomcats). In this case, the matched URI is passed through to the new target URL unmodified: what comes in is what goes out.

location /ops-coffee/ {

proxy_pass http://192.168.106.135:8181;

}

http://domain/ops-coffee/ --> http://192.168.106.135:8181/ops-coffee/

http://domain/ops-coffee/action/abc --> http://192.168.106.135:8181/ops-coffee/action/abc

2. The destination address contains a URI (proxy_pass http://tomcats/ — the trailing / counts as a URI). In the new target URL, the part of the request URI matched by the location is replaced by the URI from the directive.

location /ops-coffee/ {

proxy_pass http://192.168.106.135:8181/;

}

http://domain/ops-coffee/ --> http://192.168.106.135:8181

http://domain/ops-coffee/action/abc --> http://192.168.106.135:8181/action/abc

19. nginx upstream enable keepalive

upstream tomcat {

server www.baidu.com:8080;

keepalive 1024;

}

server {

location / {

proxy_http_version 1.1;

proxy_set_header Connection "";

proxy_pass http://tomcat;

}

}

In most projects nginx acts as a reverse proxy, with, say, Tomcat or PHP behind it. In that case keepalive can be enabled between nginx and the backend services to reduce the resource cost of repeatedly setting up TCP connections. The configuration is shown above.

keepalive: the maximum number of idle connections each nginx worker keeps to the upstream, here 1024. There is no default; without it, keepalive does not take effect when nginx acts as a client.

proxy_http_version 1.1: enabling keepalive requires the HTTP protocol version to be HTTP 1.1

proxy_set_header Connection "": clears the Connection header, for compatibility with older protocols and to prevent a Connection: close header from disabling keepalive.

20. 404 automatically jump to the home page

server {

location / {

error_page 404 = @ops-coffee;

}

location @ops-coffee {

rewrite .* / permanent;

}

}

A raw 404 page is not particularly friendly; the configuration above automatically redirects visitors to the home page whenever a 404 occurs.

21. Hide version number

http {

server_tokens off;

}

Security vulnerabilities often target specific nginx versions, so hiding the version number has become a standard hardening measure. Of course, the most important thing is still to patch and upgrade promptly.

22. Enable HTTPS

server {

listen 443;

server_name baidu.com;

ssl on;

ssl_certificate /etc/nginx/server.crt;

ssl_certificate_key /etc/nginx/server.key;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

ssl_ciphers HIGH:!aNULL:!MD5;

}

ssl on: enables https (in newer nginx versions this directive is deprecated in favor of listen 443 ssl)

ssl_certificate: path to the nginx ssl certificate

ssl_certificate_key: path to the nginx ssl certificate key

ssl_protocols: the protocol versions offered when clients establish a connection; if TLSv1 compatibility is not required, simply remove it

ssl_ciphers: the cipher suites used for client connections; more secure algorithms can be configured here
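For local testing of the block above, a throwaway self-signed certificate can be generated with openssl (file names match the example; the CN is a placeholder — browsers will warn about it):

```shell
# Create a self-signed certificate and key, valid 365 days, no passphrase.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout server.key -out server.crt -subj "/CN=baidu.com"
```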

23. Add black and white list

White list configuration

location /admin/ {

allow 192.168.1.0/24;

deny all;

}

The above indicates that only hosts in the 192.168.1.0/24 network segment are allowed to access, and all other hosts are denied

You can also write a blacklist to prohibit access to some addresses and allow all others, such as

location /ops-coffee/ {

deny 192.168.1.0/24;

allow all;

}

More often, client requests pass through layers of proxies, so we need to check $http_x_forwarded_for instead, which can be written like this:

set $allow false;

if ($http_x_forwarded_for = "211.144.204.2") {

set $allow true;

}

if ($http_x_forwarded_for ~ "108.2.66.[89]") {

set $allow true;

}

if ($allow = false) {

return 404;

}
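Chained if blocks work, but a sketch using the geo module keeps the whole list in one place (assuming $http_x_forwarded_for holds a single address; when it contains a comma-separated list from multiple proxies, it falls through to default):

```nginx
geo $http_x_forwarded_for $allow {
    default        0;
    211.144.204.2  1;
    108.2.66.8     1;
    108.2.66.9     1;
}

server {
    location / {
        if ($allow = 0) {
            return 404;
        }
    }
}
```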

24. Restriction request method

if ($request_method !~ ^(GET|POST)$ ) {return 405;}

$request_method holds the method of the request nginx received.

This configuration allows only GET and POST; any other method gets a 405.
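Inside a single location, limit_except is another way to restrict methods (note it returns 403 by default rather than 405, and allowing GET implicitly allows HEAD; the /api/ path is a placeholder):

```nginx
location /api/ {
    limit_except GET POST {
        deny all;
    }
}
```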

25. Reject user agent

if ($http_user_agent ~* LWP::Simple|BBBike|wget|curl) {return 444;}

Attackers may use tools such as wget or curl to scan the website; simply banning the corresponding user agents blocks them.

Nginx's 444 status is special: when 444 is returned, the client receives nothing back from the server, as if the site could not be reached.

26. Picture anti-theft chain

location /images/ {

valid_referers none blocked www.baidu.com baidu.com;

if ($invalid_referer) {

return 403;

}

}

valid_referers: validates the Referer header. none allows an empty referer; blocked allows referers without a protocol. Beyond those two cases, only requests whose referer is www.baidu.com or baidu.com may access the resources under /images/; otherwise 403 is returned.

Of course, you can also redirect requests that do not comply with the referer rule to a default image, such as the one below

location /images/ {

valid_referers blocked www.baidu.com baidu.com;

if ($invalid_referer) {

rewrite ^/images/.*\.(gif|jpg|jpeg|png)$ /static/qrcode.jpg last;

}

}

27. Control the number of concurrent connections

The ngx_http_limit_conn_module module can limit the number of concurrent connections from a single IP:

http {

limit_conn_zone $binary_remote_addr zone=ops:10m;

server {

listen 80;

server_name baidu.com;

root /home/project/webapp;

index index.html;

location / {

limit_conn ops 10;

}

access_log /tmp/nginx_access.log main;

}

}

limit_conn_zone: defines the shared memory zone that keeps the state for each key (here $binary_remote_addr); the parameter form is zone=name:size.

The size calculation depends on the key variable: $binary_remote_addr is a fixed 4 bytes for an IPv4 address, or 16 bytes for IPv6. A stored state occupies 32 or 64 bytes on 32-bit platforms and 64 bytes on 64-bit platforms, so 1m of shared memory holds roughly 32000 32-byte states or 16000 64-byte states.

limit_conn: names a configured shared memory zone (for example, the zone called ops) and the maximum number of connections allowed per key value.

The above example shows that only 10 connections are allowed at the same time for the same IP

When multiple limit_conn directives are configured, all of the connection limits apply:

http {

limit_conn_zone $binary_remote_addr zone=ops:10m;

limit_conn_zone $server_name zone=coffee:10m;

server {

listen 80;

server_name baidu.com;

root /home/project/webapp;

index index.html;

location / {

limit_conn ops 10;

limit_conn coffee 2000;

}

}

}

The above configuration will not only limit the number of connections from a single IP source to 10, but also limit the total number of connections from a single virtual server to 2000

28. Buffer overflow attack

Buffer overflow attacks work by writing data past the buffer boundary and overwriting adjacent memory. Limiting buffer sizes can effectively mitigate them.

client_body_buffer_size 1K;

client_header_buffer_size 1k;

client_max_body_size 1k;

large_client_header_buffers 2 1k;

client_body_buffer_size: 8k or 16k by default; the buffer size for the client request body. If a request body exceeds the buffer, all or part of it is written to a temporary file.

client_header_buffer_size: the buffer size for the client request header. In most cases a request header is under 1k, but a large cookie, for example from a WAP client, may exceed it; nginx then allocates a larger buffer, whose size is set with large_client_header_buffers.

client_max_body_size: the maximum acceptable request body size, compared against the Content-Length header. If the request is larger than the specified value, the client receives a "Request Entity Too Large" (413) error; this is typically used to limit file uploads.

large_client_header_buffers: the number and size of buffers used for large request headers. By default one buffer is the size of an operating system page, usually 4k or 8k. The request line cannot be larger than one buffer, or nginx returns "Request URI too large" (414); likewise no single header field may exceed one buffer, or the server returns "Bad request" (400).

Several timeout configurations need to be modified at the same time

client_body_timeout 10;

client_header_timeout 10;

keepalive_timeout 5 5;

send_timeout 10;

client_body_timeout: indicates the timeout time for reading the request body. If the connection exceeds this time and the client does not respond, Nginx will return a "Request time out" (408) error

client_header_timeout: indicates the timeout for reading the client request header. If the connection exceeds this time and the client does not respond, Nginx will return a "Request time out" (408) error

keepalive_timeout: the first value is the keep-alive timeout between client and server; after this time, the server closes the connection. The optional second value sets the Keep-Alive: timeout=time response header, which lets some browsers close the connection themselves so the server doesn't have to; without it, nginx does not send Keep-Alive information in the response header.

send_timeout: the timeout for sending a response to the client, measured between two successive write operations rather than for the whole transfer. If the client accepts nothing within this time, nginx closes the connection.

29. Header header settings

The following settings can effectively prevent XSS attacks

add_header X-Frame-Options "SAMEORIGIN";

add_header X-XSS-Protection "1; mode=block";

add_header X-Content-Type-Options "nosniff";

X-Frame-Options: this response header controls whether the browser may render the page inside a frame. Three values: DENY forbids embedding by any page, SAMEORIGIN allows embedding only by this site, and ALLOW-FROM allows embedding from a specified address.

X-XSS-Protection: enables the browser's XSS filter (set X-XSS-Protection: 0 to disable it); mode=block stops page rendering if an XSS attack is detected.

X-Content-Type-Options: this response header controls the browser's guessing of the true type of resources whose Content-Type is missing or wrong; nosniff forbids such guessing.

In the normal request response, the browser will distinguish the response type according to the content type. However, when the response type is not specified or incorrectly specified, the browser will try to enable mime sniffing to guess the response type of the resource, which is very dangerous

For example, a .jpg image file may have executable js maliciously embedded in it. With resource type guessing enabled, the browser will execute the embedded js, which can have unexpected consequences.

In addition, there are several security configurations for request headers that need attention

Content-Security-Policy: defines which resources may be loaded on the page:

add_header Content-Security-Policy "default-src 'self'";

The configuration above restricts all resources to load only from the current domain: default-src defines the default loading policy for every resource type, and 'self' allows content from the same origin.

Strict-Transport-Security: tells the browser to access the site with HTTPS instead of HTTP:

add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";

This configuration means that after the first visit, the response carries the Strict-Transport-Security header, telling the browser to use HTTPS for every request to this site during the next 31536000 seconds. The optional includeSubDomains parameter applies the same rule to all subdomains.

Finally, recommend a website for in-depth study of Nginx:

http://tengine.taobao.org/book/index.html


Posted on Mon, 22 Nov 2021 12:46:28 -0500 by KoshNaranek