Nginx

Nginx is a high-performance HTTP and reverse proxy web server.

Services provided:

  • Dynamic/static resource separation (web service)
  • Load balancing (reverse proxy)
  • Web caching
  • Low memory footprint and high concurrency (supports around 50,000 concurrent connections)


Download and installation:

  1. After downloading, upload the archive to the Linux server (it is usually installed under /usr/local) and extract it:

    tar -zxvf nginx-1.18.0.tar.gz
  2. Configure, compile, and install from the nginx source root directory:

    # 1. Install the build tools and dependency libraries
    yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel
    # 2. Use the default configuration
    ./configure
    # 3. Compile
    make
    # 4. Install
    make install

    # Note: for a custom installation
    # 1. Create the nginx temporary directory
    mkdir -p /var/temp/nginx
    # 2. In the nginx source directory, specify the configuration options
    ./configure --prefix=/usr/local/nginx \
    --pid-path=/var/run/nginx/nginx.pid \
    --lock-path=/var/lock/nginx.lock \
    --error-log-path=/var/log/nginx/error.log \
    --http-log-path=/var/log/nginx/access.log \
    --with-http_gzip_static_module \
    --http-client-body-temp-path=/var/temp/nginx/client \
    --http-proxy-temp-path=/var/temp/nginx/proxy \
    --http-fastcgi-temp-path=/var/temp/nginx/fastcgi \
    --http-uwsgi-temp-path=/var/temp/nginx/uwsgi \
    --http-scgi-temp-path=/var/temp/nginx/scgi \
    --with-http_stub_status_module
    # --with-http_ssl_module can be added to support https
    # 3. Compile and install
    make && make install
  3. Find the installation path:

    whereis nginx
    # The default is /usr/local/nginx

    Start Nginx (it listens on port 80 by default). Enter the server's IP address in a browser; if the Nginx default page appears, the installation succeeded.

Uninstall Nginx:

# Stop the nginx process
./nginx -s stop
rm -rf /usr/local/nginx
make clean

Common commands

Start / stop / exit nginx:

# Start
cd /usr/local/nginx/sbin/
./nginx

# Stop
./nginx -s stop

# Graceful shutdown
./nginx -s quit

# Reload the configuration file
./nginx -s reload

# Check the configuration file for errors
./nginx -t

# View nginx process
ps aux|grep nginx

Make nginx start on boot:

vim /etc/rc.local
# Then add at the bottom:
/usr/local/nginx/sbin/nginx

Other commands:

# Turn on the firewall
service firewalld start
# Turn off firewall
service firewalld stop
# Query whether the port is open
firewall-cmd --query-port=8080/tcp
# Open port 80
firewall-cmd --permanent --add-port=80/tcp
# Remove a port
firewall-cmd --permanent --remove-port=8080/tcp
# Reload the firewall so that --permanent changes take effect
firewall-cmd --reload

Core configuration file structure

The configuration file is located at /usr/local/nginx/conf/nginx.conf by default. It contains three blocks by default: the global block, the events block, and the http block.

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Global block

user directive:

Configures the user and group that the Nginx worker processes run as. When building Nginx, pass ./configure --user=user --group=group, and add to the configuration file:

user username;

Then create the user on the Linux command line:

useradd username
# The worker process then has access rights under the /home/username/ directory, and no access to other directories

worker process directives:

Nginx uses a master/worker process model: one master process and several worker processes.

master_process: specifies whether to start the worker processes.

worker_processes: configures the number of worker processes Nginx spawns; it is generally set to the number of CPU cores - 1.

# Set work process
worker_processes  2;

events block

accept_mutex directive:

Syntax: accept_mutex on|off; — serializes how the worker processes accept new network connections (avoiding the "thundering herd" problem).

worker_connections directive:

The default is worker_connections 512; it configures the maximum number of connections for a single worker process.

The total (maximum connections per worker process * number of worker processes) cannot be greater than the maximum number of open file handles supported by the operating system (generally 65535 on Linux).

# Temporarily modify the maximum number of open file handles supported by the operating system
ulimit -HSn 2048
# Permanent modification
vi /etc/security/limits.conf
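As a sanity check on the relationship above, the total connection capacity is just the product of the two settings. A minimal shell sketch using the example values from this document:

```shell
# Estimate total connection capacity (values taken from this document's examples)
worker_processes=2
worker_connections=1024
total=$((worker_processes * worker_connections))
echo "$total"   # must stay below the OS open-file-handle limit (e.g. 65535)
```

Raising worker_connections without also raising the file-handle limit has no effect, which is why the ulimit step above matters.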

http block

include directive:

Imports external files, allowing the configuration to be split into modules.

include mime.types;
default_type application/octet-stream;

sendfile directive:

Enables zero-copy file transfer, improving performance when serving static files.

sendfile   on;  # Enable efficient transfer; combine with tcp_nopush
tcp_nopush on;  # Send only when the data accumulates to a certain size

keepalive_requests directive:

Sets the maximum number of requests that can be served over one keep-alive connection; keepalive_timeout sets how long an idle client connection is kept open.
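A minimal http-block sketch of the two keep-alive directives together (the values are illustrative, not recommendations):

```nginx
http {
    keepalive_timeout  65;    # close an idle client connection after 65 seconds
    keepalive_requests 100;   # serve at most 100 requests over one keep-alive connection
}
```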

gzip directive:

Enabling it compresses responses, speeding up the transfer of files and request data.

gzip on;

server block:

listen directive:

Configures the listening port.

server_name directive:

Sets the virtual host server name.

server {
	listen 80;
	# You can configure multiple names, or use the wildcard "*", e.g. www.itcast.*
	# Note: "*" can only be used at the beginning or the end of a name
	server_name localhost www.itcast.*;
}
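Wildcard server_name matching can be approximated with shell glob patterns; a rough sketch with made-up host names (note nginx's real matching order is: exact name, longest leading wildcard, longest trailing wildcard, then regex):

```shell
# Approximate wildcard server_name matching with a shell glob (illustration only)
match() {
  case "$1" in
    www.itcast.*) echo "matched by www.itcast.*";;
    *)            echo "no match";;
  esac
}
match www.itcast.cn     # → matched by www.itcast.*
match www.example.com   # → no match
```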

location directive:

Matches the request URI.

server {
	# location can be prefixed with = (exact match), ~ (case-sensitive regex), or ~* (case-insensitive regex)
	location /abc {
		default_type text/plain;
		return 200 "access success";
	}
}

error_page directive:

Sets the error pages for the web site.

server {
	error_page 404 /50x.html;
	error_page 500 502 503 504 /50x.html;
	location = /50x.html {
		root html;
	}
}

gzip_static directive:

Gzip and sendfile conflict with each other; to let them coexist, compile in the ngx_http_gzip_static_module module and use the gzip_static directive (it serves pre-compressed .gz files directly via sendfile).

# 1. Query the configure arguments of the current Nginx build
nginx -V
# 2. Clear previous build output
cd /root/nginx/core/nginx-xxx
make clean
# 3. Re-run configure with the extra module
./configure (...previous arguments) --with-http_gzip_static_module
# 4. Compile
make
# 5. Move the new nginx binary from the objs directory into sbin under the nginx installation directory
mv objs/nginx /usr/local/nginx/sbin
# 6. Execute the upgrade command
make upgrade
# 7. Add under the http block of the configuration file
gzip_static on;

proxy block:

Reverse proxy module

proxy_pass directive:

Sets the address of the proxied (upstream) server.

proxy_set_header directive:

Changes or adds request header fields before Nginx forwards the client's request to the proxied server.

server {
	listen 8080;
	server_name localhost;
	location /server {
		# If proxy_pass used a domain name, a resolver directive would be needed to set the DNS server IP
		proxy_pass http://192.168.xx.xx:8080/;
		proxy_set_header username TOM;
	}
}

Configuration file examples:

Basic deployment:

# Global block
# Configure the user and group that run the Nginx worker processes
user www;
# Generally the number of CPU cores - 1
worker_processes 2;

# Configure the path where the Nginx server stores its error log
error_log logs/error.log;
pid logs/nginx.pid;

# events block
events {
	accept_mutex on;
	multi_accept on;
	# Set the maximum number of connections per Nginx worker process
	worker_connections 1024;
}
# http block
http {
	include mime.types;
	default_type application/octet-stream;
	# Allow sendfile transfers to enable efficient file transmission
	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	# Configure the connection timeout
	keepalive_timeout 65;
	# Gzip compression configuration
	include /home/www/gzip/nginx_gzip.conf;
	# Configure the request log format
	log_format server1 '===>server1 access log';
	# log_format server2 '===>server2 access log';
	# server block start #
	include /home/www/conf.d/*.conf;
	# server block end #
}


Configuration file for server1

server {
	listen 8081;
	# Set the virtual host server name; multiple names are separated by spaces, and the wildcard "*" can be used
	server_name localhost;
	# Configure the request log path and format
	access_log /home/www/myweb/server1/logs/access.log server1;
	location /server1/location1 {
		root /home/www/myweb;
		index server1.html;
	}
	location /server1/location2 {
		root /home/www/myweb;
		index server2.html;
	}
	# Configure error page redirection
	error_page 404 /404.html;
	location = /404.html {
		root /home/www/myweb;
		index 404.html;
	}
}

Configuration file for nginx_gzip.conf

# Enable the gzip function
gzip on;
# Source file types to compress, e.g. application/javascript text/html; check mime.types for values
gzip_types *;
# Compression level, 1-9
gzip_comp_level 6;
# Minimum response length to compress; shorter responses are not compressed
gzip_min_length 1024k;
# Buffer space size; the default is fine
gzip_buffers 4 16K;
# Minimum HTTP request version required to compress the response; the default is fine
gzip_http_version 1.1;
# Add a "Vary: Accept-Encoding" compression marker to the response headers; the default is off
gzip_vary on;
# Do not compress for IE6 and below
gzip_disable "MSIE [1-6]\.";
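The effect of gzip_comp_level can be previewed offline with the gzip command-line tool, which uses the same 1-9 compression levels; a small sketch with generated sample data:

```shell
# Compare compressed vs. original size of a repetitive text file at level 6
printf 'hello nginx %.0s' $(seq 1 200) > sample.txt
orig=$(wc -c < sample.txt)
comp=$(gzip -6 -c sample.txt | wc -c)
echo "original: $orig bytes, compressed: $comp bytes"
```

Highly repetitive text like this shrinks dramatically; binary formats such as JPEG or PNG are already compressed and gain almost nothing, which is why gzip_types usually targets text-based types.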

Solve cross domain problems:

Use the add_header directive, which adds header fields to the response:

location /xxx {
	add_header 'Access-Control-Allow-Origin' '*';
	add_header 'Access-Control-Allow-Credentials' 'true';
	add_header 'Access-Control-Allow-Methods' 'GET,POST,PUT,DELETE';
	add_header 'Access-Control-Allow-Headers' '*';
	root /home/www/myweb;
	index server1.html;
}

Solve the static resource anti-theft chain:

Use the valid_referers directive to list the allowed Referer values (domain names or IP addresses). If the request's Referer is not in the list, the $invalid_referer variable is set to 1 and the request can be rejected with 403:

location /xxx {
	# The domain below is a placeholder
	valid_referers none blocked *.example.com;
	if ($invalid_referer) {
		# Return 403
		return 403;
		# Or show a default picture instead:
		# rewrite ^/ /images/default.png break;
	}
	root /usr/local/nginx/html;
}

Rewrite domain name jump:

Requests to the xxx1 and xxx2 domain names will be redirected to the zong domain name.

server {
	listen 80;
	server_name www.xxx1.com www.xxx2.com;
	rewrite ^(.*) http://www.zong.com$1;
	# You can also rewrite only under a certain path
	location /user {
		rewrite ^/user(.*)$ http://www.zong.com$1;
	}
}
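The capture-group behavior of the /user rewrite can be previewed with sed, which uses the same kind of regex capture (the target domain is the doc's zong placeholder):

```shell
# What ^/user(.*)$ captures, and where $1 lands in the rewritten URL
echo "/user/profile" | sed -E 's|^/user(.*)$|http://www.zong.com\1|'
# → http://www.zong.com/profile
```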

Configure SSL:

Step 1: generate a certificate

1. Purchase one from a platform such as Alibaba Cloud, or generate one with openssl.

Purchase a certificate, create it, bind the domain name, pass the review, and download the certificate files.

On the server, create a cert directory under nginx/conf to store the downloaded certificate files.
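For the openssl route, a self-signed certificate is enough for local testing; a sketch (the file names match the cert/xxx.* paths used below, and the CN is a placeholder):

```shell
# Generate a self-signed certificate and key for testing (not for production use)
mkdir -p cert
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout cert/xxx.key -out cert/xxx.pem \
  -days 365 -subj "/CN=www.example.com"
```

Browsers will warn about a self-signed certificate; for production, use a CA-issued one as described above.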

server {
	listen 80;
	server_name domain name;
	# Redirect http requests to https
	rewrite ^(.*)$ https://$host$1;
	location / {
		root html;
		index index.html index.htm;
	}
}
server {
	listen 443 ssl;
	server_name domain name;
	root html;
	index index.html index.htm;
	# Certificate files
	ssl_certificate cert/xxx.pem;
	ssl_certificate_key cert/xxx.key;
	ssl_session_cache shared:SSL:1m;
	ssl_session_timeout 5m;
	# Encryption rules
	ssl_ciphers HIGH:!aNULL:!MD5;
	# The TLS protocol versions to use
	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_prefer_server_ciphers on;
	location / {
		root html;
		index index.html index.htm;
	}
}

Configure load balancing:

Load balancing states:

  • down: the server does not participate in load balancing
  • backup: a backup server; it is used only when the primary servers are down
  • max_fails: the number of failed requests allowed, e.g. max_fails=3
  • fail_timeout: how long the server is paused after max_fails is reached, e.g. fail_timeout=15
  • max_conns: limits the maximum number of connections accepted; if the machine's maximum concurrency is 100, set the value to 100

Load balancing policies:

  • Round robin (the default)

  • weight: weighted distribution

  • ip_hash: distributes by client IP, which solves the problem of sessions not being shared between servers

  • least_conn: distributes by least connections; suitable when request processing times differ and would otherwise overload one server

  • url_hash: distributes by URL, via hash $request_uri;

  • fair: distributes by response time (third-party module)
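To see why ip_hash keeps a session on one server: the backend is chosen by hashing the client address (for IPv4, nginx hashes the first three octets), so the same client always lands on the same backend. A simplified shell sketch, where summing the octets is only a stand-in for nginx's real hash function:

```shell
# Simplified ip_hash illustration: map a client IP to one of 2 backends.
# Summing the first three octets is a stand-in for nginx's actual hash.
pick_backend() {
  IFS=. read -r a b c d <<< "$1"
  echo $(( (a + b + c) % 2 ))   # backend index: 0 or 1
}
pick_backend 192.168.1.7    # same first three octets…
pick_backend 192.168.1.99   # …so it always maps to the same backend
```

Because only the first three octets feed the hash, all clients on one /24 network stick to one backend, which is also ip_hash's weakness behind a shared NAT.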

# Load balancing configuration (the server addresses are placeholders)
upstream backend {
	server 192.168.xx.xx:9001 down;
	server 192.168.xx.xx:9002 max_conns=100 weight=1;
	server 192.168.xx.xx:9003 weight=2;
}
upstream backend2 {
	hash $request_uri;
	server 192.168.xx.xx:9004;
}
server {
    listen       80;
    server_name  localhost;
    location / {
        #root   html;
        #index  index.html index.htm;
		proxy_pass http://backend;
    }
    location /backend2/ {
		proxy_pass http://backend2;
    }
}

Configure cache:

	# proxy_cache_path: cache root directory; levels: cache directory hierarchy; keys_zone: cache name and
	# shared memory size (1m can hold about 8000 keys); inactive: how long unused entries are kept;
	# max_size: maximum disk cache size. The cache name nginx_cache is a placeholder.
	proxy_cache_path /usr/local/cache levels=2:1 keys_zone=nginx_cache:200m inactive=1d max_size=20g;
	upstream backend {
		server 192.168.xx.xx:8080;  # placeholder address
	}
	server {
		listen 8080;
		server_name localhost;
		location / {
			# Do not cache js files
			if ($request_uri ~ /.*\.js$){
				set $nocache 1;
			}
			proxy_no_cache $nocache $cookie_nocache $arg_nocache $arg_comment;
			proxy_cache_bypass $nocache $cookie_nocache $arg_nocache $arg_comment;
			proxy_cache nginx_cache;
			proxy_cache_key $scheme$proxy_host$request_uri;
			# Cache only after a URL has been requested 5 times
			proxy_cache_min_uses 5;
			# Cache responses with status code 200 for 5 days
			proxy_cache_valid 200 5d;
			# Cache responses with status code 404 for 30s
			proxy_cache_valid 404 30s;
			proxy_cache_valid any 1m;
			add_header nginx-cache "$upstream_cache_status";
			proxy_pass http://backend;
		}
	}

Delete the corresponding cache directory:

rm -rf /usr/local/proxy_cache/......
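To know which directory holds the cache entry for a given URL: nginx names each cache file after the MD5 of proxy_cache_key and builds the levels subdirectories from the end of that hash. A sketch (the key parts mirror $scheme$proxy_host$request_uri from the config above):

```shell
# Where levels=2:1 places the cache file for a given key
key="httpbackend/demo"                        # $scheme$proxy_host$request_uri
h=$(printf '%s' "$key" | md5sum | awk '{print $1}')
l1=${h: -2}       # first level: the last 2 hex characters of the md5
l2=${h: -3:1}     # second level: the single character before them
echo "/usr/local/cache/$l1/$l2/$h"
```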

Dynamic and static separation:

upstream webservice {
	server 192.168.xx.xx:8080;  # Tomcat address (placeholder)
}
server {
	listen 80;
	server_name localhost;
	# Dynamic resources
	location /demo {
		proxy_pass http://webservice;
	}
	# Static resources
	location ~ .*\.(png|jpg|gif|js)$ {
		root html/web; # Static file location
		gzip on;
	}
	location / {
		root html/web;
		index index.html index.htm;
	}
}
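Which location a request falls into can be previewed by testing the static-resource regex against sample paths (the URIs are made up):

```shell
# Classify request paths the way the regex location above would
for uri in /demo/list /img/logo.png /js/app.js /index.html; do
  if printf '%s\n' "$uri" | grep -Eq '\.(png|jpg|gif|js)$'; then
    echo "$uri -> static"
  else
    echo "$uri -> dynamic/default"
  fi
done
```

Note that /demo/list is caught by the /demo prefix location in the config and proxied to Tomcat, while unmatched paths like /index.html fall through to location /.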

Build a high availability Nginx cluster:

Prepare two Nginx machines and install Keepalived on both; high availability is achieved through the VRRP protocol.

Install Keepalived

# Step 1: Download keepalived from the official website
# Step 2: Upload the downloaded archive to the server
# Step 3: Create a keepalived directory
mkdir keepalived
# Step 4: Extract the archive into that directory
tar -zxf keepalived-2.0.20.tar.gz -C keepalived/
# Step 5: Configure, compile, and install keepalived
cd keepalived/keepalived-2.0.20
./configure --sysconf=/etc --prefix=/usr/local
make && make install

configuration file

Generally, it is configured in /etc/keepalived/keepalived.conf

global_defs {
	# When keepalived performs a failover, send an email to these addresses
	notification_email {
	}
	# Set the sender's mailbox information
	# Specify the SMTP mail server address
	smtp_connect_timeout 30
	# An identifier for this server (machine A)
	router_id keepalivedA
	vrrp_garp_interval 0
	vrrp_gna_interval 0
}
vrrp_instance VI_1 {
	# Two values: MASTER (primary) or BACKUP (standby)
	state MASTER
	# Non-preemptive mode (nopreempt) can only be used when state is BACKUP;
	# so set state to BACKUP on both machines and let priority decide the master
	interface ens33
	virtual_router_id 51
	# The machine with the higher priority becomes the master
	priority 100
	advert_int 1
	authentication {
		# Authentication mode
		auth_type PASS
		# Password used for authentication, up to 8 characters
		auth_pass 1111
	}
	virtual_ipaddress {
		# Set the virtual IP address(es) here
	}
}

Start keepalived

cd /usr/local/sbin
./keepalived

Script for automatic switching

  1. Add the corresponding configuration in the keepalived configuration file

    # ck_n is the script name
    vrrp_script ck_n {
    	script "script location"
    	interval 3   # Execution interval
    	weight -20   # Dynamically adjusts the priority of the vrrp_instance
    }
    vrrp_instance VI_1 {
    	virtual_ipaddress {
    		...
    	}
    	# Run the shell script
    	track_script {
    		ck_n
    	}
    }
  2. The script

    Purpose: monitor nginx's running state. If nginx is not running, try to start it again; if it still fails to start, kill the keepalived process so the virtual IP fails over.

    #!/bin/bash
    num=`ps -C nginx --no-header | wc -l`
    if [ $num -eq 0 ]; then
    	/usr/local/nginx/sbin/nginx
    	sleep 2
    	if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
    		killall keepalived
    	fi
    fi
  3. Give the script file execute permission

    chmod 755 <script file>
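The weight -20 in the vrrp_script block above drives the failover arithmetic: when the check script fails, the instance's priority drops by 20, letting a backup with a higher effective priority take over the VIP. With the document's master priority of 100 and an assumed backup priority of 90:

```shell
# Effective priority after the nginx check script fails (weight -20)
master_priority=100   # from the config above
backup_priority=90    # assumed backup value, for illustration
weight=-20
effective=$((master_priority + weight))
echo "$effective"     # 80 < 90, so the backup becomes MASTER and takes the VIP
```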

Common problems

Startup failed: pid file not found

mkdir -p /var/run/nginx/
# Start with the configuration file specified
nginx -c /usr/local/nginx/conf/nginx.conf
# Reload
nginx -s reload

Configure environment variables:

vim /etc/profile
# Add at the end of the file
export NGINX_HOME=/usr/local/nginx
export PATH=$NGINX_HOME/sbin:$PATH
# Then reload the profile
source /etc/profile

Set automatic startup:

vim /lib/systemd/system/nginx.service
# Add to the file:

[Unit]
# Describe the service
Description=nginx web server
# Describe the service category (start after the network is up)
After=network.target

[Service]
# Service run parameters; note the start, reload, and stop commands must use absolute paths
# Form of background operation
Type=forking
# The command that starts the service
ExecStart=/usr/local/nginx/sbin/nginx
# Reload command
ExecReload=/usr/local/nginx/sbin/nginx -s reload
# Stop command
ExecStop=/usr/local/nginx/sbin/nginx -s quit
# Allocate a separate temporary space to the service
PrivateTmp=true

[Install]
# Install the service at the multi-user run level (system run level 3)
WantedBy=multi-user.target

# Then modify the file permissions
chmod 755 /usr/lib/systemd/system/nginx.service
# Enable start on boot
systemctl enable nginx.service

Tags: Linux Operation & Maintenance Nginx

Posted on Tue, 21 Sep 2021 23:34:36 -0400 by HFD