Nginx learning notes

Notes written while following the video course. Intended for beginners of Nginx.

1, Outline of the article

1. Introduction to nginx
(1) What is nginx and what can be done
(2) Forward proxy
(3) Reverse proxy
(4) Dynamic and static separation
2. Installation of Nginx
3. Common commands and configuration files for Nginx
4. Nginx configuration instance 1 reverse proxy
5. Nginx configuration instance 2 load balancing
6. Nginx configuration example 3 dynamic and static separation
7. High availability cluster of Nginx
(1) Configuring Nginx in master-slave mode
(2) Configuring Nginx in dual-master mode

2, What is Nginx?

1. Bottlenecks in our company's product

  • When our company's project first launched, concurrency was low and there were few users; under such low concurrency it was enough to start the application as a jar package and let the embedded Tomcat return content to users.
  • But gradually more and more users adopted the platform and concurrency slowly grew, until one server could no longer meet our needs.
  • So we scaled horizontally and added servers, with instances of the project started on different machines. For users to access them, a proxy server is needed to forward and handle requests on our behalf.
  • We want this proxy server to receive users' requests and forward them to different server nodes according to rules, with the user unaware of the process: the user does not know which server returned the result. We also want it to offer weighting according to each server's performance, to ensure the best experience. So we used Nginx.

2. What is Nginx?

  • Baidu Encyclopedia definition:
    • Nginx (engine x) is a high-performance HTTP and reverse-proxy web server that also provides IMAP/POP3/SMTP proxy services. It was developed by Igor Sysoev for Rambler (Russian: Рамблер), the second most visited site in Russia. The first public version, 0.1.0, was released on October 4, 2004; nginx 1.0.4 was released on June 1, 2011.
    • Its strengths are low memory usage and strong concurrency; Nginx's concurrency is in fact among the best of web servers of its kind. Well-known sites in mainland China such as Baidu, JD, Sina, NetEase, Tencent and Taobao use it, and it powers about 12.18% of active websites worldwide, roughly 22.2 million sites.
    • Nginx is very simple to install, has concise configuration files (Perl syntax is also supported), and very few bugs. It starts easily and can run continuously, 24/7, even for months without a restart, and the software version can be upgraded without interrupting service.
    • The Nginx code is written from scratch entirely in C. Official tests show it can sustain responses for up to 50,000 concurrent connections.

3. What does Nginx do?

  • It is a high-performance HTTP and reverse proxy web server.
  • Services provided:
    • Dynamic/static separation (web service)
    • Load balancing (reverse proxy)
    • Web caching
    • Low memory usage and strong concurrency (up to 50,000 concurrent connections)

4. Forward proxy and reverse proxy

  • Forward proxy: imagine the Internet (the extranet) outside the LAN as a huge resource pool; a client inside the LAN accesses the Internet through a proxy server. This kind of proxy service is called a forward proxy. The proxy must be configured on the client side in order to access the target sites.

  • Reverse proxy: here the client is unaware of the proxy, because no client-side configuration is needed. We simply send the request to the reverse proxy server, which selects a target server, obtains the data, and returns it to the client. The reverse proxy server and the target server appear as one external server: only the proxy's address is exposed, hiding the real servers' IP addresses.

5. Load balancing

  • Meaning: increase the number of servers and distribute requests across them. Instead of concentrating requests on a single server, spread the requests over multiple servers so the load is shared among them; this is what we call load balancing.

6. Dynamic and static separation

  • Meaning: in software development, some requests need backend processing and some do not (css, html, jpg, js files, etc.); the latter are called static files. Dynamic/static separation lets a site distinguish, by certain rules, resources that rarely change from resources that change frequently. Once split, static resources can be cached according to their characteristics, improving response speed. In other words, dynamic pages and static pages are parsed by different servers, which speeds up parsing and reduces the pressure on the original single server.

3, Installation

  • Download address:

  • How to install Nginx on a Linux system.

    • 1. Install gcc: installing Nginx requires compiling the source code downloaded from the official website, and compilation depends on the gcc environment; if gcc is missing, install it first.

      [root@localhost ~]# yum install gcc-c++
    • 2. pcre and pcre-devel installation: PCRE (Perl Compatible Regular Expressions) is a library providing Perl-compatible regular expressions. The http module of Nginx uses PCRE to parse regular expressions, so the pcre library must be installed on Linux. pcre-devel is a development library built on pcre; Nginx needs it too.

      [root@localhost ~]# yum install -y pcre pcre-devel
    • 3. zlib installation: the zlib library provides many compression and decompression routines. Nginx uses zlib to gzip the contents of HTTP responses, so install zlib (and zlib-devel) on CentOS.

      [root@localhost ~]# yum install -y zlib zlib-devel
    • 4. OpenSSL installation: OpenSSL is a powerful secure-sockets-layer cryptographic library, covering the main cryptographic algorithms, common key and certificate management functions, and the SSL protocol, plus a rich set of applications for testing and other purposes. Nginx supports not only http but also https (that is, http transmitted over SSL), so the OpenSSL libraries must be installed on CentOS.

      [root@localhost ~]# yum install -y openssl openssl-devel
    • 5. Download the source package from the official website: fetch the .tar.gz archive manually, then upload it to the server.

    • 6. Decompress

      [root@localhost opt]# tar -zxvf nginx-1.18.0.tar.gz 
      [root@localhost opt]# cd nginx-1.18.0/
    • 7. Configuration: use the default configuration, executed in the Nginx source root directory. (Installed to /usr/local/nginx by default.)

      [root@localhost nginx-1.18.0]# ./configure 
      [root@localhost nginx-1.18.0]# make
      [root@localhost nginx-1.18.0]# make install
      #Find installation directory
      [root@localhost nginx-1.18.0]# whereis nginx
      nginx: /usr/local/nginx
    • Enter the Nginx installation directory and start the test.

      [root@localhost nginx-1.18.0]# cd /usr/local/nginx/
      #conf configuration directory
      #html page
      #logs log
      #sbin command directory
      [root@localhost nginx]# ls
      conf  html  logs  sbin
      [root@localhost nginx]# cd sbin/
      [root@localhost sbin]# ls
      #Start Nginx. No output means it started successfully.
      [root@localhost sbin]# ./nginx 
      [root@localhost sbin]# 
    • Check whether nginx starts successfully.

      #Visit Nginx and print out the page information to prove that the visit is successful.
      [root@localhost sbin]# curl localhost
      <!DOCTYPE html>
      <title>Welcome to nginx!</title>
          body {
              width: 35em;
              margin: 0 auto;
              font-family: Tahoma, Verdana, Arial, sans-serif;
      <h1>Welcome to nginx!</h1>
      <p>If you see this page, the nginx web server is successfully installed and
      working. Further configuration is required.</p>
      <p>For online documentation and support please refer to
      <a href="http://nginx.org/">nginx.org</a>.<br/>
      Commercial support is available at
      <a href="http://nginx.com/">nginx.com</a>.</p>
      <p><em>Thank you for using nginx.</em></p>

    • Note: if the connection fails, check whether the Alibaba Cloud security group and the server firewall have the port open. (You can also simply turn the firewall off.)

      # Turn on the firewall
      service firewalld start
      # service iptables restart 
      service firewalld restart
      # Turn off firewall
      service firewalld stop
      # View firewall open rules
      firewall-cmd --list-all
      # Query whether the port is open
      firewall-cmd --query-port=8080/tcp
      # Open port 80
      firewall-cmd --permanent --add-port=80/tcp
      # Remove port
      firewall-cmd --permanent --remove-port=8080/tcp
      #Restart the firewall (restart the firewall after modifying the configuration)
      firewall-cmd --reload
      # Parameter notes
      # firewall-cmd: the tool Linux provides for operating the firewall
      # --permanent:  make the setting persistent
      # --add-port:   the port to add
  • How to install on Windows

    • 1. Unzip the downloaded installation package. The directory structure is as follows:

    • 2. Open the configuration file and check the defaults (mainly the default listening port).

    • 3. Run nginx.exe from a cmd window (double-clicking it just flashes a console window).
    • 4. Check whether nginx started successfully: enter http://localhost:80 in the browser address bar and press Enter; if the welcome page appears, the startup succeeded!
    • 5. Close nginx
      • If nginx was started from a cmd window, closing the window does not end the nginx process. There are two ways to stop nginx.
      • (1) Run nginx -s stop (fast, forced stop) or nginx -s quit (complete, orderly, safe stop).
      • (2) Use taskkill /F /T /IM nginx.exe.
  • When we modify the nginx configuration file nginx.conf, we do not need to stop and restart nginx; just run nginx -s reload for the changes to take effect.

  • Uninstall Nginx

    # Stop the nginx process
    ./nginx -s stop
    # Delete the Nginx directory
    rm -rf /usr/local/nginx
    # Clean the build artifacts (run in the source directory)
    make clean

4, Nginx common commands

  • View nginx version command

    [root@localhost sbin]# ./nginx -v
    nginx version: nginx/1.18.0
  • Start command: run ./nginx in the /usr/local/nginx/sbin directory

    cd /usr/local/nginx/sbin/
    ./nginx     # start
  • Stop commands

    ./nginx -s stop     # force stop
    ./nginx -s quit     # graceful shutdown
  • Reload command

    ./nginx -s reload     # reload the configuration file
    ps aux | grep nginx   # check nginx processes
  • Set Nginx to start at boot.

    vim /etc/rc.local
    # Then append at the bottom (the default install path of the nginx binary):
    /usr/local/nginx/sbin/nginx

5, The nginx.conf configuration file

1. Configuration file location

  • The default configuration files live in the conf directory under the Nginx installation directory, where the main configuration file nginx.conf also resides. Subsequent use of Nginx mostly amounts to modifying this file.

2. Content in the configuration file.

  • By default the configuration file contains three blocks: the global block, the events block and the http block. Most of the file is comments.
  • View the contents of the configuration file.
      [root@localhost conf]# cat nginx.conf
      worker_processes  1;
      events {
          worker_connections  1024;
      }
      http {
          include       mime.types;
          default_type  application/octet-stream;
          sendfile        on;
          keepalive_timeout  65;
          server {
              listen       80;
              server_name  localhost;
              location / {
                  root   html;
                  index  index.html index.htm;
              }
              error_page   500 502 503 504  /50x.html;
              location = /50x.html {
                  root   html;
              }
          }
      }

2.1. Global block

  • Directives that configure the overall operation of the server, e.g. worker_processes 1; which sets the number of worker processes.
  • user directive: configures the user and group that run the Nginx worker processes. When building Nginx, run ./configure --user=user --group=group, then add to the configuration file:
    user www;
  • worker process directives:
    • Nginx uses a master/worker process model, split into a master process and worker processes.
    • master_process: specifies whether to start the worker processes.
    • worker_processes: the number of worker processes Nginx spawns, generally the number of CPU cores minus 1.
      # Set work process
      worker_processes  2;

2.2. events block

  • Directives affecting the network connection between the Nginx server and users.
  • accept_mutex directive: accept_mutex on|off; serializes how worker processes accept new network connections.
  • worker_connections directive: defaults to worker_connections 512; configures the maximum number of connections per worker process (total connections = connections per worker x N workers). This value must not exceed the operating system's limit on open file handles (generally 65535 on Linux).
    # Support up to 1024 connections per worker
    worker_connections 1024;

2.3. http block

1. http global block

  • include directive: imports external files, enabling a modular configuration.

    include mime.types;
    default_type application/octet-stream;
  • sendfile directive: enables efficient file transfer (the sendfile system call), improving file-transfer performance.

    sendfile   on;  # Enable efficient transfer; combine with tcp_nopush
    tcp_nopush on;  # Send once the request data accumulates to a certain size
  • keepalive_requests / keepalive_timeout directives: set how many requests one keep-alive connection may serve and the timeout for client connections.

  • gzip directive: enables compression so that files and response data transfer faster.

    gzip on;

2. server block

  • listen instruction: used to configure the listening port.

  • server_name instruction: used to set the virtual host service name.

    server {
        listen 80;
        # Multiple names or wildcards are allowed, e.g. www.itcast.*
        # Note: * may only appear at the beginning or the end
        server_name localhost;
    }
  • location instruction: used to set the URI of the request.

    server {
        # After location you can add modifiers: = (exact match),
        # ~ (regex, case-sensitive), ~* (regex, case-insensitive)
        location /abc {
            default_type text/plain;
            return 200 "access success";
        }
    }
  • error_page instruction: set the error page of the website.

    server {
        error_page 404 /50x.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
  • gzip_static directive: to make gzip and sendfile coexist, compile Nginx with the ngx_http_gzip_static_module (./configure --with-http_gzip_static_module); only then is the gzip_static directive available.
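    As a sketch of how the directive might be used once the module is compiled in (the location and paths are illustrative, not from the original notes):

    ```nginx
    server {
        listen 80;
        location / {
            root html;
            # Serve a pre-compressed file (e.g. index.html.gz) if one exists,
            # instead of compressing on the fly
            gzip_static on;
        }
    }
    ```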

2.1. Reverse proxy directives (proxy module)
  • proxy_pass directive: sets the address of the proxied server.

  • proxy_set_header directive: modifies the headers of the client request received by Nginx before the new request is sent on to the proxied server.

    server {
        listen 8080;
        server_name localhost;
        # If proxy_pass uses a domain name, configure a resolver (DNS IP) to resolve it
        location /server {
            proxy_pass http://192.168.xx.xx:8080/;
            proxy_set_header username TOM;
        }
    }

2.4 Configuration file examples (for reference)


1. Basic deployment:

# Global block
# Configure the user and group allowed to run the Nginx worker processes
user www;
# Generally the number of CPU cores minus 1
worker_processes 2;
# Path where the Nginx server stores its error log
error_log logs/error.log;
pid logs/nginx.pid;
# events block
events {
    accept_mutex on;
    multi_accept on;
    # Maximum number of connections per Nginx worker process
    worker_connections 1024;
}
# http block
http {
    include mime.types;
    default_type application/octet-stream;
    # Allow sendfile transfer, enabling efficient file transmission
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    # Connection timeout
    keepalive_timeout 65;
    # Gzip compression configuration
    include /home/www/gzip/nginx_gzip.conf;
    # Request-log format
    log_format server1 '===>server1 access log';
    # log_format server2 '===>server2 access log';
    # server block start #
    include /home/www/conf.d/*.conf;
    # server block end #
}

2. Configuration file for server1

server {
    listen 8081;
    # Virtual-host service name; separate multiple names with spaces; wildcard "*" allowed (only at the beginning or end)
    server_name localhost;
    # Request-log path and format
    access_log /home/www/myweb/server1/logs/access.log server1;
    location /server1/location1 {
        root /home/www/myweb;
        index server1.html;
    }
    location /server1/location2 {
        root /home/www/myweb;
        index server2.html;
    }
    # Error-page redirection
    location = /404.html {
        root /home/www/myweb;
        index 404.html;
    }
}

3. Configuration file for nginx_gzip.conf

# Enable gzip function
gzip on;
# Types of source files to compress, e.g. application/javascript, text/html; see mime.types
gzip_types *; 
# Compression level 1-9
gzip_comp_level 6; 
# The minimum length of the response page to be compressed. If it is less than this number, it will not be compressed
gzip_min_length 1024k;
# Cache space size, just use the default
gzip_buffers 4 16K; 
# Specify the minimum HTTP request version required to compress the response. Just use the default
gzip_http_version 1.1;
# Add a compression ID to the header information. The default is off
gzip_vary on;
# Do not compress versions below IE6
gzip_disable "MSIE [1-6]\.";

4. Solving cross-origin (CORS) problems

  • Use the add_header directive to attach the CORS response headers
    location /xxx {
        add_header 'Access-Control-Allow-Origin' *;
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Methods' GET,POST,PUT,DELETE;
        add_header 'Access-Control-Allow-Headers' *;
        root /home/www/myweb;
        index server1.html;
    }

5. Preventing hotlinking of static resources

  • Use the valid_referers directive to list the allowed Referer domain names or IPs; when the Referer does not match, Nginx sets $invalid_referer to 1 and we can return 403.

    location /xxx {
        # none: allow an empty Referer; blocked: allow masked Referers
        valid_referers none blocked 192.168.xx.xx;
        if ($invalid_referer) {
            # Return 403
            return 403;
            # Or rewrite to a default picture instead
            # rewrite ^/ /images/xxx.png break;
        }
        root /usr/local/nginx/html;
    }

6. Rewrite: domain-name redirection

  • Accessing the xxx1 and xxx2 domain names redirects to the main (zong) domain.
    server {
        listen 80;
        server_name xxx1 xxx2;
        # Redirect everything to the main domain (zong stands in for the real domain)
        rewrite ^(.*)$ http://zong$1;
        # You can also rewrite only under a certain path
        location /user {
            rewrite ^/user(.*)$ http://zong$1;
        }
    }

7. Configure SSL

  • Obtain a certificate: purchase one from a platform such as Alibaba Cloud, or generate one with openssl. After purchasing, create the certificate, bind the domain name, pass the review, and download the certificate. On the server, create a cert directory under nginx/conf to store the downloaded certificate files.
    server {
        listen 80;
        server_name domain-name;
        # Redirect http requests to https
        rewrite ^(.*)$ https://$host$1;
        location / {
            root html;
            index index.html index.htm;
        }
    }
    server {
        listen 443 ssl;
        server_name domain-name;
        root html;
        index index.html index.htm;
        # Certificate files
        ssl_certificate cert/xxx.pem;
        ssl_certificate_key cert/xxx.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        # Cipher suites
        ssl_ciphers HIGH:!aNULL:!MD5;
        # TLS protocol versions to use
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        location / {
            root html;
            index index.html index.htm;
        }
    }

6, Nginx configuration instance - reverse proxy instance 1

  • Implementation effect: open a browser, enter the address in the address bar, and land on the Tomcat home page of the Linux system.
  • preparation
    • 1. Install Tomcat on the Linux system, using the default port 8080.
    • 2. Put the Tomcat archive on the Linux system and unpack it.
      [root@localhost opt]# tar -zxvf apache-tomcat-9.0.1.tar.gz 
    • 3. Enter the Tomcat bin directory and start the Tomcat server with ./startup.sh
      [root@localhost opt]# cd apache-tomcat-9.0.1/
      #Enter the bin directory
      [root@localhost apache-tomcat-9.0.1]# cd bin/
      [root@localhost bin]# ls
      bootstrap.jar  catalina.sh  setclasspath.sh  shutdown.sh  startup.sh  version.sh  ...
      #Start tomcat
      [root@localhost bin]# ./startup.sh
      Using CATALINA_BASE:   /opt/apache-tomcat-9.0.1
      Using CATALINA_HOME:   /opt/apache-tomcat-9.0.1
      Using CATALINA_TMPDIR: /opt/apache-tomcat-9.0.1/temp
      Using JRE_HOME:        /opt/java/jre
      Using CLASSPATH:       /opt/apache-tomcat-9.0.1/bin/bootstrap.jar:/opt/apache-tomcat-9.0.1/bin/tomcat-juli.jar
      Tomcat started.
    • 4. View the ports open to the outside world, and open port 8080
      #Open 8080 port
      firewall-cmd --add-port=8080/tcp --permanent
      firewall-cmd --reload
      #View open port numbers
      firewall-cmd --list-all
    • 5. Access tomcat server through browser in windows system. (LinuxIp:8080)
  • Analysis of access process
  • Specific configuration
    • 1. First, map the domain name to the server IP in the Windows hosts file (C:\Windows\System32\drivers\etc\hosts).
    • 2. Configure request forwarding in nginx (the reverse proxy configuration). Once the configuration file is modified, it must be reloaded to take effect.
      [root@localhost sbin]# ./nginx -s reload
    • Final test
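  • The reverse-proxy configuration for step 2 is not reproduced in these notes; a minimal sketch of what the server block might look like (the domain name www.123.com is an illustrative assumption, standing in for whatever was mapped in the hosts file):

    ```nginx
    server {
        listen 80;
        # The domain name mapped to the server IP in the Windows hosts file (illustrative)
        server_name www.123.com;
        location / {
            # Forward all requests to the local Tomcat
            proxy_pass http://127.0.0.1:8080;
        }
    }
    ```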

7, Nginx configuration instance - reverse proxy instance 2

  • Implementation effect: use an Nginx reverse proxy to route requests to services on different ports according to the access path.
    Nginx listens on port 9001:
    requests whose path matches /edu/ are forwarded to the Tomcat on port 8080;
    requests whose path matches /vod/ are forwarded to the Tomcat on port 8081.
  • preparation
    • 1. Prepare two tomcat servers, one 8080 port and one 8081 port
      #Copy a Tomcat to the tomcat8081 folder
      [root@localhost opt]# cp -r apache-tomcat-9.0.1 tomcat8081/
      [root@localhost opt]# cd tomcat8081/
      [root@localhost tomcat8081]# cd apache-tomcat-9.0.1/conf/
      #Modify the access port number of a tomcat
      [root@localhost conf]# vim server.xml 
    • 2. Create files and pages
      #tomcat entering port 8080
      [root@localhost opt]# cd apache-tomcat-9.0.1/
      #Enter webapps directory
      [root@localhost apache-tomcat-9.0.1]# cd webapps/
      [root@localhost webapps]# ls
      docs  examples  host-manager  manager  ROOT
      #Create edu directory
      [root@localhost webapps]# mkdir edu
      # Enter the edu directory
      [root@localhost webapps]# cd edu/
      #Create a.html
      [root@localhost edu]# vim a.html 
      Similarly, under the webapps of the Tomcat on port 8081, create a vod directory with its own a.html.
    • 3. Test two tomcat access pages

  • Specific configuration
    • 1. Find the nginx configuration file (/usr/local/nginx/conf/) and configure the reverse proxy. After modifying the configuration, remember to reload it.

    • 2. Open the externally accessible ports 9001, 8080 and 8081 in turn.

      #Open 9001 port
      firewall-cmd --add-port=9001/tcp --permanent
      firewall-cmd --reload
      #View open port numbers
      firewall-cmd --list-all
  • Final test
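  • The path-based forwarding configured in step 1 is likewise not shown; a minimal sketch, assuming both Tomcats run on the same host as Nginx:

    ```nginx
    server {
        listen 9001;
        server_name localhost;
        # Requests whose path contains /edu/ go to the Tomcat on 8080
        location ~ /edu/ {
            proxy_pass http://127.0.0.1:8080;
        }
        # Requests whose path contains /vod/ go to the Tomcat on 8081
        location ~ /vod/ {
            proxy_pass http://127.0.0.1:8081;
        }
    }
    ```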

8, Nginx configuration instance - load balancing

1. Case

  • Implementation effect: enter the address in the browser address bar and observe the load-balancing effect: accesses to the a.html page are spread evenly between ports 8080 and 8081 (the two servers alternate, one request each).

  • preparation

    • 1. Prepare two tomcat servers, one 8080 and one 8081
    • 2. In the webapps directory of the two tomcat, create the edu folder, and create a page a.html in the edu folder for testing (Tomcat on port 8080 has added the edu folder through the above steps, so you don't need to add it again).
      #Copy vod to the current directory and rename it edu
      [root@localhost webapps]# cp -r vod/ /opt/tomcat8081/apache-tomcat-9.0.1/webapps/edu
      [root@localhost webapps]# ls
      docs  edu  examples  host-manager  manager  ROOT  vod
      [root@localhost webapps]# cd edu/
      [root@localhost edu]# ls
      [root@localhost edu]# cat a.html 
  • Configure load balancing in the nginx configuration file. After modifying the configuration, remember to reload it (and remember to comment out the configuration from the previous instance, otherwise you will not see the current effect).

  • When testing with the Chrome browser, remember to open DevTools and tick "Disable cache".
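  • The load-balancing configuration itself is not reproduced here; a minimal sketch, assuming both Tomcats run on the Nginx host (the upstream name myserver is an arbitrary choice; the reference configuration later in these notes shows more options):

    ```nginx
    http {
        # Backend pool; the default strategy is round robin
        upstream myserver {
            server 127.0.0.1:8080;
            server 127.0.0.1:8081;
        }
        server {
            listen 80;
            server_name localhost;
            location / {
                # Requests are handed to the pool members in turn
                proxy_pass http://myserver;
            }
        }
    }
    ```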

2. Nginx server-allocation strategies

  • First: round robin (default)
    Requests are assigned to the backend servers one by one in chronological order; if a backend server goes down, it is removed automatically.

  • Second: weight
    weight defaults to 1; the higher the weight, the more clients and requests a server is assigned. It specifies the polling probability, proportional to the access ratio, and is used when backend server performance is uneven.

    upstream server_pool {
        server 192.168.xx.xx:8080 weight=10;
        server 192.168.xx.xx:8081 weight=10;
    }
  • Third: ip_hash
    Each request is assigned according to the hash of the visiting IP, so each visitor always reaches the same backend server; this can solve the session-sharing problem.

    upstream server_pool {
        ip_hash;
        server 192.168.xx.xx:8080;
        server 192.168.xx.xx:8081;
    }
  • Fourth: fair
    Requests are assigned according to the backend servers' response time; the shortest response time is served first. (fair is provided by a third-party module that must be compiled in.)

    upstream server_pool {
        fair;
        server 192.168.xx.xx:8080;
        server 192.168.xx.xx:8081;
    }
  • least_conn: least-connections strategy; suitable when requests take different amounts of time to process and would otherwise overload a server.

  • url_hash: assign according to the hash of the URL, so the same URL always reaches the same server: hash $request_uri;

3. Load balancing state

  • down: the server does not participate in load balancing
  • backup: a backup server, used only after the primary servers go down
  • max_fails: the number of failed requests allowed, e.g. max_fails=3
  • fail_timeout: the pause time after max_fails failures, e.g. fail_timeout=15
  • max_conns: limits the maximum number of accepted connections; if the machine can handle at most 100 concurrent connections, set the value to 100

4. Load balancing configuration (for reference)

upstream backend {
    server 192.168.xx.xx:8080 down;
    server 192.168.xx.xx:8081 max_conns=100 weight=1;
    server 192.168.xx.xx:8082 weight=2;
}
upstream backend2 {
    # Distribute by URL hash
    hash $request_uri;
    server 192.168.xx.xx:8080;
    server 192.168.xx.xx:8081;
}
server {
    listen       80;
    server_name  localhost;
    location / {
        #root   html;
        #index  index.html index.htm;
        proxy_pass http://backend;
    }
    location /backend2/ {
        proxy_pass http://backend2;
    }
}

9, Nginx configuration instance - dynamic and static separation

  • What is dynamic and static separation.
    • Nginx dynamic/static separation simply means handling dynamic and static requests separately. It should not be understood as merely putting dynamic pages and static pages in different places; strictly speaking, it separates dynamic requests from static requests. One way to think of it: Nginx serves the static pages while Tomcat handles the dynamic ones. In current practice there are roughly two approaches.
      • 1. One is to move static files to a separate domain name on an independent server; this is the mainstream scheme today.
      • 2. The other is to deploy dynamic and static files together and let Nginx separate them, forwarding requests differently by file suffix via location blocks. With the expires parameter you can set a browser-cache expiry time, reducing requests and traffic to the server. expires gives a resource an expiration time, so the browser itself can confirm whether the resource is still fresh without contacting the server, generating no extra traffic. This approach suits resources that change infrequently (for frequently updated files, expires caching is not recommended). I set 3d here, meaning: if the URL is accessed within those three days and a request is sent, and the file's last-modified time is unchanged on the server, the file is not re-fetched and status 304 is returned; if it was modified, it is re-downloaded from the server with status 200.
  • preparation
    • 1. Prepare static resources on the Linux system (create a data directory and put the dynamic and static resources there) for access.
  • Specific configuration
    • 1. Configure in the nginx configuration file. (configuration needs to be reloaded)
  • Final test
    • Access static resources.
    • Accessing the directory of static resources will display all static resources.
    • Access dynamic resources.
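  • The configuration from step 1 is not reproduced above; a minimal sketch, assuming the static files were placed under /data/www and /data/image (the paths are assumptions):

    ```nginx
    server {
        listen 80;
        server_name localhost;
        # Static html pages
        location /www/ {
            root /data/;
            index index.html index.htm;
        }
        # Static images; autoindex lists the directory contents in the browser
        location /image/ {
            root /data/;
            autoindex on;
            # Let browsers cache static resources for 3 days (the expires setting discussed above)
            expires 3d;
        }
    }
    ```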

Tags: Operation & Maintenance Nginx

Posted on Mon, 04 Oct 2021 00:07:46 -0400 by disconne