By dunwu
https://github.com/dunwu/ngin...
summary
What is Nginx?
Nginx ("engine x") is a lightweight web server, reverse proxy server, and email (IMAP/POP3) proxy server.
What is reverse proxy?
A reverse proxy means that the proxy server accepts connection requests from the internet, forwards them to a server on the internal network, and returns that server's result to the client on the internet that made the request. In this setup, the proxy server acts as a reverse proxy server.
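To make the definition concrete, here is a minimal sketch of a reverse proxy in nginx configuration (the domain name and the backend address 127.0.0.1:8080 are illustrative, not from a real deployment):

```nginx
# Minimal reverse proxy sketch: nginx accepts requests from the internet
# and forwards them to a backend on the internal network.
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward the request to the internal server and relay its response
        proxy_pass http://127.0.0.1:8080;
    }
}
```

From the client's point of view, only nginx is visible; the internal server behind `proxy_pass` is never exposed directly.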
Installation and use
install
Please refer to: Nginx service introduction and installation
use
nginx is easy to use: just a few commands.
The commonly used commands are as follows:
```bash
nginx -s stop       # Fast shutdown: terminate the web service immediately, possibly without saving related information
nginx -s quit       # Graceful shutdown: save related information and end the web service in an orderly way
nginx -s reload     # Reload the configuration after nginx-related configuration changes
nginx -s reopen     # Reopen the log files
nginx -c filename   # Start nginx with the specified configuration file instead of the default
nginx -t            # Do not run; only test the configuration file. nginx checks the syntax and tries to open files referenced in it
nginx -v            # Display the nginx version
nginx -V            # Display the nginx version, compiler version, and configure parameters
```
If you don't want to type these commands every time, you can add a batch file startup.bat in the nginx installation directory and double-click it to run. Its contents are as follows:
```batch
@echo off
rem If nginx was started before and recorded its pid file, kill that process
nginx.exe -s stop
rem Test the configuration file for syntax correctness
nginx.exe -t -c conf/nginx.conf
rem Display version information
nginx.exe -v
rem Start nginx with the specified configuration
nginx.exe -c conf/nginx.conf
```
If you are running under Linux, write a shell script along the same lines.
nginx configuration practice
I have always found that the configuration of development tools is easier to understand when explained through hands-on examples.
http reverse proxy configuration
Let's achieve a small goal first: complete an http reverse proxy without worrying about complicated configuration.
The nginx.conf configuration file is as follows:
Note: conf/nginx.conf is the default configuration file for nginx. You can also use nginx -c to specify your own configuration file.
```nginx
#Run user
#user somebody;

#Number of worker processes, usually set equal to the number of CPUs
worker_processes 1;

#Global error logs
error_log  D:/Tools/nginx-1.10.1/logs/error.log;
error_log  D:/Tools/nginx-1.10.1/logs/notice.log  notice;
error_log  D:/Tools/nginx-1.10.1/logs/info.log  info;

#PID file, which records the process ID of the currently started nginx
pid        D:/Tools/nginx-1.10.1/logs/nginx.pid;

#Working mode and maximum number of connections
events {
    worker_connections 1024;    #Maximum number of concurrent connections for a single worker process
}

#Set up the http server and use its reverse proxy function to provide load balancing support
http {
    #Set MIME types, defined by the mime.types file
    include       D:/Tools/nginx-1.10.1/conf/mime.types;
    default_type  application/octet-stream;

    #Set the log format
    log_format main '[$remote_addr] - [$remote_user] [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log    D:/Tools/nginx-1.10.1/logs/access.log main;
    rewrite_log   on;

    #The sendfile directive specifies whether nginx calls the sendfile function (zero copy) to output files.
    #For normal applications it must be set to on.
    #For heavy disk I/O applications such as downloads, it can be set to off to balance disk and
    #network I/O processing speed and reduce system load.
    sendfile        on;
    #tcp_nopush     on;

    #Connection timeout
    keepalive_timeout  120;
    tcp_nodelay        on;

    #gzip compression switch
    #gzip  on;

    #Set the list of actual servers
    upstream zp_server1 {
        server 127.0.0.1:8089;
    }

    #HTTP server
    server {
        #Listen on port 80, the well-known port number for the HTTP protocol
        listen       80;

        #Access the server via www.helloworld.com
        server_name  www.helloworld.com;

        #Home page
        index index.html;

        #Root directory of the webapp
        root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp;

        #Encoding format
        charset utf-8;

        #Proxy configuration parameters
        proxy_connect_timeout 180;
        proxy_send_timeout 180;
        proxy_read_timeout 180;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;

        #Path of the reverse proxy (bound to the upstream); the mapped path is set after location
        location / {
            proxy_pass http://zp_server1;
        }

        #Static files, handled by nginx itself
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp\views;
            #Expire after 30 days. Static files are rarely updated, so the expiry can be set larger;
            #if they are updated frequently, set it smaller.
            expires 30d;
        }

        #Address for viewing nginx status
        location /NginxStatus {
            stub_status           on;
            access_log            on;
            auth_basic            "NginxStatus";
            auth_basic_user_file  conf/htpasswd;
        }

        #Forbid access to .htxxx files
        location ~ /\.ht {
            deny all;
        }

        #Error handling pages (optional configuration)
        #error_page 404 /404.html;
        #error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #    root html;
        #}
    }
}
```
Well, let's try:
- Start the webapp, and note that the port it binds to must match the port set by upstream in nginx.conf.
- Change the hosts file: add an entry to the hosts file in C:\Windows\System32\drivers\etc
127.0.0.1 www.helloworld.com
- Run startup.bat to start nginx.
- Visit www.helloworld.com in a browser; if all goes well, the site is already accessible.
Load balancing configuration
In the previous example, the proxy pointed to only one server.
However, in the actual operation of a website, most servers run the same app, so load balancing is needed to distribute the traffic.
nginx can also provide a simple load balancing function.
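Besides the weighted round-robin used below, the nginx upstream module supports other balancing strategies. A brief sketch for comparison (the server addresses are illustrative):

```nginx
# Weighted round-robin: the default strategy; weight defaults to 1
upstream weighted_servers {
    server 192.168.1.11:80 weight=5;
    server 192.168.1.12:80 weight=1;
}

# ip_hash: requests from the same client IP always go to the same server,
# which keeps sessions sticky without shared session storage
upstream sticky_servers {
    ip_hash;
    server 192.168.1.11:80;
    server 192.168.1.12:80;
}

# least_conn: prefer the server with the fewest active connections
upstream least_busy_servers {
    least_conn;
    server 192.168.1.11:80;
    server 192.168.1.12:80;
}
```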
Suppose an application scenario: the application is deployed on three Linux servers: 192.168.1.11:80, 192.168.1.12:80, and 192.168.1.13:80. The website domain name is www.helloworld.com, and the public IP is 192.168.1.11. nginx is deployed on the server with the public IP and load balances all requests.
The nginx.conf configuration is as follows:
```nginx
http {
    #Set MIME types, defined by the mime.types file
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    #Log format
    access_log    /var/log/nginx/access.log;

    #List of servers for load balancing
    upstream load_balance_server {
        #The weight parameter is the weight; the higher the weight, the greater the probability of being assigned
        server 192.168.1.11:80 weight=5;
        server 192.168.1.12:80 weight=1;
        server 192.168.1.13:80 weight=6;
    }

    #HTTP server
    server {
        #Listen on port 80
        listen       80;

        #Access the server via www.helloworld.com
        server_name  www.helloworld.com;

        #Load balance all requests
        location / {
            root  /root;                            #Define the default site root for the server
            index index.html index.htm;             #Define the name of the index file for the home page
            proxy_pass http://load_balance_server;  #Forward requests to the list of servers defined by load_balance_server

            #Here are some reverse proxy configurations (optional)
            #proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            #The back-end web server can obtain the user's real IP via X-Forwarded-For
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_connect_timeout 90;        #Timeout for nginx to connect to the back-end server (proxy connect timeout)
            proxy_send_timeout 90;           #Time for the back-end server to return data (proxy send timeout)
            proxy_read_timeout 90;           #Response time of the back-end server after a successful connection (proxy read timeout)
            proxy_buffer_size 4k;            #Buffer size for the proxy server (nginx) to save user header information
            proxy_buffers 4 32k;             #proxy_buffers: if the average web page is below 32k, set it like this
            proxy_busy_buffers_size 64k;     #Buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;  #Set the temp file size; responses larger than this value from the upstream server are written to temp files
            client_max_body_size 10m;        #Maximum number of bytes of a single file allowed in a client request
            client_body_buffer_size 128k;    #Maximum number of bytes the proxy buffers for a client request body
        }
    }
}
```
The website has multiple webapp configurations
As a website gains more and more functionality, it often becomes necessary to split out relatively independent modules and maintain them separately. That usually means running multiple webapps.
For example, suppose the www.helloworld.com site has several webapps: finance, product, and admin. These applications are accessed via distinct context paths:
www.helloworld.com/finance/
www.helloworld.com/product/
www.helloworld.com/admin/
We know that the default port number of http is 80. These three webapps cannot all use port 80 on one server at the same time, so each application must bind a different port number.
So here's the problem: users actually access www.helloworld.com, and they will not append the corresponding port number when visiting the different webapps. So reverse proxying is needed again.
Configuration is not difficult, let's see how to do it:
```nginx
http {
    #Some basic configurations are omitted here

    upstream product_server {
        server www.helloworld.com:8081;
    }

    upstream admin_server {
        server www.helloworld.com:8082;
    }

    upstream finance_server {
        server www.helloworld.com:8083;
    }

    server {
        #Some basic configurations are omitted here

        #By default, the server points to product
        location / {
            proxy_pass http://product_server;
        }

        location /product/ {
            proxy_pass http://product_server;
        }

        location /admin/ {
            proxy_pass http://admin_server;
        }

        location /finance/ {
            proxy_pass http://finance_server;
        }
    }
}
```
https reverse proxy configuration
Some sites with high security requirements may use HTTPS (HTTP secured with the SSL/TLS standard).
This article does not explain the HTTP protocol or the SSL standard in depth. However, to configure https with nginx, you need to know a few things:
- The fixed port number for HTTPS is 443, different from HTTP's port 80
- The SSL standard requires a security certificate, so in nginx.conf you need to specify the certificate and its corresponding key
The rest is basically the same as the http reverse proxy; only the server section of the configuration differs slightly.
```nginx
#HTTPS server
server {
    #Listen on port 443, the well-known port number mainly used for the HTTPS protocol
    listen       443 ssl;

    #Access the server via www.helloworld.com
    server_name  www.helloworld.com;

    #Location of the ssl certificate file (common certificate formats: crt/pem)
    ssl_certificate      cert.pem;
    #Location of the ssl certificate key
    ssl_certificate_key  cert.key;

    #ssl configuration parameters (optional)
    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;
    #Cipher suites: use high-strength ciphers and exclude null and MD5-based ones
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        root  /root;
        index index.html index.htm;
    }
}
```
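In practice, a plain-http server block is often kept alongside the https one to redirect port 80 traffic. A minimal sketch (not part of the original configuration):

```nginx
# Redirect all plain-http requests on port 80 to https
server {
    listen      80;
    server_name www.helloworld.com;
    return 301 https://$host$request_uri;
}
```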
Static site configuration
Sometimes we need to serve a static site (that is, html files and a bunch of static resources).
For example, if all the static resources are in the /app/dist directory, we only need to specify the home page and the host for this site in nginx.conf.
The configuration is as follows:
```nginx
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;

    gzip on;
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/javascript image/jpeg image/gif image/png;
    gzip_vary on;

    server {
        listen       80;
        server_name  static.zp.cn;

        location / {
            root  /app/dist;
            index index.html;   #Serve index.html as the home page
        }
    }
}
```
Then, add a hosts entry:
127.0.0.1 static.zp.cn
Now visit static.zp.cn and you can access the static site.
Set up file server
Sometimes a team needs to archive data or materials, so a file server is essential. With Nginx you can quickly and easily set up a simple file service.
Key configuration points in Nginx:
- Turn on autoindex to display directory listings; it is off by default.
- Turn on autoindex_exact_size to display file sizes.
- Turn on autoindex_localtime to display file modification times.
- root sets the root path exposed by the file service.
- charset is set to charset utf-8,gbk; to avoid garbled Chinese file names (on Windows Server the names are still garbled after this setting; I have not found a solution yet).
The simplest configuration is as follows:
```nginx
autoindex on;               #Show the directory listing
autoindex_exact_size on;    #Show file sizes
autoindex_localtime on;     #Show file modification times

server {
    charset      utf-8,gbk; #Under Windows Server this still produces garbled names; no solution found yet
    listen       9050 default_server;
    listen       [::]:9050 default_server;
    server_name  _;
    root         /share/fs;
}
```
Cross domain solutions
In web development, the front-end/back-end separation model is often used. In this model, the front end and the back end are independent web applications; for example, the back end is a Java program and the front end is a React or Vue application.
When these independent web apps access each other, cross-origin problems are bound to occur. There are generally two ways to solve them:
1. CORS
Set the HTTP response headers on the back-end server, adding the domain names that need access to Access-Control-Allow-Origin.
2. jsonp
The back end constructs json data according to the request and returns it; the front end loads it via a script tag to work around the cross-origin restriction.
These two approaches are not discussed in detail in this article.
Note that, following the first approach, nginx also provides a cross-origin solution.
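As a minimal illustration of the first approach in nginx, the CORS header can be attached with add_header; a hedged sketch (the allowed origin and backend address are examples, not from the full configuration that follows):

```nginx
# Minimal CORS sketch: allow a single origin to read responses from an API location
location /api/ {
    # Tell the browser which origin may read the response
    add_header 'Access-Control-Allow-Origin' 'http://www.helloworld.com:9000';
    proxy_pass http://127.0.0.1:8080;   # illustrative backend address
}
```

A production setup also needs to handle allowed methods, headers, and OPTIONS preflight requests, which the configuration below does.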
For example, the www.helloworld.com website consists of a front-end app and a back-end app. The front-end port number is 9000, and the back-end port number is 8080.
If the front end and back end interacted over http directly, requests would be rejected because of cross-origin problems. Let's see how nginx solves this:
First, set CORS in the enable-cors.conf file:
```nginx
# allow origin list
set $ACAO '*';

# set single origin
if ($http_origin ~* (www.helloworld.com)$) {
    set $ACAO $http_origin;
}

if ($cors = "trueget") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}

if ($request_method = 'OPTIONS') {
    set $cors "${cors}options";
}

if ($request_method = 'GET') {
    set $cors "${cors}get";
}

if ($request_method = 'POST') {
    set $cors "${cors}post";
}
```
Next, include enable-cors.conf in your server block to bring in the cross-origin configuration:
```nginx
# ----------------------------------------------------
# This file is the nginx configuration fragment for the project
# You can include it directly in the nginx config (recommended)
# Or copy it into an existing nginx config and adapt it yourself
# The www.helloworld.com domain name needs to be configured in dns hosts
# The api location enables cors, which works together with another configuration file in this directory
# ----------------------------------------------------
upstream front_server {
    server www.helloworld.com:9000;
}
upstream api_server {
    server www.helloworld.com:8080;
}

server {
    listen       80;
    server_name  www.helloworld.com;

    location ~ ^/api/ {
        include enable-cors.conf;
        proxy_pass http://api_server;
        rewrite "^/api/(.*)$" /$1 break;
    }

    location ~ ^/ {
        proxy_pass http://front_server;
    }
}
```
That's it.
If there are errors or other problems, please comment and correct.