Nginx configuration and load balancing under Windows

1, Introduction
2, Environment preparation
3, Configuration
4, Tests and results

1, Introduction

Nginx (engine x) is a high-performance HTTP and reverse proxy web server that also provides IMAP/POP3/SMTP proxy services. It is a lightweight web server, reverse proxy server and e-mail (IMAP/POP3) proxy server, distributed under a BSD-like license. It is characterized by low memory usage and strong concurrency: in practice, Nginx handles concurrency better than comparable web servers, which is why it is used by many large companies.
Today, let's walk through installing and configuring Nginx under Windows and setting up simple load balancing.

2, Environment preparation

2.1. Download

Nginx download address: http://nginx.org/en/download.html . This article uses the stable version for Windows. As shown in the figure:

2.2. Nginx directory

Here, the directory structure after unzipping to C:\myProgram\Nginx-1.20.1\ is as follows:

Enter the configuration file directory C:\myProgram\Nginx-1.20.1\conf and create a new conf.d directory to store our custom configuration files.

2.3. Common commands

Here are some common commands that can be used directly later:

nginx -h: view help information
nginx -v / nginx -V: view the Nginx version; -v prints only the version, while -V also prints the build configuration arguments
nginx -s reopen: reopen the log files
nginx -t: verify that the configuration file (nginx.conf) has no syntax errors
nginx -s reload: reload the configuration after it has been modified
nginx -c <file>: start with the specified configuration file (the default is conf/nginx.conf)
start nginx: start Nginx (on Windows)
nginx -s quit: stop Nginx gracefully
nginx -s stop: stop Nginx immediately

Note:
   It is recommended to enter the installation directory, here C:\myProgram\Nginx-1.20.1, open a cmd window, first run nginx -t to check that the configuration file is correct, and then run start nginx, as shown in the figure:

If Nginx is already running and the configuration file has been modified, you only need to run nginx -t and then nginx -s reload. After a successful start, visit http://localhost:80 or http://localhost ; if you see the page shown in the figure below, Nginx is working:

3, Configuration

We have seen the directory structure earlier. In general, nginx.conf holds the global configuration, and our own configuration files go in the custom conf.d directory. They are usually named after the domain name and port, such as localhost_80.conf and localhost_443.conf, and the load-balancing configuration goes in upstream.conf; of course, you can use a different naming scheme. It is best to organize the Nginx configuration by category: once there are many settings, a single file becomes hard to maintain.
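A sketch of that layout (the file names are just the examples mentioned above and can of course differ):

```nginx
conf/
├── nginx.conf              # global configuration
└── conf.d/
    ├── upstream.conf       # load-balancing (upstream) definitions
    ├── localhost_80.conf   # server block for port 80
    └── localhost_443.conf  # server block for port 443
```

nginx.conf then picks all of these up with a single include conf.d/*.conf; inside the http block.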

3.1. Configure nginx.conf

#The global (main) configuration of Nginx; it contains the events and http blocks

#User and group that run the Nginx worker processes; defaults to nobody (on Windows this produces a warning)
#user nobody nobody;
#Number of worker processes to start; each Nginx process consumes about 10M~12M of memory on average
worker_processes 2;
#Global error log file; the log levels are: [debug|info|notice|warn|error|crit]
error_log logs/error.log notice;
#File that stores the pid of the master process
pid logs/nginx.pid;
#Maximum number of files a worker process may open
worker_rlimit_nofile 65535;

#The events block sets the working mode of Nginx and the connection limit
events {
    #Working mode; Nginx supports select, poll, kqueue, epoll, rtsig and /dev/poll
    #use epoll;
    #Maximum connections per worker (total connections = worker_connections * worker_processes)
    worker_connections 65536;
}

#http server settings
http {
    #Include the MIME type mappings
    include mime.types;
    #Default file type: binary stream
    default_type application/octet-stream;
    #Maximum size of the server-name hash table
    server_names_hash_max_size 512;
    #Bucket size of the server-name hash table
    server_names_hash_bucket_size 128;

    #Log output formats
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent $request_body "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"'
                    '"responsetime":$request_time'
                    '-"$upstream_cache_status"'
                    ' Alian $http_[W-Os] $http_[W-Brand] $http_[W-Model] $http_[W-IMEI] $http_[W-App-Version] $http_[W-Token] $http_[W-Partner-Id] $http_[W-Operator-Id] $http_[W-Window-Id]';
    log_format logstash '{"@timestamp":"$time_iso8601",'
                        '"slbip":"$remote_addr",'
                        '"clientip":"$http_x_forwarded_for",'
                        '"serverip":"$server_addr",'
                        '"size":$body_bytes_sent,'
                        '"responsetime":$request_time,'
                        '"domain":"$host",'
                        '"method":"$request_method",'
                        '"requesturi":"$request_uri",'
                        '"url":"$uri",'
                        '"appversion":"$HTTP_APP_VERSION",'
                        '"referer":"$http_referer",'
                        '"agent":"$http_user_agent",'
                        '"status":"$status",'
                        '"W-Brand":"$http_[W-Brand]",'
                        '"W-Model":"$http_[w-model]",'
                        '"W-Token":"$http_[w-token]",'
                        '"devicecode":"$HTTP_HA"}';

    #Maximum request body size allowed from a client
    client_max_body_size 20m;
    #Buffer size for the client request header
    client_header_buffer_size 32k;
    #Maximum number and size of buffers for large client request headers
    large_client_header_buffers 4 32k;

    #Enable efficient file transfer mode
    sendfile on;
    #Do not send packets immediately; send them in one go when the packet is full, which improves I/O
    #performance and helps against network congestion at the cost of a slight delay
    #(tcp_nopush only works together with sendfile and is mutually exclusive with tcp_nodelay)
    tcp_nopush on;
    #Send small chunks of data immediately for fast responses (mutually exclusive with tcp_nopush)
    tcp_nodelay off;
    #Keep-alive timeout of client connections, in seconds; the connection is closed after this
    keepalive_timeout 60;
    #Automatic index (directory browsing and downloading), off by default
    autoindex off;
    #Client request header read timeout
    client_header_timeout 10;
    #Client request body read timeout (the default is 60)
    client_body_timeout 30;
    #Timeout for sending a response to the client
    send_timeout 30;

    #Timeout for connecting to the backend FastCGI server
    fastcgi_connect_timeout 60;
    #Timeout for the FastCGI server to return data
    fastcgi_send_timeout 60;
    #Timeout for reading the response from the FastCGI server
    fastcgi_read_timeout 60;
    #Buffer size for reading the first part of the FastCGI response
    fastcgi_buffer_size 64k;
    #Size and number of buffers for reading the FastCGI response
    fastcgi_buffers 4 64k;
    #Buffer size usable when the system is busy; the recommended size is fastcgi_buffers * 2
    fastcgi_busy_buffers_size 128k;
    #Size of FastCGI temporary files; 128k-256k is reasonable
    fastcgi_temp_file_write_size 128k;

    #HttpGzip module settings (check whether the module is compiled in before enabling)
    #Turn on gzip to compress the output stream in real time
    #gzip on;
    #Minimum number of bytes before a page is compressed; the default is 0, values above 1k are recommended
    #gzip_min_length 1k;
    #Use 4 buffers of 16k as the cache for the compressed result stream
    #gzip_buffers 4 16k;
    #HTTP protocol version used for gzip; the default is 1.1
    #gzip_http_version 1.1;
    #Compression level: 1 is fastest with the lowest ratio, 9 has the highest ratio but is slowest and CPU-heavy
    #gzip_comp_level 3;
    #MIME types to compress; "text/html" is always compressed whether listed or not
    #gzip_types text/plain application/x-javascript text/css application/xml;
    #Let front-end cache servers cache gzip-compressed pages
    #gzip_vary on;

    #Include sub configuration files: all .conf files in the conf.d directory
    include conf.d/*.conf;
    include fastcgi.conf;
}

3.2. Custom configuration: localhost_80.conf

server {
    listen 80;
    #server_name 10.130.3.16;
    server_name localhost;
    charset utf-8;
    add_header X-Cache $upstream_cache_status;

    #Site root directory (customizable, not necessarily this one)
    root html;

    location / {
        root html;
        index index.html index.htm;
    }

    location ~ ^/NLB/ {
        proxy_redirect off;
        #Pass the original host and port
        proxy_set_header Host $host;
        #Pass the remote address so the application can obtain the real client IP
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #load-balance is defined in upstream.conf, which nginx.conf already includes
        proxy_pass http://load-balance;
    }

    #All static files are read directly by nginx, without Tomcat or Resin
    #(note the space between $ and {)
    location ~ .*\.(js|css)?$ {
        expires 15d;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    location ~ ^.*\.svn\/ {
        deny all;
    }

    #Address for viewing the Nginx status page
    location /NginxStatus {
        stub_status on;
        access_log logs/NginxStatus.log;
        auth_basic "NginxStatus";
        auth_basic_user_file confpasswd;
        #The htpasswd file contents can be generated with the htpasswd tool shipped with Apache
    }

    #An absolute path may be used; a relative path is relative to the startup directory (where nginx.exe runs)
    access_log logs/localhost_access.log;
}

3.3. Custom load-balancing configuration: upstream.conf

upstream load-balance {
    #Reserved backup machine: only requested when all non-backup machines fail or are busy
    server 127.0.0.1:8082 backup;
    #This server temporarily does not participate in load balancing
    server 127.0.0.1:8083 down;
    server 127.0.0.1:8084;
    server 127.0.0.1:8085;
}

In fact, we can also configure weights: the larger the weight value, the more often a server is selected in the rotation. max_fails is the number of failed requests allowed (the default is 1), and fail_timeout is how long the server is considered unavailable after max_fails failures; the two are generally used together, as follows:

upstream load-balance {
    #Reserved backup machine: only requested when all non-backup machines fail or are busy
    server 127.0.0.1:8082 backup;
    #This server temporarily does not participate in load balancing
    server 127.0.0.1:8083 down;
    server 127.0.0.1:8084 weight=10;
    server 127.0.0.1:8085 weight=1 max_fails=2 fail_timeout=30s;
}
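To see how the weights play out, here is a minimal Java sketch of the smooth weighted round-robin selection that Nginx uses for weighted upstreams. The peer addresses and weights below are just illustrative values, and this is a simplified model, not Nginx's actual source:

```java
import java.util.ArrayList;
import java.util.List;

public class SmoothWeightedRoundRobin {
    static final class Peer {
        final String name;
        final int weight;
        int current; // running counter used by the smooth algorithm
        Peer(String name, int weight) { this.name = name; this.weight = weight; }
    }

    private final List<Peer> peers = new ArrayList<>();

    public void add(String name, int weight) {
        peers.add(new Peer(name, weight));
    }

    // One selection step: every peer gains its weight, the peer with the
    // highest counter wins and is then reduced by the total weight, which
    // spreads a heavy peer's turns evenly instead of bunching them together.
    public String next() {
        int total = 0;
        Peer best = null;
        for (Peer p : peers) {
            p.current += p.weight;
            total += p.weight;
            if (best == null || p.current > best.current) {
                best = p;
            }
        }
        best.current -= total;
        return best.name;
    }

    public static void main(String[] args) {
        SmoothWeightedRoundRobin lb = new SmoothWeightedRoundRobin();
        lb.add("127.0.0.1:8084", 2); // hypothetical weights for illustration
        lb.add("127.0.0.1:8085", 1);
        StringBuilder order = new StringBuilder();
        for (int i = 0; i < 6; i++) {
            order.append(lb.next().endsWith("8084") ? 'A' : 'B').append(' ');
        }
        System.out.println(order); // A B A A B A
    }
}
```

With weights 2:1, the heavy peer answers two out of every three requests, but its turns are interleaved rather than consecutive.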

4, Tests and results

4.1. Test project

application.yml

server:
  port: 8082
  servlet:
    context-path: /NLB
package com.alian.loadbalance.controller;

import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@Slf4j
@RestController
public class LoadBalanceController {

    //Port of the current instance, injected from application.yml
    @Value("${server.port}")
    private int serverPort;

    @RequestMapping("loadBalance")
    public String loadBalance() {
        log.info("Service [{}] handled the request", serverPort);
        return "Service port:" + serverPort;
    }
}

4.2. Running multiple service instances in IDEA

To make the application highly available, we need to deploy multiple service instances. Taking IDEA as an example, let's see how to test this on a single machine. Since the port in the configuration file is 8082 and each instance only needs a different port, we change the port before each start. The four instances started here are described as follows:

Service NLB port: explanation
8082: hot backup
8083: offline
8084: normal operation
8085: normal operation
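Since, as noted above, only the port differs between instances, the application.yml for the 8083 instance would look like this (everything else stays the same):

```yaml
server:
  port: 8083        # the only value changed per instance
  servlet:
    context-path: /NLB
```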

Tick Allow parallel run in the upper right corner of IDEA's Edit Configurations dialog. Remember to change the port each time before running again. The started instances are shown in the figure below:

4.3. Normal requests

Make two consecutive requests to http://localhost/NLB/loadBalance . The results obtained:


Whether we request twice or more times, the results are returned in turn, following Nginx's default round-robin rule.

4.4. Other tests

Stop the instances on ports 8084 and 8085, then request http://localhost/NLB/loadBalance again. We find that our hot-backup server kicks in. The results are as follows:
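This backup behaviour can be pictured with a small Java sketch: non-backup servers are tried first, and the backup address is only returned once every normal server is down or has failed. The peer list mirrors upstream.conf above; the selection logic is deliberately simplified for illustration and is not Nginx's real implementation:

```java
import java.util.List;
import java.util.Set;

public class BackupFailover {
    static final class Peer {
        final String addr;
        final boolean backup;
        final boolean down;
        Peer(String addr, boolean backup, boolean down) {
            this.addr = addr;
            this.backup = backup;
            this.down = down;
        }
    }

    // Pick a usable peer: a healthy non-backup peer wins; the backup peer is
    // only chosen when every non-backup peer is down or has failed.
    static String pick(List<Peer> peers, Set<String> failed) {
        for (Peer p : peers) {
            if (!p.backup && !p.down && !failed.contains(p.addr)) {
                return p.addr;
            }
        }
        for (Peer p : peers) {
            if (p.backup && !failed.contains(p.addr)) {
                return p.addr;
            }
        }
        return null; // no server available at all
    }

    public static void main(String[] args) {
        List<Peer> peers = List.of(
                new Peer("127.0.0.1:8082", true, false),  // backup
                new Peer("127.0.0.1:8083", false, true),  // down
                new Peer("127.0.0.1:8084", false, false),
                new Peer("127.0.0.1:8085", false, false));
        // Everything healthy: a normal server answers
        System.out.println(pick(peers, Set.of())); // 127.0.0.1:8084
        // 8084 and 8085 stopped: the hot backup takes over
        System.out.println(pick(peers, Set.of("127.0.0.1:8084", "127.0.0.1:8085"))); // 127.0.0.1:8082
    }
}
```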

4.5. Static resource requests

We put a picture AlianBlog.png under C:\myProgram\Nginx-1.20.1\html. Because localhost_80.conf is already configured with html as the root directory, we can request http://localhost/AlianBlog.png directly. The results are as follows:

15 October 2021, 22:21 | Views: 9061
