yum install haproxy

HAProxy profile configuration
Below is a complete configuration file; afterwards I will explain the parts that need to be changed.
global
    # log 127.0.0.1 local0 info   #log level: [err warning info debug]
    log 127.0.0.1 local3          #log destination
    maxconn 65535                 #maximum concurrent connections
    daemon                        #run in the background
    nbproc 1                      #number of processes
    #pidfile /home/admin/haproxy/logs/haproxy.pid
    #pidfile /usr/local/haproxy/haproxy.pid

defaults
    log global
    mode http                     #default mode
    option httplog                #HTTP log format
    option dontlognull
    option forwardfor
    option httpclose
    retries 2                     #consider the server unavailable after two failed retries
    # option redispatch           #if the cookie stores a serverId and that server goes down, redirect the client to another healthy server
    maxconn 65535                 #maximum connections; requests beyond this wait in the queue
    contimeout 5000               #connection timeout
    clitimeout 30000              #client timeout
    srvtimeout 30000              #server timeout

frontend web_in
    mode http
    maxconn 65535
    bind :80
    acl is_a hdr_beg(host) -i a.example.com
    acl is_b hdr_beg(host) -i 2linux.com
    use_backend a_server if is_a
    use_backend b_server if is_b

backend a_server
    mode http                     #HTTP mode
    #balance source
    stats uri /haproxy
    balance leastconn
    cookie JSESSIONID prefix
    stats hide-version
    option httpclose
    server hellen_3 192.168.56.119:80 weight 5 check inter 2000 rise 2 fall 3

backend b_server
    mode http                     #HTTP mode
    stats uri /haproxy
    balance leastconn
    #balance roundrobin
    cookie JSESSIONID prefix
    stats hide-version
    option httpclose
    server test_2 192.168.61.129:80 weight 5 check inter 2000 rise 2 fall 3
    server test_3 192.168.61.132:80 weight 5 check inter 2000 rise 2 fall 3

listen stats_auth
    bind 192.168.61.130:8080      #IP and port of the cluster management interface
    stats refresh 20s
    stats enable
    stats uri /admin-status      #access path of the management interface
    stats auth admin:linux123    #login username and password
    stats admin if TRUE
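After saving the file (typically /etc/haproxy/haproxy.cfg when installed via yum), it is worth validating the syntax before starting the service. A minimal sketch, assuming a systemd-based system:

    # check the configuration file for syntax errors
    haproxy -c -f /etc/haproxy/haproxy.cfg

    # start HAProxy and enable it at boot
    systemctl start haproxy
    systemctl enable haproxy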
Set the backend group corresponding to each domain name
Let's look at the frontend web_in section:
frontend web_in
    mode http
    maxconn 65535
    bind :80
    acl is_a hdr_beg(host) -i a.example.com
    acl is_b hdr_beg(host) -i 2linux.com
    use_backend a_server if is_a
    use_backend b_server if is_b
acl is_b hdr_beg(host) -i 2linux.com means that requests whose Host header begins with 2linux.com match the ACL is_b, and use_backend b_server if is_b sends those requests to the backend b_server block. In backend b_server, shown in the next section, we set the child nodes used for load balancing.
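To verify that the host-based routing behaves as intended, you can send requests with an explicit Host header. A quick sketch, assuming HAProxy is reachable at 192.168.61.130 (the address used for the management page below):

    # should be routed to backend a_server
    curl -H "Host: a.example.com" http://192.168.61.130/

    # should be routed to backend b_server (test_2 / test_3)
    curl -H "Host: 2linux.com" http://192.168.61.130/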
Set child nodes
backend b_server
    mode http                     #HTTP mode
    stats uri /haproxy
    balance leastconn             #load-balancing algorithm
    cookie JSESSIONID prefix
    stats hide-version
    option httpclose
    server test_2 192.168.61.129:80 weight 5 check inter 2000 rise 2 fall 3
    server test_3 192.168.61.132:80 weight 5 check inter 2000 rise 2 fall 3
server test_2 and server test_3 define the child nodes.
server test_2 192.168.61.129:80 weight 5 check inter 2000 rise 2 fall 3
This declares a node at 192.168.61.129:80 with weight 5 (the higher the weight, the larger its share of requests). check inter 2000 rise 2 fall 3 enables health checks every 2000 milliseconds: a node is removed from the cluster after 3 consecutive failed checks (fall 3) and brought back after 2 consecutive successful checks (rise 2).
As configured above, I have set up two nodes here: 129 and 132.
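To see the health check in action, you can stop the web server on one node and watch HAProxy take it out of rotation. A sketch, assuming nginx serves the pages on the nodes:

    # on node 192.168.61.132: simulate a failure
    systemctl stop nginx
    # after 3 failed checks (fall 3 at inter 2000, about 6 seconds)
    # HAProxy stops sending traffic to test_3

    systemctl start nginx
    # after 2 successful checks (rise 2, about 4 seconds) test_3 rejoins the cluster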
Management page
listen stats_auth
    bind 192.168.61.130:8080      #IP and port of the cluster management interface
    stats refresh 20s
    stats enable
    stats uri /admin-status      #access path of the management interface
    stats auth admin:linux123    #login username and password
    stats admin if TRUE
This block configures the web management page of the cluster; the comments in the configuration explain what each line does.
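Once HAProxy is running, the page can be opened in a browser or fetched from the command line, using the address and credentials configured above:

    # open http://192.168.61.130:8080/admin-status in a browser, or:
    curl -u admin:linux123 http://192.168.61.130:8080/admin-status

    # appending ;csv returns the statistics in CSV form, handy for scripting
    curl -u admin:linux123 "http://192.168.61.130:8080/admin-status;csv"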
Test
I have configured a different page on 129 and on 132, so we can visit the site to test the load balancing. Requests are answered by both nodes in turn, so the test is successful.
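For reference, the same test can be reproduced from the command line. A sketch, assuming nginx document roots on the two nodes (curl sends no JSESSIONID cookie, so cookie persistence does not pin the requests to one node):

    # on 192.168.61.129
    echo "page from node 129" > /usr/share/nginx/html/index.html

    # on 192.168.61.132
    echo "page from node 132" > /usr/share/nginx/html/index.html

    # from any client: repeated requests should return both pages
    for i in $(seq 1 4); do
        curl -s -H "Host: 2linux.com" http://192.168.61.130/
    done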