HAProxy learning notes
frontend webserver *:80    # frontend named webserver, listening on port 80
default_backend web
#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
backend web    # backend named web
balance roundrobin
server web1 192.168.223.137:80 check
server web2 192.168.223.146:80 check
When you point a browser at http://192.168.223.136/, requests are dispatched to the backend servers according to the roundrobin algorithm.
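A quick way to watch roundrobin in action is to repeat the request a few times from any client that can reach the proxy (a simple sketch, assuming curl is available; the responses should alternate between web1 and web2):
for i in 1 2 3 4; do curl -s http://192.168.223.136/; done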
Configuring HAProxy logging:
1. In /etc/rsyslog.conf, uncomment the following lines so rsyslog accepts syslog messages over the network (the UDP pair is the one haproxy actually uses):
# Provides UDP syslog reception
$ModLoad imudp
$UDPServerRun 514
# Provides TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514
2. Add the following line (still in /etc/rsyslog.conf):
local2.* /var/log/haproxy.log
3. Restart the rsyslog service: service rsyslog restart
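For reference, here is a minimal sketch of the two pieces that have to match up, assuming the stock CentOS haproxy.cfg, which sends its logs to the local2 facility on 127.0.0.1 over UDP:
# /etc/haproxy/haproxy.cfg (global section)
global
    log 127.0.0.1 local2
# /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
local2.*    /var/log/haproxy.log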
When requests go through haproxy, the log shows entries like these:
[root@node1 ~]# tail -f /var/log/haproxy.log
Aug 5 21:55:42 localhost haproxy[66553]: 192.168.223.1:56443 [05/Aug/2017:21:55:41.900] webserver web/web2 168/0/0/0/168 200 297 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
Aug 5 21:55:42 localhost haproxy[66553]: 192.168.223.1:56443 [05/Aug/2017:21:55:42.068] webserver web/web1 158/0/0/1/159 200 260 - - ---- 1/1/0/0/0 0/0 "GET / HTTP/1.1"
You can see the real client address (192.168.223.1), the name of the frontend that accepted the request, and which backend server handled it.
Now check the backend servers' own logs (one backend is served by nginx, the other by httpd).
The nginx log:
[root@wadeson html]# tail -f ../logs/access.log
192.168.223.136 "-" - - [05/Aug/2017:21:55:39 +0800] "GET / HTTP/1.1" "192.168.223.136" 200 30 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" "192.168.223.1"
192.168.223.136 "-" - - [05/Aug/2017:21:55:40 +0800] "GET / HTTP/1.1" "192.168.223.136" 200 30 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko" "192.168.223.1"
The httpd log, however:
[root@wadeson ~]# tail -f /var/log/httpd/access_log
192.168.223.136 - - [05/Aug/2017:21:55:38 +0800] "GET / HTTP/1.1" 200 30 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
The real client IP is not visible here. How can the backend get the real IP?
The haproxy configuration file defines by default: option forwardfor except 127.0.0.0/8
This directive allows haproxy to insert an "X-Forwarded-For" header into the requests it forwards to the backend servers (it applies on the haproxy-to-backend leg). To see the real client IP, you only need to add this header to the backend server's log format.
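Conceptually, the request that reaches a backend now carries the original client address in that header; an illustrative sketch (other headers omitted):
GET / HTTP/1.1
Host: 192.168.223.136
X-Forwarded-For: 192.168.223.1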
The nginx log format:
log_format main '$remote_addr "$http_x_real_ip" - $remote_user [$time_local] "$request" "$http_host" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';
This format already logs the $http_x_forwarded_for header, so nginx shows the real client IP without any change.
Now set the httpd log format:
LogFormat "%h %{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog logs/access_log combined
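Assuming the stock CentOS layout, these two directives live in /etc/httpd/conf/httpd.conf; restart httpd afterwards so the new format takes effect:
service httpd restart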
With the header added to the httpd log format and the combined format name referenced by CustomLog, access the site again; the refreshed log now shows:
192.168.223.136 - - [05/Aug/2017:21:55:43 +0800] "GET / HTTP/1.1" 200 30 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
192.168.223.136 192.168.223.1 - - [05/Aug/2017:22:23:02 +0800] "GET / HTTP/1.1" 200 30 "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
At this point both backends, nginx and httpd, log the real client IP.
HAProxy load-balancing algorithms:
1. source: hash of the client's source IP address, so requests from the same client are always sent to the same backend server. Modify the haproxy configuration file:
backend web
balance source
hash-type consistent
server web1 192.168.223.137:80 check
server web2 192.168.223.146:80 check
Check the haproxy log:
Aug 5 22:32:25 localhost haproxy[66820]: Proxy webserver started.
Aug 5 22:32:25 localhost haproxy[66820]: Proxy web started.
Aug 5 22:32:31 localhost haproxy[66821]: 192.168.223.1:57002 [05/Aug/2017:22:32:31.349] webserver web/web2 1/0/0/1/2 200 297 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
Aug 5 22:32:31 localhost haproxy[66821]: 192.168.223.1:57002 [05/Aug/2017:22:32:31.351] webserver web/web2 198/0/0/1/199 304 149 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
Aug 5 22:32:31 localhost haproxy[66821]: 192.168.223.1:57002 [05/Aug/2017:22:32:31.551] webserver web/web2 183/0/1/1/185 304 149 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
You can see that all the requests were sent to the web2 backend server.
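That is expected: the hash key is the client's source IP, so every request from 192.168.223.1 maps to the same server. A request from a second client with a different source IP (hypothetical here) may be hashed to web1 instead:
curl -s http://192.168.223.136/    # run from another machine with a different IP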
2. uri: hash algorithm based on the request URI.
Now create a test.html file on each of the two web servers:
echo "<h1>test 192.168.223.137</h1>" > test.html    # on web1 (192.168.223.137)
echo "<h1>test 192.168.223.146</h1>" > test.html    # on web2 (192.168.223.146)
Modify the haproxy configuration file:
backend web
balance uri
hash-type consistent
server web1 192.168.223.137:80 check
server web2 192.168.223.146:80 check
Visit http://192.168.223.136/test.html
Check the haproxy log:
Aug 5 22:36:44 localhost haproxy[66860]: Proxy webserver started.
Aug 5 22:36:44 localhost haproxy[66860]: Proxy web started.
Aug 5 22:36:55 localhost haproxy[66861]: 192.168.223.1:57065 [05/Aug/2017:22:36:55.365] webserver web/web1 0/0/0/0/0 200 260 - - ---- 1/1/0/0/0 0/0 "GET /test.html HTTP/1.1"
Aug 5 22:36:57 localhost haproxy[66861]: 192.168.223.1:57065 [05/Aug/2017:22:36:55.366] webserver web/web1 1903/0/0/1/1904 304 173 - - ---- 1/1/0/0/0 0/0 "GET /test.html HTTP/1.1"
You can see that all requests for the test.html URI were sent to web1.
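Because the hash key is the URI itself, /test.html consistently goes to web1, while other paths may hash to the other backend; a quick sketch of how to compare (the actual mapping depends on the hash result):
curl -s http://192.168.223.136/test.html    # always served by web1 here
curl -s http://192.168.223.136/             # may be served by web2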
3. hdr(name): hash algorithm based on the content of a request header.
Modify the haproxy configuration file:
backend web
balance hdr(User-Agent)
hash-type consistent
server web1 192.168.223.137:80 check
server web2 192.168.223.146:80 check
Check the haproxy log:
Aug 5 22:43:00 localhost haproxy[66910]: Proxy webserver started.
Aug 5 22:43:00 localhost haproxy[66910]: Proxy web started.
Aug 5 22:43:15 localhost haproxy[66911]: 192.168.223.1:57124 [05/Aug/2017:22:43:15.826] webserver web/web2 0/0/0/1/1 304 149 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
Aug 5 22:43:17 localhost haproxy[66911]: 192.168.223.1:57124 [05/Aug/2017:22:43:15.827] webserver web/web2 1916/0/0/0/1916 304 149 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
All requests were dispatched to web2, and web2's log shows:
192.168.223.136 192.168.223.1 - - [05/Aug/2017:22:43:22 +0800] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
192.168.223.136 192.168.223.1 - - [05/Aug/2017:22:43:22 +0800] "GET / HTTP/1.1" 304 - "-" "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko"
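Since the hash key here is the User-Agent header, every request from the same browser maps to the same server. Sending a different User-Agent may select the other backend (a sketch, the agent string is arbitrary):
curl -s -A "some-other-agent" http://192.168.223.136/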
Configuration file notes:
1. Proxy sections
- defaults: provides default parameters for all the other sections; these defaults can be reset by the next "defaults" section.
- frontend: defines a set of listening sockets that accept client requests and establish connections with them.
- backend: defines a set of backend servers to which the proxy forwards the matching client requests.
- listen: ties a frontend and a backend together, defining a complete proxy in a single section; usually only useful for TCP-only traffic.
All proxy names may only contain uppercase letters, lowercase letters, digits, '-', '_', '.' and ':'; in addition, ACL names are case-sensitive.
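For example, a listen section is commonly used for the built-in stats page; a minimal sketch (the name, port and URI below are my own choices, not part of this setup):
listen stats
    bind *:9000
    mode http
    stats enable
    stats uri /haproxy-status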
bind [<address>]:<port_range> [, ...] [param*]
Defines one or more listening addresses and/or ports in a frontend.
mode { tcp|http|health }
The mode (protocol) the instance runs in: tcp for layer-4 proxying, http for layer-7 HTTP processing, health for a simple health-check responder.
use_backend <backend> if <condition>
Switches the request to the named backend when the ACL condition matches; default_backend applies when no use_backend rule matches. For example:
use_backend dynamic if url_dyn
use_backend static if url_css url_img extension_img
default_backend dynamic
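The ACLs referenced above (url_dyn, url_css, url_img, extension_img) are not defined in this excerpt; they would normally be declared in the frontend, roughly along these lines (the match patterns are illustrative guesses, not from the original configuration):
acl url_dyn        path_beg    /cgi-bin/ /dynamic/
acl url_css        path_end    .css
acl url_img        path_beg    /images/
acl extension_img  path_end    .gif .png .jpg .jpeg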
cookie SRV insert indirect nocache
Enables cookie-based session persistence, for example:
backend web
balance roundrobin
cookie SRV insert indirect nocache    # insert a cookie named SRV, valued web1_session or web2_session, on top of the existing cookie information, binding the client to a server
server web1 192.168.223.137:80 check cookie web1_session
server web2 192.168.223.146:80 check cookie web2_session
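With this in place the browser's first response carries a cookie naming the chosen server, and haproxy routes later requests bearing that cookie back to the same server. Roughly what the client receives (a sketch, exact attributes depend on the haproxy version):
Set-Cookie: SRV=web1_session; path=/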