Varnish Part 2: Web Caching in Practice
I. Lab Environment
Operating system: CentOS 7.2
Host: node-proxy  IP: 192.168.2.5 (simulated public) / 192.168.2.18 (simulated internal)  Role: dual-NIC nginx load-balancing scheduler
Host: node01  IP: 192.168.2.14 (simulated internal)  Role: varnish cache server
Host: node02  IP: 192.168.2.15 / 192.168.2.101 (simulated internal)  Role: httpd static web01 server, simulating IP-based virtual hosts
Host: node03  IP: 192.168.2.17 / 192.168.2.102 (simulated internal)  Role: httpd static web02 server, simulating IP-based virtual hosts
II. Configure the Network and Services on Each Node
1. Configure the NICs on the nginx load balancer
[root@node-proxy ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno16777728: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a2:f5:26 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.5/24 brd 192.168.2.255 scope global eno16777728
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea2:f526/64 scope link
       valid_lft forever preferred_lft forever
3: eno33554960: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:a2:f5:30 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.18/24 brd 192.168.2.255 scope global eno33554960
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea2:f530/64 scope link
       valid_lft forever preferred_lft forever
2. Install nginx on the load balancer (the default yum version is 1.10.2)
[root@node-proxy ~]# yum install nginx -y
3. Edit /etc/nginx/nginx.conf on the load balancer to reverse-proxy requests to the varnish cache server
Note: this lab caches only static content, so all requests are dispatched to Varnish. In production you should dispatch to different varnish servers based on the URL, or after separating dynamic from static content, to raise the cache hit ratio
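As a sketch of that production-style split, dynamic requests could bypass the cache entirely while static requests go through varnish. The upstream names and the second backend address below are hypothetical, chosen for illustration only:

```nginx
# Hypothetical sketch of a dynamic/static split in nginx.
# Upstream names and the app-tier address are illustrative assumptions.
upstream varnish_static {
    server 192.168.2.14:6081;    # the varnish cache server from this lab
}
upstream app_dynamic {
    server 192.168.2.15:80;      # hypothetical application tier
}
server {
    listen 80;
    server_name localhost;
    location ~* \.(php|jsp)$ {
        proxy_pass http://app_dynamic;     # dynamic content: skip the cache
    }
    location / {
        proxy_pass http://varnish_static;  # static content: cacheable by varnish
    }
}
```

Because only the static traffic reaches varnish, the objects it holds are all cacheable and the hit ratio rises accordingly.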
[root@node-proxy /]# vim /etc/nginx/nginx.conf
*****************************************************************************
......
    server {
        listen       80;
        #listen [::]:80 default_server;
        server_name  localhost;
        #root         /usr/share/nginx/html;

        # Load configuration files for the default server block.
        #include /etc/nginx/default.d/*.conf;

        location / {
            root   /usr/share/nginx/html;
            index  index.html index.htm;
            proxy_pass http://192.168.2.14:6081;    # dispatch to the varnish cache server
        }
    }
4. Start the nginx service and check that it is listening normally
[root@node-proxy /]# systemctl start nginx.service
[root@node-proxy /]# ss -tlpn | grep 'nginx'
LISTEN  0  128  *:80  *:*  users:(("nginx",pid=2979,fd=6),("nginx",pid=2978,fd=6),("nginx",pid=2977,fd=6))
5. Configure the NICs on the static web01 server (adding an IP on the fly)
[root@node02 ~]# ip addr add 192.168.2.101/24 dev eno16777728
[root@node02 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno16777728: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:01:07:1d brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.15/24 brd 192.168.2.255 scope global eno16777728
       valid_lft forever preferred_lft forever
    inet 192.168.2.101/24 scope global secondary eno16777728
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe01:71d/64 scope link
       valid_lft forever preferred_lft forever
6. Install and configure the httpd service on web01 (default httpd version 2.4)
1) Install httpd
[root@node02 ~]# yum install httpd -y
2) Create the document roots for the virtual hosts: under /var/www, create the test1 and test2 directories, each holding an index.html
[root@node02 www]# pwd
/var/www
[root@node02 www]# mkdir {test1,test2}
index.html under test1:
<HTML>
<BODY>
<CENTER><H1>web01 192.168.2.15 test1.html!</H1></CENTER>
</BODY>
</HTML>
index.html under test2:
<HTML>
<BODY>
<CENTER><H1>web01 192.168.2.101 test1.txt!</H1></CENTER>
</BODY>
</HTML>
3) Edit httpd.conf, find the DocumentRoot "/var/www/html" line, and comment it out
#DocumentRoot "/var/www/html"
4) Copy httpd-vhosts.conf from /usr/share/doc/httpd-2.4.6/ into /etc/httpd/conf.d/ and edit it as follows:
[root@node02 conf.d]# cp /usr/share/doc/httpd-2.4.6/httpd-vhosts.conf .
[root@node02 conf.d]# cat httpd-vhosts.conf
<VirtualHost 192.168.2.15:80>
    ServerAdmin www.test.com
    DocumentRoot "/var/www/test1"
</VirtualHost>
<VirtualHost 192.168.2.101:80>
    ServerAdmin www.test.com
    DocumentRoot "/var/www/test2"
</VirtualHost>
5) Start the httpd service and test access
[root@node02 conf.d]# systemctl start httpd.service
[root@node02 conf.d]# curl http://192.168.2.15
<HTML>
<BODY>
<CENTER><H1>web01 192.168.2.15 test1.html!</H1></CENTER>
</BODY>
</HTML>
[root@node02 conf.d]# curl http://192.168.2.101
<HTML>
<BODY>
<CENTER><H1>web01 192.168.2.101 test1.txt!</H1></CENTER>
</BODY>
</HTML>
7. Configure the NICs on the static web02 server (adding an IP on the fly)
[root@node03 ~]# ip addr add 192.168.2.102/24 dev eno16777728
[root@node03 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno16777728: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:8f:74:d9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.17/24 brd 192.168.2.255 scope global eno16777728
       valid_lft forever preferred_lft forever
    inet 192.168.2.102/24 scope global secondary eno16777728
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe8f:74d9/64 scope link
       valid_lft forever preferred_lft forever
8. Install and configure the httpd service on web02 (default httpd version 2.4)
1) Install httpd
[root@node03 ~]# yum install -y httpd
2) Create the document roots for the virtual hosts: under /var/www, create the test1 and test2 directories, each holding an index.html
[root@node03 www]# pwd
/var/www
[root@node03 www]# mkdir {test1,test2}
index.html under test1:
<HTML>
<BODY>
<CENTER><H1>web02 192.168.2.17 test2.html!</H1></CENTER>
</BODY>
</HTML>
index.html under test2:
<HTML>
<BODY>
<CENTER><H1>web02 192.168.2.102 test2.txt!</H1></CENTER>
</BODY>
</HTML>
3) Edit httpd.conf, find the DocumentRoot "/var/www/html" line, and comment it out
#DocumentRoot "/var/www/html"
4) Copy httpd-vhosts.conf from /usr/share/doc/httpd-2.4.6/ into /etc/httpd/conf.d/ and edit it as follows:
[root@node03 conf.d]# cp /usr/share/doc/httpd-2.4.6/httpd-vhosts.conf .
[root@node03 conf.d]# cat httpd-vhosts.conf
<VirtualHost 192.168.2.17:80>
    ServerAdmin www.test.com
    DocumentRoot "/var/www/test1"
</VirtualHost>
<VirtualHost 192.168.2.102:80>
    ServerAdmin www.test.com
    DocumentRoot "/var/www/test2"
</VirtualHost>
5) Start the httpd service and test access
[root@node03 conf.d]# systemctl start httpd.service
[root@node03 conf.d]# curl http://192.168.2.17
<HTML>
<BODY>
<CENTER><H1>web02 192.168.2.17 test2.html!</H1></CENTER>
</BODY>
</HTML>
[root@node03 conf.d]# curl http://192.168.2.102
<HTML>
<BODY>
<CENTER><H1>web02 192.168.2.102 test2.txt!</H1></CENTER>
</BODY>
</HTML>
9. Install and configure the Varnish cache service on node01 (default varnish version 4.0)
1) Install Varnish
[root@node01 ~]# yum install -y varnish
2) Configure the varnish server: edit /etc/varnish/default.vcl so requests are proxied to the single backend host web01 (node02), then start the varnish service. The configuration is:
[root@node01 ~]# vim /etc/varnish/default.vcl
backend default {
    .host = "192.168.2.15";
    .port = "80";
}
[root@node01 ~]# systemctl start varnish.service
Caution: restarting varnish invalidates the entire cache, which can have serious consequences. Be very careful!
3) Log into the varnish interactive CLI, load and compile the modified configuration under a new name, test, and activate it
[root@node01 ~]# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
200
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
Linux,3.10.0-327.el7.x86_64,x86_64,-smalloc,-smalloc,-hcritbit
varnish-4.0.4 revision 386f712

Type 'help' for command list.
Type 'quit' to close CLI session.

vcl.load test default.vcl        # compile and load default.vcl as a VCL named "test"
200
VCL compiled.
vcl.list                         # list the currently loaded VCL configurations
200
active          0 boot
available       0 test

vcl.use test                     # activate the "test" configuration
200
VCL 'test' now active
4) From client 192.168.2.138, test access in a browser via the nginx server's simulated public IP (192.168.2.5) and confirm the request is successfully proxied through to 192.168.2.15:
5) To verify whether a client request hits the cache, add a check in the vcl_deliver subroutine that sets a custom header in the response sent to the client, so you can tell both whether the cache was hit and which varnish server answered
Note: the built-in variable obj.hits drives a custom header named X-Cached: hit if the object came from cache, miss otherwise. server.ip is appended to identify the cache server's IP
@@Edit /etc/varnish/default.vcl as follows:
sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
    if (obj.hits > 0) {
        set resp.http.X-Cached = "hit " + server.ip;
    } else {
        set resp.http.X-Cached = "miss " + server.ip;
    }
}
@@In the varnish CLI, load and compile the modified configuration as test2 and activate it
[root@node01 ~]# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
200
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
Linux,3.10.0-327.el7.x86_64,x86_64,-smalloc,-smalloc,-hcritbit
varnish-4.0.4 revision 386f712

Type 'help' for command list.
Type 'quit' to close CLI session.

vcl.load test2 default.vcl
200
VCL compiled.
vcl.list
200
available       0 boot
available       0 test
active          0 test1
available       0 test2

vcl.use test2
200
VCL 'test2' now active
@@Verify from the Linux client 192.168.2.6. Note that the first request for an object can never be a cache hit
[root@moni ~]# curl -I http://192.168.2.5
HTTP/1.1 200 OK
Server: nginx/1.10.2
Date: Thu, 19 Jan 2017 03:08:12 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 98
Connection: keep-alive
Last-Modified: Thu, 19 Jan 2017 01:30:56 GMT
ETag: "62-546687c1a0225"
X-Varnish: 32782
Age: 0
Via: 1.1 varnish-v4
X-Cached: miss 192.168.2.14        # cache miss
[root@moni ~]# curl -I http://192.168.2.5
HTTP/1.1 200 OK
Server: nginx/1.10.2
Date: Thu, 19 Jan 2017 03:08:17 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 98
Connection: keep-alive
Last-Modified: Thu, 19 Jan 2017 01:30:56 GMT
ETag: "62-546687c1a0225"
X-Varnish: 12 32783
Age: 5
Via: 1.1 varnish-v4
X-Cached: hit 192.168.2.14         # cache hit
6) Define policies in the vcl_recv subroutine so that certain client requests are never cached
Note: in this example, when the request URL matches /admin or /login or ends in .php, or when the user-agent header matches "curl", return(pass) hands the request to the vcl_pass engine and the response is not cached
@@Edit /etc/varnish/default.vcl as follows:
sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.
    if (req.url ~ "(?i)^/(admin|login)" || req.url ~ "(?i)\.php$" || req.http.user-agent ~ "curl") {
        return(pass);
    }
}
Note: ~ means regular-expression match; (?i) makes the match case-insensitive
@@In the varnish CLI, load and compile the modified configuration as test3 and activate it
[root@node01 ~]# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
200
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
Linux,3.10.0-327.el7.x86_64,x86_64,-smalloc,-smalloc,-hcritbit
varnish-4.0.4 revision 386f712

Type 'help' for command list.
Type 'quit' to close CLI session.

vcl.load test3 default.vcl
200
VCL compiled.
vcl.list
200
available       0 boot
available       0 test
available       0 test1
active          0 test2
available       0 test3

vcl.use test3
200
VCL 'test3' now active
@@Verify from client 192.168.2.6 that the restricted URLs are not cached
[root@moni ~]# curl -I http://192.168.2.5/admin
HTTP/1.1 404 Not Found
Server: nginx/1.10.2
Date: Thu, 19 Jan 2017 04:03:14 GMT
Content-Type: text/html; charset=iso-8859-1
Connection: keep-alive
X-Varnish: 32785
Age: 0
Via: 1.1 varnish-v4
X-Cached: miss 192.168.2.14        # the URL matches /admin, so it is not cached
7) Remove certain headers from the client request (such as cookies) or from the backend response (such as Set-Cookie headers), then cache as usual, to raise the hit ratio. A request that carries a Cookie header is very hard to serve from cache: once cookie data is mixed into the cache key along with the request URL, the hashed key differs from client to client and almost never matches an existing cached object. So for cookies that carry no sensitive information, strip them from the headers before the cache lookup
Note: to raise the hit ratio, remove useless cookie headers from request messages and stale Set-Cookie headers from response messages. Cookies in the client request must be removed in vcl_recv; cookies in the backend response must be removed in vcl_backend_response
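The effect of cookies on the cache key can be sketched in Python. This is an illustration of the idea only, not varnish's actual hashing code (varnish's real hash input is defined in vcl_hash): if the cookie participates in the key, two requests for the same URL from different sessions produce different keys and can never share a cached object.

```python
import hashlib

def cache_key(url, cookie=None):
    """Build a cache key from the URL, optionally mixing in a cookie.
    Illustrative model of why cookied requests rarely hit the cache."""
    h = hashlib.sha256(url.encode())
    if cookie is not None:
        h.update(cookie.encode())
    return h.hexdigest()

# Same URL, different session cookies: the keys differ,
# so each client would get its own (useless) cache entry.
k_alice = cache_key("/index.html", cookie="session=alice")
k_bob = cache_key("/index.html", cookie="session=bob")

# After stripping the cookie (what `unset req.http.Cookie` achieves),
# every client computes the same key and shares one cached object.
k1 = cache_key("/index.html")
k2 = cache_key("/index.html")
```

This is exactly why the VCL below strips cookies on all URLs except /login and /admin before lookup.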
@@For example, keep cookies only for login- and admin-related URLs and drop them on every other page
Edit /etc/varnish/default.vcl and add the following to the corresponding subroutine:
sub vcl_recv {
    if (!(req.url ~ "(?i)^/(login|admin)")) {
        unset req.http.Cookie;
    }
}
Note: if the request URL does not begin with /login or /admin, the Cookie header is removed from the request, which raises the hit ratio, since requests carrying cookies are very hard to serve from cache
sub vcl_backend_response {
    if (!(bereq.url ~ "(?i)^/(login|admin)")) {
        unset beresp.http.Set-Cookie;
    }
}
Note: for backend responses whose request URL does not begin with /login or /admin, the Set-Cookie header is removed, so no cookie is set for such content.
@@For specific resource types such as images, css, and js, strip the Set-Cookie header and force a cache TTL in varnish
Edit /etc/varnish/default.vcl and add the following to the corresponding subroutine:
sub vcl_backend_response {
    if (beresp.http.cache-control !~ "s-maxage") {
        if (bereq.url ~ "(?i)\.(jpg|png|gif|jpeg)$") {
            unset beresp.http.Set-Cookie;
            set beresp.ttl = 7200s;
        }
        if (bereq.url ~ "(?i)\.(css|js)$") {
            unset beresp.http.Set-Cookie;
            set beresp.ttl = 3600s;
        }
    }
}
Note: when the backend response's cache-control header does not match s-maxage (i.e. no shared-cache lifetime was set) and the backend request URL ends in an image extension ((?i) makes the match case-insensitive), the cached object's TTL is set to 7200 seconds and the Set-Cookie header is removed before caching
Note: likewise, when the backend request URL ends in .css or .js, the TTL is set to 3600 seconds and the Set-Cookie header is removed before caching
8) Restrict the PURGE method with an access-control list based on the client's source IP
@@Edit /etc/varnish/default.vcl as follows:
acl testip {
    "192.168.2.200";
}
# Note: this defines an ACL named testip containing the IP 192.168.2.200. The address must be double-quoted or the VCL will not compile. To allow a whole subnet, write: "192.168.1.0"/24
sub vcl_purge {
    return(synth(200, "Purged"));
}
sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.
    if (req.method == "PURGE") {
        if (client.ip ~ testip) {
            return(purge);
        } else {
            return(synth(405, "GET OUT Method is not allowed"));
        }
    }
}
# Note: if the client source IP matches the testip ACL, the request is handled by return(purge); otherwise return(synth) answers with a 405 status code and the message
@@In the varnish CLI, load and compile the modified configuration as test4 and activate it
[root@node01 ~]# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
200
-----------------------------
Varnish Cache CLI 1.0
-----------------------------
Linux,3.10.0-327.el7.x86_64,x86_64,-smalloc,-smalloc,-hcritbit
varnish-4.0.4 revision 386f712

Type 'help' for command list.
Type 'quit' to close CLI session.

vcl.load test4 default.vcl
200
VCL compiled.
vcl.use test4
200
VCL 'test4' now active
@@From client 192.168.2.200, send a PURGE request to 192.168.2.5 and observe the error message. (Note that the request reaches varnish through the nginx proxy, so the client.ip varnish sees is nginx's internal address rather than 192.168.2.200; that is why even this whitelisted client is rejected below.)
[root@gitlab ~]# curl -X PURGE http://192.168.2.5/index.html
<!DOCTYPE html>
<html>
  <head>
    <title>405 GET OUT Method is not allowed</title>
  </head>
  <body>
    <h1>Error 405 GET OUT Method is not allowed</h1>    # rejected, with the custom message returned
    <p>GET OUT Method is not allowed</p>
    <h3>Guru Meditation:</h3>
    <p>XID: 32809</p>
    <hr>
    <p>Varnish cache server</p>
  </body>
</html>
@@From client 192.168.2.6, access 192.168.2.5 and confirm that ordinary requests still hit the cache and are not restricted
[root@moni ~]# curl -I http://192.168.2.5/index.html
HTTP/1.1 200 OK
Server: nginx/1.10.2
Date: Thu, 19 Jan 2017 06:24:20 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 98
Connection: keep-alive
Last-Modified: Thu, 19 Jan 2017 01:30:56 GMT
ETag: "62-546687c1a0225"
X-Varnish: 32814 32812
Age: 4
Via: 1.1 varnish-v4
X-Cached: hit 192.168.2.14
9) Load-balance client requests across several backend hosts: define the backends as a group and, when a condition matches, dispatch requests to that group. In the vcl_init subroutine, new a director (host group) and add the defined backends to it.
Definition template:
import directors;    # the directors module must be imported

backend server1 {
    .host =
    .port =
}
backend server2 {
    .host =
    .port =
}
sub vcl_init {
    new GROUP_NAME = directors.round_robin();
    GROUP_NAME.add_backend(server1);
    GROUP_NAME.add_backend(server2);
}
sub vcl_recv {
    # send all traffic to the bar director:
    set req.backend_hint = GROUP_NAME.backend();
}
@@Practice: when a client requests a .txt resource, the cache server dispatches to the virtual hosts on web01 (192.168.2.101) and web02 (192.168.2.102); when it requests a .html resource, the cache server dispatches to the virtual hosts on web01 (192.168.2.15) and web02 (192.168.2.17)
Note: the requested resources must not be cached during this experiment; once cached, varnish answers directly and the scheduling cannot be observed
Edit /etc/varnish/default.vcl as follows:
import directors;    # the directors module must be imported to use host groups

# Backend host definitions
backend html1 {
    .host = "192.168.2.15";
    .port = "80";
}
backend html2 {
    .host = "192.168.2.17";
    .port = "80";
}
backend txt1 {
    .host = "192.168.2.101";
    .port = "80";
}
backend txt2 {
    .host = "192.168.2.102";
    .port = "80";
}

# Define vcl_init: create the testsvr1 and txtsvr2 groups and add the hosts to them
sub vcl_init {
    new testsvr1 = directors.round_robin();    # round-robin scheduling
    testsvr1.add_backend(html1);
    testsvr1.add_backend(html2);
    new txtsvr2 = directors.random();          # weighted random scheduling
    txtsvr2.add_backend(txt1, 2);
    txtsvr2.add_backend(txt2, 1);
}

sub vcl_recv {
    # Happens before we check if we have this in cache already.
    #
    # Typically you clean up the request here, removing cookies you don't need,
    # rewriting the request, etc.
    #
    #if (req.url ~ "(?i)^/(admin|login)" || req.url ~ "(?i)\.php$" || req.http.user-agent ~ "curl") {
    #    return(pass);
    #}
    if (req.url ~ "(?i)\.html$") {    # .html requests go to the testsvr1 group
        set req.backend_hint = testsvr1.backend();
    }
    if (req.url ~ "(?i)\.txt$") {     # .txt requests go to the txtsvr2 group
        set req.backend_hint = txtsvr2.backend();
    }
}
On nodes web01 and web02, create index.txt under /var/www/test2 (index.html under /var/www/test1 was already created above), with the following contents:
[root@node02 test2]# cat index.txt
web01 192.168.2.101 test2.txt!
[root@node03 test2]# cat index.txt
web02 192.168.2.102 test2.txt!
From client 192.168.2.6, access 192.168.2.5 and check that round-robin and the URL-based dispatch work correctly
[root@moni ~]# curl http://192.168.2.5/index.html
<HTML>
<BODY>
<CENTER><H1>web01 192.168.2.15 test1.html!</H1></CENTER>
</BODY>
</HTML>
[root@moni ~]# curl http://192.168.2.5/index.html
<HTML>
<BODY>
<CENTER><H1>web02 192.168.2.17 test2.html!</H1></CENTER>
</BODY>
</HTML>
[root@moni ~]# curl http://192.168.2.5/index.txt
web02 192.168.2.102 test2.txt!
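The two director policies used above can be sketched in Python. These helpers are hypothetical models for illustration, not varnish internals: round_robin cycles through the backends in order, while the weighted random director picks txt1 (weight 2) about twice as often as txt2 (weight 1).

```python
import itertools
import random

def round_robin(backends):
    """Cycle through backends in fixed order, like directors.round_robin()."""
    return itertools.cycle(backends)

def weighted_pick(weighted_backends, rng=random):
    """Pick one backend with probability proportional to its weight,
    like directors.random() with add_backend(host, weight)."""
    hosts = [h for h, _ in weighted_backends]
    weights = [w for _, w in weighted_backends]
    return rng.choices(hosts, weights=weights, k=1)[0]

# .html requests alternate between the two backends: .15, .17, .15, .17, ...
rr = round_robin(["192.168.2.15", "192.168.2.17"])
picks = [next(rr) for _ in range(4)]

# .txt requests favor txt1 roughly 2:1 over txt2.
sample = [weighted_pick([("192.168.2.101", 2), ("192.168.2.102", 1)])
          for _ in range(3000)]
```

This matches the curl output above: consecutive index.html requests alternate between web01 and web02, while index.txt requests land on either .101 or .102 with a 2:1 bias.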
10) Varnish health checks: when a backend host fails, requests are no longer scheduled to it; once it recovers, it is scheduled again automatically.
Note: when defining a backend, use .probe to define its health-check mechanism; when a host is detected as offline it is automatically removed from service
A probe can also be defined as a standalone configuration block and then referenced by each backend
backend BE_NAME {
    .host =
    .port =
    .probe = {
        .url =
        .timeout =
        .interval =
        .window =
        .threshold =
    }
}
.probe: defines the health-check method;
.url: the URL requested during a check, default "/";
.request: the literal request to send (used when not probing via .url), e.g.
    .request =
        "GET /.healthtest.html HTTP/1.1"
        "Host: www.nwc.com"
        "Connection: close"
.window: how many of the most recent checks to consider when judging health;
.threshold: at least this many of the last .window checks must succeed for the backend to be considered healthy;
.interval: how often to probe;
.timeout: probe timeout;
.expected_response: the expected response code, default 200; used to define the expected response when probing via .request
Two ways to configure health checks:
(1) Define a standalone probe and reference it:
    probe PB_NAME = {
        ...
    }
    backend NAME = {
        .probe = PB_NAME;
        ...
    }
(2) Define the probe inline in the backend:
    backend NAME {
        .probe = {
            ...
        }
    }
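The .window/.threshold semantics above can be modeled as: keep the results of the last .window probes, and consider the backend healthy only if at least .threshold of them succeeded. A minimal Python sketch of that idea (a simplified illustration, not varnish's implementation; the default values chosen here are assumptions):

```python
from collections import deque

class ProbeState:
    """Track the last `window` probe results; the backend is healthy when
    at least `threshold` of them succeeded (a model of .window/.threshold)."""
    def __init__(self, window=8, threshold=3):
        self.results = deque(maxlen=window)  # old results fall off automatically
        self.threshold = threshold

    def record(self, ok):
        """Record one probe result (True = probe succeeded)."""
        self.results.append(ok)

    def healthy(self):
        return sum(self.results) >= self.threshold

probe = ProbeState(window=8, threshold=3)
for _ in range(3):
    probe.record(True)   # three successful checks in a row -> healthy
# A single later failure would not mark the backend sick immediately,
# because health is judged over the whole window, not one probe.
```

This is why a backend drops out only after repeated failures and comes back automatically once enough probes succeed again.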
III. Varnish Performance-Tuning Parameters
1. Common parameters
thread_pool_max 5000 [threads]: maximum number of threads per pool; going above 5000 is not recommended
thread_pool_min 100 [threads]: minimum number of threads per pool (which also serves as the maximum number of idle threads)
thread_pool_stack 48k [bytes]: stack size of each thread
thread_pool_timeout 300.000 [seconds]: how long a thread may stay idle before it is destroyed; the pool size floats dynamically between the minimum and maximum
thread_pools [pools]: number of thread pools, 2 by default; best kept at or below the number of CPU cores
Note: maximum concurrent connections = thread_pools * thread_pool_max
thread_queue_limit 20 [requests]: maximum length of the wait queue per pool
thread_stats_rate 10 [requests]: maximum number of requests a thread handles before flushing its statistics to the shared log area in one batch
workspace_thread 2k [bytes]: extra per-thread workspace dedicated to request processing
thread_pool_add_delay: delay before creating a new thread; instead of spawning immediately, varnish waits briefly in case an existing thread becomes idle, avoiding an unnecessary thread
thread_pool_destroy_delay: delay before destroying an idle thread; instead of destroying immediately, varnish waits briefly in case a new request arrives that would otherwise force a fresh thread to be created
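The concurrency formula in the list above (maximum concurrent connections = thread_pools * thread_pool_max) can be checked with a quick calculation, using the default values quoted in the list purely as an example:

```python
# Defaults quoted above: 2 thread pools, up to 5000 threads per pool.
thread_pools = 2
thread_pool_max = 5000

# Each in-flight request occupies one worker thread, so the ceiling on
# concurrent connections is pools * threads-per-pool.
max_concurrency = thread_pools * thread_pool_max
print(max_concurrency)  # 10000
```

In other words, with the defaults a single varnish instance tops out at 10000 concurrent connections; raising either parameter raises the ceiling proportionally.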
2. Two ways to tune varnish parameters on the fly, without restarting the varnish service:
1) Set them interactively in the varnishadm CLI
[root@node01 ~]# varnishadm -S /etc/varnish/secret -T 127.0.0.1:6082
param.show                        # show the current parameter values
param.set thread_pool_max 1024    # set the per-pool thread maximum
200

param.show thread_pool_max
200
thread_pool_max
        Value is: 1024 [threads]
        Default is: 5000
        Minimum is: 100
2) Edit the configuration file /etc/varnish/varnish.params
Note: if varnish is already running, parameters changed in this file only take effect after a restart, and restarting varnish flushes the entire cache, so be careful. For this reason, prefer the dynamic method above.
Adjust the following parameters to fit your actual workload:
DAEMON_OPTS="-p thread_pool_min=5 -p thread_pool_max=500 -p thread_pool_timeout=300"
This article is from the "一万小时定律" blog; please keep this attribution: http://daisywei.blog.51cto.com/7837970/1893228