Deploying an OpenStack Private Cloud for Small and Medium-Sized Enterprises [4.2 Upper-Layer Proxy: haproxy + nginx Configuration (Office Network Test Environment)]
Continuing from the previous section.
At first I used haproxy for this as well, but I later changed the approach: the high-spec physical controller machines were being somewhat wasted, and I needed a highly available pair of upper-layer nginx proxies to handle port-80 access for other domains. Many office-network test domains resolve to the IP 58.251.17.238, and they are all served through the nginx on this pair of controllers.
Test environment: haproxy + nginx
Therefore, I needed to split port 80, which the dashboard occupied, out of haproxy.
Install haproxy on both the active and standby controller nodes:
yum install -y haproxy
Create the directories:
mkdir -p /home/haproxy/log && mkdir -p /home/haproxy/run/
Grant directory ownership:
chown -R haproxy:haproxy /home/haproxy
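These directories back the chroot and pidfile settings in the haproxy.cfg below; as an optional quick check that the ownership took effect:
ls -ld /home/haproxy /home/haproxy/log /home/haproxy/run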
Configuration example on controller1:
[root@controller1 ~]# vi /etc/haproxy/haproxy.cfg
# Global configuration
global
chroot /home/haproxy/log
daemon
group haproxy
maxconn 20000
pidfile /home/haproxy/run/haproxy.pid
user haproxy
defaults
log global
maxconn 20000
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
# Dashboard UI: commented out here and moved into nginx.
#listen dashboard_cluster_80
# bind 10.40.42.10:80
# balance source
# option tcpka
# option httpchk
# option tcplog
# server controller1 10.40.42.1:80 check inter 2000 rise 2 fall 5
# server controller2 10.40.42.2:80 check inter 2000 rise 2 fall 5
# Database cluster access: only controller1 is accessed; if it fails, manually uncomment the backup line to reach kxcontroller2 over the public-network VPN, or build another database server to join the cluster.
listen galera_cluster_3306
bind 10.40.42.10:3306
mode tcp
balance source
option tcpka
option httpchk
server controller1 10.40.42.1:3306 check port 9200 inter 2000 rise 2 fall 5
# server kxcontroller2 10.120.42.2:3306 backup check port 9200 inter 2000 rise 2 fall 5
# RabbitMQ queue access: only one node is used; while the VIP is on controller1, only the rabbitmq on controller1 is accessed.
listen rabbitmq_cluster_5672
bind 10.40.42.10:5672
mode tcp
balance roundrobin
server controller1 10.40.42.1:5672 check inter 2000 rise 2 fall 5
# server controller2 10.40.42.2:5672 check inter 2000 rise 2 fall 5
# Glance API access: only one node is used; regardless of which node holds the VIP, only the Glance API on controller2 is accessed. controller1 syncs image files to controller2 every day in the early hours; if controller2 fails, manually cold-switch to controller1.
listen glance_api_cluster_9292
bind 10.40.42.10:9292
balance source
option tcpka
option httpchk
option tcplog
# server controller1 10.40.42.1:9292 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:9292 check inter 2000 rise 2 fall 5
# Glance registry access: only one node is used; regardless of which node holds the VIP, only the Glance registry on controller2 is accessed. controller1 syncs image files to controller2 every day in the early hours; if controller2 fails, manually cold-switch to controller1.
listen glance_registry_cluster_9191
bind 10.40.42.10:9191
balance source
option tcpka
option tcplog
# server controller1 10.40.42.1:9191 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:9191 check inter 2000 rise 2 fall 5
# Keystone admin (35357) access: only one node is used; while the VIP is on controller1, requests go to the keystone 35357 on controller1.
listen keystone_admin_cluster_35357
bind 10.40.42.10:35357
balance source
option tcpka
option httpchk
option tcplog
server controller1 10.40.42.1:35357 check inter 2000 rise 2 fall 5
# server controller2 10.40.42.2:35357 check inter 2000 rise 2 fall 5
# Keystone public/internal (5000) access: only one node is used; while the VIP is on controller1, only the keystone 5000 on controller1 is accessed.
listen keystone_public_internal_cluster_5000
bind 10.40.42.10:5000
balance source
option tcpka
option httpchk
option tcplog
server controller1 10.40.42.1:5000 check inter 2000 rise 2 fall 5
# server controller2 10.40.42.2:5000 check inter 2000 rise 2 fall 5
# Nova API access: regardless of which node holds the VIP, this service is load balanced across both.
listen nova_compute_api_cluster_8774
bind 10.40.42.10:8774
balance source
option tcpka
option httpchk
option tcplog
server controller1 10.40.42.1:8774 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:8774 check inter 2000 rise 2 fall 5
# Nova metadata API access: regardless of which node holds the VIP, this service is load balanced across both.
listen nova_metadata_api_cluster_8775
bind 10.40.42.10:8775
balance source
option tcpka
option tcplog
server controller1 10.40.42.1:8775 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:8775 check inter 2000 rise 2 fall 5
# Cinder block storage access: enabled on controller1 for testing and pointed only at controller1, because an iSCSI network disk is attached to controller1 and added into cinder; it has not been reclaimed yet.
listen cinder_api_cluster_8776
bind 10.40.42.10:8776
balance source
option tcpka
option httpchk
option tcplog
server controller1 10.40.42.1:8776 check inter 2000 rise 2 fall 5
# server controller2 10.40.42.2:8776 check inter 2000 rise 2 fall 5
# Ceilometer access: the VIP frontend is enabled, but the backend service is not running yet; kept here as a placeholder.
listen ceilometer_api_cluster_8777
bind 10.40.42.10:8777
balance source
option tcpka
option tcplog
server controller1 10.40.42.1:8777 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:8777 check inter 2000 rise 2 fall 5
# Nova VNC proxy access: regardless of which node holds the VIP, this backend service is load balanced across both.
listen nova_vncproxy_cluster_6080
bind 10.40.42.10:6080
balance source
option tcpka
option tcplog
server controller1 10.40.42.1:6080 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:6080 check inter 2000 rise 2 fall 5
# Neutron API access: regardless of which node holds the VIP, this backend service is load balanced across both.
listen neutron_api_cluster_9696
bind 10.40.42.10:9696
balance source
option tcpka
option httpchk
option tcplog
server controller1 10.40.42.1:9696 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:9696 check inter 2000 rise 2 fall 5
# Swift proxy (object storage) access: the VIP frontend is enabled, but the backend service is not running yet; kept here as a placeholder.
listen swift_proxy_cluster_8080
bind 10.40.42.10:8080
balance source
option tcplog
option tcpka
server controller1 10.40.42.1:8080 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:8080 check inter 2000 rise 2 fall 5
# Read-only stats page for ordinary users: http://10.40.42.10:8888/stats, username and password admin:admin
listen admin_stats
bind 0.0.0.0:8888
option httplog
mode http
stats refresh 30s
stats uri /stats
stats realm Haproxy Manager
stats auth admin:admin
# Admin stats page with management actions enabled: http://10.40.42.10:8008/admin-venic, username and password venic:venic8888
listen stats_auth
bind 0.0.0.0:8008
mode http
stats enable
stats uri /admin-venic
stats auth venic:venic8888
stats admin if TRUE
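Before starting the service, the file can be syntax-checked; a quick sanity check, assuming the default config path used above:
haproxy -c -f /etc/haproxy/haproxy.cfg
The configuration on controller2 mirrors the one above, differing mainly in which backend server lines are active: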
[root@controller2 ~]# vi /etc/haproxy/haproxy.cfg
# Global configuration
global
chroot /home/haproxy/log
daemon
group haproxy
maxconn 20000
pidfile /home/haproxy/run/haproxy.pid
user haproxy
defaults
log global
maxconn 20000
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout check 10s
# Dashboard UI: commented out here and moved into nginx.
#listen dashboard_cluster_80
# bind 10.40.42.10:80
# balance source
# option tcpka
# option httpchk
# option tcplog
# server controller1 10.40.42.1:80 check inter 2000 rise 2 fall 5
# server controller2 10.40.42.2:80 check inter 2000 rise 2 fall 5
# Database cluster access: only controller1 is accessed; if it fails, manually uncomment the backup line to reach kxcontroller2 over the public-network VPN, or build another database server to join the cluster.
listen galera_cluster_3306
bind 10.40.42.10:3306
mode tcp
balance source
option tcpka
option httpchk
server controller1 10.40.42.1:3306 check port 9200 inter 2000 rise 2 fall 5
# server kxcontroller2 10.120.42.2:3306 backup check port 9200 inter 2000 rise 2 fall 5
# RabbitMQ queue access: only one node is used; while the VIP is on controller2, only the rabbitmq on controller2 is accessed.
listen rabbitmq_cluster_5672
bind 10.40.42.10:5672
mode tcp
balance roundrobin
# server controller1 10.40.42.1:5672 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:5672 check inter 2000 rise 2 fall 5
# Glance API access: only one node is used; regardless of which node holds the VIP, only the Glance API on controller2 is accessed. controller1 syncs image files to controller2 every day in the early hours; if controller2 fails, manually cold-switch to controller1.
listen glance_api_cluster_9292
bind 10.40.42.10:9292
balance source
option tcpka
option httpchk
option tcplog
# server controller1 10.40.42.1:9292 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:9292 check inter 2000 rise 2 fall 5
# Glance registry access: only one node is used; regardless of which node holds the VIP, only the Glance registry on controller2 is accessed. controller1 syncs image files to controller2 every day in the early hours; if controller2 fails, manually cold-switch to controller1.
listen glance_registry_cluster_9191
bind 10.40.42.10:9191
balance source
option tcpka
option tcplog
# server controller1 10.40.42.1:9191 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:9191 check inter 2000 rise 2 fall 5
# Keystone admin (35357) access: only one node is used; while the VIP is on controller2, requests go to the keystone 35357 on controller2.
listen keystone_admin_cluster_35357
bind 10.40.42.10:35357
balance source
option tcpka
option httpchk
option tcplog
# server controller1 10.40.42.1:35357 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:35357 check inter 2000 rise 2 fall 5
# Keystone public/internal (5000) access: only one node is used; while the VIP is on controller2, requests go to the keystone 5000 on controller2.
listen keystone_public_internal_cluster_5000
bind 10.40.42.10:5000
balance source
option tcpka
option httpchk
option tcplog
# server controller1 10.40.42.1:5000 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:5000 check inter 2000 rise 2 fall 5
# Nova API access: regardless of which node holds the VIP, this service is load balanced across both.
listen nova_compute_api_cluster_8774
bind 10.40.42.10:8774
balance source
option tcpka
option httpchk
option tcplog
server controller1 10.40.42.1:8774 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:8774 check inter 2000 rise 2 fall 5
# Nova metadata API access: regardless of which node holds the VIP, this service is load balanced across both.
listen nova_metadata_api_cluster_8775
bind 10.40.42.10:8775
balance source
option tcpka
option tcplog
server controller1 10.40.42.1:8775 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:8775 check inter 2000 rise 2 fall 5
# Cinder block storage access: enabled on controller1 for testing and pointed only at controller1, because an iSCSI network disk is attached to controller1 and added into cinder; it has not been reclaimed yet.
listen cinder_api_cluster_8776
bind 10.40.42.10:8776
balance source
option tcpka
option httpchk
option tcplog
server controller1 10.40.42.1:8776 check inter 2000 rise 2 fall 5
# server controller2 10.40.42.2:8776 check inter 2000 rise 2 fall 5
# Ceilometer access: the VIP frontend is enabled, but the backend service is not running yet; kept here as a placeholder.
listen ceilometer_api_cluster_8777
bind 10.40.42.10:8777
balance source
option tcpka
option tcplog
server controller1 10.40.42.1:8777 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:8777 check inter 2000 rise 2 fall 5
# Nova VNC proxy access: regardless of which node holds the VIP, this backend service is load balanced across both.
listen nova_vncproxy_cluster_6080
bind 10.40.42.10:6080
balance source
option tcpka
option tcplog
server controller1 10.40.42.1:6080 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:6080 check inter 2000 rise 2 fall 5
# Neutron API access: regardless of which node holds the VIP, this backend service is load balanced across both.
listen neutron_api_cluster_9696
bind 10.40.42.10:9696
balance source
option tcpka
option httpchk
option tcplog
server controller1 10.40.42.1:9696 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:9696 check inter 2000 rise 2 fall 5
# Swift proxy (object storage) access: the VIP frontend is enabled, but the backend service is not running yet; kept here as a placeholder.
listen swift_proxy_cluster_8080
bind 10.40.42.10:8080
balance source
option tcplog
option tcpka
server controller1 10.40.42.1:8080 check inter 2000 rise 2 fall 5
server controller2 10.40.42.2:8080 check inter 2000 rise 2 fall 5
# Read-only stats page for ordinary users: http://10.40.42.10:8888/stats, username and password admin:admin
listen admin_stats
bind 0.0.0.0:8888
option httplog
mode http
stats refresh 30s
stats uri /stats
stats realm Haproxy Manager
stats auth admin:admin
# Admin stats page with management actions enabled: http://10.40.42.10:8008/admin-venic, username and password venic:venic8888
listen stats_auth
bind 0.0.0.0:8008
mode http
stats enable
stats uri /admin-venic
stats auth venic:venic8888
stats admin if TRUE
Once both office-network active and standby controller nodes are configured, start and enable haproxy:
systemctl start haproxy.service
systemctl enable haproxy.service
Check whether the monitoring pages respond, to confirm that haproxy is working properly:
http://10.40.42.10:8888/stats
http://10.40.42.10:8008/admin-venic
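If both pages load, haproxy is up and serving its stats frontends; an optional command-line check, reusing the admin:admin credentials defined in the config:
curl -u admin:admin http://10.40.42.10:8888/stats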
As in the previous section, when the services start you will find that haproxy fails to start on the standby node that does not currently hold the VIP. The reason is as follows.
haproxy reports an error at startup:
[ALERT] 164/1100300 (11606) : Starting proxy linuxyw.com: cannot bind socket
Before fixing it, run netstat -anp | grep haproxy on the active and standby nodes to check whether all the VIP ports are being listened on.
The root cause is that haproxy on that node has not obtained the VIP, while the configuration file binds to that VIP address, which does not currently exist on the node; hence the error above.
Of course, the haproxy service must be started ahead of time; if it had to be started manually only after a failure occurred, the setup would no longer be highly available.
Solution:
Modify the kernel parameter on both controllers:
vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind=1
Save the file and apply the kernel parameter:
sysctl -p
After this, haproxy starts normally.
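As a quick check that the parameter is in effect on both nodes (the expected output is net.ipv4.ip_nonlocal_bind = 1) before restarting the service:
sysctl net.ipv4.ip_nonlocal_bind
systemctl restart haproxy.service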
Next, configure nginx 1.9 on both nodes to front dashboard access.
The configuration steps are the same on both:
yum install -y gcc gcc-c++ pcre pcre-devel openssl openssl-devel
nginx can then be installed with yum or from a source package, but it must be version 1.9 or later to support TCP load balancing.
This environment installs it from source.
Upload the nginx 1.9 source package via SCP to the /home/ directory on the target controller, as in the example below.
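For example, from the machine holding the package (the filename matches the tarball unpacked below; the target host is whichever controller you are configuring):
scp nginx-1.9.12.tar.gz root@controller1:/home/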
Create the directory where the program will be installed shortly:
mkdir /home/local/nginx1.9 -p
Unpack the source package under /home/:
tar -zxvf nginx-1.9.12.tar.gz
cd nginx-1.9.12/
./configure --prefix=/home/local/nginx1.9 --with-http_stub_status_module --with-http_ssl_module --with-http_gzip_static_module --with-stream
make && make install
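To confirm the binary was built with the stream module required for TCP load balancing, list the compiled-in configure arguments:
/home/local/nginx1.9/sbin/nginx -V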
cd /home/local/nginx1.9/conf
Create a subdirectory for easier management later:
mkdir conf.d
Edit the default nginx.conf configuration file:
vi nginx.conf
Clear the existing contents and replace them with the following global settings, so that at startup nginx pulls in the *.conf files from the directories included below:
user nobody;
worker_processes auto;
events {
worker_connections 102400;
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
include /home/local/nginx1.9/conf/conf.d/http/*.conf;
}
stream {
proxy_timeout 1d;
proxy_connect_timeout 30;
include /home/local/nginx1.9/conf/conf.d/tcp/*.conf;
}
mkdir /home/local/nginx1.9/conf/conf.d/http
mkdir /home/local/nginx1.9/conf/conf.d/tcp
Do the same on controller1 and controller2:
cd /home/local/nginx1.9/conf/conf.d/http/
vi 80_controller_10.40.42.1_2_80.conf
upstream controller {
server 10.40.42.1:80;
server 10.40.42.2:80 backup;
}
server {
listen 10.40.42.10:80;
server_name controller;
location / {
proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://controller;
}
}
Save and exit.
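The conf.d/tcp directory created earlier is not used in this section; purely as a hypothetical illustration of the stream include (the upstream name, file name, and listen port below are made up and not part of this deployment), a TCP proxy file under /home/local/nginx1.9/conf/conf.d/tcp/ could look like:
upstream example_tcp_backend {
server 10.40.42.1:5672;
server 10.40.42.2:5672 backup;
}
server {
listen 10.40.42.10:15672;
proxy_pass example_tcp_backend;
}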
Test the configuration:
/home/local/nginx1.9/sbin/nginx -t
Start nginx:
/home/local/nginx1.9/sbin/nginx
Add it to boot-time startup:
chmod +x /etc/rc.d/rc.local
vi /etc/rc.d/rc.local
Append at the end:
/home/local/nginx1.9/sbin/nginx
On your own workstation, add the entry 10.40.42.10 controller to the hosts file.
Test whether the link is reachable: http://controller/dashboard
Since the dashboard has not been installed yet, this link currently returns a 404.
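A minimal end-to-end check from the same machine, assuming the hosts entry above is in place:
curl -I http://controller/dashboard
Until Horizon is installed this should return the 404 mentioned above; once the dashboard is deployed, the same request should return the login page or a redirect to it.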