Building a high-availability cluster in the DR model with keepalived's VRRP implementation
Environment and configuration prerequisites
All four virtual servers run CentOS 6.8.
Host 1, IP 192.168.25.140: backend RS1, serving web content to the outside on port 80
Host 2, IP 192.168.25.141: backend RS2, serving web content to the outside on port 80
IP 192.168.25.142: the VIP, bound to the lo:0 interface of each RS; port 80 is defined as the cluster service port
Note: the two RS hosts of the DR model must have their kernel ARP parameters configured first
Host 3, IP 192.168.25.138: keepalived node 1, the MASTER, acting as the director; 192.168.25.142 is the VIP and port 80 is the cluster service port
Host 4, IP 192.168.25.139: keepalived node 2, the BACKUP, acting as the director; 192.168.25.142 is the VIP and port 80 is the cluster service port
1. Install a web server on the two backend RS hosts (httpd is used here), create a test page on each, configure the kernel ARP parameters on both RS hosts, and assign the VIP to the lo:0 interface
]# yum install httpd -y
]# vim /var/www/html/index.html    # test page: <h1>RS 1</h1> on RS1, <h1>RS 2</h1> on RS2
]# vim set_arp.sh                  # a script that sets the kernel parameters and the VIP on the lo interface
#!/bin/bash
#
vip='192.168.25.142'          # VIP address
vport='80'                    # VIP port
netmask='255.255.255.255'     # VIP netmask
iface='lo:0'                  # VIP interface

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $iface $vip netmask $netmask broadcast $vip up    # bind the VIP to lo:0
    route add -host $vip dev $iface                            # add a host route for the VIP
    ;;
stop)
    ifconfig $iface down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
esac
]# chmod +x set_arp.sh
]# ./set_arp.sh start
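To confirm that the ARP parameters and the VIP took effect on each RS, a quick check such as the following can be run (illustrative verification commands, not part of the original steps):
]# cat /proc/sys/net/ipv4/conf/all/arp_ignore      # expected: 1
]# cat /proc/sys/net/ipv4/conf/all/arp_announce    # expected: 2
]# ifconfig lo:0                                   # should show 192.168.25.142 with a /32 mask
]# route -n | grep 192.168.25.142                  # host route for the VIP via lo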
2. Edit the keepalived configuration file on both keepalived hosts (this first variant defines two VRRP instances per node; a single-master variant is shown further below)
]# yum install keepalived -y          # install keepalived
]# cp -a keepalived.conf{,.bak}       # back up the original /etc/keepalived/keepalived.conf first
]# vim keepalived.conf                # edit the configuration file
]# openssl rand -hex 4                # generate an 8-character hex password for VRRP authentication
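The configurations below call /etc/keepalived/notify.sh on every state transition, but that script is not reproduced in this article. A minimal sketch of such a notification script might look like this (the mail recipient and wording are assumptions); it must be executable (chmod +x) and present on both nodes:
#!/bin/bash
# /etc/keepalived/notify.sh -- mail a notice when the VRRP state changes (illustrative sketch)
contact='root@localhost'

notify() {
    local subject="$(hostname) changed to be $1, VIP floating"
    local body="$(date +'%F %T'): VRRP transition, $(hostname) changed to be $1"
    echo "$body" | mail -s "$subject" $contact
}

case $1 in
master) notify master ;;
backup) notify backup ;;
fault)  notify fault  ;;
*)      echo "Usage: $(basename $0) {master|backup|fault}"; exit 1 ;;
esac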
2.1 Node 1 configuration is as follows
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 1
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass cb77b8da
}
virtual_ipaddress {
192.168.25.142/32 dev eth0   # put the VIP of the DR setup here
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
state BACKUP
interface eth0
virtual_router_id 2
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass 020c3694
}
virtual_ipaddress {
192.168.25.142/32 dev eth0   # put the VIP of the DR setup here
}
}
virtual_server 192.168.25.142 80 {   # the VIP of the DR setup
delay_loop 3
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.25.140 80 {   # address of RS1
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.25.141 80 {   # address of RS2
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
}
2.2 Node 2 configuration is as follows
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from root@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node2
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 1
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass cb77b8da
}
virtual_ipaddress {
192.168.25.142/32 dev eth0   # put the VIP of the DR setup here
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 2
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 020c3694
}
virtual_ipaddress {
192.168.25.142/32 dev eth0   # put the VIP of the DR setup here
}
}
virtual_server 192.168.25.142 80 {   # the VIP of the DR setup
delay_loop 3
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.25.140 80 {   # address of RS1
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.25.141 80 {   # address of RS2
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
}
3. Access test
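Before testing, keepalived has to be started on both director nodes; this step is implied by the original but not shown. The generated IPVS rules and the current VIP placement can then be inspected (ipvsadm is installed only for inspection, keepalived itself does not need it):
]# service keepalived start                    # run on node 1 and node 2
]# yum install ipvsadm -y                      # optional, only used to view the rules
]# ipvsadm -Ln                                 # should list 192.168.25.142:80 with both real servers
]# ip addr show eth0 | grep 192.168.25.142     # shows whether this node currently holds the VIP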
curl http://192.168.25.142
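With the rr scheduler, repeated requests should alternate between the two test pages, roughly like this (illustrative output):
]# curl http://192.168.25.142
<h1>RS 1</h1>
]# curl http://192.168.25.142
<h1>RS 2</h1>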

For the same environment, a single-master configuration is as follows:
Node 1 configuration
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 1
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass cb77b8da
}
virtual_ipaddress {
192.168.25.142/32 dev eth0   # put the VIP of the DR setup here
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 192.168.25.142 80 {   # the VIP of the DR setup
delay_loop 3
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.25.140 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.25.141 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
}
Node 2 configuration
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from root@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node2
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 1
priority 99
advert_int 1
authentication {
auth_type PASS
auth_pass cb77b8da
}
virtual_ipaddress {
192.168.25.142/32 dev eth0
}
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 192.168.25.142 80 {
delay_loop 3
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.25.140 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
real_server 192.168.25.141 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 1
nb_get_retry 3
delay_before_retry 1
}
}
}
Test
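The test itself is not written out here; a typical way to exercise the single-master failover is the following (illustrative sequence):
]# curl http://192.168.25.142      # served while node 1 is MASTER
]# service keepalived stop         # run on node 1 to simulate a failure
]# curl http://192.168.25.142      # node 2 takes over the VIP, requests still succeed
]# service keepalived start        # node 1 preempts and becomes MASTER again (preemption is on by default)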

Adding a sorry_server to both setups
Prerequisite: a web server has to be installed on the keepalived hosts themselves, so that the sorry_server runs locally on each keepalived host
]# yum install httpd -y            # install the web server
]# vim /var/www/html/index.html    # sorry_server page: <h1>This is sorry_server1</h1> on node 1, <h1>This is sorry_server2</h1> on node 2
]# service keepalived stop         # stop the keepalived service
]# vim keepalived.conf             # define the sorry_server: add "sorry_server 127.0.0.1 80" inside the virtual_server block, outside the real_server blocks
]# service keepalived start        # start the service again
]# service httpd start             # start the sorry_server service
]# curl http://192.168.25.142      # test with both RS hosts shut down; whichever keepalived node is currently MASTER answers the request
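For reference, the relevant part of the resulting virtual_server block would look roughly like this (the real_server bodies are unchanged and abbreviated here):
virtual_server 192.168.25.142 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80    # the local httpd answers when every real_server is down
    real_server 192.168.25.140 80 {
        ...
    }
    real_server 192.168.25.141 80 {
        ...
    }
}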


Note: if one keepalived host fails and only the other one remains online, scheduling still works normally.
Example: keepalived can call an external helper script to monitor a resource and dynamically adjust the node priority according to the monitoring result.
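The script block itself is not reproduced in this section. The usual pattern is a vrrp_script that checks for a file named down plus a track_script reference inside the vrrp_instance, roughly as sketched below (the interval and weight values are assumptions; a negative weight large enough to drop the master below the backup's priority is what triggers the switch):
vrrp_script chk_down {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1      # run the check every second
    weight -10      # subtract 10 from the priority while the check fails
}
vrrp_instance VI_1 {
    ...
    track_script {
        chk_down
    }
}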

]# service keepalived start       # start the service
]# touch /etc/keepalived/down     # creating a file named "down" in /etc/keepalived on the current master node lowers its priority and triggers a switchover
]# rm -f /etc/keepalived/down     # removing the file restores the priority, and the VIP moves back because preemption is enabled by default
Note: both node 1 and node 2 need this configuration.
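To watch the switchover, the VIP placement and the keepalived state transitions can be checked on both nodes (an illustrative check):
]# ip addr show eth0 | grep 192.168.25.142    # run on each node; the VIP appears on the current master
]# tail /var/log/messages                     # keepalived logs the MASTER/BACKUP transitions to the system log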
Original article by M20-1 Ma Xing. If reprinting, please credit the source: http://www.178linux.com/58161

