Keepalived High Availability
High-concurrency LVS
Problems LVS load balancing does not solve on its own:
- No health-check mechanism for the back-end real servers.
- No contingency for a single point of failure in the director itself; if it dies, the whole system goes down.
- Back-end overload: if a real server is bloated and hits compute or I/O bottlenecks, LVS can do nothing about it.
Keepalived
Keepalived's main purpose is to solve LVS's single-point-of-failure problem: HA (high availability) for the director itself, plus health checks for the real servers.
- A heartbeat mechanism is needed to probe whether each back-end RS is still serving. (Heartbeat: probe the service at a fixed interval; if it stops answering, remove that real server from the LVS table. Once the real server is repaired and the heartbeat comes back, add it back into the LVS service.)
- LVS-DR needs a master/backup pair (HA).
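The remove/re-add cycle described above can be sketched as a tiny script. This is only an illustration of the logic that keepalived's health checker automates: `probe` is a stand-in for the real HTTP check, and the `in_pool` array stands in for the ipvsadm table.

```shell
#!/bin/bash
# Toy model of the heartbeat: probe each RS, drop it from the pool on
# failure, re-add it once it answers again.
declare -A in_pool=([node02]=1 [node03]=1)

probe() {                 # stand-in check: pretend node03 has gone down
  [ "$1" != node03 ]
}

heartbeat_tick() {
  local rs
  for rs in node02 node03; do
    if probe "$rs"; then
      if [ "${in_pool[$rs]}" = 0 ]; then in_pool[$rs]=1; echo "re-add $rs"; fi
    else
      if [ "${in_pool[$rs]}" = 1 ]; then in_pool[$rs]=0; echo "remove $rs"; fi
    fi
  done
}

heartbeat_tick            # node03 fails its probe -> prints "remove node03"
```

In the real setup, keepalived runs this loop for you (`HTTP_GET` blocks in the config below) and issues the ipvsadm changes itself.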
The HA master/backup mechanism: when the master suffers a single point of failure, another server takes over its duties so the service never stops ("a country cannot go a day without a ruler"). With several backups, which one takes over? Two schemes are commonly described: (1) contention, where nodes race for the role, and (2) orderly hand-over by rank. Keepalived's VRRP election is priority-based: the highest-priority live node becomes master, and with preemption enabled (the default) a recovered higher-priority node reclaims the role.
VRRP protocol: the master director periodically multicasts an advertisement packet; as long as the backups keep receiving it, the master is considered healthy. If the advertisements stop arriving, the master is presumed dead and the backup with the highest priority takes over as master. When the original master is repaired, the stand-in hands the master role back to it (preemption).
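How long does the takeover take? VRRP (RFC 3768) has a backup declare the master dead after three missed advertisements plus a priority-dependent skew. A back-of-envelope calculation with the values used later in this walkthrough (advert_int 1, backup priority 50):

```shell
#!/bin/bash
# master_down_interval = 3 * advert_int + skew_time,
# skew_time = (256 - priority) / 256           (RFC 3768).
# Computed in centiseconds to stay within integer shell arithmetic.
advert_int=1      # seconds, matches "advert_int 1" in the config
priority=50       # the backup's priority (node04)

skew_cs=$(( (256 - priority) * 100 / 256 ))
down_cs=$(( 3 * advert_int * 100 + skew_cs ))
echo "${down_cs} cs"     # prints "380 cs", i.e. ~3.8 s until node04 takes over
```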
IP drift (the floating VIP):
In an LVS load-balancing setup there is normally a single public address and port; clients hit that one address, and the director dispatches requests to the real servers according to the scheduling algorithm. When the master director goes down and a backup takes over, that public IP (the VIP) "drifts" over to the new master.
Installation:
yum install keepalived
service keepalived start
Config file: /etc/keepalived/keepalived.conf
Architecture topology
Keepalived deployment
| Hostname | Role | keepalived |
|---|---|---|
| node01 | LVS (director) | √ |
| node02 | real server | |
| node03 | real server | |
| node04 | LVS (director) | √ |
Configuring node01
First remove the eth0:1 configuration:
[root@node01 ~]# ifconfig eth0:1 down
[root@node01 ~]# ipvsadm -ln   # inspect the current ipvsadm table, then clear it
[root@node01 ~]# ipvsadm -C
The node is now back to its initial state.
[root@node01 ~]# yum install keepalived
[root@node01 ~]# cd /etc/keepalived/
[root@node01 keepalived]# cp keepalived.conf keepalived.conf.bak
[root@node01 keepalived]# vi keepalived.conf
Key changes to make inside the file:
192.168.43.100/24 dev eth0 label eth0:2
virtual_server 192.168.43.100 80 {
lb_kind DR
persistence_timeout 0
real_server node02 80 {
HTTP_GET {
status_code 200   # 200 means the request succeeded
Delete the remaining url blocks and configure the second real server.
The finished node01 config:
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state MASTER   # this node starts as the master
interface eth0
virtual_router_id 51
priority 100   # higher priority wins the election
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.43.100/24 dev eth0 label eth0:2
}
}
virtual_server 192.168.43.100 80 {
delay_loop 6
lb_algo rr
lb_kind DR
nat_mask 255.255.255.0
persistence_timeout 0
protocol TCP
real_server 192.168.43.45 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.43.233 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@node01 keepalived]# scp ./keepalived.conf root@node04:`pwd`
node04's copy (state and priority changed):
! Configuration File for keepalived
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 192.168.200.1
smtp_connect_timeout 30
router_id LVS_DEVEL
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 51
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.43.100/24 dev eth0 label eth0:2
}
}
virtual_server 192.168.43.100 80 {
delay_loop 6
lb_algo rr
lb_kind DR
nat_mask 255.255.255.0
persistence_timeout 0
protocol TCP
real_server 192.168.43.45 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.43.233 80 {
weight 1
HTTP_GET {
url {
path /
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
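For reference, the node04 file differs from node01's in exactly two lines:

```
state BACKUP    # node01 has: state MASTER
priority 50     # node01 has: priority 100
```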
Configuring node04
[root@node04 ~]# yum install keepalived
[root@node04 ~]# cd /etc/keepalived/
[root@node04 keepalived]# cp keepalived.conf keepalived.conf.bak
Start the services and test
On node01:
[root@node01 keepalived]# service keepalived start
[root@node01 keepalived]# ifconfig
[root@node01 keepalived]# ipvsadm -ln
No real servers are listed yet because httpd has not been started on them.
Note: the real_server addresses must be IPs, not hostname mappings like node02/node03; otherwise the two servers will not be found:
On node02:
[root@node02 ~]# echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
[root@node02 ~]# echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
[root@node02 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@node02 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@node02 ~]# ifconfig lo:8 192.168.43.100 netmask 255.255.255.255
[root@node02 ~]# service httpd start
On node03:
[root@node03 ~]# echo 1 > /proc/sys/net/ipv4/conf/eth0/arp_ignore
[root@node03 ~]# echo 2 > /proc/sys/net/ipv4/conf/eth0/arp_announce
[root@node03 ~]# echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
[root@node03 ~]# echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
[root@node03 ~]# ifconfig lo:3 192.168.43.100 netmask 255.255.255.255
[root@node03 ~]# service httpd start
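The four echo commands above take effect immediately but are lost on reboot. The persistent equivalent (a sketch: the standard sysctl keys, appended to /etc/sysctl.conf on node02 and node03 and applied with `sysctl -p`) would be:

```
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.eth0.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
```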
Failover test
Disable the NIC on the node01 director and watch what happens:
[root@node01 ~]# ifconfig eth0 down
Keep refreshing the page; it still responds.
On node04, the VIP has drifted over (check with ifconfig).
Back on node01, bring the NIC up again: ifconfig eth0 up
Run ifconfig again: eth0:2 has drifted back from node04 to node01.
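The drift back to node01 is VRRP preemption at work: the VIP always ends up on the highest-priority live director. In miniature (a toy helper, not a keepalived command; arguments are hypothetical name:priority pairs for the directors currently alive):

```shell
#!/bin/bash
# Pick the VRRP master: highest priority among the live directors.
vip_holder() {
  printf '%s\n' "$@" | sort -t: -k2 -rn | head -n1 | cut -d: -f1
}

vip_holder node01:100 node04:50   # both up        -> prints node01
vip_holder node04:50              # node01 down    -> prints node04
vip_holder node01:100 node04:50   # node01 is back -> prints node01 again
```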