Platform: Red Hat Enterprise Linux 5.8

IP address plan:

LVS-DR-master HA1: 172.16.66.6

LVS-DR-backup HA2: 172.16.66.7

LVS-DR-vip: 172.16.66.1

LVS-DR-rs1:172.16.66.4   

LVS-DR-rs2:172.16.66.5

Package download references:

http://www.linuxvirtualserver.org/software/kernel-2.6/

http://www.keepalived.org/software/


Preparation on every machine before configuration

Disable SELinux

# getenforce   check the SELinux state; if it reports enforcing, run the following steps

# setenforce 0   switch SELinux to permissive for the running system

# vim /etc/sysconfig/selinux   (the change below only takes effect permanently after a reboot)

Change SELINUX=enforcing to SELINUX=disabled
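The same edit can also be made non-interactively; this is just an equivalent one-liner, not a step from the original procedure (/etc/sysconfig/selinux is a symlink to /etc/selinux/config, so the real file is edited here):

# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config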


I. Configuring the real servers (RS)

1. RS1 configuration

1) Configure the local IP (the VM's NIC must be set to bridged mode)

setup --> Network configuration --> Edit Devices --> eth0(eth0) - Advanced Micro Devices [AMD] --> set the IP to 172.16.66.4

Alternatively, set the IP by editing vim /etc/sysconfig/network-scripts/ifcfg-eth0 (a minimal example follows).
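A minimal ifcfg-eth0 for RS1 might look like the sketch below (the /16 netmask follows the prefix used for the VIP later in this article; adjust it to your own network):

DEVICE=eth0
BOOTPROTO=static
IPADDR=172.16.66.4
NETMASK=255.255.0.0
ONBOOT=yes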

# service network restart   restart networking (do this after every configuration change)

2) Create the lvs.sh init script, make it executable, and start it

# vim /etc/init.d/lvs.sh 

#!/bin/bash
#
# Script to start LVS DR real server.
# chkconfig: - 90 10
# description: LVS DR real server
#
. /etc/rc.d/init.d/functions

VIP=172.16.66.1
host=`/bin/hostname`

case "$1" in
start)
      # Start LVS-DR real server on this machine.
      /sbin/ifconfig lo down
      /sbin/ifconfig lo up
      echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
      echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
      echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
      echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
      /sbin/ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
      # With netmask 255.255.255.255 (and broadcast equal to the VIP itself) the VIP lives
      # in a host-only network on lo, so the real server holds it without advertising it.
      /sbin/route add -host $VIP dev lo:0
      ;;
stop)
      # Stop LVS-DR real server loopback device(s).
      /sbin/ifconfig lo:0 down
      echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
      echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
      echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
      echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
      ;;
status)
      # Status of LVS-DR real server.
      islothere=`/sbin/ifconfig lo:0 | grep $VIP`
      isrothere=`netstat -rn | grep "lo:0" | grep $VIP`
      if [ ! "$islothere" -o ! "$isrothere" ]; then
          # Either the route or the lo:0 device was not found.
          echo "LVS-DR real server Stopped."
      else
          echo "LVS-DR real server Running."
      fi
      ;;
*)
      # Invalid entry.
      echo "$0: Usage: $0 {start|status|stop}"
      exit 1
      ;;
esac

# chmod +x /etc/init.d/lvs.sh   make the script executable

# cd /etc/init.d/

# ./lvs.sh start   start the service
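As a quick sanity check (not an extra step from the original), the kernel parameters and the loopback alias can be verified like this:

# sysctl net.ipv4.conf.all.arp_ignore net.ipv4.conf.all.arp_announce
# ifconfig lo:0                   # should show 172.16.66.1 with netmask 255.255.255.255
# route -n | grep 172.16.66.1     # the host route for the VIP should be present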

3) Install httpd, create a test page, and start the service

# yum install httpd -y

# echo "RS1.magedu.com">/var/www/html/index/html

# service httpd start


4) Verify the setup

From the physical host, ping 172.16.66.1 to confirm the VIP responds.


Once the ping succeeds, run arp -a to see which machine answered for the VIP.

Verify with ifconfig (the virtual IP is 172.16.66.1).


2. RS2 configuration (same procedure as RS1)

1) Configure the IP (NIC in bridged mode)

IP: 172.16.66.5

# vim /etc/sysconfig/network-scripts/ifcfg-eth0   set the IP

# service network restart   restart networking

2) Create the init script, make it executable, and start the service

# vim /etc/init.d/lvs.sh   same script content as on RS1

# cd /etc/init.d/

# chmod +x lvs.sh   make it executable

# ./lvs.sh start   start the service

3) Install httpd, provide its web page, and start the service

# yum install httpd -y

# echo "RS2.magedu.com">/var/www/html/index.html

# service httpd start


4) Verify the setup

From the physical host, ping 172.16.66.1 to confirm it responds, then run arp -a to see which host answered.

# ifconfig   verify the addresses

II. Configuring the director nodes HA1 and HA2

Each of the two nodes will also serve its own web pages locally (used for the web-service failover test later on).

HA1: 172.16.66.6

HA2: 172.16.66.7

VIP: 172.16.66.1 (virtual IP)

The two nodes are referred to as node1 and node2 below.

A few things need attention when building the cluster:

1) Node names: name resolution must never depend on DNS but on the local /etc/hosts file, and each node's name there must match the output of uname -n.

2) Mutual SSH trust: the nodes must be able to reach each other's accounts without passwords, using key-based authentication.

3) The clocks of all nodes must be synchronized. This is a basic prerequisite, because the nodes of a high-availability cluster constantly monitor each other's heartbeat. (A quick verification sketch for all three points follows.)
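Not part of the original steps, but once sections 1-3 below are done, the three prerequisites can be confirmed quickly (hostnames and IPs follow this article's setup):

# uname -n; grep node /etc/hosts     # the node name must match an /etc/hosts entry
# ssh node2 'uname -n'               # must succeed without a password prompt
# date; ssh node2 'date'             # the two timestamps should agree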

1. Set the hostnames on HA1 and HA2

1) Set the hostname on HA1


# vim /etc/sysconfig/network   set HOSTNAME=node1.magedu.com


2) Set the hostname on HA2


# vim /etc/sysconfig/network   set HOSTNAME=node2.magedu.com



2. Set up mutual SSH trust between HA1 and HA2

1) On HA1

# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''   generate an RSA key pair with no passphrase

# ssh-copy-id -i .ssh/id_rsa.pub root@172.16.66.7   copy the public key to HA2


# ssh 172.16.66.7   confirm password-less login to HA2


2) On HA2

# ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ''

# ssh-copy-id -i .ssh/id_rsa.pub root@172.16.66.6

# ssh 172.16.66.6 'ifconfig'   run a command on HA1 to confirm the trust works


3. Configure host name resolution and time synchronization

On HA1

1) Host name resolution

# vim /etc/hosts   add the following entries

172.16.66.6 node1.magedu.com node1

172.16.66.7 node2.magedu.com node2


# ping node2   confirm node2 resolves and responds


# scp /etc/hosts node2:/etc/   copy the hosts file to HA2 so both nodes stay consistent

# iptables -L   make sure no iptables rules are in the way


2) Time synchronization

# date

# ntpdate 172.16.0.1   sync the clock from a time server


# service ntpd stop   stop the ntpd service

# chkconfig ntpd off   make sure ntpd does not start at boot

# crontab -e   keep the clocks in sync from now on by adding the entry below

*/5 * * * * /sbin/ntpdate 172.16.0.1 &> /dev/null

# scp /var/spool/cron/root node2:/var/spool/cron/   copy the crontab to HA2

On HA2

# ping node1   confirm node1 is reachable

# ping node1.magedu.com

# date

# crontab -l   confirm the time-sync cron entry copied over from node1 is present


III. Making LVS highly available with keepalived

1. Install keepalived and ipvsadm on both HA1 and HA2 (ipvsadm is available from the system's own repositories, so it can be installed directly with yum)

# yum -y --nogpgcheck localinstall keepalived-1.2.7-5.el5.i386.rpm   install the keepalived package

# scp keepalived-1.2.7-5.el5.i386.rpm node2:/root/   copy the package to node2 and install it there the same way

# yum -y install ipvsadm   install the tool so the IPVS rules can be inspected

2. Configure how the service fails over

On the master node HA1

[root@node1 ~]# cd /etc/keepalived/

[root@node1 keepalived]# ls   list the configuration files

keepalived.conf keepalived.conf.haproxy_example notify.sh

[root@node1 keepalived]# cp keepalived.conf keepalived.conf.bak   back up the main configuration file

[root@node1 keepalived]# vim keepalived.conf   edit it as follows

! Configuration File for keepalived

global_defs {
   notification_email {
        [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0                  # physical interface the VRRP advertisements go out on
    virtual_router_id 79
    priority 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass keepalivedpass    # simple password authentication
    }
    virtual_ipaddress {
        172.16.66.1/16 dev eth0 label eth0:0    # the VIP, added to the interface as eth0:0
    }
}

virtual_server 172.16.66.1 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    nat_mask 255.255.0.0
#   persistence_timeout 50          # persistent connections are not needed here
    protocol TCP

    real_server 172.16.66.4 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 172.16.66.5 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 2
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}

(vrrp_instance VI_1 defines the VRRP virtual router. Of the two ends, one starts out as MASTER and the other as BACKUP, and the MASTER's priority must be somewhat higher than the BACKUP's. The service is monitored, and on failure the MASTER's priority is lowered; the amount subtracted must leave the resulting priority below the one defined on the backup node, so that the VIP fails over. For example, with 101 on the master, 100 on the backup and a weight of -2, the failed master ends up at 99 < 100. A sketch of such a tracking script is shown below; section four uses the full version.)
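A minimal sketch of such a tracking script in keepalived syntax (the marker file name matches the one used later in this article; this block is not part of the section-three configuration above):

vrrp_script chk_schedown {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"   # fail whenever the marker file exists
    interval 2    # run the check every 2 seconds
    weight -2     # on failure, lower this node's priority by 2
}

vrrp_instance VI_1 {
    ...
    track_script {
        chk_schedown
    }
}

Touching /etc/keepalived/down on the master then drops its priority from 101 to 99, below the backup's 100, and the VIP moves.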

3. Copy the configuration file over to the other node, HA2

[root@node1 keepalived]# scp keepalived.conf node2:/etc/keepalived/

On HA2

[root@node2 keepalived]# vim keepalived.conf   change only the two items below

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 79
    priority 100                # must be lower than the master's 101
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass keepalivedpass
    }

4. Start the service on both nodes

[root@node1 keepalived]# service keepalived start

Starting keepalived: [ OK ]

[root@node2 keepalived]# service keepalived start

Starting keepalived: [ OK ]

5. Check the addresses and the ipvsadm rules, then access the VIP from the physical host


View the ipvsadm rules
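The original shows these listings as screenshots; the equivalent commands are plain ifconfig/ipvsadm calls:

# ifconfig eth0:0   # on the master, the VIP 172.16.66.1 is bound here
# ipvsadm -L -n     # one virtual service on 172.16.66.1:80 with the two real servers behind it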


Access 172.16.66.1 from the physical host.


Refresh the page; with the rr scheduler the two real servers answer in turn.


Check the ipvsadm rules again.


IV. Making a web service highly available with keepalived

Only the two virtual machines HA1 and HA2 are needed for this part.

1. HA1 configuration

[root@node1 ~]# service keepalived stop

[root@node1 ~]# yum -y install httpd

[root@node1 ~]# vim /var/www/html/index.html   (put a page identifying node1 in it)


[root@node1 keepalived]# service httpd start

Starting httpd: [ OK ]

From a browser on the physical host, access 172.16.66.6 and the node1 page appears (or check directly on this system with curl http://172.16.66.6).


2. HA2 configuration

[root@node2 ~]# service keepalived stop   stop the keepalived service

[root@node2 ~]# yum -y install httpd   install httpd

[root@node2 keepalived]# vim /var/www/html/index.html   (put a page identifying node2 in it)


[root@node2 keepalived]# service httpd start

Starting httpd: [ OK ]

Access 172.16.66.7 from a browser on the physical host (or run curl http://172.16.66.7 on this system).


3. Edit node1's keepalived configuration, provide the matching notification script, and start the service

[root@node1 ~]# cd /etc/keepalived/

[root@node1 keepalived]# cp keepalived.conf.haproxy_example keepalived.conf

cp: overwrite `keepalived.conf'? yes

Modify the configuration on both nodes and restart the service on each.

On HA1

1) Edit the keepalived configuration

[root@node1 keepalived]# vim keepalived.conf   contents as follows

! Configuration File for keepalived

global_defs {
   notification_email {
        [email protected]
        [email protected]
   }
   notification_email_from [email protected]
   smtp_connect_timeout 3
   smtp_server 127.0.0.1
   router_id LVS_DEVEL
}

vrrp_script chk_httpd {
    script "killall -0 httpd"       # killall -0 only tests whether httpd processes exist
    interval 2                      # check every 2 seconds
    weight -2                       # if the check fails, decrease the priority by 2
    fall 2                          # require 2 consecutive failures to declare failure
    rise 1                          # require 1 success to declare recovery
}

vrrp_script chk_schedown {
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"   # manual maintenance switch
    interval 2
    weight -2
}

vrrp_instance VI_1 {
    interface eth0
    # interface for inside_network, bound by vrrp
    state MASTER
    # Initial state, MASTER|BACKUP
    # As soon as the other machine(s) come up,
    # an election will be held and the machine
    # with the highest "priority" will become MASTER.
    # So the entry here doesn't matter a whole lot.
    priority 101
    # for electing MASTER, highest priority wins.
    # to be MASTER, make 50 more than other machines.
    virtual_router_id 51
    # arbitrary unique number 0..255
    # used to differentiate multiple instances of vrrpd
    # running on the same NIC (and hence same socket).
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass password
    }
    track_interface {
        eth0
    }
    # optional, monitor these as well.
    # go to FAULT state if any of these go down.
    virtual_ipaddress {
        172.16.66.1/16 dev eth0 label eth0:0
    }
    # addresses add|del on change to MASTER, to BACKUP.
    # With the same entries on other machines,
    # the opposite transition will be occurring.
    # <IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>
    track_script {
        chk_httpd
        chk_schedown
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}

#vrrp_instance VI_2 {
#    interface eth0
#    state MASTER                  # BACKUP for slave routers
#    priority 101                  # 100 for BACKUP
#    virtual_router_id 79
#    garp_master_delay 1
#
#    authentication {
#        auth_type PASS
#        auth_pass password
#    }
#    track_interface {
#        eth0
#    }
#    virtual_ipaddress {
#        172.16.66.2/16 dev eth0 label eth0:1
#    }
#    track_script {
#        chk_httpd
#        chk_schedown
#    }
#
#    notify_master "/etc/keepalived/notify.sh master eth0:1"
#    notify_backup "/etc/keepalived/notify.sh backup eth0:1"
#    notify_fault "/etc/keepalived/notify.sh fault eth0:1"
#}
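The notify.sh script referenced above ships alongside the example configuration but is not listed in the original article; a minimal sketch of what such a notification script typically looks like (the recipient address and message wording are assumptions):

#!/bin/bash
# /etc/keepalived/notify.sh -- send a mail whenever this node changes VRRP state
contact='root@localhost'              # assumed recipient; adjust as needed

notify() {
    local subject="$(hostname) changed to be $1"
    local body="$(date '+%F %T'): VRRP transition, $(hostname) changed to be $1"
    echo "$body" | mail -s "$subject" "$contact"
}

case "$1" in
master)  notify master ;;
backup)  notify backup ;;
fault)   notify fault ;;
*)       echo "Usage: $(basename $0) {master|backup|fault}"; exit 1 ;;
esac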

2) Copy the configuration file and the notification script to node2

# scp keepalived.conf notify.sh node2:/etc/keepalived/


[root@node1 keepalived]# service keepalived restart

Stopping keepalived: [ OK ]

Starting keepalived: [ OK ]

On HA2

[root@node2 keepalived]# vim keepalived.conf   change the following

vrrp_instance VI_1 {
    interface eth0
    # interface for inside_network, bound by vrrp
    state BACKUP
    # Initial state, MASTER|BACKUP
    # As soon as the other machine(s) come up,
    # an election will be held and the machine
    # with the highest "priority" will become MASTER.
    # So the entry here doesn't matter a whole lot.
    priority 100
    # for electing MASTER, highest priority wins.
    # to be MASTER, make 50 more than other machines.

[root@node2 keepalived]# service keepalived restart

Stopping keepalived: [ OK ]

Starting keepalived: [ OK ]

4. Simulate a failure on the master

First stop the web service on the master, then check whether the VIP has floated over to the backup.

[root@node1 keepalived]# service httpd stop

Stopping httpd: [ OK ]
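The movement of the address can be watched with ordinary ifconfig and curl calls (these are the obvious checks, not extra steps from the original):

# ifconfig eth0:0                 # on node1 the alias should now be gone
# ssh node2 'ifconfig eth0:0'     # the VIP 172.16.66.1 should now be bound on node2
# curl http://172.16.66.1         # from the physical host: the node2 page is returned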

On the master, the addresses show that the eth0:0 alias with the VIP is gone.


On the backup, the VIP 172.16.66.1 now appears as eth0:0.


Test: access 172.16.66.1 from the physical host; the page is now served by node2.


V. A dual-master model for the web service with keepalived

The dual-master model is built on top of the master/backup setup above: each node is MASTER for one VRRP instance and BACKUP for the other, each instance carrying its own VIP.

1. Edit the configuration files on both nodes

HA1:

[root@node1 keepalived]# vim keepalived.conf   add the second VRRP instance as follows

vrrp_instance VI_2 {
    interface eth0
    state BACKUP                  # node1 is the BACKUP for this instance
    priority 100                  # 100 for BACKUP
    virtual_router_id 79
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass password
    }
    track_interface {
        eth0
    }
    virtual_ipaddress {
        172.16.66.2/16 dev eth0 label eth0:1
    }
    track_script {
        chk_httpd
        chk_schedown
    }
    notify_master "/etc/keepalived/notify.sh master eth0:1"
    notify_backup "/etc/keepalived/notify.sh backup eth0:1"
    notify_fault "/etc/keepalived/notify.sh fault eth0:1"
}

HA2: [root@node2 keepalived]# vim keepalived.conf

vrrp_instance VI_2 {
    interface eth0
    state MASTER                  # node2 is the MASTER for this instance
    priority 101                  # higher than node1's 100 for VI_2
    virtual_router_id 79          # must be the same virtual_router_id node1 uses for VI_2
    garp_master_delay 1
    authentication {
        auth_type PASS
        auth_pass password
    }
    track_interface {
        eth0
    }
    virtual_ipaddress {
        172.16.66.2/16 dev eth0 label eth0:1
    }
    track_script {
        chk_httpd
        chk_schedown
    }
    notify_master "/etc/keepalived/notify.sh master eth0:1"
    notify_backup "/etc/keepalived/notify.sh backup eth0:1"
    notify_fault "/etc/keepalived/notify.sh fault eth0:1"
}

2. Restart keepalived on both nodes

[root@node1 keepalived]# service keepalived restart

Stopping keepalived: [ OK ]

Starting keepalived: [ OK ]

[root@node2 keepalived]# service keepalived restart

Stopping keepalived: [ OK ]

Starting keepalived: [ OK ]

3. Verify: first check the addresses on both nodes, then access both VIPs from the physical host
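The checks are plain ifconfig and curl calls (the original shows the results as screenshots):

# ifconfig eth0:0    # on node1: 172.16.66.1
# ifconfig eth0:1    # on node2: 172.16.66.2
From the physical host:
# curl http://172.16.66.1    # answered by node1
# curl http://172.16.66.2    # answered by node2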

On node1, the VIP 172.16.66.1 is configured as eth0:0.


On node2, the VIP 172.16.66.2 is configured as eth0:1.


Accessing 172.16.66.1 from the physical host returns the node1 page.


Accessing 172.16.66.2 returns the node2 page.


4. Simulate node1 going down

[root@node1 keepalived]# touch down   (the chk_schedown script detects this file and lowers node1's priority)

Check node1's addresses with ifconfig: the VIPs have moved away.


Check node2's addresses: it now holds both VIPs.


Verify from the physical host: requests to both 172.16.66.1 and 172.16.66.2 are now answered by node2.

[root@node1 keepalived]# rm -rf down   remove the down marker file

After deleting the down file, check with ifconfig whether node1 has reclaimed its VIP.


This covers some of the high-availability functionality that keepalived can provide; I hope it is helpful.