Lvs+Keepalived自动安装配置(附带脚本)
LVS keepalived负载均衡高可用 配置安装大全
LVS+Keepalived实现高可用集群一、基础介绍 (2)二、搭建配置LVS-NA T模式 (2)三、搭建配置LVS-DR模式 (4)四、另外一种脚本方式实现上面LVS-DR模式 (6)五、keepalived + LVS(DR模式) 高可用 (8)六、Keepalived 配置文件详细介绍 (11)一、基础介绍(一)根据业务目标分成三类:High Availability 高可用Load Balancing 负载均衡High Performance 高性能(二)实现集群产品:HA类:rhcs、heartbeat、keepalivedLB类:haproxy、lvs、nginx、f5、piranhaHPC类:/index/downfile/infor_id/42(三)LVS 负载均衡有三种模式:LVS-DR模式(direct router)直接路由模式进必须经过分发器,出就直接出LVS-NAT模式(network address translation)进出必须都经过分发器LVS-TUN模式(ip tunneling)IP隧道模式服务器可以放到全国各地二、搭建配置LVS-NAT模式1 、服务器IP规划:DR服务器添加一张网卡eth1,一个网卡做DIP,一个网口做VIP。
设置DIP、VIP IP地址:DIP的eth1和所有RIP相连同一个网段CIP和DIP的eth0(Vip)相连同一个网段Vip eth0 192.168.50.200Dip eth1 192.168.58.4客户机IP:Cip 192.168.50.32台真实服务器IP:Rip1 192.168.58.2Rip2 192.168.58.32 、R ealServer1配置:mount /dev/xvdd /media/vi /var/www/html/index.html写入:this is realserver1启动httpdvi /etc/sysconfig/network-scripts/ifcfg-eth0设置RIP,子网掩码必须设置成DIPIPADDR=192.168.58.2NETMASK=255.255.255.0GA TEWAY=192.168.58.43 、R ealServer2 配置:vi /var/www/html/index.html写入:this is realserver2启动httpdvi /etc/sysconfig/network-scripts/ifcfg-eth0设置RIP,子网掩码必须设置成DIPIPADDR=192.168.58.3NETMASK=255.255.255.0GA TEWAY=192.168.58.44 、在DR服务器上做以下设置:开启IP数据包转发vi /etc/sysctl.confnet.ipv4.ip_forward = 0 ##0改成1 ,此步很重要查看是否开启:sysctl -p5 、安装LVS服务:ipvsadmyum -y install ipvsadmlsmod |grep ip_vsTurbolinux系统没有带rpm包,必须下载源码安装:#ln -s /usr/src/kernels/2.6.18-164.el5-x86_64/ /usr/src/linux##如果不做连接,编译时会包错#tar zxvf ipvsadm-1.24.tar.gz#cd ipvsadm-1.24#make && make install运行下ipvsadm ,就加到ip_vs模块到内核了lsmod | grep ip 可以看到有ip_vs模块了6 、配置DR服务器,添加虚拟服务ipvsadm -L -n 查询信息ipvsadm -A -t 192.168.50.200:80 -s rr #添加集群服务、调度算法,rr为调度算法ipvsadm -a -t 192.168.50.200:80 -r 192.168.58.2 -m -w 1 # -m代表net模式,-w代表权重ipvsadm -a -t 192.168.50.200:80 -r 192.168.58.3 -m -w 2ipvsadm -L -n 再次查看是就有了realserverservice ipvsadm save 保存配置iptables -L 关闭或者清空防火墙watch -n 1 'ipvsadm -L -n' 查看访问记录的数显示如下:-> RemoteAddress:Port Forward Weight ActiveConn InActConnTCP 192.168.50.200:80 rr-> 192.168.58.2:80 Masq 1 0 13-> 192.168.58.3:80 Masq 2 0 12ActiveConn是活动连接数,也就是tcp连接状态的ESTABLISHED;InActConn是指除了ESTABLISHED以外的,所有的其它状态的tcp连接.7 、测试:http://192.168.58.200配完后若想修改算法:ipvsadm -E -t 192.168.58.200:80 -s wlc修改Rip的权重:ipvsadm -e -t 192.168.58.200:80 -r 192.168.58.2 -m -w 1ipvsadm -e -t 192.168.58.200:80 -r 192.168.58.3 -m -w 5三、搭建配置LVS-DR模式lo:1 回应客户端,lo:1上的IP跟机器有关,跟网卡没有关系arp_announce 对网络接口上本地IP地址发出的ARP回应作出相应级别的限制arp_ignore 定义对目标地址为本地IP的ARP询问不同的请求一、3台服务器IP配置规划:DIP:eth0:1 192.168.58.200/32 (VIP)eth0 192.168.58.3/24 (DIP)RIP1 lo:1 192.168.58.200/32 (VIP)eth0 192.168.58.4/24RIP2 lo:1 192.168.58.200/32 (VIP)eth0 192.168.58.5/24 .................................................................RIP n lo:1 192.168.58.200/32 (VIP)eth0 192.168.58.N/24二、每台realserver都加上下面四个步骤配置:1 、配置每台rip的IP、http,web页面2 、关闭每台rip服务器的ARP广播:echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignoreecho 2 > /proc/sys/net/ipv4/conf/lo/arp_announceecho 1 > /proc/sys/net/ipv4/conf/all/arp_ignoreecho 2 > /proc/sys/net/ipv4/conf/all/arp_announce3 、配置VIPifconfig lo:1 192.168.58.200 netmask 255.255.255.255 broadcast 192.168.58.200 up4 、配置网关route add -host 192.168.58.200 dev lo:1三、DR上的配置(DR模式下Dip不用开启转发):1 、配置DIP,在eth0上添加一个子VIP接口:添加VIP:ifconfig eth0:1 192.168.58.200 broadcast 192.168.58.200 netmask 255.255.255.255 up2 、配置网关:route add -host 192.168.58.200 dev eth0:1route -n3 、安装ipvsadm(方法见文档上面)yum -y install ipvsadmlsmod |grep ip_vs4 、配置LVS集群:ipvsadm -A -t 192.168.58.200:80 -s rr #添加集群服务、调度算法,rr为调度算法ipvsadm -a -t 192.168.58.200:80 -r 192.168.58.3 -g -w 1 # -g代表DR模式,-w代表权重ipvsadm -a -t 192.168.58.200:80 -r 192.168.58.2 -g -w 2service ipvsadm saveipvsadm -L -n 查看信息四、测试:http://192.168.58.200四、另外一种脚本方式实现上面LVS-DR模式IP规划:Dip eth0 192.168.58.139VIP:192.168.58.200RIP1:192.168.58.2RIP2:192.168.58.31 、D R服务器上安装ipvsadm#yum -y install ipvsadm#lsmod | grep ip_vs 查看没有输出#modprobe ip_vs 安装即可2 、配置DIP服务器、LVS这里也是个写脚本为了方便vim /etc/init.d/lvsdr#!/bin/bash#lvs of DRVIP=192.168.58.200RIP1=192.168.58.2RIP2=192.168.58.3case "$1" instart)echo "start lvs of DR"/sbin/ifconfig eth0:0 $VIP broadcast $VIP netmask 255.255.255.0 up echo "1" > /proc/sys/net/ipv4/ip_forward/sbin/iptables -C/sbin/ipvsadm -A -t 
$VIP:80 -s rr/sbin/ipvsadm -a -t $VIP:80 -r $RIP1:80 -g/sbin/ipvsadm -a -t $VIP:80 -r $RIP2:80 -g/sbin/ipvsadm;;stop)echo "stop lvs of DR"echo "0" > /proc/sys/net/ipv4/ip_forward/sbin/ipvsadm -C/sbin/ifconfig eth0:0 down;;*)echo "Usage :$0 {start|stop}"exit1esacexit 0#chmod o+x /etc/init.d/lvsdr启动脚本:#service lvsdr start3 、2台RIP服务器都配置这里我们也都可以写成脚本开启2台RIP的httpd服务。
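上文提到 realserver 端的配置也可以写成脚本，下面给出一个按前文步骤整理的示例脚本（VIP 取前文的 192.168.58.200，网卡别名与 httpd 启动方式均为假设，仅供参考）：
```
#!/bin/bash
# realserver 端示例脚本:在 lo:0 上绑定 VIP、抑制 ARP,并启动 httpd
VIP=192.168.58.200
case "$1" in
start)
    ifconfig lo:0 $VIP netmask 255.255.255.255 broadcast $VIP up
    route add -host $VIP dev lo:0
    echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
    service httpd start
    echo "RealServer started"
    ;;
stop)
    ifconfig lo:0 down
    route del -host $VIP dev lo:0 2>/dev/null
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
    echo "RealServer stopped"
    ;;
*)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
```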
负载均衡--LVS+Keepalived
利用LVS+Keepalived 实现高性能高可用负载均衡作者:NetSeek 网站: 背景:随着你的网站业务量的增长你网站的服务器压力越来越大?需要负载均衡方案!商业的硬件如F5又太贵,你们又是创业型互联公司如何有效节约成本,节省不必要的浪费?同时实现商业硬件一样的高性能高可用的功能?有什么好的负载均衡可伸张可扩展的方案吗?答案是肯定的!有!我们利用LVS+Keepalived基于完整开源软件的架构可以为你提供一个负载均衡及高可用的服务器。
一、LVS+Keepalived 介绍
1. LVS
LVS是Linux Virtual Server的简写，意即Linux虚拟服务器，是一个虚拟的服务器集群系统。
本项目在1998年5月由章文嵩博士创立，是中国国内最早出现的自由软件项目之一。
目前有三种IP负载均衡技术（VS/NAT、VS/TUN和VS/DR）；八种调度算法（rr、wrr、lc、wlc、lblc、lblcr、dh、sh）。

2.KeepalviedKeepalived在这里主要用作RealServer的健康状态检查以及LoadBalance主机和BackUP主机之间failover的实现二. 网站负载均衡拓朴图.IP信息列表:名称IPLVS-DR-Master 61.164.122.6LVS-DR-BACKUP 61.164.122.7LVS-DR-VIP 61.164.122.8WEB1-Realserver 61.164.122.9WEB2-Realserver 61.164.122.10GateWay 61.164.122.1三. 安装LVS和Keepalvied软件包1. 下载相关软件包#mkdir /usr/local/src/lvs#cd /usr/local/src/lvs#wget /software/kernel-2.6/ipvsadm-1.24.tar.gz #wget /software/keepalived-1.1.15.tar.gz2. 安装LVS和Keepalived#lsmod |grep ip_vs#uname -r2.6.18-53.el5PAE#ln -s /usr/src/kernels/2.6.18-53.el5PAE-i686/ /usr/src/linux#tar zxvf ipvsadm-1.24.tar.gz#cd ipvsadm-1.24#make && make install#find / -name ipvsadm # 查看ipvsadm的位置#tar zxvf keepalived-1.1.15.tar.gz#cd keepalived-1.1.15#./configure && make && make install#find / -name keepalived # 查看keepalived位置#cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/#cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/#mkdir /etc/keepalived#cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/#cp /usr/local/sbin/keepalived /usr/sbin/#service keepalived start|stop #做成系统启动服务方便管理.四. 配置LVS实现负载均衡1.LVS-DR,配置LVS脚本实现负载均衡#vi /usr/local/sbin/lvs-dr.sh#!/bin/bash# description: start LVS of DirectorServer#Written by :NetSeek GW=61.164.122.1# website director vip.SNS_VIP=61.164.122.8SNS_RIP1=61.164.122.9SNS_RIP2=61.164.122.10./etc/rc.d/init.d/functionslogger $0 called with $1case "$1" instart)# set squid vip/sbin/ipvsadm --set 30 5 60/sbin/ifconfig eth0:0 $SNS_VIP broadcast $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP up/sbin/route add -host $SNS_VIP dev eth0:0/sbin/ipvsadm -A -t $SNS_VIP:80 -s wrr -p 3/sbin/ipvsadm -a -t $SNS_VIP:80 -r $SNS_RIP1:80 -g -w 1/sbin/ipvsadm -a -t $SNS_VIP:80 -r $SNS_RIP2:80 -g -w 1touch /var/lock/subsys/ipvsadm >/dev/null 2>&1;;stop)/sbin/ipvsadm -C/sbin/ipvsadm -Zifconfig eth0:0 downifconfig eth0:1 downroute del $SNS_VIProute del $SS_VIPrm -rf /var/lock/subsys/ipvsadm >/dev/null 2>&1echo "ipvsadm stoped";;status)if [ ! -e /var/lock/subsys/ipvsadm ];thenecho "ipvsadm stoped"exit 1elseecho "ipvsadm OK"fi;;*)echo "Usage: $0 {start|stop|status}"exit 1esacexit 02.配置Realserver脚本.#vi /usr/local/sbin/realserver.sh#!/bin/bash# description: Config realserver lo and apply noarp#Written by :NetSeek SNS_VIP=61.164.122.8. /etc/rc.d/init.d/functionscase "$1" instart)ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP/sbin/route add -host $SNS_VIP dev lo:0echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignoreecho "2" >/proc/sys/net/ipv4/conf/lo/arp_announceecho "1" >/proc/sys/net/ipv4/conf/all/arp_ignoreecho "2" >/proc/sys/net/ipv4/conf/all/arp_announcesysctl -p >/dev/null 2>&1echo "RealServer Start OK";;stop)ifconfig lo:0 downroute del $SNS_VIP >/dev/null 2>&1echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignoreecho "0" >/proc/sys/net/ipv4/conf/lo/arp_announceecho "0" >/proc/sys/net/ipv4/conf/all/arp_ignoreecho "0" >/proc/sys/net/ipv4/conf/all/arp_announceecho "RealServer Stoped";;*)echo "Usage: $0 {start|stop}"exit 1esacexit 0或者采用secondary ip address方式配置# vi /etc/sysctl.confnet.ipv4.conf.lo.arp_ignore = 1net.ipv4.conf.lo.arp_announce = 2net.ipv4.conf.all.arp_ignore = 1net.ipv4.conf.all.arp_announce = 2#sysctl –p#ip addr add 61.164.122.8/32 dev lo#ip add list 查看是否绑定3. 启动lvs-dr脚本和realserver启本,在DR上可以查看LVS当前状态:#watch ipvsadm –ln五.利用Keepalvied实现负载均衡和和高可用性1.配置在主负载均衡服务器上配置keepalived.conf#vi /etc/keepalived/keepalived.conf! 
Configuration File for keepalivedglobal_defs {notification_email {cnseek@# failover@firewall.loc# sysadmin@firewall.loc}notification_email_from sns-lvs@smtp_server 127.0.0.1# smtp_connect_timeout 30router_id LVS_DEVEL}# 20081013 written by :netseek# VIP1vrrp_instance VI_1 {state MASTER #备份服务器上将MASTER改为BACKUP interface eth0virtual_router_id 51priority 100 # 备份服务上将100改为99advert_int 1authentication {auth_type PASSauth_pass 1111}virtual_ipaddress {61.164.122.8#(如果有多个VIP,继续换行填写.)}}virtual_server 61.164.122.8 80 {delay_loop 6 #(每隔10秒查询realserver状态)lb_algo wrr #(lvs 算法)lb_kind DR #(Direct Route)persistence_timeout 60 #(同一IP的连接60秒内被分配到同一台realserver) protocol TCP #(用TCP协议检查realserver状态)real_server 61.164.122.9 80 {weight 3 #(权重)TCP_CHECK {connect_timeout 10 #(10秒无响应超时)nb_get_retry 3delay_before_retry 3connect_port 80}}real_server 61.164.122.10 80 {weight 3TCP_CHECK {connect_timeout 10nb_get_retry 3delay_before_retry 3connect_port 80}}}2. BACKUP服务器同上配置,先安装lvs再按装keepalived,仍后配置/etc/keepalived/keepalived.conf,只需将红色标示的部分改一下即可.3. vi /etc/rc.local#/usr/local/sbin/lvs-dr.sh 将lvs-dr.sh这个脚本注释掉。
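BACKUP 配置好之后，可以按下面的常规方式启动并做初步验证（VIP 为上文的 61.164.122.8，网卡名以实际环境为准）：
```
# 两台 Director 上分别启动 keepalived
service keepalived start
# 两台 realserver 上执行前文的 realserver.sh
/usr/local/sbin/realserver.sh start
# 在 MASTER 上确认 VIP 已绑定、LVS 转发规则已由 keepalived 自动生成
ip addr show eth0 | grep 61.164.122.8
watch ipvsadm -ln
```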
如何利用shell开发keepalived启动脚本
keepalived是什么？Keepalived软件起初是专为LVS负载均衡软件设计的，用来管理并监控LVS集群系统中各个服务节点的状态，后来又加入了可以实现高可用的VRRP功能。
因此，Keepalived除了能够管理LVS软件外，还可以作为其他服务（例如：Nginx、Haproxy、MySQL等）的高可用解决方案软件。
Keepalived软件主要是通过VRRP协议实现高可用功能的。
VRRP是Virtual Router Redundancy Protocol（虚拟路由器冗余协议）的缩写，VRRP出现的目的就是为了解决静态路由单点故障问题的，它能够保证当个别节点宕机时，整个网络可以不间断地运行。
所以，Keepalived一方面具有配置管理LVS的功能，同时还具有对LVS下面节点进行健康检查的功能，另一方面也可实现系统网络服务的高可用功能。
1.1 keepalived的相关的⽂件keepalived的执⾏命令:/data/apps/keepalived/sbin/keepalivedkeepalived的配置⽂件:/data/apps/keepalived/etc/keepalived/keepalived.confkeepalived的Pid⽂件:/data/apps/keepalived/run/keepalived.pid1.2 keepalived的启停⽌⽅式注意:不要去修改/data/apps/keepalived/etc/sysconfig/keepalived⽂件启动⽅式:keepalived -f 配置⽂件(绝对路径) -p PID⽂件(绝对路径)关闭⽅式:kill $( cat PID⽂件(绝对路径) )1.3 脚本内容注意:该脚本只能Linux的超级⽤户root才能启动,因为脚本中有进⾏限制#!/bin/bash## Define variablesRETVAL=0Conf="/data/apps/keepalived/etc/keepalived/keepalived.conf"Exce="/data/apps/keepalived/sbin/keepalived"Pid="/data/apps/keepalived/run/keepalived.pid"# Determine the user to executeif [ "$UID" -ne "$RETVAL" ];thenecho "Must be root to run scripts"exit 1fi# Load local functions library[ -f /etc/init.d/functions ] && source /etc/init.d/functions# Define functionsstart(){if [ ! -f "$Pid" ];then$Exce -f $Conf -p $Pid >/dev/null 2>&1RETVAL=$?if [ $RETVAL -eq 0 ];thenaction "Start keepalived service" /bin/trueelseaction "Start keepalived service" /bin/falsefielseecho "keepalived service is running"fireturn $RETVAL}stop(){if [ -f "$Pid" ];thenkill $(cat $Pid) >/dev/null 2>&1RETVAL=$?if [ $RETVAL -eq 0 ];thenaction "Stop keepalived service" /bin/trueelseaction "Stop keepalived service" /bin/falsefielseecho "keepalived service is not running"fireturn $RETVAL}status(){if [ -f "$Pid" ];thenecho "keepalived service is running"elseecho "keepalived service is not running"fireturn $RETVAL}# case local functionscase "$1" instart)startRETVAL=$?;;stop)stopRETVAL=$?;;status)statusRETVAL=$?;;restart)stopsleep 2startRETVAL=$?;;*)echo "USAGE:$0{status|start|stop|restart}"exit 1esac# Scripts return valuesexit $RETVAL总结到此这篇关于使⽤shell开发keepalived启动脚本的⽂章就介绍到这了,更多相关shell keepalived启动脚本内容请搜索以前的⽂章或继续浏览下⾯的相关⽂章希望⼤家以后多多⽀持!。
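假设把上面的脚本保存为 /etc/init.d/keepalived（保存路径为假设值）并赋予可执行权限，常见的使用方式如下（脚本已限制必须以 root 执行）：
```
chmod +x /etc/init.d/keepalived
/etc/init.d/keepalived start
/etc/init.d/keepalived status
/etc/init.d/keepalived restart
/etc/init.d/keepalived stop
```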
Keepalived原理及配置详解-选项参数详细
Keepalived原理及配置详解-选项参数详细接着上篇,既然做了mysql+keepalived就需要对这些有个了解,以至于有了知道可以从哪里着手及相关配置;本篇是在网易博客上看到的一篇,就记录并且copy了,请大家尊重原创作者,珍惜汗水劳动者;原url /blog/static/1007689 14201191762253640/keepalived的核心vrrp相关知识:/5675165/1179600什么是Keepalived呢,keepalived观其名可知,保持存活,在网络里面就是保持在线了,也就是所谓的高可用或热备,用来防止单点故障(单点故障是指一旦某一点出现故障就会导致整个系统架构的不可用)的发生,那说到keepalived时不得不说的一个协议就是VRRP协议,可以说这个协议就是keepalived实现的基础,那么首先我们来看看VRRP协议一,VRRP协议二,Keepalived原理Keepalived原理keepalived也是模块化设计,不同模块复杂不同的功能,下面是keepalived的组件core check vrrp libipfwc libipvs-2.4 libipvs-2.6core:是keepalived的核心,复杂主进程的启动和维护,全局配置文件的加载解析等check:负责healthchecker(健康检查),包括了各种健康检查方式,以及对应的配置的解析包括LVS的配置解析vrrp:VRRPD子进程,VRRPD子进程就是来实现VRRP协议的libipfwc:iptables(ipchains)库,配置LVS会用到libipvs*:配置LVS会用到注意,keepalived和LVS完全是两码事,只不过他们各负其责相互配合而已keepalived启动后会有三个进程父进程:内存管理,子进程管理等等子进程:VRRP子进程子进程:healthchecker子进程有图可知,两个子进程都被系统WatchDog看管,两个子进程各自复杂自己的事,healthchecker子进程复杂检查各自服务器的健康程度,例如HTTP,LVS等等,如果healthchecker子进程检查到MASTER上服务不可用了,就会通知本机上的兄弟VRRP子进程,让他删除通告,并且去掉虚拟IP,转换为BACKUP状态三,Keepalived配置文件详解keepalived配置详解keepalived有三类配置区域(姑且就叫区域吧),注意不是三种配置文件,是一个配置文件里面三种不同类别的配置区域全局配置(Global Configuration)VRRPD配置LVS配置一,全局配置全局配置又包括两个子配置:全局定义(global definition)静态路由配置(static ipaddress/routes)1,全局定义(global definition)配置范例全局配置解析global_defs全局配置标识,表面这个区域{}是全局配置表示keepalived在发生诸如切换操作时需要发送email通知,以及email发送给哪些邮件地址,邮件地址可以多个,每行一个****************************************表示发送通知邮件时邮件源地址是谁smtp_server 127.0.0.1表示发送email时使用的smtp服务器地址,这里可以用本地的sendmail来实现mtp_connect_timeout 30连接smtp连接超时时间router_id node1机器标识2,静态地址和路由配置范例这里实际上和系统里面命令配置IP地址和路由一样例如:192.168.1.1/24 brd + dev eth0 scope global 相当于: ip addr add 192.168.1.1/24 brd + dev eth0 scope global就是给eth0配置IP地址路由同理一般这个区域不需要配置这里实际上就是给服务器配置真实的IP地址和路由的,在复杂的环境下可能需要配置,一般不会用这个来配置,我们可以直接用vi /etc/sysconfig/network-script/ifcfg-eth1来配置,切记这里可不是VIP哦,不要搞混淆了,切记切记!二,VRRPD配置VRRPD配置包括三个类VRRP同步组(synchroization group)VRRP实例(VRRP Instance)VRRP脚本1,VRRP同步组(synchroization group)配置范例http和mysql是实例名和下面的实例名一致notify /path/to/notify.sh:smtp alter表示切换时给global defs中定义的邮件地址发送邮件通知2,VRRP实例(instance)配置范例state:state 指定instance(Initial)的初始状态,就是说在配置好后,这台服务器的初始状态就是这里指定的,但这里指定的不算,还是得要通过竞选通过优先级来确定,里如果这里设置为master,但如若他的优先级不及另外一台,那么这台在发送通告时,会发送自己的优先级,另外一台发现优先级不如自己的高,那么他会就回抢占为masterinterface:实例绑定的网卡,因为在配置虚拟IP的时候必须是在已有的网卡上添加的dont track primary:忽略VRRP的interface错误track interface:跟踪接口,设置额外的监控,里面任意一块网卡出现问题,都会进入故障(FAULT)状态,例如,用nginx做均衡器的时候,内网必须正常工作,如果内网出问题了,这个均衡器也就无法运作了,所以必须对内外网同时做健康检查mcast src ip:发送多播数据包时的源IP地址,这里注意了,这里实际上就是在那个地址上发送VRRP通告,这个非常重要,一定要选择稳定的网卡端口来发送,这里相当于heartbeat的心跳端口,如果没有设置那么就用默认的绑定的网卡的IP,也就是interface指定的IP地址garp master delay:在切换到master状态后,延迟进行免费的ARP(gratuitous ARP)请求virtual router id:这里设置VRID,这里非常重要,相同的VRID 为一个组,他将决定多播的MAC地址priority 100:设置本节点的优先级,优先级高的为masteradvert int:检查间隔,默认为1秒virtual ipaddress:这里设置的就是VIP,也就是虚拟IP地址,他随着state的变化而增加删除,当state为master的时候就添加,当state为backup的时候删除,这里主要是有优先级来决定的,和state设置的值没有多大关系,这里可以设置多个IP地址virtual routes:原理和virtual ipaddress一样,只不过这里是增加和删除路由lvs sync daemon interface:lvs syncd绑定的网卡authentication:这里设置认证auth type:认证方式,可以是PASS或AH两种认证方式auth pass:认证密码nopreempt:设置不抢占,这里只能设置在state为backup的节点上,而且这个节点的优先级必须别另外的高preempt delay:抢占延迟debug:debug级别notify master:和sync group这里设置的含义一样,可以单独设置,例如不同的实例通知不同的管理人员,http实例发给网站管理员,mysql的就发邮件给DBA3,VRRP脚本首先在vrrp_script区域定义脚本名字和脚本执行的间隔和脚本执行的优先级变更vrrp_script check_running {script"/usr/local/bin/check_running"interval 10 #脚本执行间隔weight 10 #脚本结果导致的优先级变更:10表示优先级+10;-10则表示优先级-10}然后在实例(vrrp_instance)里面引用,有点类似脚本里面的函数引用一样:先定义,后引用函数名track_script {check_running weight 
20}注意:VRRP脚本(vrrp_script)和VRRP实例(vrrp_instance)属于同一个级别LVS配置如果你没有配置LVS+keepalived那么无需配置这段区域,里如果你用的是nginx来代替LVS,这无限配置这款,这里的LVS配置是专门为keepalived+LVS集成准备的。
keepalived编译
Keepalived编译什么是Keepalived?Keepalived是一个用于实现高可用性和负载均衡的开源软件。
它基于VRRP(Virtual Router Redundancy Protocol)协议,可以在多个服务器之间实现故障切换和负载均衡。
通过配置Keepalived,可以将多个服务器组成一个虚拟路由器,并将请求分发到这些服务器上,从而提高系统的可用性和性能。
编译Keepalived的步骤编译Keepalived需要一些准备工作和步骤,下面将详细介绍如何编译Keepalived。
步骤一:安装必要的依赖项在开始编译之前,我们需要安装一些必要的依赖项。
这些依赖项包括:•gcc:C语言编译器•make:构建工具•libssl-dev:OpenSSL库开发包•libpopt-dev:popt库开发包可以使用以下命令来安装这些依赖项:sudo apt-get updatesudo apt-get install gcc make libssl-dev libpopt-dev -y步骤二:下载Keepalived源代码在开始编译之前,我们需要下载Keepalived的源代码。
可以从官方网站或者GitHub上获取最新版本的源代码。
wgettar -zxvf keepalived-2.3.0.tar.gzcd keepalived-2.3.0步骤三:配置编译选项在编译之前,我们需要配置一些选项,例如安装路径、启用的功能等。
可以使用以下命令来配置编译选项:./configure --prefix=/usr/local/keepalived \--sysconf=/etc/keepalived \--enable-libiptc \--enable-snmp \--enable-dynamic-linking这里我们将Keepalived安装到/usr/local/keepalived目录下,并指定了一些启用的功能,例如libiptc、snmp和动态链接。
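配置完成后，按常规的源码安装流程继续编译并安装，然后确认版本（以下路径基于上面指定的 --prefix，属于推断值，请以实际环境为准）：
```
make
sudo make install
# 确认安装成功并查看版本
/usr/local/keepalived/sbin/keepalived -v
```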
lvs 简单用法
lvs 简单用法LVS(Linux Virtual Server)是一个基于Linux操作系统的高性能、可扩展的负载均衡器。
它允许将网络流量均匀分配到多个后端服务器,从而提高系统的可用性和性能。
为了使用LVS,首先要确保在服务器上安装了ipvsadm工具包。
使用以下命令可以检查是否安装了该工具包:```ipvsadm -v```如果未安装ipvsadm,可以使用以下命令安装:```sudo apt-get install ipvsadm```安装完ipvsadm后,可以开始配置LVS。
配置LVS需要进行以下几个步骤:1. 配置LVS调度器:LVS调度器是负责接收客户端请求并将其转发至后端服务器的组件。
可以通过编辑`/etc/sysctl.conf`文件来配置LVS调度器。
添加以下行以启用IP转发:```net.ipv4.ip_forward = 1```然后使用以下命令使配置生效:```sudo sysctl -p```2. 配置LVS服务:编辑`/etc/ipvsadm.conf`文件,添加以下内容来配置LVS服务:```# 清除旧的配置sudo ipvsadm --clear# 添加LVS虚拟服务sudo ipvsadm -A -t <虚拟服务IP>:<端口> -s <调度算法>```这里需要将`<虚拟服务IP>`和`<端口>`替换为实际的虚拟服务IP和端口,`<调度算法>`可以选择使用的调度算法,例如`rr`表示使用轮询(Round Robin)算法。
3. 添加后端服务器:使用以下命令将后端服务器添加至LVS服务中:```sudo ipvsadm -a -t <虚拟服务IP>:<端口> -r <后端服务器IP>:<端口> -g```这里需要将`<虚拟服务IP>`和`<端口>`替换为实际的虚拟服务IP和端口,`<后端服务器IP>`和`<端口>`替换为实际的后端服务器IP和端口。
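把上面几步串起来，一个最小的可运行示例如下（IP 均为示例值；注意 -g 对应的 DR 模式不支持端口映射，后端端口需与虚拟服务端口一致）：
```
# 创建虚拟服务,采用轮询(rr)调度
sudo ipvsadm -A -t 192.168.1.100:80 -s rr
# 以 DR 模式加入两台后端服务器
sudo ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.11:80 -g
sudo ipvsadm -a -t 192.168.1.100:80 -r 192.168.1.12:80 -g
# 查看规则与转发统计
sudo ipvsadm -L -n
sudo ipvsadm -L -n --stats
```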
REDIS+KEEPALIVED+HAPROXY 集群,负载均衡,主备自动切换安装手册
REDIS集群,KEEPALIVED+HAPROXY负载均衡,主备自动切换安装手册服务器环境:centos6.3机器1:redis主节点(172.16.8.21:6379)从节点(172.16.8.21:63791)从节点(172.16.8.21:63792)机器2:从节点172.16.8.22:63793从节点172.16.8.23:63794一、REDIS集群安装进入机器1:mkdir /usr/local/redismkdir /usr/local/redis/data1 下载redis,进入/usr/local/src目录2 wget http://download.redis.io/releases/redis-2.8.9.tar.gz3 tar xvf redis-2.8.9.tar.gz4 cd redis-2.8.95 make && make install6 cp redis.conf /usr/local/bin7 cd /usr/local/bin8 cp redis.conf redis-slave19 cp redis.conf redis-slave210 vi redis.conf11 搜索dind设为172.16.8.21dir 设为/usr/local/redis/datadaemonize 设为yes12 vi redis-slave113 搜索bind 设为172.16.8.21pidfile 设为/var/run/redis-slave1.piddbfilename设为dump-slave1.rdbdir设为/usr/local/redis/dataport 设为63791将slaveof前面的#号去掉,改为slaveof 172.16.8.21 6379 daemonize 设为yes14 vi redis-slave215 搜索bind 设为172.16.8.21pidfile 设为/var/run/redis-slave2.piddbfilename设为dump-slave2.rdbdir设为/usr/local/redis/dataport 设为63792将slaveof前面的#号去掉,改为slaveof 172.16.8.21 6379 daemonize 设为yes16redis-server redis.confredis-server redis-slave1redis-server redis-slave2进入机器2mkdir /usr/local/redismkdir /usr/local/redis/data1 下载redis,进入/usr/local/src目录2 wget http://download.redis.io/releases/redis-2.8.9.tar.gz3 tar xvf redis-2.8.9.tar.gz4 cd redis-2.8.95 make && make install6 cp redis.conf /usr/local/bin7 cd /usr/local/bin8 cp redis.conf redis-slave39 cp redis.conf redis-slave410 vi redis-slave311 搜索bind 设为172.16.8.22pidfile 设为/var/run/redis-slave3.piddbfilename设为dump-slave3.rdbdir设为/usr/local/redis/dataport 设为63793将slaveof前面的#号去掉,改为slaveof 172.16.8.21 6379daemonize 设为yes12 vi redis-slave413 搜索bind 设为172.16.8.22pidfile 设为/var/run/redis-slave4.piddbfilename设为dump-slave4.rdbdir设为/usr/local/redis/dataport 设为63794将slaveof前面的#号去掉,改为slaveof 172.16.8.21 6379daemonize 设为yes14 redis-server redis-slave315 redis-server redis-slave4到现在已完成redis集群配置,且只有172.16.8.21 6379可写数据,其余slave机器只能读数据redis-cli -h 172.16.8.21 -p 6379info可以看到这样子就是成功了二、安装haproxy进入机器1cd /usr/loca/srcwget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.25.tar.gztar xvf http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.25.tar.gzcd haproxy-1.4.25make TARGET=linux26 PREFIX=/usr/local/haproxymake install PREFIX=/usr/local/haproxycd /usr/local/haproxyvi haproxy.cfg添加下面内容# this config needs haproxy-1.1.28 or haproxy-1.2.1globallog 172.16.8.21 local0#log 172.16.8.21 local1 notice#log loghost local0 infomaxconn 4096chroot /usr/local/haproxyuid 99gid 99daemon#debugquietnbproc 2pidfile /usr/local/haproxy/haproxy.piddefaultslog globalmode httpoption httplogoption dontlognulllog 172.16.8.21 local3 inforetries 3redispatchmaxconn 3000contimeout 5000clitimeout 50000srvtimeout 50000listen cluster 0.0.0.0:63790mode tcpbalance roundrobinoption forwardforserver redis-slave1 172.16.8.21:63791 weight 100 check inter 2000 rise 2 fall 3 server redis-slave2 172.16.8.21:63792 weight 100 check inter 2000 rise 2 fall 3 server redis-slave3 172.16.8.22:63793 weight 100 check inter 2000 rise 2 fall 3 server redis-slave4 172.16.8.22:63794 weight 100 check inter 2000 rise 2 fall 3listen 172.16.8.21 *:8888mode http#transparentstats refresh 10sstats uri /haproxyadminstats realm Haproxy \ statisticstats auth admin:adminstats hide-version保存加上日志支持 vi /etc/rsyslog.conf在最下边增加local3.* /var/log/haproxy.loglocal0.* /var/log/haproxy.logvi /etc/sysconfig/rsyslog修改: SYSLOGD_OPTIONS="-r -m 0"重启日志服务service rsyslog restart进入机器2cd /usr/loca/srcwget http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.25.tar.gz tar xvf http://haproxy.1wt.eu/download/1.4/src/haproxy-1.4.25.tar.gz cd haproxy-1.4.25make TARGET=linux26 
PREFIX=/usr/local/haproxymake install PREFIX=/usr/local/haproxycd /usr/local/haproxyvi haproxy.cfg添加下面内容# this config needs haproxy-1.1.28 or haproxy-1.2.1globallog 172.16.8.22 local0#log 172.16.8.22 local1 notice#log loghost local0 infomaxconn 4096chroot /usr/local/haproxyuid 99gid 99daemon#debugquietnbproc 2pidfile /usr/local/haproxy/haproxy.piddefaultslog globalmode httpoption httplogoption dontlognulllog 172.16.8.22 local3 inforetries 3redispatchmaxconn 3000contimeout 5000clitimeout 50000srvtimeout 50000listen cluster 0.0.0.0:63790mode tcpbalance roundrobinoption forwardforserver redis-slave1 172.16.8.21:63791 weight 100 check inter 2000 rise 2 fall 3 server redis-slave2 172.16.8.21:63792 weight 100 check inter 2000 rise 2 fall 3 server redis-slave3 172.16.8.22:63793 weight 100 check inter 2000 rise 2 fall 3 server redis-slave4 172.16.8.22:63794 weight 100 check inter 2000 rise 2 fall 3 listen 172.16.8.22*:8888mode http#transparentstats refresh 10sstats uri /haproxyadminstats realm Haproxy \ statisticstats auth admin:adminstats hide-version保存加上日志支持 vi /etc/rsyslog.conf在最下边增加local3.* /var/log/haproxy.loglocal0.* /var/log/haproxy.logvi /etc/sysconfig/rsyslog修改: SYSLOGD_OPTIONS="-r -m 0"重启日志服务service rsyslog restart三、安装keepalived进入机器1cd /usr/local/srcwget /software/keepalived-1.2.12.tar.gztar xvf keepalived-1.2.12.tar.gzcd keepalived-1.2.12./configuremake&&make install注:若这里报错提示没有装openssl,则执行yum –y install openssl-devel安装,若还有其他的包没装,则执行yum命令进行安装cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/mkdir /etc/keepalivedcp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/ln -s /usr/local/sbin/keepalived /usr/sbin/vi /etc/keepalived/keepalived.conf将内容改为如下! Configuration File for keepalivedvrrp_script chk_haproxy {script "/etc/keepalived/check_haproxy.sh"interval 2global_defs {router_id LVS_DEVEL}vrrp_instance VI_1 {state MASTERinterface eth0virtual_router_id 51priority 150advert_int 1authentication {auth_type PASSauth_pass 1111}track_script {chk_haproxy}virtual_ipaddress {172.16.8.20}}}保存vi /etc/keepalived/check_haproxy.sh添加内容#!/bin/bash#A = `ps -C haproxy --no-header |wc -l`if [[ `ps -C haproxy --no-header |wc -l` -eq 0 ]];thenecho "haproxy not runing,attempt to start up."/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfgsleep 3if [[ `ps -C haproxy --no-header |wc -l` -eq 0 ]];then/etc/init.d/keepalived stopecho "haproxy start failure,stop keepalived"elseecho "haproxy started success"fifi注意`这个符号不是单引号,是esc下面那个键进入机器2cd /usr/local/srcwget /software/keepalived-1.2.12.tar.gztar xvf keepalived-1.2.12.tar.gzcd keepalived-1.2.12./configuremake&&make install注:若这里报错提示没有装openssl,则执行yum –y install openssl-devel安装,若还有其他的包没装,则执行yum命令进行安装cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/ mkdir /etc/keepalivedcp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/ ln -s /usr/local/sbin/keepalived /usr/sbin/vi /etc/keepalived/keepalived.conf将内容改为如下! 
Configuration File for keepalivedvrrp_script chk_haproxy {script "/etc/keepalived/check_haproxy.sh"interval 2global_defs {router_id LVS_DEVEL}vrrp_instance VI_1 {state BACKUPinterface eth0virtual_router_id 51priority 120advert_int 1authentication {auth_type PASSauth_pass 1111}track_script {chk_haproxy}virtual_ipaddress {172.16.8.20}}}保存vi /etc/keepalived/check_haproxy.sh添加内容#!/bin/bash#A = `ps -C haproxy --no-header |wc -l`if [[ `ps -C haproxy --no-header |wc -l` -eq 0 ]];thenecho "haproxy not runing,attempt to start up."/usr/local/haproxy/sbin/haproxy -f /usr/local/haproxy/haproxy.cfgsleep 3if [[ `ps -C haproxy --no-header |wc -l` -eq 0 ]];then/etc/init.d/keepalived stopecho "haproxy start failure,stop keepalived"elseecho "haproxy started success"fifi注意`这个符号不是单引号,是esc下面那个键保存在机器1机器2中分别执行service keepalived start然后在浏览器上打开http://172.16.8.20:8888/haproxyadmin用户名和密码是admin只要机器1和机器2中keepalived服务没有同时挂掉,一台机器挂掉后,另一台机器就会绑定172.16.8.20地址,实现主备切换,因此都可以通过172.16.8.20:63790访问该redis 集群Vip压力测试redis-benchmark -h 172.16.8.20 -p 63790 -t get -q -r 1000 -n 100000 -c 800主机器压力测试redis-benchmark -h 172.16.8.21 -p 6379 -t get -q -r 1000 -n 100000 -c 800从节点压力测试redis-benchmark -h 172.16.8.22 -p 63793 -t get -q -r 1000 -n 100000 -c 800本文参考于百度文库地址/link?url=Wd0Z2arJ4wdspy7jw9O1mGZCy2e5GiO4hCIv36 QxoOtNGcFOMG8rPpegmRH_z72Ejc-KAP9Ld2Aieo7DPgmC_b1bXB2BZVSKPTXsoz BNNYi。
Keepalived配置详解
Keepalived 配置文件解释
Keepalived的所有配置都在一个配置文件里面，主要分为三类：全局配置、VRRPD配置、LVS配置。配置文件是以配置块的形式存在，每个配置块都在一个闭合的{}范围内，所以编辑的时候需要注意大括号的闭合问题。
#和!开头都是注释。
全局配置
全局配置是对整个 Keepalived 生效的配置，一个典型的配置如下：
global_defs {
    notification_email {        #设置 keepalived 在发生事件(比如切换)的时候,需要发送到的email地址,可以设置多个,每行一个。
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc    #设置通知邮件发送来自于哪里,如果本地开启了sendmail的话,可以使用上面的默认值。
    smtp_server 192.168.200.1     #指定发送邮件的smtp服务器。
    smtp_connect_timeout 30       #设置smtp连接超时时间,单位为秒。
    router_id LVS_DEVEL           #是运行keepalived的一个标识,多个集群设置不同。
}
VRRPD配置
VRRPD 的配置是 Keepalived 比较重要的配置，主要分为两个部分：VRRP 同步组和 VRRP 实例。也就是说，想要使用 VRRP 进行高可用选举，就一定需要配置一个VRRP实例，在实例中来定义 VIP、服务器角色等。
VRRP Sync Groups
不使用Sync Group的话，如果机器（或者说router）有两个网段，一个内网一个外网，每个网段开启一个VRRP实例，假设VRRP配置为检查内网，那么当外网出现问题时，VRRPD认为自己仍然健康，不会发生Master和Backup的切换，从而导致了问题。
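针对上面内外网双实例的场景，可以用 vrrp_sync_group 把两个实例编成一个同步组，任一实例发生状态切换时另一个会跟着切换（实例名为示例，需与实际定义的 vrrp_instance 名称一致）：
```
vrrp_sync_group VG_1 {
    group {
        VI_LAN    # 内网实例
        VI_WAN    # 外网实例
    }
}
```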
keepalived iptables规则
Keepalived是一个用于实现高可用性和负载均衡（常与LVS配合）的软件，而iptables是一个用于配置Linux防火墙规则的工具。
在使用Keepalived来实现负载均衡和故障转移时,我们可以通过Iptables来增强网络安全性,以保护服务器免受恶意攻击。
为了确保Keepalived正常运行,我们需要在服务器上设置一些Iptables规则。
这些规则主要包括:1. 开启必要的端口:Keepalived使用一些特定的端口进行通信,因此我们需要在防火墙上开放这些端口。
一般来说,这些端口包括VIP(Virtual IP)、VRRP (Virtual Router Redundancy Protocol)和其他用于监控和通信的端口。
举例来说，如果Keepalived使用VIP 192.168.0.1，而VRRP对应的是IP协议号112（不是TCP/UDP端口），我们可以使用以下命令将相应规则添加到iptables中：
```
iptables -A INPUT -p vrrp -j ACCEPT
iptables -A INPUT -d 192.168.0.1 -j ACCEPT
```
2. 禁止非授权的访问:为了防止未授权的访问,我们需要限制对Keepalived服务的访问。
只有特定IP地址的请求才应该被允许通过,而其他请求应该被丢弃。
举例来说,假设我们只允许IP地址为192.168.0.2和192.168.0.3的主机进行访问,其他IP地址的请求应该被拒绝。
我们可以使用以下命令设置相应的规则:```iptables -A INPUT -p tcp -s 192.168.0.2 -j ACCEPTiptables -A INPUT -p tcp -s 192.168.0.3 -j ACCEPTiptables -A INPUT -p tcp -j DROP```3. Log记录:为了方便排查问题和监控网络活动,我们可以对Keepalived的网络流量进行日志记录。
mysql mha keepalive vip安装配置
MySQL+MHA+Keepalived+VIP 安装配置。在MySQL复制环境中，当主服务器崩溃时，利用MHA实现主服务器的自动切换，并使其他从服务器指向新的主服务器。
下面是部署步骤(1)准备三机器:主服务192.168.8.120,备主192.168.8.121 ,从服务和管理节点192.168.8.122(2)修改各台主机名如管理节点192.168.8.122cat /etc/hosts[root@centos3 mha]# more /etc/hosts127.0.0.1 localhost192.168.8.120 centos1192.168.8.121 centos2192.168.8.122 centos3(3)数据节点安装mha4mysql-node-0.53.tar.gzmha4mysql-manager-0.53.tar.gz,由于mha4mysql-node 依赖perl-DBD-MySQL,mha4mysql-manager依赖perl-Config-Tiny perl-Params-Validate perl-Log-Dispatch perl-Parallel-ForkManager 。
所以现在这些依赖包。
实验使用yum 安装。
对三台mariadb数据节点只需安装mha4mysql-node-0.53.tar.gz ,本文没有写mariadb的安装以及复制。
[root@centos1mha]#rpm -ivh/pub/epel/5/i386/epel-release-5-4 .noarch.rpm[root@centos1mha]#yum -y install perl-DBD-MySQLncftp[root@centos1mha]#tar -zxfmha4mysql-node-0.53.tar.gz[root@centos1mha]# cdmha4mysql-node-0.53[root@centos1mha]#perl Makefile.PL[root@centos1mha]#make && make install(4)管理节点[root@sh-gs-dbmg0227 ~]# rpm -ivh/pub/epel/5/i386/epel-release-5-4 .noarch.rpm //这个是centos5.x 如果是6.x rpm -ivh /pub/epel/6/i386/epel-release-6-8 .noarch.rpm[root@centos3 mha]# yum -y installperl-DBD-MySQL ncftp[root@centos3 mha]# tar -zxfmha4mysql-node-0.53.tar.gz[root@centos3 mha]# cd mha4mysql-node-0.53[root@centos3 mha]#perl Makefile.PL[root@centos3 mha]#make && make install[root@centos3 mha]#yum -y install perl-Config-Tiny perl-Params-Validate perl-Log-Dispatch perl-Parallel-ForkManagerperl-Config-IniFiles[root@centos3 mha]# tar -zxfmha4mysql-manager-0.53.tar.gz[root@centos3 mha]#perl Makefile.PL如果在该过程中出现下面错误Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc/usr/local/lib64/perl5 /usr/local/share/perl5/usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) atinc/Module/Install/Can.pmline 6.解决方法:yum installperl-CPAN[root@centos3 mha]#make && make install [root@centos3mha]# mkdir/etc/masterha[root@centos3mha]# mkdir -p/master/app1[root@centos3mha]# mkdir -p/scripts[root@centos3mha]# cp samples/conf/*/etc/masterha/[root@centos3mha]# cpsamples/scripts/*/scripts配置管理节点[root@centos3mha]#more/etc/masterha/masterha_f [server default]user=rootpassword=123456ssh_user=rootrepl_user=replrepl_password=qwertmaster_binlog_dir= /app/mysqlremote_workdir=/app/mhasecondary_check_script= masterha_secondary_check -s192.168.12.234-s192.168.12.232ping_interval=1master_ip_failover_script=/scripts/master_ip_failover #shutdown_script=/scripts/power_managerreport_script= /scripts/send_reportmaster_ip_online_change_script=/scripts/master_ip_online_change[root@centos3mha]# more/etc/masterha/f[server default]manager_workdir=/app/mhamanager_log=/app/mha/manager.log[server1] hostname=192.168.8.120candidate_master=1[server2]hostname=192.168.8.121candidate_master=1[server3]hostname=192.168.8.122no_master=1(5)在mysql 添加用户,复制设置mysql 主节点grant replication slave on *.* to 'repl'@'192.168.8.%' identifiedby 'qwert';grant all on *.* to 'root'@'192.168.8.122' identified by '123456';备主节点grant replication slave on *.* to 'repl'@'192.168.8.%' identifiedby 'qwert';grant all on *.* to 'root'@'192.168.8122' identified by '123456';set read_only=1set relay_log_purge=0从节点grant all on *.* to 'root'@'192.168.8.122' identified by '123456';set read_only=1set relay_log_purge=0(6)配置ssh[[root@centos3~#ssh-keygen -t rsa[root@centos3~]# ssh-copy-id -i.ssh/id_rsa.pub root@192.168.8.120[root@centos3~]#ssh-copy-id -i .ssh/id_rsa.pub root@192.168.8.121 [root@centos1~]# ssh-keygen -t rsa[root@centos1~]# ssh-copy-id -i.ssh/id_rsa.pub root@192.168.8.121[root@centos1~]# ssh-copy-id -i.ssh/id_rsa.pub root@192.168.8.122[root@centos2~]# ssh-keygen -trsa[root@centos2~]# ssh-copy-id -i.ssh/id_rsa.pub root@192.168.8.120[root@centos2~]#ssh-copy-id -i .ssh/id_rsa.pub root@192.168.8.122(7)测试ssh[root@centos3 mha]#masterha_check_ssh--global_conf=/etc/masterha/masterha_f--conf=/etc/masterha/fSat Aug 10 06:15:39 2013 - [info] Reading default configuratoinsfrom /etc/masterha/masterha_f..Sat Aug 10 06:15:39 2013 - [info] Reading application defaultconfigurations from /etc/masterha/f..Sat Aug 10 06:15:39 2013 - [info] Reading server configurationsfrom /etc/masterha/f..Sat Aug 10 06:15:39 2013 - [info] Starting SSH 
connectiontests..Sat Aug 10 06:15:42 2013 - [debug]Sat Aug 10 06:15:39 2013 - [debug] Connecting via SSH from root@192.168.8.120(192.168.8.120:22) to root@192.168.8.121(192.168.8.121:22)..Sat Aug 10 06:15:41 2013 -[debug] ok.Sat Aug 10 06:15:41 2013 - [debug] Connecting via SSH from root@192.168.8.120(192.168.8.120:22) to root@192.168.8.122(192.168.8.122:22)..Sat Aug 10 06:15:42 2013 -[debug] ok.Sat Aug 10 06:15:43 2013 - [debug]Sat Aug 10 06:15:40 2013 - [debug] Connecting via SSH from root@192.168.8.121(192.168.8.121:22) toroot@192.168.8.120(192.168.8.120:22)..Sat Aug 10 06:15:41 2013 -[debug] ok.Sat Aug 10 06:15:41 2013 - [debug] Connecting via SSH from root@192.168.8.121(192.168.8.121:22) to root@192.168.8.122(192.168.8.122:22)..Sat Aug 10 06:15:43 2013 -[debug] ok.Sat Aug 10 06:15:44 2013 - [debug]Sat Aug 10 06:15:40 2013 - [debug] Connecting via SSH from root@192.168.8.122(192.168.8.122:22) to root@192.168.8.120(192.168.8.120:22)..Sat Aug 10 06:15:42 2013 -[debug] ok.Sat Aug 10 06:15:42 2013 - [debug] Connecting viaSSH from root@192.168.8.122(192.168.8.122:22) toroot@192.168.8.121(192.168.8.121:22)..Sat Aug 10 06:15:44 2013 -[debug] ok.Sat Aug 10 06:15:44 2013 - [info] All SSH connection tests passedsuccessfully.(8)测试复制[root@centos3 mha]# masterha_check_repl--global_conf=/etc/masterha/masterha_f--conf=/etc/masterha/fSat Aug 10 06:26:13 2013 - [info] Reading default configuratoinsfrom /etc/masterha/masterha_f..Sat Aug 10 06:26:13 2013 - [info] Reading application defaultconfigurations from /etc/masterha/f..Sat Aug 10 06:26:13 2013 - [info] Reading server configurationsfrom /etc/masterha/f..Sat Aug 10 06:26:13 2013 - [info] MHA::MasterMonitor version0.53.Sat Aug 10 06:26:13 2013 - [info] Dead Servers:Sat Aug 10 06:26:13 2013 - [info] Alive Servers:Sat Aug 10 06:26:13 2013 -[info]192.168.8.120(192.168.8.120:3306)Sat Aug 10 06:26:13 2013 -[info]192.168.8.121(192.168.8.121:3306)Sat Aug 10 06:26:13 2013 -[info]192.168.8.122(192.168.8.122:3306)Sat Aug 10 06:26:13 2013 - [info] Alive Slaves:Sat Aug 10 06:26:13 2013 -[info]192.168.8.121(192.168.8.121:3306)Version=5.5.29-log (oldest major version between slaves) log-bin:enabledSat Aug 10 06:26:13 2013 -[info]Replicating from 192.168.8.120(192.168.8.120:3306)Sat Aug 10 06:26:13 2013 -[info]Primary candidate for the new Master (candidate_master is set)Sat Aug 10 06:26:13 2013 -[info]192.168.8.122(192.168.8.122:3306)Version=5.5.29-log (oldest major version between slaves) log-bin:enabledSat Aug 10 06:26:13 2013 -[info]Replicating from 192.168.8.120(192.168.8.120:3306)Sat Aug 10 06:26:13 2013 -[info]Not candidate for the new Master (no_master is set)Sat Aug 10 06:26:13 2013 - [info] Current Alive Master: 192.168.8.120(192.168.8.120:3306)Sat Aug 10 06:26:13 2013 - [info] Checking slave configurations..Sat Aug 10 06:26:13 2013 - [info] Checking replication filteringsettings..Sat Aug 10 06:26:13 2013 - [info] binlog_do_db= ,binlog_ignore_db=Sat Aug 10 06:26:13 2013 - [info] Replicationfiltering check ok.Sat Aug 10 06:26:13 2013 - [info] Starting SSH connection tests..Sat Aug 10 06:26:17 2013 - [info] All SSH connection tests passedsuccessfully.Sat Aug 10 06:26:17 2013 - [info] Checking MHA Node version..Sat Aug 10 06:26:18 2013 - [info] Version checkok.Sat Aug 10 06:26:18 2013 - [info] Checking SSH publickey authentication settings on the current master..Sat Aug 10 06:26:18 2013 - [info] HealthCheck: SSH to192.168.8.120is reachable.Sat Aug 10 06:26:18 2013 - [info] Master MHA Node version is0.53.Sat Aug 10 06:26:18 2013 - [info] Checking recovery 
script configurations on the current master..Sat Aug 10 06:26:18 2013 -[info] Executing command:save_binary_logs --command=test --start_pos=4--binlog_dir=/app/mysql--output_file=/app/mha/save_binary_logs_test--manager_version=0.53--start_file=mysql-bin.000010Sat Aug 10 06:26:18 2013 -[info] Connecting toroot@192.168.8.120(192.168.8.120)..Creating /app/mha if notexists..ok.Checking output directory is accessible ornot..ok.Binlog found at /app/mysql, up tomysql-bin.000010Sat Aug 10 06:26:18 2013 - [info] Master setting check done.Sat Aug 10 06:26:18 2013 - [info] Checking SSH publickey authentication and checking recovery script configurations on allalive slave servers..Sat Aug 10 06:26:18 2013 -[info] Executing command :apply_diff_relay_logs --command=test --slave_user=root --slave_host=192.168.8.121 --slave_ip=192.168.8.121 --slave_port=3306 --workdir=/app/mha--target_version=5.5.29-log--manager_version=0.53--relay_log_info=/app/mysql/--relay_dir=/app/mysql/ --slave_pass=xxxSat Aug 10 06:26:18 2013 -[info] Connecting toroot@192.168.8.121(192.168.8.121:22)..Checking slave recovery environmentsettings..Opening/app/mysql/ ... ok.Relay logfound at /app/mysql, up to mysql-relay-bin.000004 Temporaryrelay log file is /app/mysql/mysql-relay-bin.000004Testingmysql connection and privileges.. done.Testingmysqlbinlog output.. done.Cleaning uptest file(s).. done.Sat Aug 10 06:26:19 2013 -[info] Executing command :apply_diff_relay_logs --command=test --slave_user=root --slave_host=192.168.8.122 --slave_ip=192.168.8.122 --slave_port=3306 --workdir=/app/mha--target_version=5.5.29-log--manager_version=0.53--relay_log_info=/app/mysql/--relay_dir=/app/mysql/ --slave_pass=xxxSat Aug 10 06:26:19 2013 -[info] Connecting toroot@192.168.8.122(192.168.8.122:22)..Checking slave recovery environment settings..Opening/app/mysql/ ... ok.Relay logfound at /app/mysql, up to mysql-relay-bin.000004Temporaryrelay log file is /app/mysql/mysql-relay-bin.000004Testingmysql connection and privileges.. done.Testingmysqlbinlog output.. done.Cleaning uptest file(s).. 
done.Sat Aug 10 06:26:20 2013 - [info] Slaves settings check done.Sat Aug 10 06:26:20 2013 - [info]192.168.8.120 (current master)+--192.168.8.121+--192.168.8.122Sat Aug 10 06:26:20 2013 - [info] Checking replication health on192.168.8.121..Sat Aug 10 06:26:20 2013 - [info] ok.Sat Aug 10 06:26:20 2013 - [info] Checking replication health on192.168.8.122..Sat Aug 10 06:26:20 2013 - [info] ok.Sat Aug 10 06:26:20 2013 - [info] Checkingmaster_ip_failover_script status:Sat Aug 10 06:26:20 2013 -[info]/scripts/master_ip_failover --command=status--ssh_user=root--orig_master_host=192.168.8.120--orig_master_ip=192.168.8.120--orig_master_port=3306Sat Aug 10 06:26:20 2013 - [info] OK.Sat Aug 10 06:26:20 2013 - [warning] shutdown_script is notdefined.Sat Aug 10 06:26:20 2013 - [info] Got exit code 0 (Not masterdead).MySQL Replication Health is OK.(9)启动management[root@centos3 mysql]# nohup masterha_manager--global-conf=/etc/masterha/masterha_f--conf=/etc/masterha/fSat Aug 10 06:29:36 2013 - [info] MHA::MasterMonitor version0.53.Sat Aug 10 06:29:37 2013 - [info] Dead Servers:Sat Aug 10 06:29:37 2013 - [info] Alive Servers:Sat Aug 10 06:29:37 2013 -[info]192.168.8.120(192.168.8.120:3306)Sat Aug 10 06:29:37 2013 -[info]192.168.8.121(192.168.8.121:3306)Sat Aug 10 06:29:37 2013 -[info]192.168.8.122(192.168.8.122:3306)Sat Aug 10 06:29:37 2013 - [info] Alive Slaves:Sat Aug 10 06:29:37 2013 -[info]192.168.8.121(192.168.8.121:3306)Version=5.5.29-log (oldest major version between slaves) log-bin:enabledSat Aug 10 06:29:37 2013 -[info]Replicating from 192.168.8.120(192.168.8.120:3306)Sat Aug 10 06:29:37 2013 -[info]Primary candidate for the new Master (candidate_master is set)Sat Aug 10 06:29:37 2013 -[info]192.168.8.122(192.168.8.122:3306)Version=5.5.29-log (oldest major version between slaves) log-bin:enabledSat Aug 10 06:29:37 2013 -[info]Replicating from 192.168.8.120(192.168.8.120:3306)Sat Aug 10 06:29:37 2013 -[info]Not candidate for the new Master (no_master is set)Sat Aug 10 06:29:37 2013 - [info] Current Alive Master: 192.168.8.120(192.168.8.120:3306)Sat Aug 10 06:29:37 2013 - [info] Checking slave configurations..Sat Aug 10 06:29:37 2013 - [info] Checking replication filteringsettings..Sat Aug 10 06:29:37 2013 - [info] binlog_do_db= , binlog_ignore_db=Sat Aug 10 06:29:37 2013 - [info] Replicationfiltering check ok.Sat Aug 10 06:29:37 2013 - [info] Starting SSH connection tests..Sat Aug 10 06:29:40 2013 - [info] All SSH connection tests passedsuccessfully.Sat Aug 10 06:29:40 2013 - [info] Checking MHA Node version..Sat Aug 10 06:29:41 2013 - [info] Version checkok.Sat Aug 10 06:29:41 2013 - [info] Checking SSH publickey authentication settings on the current master..Sat Aug。
keepalived编译安装配置自启动
Centos配置Keepalived 做双机热备切换分类:网站架构2009-07-25 13:53 7823人阅读评论(0) 收藏举报centosserverdelayauthenticationsnscompilerKeepalived系统环境:************************************************************两台服务器都装了CentOS-5.2-x86_64系统Virtual IP: 192.168.30.20Squid1+Real Server 1:网卡地址(eth0):192.168.30.12Squid2+Real Server 2:网卡地址(eth0):192.168.30.13************************************************************软件列表:keepalived/software/keepalived-1.1.17.tar.gzopenssl-develyum -y install openssl-devel***************************************************************配置:配置基于高可用keepalived,确定LVS使用DR模式1.安装配置keepalived1.1安装依赖软件如果系统为基本文本安装,需要安装一下软件# yum -y install ipvsadm# yum -y install kernel kernel-devel# reboot 重启系统切换内核# yum -y install openssl-devel ;安装keepalived依赖软件#ln -s /usr/src/kernels/`uname -r`-`uname -m`/ /usr/src/linux;建立内核链接,编译keepalived支持lvs时需要注意建立链接的内核名和当前运行的内核一致,否则导致安装失败#tar zxvf keepalived-1.1.17.tar.gz#cd keepalived-1.1.17#./configure --prefix=/usr --sysconf=/etcKeepalived configuration------------------------Keepalived version : 1.1.17Compiler : gccCompiler flags : -g -O2Extra Lib : -lpopt -lssl -lcryptoUse IPVS Framework : Yes ;注意编译时一定要支持lvsIPVS sync daemon support : YesUse VRRP Framework : YesUse LinkWatch : NoUse Debug flags : No#make#make install1.2编辑keepalived配置文件#Vi /etc/keepalived/keepalived.confglobal_defs {notification_email {test@}notification_email_from root@localhostsmtp_server 127.0.0.1smtp_connect_timeout 30router_id LVS_DEVEL}vrrp_instance VI_1 {state MASTER #备份服务器上将MASTER改为BACKUPinterface eth0 # HA 监测网络接口virtual_router_id 51 #主、备机的virtual_router_id必须相同priority 100 #主、备机取不同的优先级,主机值较大,备份机值较小 advert_int 2 # VRRP Multicast 广播周期秒数authentication {auth_type PASS #VRRP 认证方式auth_pass 1111 #VRRP 口令字}virtual_ipaddress {192.168.30.20 # VRRP HA 虚拟地址如果有多个VIP,继续换行填写 }}virtual_server 192.168.30.20 80 {delay_loop 2 #每隔6秒查询realserver状态lb_algo rr #lvs 算法lb_kind DR #Direct Routepersistence_timeout 50 #同一IP 的连接60 秒内被分配到同一台realserverprotocol TCP #用TCP协议检查realserver状态real_server 192.168.30.12 80 {weight 3 #(权重)TCP_CHECK {connect_timeout 10 #(10秒无响应超时)nb_get_retry 3delay_before_retry 3connect_port 80}}real_server 192.168.30.13 80 {weight 3 #(权重)TCP_CHECK {connect_timeout 10 #(10秒无响应超时)nb_get_retry 3delay_before_retry 3connect_port 80}}}1.3BACKUP服务器同上配置,先安装lvs再按装keepalived,仍后配置/etc/keepalived/keepalived.conf,只需将红色标示的部分改一下即可.global_defs {notification_email {test@}notification_email_from root@localhostsmtp_server 127.0.0.1smtp_connect_timeout 30router_id LVS_DEVEL}vrrp_instance VI_1 {state BACKUP #备份服务器上将MASTER改为BACKUPinterface eth0 # HA 监测网络接口virtual_router_id 51 #主、备机的virtual_router_id必须相同priority 99 #主、备机取不同的优先级,主机值较大,备份机值较小 advert_int 2 # VRRP Multicast 广播周期秒数authentication {auth_type PASS #VRRP 认证方式auth_pass 1111 #VRRP 口令字}virtual_ipaddress {192.168.30.20 # VRRP HA 虚拟地址}}virtual_server 192.168.30.20 80 {delay_loop 2 #每隔6秒查询realserver状态lb_algo rr #lvs 算法lb_kind DR #Direct Routepersistence_timeout 50 #同一IP 的连接60 秒内被分配到同一台realserver protocol TCP #用TCP协议检查realserver状态real_server 192.168.30.12 80 {weight 3 #(权重)TCP_CHECK {connect_timeout 10 #(10秒无响应超时)nb_get_retry 3delay_before_retry 3connect_port 80}}real_server 192.168.30.13 80 {weight 3 #(权重)TCP_CHECK {connect_timeout 10 #(10秒无响应超时)nb_get_retry 3delay_before_retry 3connect_port 80}}}2. 配置lvs客户端脚本[c-sharp]view plaincopyprint?1.#vi /usr/local/sbin/realserver.sh2. #!/bin/bash3. # description: Config realserver lo and apply noarp4. #Written by :NetSeek 5.6. SNS_VIP=192.168.30.207.8. . /etc/rc.d/init.d/functions9.10.case"$1"in11. start)12. 
ifconfig lo:0 $SNS_VIP netmask 255.255.255.255 broadcast $SNS_VIP13. /sbin/route add -host $SNS_VIP dev lo:014. echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore15. echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce16. echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore17. echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce18. sysctl -p >/dev/null 2>&119. echo "RealServer Start OK"20.21. ;;22. stop)23. ifconfig lo:0 down24. route del $SNS_VIP >/dev/null 2>&125. echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore26. echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce27. echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore28. echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce29. echo "RealServer Stoped"30. ;;31. *)32. echo "Usage: $0 {start|stop}"33. exit 134. esac35.36. exit 0或者采用secondary ip address方式配置# vi /etc/sysctl.confnet.ipv4.conf.lo.arp_ignore = 1net.ipv4.conf.lo.arp_announce = 2net.ipv4.conf.all.arp_ignore = 1net.ipv4.conf.all.arp_announce = 2#sysctl -p#ip addr add 192.168.30.20/32 dev lo#ip add list 查看是否绑定3. 启动lvs-dr脚本和realserver启本,在DR上可以查看LVS当前状态: #watch ipvsadm -ln4. 启动keepalived 服务#service httpd start#/etc/init.d/keepalived start查看keepalived的安装位置:# find / -name keepalived将keepalived作为服务添加到chkconfig中,并设置开机启动# chkconfig --add keepalived# chkconfig --level 35 keepalived on# chkconfig --list keepalivedkeepalived 0:关闭 1:关闭 2:关闭 3:启用 4:关闭 5:启用 6:关闭“各等级”包括2、3、4、5等级等级0表示:表示关机等级1表示:单用户模式等级2表示:无网络连接的多用户命令行模式等级3表示:有网络连接的多用户命令行模式等级4表示:不可用等级5表示:带图形界面的多用户模式等级6表示:重新启动5. 测试lvs服务是否正常5.1通过浏览器访问http://192.168.30.20是否正常多次刷新浏览器,在主LVS上看连接数变化5.2停止主LVS上的keepalived 看看备份keepalived是否正常接管服务注:realserver如果为windows主机的话需要安装microsoft loopback,设置IP为VIP确认之后搜索注册表,关键字"VIP"把搜到结果的每项里面的subnet mask都改成255.255.255.255然后重启即可。
keepalived 虚拟ip原理
keepalived是一个开源的网络高可用性解决方案，其原理可以简单概括为以下几个步骤：
1. 客户端请求连接到keepalived虚拟IP地址（VIP）。
2. 在keepalived的负载均衡器（LVS）上，VIP与实际服务器之间建立一条透明的网络链路。
3. keepalived使用多种算法（如轮询、加权轮询、源IP散列等）将客户端请求转发到实际服务器上进行处理。
4. keepalived通过与实际服务器之间的心跳检测，实时监测服务器的状态。
5. 如果某个服务器宕机或无响应，keepalived会自动将该服务器从服务列表中移除，确保请求不会被转发到故障服务器。
6. 同时，keepalived会将VIP重新分配给其他正常工作的服务器，确保服务的连续性。
7. 一旦故障服务器恢复正常，keepalived会将其重新加入到服务列表中，并根据算法再次进行负载均衡。
总的来说，keepalived通过建立虚拟IP地址，并对实际服务器进行实时监控和负载均衡，实现了对服务的高可用性和可靠性。
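在实际环境中，可以用下面的命令观察 VIP 的绑定与漂移情况（网卡名 eth0 与 VIP 192.168.0.100 均为示例值）：
```
# 在当前 MASTER 上查看 VIP 是否已绑定
ip addr show eth0 | grep 192.168.0.100
# 抓取 VRRP 通告报文(VRRP 使用组播地址 224.0.0.18)
tcpdump -i eth0 -nn host 224.0.0.18
```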
linuxkeepalived配置参数详解
linuxkeepalived配置参数详解Keepalived是Linux上一款用于实现高可用服务的软件,它使用VRRP(虚拟路由冗余协议)来实现故障转移和负载均衡。
在配置Keepalived时,可以通过修改不同的参数来实现各种不同的功能。
下面将详细介绍一些常用的配置参数。
1. global_defs:该配置块用于定义全局参数，基本形式为 "global_defs { 参数 值 ... }"，每行一个参数。常用的参数包括router_id、notification_email、notification_email_from等。
2. vrrp_script:该选项用于定义一个检查脚本,用于检测服务是否正常工作。
语法格式为 "vrrp_script 脚本名 { script 脚本路径 }"。
可以使用该脚本来检查服务的健康状态,例如通过ping命令检查目标服务器的连通性。
3. vrrp_instance:该选项用于定义一个VRRP实例,语法格式为"vrrp_instance 实例名 { 参数1: 值1, 参数2: 值2, ... }"。
常用的参数包括 state、interface、virtual_router_id和priority等。
- state:该参数用于定义实例的状态,可以是MASTER或BACKUP。
MASTER是主节点的状态,BACKUP是备节点的状态。
- interface:该参数用于定义实例绑定的网络接口。
- virtual_router_id:该参数用于定义实例的虚拟路由器ID,该ID 在局域网中必须是唯一的。
- priority:该参数用于定义实例的优先级,优先级高的节点将成为MASTER节点。
- advert_int:该参数用于定义实例之间的心跳间隔,默认值为1秒。
- virtual_ipaddress:该参数用于配置实例的虚拟IP地址,可以配置多个虚拟IP。
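把上面的参数组合起来，一个最小的示例片段如下（网卡、IP、脚本路径均为示例值）：
```
vrrp_script chk_web {
    script "/etc/keepalived/chk_web.sh"   # 假设的检测脚本路径
    interval 5
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 60
    priority 100
    advert_int 1
    virtual_ipaddress {
        192.168.10.100
    }
    track_script {
        chk_web
    }
}
```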
keepalived 日志规则
keepalived 日志规则摘要:一、keepalived日志规则概述二、keepalived日志级别及意义三、keepalived日志配置方法四、keepalived日志查看与分析五、keepalived日志在故障排查中的应用六、总结与建议正文:keepalived是一款高性能的负载均衡器,广泛应用于服务器、网络设备等领域。
keepalived日志记录了keepalived组件在运行过程中的各种信息,对于故障排查、性能优化等方面具有重要的参考价值。
本文将详细介绍keepalived日志规则、日志级别及意义、日志配置方法,以及在故障排查中的应用。
一、keepalived日志规则概述keepalived日志遵循一定的规则进行记录,这些规则包括:1.日志级别:keepalived日志分为debug、info、warning、error、crit 五大级别,级别越高,日志信息重要性越高。
2.日志输出:keepalived日志默认输出到syslog,也可以自定义日志输出目标,如文件、网络服务器等。
3.日志时间格式:keepalived日志时间格式为“YYYY-MM-DDHH:MM:SS”。
4.日志条目格式:每条日志条目包括日志级别、时间、组件名称、日志信息等内容。
二、keepalived日志级别及意义1.debug:详细信息,用于调试程序。
2.info:一般性信息,表示keepalived组件正常运行。
3.warning:警告信息,提示可能存在的问题,需关注。
4.error:错误信息,表示keepalived组件运行出现故障。
5.crit:严重错误信息,严重影响keepalived组件正常运行。
三、keepalived日志配置方法1.修改配置文件:编辑keepalived的配置文件(如/etc/keepalived/keepalived.conf),设置日志相关参数,如日志级别、输出目标等。
2.修改日志级别:根据实际需求,调整各个组件的日志级别,使其更加符合故障排查和性能优化的需求。
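在 CentOS/RHEL 的打包方式下，一个常见的做法是通过启动参数指定 syslog 设施，再由 rsyslog 单独落盘（文件路径与设施号为示例值）：
```
# /etc/sysconfig/keepalived
KEEPALIVED_OPTIONS="-D -S 0"        # -D 输出详细日志,-S 0 使用 local0 设施

# /etc/rsyslog.conf 末尾追加,将 local0 写入独立文件
local0.*    /var/log/keepalived.log

# 重启服务使其生效
service rsyslog restart
service keepalived restart
```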
Keepalived配置文件
Keepalive配置⽂件# 默认配置⽂件[root@ct1 ~]# cat /etc/keepalived/keepalived.conf! Configuration File for keepalivedglobal_defs { # 全局配置notification_email { # 报警邮件地址acassen@firewall.locfailover@firewall.locsysadmin@firewall.loc}notification_email_from Alexandre.Cassen@firewall.loc # 指定发件⼈smtp_server 192.168.200.1 # 指定smtp服务器地址smtp_connect_timeout 30 # 指定连接smtp服务器超时时间router_id LVS_DEVEL # 虚拟路由标识符,发邮件时显⽰在标中的信息vrrp_skip_check_adv_addr # 跳过检查数据报⽂vrrp_strict # 严格遵循VRRP协议,vrrp_garp_interval 0 # ARP报⽂发送延迟时间vrrp_gna_interval 0 # 信息发送延迟时间}vrrp_script SCRIPT_NAME { # 定义⽤于实例执⾏的脚本}# vrrp⽰例配置部分vrrp_instance VI_1 { # VRRP实例配置模块,由{}包裹起来state MASTER # 定义⾓⾊,MASTER或BACKUPinterface eth0 # 绑定⽹卡设置virtual_router_id 51 # 虚拟路由ID,注意在同⼀个vrrp_instance中不能重复priority 100 # 优先级advert_int 1 # 探测时间间隔,⼼跳间隔,单位为秒,MASTER和BACKUP必须⼀致authentication { # 认证部分auth_type PASS # 认证⽅式,PASS或HAauth_pass 1111 # 密码,在⼀个vrrp_instance中MASTER和BACKUP的密码必须⼀致}virtual_ipaddress { # 设置虚拟IP地址,可设置多个。
使用LVS实现负载均衡原理及安装配置详解
二、LVS集群的类型
LVS转发模式有四种:
lvs-nat: 修改请求报文的目标IP
lvs-dr: 操纵封装新的MAC地址
lvs-tun: 在原请求IP报文之外新加一个IP首部
lvs-fullnat: 修改请求报文的源和目标IP
2.2 LVS/DR(Direct Routing直接路由)
通过为请求报文重新封装一个MAC首部进行转发，源MAC是DIP所在接口的MAC，目标MAC是挑选出的RS的RIP所在接口的MAC地址；源IP/PORT以及目标IP/PORT均保持不变，请求报文经过Director，但响应报文不再经过Director。
FULLNAT模式对入报文做了DNAT+SNAT，即将报文的目的地址改为RS的地址、源地址改为LVS设备地址，RS上不需要配置路由策略；出报文到了LVS设备上后做SNAT+DNAT，即将报文的源地址改为LVS设备上的地址、目的地址改为真实的用户地址。
LVS FULLNAT类型特性:
1. VIP是公网地址，RIP和DIP是私网地址，且通常不在同一IP网络，因此RIP的网关一般不会指向DIP
2. RS收到的请求报文源地址是DIP，因此只需响应给DIP，Director收到RS的回复报文后将其发往Client
3. 请求和响应报文都经由Director
4. 支持端口映射
LVS集群类型中的术语:
Director: 负载均衡器，也称VS(Virtual Server)
RS: 真实服务器(RealServer)
CIP: 客户端IP(Client IP)
VIP: Client所请求的、提供虚拟服务的IP，可以用Keepalived做高可用
DIP: 在Director上实现与RS通信的IP
RIP: RealServer IP
ipvsadm: 用户空间的命令行工具，规则管理器，用于管理集群服务及RealServer
ipvs: 工作于内核空间的netfilter的INPUT钩子之上的框架
三、LVS调度方法(Scheduler)
keepalived编译
要编译Keepalived，您可以按照以下步骤进行操作：
1. 确保您的系统上已安装了所需的依赖项。具体的依赖项列表可以在Keepalived的官方文档中找到。
2. 下载Keepalived的源代码包。您可以从Keepalived的官方网站或Git仓库获取最新的源代码包。
3. 解压源代码包。您可以使用以下命令将源代码包解压到您的系统上：
tar -zxvf keepalived-x.x.x.tar.gz
其中，x.x.x是您下载的源代码包的版本号。
4. 进入解压后的目录。使用以下命令进入解压后的目录：
cd keepalived-x.x.x
5. 执行配置脚本。运行以下命令执行配置脚本：
./configure
这将根据您的系统和环境进行自动配置。您可以在配置过程中使用--prefix参数指定Keepalived的安装目录。
6. 编译源代码。运行以下命令编译源代码：
make
这将编译Keepalived的源代码并生成可执行文件。
7. 安装可执行文件。运行以下命令将可执行文件安装到您的系统上：
make install
这将把Keepalived的可执行文件复制到指定的安装目录中。
完成上述步骤后，您应该已成功编译和安装了Keepalived。您可以使用Keepalived的命令行工具来配置和管理您的系统的高可用性解决方案。
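编译安装完成后，可以用下面的命令做一个快速验证（配置文件路径以实际安装为准，此处为常见默认值）：
```
# 查看版本信息
keepalived -v
# 指定配置文件启动,并输出较详细的日志
keepalived -f /etc/keepalived/keepalived.conf -D
# 确认进程已在运行
ps -ef | grep keepalived
```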
keepalive笔记之二:keepalive+nginx(自定义脚本实现,上述例子也可以实现)
keepalive的配置文件
! Configuration File for keepalived
global_defs {
    notification_email {
        acassen@firewall.loc
        failover@firewall.loc
        sysadmin@firewall.loc
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script check_80 {                 //定义vrrp脚本
    script '/root/check_code.py'       //脚本路径
    interval 2                         //脚本检测时间间隔,脚本必须在间隔时间内返回状态,不然日志报错
                                       //Keepalived_vrrp[7813]: Process [7894] didn't respond to SIGTERM
    weight -2                          //当脚本返回的状态码不是0时,操作权重
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
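上面的配置引用了自定义检测脚本 /root/check_code.py，其内容未随配置给出；下面是一个思路类似的 shell 示例脚本（检测本机 80 端口是否存活，脚本路径与检测方式均为假设值），以及在 vrrp_instance 中通过 track_script 引用检测脚本的常规写法：
```
#!/bin/bash
# /etc/keepalived/check_80.sh(示例):80 端口有进程监听则返回 0,否则返回 1
if netstat -lnt | grep -q ':80 '; then
    exit 0
else
    exit 1
fi
```
然后在 vrrp_instance VI_1 中加入：
```
    track_script {
        check_80
    }
```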
Keepalived源码安装
Keepalived源码安装1.编译、安装# tar -xvf keepalived-1.3.9.tar.gz# cd keepalived-1.3.9/# ./configure -prefix=/usr/local/keepalived-1.3.9# make# make install2.配置成服务cp /usr/local/keepalived-1.3.9/etc/sysconfig/keepalived /etc/sysconfig/3.拷贝配置⽂件# mkdir /etc/keepalived# cp /usr/local/keepalived-1.3.9/etc/keepalived/keepalived.conf /etc/keepalived/4.使⽤软连接或者直接拷贝执⾏⽂件# ln -s /usr/local/keepalived-1.3.9/sbin/keepalived /usr/sbin/或者# cp /usr/local/keepalived-1.3.9/sbin/keepalived /usr/sbin5.加⼊开机启动项# vi /etc/init.d/keepalived 脚本内容如下:#!/bin/sh## keepalived High Availability monitor built upon LVS and VRRP## chkconfig: - 86 14# description: Robust keepalive facility to the Linux Virtual Server project \# with multilayer TCP/IP stack checks.### BEGIN INIT INFO# Provides: keepalived# Required-Start: $local_fs $network $named $syslog# Required-Stop: $local_fs $network $named $syslog# Should-Start: smtpdaemon httpd# Should-Stop: smtpdaemon httpd# Default-Start:# Default-Stop: 0 1 2 3 4 5 6# Short-Description: High Availability monitor built upon LVS and VRRP# Description: Robust keepalive facility to the Linux Virtual Server# project with multilayer TCP/IP stack checks.### END INIT INFO# Source function library.. /etc/rc.d/init.d/functionsexec="/usr/sbin/keepalived"prog="keepalived"config="/etc/keepalived/keepalived.conf"[ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$proglockfile=/var/lock/subsys/keepalivedstart() {[ -x $exec ] || exit 5[ -e $config ] || exit 6echo -n $"Starting $prog: "daemon $exec $KEEPALIVED_OPTIONSretval=$?echo[ $retval -eq 0 ] && touch $lockfilereturn $retval}stop() {echo -n $"Stopping $prog: "killproc $progretval=$?echo[ $retval -eq 0 ] && rm -f $lockfilereturn $retval}restart() {stopstart}reload() {echo -n $"Reloading $prog: "killproc $prog -1retval=$?echoreturn $retval}force_reload() {restart}rh_status() {status $prog}rh_status_q() {rh_status &>/dev/null}case "$1" instart)rh_status_q && exit 0$1;;stop)rh_status_q || exit 0$1;;restart)$1;;reload)rh_status_q || exit 7$1;;force-reload)force_reload;;status)rh_status;;condrestart|try-restart)rh_status_q || exit 0restart;;*)echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2esacexit $?# chmod a+x /etc/init.d/keepalived。
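脚本就绪后，可按下面的常规方式注册为系统服务并设置开机自启（适用于使用 SysV init 的系统，如 CentOS 6）：
```
chkconfig --add keepalived
chkconfig keepalived on
service keepalived start
service keepalived status
```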
Lvs+keepalived 高可用性负载均衡配置前言* 随着互联网的发展,提供用户访问的web服务器,必须要保证每天24不间断服务,访问量不断增加,有什么好的web架构既能实现高可用性负载均衡,而且价格又是免费的呢?答案有木有?有!lvs+keepalived 是不错的选择!一、实验环境:4台centos 6.0 ,以及简单的拓扑图:LVS-Master 192.168.2.108LVS-BACKUP 192.168.2.109LVS-DR-VIP 192.168.2.100WEB1-Realserver 192.168.2.79WEB2-Realserver 192.168.2.80二、安装ipvsadm+keepalived,用脚本自动安装:由于我们使用的是lvs+keepalived,所以这里不需要配置lvs-dr脚本,直接在keepalived.conf里面配置即可!#!/bin/sh###脚本编写目的:自动安装lvs+keepalived###编写时间: 2011年11月25日10:27:59###初稿人:wugk###定义变量DIR1=/usr/srcDIR2=/usr/localcat << EOF++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ++++Welcome to use Linux installed a key LVS+KEEPALIVED shells scripts +++++++++++++++++++++*************************++++++++++++++++++++++++ EOFif[ $UID -ne 0 ];thenecho “This script must use root user ,please exit……”sleep 2exit 0fidownload (){cd $DIR1 && wget -c/software/kernel-2.6/ipvsadm-1.24.tar.gz/software/keepalived-1.1.15.tar.gzif[ $? = 0 ];thenecho "Download LVS Code is OK!"elseecho "Download LVS Code is failed,Please check!"exit 1fi}ipvsadm_install (){ln -s $DIR1/kernels/2.6.* $DIR1/linuxcd $DIR1 && tar xzvf ipvsadm-1.24.tar.gz &&cd ipvsadm-1.24 && make && make install if[ $? -eq 0 ];thenecho "Install ipvsadm success,please waiting install keepalived ..............." elseecho "Install ipvsadm failed ,please check !"exit 1fi}keepalived_install (){cd $DIR1 && tar -xzvf keepalived-1.1.15.tar.gz &&cd keepalived-1.1.15&& ./configure && make && make installif[ $? -eq 0 ];thenecho "Install keepalived success,please waiting configurekeepalived ..............."elseecho "Install keepalived failed ,please check install version !"exit 1fi}######如果以上软件包编译报错的话,请检查相关的版本跟系统版本之间的关系,然后手动下载安装.keepalived_config (){cp $DIR2/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/ && cp$DIR2/etc/sysconfig/keepalived /etc/sysconfig/ && mkdir -p /etc/keepalived &&cp $DIR2/etc/keepalived/keepalived.conf /etc/keepalived/ && cp $DIR2/sbin/keepalived /usr/sbin/if[ $? -eq 0 ];thenecho "Keepalived system server config success!"elseecho "Keepalived system server config failed ,please check keepalived!"exit 1fi}PS3="Please select Install Linux Packages:"select option in download ipvsadm_install keepalived_install keepalived_configdo$optiondone以上脚本分别在lvs-master和lvs-backup上执行安装。
三、配置keepalived.conf:内容如下是lvs-master配置也可以参考配置:/download/keepalived.conf可以直接打开! Configuration File for keepalivedglobal_defs {notification_email {wgkgood@}notification_email_from wgkgood@smtp_server 127.0.0.1smtp_connect_timeout 30router_id LVS_DEVEL}# VIP1vrrp_instance VI_1 {state MASTERinterface eth0lvs_sync_daemon_inteface eth0 virtual_router_id 51priority 100advert_int 5authentication {auth_type PASSauth_pass 1111}virtual_ipaddress {192.168.2.100}}#REAL_SERVER_1virtual_server 192.168.2.100 80 { delay_loop 6lb_algo wlclb_kind DRpersistence_timeout 60protocol TCPreal_server 192.168.2.79 80 { weight 100TCP_CHECK {connect_timeout 10nb_get_retry 3delay_before_retry 3connect_port 80}}#REAL_SERVER_2real_server 192.168.2.80 80 { weight 100TCP_CHECK {connect_timeout 10nb_get_retry 3delay_before_retry 3connect_port 80}}}注意***Lvs-backup端同样配置,只需要更改state MASTER为state BACKUP,修改priority 100为priority 90即可。
四、分别在web1、web2上配置好nginx,然后分别执行如下脚本:如下的VIP1指的是lvs-dr-vip地址,及对外提供访问的虚拟ip:#!/bin/shPS3="Please Choose whether or not to start a realserver VIP1 configuration:"select i in "start" "stop"docase "$i" instart)read -p "Please enter the virtual server IP address:" VIP1ifconfig lo:0 $VIP1 netmask 255.255.255.255 broadcast $VIP1 /sbin/route add -host $VIP1 dev lo:0echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignoreecho "2" >/proc/sys/net/ipv4/conf/lo/arp_announceecho "1" >/proc/sys/net/ipv4/conf/all/arp_ignoreecho "2" >/proc/sys/net/ipv4/conf/all/arp_announcesysctl -p >/dev/null 2>&1echo "RealServer Start OK"exit 0;;stop)ifconfig lo:0 downroute del $VIP1 >/dev/null 2>&1echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignoreecho "0" >/proc/sys/net/ipv4/conf/lo/arp_announceecho "0" >/proc/sys/net/ipv4/conf/all/arp_ignoreecho "0" >/proc/sys/net/ipv4/conf/all/arp_announceecho "RealServer Stoped"exit 1;;*)echo "Usage: $0 {start|stop}"exit 2esacdone脚本会提示是否启动,按1即启动,然后输入vip地址 192.168.2.100 ,用ifconfig你会看到:lo:0的ip即表示配置ip成功。