keepalived

Background

  • Storage layer: MySQL
                   NFS (shared storage)

  • Web (application) layer: Httpd
                             Nginx
                             IIS

  • Load-balancing layer: Nginx
                          HAProxy
                          LVS

Introduction to Keepalived

  • A tool dedicated to monitoring the state of each service in a cluster: if a service fails, Keepalived detects it automatically and removes that node from the cluster
  • Implements VRRP, the Virtual Router Redundancy Protocol

    Purpose: eliminate the single point of failure of static routing by providing failover and fault isolation

Two Core Functions

  • Health checking: via the TCP three-way handshake, ICMP packets, HTTP requests, and so on...
  • Failover: configures the load balancers as a master/backup pair and uses VRRP to maintain a heartbeat between them

     When the master device fails, its traffic is taken over by the backup, which reduces traffic loss and improves stability

The VRRP Protocol

  • A master/backup protocol: when a host's next-hop router fails, another machine takes over for it (high availability of network connectivity)
  • Virtual router: all the routers in a VRRP group together, sharing a virtual IP address
  • Master router: normally only one device inside the virtual router serves traffic; the master is chosen by an election algorithm
  • Backup routers: every router in the VRRP group other than the master; they normally serve no traffic, and only when the master goes down does the election algorithm pick a replacement

VRRP Election Mechanism

Three states

  • Initialize: the state entered when the system starts up
  • Master: the active state, held by the master router
  • Backup: the standby state, held by the backup routers

Election rules

  • Priority decides the winner
  • Preemptive mode: as soon as a higher-priority router joins, an election runs immediately and that router becomes the master
  • Non-preemptive mode: as long as the current master keeps working, higher-priority nodes can only wait (see the sketch below)
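In Keepalived, non-preemptive mode is enabled with the nopreempt directive; a minimal sketch (nopreempt is only honored when both peers use state BACKUP; the interface name is an assumption):

vrrp_instance VI_1 {
    state BACKUP          # required for nopreempt, even on the preferred node
    nopreempt             # a recovering higher-priority node will not take the VIP back
    interface eth0        # assumption: adjust to the local NIC
    virtual_router_id 51
    priority 100          # still decides the initial election
    virtual_ipaddress {
        192.168.200.16
    }
}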

How It Works

  • Guarantees highly available network connectivity

     Application layer: user-defined Keepalived checks (scripts)
     Network layer: sends ICMP packets to the back-end server cluster
     Transport layer: uses TCP port connects and port scans to check whether the back-end servers are up
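A minimal sketch of those three layers of checks done by hand (the back-end address and port are borrowed from the lab below and are assumptions here):

ping -c 1 -W 1 192.168.110.130                        # network layer: ICMP echo
timeout 1 bash -c '</dev/tcp/192.168.110.130/8080'    # transport layer: TCP port probe
curl -fs http://192.168.110.130:8080/ >/dev/null      # application layer: HTTP request
echo $?                                               # 0 means the last check passed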

Architecture

  • Health checking: finds the point of failure
  • Failover and recovery: keeps the cluster highly available

Three processes at runtime:  

  • core: starts and maintains the main process and loads the global configuration
  • check: performs the health checks
  • vrrp: implements the VRRP protocol

Note: Keepalived was originally written as a high-availability solution for LVS

Configuration

  • Installation
[root@192 ~]# yum install keepalived -y
  • Main configuration file
[root@192 ~]# cat /etc/keepalived/keepalived.conf | grep -Ev "^$|^#"
! Configuration File for keepalived
global_defs {       # global definitions
  notification_email { # email notification settings
    acassen@firewall.loc
    failover@firewall.loc
    sysadmin@firewall.loc
  }
  notification_email_from Alexandre.Cassen@firewall.loc
  smtp_server 192.168.200.1       # SMTP server used for the notifications
  smtp_connect_timeout 30
  router_id LVS_DEVEL      # router_id: unique identifier for this node
  vrrp_skip_check_adv_addr     # skip address-list checks on adverts from a known master
  vrrp_strict                  # strict RFC compliance
  vrrp_garp_interval 0         # gratuitous ARP send interval
  vrrp_gna_interval 0          # gratuitous NA (IPv6) send interval
}
vrrp_instance VI_1 {            # define a VRRP instance (VI_1)
   state MASTER        # initial state
   interface eth0      # interface the instance runs on
   virtual_router_id 51    # virtual router ID
   priority 100        # priority
   advert_int 1        # advertisement interval, in seconds
   authentication {    # authentication
       auth_type PASS
       auth_pass 1111
   }
   virtual_ipaddress {     # VIP addresses
       192.168.200.16
       192.168.200.17
       192.168.200.18
   }
}
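Once keepalived is running, the master's advertisements can be observed on the wire; a quick check (the interface name comes from the config above):

[root@192 ~]# tcpdump -i eth0 -nn ip proto 112   # 112 = VRRP
# expect one advertisement per second (advert_int 1), sent by the master
# to the VRRP multicast group 224.0.0.18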

Experiment

Preparation

  • 4 machines

       Host    Role     IP addresses
       192     proxy    192.168.110.129  192.168.110.134
       test2   proxy    192.168.110.136  192.168.110.137
       test    web      192.168.110.130
       test1   web      192.168.110.133

  • Goal

        1. Requests to proxy 192 are forwarded to -> the web servers test/test1
        2. Requests to proxy test2 are forwarded to -> the web servers test/test1
    
  • Steps
# Host 192 preparation
[root@192 ~]# vim /etc/nginx/conf.d/proxy_server.conf
[root@192 ~]# cat /etc/nginx/conf.d/proxy_server.conf
upstream webserver{ 
  server 192.168.110.130:8080 weight=2 max_fails=3 max_conns=10000;
  server 192.168.110.133:8080 weight=1 max_fails=6 max_conns=5000;
}
server {
  listen 8080;
    server_name 192.168.110.134;
  location / {
      proxy_pass http://webserver;
  }
}
[root@192 ~]# nginx -t   # check the configuration for errors
[root@192 ~]# systemctl stop firewalld
[root@192 ~]# setenforce 0
[root@192 ~]# systemctl start nginx

# Host test2 is prepared the same way as host 192 (substituting its own IP)

# Host test preparation
[root@test ~]# vim /etc/nginx/conf.d/server1.conf
[root@test ~]# cat /etc/nginx/conf.d/server1.conf 
server{
  listen 192.168.110.130:8080;
  server_name www.site2.com;
  root /usr/share/nginx/html;
  location / {
    index index.html;
  }
}
[root@test ~]# systemctl stop firewalld
[root@test ~]# setenforce 0
[root@test ~]# systemctl start nginx

# Host test1 preparation
[root@test1 ~]# vim /etc/nginx/conf.d/server1.conf
[root@test1 ~]# cat /etc/nginx/conf.d/server1.conf
server{
  listen 192.168.110.133:8080;
  server_name www.site2.com;
  root /usr/share/nginx/html;
  location / {
    index index.html;
  }
}
[root@test1 ~]# systemctl stop firewalld
[root@test1 ~]# setenforce 0
[root@test1 ~]# systemctl start nginx

# Test
[root@192 ~]# curl 192.168.110.134:8080
this is test
[root@192 ~]# curl 192.168.110.137:8080
this is test1
[root@192 ~]# curl 192.168.110.134:8080
this is test
[root@192 ~]# curl 192.168.110.137:8080
this is test1
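To see the 2:1 weighting of the upstream, hit a single proxy repeatedly; a quick loop (the exact interleaving is up to nginx's weighted round robin):

[root@192 ~]# for i in $(seq 1 6); do curl -s 192.168.110.134:8080; done
# with weight=2 (test) and weight=1 (test1), expect roughly two
# "this is test" responses for every "this is test1"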

Keepalived + Nginx Experiment

  • Deployment
[root@192 ~]# vim shi.sh
[root@192 ~]# cat shi.sh
#!/bin/bash
mv /etc/keepalived/keepalived.conf{,.bak}
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
   router_id 192 
}
vrrp_instance nginx_group {
    state MASTER         # act as the master router
    interface ens37      # note: must match the NIC actually carrying 192.168.110.x
    virtual_router_id 10
    priority 100 
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {    # define the VIP address
    192.168.110.100
    }
}
EOF
[root@192 ~]# chmod +x shi.sh
[root@192 ~]# ./shi.sh
[root@192 ~]# cat /etc/keepalived/keepalived.conf    # verify the generated configuration
! Configuration File for keepalived
global_defs {
   router_id 192 
}
vrrp_instance nginx_group {
    state MASTER 
    interface ens37
    virtual_router_id 10
    priority 100 
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    192.168.110.100
    }
}
[root@192 ~]#  cd /etc/keepalived/
[root@192 keepalived]# ls
keepalived.conf  keepalived.conf.bak
[root@192 keepalived]# systemctl restart keepalived
[root@192 keepalived]# systemctl status keepalived
……
Mar 21 10:46:32 192 Keepalived_vrrp[1698]: VRRP_Instance(nginx_group) Sending/queueing grat...00
……
[root@192 keepalived]# ip a | grep glo
inet 192.168.110.129/24 brd 192.168.110.255 scope global secondary ens33
inet 192.168.110.134/24 brd 192.168.110.255 scope global noprefixroute ens37
inet 192.168.110.100/32 scope global ens37  # the VIP is now bound to ens37

[root@test2 ~]# cd /etc/keepalived/
[root@test2 keepalived]# ls
keepalived.conf
[root@test2 keepalived]# mv keepalived.conf{,.bak}   # back up the original
[root@test2 keepalived]# vim keepalived.conf
[root@test2 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id test2 
}
vrrp_instance nginx_group {
    state BACKUP            # act as the backup router
    interface ens37
    virtual_router_id 10
    priority 98            # priority must be lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    192.168.110.100
    }
}
[root@test2 keepalived]# systemctl restart keepalived
[root@test2 keepalived]# systemctl status keepalived
……
Mar 21 10:53:47 test2 Keepalived_vrrp[1980]: VRRP_Instance(nginx_group) removing pro...s.

……
  • Test 1: browse to 192.168.110.100:8080 and verify the VIP answers
  • Test 2: simulate a failure: power off node 192 or stop its Keepalived service, and check whether the VIP floats to test2
[root@192 ~]# systemctl stop keepalived

[root@test2 ~]# systemctl status keepalived   # the output now matches what host 192 showed earlier, i.e. test2 has taken over as master
[root@test2 keepalived]# ip a | grep glo
inet 192.168.110.136/24 brd 192.168.110.255 scope global noprefixroute dynamic ens33
inet 192.168.110.137/24 brd 192.168.110.255 scope global noprefixroute dynamic ens37
inet 192.168.110.100/32 scope global ens37    # the VIP has floated to test2
  • Test 3: stop nginx and watch how Keepalived reacts
[root@192 ~]# systemctl stop nginx
[root@192 ~]# ip a | grep glo
inet 192.168.110.129/24 brd 192.168.110.255 scope global secondary ens33
inet 192.168.110.134/24 brd 192.168.110.255 scope global noprefixroute ens37
inet 192.168.110.100/32 scope global ens37
[root@192 ~]# vim /etc/keepalived/keepalived.conf
[root@192 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 192 
}
vrrp_script chk_http_port {
    script "/usr/local/bin/check_nginx_pid.sh"     # the check script (written below)
    interval 1             # run the script every second
    weight -5              # subtract 5 from the priority while the check fails
}
vrrp_instance nginx_group {
    state MASTER 
    interface ens37
    virtual_router_id 10
    priority 100 
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {         # attach the check script to this instance
        chk_http_port 
    }
    virtual_ipaddress {
    192.168.110.100
    }
}
[root@192 ~]# systemctl start nginx
[root@192 ~]# systemctl restart keepalived
[root@192 ~]# ip a | grep glo
inet 192.168.110.129/24 brd 192.168.110.255 scope global secondary ens33
inet 192.168.110.134/24 brd 192.168.110.255 scope global noprefixroute ens37
# The VIP is gone from 192: the check script does not exist yet, so chk_http_port fails,
# the priority drops from 100 to 95 (below test2's 98), and the VIP floats to test2

The Health-Check Mechanism

# Script that checks the number of nginx processes (and restarts nginx if it died)
[root@192 ~]# systemctl stop nginx
[root@192 ~]# vim /usr/local/bin/check_nginx_pid.sh
[root@192 ~]# cat /usr/local/bin/check_nginx_pid.sh
#!/bin/bash
# Count the running nginx processes
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    # nginx is down: try to start it once (nginx was installed from the
    # repo here, so systemctl is used rather than a /usr/local build path)
    systemctl start nginx
    # Re-check: if nginx still is not running, report failure to keepalived
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        exit 1    # non-zero exit makes chk_http_port apply its weight -5
    else
        exit 0
    fi
else
    exit 0
fi
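One easy thing to miss: keepalived can only run the script if it is executable, and it is worth running it once by hand:

[root@192 ~]# chmod +x /usr/local/bin/check_nginx_pid.sh
[root@192 ~]# /usr/local/bin/check_nginx_pid.sh; echo $?   # prints 0 once nginx is running or was restarted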
[root@192 ~]# ps -C check_nginx_pid.sh
   PID TTY          TIME CMD
[root@192 ~]# ps -C nginx
   PID TTY          TIME CMD
[root@192 ~]# ps -C nginx --no-header |wc -l
0

# For comparison: on test2, where nginx is running
[root@test2 keepalived]# ps -C nginx
   PID TTY          TIME CMD
  1767 ?        00:00:00 nginx
  1768 ?        00:00:00 nginx
  1769 ?        00:00:00 nginx
[root@test2 keepalived]# ps -C nginx --no-header |wc -l
3

Keepalived + HAProxy + MySQL Master-Master Model

Step 1: Configure MySQL Master-Master Replication

# test node configuration
[root@test ~]# vim mysqlmaster.sh
[root@test ~]# cat mysqlmaster.sh
#!/bin/bash
Ip_addr="192.168.110.133"    # 修改为对端的test1的地址
User_pwd="root"
yum install mariadb-server -y
sed -i '/^\[mysqld\]$/a\binlog-ignore = information_schema' /etc/my.cnf.d/server.cnf
sed -i '/^\[mysqld\]$/a\binlog-ignore = mysql' /etc/my.cnf.d/server.cnf
sed -i '/^\[mysqld\]$/a\skip-name-resolve' /etc/my.cnf.d/server.cnf
sed -i '/^\[mysqld\]$/a\auto-increment-increment = 1' /etc/my.cnf.d/server.cnf   # 注意test1节点上必须不同
sed -i '/^\[mysqld\]$/a\log-bin = mysql-bin' /etc/my.cnf.d/server.cnf
sed -i '/^\[mysqld\]$/a\auto_increment_offset = 1'  /etc/my.cnf.d/server.cnf   # 注意test1节点上必须不同
sed -i '/^\[mysqld\]$/a\server-id = 1' /etc/my.cnf.d/server.cnf     # 注意test1节点上必须不同
systemctl restart mariadb
mysql -uroot -e "grant replication slave on *.* to 'repuser'@'$Ip_addr' identified by '$User_pwd';"
[root@test ~]# chmod +x mysqlmaster.sh
[root@test ~]# ./mysqlmaster.sh
[root@test ~]# mysql -uroot
……
MariaDB [(none)]> show master status;    # check the binlog file and position
+------------------+----------+--------------+--------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         |
+------------------+----------+--------------+--------------------------+
| mysql-bin.000003 |      402 |              | mysql,information_schema |
+------------------+----------+--------------+--------------------------+
1 row in set (0.00 sec)
# Point replication at the peer (test1), using the binlog file and position from ITS show master status
MariaDB [(none)]> change master to master_host='192.168.110.133',master_port=3306,master_user='repuser',master_password='root',master_log_file='mysql-bin.000003',master_log_pos=407; 
Query OK, 0 rows affected (0.36 sec)
MariaDB [(none)]> start slave;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show slave status \G   # check the slave status
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.110.133
                  Master_User: repuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 407
               Relay_Log_File: mariadb-relay-bin.000002
                Relay_Log_Pos: 529
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes     # must be Yes, i.e. healthy
            Slave_SQL_Running: Yes     # must be Yes, i.e. healthy
……


# test1 node configuration
[root@test1 ~]# vim mysqlmaster.sh
[root@test1 ~]# cat mysqlmaster.sh
#!/bin/bash
Ip_addr="192.168.110.130"    # 修改为对端的test的地址
User_pwd="root"
yum install mariadb-server -y
sed -i '/^\[mysqld\]$/a\binlog-ignore-db = information_schema' /etc/my.cnf.d/server.cnf
sed -i '/^\[mysqld\]$/a\binlog-ignore-db = mysql' /etc/my.cnf.d/server.cnf
sed -i '/^\[mysqld\]$/a\skip-name-resolve' /etc/my.cnf.d/server.cnf
sed -i '/^\[mysqld\]$/a\auto-increment-increment = 2' /etc/my.cnf.d/server.cnf    # same step as on test
sed -i '/^\[mysqld\]$/a\log-bin = mysql-bin' /etc/my.cnf.d/server.cnf
sed -i '/^\[mysqld\]$/a\auto_increment_offset = 2'  /etc/my.cnf.d/server.cnf    # offset differs from test
sed -i '/^\[mysqld\]$/a\server-id = 2' /etc/my.cnf.d/server.cnf     # server-id differs from test
systemctl restart mariadb
mysql -uroot -e "grant replication slave on *.* to 'repuser'@'$Ip_addr' identified by '$User_pwd';"
[root@test1 ~]# chmod +x mysqlmaster.sh
[root@test1 ~]# ./mysqlmaster.sh
[root@test1 ~]# systemctl status mariadb  # check the service status
[root@test1 ~]# mysql -uroot
……
MariaDB [(none)]> show master status;   # check the binlog file and position
+------------------+----------+--------------+--------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         |
+------------------+----------+--------------+--------------------------+
| mysql-bin.000003 |      407 |              | mysql,information_schema |
+------------------+----------+--------------+--------------------------+
1 row in set (0.00 sec)
# Point replication at the peer (test), using the binlog file and position from ITS show master status
MariaDB [(none)]> change master to master_host='192.168.110.130',master_port=3306,master_user='repuser',master_password='root',master_log_file='mysql-bin.000003',master_log_pos=402;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> start slave;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.110.130
                  Master_User: repuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000003
          Read_Master_Log_Pos: 402
               Relay_Log_File: mariadb-relay-bin.000002
                Relay_Log_Pos: 529
        Relay_Master_Log_File: mysql-bin.000003
             Slave_IO_Running: Yes      # must be Yes, i.e. healthy
            Slave_SQL_Running: Yes      # must be Yes, i.e. healthy
……
  • Test

      1. Create the mydb database on test
      2. On test1, create a table inside the mydb database
      3. Verify that the data on test and test1 stays in sync
    
# Create the mydb database on test
MariaDB [(none)]> create database mydb;
Query OK, 1 row affected (0.00 sec)

# On test1, verify the database created in the previous step
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mydb               |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.00 sec)
MariaDB [(none)]> use mydb;     # switch to mydb
Database changed
MariaDB [mydb]> create table stu(id int(4),name varchar(10));
Query OK, 0 rows affected (0.01 sec)

# Back on test, verify the table created on test1
MariaDB [(none)]> use mydb;
MariaDB [mydb]> show tables;
+----------------+
| Tables_in_mydb |
+----------------+
| stu            |
+----------------+
1 row in set (0.00 sec)
MariaDB [mydb]> desc stu;
+-------+-------------+------+-----+---------+-------+
| Field | Type        | Null | Key | Default | Extra |
+-------+-------------+------+-----+---------+-------+
| id    | int(4)      | YES  |     | NULL    |       |
| name  | varchar(10) | YES  |     | NULL    |       |
+-------+-------------+------+-----+---------+-------+
2 rows in set (0.00 sec)
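
With the increment/offset settings above, auto-generated primary keys can never collide across the two masters; a hypothetical illustration (seq_demo is a made-up table):

MariaDB [mydb]> create table seq_demo(id int auto_increment primary key, src varchar(8));
MariaDB [mydb]> insert into seq_demo(src) values ('test');    -- on test:  ids 1, 3, 5, ...
MariaDB [mydb]> insert into seq_demo(src) values ('test1');   -- on test1: ids 2, 4, 6, ...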

Step 2: Load-Balance the Dual Masters with HAProxy

[root@192 ~]# vim haproxy_mysql.sh
[root@192 ~]# cat haproxy_mysql.sh
#!/bin/bash
mv /etc/haproxy/haproxy.cfg{,.bak}
cat > /etc/haproxy/haproxy.cfg << EOF
    global
        log         127.0.0.1 local2
        chroot      /var/lib/haproxy
        pidfile     /var/run/haproxy.pid
        maxconn     4000
        user        haproxy
        group       haproxy
        daemon
        stats socket /var/lib/haproxy/stats
    listen mysql_proxy
        bind    0.0.0.0:3306
        mode    tcp
        balance source
        server  mysqldb1    192.168.110.130:3306  weight  1   check
        server  mysqldb2    192.168.110.133:3306  weight  2   check
    listen stats
        mode http
        bind    0.0.0.0:9090   # 9090 can be changed to any free port
        stats   enable
        stats   uri /dbs
        stats   realm   haproxy\    statistics
        stats   auth    admin:admin
EOF
    systemctl start haproxy
[root@192 ~]# chmod +x haproxy_mysql.sh
[root@192 ~]# ./haproxy_mysql.sh
[root@192 ~]# ss -tanl | grep 9090   # 9090: the stats/monitoring page
LISTEN     0      128          *:9090                     *:*  
[root@192 ~]# ss -tanl | grep 3306   # 3306: the MySQL load-balancing listener
LISTEN     0      128          *:3306                     *:*    
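The stats page defined in the listen stats section can also be fetched from the command line (the URI and credentials come from the config above):

[root@192 ~]# curl -u admin:admin http://192.168.110.134:9090/dbs
# returns the HTML statistics report listing mysqldb1/mysqldb2 and their check state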

# Run the same steps on test2

Step 3: Keepalived Configuration for HAProxy High Availability

# On test2
[root@test2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id test2 
}
……
vrrp_instance haproxy_group {    # the newly added haproxy_group
    state MASTER                 # test2 becomes the master for this group
    interface ens37
    virtual_router_id 20
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    192.168.110.99
    }
}
[root@test2 ~]# ip a | grep glo
inet 192.168.110.136/24 brd 192.168.110.255 scope global noprefixroute dynamic ens33
inet 192.168.110.137/24 brd 192.168.110.255 scope global noprefixroute dynamic ens37
inet 192.168.110.99/32 scope global ens37    # the haproxy_group VIP lands on test2


# On 192
[root@192 ~]# vim /etc/keepalived/keepalived.conf
[root@192 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 192 
}
vrrp_script chk_http_port {
    script "/usr/local/bin/check_nginx_pid.sh"     
    interval 1             
    weight -5
}
……
vrrp_instance haproxy_group {   # the newly added haproxy_group
    state BACKUP                # 192 becomes the backup for this group
    interface ens37
    virtual_router_id 20
    priority 98                 # priority must be lower than the master's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
    192.168.110.99
    }
}
[root@192 ~]# systemctl restart keepalived
[root@192 ~]# ip a | grep glo
inet 192.168.110.129/24 brd 192.168.110.255 scope global secondary ens33
inet 192.168.110.134/24 brd 192.168.110.255 scope global noprefixroute ens37
  • Test

      1. Create a test user for remote access
    
# Create the remote-access user on test
MariaDB [mydb]> grant all privileges on *.* to "test"@"%" identified by "root";
Query OK, 0 rows affected (0.01 sec)
  2. Connect remotely as the test user to 192.168.110.134 (the address HAProxy exposes)
  3. After configuring Keepalived (defining a group in which test2 is preferred as master), connect remotely as the test user to 192.168.110.99 (the VIP) and verify the failover

  4. Health check still to do: a script analogous to the nginx one; also try stopping the haproxy service and see whether the VIP floats (a sketch follows below)

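A minimal sketch of such a haproxy check script, modeled on check_nginx_pid.sh above (the path /usr/local/bin/check_haproxy.sh is an assumption):

#!/bin/bash
# check_haproxy.sh - try to restart haproxy once; exit non-zero if it stays down
if [ `ps -C haproxy --no-header | wc -l` -eq 0 ];then
    systemctl start haproxy
    sleep 1
    if [ `ps -C haproxy --no-header | wc -l` -eq 0 ];then
        exit 1    # keepalived's vrrp_script then lowers this node's priority
    fi
fi
exit 0

And a quick remote check through the VIP, using the test user from step 1:

[root@test ~]# mysql -utest -proot -h 192.168.110.99 -e "select @@hostname, @@server_id;"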
[root@test2 ~]# systemctl stop keepalived

[root@192 ~]# systemctl restart keepalived
[root@192 ~]# systemctl status keepalived
[root@192 ~]# ip a | grep glo
inet 192.168.110.129/24 brd 192.168.110.255 scope global secondary ens33
inet 192.168.110.134/24 brd 192.168.110.255 scope global noprefixroute ens37
inet 192.168.110.99/32 scope global ens37

Keepalived + LVS-DR Experiment

Step 1: RS (Real Server) Deployment

[root@test ~]# vim lvs_dr.sh
[root@test ~]# cat lvs_dr.sh
#!/bin/bash
    vip="192.168.10.99"      # the VIP
    mask="255.255.255.255"
    ifconfig lo:0 $vip broadcast $vip netmask $mask up     # bind the VIP on the loopback alias
    route add -host $vip lo:0
    # Keep the RS from answering ARP for the VIP, so that only the director owns it:
    echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore        # reply only for addresses on the receiving interface
    echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce      # announce with the best local source address
    echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
    echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
[root@test ~]# chmod +x lvs_dr.sh
[root@test ~]# ./lvs_dr.sh
[root@test ~]# scp lvs_dr.sh 192.168.110.133:/root    # copy lvs_dr.sh to test1

# On test1, confirm the script arrived and run it
[root@test1 ~]# ls       
anaconda-ks.cfg  lvs_dr.sh  mysqlmaster.sh
[root@test1 ~]# ./lvs_dr.sh
[root@test1 ~]# ip a | grep glo
inet 192.168.10.99/32 brd 192.168.10.99 scope global lo:0    # the VIP on the loopback alias
inet 192.168.110.133/24 brd 192.168.110.255 scope global noprefixroute dynamic ens33

Step 2: DR (Director) Configuration (ipvs Rules Defined in Keepalived)

[root@192 ~]# yum install keepalived ipvsadm -y
[root@192 ~]# cd /etc/keepalived/
[root@192 keepalived]# ls
keepalived.conf  keepalived.conf.bak
[root@192 keepalived]# mv keepalived.conf{,.bak2}
[root@192 keepalived]# vim keepalived.conf
[root@192 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id 192      # the LVS node's id; should be unique within the network
}
vrrp_instance VI_1 {
    state MASTER      # this Keepalived node is the master
    interface ens33   # NIC
    virtual_router_id 10     # virtual router ID; must match on master and backup
    priority 100      # higher priority wins; set the backup lower than this!
    advert_int 1      # advertisement interval: 1 s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.99      # the virtual IP; more than one may be defined
    }
}
# Define the virtual server: the externally served IP and port
virtual_server 192.168.10.99 80 { 
    delay_loop 6      # health-check interval, in seconds
    lb_algo wrr       # scheduling algorithm (weighted round robin)
    lb_kind DR        # LVS forwarding mode (direct routing)
    persistence_timeout 10
    protocol TCP
    # the RS host IP and port
    real_server 192.168.110.130 80 {
        weight 2
        # TCP health check
        TCP_CHECK {  
        connect_timeout 10         
        nb_get_retry 3  
        delay_before_retry 3  
        connect_port 80
        }
    }
    real_server 192.168.110.133 80 {
        weight 1
        TCP_CHECK {  
        connect_timeout 10         
        nb_get_retry 3  
        delay_before_retry 3  
        connect_port 80
        }
    }
}
[root@192 keepalived]# scp keepalived.conf 192.168.110.136:/etc/keepalived
[root@192 keepalived]# systemctl restart keepalived
[root@192 keepalived]# ipvsadm -ln    # view the ipvs rules
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.10.99:80 wrr
  -> 192.168.110.130:80           Route   2      0          0         
  -> 192.168.110.133:80           Route   1      0          0  
[root@192 keepalived]# systemctl status keepalived
[root@192 keepalived]# ip a | grep glo
inet 192.168.10.99/32 scope global ens33
inet 192.168.110.129/24 brd 192.168.110.255 scope global secondary ens33
inet 192.168.110.134/24 brd 192.168.110.255 scope global noprefixroute ens37
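From a client on the same segment the virtual service can be exercised directly (the client host is hypothetical; persistence_timeout 10 pins one source IP to the same RS for 10 seconds):

[client ~]$ curl http://192.168.10.99/
# repeated requests from the same client within 10 s land on the same RS
# because of persistence_timeout 10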


[root@test2 keepalived]# ls
keepalived.conf  keepalived.conf.bak  keepalived.conf.bak2
[root@test2 keepalived]# vim keepalived.conf
[root@test2 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id test2
}
vrrp_instance VI_1 {
    state BACKUP         # this Keepalived node is the backup
    interface ens33 
    virtual_router_id 10 
    priority 99 
    advert_int 1 
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.99 
    }
}
virtual_server 192.168.10.99 80 { 
    delay_loop 6 
    lb_algo wrr 
    lb_kind DR 
    persistence_timeout 10
    protocol TCP
    real_server 192.168.110.130 80 {
        weight 2
        TCP_CHECK {  
        connect_timeout 10         
        nb_get_retry 3  
        delay_before_retry 3  
        connect_port 80
        }
    }
    real_server 192.168.110.133 80 {
        weight 1
        TCP_CHECK {  
        connect_timeout 10         
        nb_get_retry 3  
        delay_before_retry 3  
        connect_port 80
        }
    }
}
[root@test2 keepalived]# systemctl restart keepalived
  • Test
  1. Stop the keepalived service on 192 and watch the VIP float
# Stop keepalived on 192
[root@192 keepalived]# systemctl stop keepalived
[root@192 keepalived]# ip a | grep glo
inet 192.168.110.129/24 brd 192.168.110.255 scope global secondary ens33
inet 192.168.110.134/24 brd 192.168.110.255 scope global noprefixroute ens37

# On test2, observe that the VIP has floated over
[root@test2 keepalived]# ip a | grep glo
inet 192.168.110.136/24 brd 192.168.110.255 scope global noprefixroute dynamic ens33
inet 192.168.10.99/32 scope global ens33
inet 192.168.110.137/24 brd 192.168.110.255 scope global noprefixroute dynamic ens37
  2. Check the ipvsadm rules when a back-end web service is stopped
# Stop the HTTP service on test1
[root@test1 ~]# systemctl stop httpd

# View the ipvs rules
[root@192 keepalived]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.10.99:80 wrr persistent 10
  -> 192.168.110.130:80           Route   2      0          0         # traffic goes only to test; test1 was removed by its failed TCP_CHECK
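
When the back-end service comes back, the TCP_CHECK notices and keepalived re-inserts the real server automatically; a quick way to confirm:

[root@test1 ~]# systemctl start httpd
[root@192 keepalived]# ipvsadm -ln   # after the next check, 192.168.110.133:80 is listed again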