
Keepalived Study Notes: A Master-Master Architecture Example


    Example topology:


    LVS (Linux Virtual Server): the Linux virtual server; here it is driven by keepalived and acts as the load balancer.
    RS (Real Server): the real back-end servers.
    VRRP (Virtual Router Redundancy Protocol): a routing protocol that eliminates the single point of failure created by a statically configured gateway on a LAN.
    (topology diagrams not reproduced here)

    DR1 and DR2 run keepalived and LVS in a master-backup or master-master architecture; RS1 and RS2 run nginx to host the web site.

    1 What is Keepalived and what does it do?
    1.1 Definition
    Keepalived is an LVS high-availability solution implemented on top of the VRRP protocol.
    1.2 What Keepalived does
    1.2.1 High availability through IP failover
    The master and backup LVS share one virtual IP (VIP). At any time only one LVS holds the VIP and serves traffic; if that LVS becomes unavailable, the VIP floats to the other LVS, which continues to serve.
    1.2.2 Health monitoring of the RS cluster
    If an RS becomes unavailable, keepalived removes it from the cluster; when the RS recovers, keepalived adds it back.
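    For example, a quick way to see which director currently holds the VIP (a minimal check, assuming the VIP 193.168.140.80 and interface ens4 used later in this article):

    ip addr show ens4 | grep -q 193.168.140.80 && echo "this node holds the VIP" || echo "VIP not on this node"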
    2 What forwarding modes are there, and how do they compare?
    2.1 The three modes
    Keepalived (working with LVS) supports three forwarding modes: NAT (network address translation), DR (direct routing) and TUN (tunneling).
    2.2 Overview of each mode
    2.2.1 NAT
    Advantages: the RS in the cluster can run any operating system with TCP/IP, and the RS can use reserved private addresses; only the LVS needs a public IP address.
    Drawbacks: limited scalability. When the number of RS nodes grows to about 20 or more, the LVS becomes the bottleneck of the whole system, because every request packet and every reply packet must be rewritten by the LVS.
    2.2.2 TUN
    For many Internet services (for example web servers) the request packets are very small while the reply packets are usually much larger.
    Advantages: the LVS only dispatches request packets to the RS, and each RS sends its reply packets directly to the client. The LVS can therefore handle a very large request volume; one load balancer can serve more than 100 RS, and the LVS is no longer the system bottleneck.
    Drawbacks: every server must support the "IP Tunneling" (IP encapsulation) protocol, which so far has only been implemented on Linux.
    2.2.3 DR
    Advantages: like TUN, the LVS only dispatches requests, and reply packets return to the client over a separate route. Compared with TUN, DR needs no tunnel, so most operating systems can be used on the RS.
    Drawback: the LVS NIC must be on the same network segment as the RS NICs.
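    For reference, the three modes map directly onto ipvsadm's forwarding flags. The commands below are only an illustrative sketch (keepalived generates equivalent rules from the virtual_server blocks shown later); the VIP and RIP values are examples:

    VIP=193.168.140.80; RIP=192.168.102.163
    ipvsadm -A -t $VIP:80 -s rr              #create the virtual service with round-robin scheduling
    ipvsadm -a -t $VIP:80 -r $RIP:80 -m      #-m: NAT (masquerading)
    #ipvsadm -a -t $VIP:80 -r $RIP:80 -i     #-i: TUN (IP-in-IP tunneling)
    #ipvsadm -a -t $VIP:80 -r $RIP:80 -g     #-g: DR (direct routing)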
    3 How is each mode configured, and how is it verified?
    3.1 Basic environment requirements
    Two LVS nodes and n (n >= 2) RS are required.
    3.1.1 LVS
    Install ipvsadm (the LVS administration tool) and keepalived;
    Enable IP forwarding:
    vim /etc/sysctl.conf
    net.ipv4.ip_forward = 1
    Verify:
    sysctl -p
    net.ipv4.ip_forward = 1
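    The same setting can also be applied at runtime (not persistent across reboots), which is convenient while testing:

    sysctl -w net.ipv4.ip_forward=1
    cat /proc/sys/net/ipv4/ip_forward   #should print 1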
    3.1.2 RS
    Install httpd (used for the final verification).
    3.2 NAT mode configuration
    3.2.1 Environment overview
    OS         Load-balancing mode    VIP               NVIP (internal gateway VIP)
    RHEL7.4    NAT                    193.168.140.80    192.168.102.165

    Note: the clocks on all nodes must be synchronized (ntpdate ntp1.aliyun.com); stop firewalld (systemctl stop firewalld.service, systemctl disable firewalld.service) and set SELinux to permissive (setenforce 0); also make sure every NIC supports MULTICAST (multicast) communication.

    LVS1                     LVS2                     RS1                         RS2
    ens3: 192.168.102.161    ens3: 192.168.102.162    ens3: 192.168.102.163       ens3: 192.168.102.164
    ens4: 193.168.140.79     ens4: 193.168.140.83     gateway: 192.168.102.165    gateway: 192.168.102.165

    You can check whether MULTICAST is enabled with ifconfig.
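    Putting the notes above together, a per-node preparation sketch might look like this (the interface name ens3 is just the example used in this environment):

    ntpdate ntp1.aliyun.com              #synchronize the clock
    systemctl stop firewalld.service
    systemctl disable firewalld.service
    setenforce 0                         #SELinux permissive for the current boot
    ifconfig ens3 | grep -o MULTICAST    #prints MULTICAST if multicast is enabled on the NIC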

    3.2.2 LVS
    vim /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        notification_email {
            qingean@163.com                    #recipient of failure notifications
        }
        notification_email_from admin@test.com #sender of failure notifications
        smtp_server 127.0.0.1                  #send mail from the local host
        smtp_connect_timeout 30
        router_id LVS_MASTER                   #change to LVS_BACKUP on the BACKUP node
    }
    vrrp_instance VI_1 {
        state MASTER                           #change to BACKUP on the BACKUP node
        interface ens4
        virtual_router_id 51                   #virtual router ID, must match on master and backup
        priority 100                           #change to 90 on the BACKUP node
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111                     #authentication password, must match on master and backup
        }
        virtual_ipaddress {
            193.168.140.80                     #virtual IP (VIP)
        }
    }
    vrrp_instance LAN_GATEWAY {                #defines the internal gateway
        state MASTER                           #change to BACKUP on the BACKUP node
        interface ens3
        virtual_router_id 62                   #virtual router ID, must match on master and backup
        priority 100                           #change to 90 on the BACKUP node
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {                    #virtual IP of the ens3 gateway
            192.168.102.165
        }
    }
    virtual_server 192.168.102.165 80 {        #internal gateway virtual IP and port
        delay_loop 6                           #interval between RS health checks, in seconds
        lb_algo rr                             #scheduling algorithm: round-robin (rr), weighted round-robin (wrr), least-connection (lc), weighted least-connection (wlc), locality-based least-connection (lblc), locality-based least-connection with replication (lblcr), destination hashing (dh), source hashing (sh)
        lb_kind NAT                            #use the LVS NAT forwarding mode
        persistence_timeout 50                 #keep connections from the same client IP on the same real server for 50 seconds (set to 0 while testing)
        protocol TCP                           #check RS state over TCP
        real_server 192.168.102.161 80 {       #first gateway node
            weight 3                           #node weight
            TCP_CHECK {                        #health-check method
                connect_timeout 3              #connection timeout
                nb_get_retry 3                 #number of retries
                delay_before_retry 3           #delay between retries, in seconds
            }
        }
        real_server 192.168.102.162 80 {       #second gateway node
            weight 3
            TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
            }
        }
    }
    virtual_server 193.168.140.80 80 {         #the public virtual IP (VIP)
        delay_loop 6
        lb_algo rr
        lb_kind NAT
        persistence_timeout 50
        protocol TCP
        real_server 192.168.102.163 80 {       #first RS
            weight 3
            TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
            }
        }
        real_server 192.168.102.164 80 {       #second RS
            weight 3
            TCP_CHECK {
                connect_timeout 3
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
            }
        }
    }
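    After saving this file on both LVS nodes, a quick way to confirm that keepalived picked it up is to start the service and look at the generated IPVS rules and the VIP. This is only a sketch; the exact output is not reproduced here:

    systemctl restart keepalived
    ipvsadm -Ln         #should list 192.168.102.165:80 and 193.168.140.80:80 with their real servers
    ip addr show ens4   #on the MASTER node the VIP 193.168.140.80 should be present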
    3.2.3 RS
    Set the default gateway on every RS to 192.168.102.165:
    vim /etc/sysconfig/network-scripts/ifcfg-ens3
    GATEWAY=192.168.102.165
    Restart the network (or the node), then use route -n to confirm the gateway is in place.
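    Alternatively, the default route can be checked with the ip tool (a sketch, assuming the gateway VIP above):

    ip route show default   #expect: default via 192.168.102.165 dev ens3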


    3.3 DR mode configuration
    3.3.1 Environment overview
    OS         Load-balancing mode    VIP
    RHEL7.4    DR                     193.168.140.80

    Keepalived master-backup architecture

    LVS1 LVS2 RS1 RS2
    ens4:193.168.140.79 ens4:193.168.140.83 ens4:193.168.140.152 ens4:193.168.140.224

    Set up RS1:

    [root@RS1 ~]# yum -y install nginx   #install nginx
    [root@RS1 ~]# vim /usr/share/nginx/html/index.html   #edit the home page
        <h1> 192.168.4.118 RS1 server </h1>
    [root@RS1 ~]# systemctl start nginx.service   #start the nginx service
    [root@RS1 ~]# vim RS.sh   #script that prepares the RS for lvs-dr
        #!/bin/bash
        #
        vip=192.168.4.120
        mask=255.255.255.255
        case $1 in
        start)
            echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
            echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
            echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
            echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
            ifconfig lo:0 $vip netmask $mask broadcast $vip up
            route add -host $vip dev lo:0
            ;;
        stop)
            ifconfig lo:0 down
            echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
            echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
            echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
            echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
            ;;
        *) 
            echo "Usage $(basename $0) start|stop"
            exit 1
            ;;
        esac
    [root@RS1 ~]# bash RS.sh start
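    A quick sanity check after running the script (a sketch; the exact output will vary):

    [root@RS1 ~]# ifconfig lo:0              #the VIP 192.168.4.120 should be bound to lo:0 with netmask 255.255.255.255
    [root@RS1 ~]# ip route get 192.168.4.120 #should show a local route via lo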
    

    3.3.2 LVS
    vim /etc/keepalived/keepalived.conf
    ! Configuration File for keepalived
    global_defs {
        notification_email {
            qingean@163.com
        }
        notification_email_from admin@test.com
        smtp_server 127.0.0.1
        smtp_connect_timeout 30
        router_id LVS_MASTER
    }
    vrrp_instance VI_1 {
        state MASTER                 #change to BACKUP on the BACKUP node
        interface ens4
        virtual_router_id 51
        priority 100                 #change to 90 on the BACKUP node
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            193.168.140.80
        }
    }
    virtual_server 193.168.140.80 80 {
        delay_loop 6
        lb_algo rr
        lb_kind DR
        nat_mask 255.255.255.255
        protocol TCP
        real_server 193.168.140.152 80 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
            }
        }
        real_server 193.168.140.224 80 {
            weight 10
            TCP_CHECK {
                connect_timeout 10
                nb_get_retry 3
                delay_before_retry 3
                connect_port 80
            }
        }
    }

    Build RS2 by following the RS1 setup; a minimal sketch follows.
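    A sketch of that, assuming RS2 is 192.168.4.119 and uses the same nginx layout as RS1:

    [root@RS2 ~]# yum -y install nginx
    [root@RS2 ~]# echo '<h1> 192.168.4.119 RS2 server</h1>' > /usr/share/nginx/html/index.html
    [root@RS2 ~]# systemctl start nginx.service
    [root@RS1 ~]# scp RS.sh root@192.168.4.119:~   #copy the lvs-dr script over from RS1
    [root@RS2 ~]# bash RS.sh start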

    3.3.3 RS
    Add the following to sysctl.conf on every RS:
    net.ipv4.conf.lo.arp_ignore = 1
    net.ipv4.conf.lo.arp_announce = 2
    net.ipv4.conf.all.arp_ignore = 1
    net.ipv4.conf.all.arp_announce = 2
    net.ipv4.ip_forward = 1
    Then run /sbin/ifconfig lo:0 193.168.140.80 broadcast 193.168.140.80 netmask 255.255.255.255
    and use route -n to check whether it worked:
    Kernel IP routing table
    Destination Gateway Genmask Flags Metric Ref Use Iface
    0.0.0.0 193.168.1.1 0.0.0.0 UG 100 0 0 ens4
    193.168.0.0 0.0.0.0 255.255.0.0 U 100 0 0 ens4
    193.168.140.80 0.0.0.0 255.255.255.255 UH 0 0 0 lo
    If the host route is missing, run /sbin/route add -host 193.168.140.80 dev lo:0.
    3.4 Verification
    3.4.1 Stop the firewall on all machines:
    systemctl stop firewalld
    3.4.2 On every RS, write a test page and start httpd
    RS1: echo "RS1" > /var/www/html/index.html
    RS2: echo "RS2" > /var/www/html/index.html
    systemctl start httpd
    3.4.3 Start keepalived on the master and backup LVS
    systemctl start keepalived
    3.4.4 Access the site
    Open the VIP in a browser.
    Refreshing the page alternates between RS1 and RS2.
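    From a shell, the same check can be scripted with a curl loop against the VIP (193.168.140.80 in the NAT setup above), in the same style as the DR tests later in this article:

    for i in {1..10};do curl http://193.168.140.80;done   #responses should alternate between RS1 and RS2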
    3.4.5 Check which server the test client's requests were forwarded to
    ipvsadm -lcn
    IPVS connection entries
    pro expire state source virtual destination
    TCP 01:54 FIN_WAIT 10.167.225.60:53882 193.168.140.80:80 192.168.102.163:80
    TCP 00:37 NONE 10.167.225.60:0 193.168.140.80:80 192.168.102.163:80
    3.4.6 Failover test
    Simulate a failure of the master LVS: the service keeps running as before. Then take down Web1 (RS1): only Web2 (RS2) is served, which demonstrates IP load balancing in a highly available cluster. When the master LVS recovers, service switches back to it, and once keepalived's health checks detect that the failed web node has recovered, that node is added back into the cluster.
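    A sketch of that failover drill (host names follow the environment table above):

    [root@LVS1 ~]# systemctl stop keepalived   #simulate failure of the master LVS; the VIP moves to LVS2
    [root@LVS2 ~]# ip addr show ens4           #193.168.140.80 should now be bound here
    [root@RS1 ~]# systemctl stop httpd         #simulate failure of RS1; clients now only see RS2
    [root@LVS2 ~]# ipvsadm -Ln                 #RS1 disappears from the IPVS table until httpd is started again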

    Set up DR1:

    [root@DR1 ~]# yum -y install ipvsadm keepalived   #install ipvsadm and keepalived
    [root@DR1 ~]# vim /etc/keepalived/keepalived.conf   #edit keepalived.conf
        global_defs {
           notification_email {
             root@localhost
           }
           notification_email_from keepalived@localhost
           smtp_server 127.0.0.1
           smtp_connect_timeout 30
           router_id 192.168.4.116
           vrrp_skip_check_adv_addr
           vrrp_mcast_group4 224.0.0.10
        }
    
        vrrp_instance VIP_1 {
            state MASTER
            interface eno16777736
            virtual_router_id 1
            priority 100
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass %&hhjj99
            }
            virtual_ipaddress {
              192.168.4.120/24 dev eno16777736 label eno16777736:0
            }
        }
    
        virtual_server 192.168.4.120 80 {
            delay_loop 6
            lb_algo rr
            lb_kind DR
            protocol TCP
    
            real_server 192.168.4.118 80 {
                weight 1
                HTTP_GET {
                    url {
                      path /index.html
                      status_code 200
                    }
                    connect_timeout 3
                    nb_get_retry 3
                    delay_before_retry 3
                }
            }
            real_server 192.168.4.119 80 {
                weight 1
                HTTP_GET {
                    url {
                      path /index.html
                      status_code 200
                    }
                    connect_timeout 3
                    nb_get_retry 3
                    delay_before_retry 3
                }
             }
        }
    [root@DR1 ~]# systemctl start keepalived
    [root@DR1 ~]# ifconfig
        eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
                inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
                ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
                RX packets 14604  bytes 1376647 (1.3 MiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 6722  bytes 653961 (638.6 KiB)
                TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
        eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
                ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
    [root@DR1 ~]# ipvsadm -ln
        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        TCP  192.168.4.120:80 rr
          -> 192.168.4.118:80             Route   1      0          0         
          -> 192.168.4.119:80             Route   1      0          0
    

    DR2 is set up essentially the same way as DR1; the main change is to edit state and priority in /etc/keepalived/keepalived.conf: state BACKUP and priority 90. Note that DR2, acting as the backup, does not bring up the eno16777736:0 label while DR1 is the master.
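    The changes on DR2, shown as a sketch against the DR1 configuration:

    [root@DR2 ~]# vim /etc/keepalived/keepalived.conf
        state BACKUP    #was MASTER on DR1
        priority 90     #was 100 on DR1
    [root@DR2 ~]# systemctl start keepalived
    [root@DR2 ~]# ifconfig eno16777736:0   #this label should not exist while DR1 is MASTER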


    Test from the client:

    [root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   #the client accesses the service normally
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    
    [root@DR1 ~]# systemctl stop keepalived.service   #stop the keepalived service on DR1
    
    [root@DR2 ~]# systemctl status keepalived.service   #check DR2: it has transitioned to the MASTER state
    ● keepalived.service - LVS and VRRP High Availability Monitor
       Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
       Active: active (running) since Tue 2018-09-04 11:33:04 CST; 7min ago
      Process: 12983 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
     Main PID: 12985 (keepalived)
       CGroup: /system.slice/keepalived.service
               ├─12985 /usr/sbin/keepalived -D
               ├─12988 /usr/sbin/keepalived -D
               └─12989 /usr/sbin/keepalived -D
    
    Sep 04 11:37:41 happiness Keepalived_healthcheckers[12988]: SMTP alert successfully sent.
    Sep 04 11:40:22 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Transition to MASTER STATE
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Entering MASTER STATE
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) setting protocol VIPs.
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: VRRP_Instance(VIP_1) Sending/queueing gratuitous ARPs on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
    Sep 04 11:40:23 happiness Keepalived_vrrp[12989]: Sending gratuitous ARP on eno16777736 for 192.168.4.120
    
    [root@client ~]# for i in {1..20};do curl http://192.168.4.120;done   #the client still has normal access
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    <h1> 192.168.4.119 RS2 server</h1>
    <h1> 192.168.4.118 RS1 server </h1>
    

    Keepalived master-master architecture

    Modify RS1 and RS2 to add the new VIP:

    [root@RS1 ~]# cp RS.sh RS_bak.sh
    [root@RS1 ~]# vim RS_bak.sh   #add the new VIP
        #!/bin/bash
        #
        vip=192.168.4.121
        mask=255.255.255.255
        case $1 in
        start)
            echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
            echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
            echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
            echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
            ifconfig lo:1 $vip netmask $mask broadcast $vip up
            route add -host $vip dev lo:1
            ;;
        stop)
            ifconfig lo:1 down
            echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
            echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
            echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
            echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
            ;;
        *)
            echo "Usage $(basename $0) start|stop"
            exit 1
            ;;
        esac
    [root@RS1 ~]# bash RS_bak.sh start
    [root@RS1 ~]# ifconfig
        ...
        lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
                inet 192.168.4.120  netmask 255.255.255.255
                loop  txqueuelen 0  (Local Loopback)
    
        lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
                inet 192.168.4.121  netmask 255.255.255.255
                loop  txqueuelen 0  (Local Loopback) 
    [root@RS1 ~]# scp RS_bak.sh root@192.168.4.119:~
    root@192.168.4.119's password: 
    RS_bak.sh                100%  693     0.7KB/s   00:00
    
    [root@RS2 ~]# bash RS_bak.sh start   #run the script to add the new VIP
    [root@RS2 ~]# ifconfig
        ...
        lo:0: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
                inet 192.168.4.120  netmask 255.255.255.255
                loop  txqueuelen 0  (Local Loopback)
    
        lo:1: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
                inet 192.168.4.121  netmask 255.255.255.255
                loop  txqueuelen 0  (Local Loopback)
    

    Modify DR1 and DR2:

    [root@DR1 ~]# vim /etc/keepalived/keepalived.conf   #edit DR1's config: add the new VRRP instance and define the virtual server group
        ...
        vrrp_instance VIP_2 {
            state BACKUP
            interface eno16777736
            virtual_router_id 2
            priority 90
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass UU**99^^
            }
            virtual_ipaddress {
                192.168.4.121/24 dev eno16777736 label eno16777736:1
            }
        }
    
        virtual_server_group ngxsrvs {
            192.168.4.120 80
            192.168.4.121 80
        }
        virtual_server group ngxsrvs {
            ...
        }
    [root@DR1 ~]# systemctl restart keepalived.service   #restart the service
    [root@DR1 ~]# ifconfig   #eno16777736:1 is active here because DR2 has not been configured yet
        eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.4.116  netmask 255.255.255.0  broadcast 192.168.4.255
                inet6 fe80::20c:29ff:fe93:270f  prefixlen 64  scopeid 0x20<link>
                ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
                RX packets 54318  bytes 5480463 (5.2 MiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 38301  bytes 3274990 (3.1 MiB)
                TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
        eno16777736:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.4.120  netmask 255.255.255.0  broadcast 0.0.0.0
                ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
    
        eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
                ether 00:0c:29:93:27:0f  txqueuelen 1000  (Ethernet)
    [root@DR1 ~]# ipvsadm -ln
        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        TCP  192.168.4.120:80 rr
          -> 192.168.4.118:80             Route   1      0          0         
          -> 192.168.4.119:80             Route   1      0          0         
        TCP  192.168.4.121:80 rr
          -> 192.168.4.118:80             Route   1      0          0         
          -> 192.168.4.119:80             Route   1      0          0
    
    [root@DR2 ~]# vim /etc/keepalived/keepalived.conf   #edit DR2's config: add the instance and define the server group
        ...
        vrrp_instance VIP_2 {
            state MASTER
            interface eno16777736
            virtual_router_id 2
            priority 100
            advert_int 1
            authentication {
                auth_type PASS
                auth_pass UU**99^^
            }
            virtual_ipaddress {
                192.168.4.121/24 dev eno16777736 label eno16777736:1
            }
        }
    
        virtual_server_group ngxsrvs {
            192.168.4.120 80
            192.168.4.121 80
        }
        virtual_server group ngxsrvs {
            ...
        }
    [root@DR2 ~]# systemctl restart keepalived.service   #restart the service
    [root@DR2 ~]# ifconfig
        eno16777736: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.4.117  netmask 255.255.255.0  broadcast 192.168.4.255
                inet6 fe80::20c:29ff:fe3d:a31b  prefixlen 64  scopeid 0x20<link>
                ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
                RX packets 67943  bytes 6314537 (6.0 MiB)
                RX errors 0  dropped 0  overruns 0  frame 0
                TX packets 23250  bytes 2153847 (2.0 MiB)
                TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
        eno16777736:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
                inet 192.168.4.121  netmask 255.255.255.0  broadcast 0.0.0.0
                ether 00:0c:29:3d:a3:1b  txqueuelen 1000  (Ethernet)
    [root@DR2 ~]# ipvsadm -ln
        IP Virtual Server version 1.2.1 (size=4096)
        Prot LocalAddress:Port Scheduler Flags
          -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
        TCP  192.168.4.120:80 rr
          -> 192.168.4.118:80             Route   1      0          0         
          -> 192.168.4.119:80             Route   1      0          0         
        TCP  192.168.4.121:80 rr
          -> 192.168.4.118:80             Route   1      0          0         
          -> 192.168.4.119:80             Route   1      0          0 
    

    Client test:

    [root@client ~]# for i in {1..20};do curl http://192.168.4.120;done
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
    [root@client ~]# for i in {1..20};do curl http://192.168.4.121;done
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
        <h1> 192.168.4.119 RS2 server</h1>
        <h1> 192.168.4.118 RS1 server </h1>
    

     
