
keepalived: Highly Available nginx Load Balancing

最编程 2024-03-17 21:55:30

Contents

  • keepalived
    • Highly available nginx load balancing
      • Environment
        • Environment preparation
        • Configure the backend servers
        • Configure nginx load balancing
      • Install keepalived
        • Configure the MASTER keepalived
        • Configure the BACKUP keepalived
      • Configure keepalived to monitor nginx
        • Add the check script on the MASTER node
      • Add the monitoring scripts to the keepalived configuration
        • Configure the MASTER keepalived
        • Configure the BACKUP keepalived
        • Test
        • Configure the MASTER not to preempt

keepalived

Highly available nginx load balancing


Environment

Role                     IP                   Hostname  OS
Load balancer (nginx)    192.168.227.153/24   LB1       rockylinux9
Load balancer (nginx)    192.168.227.141/24   LB2       rockylinux9
Backend server (apache)  192.168.227.148/24   RS1       rockylinux9
Backend server (apache)  192.168.227.147/24   RS2       rockylinux9
Environment preparation
Permanently disable the firewall and SELinux on all four hosts (shown on LB1; repeat on each host):
[root@LB1 ~]#: systemctl disable --now firewalld
Removed "/etc/systemd/system/multi-user.target.wants/firewalld.service".
Removed "/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service".
[root@LB1 ~]#: setenforce 0
[root@LB1 ~]#: vi /etc/selinux/config 
[root@LB1 ~]#: systemctl status firewalld
○ firewalld.service - firewalld - dynamic firewall daemon
     Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
       Docs: man:firewalld(1)

Mar 02 14:59:51 zyq systemd[1]: Starting firewalld - dynamic firewall daemon...
Mar 02 14:59:53 zyq systemd[1]: Started firewalld - dynamic firewall daemon.
Mar 02 15:30:17 LB1 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Mar 02 15:30:17 LB1 systemd[1]: firewalld.service: Deactivated successfully.
Mar 02 15:30:17 LB1 systemd[1]: Stopped firewalld - dynamic firewall daemon.
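Note that setenforce 0 only disables SELinux until the next reboot; the edit made in /etc/selinux/config is what makes it permanent. A sketch of that substitution, demonstrated here on a throwaway copy so it is safe to experiment with (on the real hosts, point sed at /etc/selinux/config instead):

```shell
# Demonstrate the permanent SELinux change on a temporary copy of the file.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# The same substitution you would run against /etc/selinux/config:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"

grep '^SELINUX=' "$cfg"   # prints: SELINUX=disabled
rm -f "$cfg"
```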

Configure the backend servers
Install Apache on both backend servers:
[root@RS1 ~]#: yum -y install httpd
.....
[root@RS2 ~]# yum -y install httpd

Create each backend's web page and start httpd (the systemctl enable --now httpd step is not shown in the transcript below, but the ss output confirms port 80 is listening):
[root@RS1 ~]#: echo 'this is R1' > /var/www/html/index.html
[root@RS1 ~]#: cat /var/www/html/index.html 
this is R1
[root@RS1 ~]#: ss -antl
State      Recv-Q     Send-Q         Local Address:Port         Peer Address:Port    Process     
LISTEN     0          128                  0.0.0.0:22                0.0.0.0:*                   
LISTEN     0          511                        *:80                      *:*                   
LISTEN     0          128                     [::]:22                   [::]:*                   

[root@RS2 ~]# echo 'this is R2' > /var/www/html/index.html
[root@RS2 ~]# 
[root@RS2 ~]# cat /var/www/html/index.html 
this is R2
[root@RS2 ~]# ss -antl
State      Recv-Q     Send-Q         Local Address:Port         Peer Address:Port    Process     
LISTEN     0          128                  0.0.0.0:22                0.0.0.0:*                   
LISTEN     0          128                     [::]:22                   [::]:*                   
LISTEN     0          511                        *:80                      *:*                   
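Before putting nginx in front of them, it's worth confirming each backend answers on port 80 over the network. A small probe loop (IPs from the table above; the `|| echo` keeps the loop going if a host is unreachable):

```shell
# Probe each real server's web page; --max-time bounds the wait per host.
for rs in 192.168.227.148 192.168.227.147; do
  echo "== $rs =="
  curl -s --max-time 2 "http://$rs/" || echo "no response"
done
```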
Configure nginx load balancing
Install nginx on both load balancers:
[root@LB1 ~]#: yum -y install nginx
.....
[root@LB2 ~]#: yum -y install nginx

Configure nginx on the load balancers (shown on LB2; apply the same change on LB1)
Define the upstream:
[root@LB2 ~]# cd /etc/nginx/
[root@LB2 nginx]# ls
conf.d                fastcgi_params          mime.types          scgi_params           win-utf
default.d             fastcgi_params.default  mime.types.default  scgi_params.default
fastcgi.conf          koi-utf                 nginx.conf          uwsgi_params
fastcgi.conf.default  koi-win                 nginx.conf.default  uwsgi_params.default
[root@LB2 nginx]# vim nginx.conf
..... 
upstream webs {                         # define the upstream group
        server 192.168.227.148;
        server 192.168.227.147;
    }   

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;
    
    location / {                      # proxy all requests to the upstream
            proxy_pass http://webs;  
        }
.....
.....
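Round-robin with equal weights is nginx's default. If the backends differ in capacity, standard per-server parameters can be added; a hedged variant of the same upstream (weight, max_fails, and fail_timeout are stock ngx_http_upstream_module directives, the values here are illustrative):

```
upstream webs {
    # RS1 gets twice the traffic; a server is marked failed after 3
    # unsuccessful attempts and is skipped for 10 seconds.
    server 192.168.227.148 weight=2 max_fails=3 fail_timeout=10s;
    server 192.168.227.147 weight=1 max_fails=3 fail_timeout=10s;
}
```

After any edit, validate with `nginx -t` before reloading.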

Start nginx on both load balancers:
[root@LB1 nginx]#: systemctl enable --now nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@LB1 nginx]#: ss -antl
State      Recv-Q     Send-Q         Local Address:Port         Peer Address:Port    Process     
LISTEN     0          128                  0.0.0.0:22                0.0.0.0:*                   
LISTEN     0          511                  0.0.0.0:80                0.0.0.0:*                   
LISTEN     0          128                     [::]:22                   [::]:*                   
LISTEN     0          511                     [::]:80                   [::]:*                   

[root@LB2 nginx]# systemctl enable --now nginx
Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service.
[root@LB2 nginx]# ss -antl
State      Recv-Q     Send-Q         Local Address:Port         Peer Address:Port    Process     
LISTEN     0          511                  0.0.0.0:80                0.0.0.0:*                   
LISTEN     0          128                  0.0.0.0:22                0.0.0.0:*                   
LISTEN     0          511                     [::]:80                   [::]:*                   
LISTEN     0          128                     [::]:22                   [::]:*                   


Test the load balancing
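The round-robin behavior can be seen from the command line; with both backends up, successive requests to either LB (LB1's lab address is used here) alternate between the two pages:

```shell
# Four requests in a row; expect "this is R1" / "this is R2" alternating.
for i in 1 2 3 4; do
  curl -s --max-time 2 http://192.168.227.153/ || echo "no response"
done
```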

Install keepalived

Install keepalived on both load balancers (shown on LB1; repeat on LB2):
[root@LB1 ~]#: yum list all | grep keepalived
keepalived.x86_64                                    2.2.8-3.el9                         appstream 
[root@LB1 ~]#: yum -y install keepalived
.....

Configure the MASTER keepalived
Locate the configuration file:
[root@LB1 ~]#: cd /etc/keepalived/
[root@LB1 keepalived]#: ls
keepalived.conf
Back up the original file:
[root@LB1 keepalived]#: mv keepalived.conf{,.bak}
[root@LB1 keepalived]#: ls
keepalived.conf.bak

Write the new keepalived configuration:
[root@LB1 keepalived]#: vim keepalived.conf
[root@LB1 keepalived]#: cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01         # host identifier
}

vrrp_instance VI_1 {
    state MASTER            # this node starts as MASTER
    interface ens160
    virtual_router_id  77   # must match on both nodes
    priority 100            # higher priority wins the election
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1
    }
    virtual_ipaddress {
        192.168.227.200      # the VIP
    }
}

virtual_server 192.168.227.200 80 {    # the VIP
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.227.153 80 {    # LB1, the MASTER node
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.227.141 80  {     # LB2, the BACKUP node
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Restart keepalived:
[root@LB1 keepalived]#: systemctl restart keepalived.service 
Configure the BACKUP keepalived
Locate the configuration file:
[root@LB2 ~]#: cd /etc/keepalived/
[root@LB2 keepalived]#: ls
keepalived.conf
Back up the original file:
[root@LB2 keepalived]#: mv keepalived.conf{,.bak}
[root@LB2 keepalived]#: ls
keepalived.conf.bak

Write the new keepalived configuration:
[root@LB2 keepalived]#: vim keepalived.conf
[root@LB2 keepalived]# cat keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02      # host identifier
}

vrrp_instance VI_1 {    
    state BACKUP        # this node starts as BACKUP
    interface ens160
    virtual_router_id  77    # must match the MASTER
    priority 80              # lower than the MASTER's priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1
    }
    virtual_ipaddress {
        192.168.227.200      #VIP
    }
}

virtual_server 192.168.227.200 80 {   #VIP
    delay_loop 6 
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.227.153 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.227.141 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

[root@LB2 keepalived]# systemctl restart keepalived.service 

At this point the VIP is on the MASTER node:
[root@LB1 ~]#: ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:92:c0:51 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.227.153/24 brd 192.168.227.255 scope global dynamic noprefixroute ens160
       valid_lft 1192sec preferred_lft 1192sec
    inet 192.168.227.200/32 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe92:c051/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever

The BACKUP node does not hold the VIP:
[root@LB2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:4b:2c:31 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.227.141/24 brd 192.168.227.255 scope global dynamic noprefixroute ens160
       valid_lft 1190sec preferred_lft 1190sec
    inet6 fe80::20c:29ff:fe4b:2c31/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
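Checking which node holds the VIP is just a grep over `ip` output. A hypothetical helper (the echoed line stands in for `ip -4 addr show ens160` on a real LB):

```shell
# Succeeds when the VIP 192.168.227.200 appears in the input.
has_vip() { grep -q '192\.168\.227\.200/'; }

# Sample line as it appears on the node that owns the VIP:
if echo "inet 192.168.227.200/32 scope global ens160" | has_vip; then
  echo "VIP present"
fi
```

On the LBs themselves: `ip -4 addr show ens160 | has_vip && echo "VIP present"`.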

Configure keepalived to monitor nginx

Add the check script on the MASTER node:
[root@LB1 ~]#: mkdir /scripts
[root@LB1 ~]#: cd /scripts/
[root@LB1 scripts]#: vim check_nginx.sh
[root@LB1 scripts]#: cat check_nginx.sh 
#!/bin/bash

nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
if [ $nginx_status -lt 1 ];then
    systemctl stop keepalived
fi
[root@LB1 scripts]#: chmod +x check_nginx.sh 
[root@LB1 scripts]#: ll
total 4
-rwxr-xr-x. 1 root root 143 Mar  2 17:01 check_nginx.sh
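The ps|grep pipeline above can be exercised on its own. A small wrapper (the name `count_procs` is hypothetical, not part of the original script) makes the counting logic easy to test interactively:

```shell
# Count processes whose command line contains the given word,
# excluding the grep itself and this script.
count_procs() {
  ps -ef | grep -Ev "grep|$0" | grep -c "\b$1\b"
}

# Start a short-lived process and count it:
sleep 3 &
count_procs sleep
```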

The notify script below runs on VRRP state transitions; it will be deployed to the BACKUP node:
[root@LB1 scripts]#: vim notify.sh
[root@LB1 scripts]#: cat notify.sh 
#!/bin/bash

case "$1" in
  master)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
        if [ $nginx_status -lt 1 ];then
            systemctl start nginx
        fi
  ;;
  backup)
        nginx_status=$(ps -ef|grep -Ev "grep|$0"|grep '\bnginx\b'|wc -l)
        if [ $nginx_status -gt 0 ];then
            systemctl stop nginx
        fi
  ;;
  *)
        echo "Usage:$0 master|backup"
  ;;
esac
[root@LB1 scripts]#: chmod +x notify.sh 
[root@LB1 scripts]#: ll
total 8
-rwxr-xr-x. 1 root root 143 Mar  2 17:01 check_nginx.sh
-rwxr-xr-x. 1 root root 441 Mar  2 17:04 notify.sh
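The case dispatch in notify.sh can be sanity-checked in isolation with a stub that echoes instead of touching systemd (the `dispatch` function and its messages are illustrative, not part of the real script):

```shell
# Stub mirroring notify.sh's argument handling, minus the systemctl calls.
dispatch() {
  case "$1" in
    master) echo "would start nginx" ;;
    backup) echo "would stop nginx" ;;
    *)      echo "Usage:$0 master|backup" ;;
  esac
}

dispatch master   # prints: would start nginx
dispatch backup   # prints: would stop nginx
```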
Copy the script to the BACKUP node (the /scripts directory must already exist on LB2):
[root@LB1 scripts]#: scp notify.sh 192.168.227.141:/scripts/
root@192.168.227.141's password: 
notify.sh                                                      100%  441     1.0MB/s   00:00    

Verify on LB2:
[root@LB2 ~]# mkdir /scripts
[root@LB2 ~]# cd /scripts/
[root@LB2 scripts]# ls
notify.sh
[root@LB2 scripts]# ll
total 4
-rwxr-xr-x. 1 root root 441 Mar  2 17:05 notify.sh

Add the monitoring scripts to the keepalived configuration

Configure the MASTER keepalived
[root@LB1 keepalived]#: vim /etc/keepalived/keepalived.conf
[root@LB1 keepalived]#: cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb01
}

vrrp_script nginx_check {                 # add this block
    script "/scripts/check_nginx.sh"
    interval 1
    weight -30                # on failure, the MASTER's priority must drop below the BACKUP's
}

vrrp_instance VI_1 {
    state MASTER
    interface ens160
    virtual_router_id  77
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1
    }
    virtual_ipaddress {
        192.168.227.200
    }

    track_script {                   # add this block
        nginx_check
    }
}

virtual_server 192.168.227.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.227.153 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.227.141 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

[root@LB1 keepalived]#: systemctl restart keepalived
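The arithmetic behind `weight -30`: while the tracked script fails, keepalived advertises the MASTER at priority 100 + (-30) = 70, below the BACKUP's 80, so the BACKUP wins the election. (In this setup check_nginx.sh also stops keepalived outright, which triggers failover even more directly; the weight matters whenever a check script merely exits non-zero.)

```shell
# Effective priority while the tracked script is failing:
master_base=100
weight=-30
backup=80

effective=$(( master_base + weight ))
echo "$effective"                             # prints: 70

[ "$effective" -lt "$backup" ] && echo "BACKUP takes over"
```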
Configure the BACKUP keepalived
[root@LB2 ~]# vim /etc/keepalived/keepalived.conf
[root@LB2 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id lb02
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id  77
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1
    }
    virtual_ipaddress {
        192.168.227.200
    }
    
    notify_master "/scripts/notify.sh master"   # add these two notify lines
    notify_backup "/scripts/notify.sh backup"
}

virtual_server 192.168.227.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP

    real_server 192.168.227.153 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

    real_server 192.168.227.141 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

[root@LB2 ~]# systemctl restart keepalived.service 
Test
Now check the VIP and the nginx port 80 listener on LB1 and LB2.

// LB1 currently holds the VIP and nginx is running
[root@LB1 ~]#: ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:92:c0:51 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.227.153/24 brd 192.168.227.255 scope global dynamic noprefixroute ens160
       valid_lft 902sec preferred_lft 902sec
    inet 192.168.227.200/32 scope global ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe92:c051/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@LB1 ~]#: ss -antl
State     Recv-Q    Send-Q         Local Address:Port         Peer Address:Port    Process    
LISTEN    0         511                  0.0.0.0:80                0.0.0.0:*                  
LISTEN    0         128                  0.0.0.0:22                0.0.0.0:*                  
LISTEN    0         511                     [::]:80                   [::]:*                  
LISTEN    0         128                     [::]:22                   [::]:*                  


// LB2 currently has no VIP and nginx is not running
[root@LB2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:4b:2c:31 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.227.141/24 brd 192.168.227.255 scope global dynamic noprefixroute ens160
       valid_lft 929sec preferred_lft 929sec
    inet6 fe80::20c:29ff:fe4b:2c31/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
[root@LB2 ~]# ss -antl
State     Recv-Q    Send-Q         Local Address:Port         Peer Address:Port    Process    
LISTEN    0         128                  0.0.0.0:22                0.0.0.0:*                  
LISTEN    0         128                     [::]:22                   [::]:*                  


Simulate a failure of the MASTER node LB1 by stopping nginx:
[root@LB1 ~]#: systemctl stop nginx.service 
[root@LB1 ~]#: systemctl status nginx.service 
○ nginx.service - The nginx HTTP and reverse proxy server
     Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
     Active: inactive (dead) since Mon 2024-03-04 14:30:55 CST; 15s ago
   Duration: 5h 40min 21.447s
    Process: 925 ExecStartPre=/usr/bin/rm -f /run/nginx.pid (code=exited, status=0/SUCCESS)
    Process: 931 ExecStartPre=/usr/sbin/nginx -t (code=exited, status=0/SUCCESS)
    Process: 946 ExecStart=/usr/sbin/nginx (code=exited, status=0/SUCCESS)
   Main PID: 951 (code=exited, status=0/SUCCESS)
        CPU: 396ms

Mar 04 08:50:33 LB1 systemd[1]: Starting The nginx HTTP and reverse proxy server...
Mar 04 08:50:33 LB1 nginx[931]: nginx: the configuration file /etc/nginx/nginx.conf syntax is>
Mar 04 08:50:33 LB1 nginx[931]: nginx: configuration file /etc/nginx/nginx.conf test is succe>
Mar 04 08:50:33 LB1 systemd[1]: Started The nginx HTTP and reverse proxy server.
Mar 04 14:30:54 LB1 systemd[1]: Stopping The nginx HTTP and reverse proxy server...
Mar 04 14:30:55 LB1 systemd[1]: nginx.service: Deactivated successfully.
Mar 04 14:30:55 LB1 systemd[1]: Stopped The nginx HTTP and reverse proxy server.

[root@LB1 ~]#: systemctl status keepalived.service 
○ keepalived.service - LVS and VRRP High Availability Monitor
     Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: dis>
     Active: inactive (dead)

Mar 04 14:30:55 LB1 Keepalived_healthcheckers[1654]: Shutting down service [192.168.227.141]:>
Mar 04 14:30:55 LB1 Keepalived_healthcheckers[1654]: Stopped - used 0.001174 user time, 0.037>
Mar 04 14:30:55 LB1 Keepalived_vrrp[1655]: (VI_1) sent 0 priority
Mar 04 14:30:55 LB1 Keepalived_vrrp[1655]: (VI_1) removing VIPs.
Mar 04 14:30:56 LB1 Keepalived_vrrp[1655]: Stopped - used (self/children) 0.007715/1.940067 u>
Mar 04 14:30:56 LB1 Keepalived[1653]: CPU usage (self/children) user: 0.003873/1.948956 syste>
Mar 04 14:30:56 LB1 Keepalived[1653]: Stopped Keepalived v2.2.8 (04/04,2023), git commit v2.2>
Mar 04 14:30:56 LB1 systemd[1]: keepalived.service: Deactivated successfully.
Mar 04 14:30:56 LB1 systemd[1]: Stopped LVS and VRRP High Availability Monitor.
Mar 04 14:30:56 LB1 systemd[1]: keepalived.service: Consumed 5.140s CPU time.

[root@LB1 ~]#: ss -antl
State     Recv-Q    Send-Q         Local Address:Port         Peer Address:Port    Process    
LISTEN    0         128                  0.0.0.0:22                0.0.0.0:*                  
LISTEN    0         128                     [::]:22                   [::]:*                  
[root@LB1 ~]#: 
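With LB1 down, the VIP should now answer from LB2, and client requests should keep working. A quick probe against the VIP (lab address; the `|| echo` keeps the loop safe when the VIP is unreachable):

```shell
# Requests to the VIP should still return the backend pages, now via LB2.
for i in 1 2; do
  curl -s --max-time 2 http://192.168.227.200/ || echo "no response"
done
```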


Configure the MASTER not to preempt
By default, when LB1 recovers it preempts the VIP back, causing a second switchover. Adding nopreempt keeps the VIP on LB2 until LB2 itself fails; note that keepalived honors nopreempt only when the instance's initial state is BACKUP.
[root@LB1 ~]#: vim /etc/keepalived/keepalived.conf
......
......
vrrp_instance VI_1 {
    state BACKUP                # changed from MASTER to BACKUP (required for nopreempt)
    interface ens160
    nopreempt                   # add this line
    virtual_router_id  77
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1
    }
    virtual_ipaddress {
        192.168.227.200
    }

    track_script {
        nginx_check
    }
}
.....
.....

[root@LB1 ~]#: systemctl restart keepalived.service