Building a Simple High-Availability Cluster with Nginx + Keepalived (51CTO Blog)

April 6, 2019, 11:51:30 | Author: Hongbo | Tags: build, cluster, high availability | Views: 275

I used to build high-availability clusters with heartbeat or corosync + pacemaker; keepalived turns out to be much simpler to set up.
The keepalived master multicasts VRRP advertisements to the backups. When a backup stops receiving those advertisements for a while, the backups hold an election; the winner becomes the new master and takes over the resources (the VIP). For the underlying theory, see http://bbs.ywlm.net/thread-790-1-1.html

Goal: two Nginx + Keepalived nodes in front of two LAMP nodes, forming a highly available web cluster.

Plan:

  1. ng1.laoguang.me    192.168.1.22  ng1
  2. ng2.laoguang.me    192.168.1.23  ng2
  3. lamp1.laoguang.me  192.168.1.24  lamp1
  4. lamp2.laoguang.me  192.168.1.25  lamp2

Topology: (diagram omitted)

1. Basic environment preparation

Install nginx on ng1 and ng2.
On lamp1 and lamp2, build a full LAMP stack or just install httpd (I only installed httpd; I won't walk through that here, see my other posts if you need it). Set the contents of index.html on lamp1 and lamp2 to "lamp1" and "lamp2" respectively so the nodes are easy to tell apart. In a real cluster the content would be identical, served from shared storage.
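Stamping each backend's test page takes one command per node. A minimal sketch (the DOCROOT override is mine, added so the commands can be dry-run outside a real backend, where the path would typically be /var/www/html):

```shell
# Give each backend a distinguishable test page.
# DOCROOT is a stand-in; on a real lamp node it would be /var/www/html.
docroot="${DOCROOT:-/tmp/lamp-demo}"
mkdir -p "$docroot"
node_label="lamp1"                 # use "lamp2" on the second node
echo "$node_label" > "$docroot/index.html"
cat "$docroot/index.html"          # → lamp1
```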

2. Install and configure keepalived on ng1 and ng2

Download: http://www.keepalived.org/download.html

2.1 Install keepalived

    tar xvf keepalived-1.2.7.tar.gz
    cd keepalived-1.2.7
    ./configure --prefix=/usr/local/keepalived
    # configure may complain about a missing popt-devel package; install it with yum
    make && make install

2.2 Set up the config files and scripts

    mkdir /etc/keepalived
    # keepalived reads its configuration from /etc/keepalived by default
    cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
    # it's a single binary, so copying it over is enough; with more files, extend PATH instead
    cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
    # extra options read by the init script
    cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
    # the init (startup) script
    cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
    # the main keepalived configuration file

2.3 Edit /etc/keepalived/keepalived.conf on ng1

    ! Configuration File for keepalived
    global_defs {
       notification_email {
         ibuler@qq.com                                # who gets mail on failure
       }
       notification_email_from keepalived@localhost   # sender address for failure mail
       smtp_server 127.0.0.1                          # SMTP server IP
       smtp_connect_timeout 30                        # SMTP connect timeout
       router_id LVS_DEVEL                            # server identifier
    }
    vrrp_instance VI_1 {
       state BACKUP
       # Set BACKUP on both nodes and let them elect a master. If you write MASTER here,
       # that node will always be master: when it fails, a backup takes over, and when
       # the original master recovers it preempts and becomes master again.
       interface eth0           # interface that sends VRRP advertisements; check yours really is eth0
       virtual_router_id 51     # virtual router ID; must match on master and backup
       priority 100             # the all-important priority
       nopreempt                # no preemption: a node that recovers after a failure does not grab the resources back
       advert_int 1             # advertisement interval (seconds)
       authentication {
           auth_type PASS               # authentication type
           auth_pass www.laoguang.me    # shared secret
       }
       virtual_ipaddress {
           192.168.1.18/24 dev eth0     # the VIP
       }
    }
    # Delete everything after this block; the rest is only useful on an LVS director.
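Keepalived can also run a `notify` script on every state transition, which makes failovers easier to audit than grepping /var/log/messages afterwards. A minimal sketch (the function name and message wording are my own; a real deployment would save this as a script and reference it with a `notify` line inside `vrrp_instance`):

```shell
# keepalived invokes a notify script with three arguments:
#   $1 = what changed ("GROUP" or "INSTANCE"), $2 = its name, $3 = the new state
notify_state_change() {
    local type=$1 name=$2 state=$3
    echo "keepalived: ${name} (${type}) entered state ${state}"
    # also record it in syslog so it lines up with keepalived's own log messages
    logger -t keepalived-notify "${name} -> ${state}" 2>/dev/null || true
}

notify_state_change INSTANCE VI_1 MASTER   # → keepalived: VI_1 (INSTANCE) entered state MASTER
```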

Copy the file to ng2, changing only the priority to 90:

    scp /etc/keepalived/keepalived.conf 192.168.1.23:/etc/keepalived/
    # on ng2:
    vi /etc/keepalived/keepalived.conf    # set priority 90; everything else stays the same

2.4 Start keepalived on ng1 and ng2

    service keepalived start

Check the log:

    tail /var/log/messages
    Nov 27 08:07:54 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Entering BACKUP STATE
    Nov 27 08:07:54 localhost Keepalived_vrrp[41871]: VRRP sockpool: [ifindex(2), proto(112), fd(11,12)]
    Nov 27 08:07:54 localhost Keepalived_healthcheckers[41870]: Using LinkWatch kernel netlink reflector...
    Nov 27 08:07:54 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) forcing a new MASTER election
    Nov 27 08:07:55 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Transition to MASTER STATE
    Nov 27 08:07:56 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Entering MASTER STATE
    Nov 27 08:07:56 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) setting protocol VIPs.
    Nov 27 08:07:56 localhost Keepalived_healthcheckers[41870]: Netlink reflector reports IP 192.168.1.18 added
    Nov 27 08:07:56 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.18
    Nov 27 08:08:01 localhost Keepalived_vrrp[41871]: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.18

Check which machine the VIP is bound to:

    ip addr     # on ng1
    ... (output omitted)
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:e8:90:0b brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.22/24 brd 192.168.1.255 scope global eth0
        inet 192.168.1.18/32 scope global eth0
        inet6 fe80::20c:29ff:fee8:900b/64 scope link
           valid_lft forever preferred_lft forever

So the VIP is bound to ng1.
3. Testing keepalived

3.1 Stop keepalived on ng1 (or power ng1 off) and watch the VIP move

    service keepalived stop
    ip addr
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:e8:90:0b brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.22/24 brd 192.168.1.255 scope global eth0
        inet6 fe80::20c:29ff:fee8:900b/64 scope link
           valid_lft forever preferred_lft forever

3.2 Check that ng2 has picked up the VIP

    ip addr
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:dd:00:77 brd ff:ff:ff:ff:ff:ff
        inet 192.168.1.23/24 brd 192.168.1.255 scope global eth0
        inet 192.168.1.18/32 scope global eth0
        inet6 fe80::20c:29ff:fedd:77/64 scope link
           valid_lft forever preferred_lft forever

The VIP failed over correctly, so keepalived is configured properly.

4. Configure nginx as a reverse proxy

4.1 Edit the nginx configuration

    vi /etc/nginx/nginx.conf
    user  nginx nginx;    # user and group nginx runs as
    worker_processes  2;  # number of worker processes
    error_log  /var/log/nginx/error.log  notice;  # error log
    pid        /tmp/nginx.pid;                    # pid file location
    worker_rlimit_nofile 65535;   # max open files per worker; pair with ulimit -SHn
    events {
        use epoll;                  # event model
        worker_connections  65536;  # max connections per worker
    }
    http {
        include       mime.types;                # pull in the MIME type map
        default_type  application/octet-stream;  # default type
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';   # log format
        access_log  /var/log/nginx/http.access.log  main;  # access log
        client_max_body_size 20m;           # max request body size
        client_header_buffer_size 16k;      # buffer for client request headers
        large_client_header_buffers 4 16k;  # number and size of buffers for large headers
        sendfile       on;    # send files from kernel space straight to the TCP queue
        tcp_nopush     on;
        tcp_nodelay    on;
        keepalive_timeout  65;  # keep-alive timeout
        gzip  on;               # enable compression
        gzip_min_length 1k;     # minimum size worth compressing
        gzip_buffers 4 16k;     # compression buffers
        gzip_http_version 1.1;  # protocol version supported
        gzip_comp_level 2;      # compression level
        gzip_types text/plain application/x-javascript text/css application/xml;  # types to compress
        gzip_vary on;           # let front-end caches store compressed pages
        upstream laoguang.me {  # define the backend pool with the upstream module
            # backend addresses; after max_fails errors within fail_timeout a server is taken out automatically
            server 192.168.1.24:80 max_fails=3 fail_timeout=10s;
            server 192.168.1.25:80 max_fails=3 fail_timeout=10s;
        }
        server {
            listen       80;            # listening port
            server_name  192.168.1.18;  # server name (the VIP)
            root   html;                # document root
            index  index.html index.htm;
            #charset koi8-r;
            access_log  logs/192.168.1.18.access.log  main;  # access log for this server
            location / {
                proxy_pass http://laoguang.me;  # reverse-proxy to the pool
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;  # pass the real client IP to the backends
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
            location /nginx {
                access_log off;
                stub_status on;  # status page
            }
            error_page   500 502 503 504  /50x.html;
            location = /50x.html {
                root   html;
            }
        }
    }
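The `upstream` block above distributes requests round-robin with equal weight. If the two backends were unequal, a `weight` parameter would skew the split; a sketch with hypothetical values:

```nginx
upstream laoguang.me {
    # weight=2 sends roughly two requests to lamp1 for every one to lamp2
    server 192.168.1.24:80 weight=2 max_fails=3 fail_timeout=10s;
    server 192.168.1.25:80 weight=1 max_fails=3 fail_timeout=10s;
}
```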

4.2 Copy it to ng2

    scp /etc/nginx/nginx.conf 192.168.1.23:/etc/nginx/

4.3 Test that the reverse proxy balances the load

Start httpd on lamp1 and lamp2:

    service httpd start

Restart nginx on ng1:

    service nginx restart

Hit ng1's real IP and check that requests alternate between the backends:
http://192.168.1.22

Test ng2 the same way. If both balance the load, continue.

5. Test keepalived and nginx together

The VIP 192.168.1.18 currently sits on ng2. Browse http://192.168.1.18 and check that requests alternate between the backends.
On ng2, run service keepalived stop, then browse http://192.168.1.18 again and confirm it still alternates (the VIP should now be on ng1).
Stop httpd on lamp1 (service httpd stop) and browse http://192.168.1.18 to confirm no errors appear (all requests should go to lamp2).
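While flipping services off and on in the steps above, a probe loop on a client machine shows exactly when, and for how long, the VIP stops answering. A sketch (the function name is mine; pass whatever URL and duration you are testing):

```shell
# Prints one line per second: the backend's page, or "down" when the VIP is unreachable.
probe_vip() {    # usage: probe_vip <url> <seconds>
    local url=$1 n=$2
    for _ in $(seq 1 "$n"); do
        curl -s --max-time 1 "$url" || echo down
        sleep 1
    done
}

probe_vip http://192.168.1.18/ 5
```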

This completes the highly available web server: there is no single point of failure, and a failure at any single point does not interrupt service.
