keepalived: High-Availability Clusters

HA Cluster


Cluster types: LB (load balancing), HA (high availability), HP (high performance)

System availability formula: A = MTBF / (MTBF + MTTR)

A always falls in the range (0, 1)

Usually expressed in "nines": 99%, …, 99.999%

    Three nines (99.9%) is a practical availability target for most systems
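As a quick illustration of the formula with made-up numbers: a system that fails on average every 10000 hours and takes 1 hour to repair reaches roughly four nines of availability.

# awk 'BEGIN { mtbf = 10000; mttr = 1; printf "A = %.5f\n", mtbf / (mtbf + mttr) }'
A = 0.99990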

How to reduce MTTR: redundancy (redundant components)

active/passive

active --> HEARTBEAT --> passive

    

(1) How many passive nodes?

        There can be more than one standby node; heartbeat messages are best sent to a multicast group

        If no heartbeat is received for 3 consecutive intervals, resources are failed over

(2) Resource failover?

    Failover of the IP address and the service

·shared storage:

        NAS: file server, file-level sharing

        SAN: storage area network, block-level sharing

    

·Network partition:

        Fencing devices:

                node level: STONITH

                resource level: fence

    

·quorum:

        with quorum: votes > total/2, the partition may hold resources

        without quorum: votes <= total/2, the partition must release resources

    

·HA Service:

    e.g. an nginx service: VIP + nginx

    

·Two-node cluster?

    Needs a tie-breaker: ping node, quorum disk

HA Cluster implementation options:

VRRP-based: essentially heartbeat messages, with failover decided by node priority

   keepalived

    

AIS: full-featured HA clusters; the quorum voting mechanism described above is in fact an AIS mechanism

        heartbeat

        corosync

keepalived:

·VRRP: Virtual Router Redundancy Protocol

Terminology:

        Virtual Router

        Virtual router identifier: VRID (0-255)

        Physical routers:

                master: the active device

                backup: the standby device(s)

        priority: 1-254 usable (0 and 255 are reserved by the protocol)

        VIP: Virtual IP

        VMAC: Virtual MAC (00-00-5e-00-01-VRID)

    

Advertisements: carry the heartbeat, priority, etc.; sent periodically

    

Preemptive vs. non-preemptive:

        Preemptive: when the original master recovers, it takes the resources back

        Non-preemptive: when the original master recovers, it does not reclaim the resources but waits for the next election

    

Security:

        Authentication:

                none

                simple string (plain-text password): the recommended option

                MD5

    

Working modes:

        active/standby: a single virtual router

        active/active: master/backup (virtual router 1), backup/master (virtual router 2)

·keepalived

A software implementation of VRRP, originally designed to provide high availability for the ipvs service:

        VRRP floats the address (VIP) between nodes

        Generates ipvs rules on the node holding the VIP (pre-defined in the configuration file)

        Performs health checks on each RS of the ipvs cluster

        Provides a script-call interface: executing user-defined scripts and using their results to influence cluster behavior

    

High availability for web servers is mostly built on keepalived

    

Components:

        Core components:

                vrrp stack

                ipvs wrapper

                checkers

        Control component: configuration file parser

        I/O multiplexer

        Memory management component

·Prerequisites for configuring an HA cluster (example commands follow this list):

(1) Time must be synchronized across all nodes

   ntp, chrony

    

(2) Make sure iptables and SELinux do not get in the way

    

(3) Nodes can reach each other by hostname (not strictly required for keepalived)

   Using the /etc/hosts file is recommended

    

(4) The root users on the nodes can reach each other over key-based SSH (not required)
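A minimal way to satisfy these prerequisites on CentOS might look like the following (hostnames and addresses are illustrative; adjust them to your environment, and ntpdate against a local time server works just as well as chrony):

# (1) time synchronization, here with chrony
[root@node1 ~]# yum -y install chrony
[root@node1 ~]# service chronyd start

# (3) hostname resolution via /etc/hosts, on every node
[root@node1 ~]# cat >> /etc/hosts << EOF
10.1.43.1 node1
10.1.43.2 node2
EOF

# (4) key-based ssh for root between the nodes
[root@node1 ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
[root@node1 ~]# ssh-copy-id root@node2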

Installing and configuring keepalived:

Since CentOS 6.4, keepalived has been included in the base repository

    

Program environment:

        Configuration file: /etc/keepalived/keepalived.conf

        Main program: /usr/sbin/keepalived

        Unit file: keepalived.service

    

Configuration file structure:

        TOP HIERARCHY

                GLOBAL CONFIGURATION

                        Global definitions

                        Static routes/addresses

                VRRPD CONFIGURATION

                        VRRP synchronization group(s)

                        VRRP instance(s)

                LVS CONFIGURATION

                        Virtual server group(s)

                        Virtual server(s)

Configuration syntax:

·Global definitions:

vrrp_mcast_group4 224.0.100.19: the multicast group address keepalived uses for VRRP advertisements
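To confirm that advertisements are actually being sent to the configured group, they can be captured on the bound interface (the interface name and group address below are just the examples used here):

[root@node1 ~]# tcpdump -nn -i eth0 host 224.0.100.19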

·Configuring a virtual router:

vrrp_instance <STRING> {

    …

}

        

Instance-specific parameters:

        state MASTER|BACKUP: the initial state of this node within the virtual router; only one node may be MASTER, all others should be BACKUP

        interface IFACE_NAME: the physical interface bound to this virtual router

        virtual_router_id VRID: the unique identifier of this virtual router, range 0-255

        priority 100: this host's priority within the virtual router; range 1-254

        advert_int 1: interval between VRRP advertisements

        authentication {

                auth_type AH|PASS

                auth_pass <PASSWORD> # a PASS string may be at most 8 characters; anything longer is truncated to the first 8

        }

        virtual_ipaddress {

                <IPADDR>/<MASK> brd <IPADDR> dev <STRING> scope <SCOPE> label <LABEL>

                192.168.200.17/24 dev eth1

                192.168.200.18/24 dev eth2 label eth2:1

        }

        track_interface {

                eth0

                eth1

                

        }

        The interfaces listed in track_interface are monitored; if a tracked interface fails, the node transitions to the FAULT state

        nopreempt: sets the working mode to non-preemptive

                If a vrrp_instance is defined as nopreempt, the state of every host in that vrrp_instance must be defined as BACKUP

                The default is preempt (preemptive mode)

    

        preempt_delay 300: in preemptive mode, the delay after a node comes back online before a new election is triggered

        

        Notification scripts:

                notify_master <STRING>|<QUOTED-STRING>: script triggered when this node becomes the master

                notify_backup <STRING>|<QUOTED-STRING>: script triggered when this node transitions to backup

                notify_fault <STRING>|<QUOTED-STRING>: script triggered when this node transitions to the FAULT state

                

                notify <STRING>|<QUOTED-STRING>: generic notification hook; a single script handles all three state transitions above and must not be used together with the three directives above

Single-master (active/standby) mode:

[Topology diagram: 11.png]

Host 1:

[root@node1 keepalived]# yum -y install httpd keepalived
[root@node1 keepalived]# vim /var/www/html/index.html
keepalived1
[root@node1 keepalived]# service httpd start
[root@node1 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id inode1
    vrrp_mcast_group4 224.0.43.200
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 18
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 9a735491
    }
    virtual_ipaddress {
        10.1.43.100/16 dev eth0
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
[root@node1 keepalived]# service keepalived start
    
[root@node1 keepalived]# cat notify.sh 
#!/bin/bash
#
contact='root@localhost'

notify() {
	mailsubject="$(hostname) to be $1, vip floating"
	mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
	echo "$mailbody" | mail -s "$mailsubject" $contact
}

case $1 in
master)
	notify master
	;;
backup)
	notify backup
	;;
fault)
	notify fault
	;;
*)
	echo "Usage: $(basename $0) {master|backup|fault}"
	exit 1
	;;
esac

Host 2:

[root@node2 keepalived]# yum -y install httpd keepalived
[root@node2 keepalived]# vim /var/www/html/index.html
keepalived2
[root@node2 keepalived]# service httpd start
[root@node2 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id inode2
    vrrp_mcast_group4 224.0.43.200
}
    
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 18
    priority 98
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 9a735491
    }
    virtual_ipaddress {
        10.1.43.100/16 dev eth0
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
[root@node2 keepalived]# service keepalived start

Request results:

[root@node3 ~]# curl 10.1.43.100
keepalived1
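To verify that failover actually works, a minimal check along these lines can be run (the exact output depends on which node currently holds the VIP):

[root@node1 ~]# ip addr show dev eth0 | grep 10.1.43.100   # the VIP should currently sit on node1
[root@node1 ~]# service keepalived stop                    # simulate a failure of the master
[root@node2 ~]# ip addr show dev eth0 | grep 10.1.43.100   # the VIP should now appear on node2
[root@node3 ~]# curl 10.1.43.100                           # should now return keepalived2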


Dual-master (active/active) mode:

[Topology diagram: 12.png]

Host 1:

[root@node1 keepalived]# yum -y install httpd keepalived
[root@node1 keepalived]# vim /var/www/html/index.html
keepalived1
[root@node1 keepalived]# service httpd start
[root@node1 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id inode1
    vrrp_mcast_group4 224.0.43.200
}
        
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 18
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 9a735491
    }
    virtual_ipaddress {
        10.1.43.100/16 dev eth0
    }
}
    
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 19
    priority 98
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 3a732491
    }
    virtual_ipaddress {
        10.1.43.200/16 dev eth0
    }
}
[root@node1 keepalived]# service keepalived start

Host 2:

[root@node2 keepalived]# yum -y install httpd keepalived
[root@node2 keepalived]# vim /var/www/html/index.html
keepalived2
[root@node2 keepalived]# service httpd start
[root@node2 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id inode2
    vrrp_mcast_group4 224.0.43.200
}
    
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 18
    priority 98
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 9a735491
    }
    virtual_ipaddress {
        10.1.43.100/16 dev eth0
    }
}
    
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 19
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 3a732491
    }
    virtual_ipaddress {
        10.1.43.200/16 dev eth0
    }
}
[root@node2 keepalived]# service keepalived start

Request results:

[root@node3 ~]# curl 10.1.43.100
keepalived1
[root@node3 ~]# curl 10.1.43.200
keepalived2


·Virtual servers:

Configuration parameters:

        virtual_server IP port |

        virtual_server fwmark int

        {

                

                real_server {

                    

                }

                

        }

    

Common parameters:

        delay_loop <INT>: polling interval, i.e. how often health checks are run against the back-end servers

        lb_algo rr|wrr|lc|wlc|lblc|sh|dh: scheduling method

        lb_kind NAT|DR|TUN: cluster (forwarding) type

        persistence_timeout <INT>: persistent connection timeout

        protocol TCP: service protocol; only TCP is supported

        sorry_server <IPADDR> <PORT>: address of the fallback server used when all real servers are down

        real_server <IPADDR> <PORT>

        {

                weight <INT>

                notify_up <STRING>|<QUOTED-STRING>

                notify_down <STRING>|<QUOTED-STRING>

                HTTP_GET|SSL_GET|TCP_CHECK|SMTP_CHECK|MISC_CHECK { … }: health-check method for this real server

        }

            

        HTTP_GET|SSL_GET {

                url {

                        path <URL_PATH>: the URL to monitor

                        status_code <INT>: response code treated as healthy by this check

                        digest <STRING>: checksum of the response body treated as healthy by this check

                }

                nb_get_retry <INT>: number of retries

                delay_before_retry <INT>: delay before each retry

                connect_ip <IP ADDRESS>: IP address on the RS to send health-check requests to

                connect_port <PORT>: port on the RS to send health-check requests to

                bindto <IP ADDRESS>: source address used for the health-check requests

                bind_port <PORT>: source port used for the health-check requests

                connect_timeout <INTEGER>: connection timeout

        }

            

        TCP_CHECK {

                connect_ip <IP ADDRESS>: IP address on the RS to send health-check requests to

                connect_port <PORT>: port on the RS to send health-check requests to

                bindto <IP ADDRESS>: source address used for the health-check requests

                bind_port <PORT>: source port used for the health-check requests

                connect_timeout <INTEGER>: connection timeout

        }

keepalived + LVS scheduling the httpd service:

Topology:

[Topology diagram: 13.png]

Test environment:

Host node1: 10.1.43.1, runs keepalived

Host node3: 10.1.43.2, runs keepalived

Host node4: 10.1.43.101, runs httpd

Host node5: 10.1.43.102, runs httpd

Host 1:

[root@node1 keepalived]# yum -y install httpd keepalived
[root@node1 keepalived]# vim /var/www/html/index.html
keepalived1
[root@node1 keepalived]# service httpd start
[root@node1 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
        root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id inode1
   vrrp_mcast_group4 224.0.43.200
}
    
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 18
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 9a735491
    }
    virtual_ipaddress {
        10.1.43.100/16 dev eth0
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
    
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 19
    priority 98
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 3a732491
    }
    virtual_ipaddress {
        10.1.43.200/16 dev eth0
    }
}
    
virtual_server 10.1.43.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 10.1.43.101 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.1.43.102 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
    
virtual_server 10.1.43.200 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 10.1.43.101 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.1.43.102 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
[root@node1 keepalived]# service keepalived start

Host 2:

[root@node2 keepalived]# yum -y install httpd keepalived
[root@node2 keepalived]# vim /var/www/html/index.html
keepalived2
[root@node2 keepalived]# service httpd start
[root@node2 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id inode2
   vrrp_mcast_group4 224.0.43.200
}
    
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 18
    priority 98
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 9a735491
    }
    virtual_ipaddress {
        10.1.43.100/16 dev eth0
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
    
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 19
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 3a732491
    }
    virtual_ipaddress {
        10.1.43.200/16 dev eth0
    }
}
    
virtual_server 10.1.43.100 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 10.1.43.101 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.1.43.102 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
    
virtual_server 10.1.43.200 80 {
    delay_loop 3
    lb_algo rr
    lb_kind DR
    protocol TCP
    sorry_server 127.0.0.1 80
    real_server 10.1.43.101 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.1.43.102 80 {
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
[root@node2 keepalived]# service keepalived start

On hosts 4 and 5, run the script below (see the usage note after the listing) and set up the httpd service:

[root@node4 ~]# cat set.sh
#!/bin/bash
#
vip1=10.1.43.100
vip2=10.1.43.200
ifcfg1=lo:1
ifcfg2=lo:2
netmask=255.255.255.255
    
case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig $ifcfg1 $vip1 netmask $netmask broadcast $vip1 up
    ifconfig $ifcfg2 $vip2 netmask $netmask broadcast $vip2 up
    route add -host $vip1 dev $ifcfg1
    route add -host $vip2 dev $ifcfg2
    ;;
stop)
    ifconfig $ifcfg1 down
    ifconfig $ifcfg2 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
esac
[root@node4 ~]# yum -y install httpd
[root@node4 ~]# vim /var/www/html/index.html
<h1>RS1 CentOS7</h1>
[root@node4 ~]# service httpd start
    
[root@node5 ~]# vim /var/www/html/index.html
<h1>RS2 www.gm.com</h1>
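The script takes start|stop as its only argument, so it has to be run explicitly on both real servers to apply the DR-side ARP and VIP settings, e.g.:

[root@node4 ~]# bash set.sh start
[root@node5 ~]# bash set.sh start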

Access results:

[root@node3 ~]# curl 10.1.43.100
<h1>RS1 CentOS7</h1>
[root@node3 ~]# curl 10.1.43.100
<h1>RS2 www.gm.com</h1>
[root@node3 ~]# curl 10.1.43.200
<h1>RS2 www.gm.com</h1>
[root@node3 ~]# curl 10.1.43.200
<h1>RS1 CentOS7</h1>
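To inspect the ipvs rules keepalived generated on the node holding a VIP, and to watch the sorry_server take over when every RS fails its health check, something along these lines can be run (illustrative only; output varies with where the VIP currently lives):

[root@node1 ~]# yum -y install ipvsadm
[root@node1 ~]# ipvsadm -Ln                 # should list 10.1.43.100:80 with both real servers
[root@node4 ~]# service httpd stop          # fail RS1...
[root@node5 ~]# service httpd stop          # ...and RS2
[root@node1 ~]# ipvsadm -Ln                 # both RSs removed, 127.0.0.1:80 (the sorry_server) added
[root@node3 ~]# curl 10.1.43.100            # should now be answered by node1's local page, keepalived1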


·keepalived can call external helper scripts to monitor a resource and dynamically adjust the node's priority based on the result

Two steps: (1) define the script; (2) track (call) the script; a sketch of a common use follows the syntax below.

vrrp_script <SCRIPT_NAME> {

        script ""

        interval INT

        weight -INT

}

track_script {

        SCRIPT_NAME_1

        SCRIPT_NAME_2

        

}   # tracking is done inside a vrrp_instance, i.e. the script is called from within the vrrp_instance
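Beyond checking a service process, a common use of this interface is a manual maintenance switch: the check fails whenever a flag file exists, lowering the priority so the VIP moves to the other node. A minimal sketch; the script name chk_down, the path /etc/keepalived/down, and the weight are arbitrary choices:

vrrp_script chk_down {
    # non-zero exit status means the check failed
    script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
    interval 1
    weight -10      # subtract 10 from this node's priority while the check fails
}

vrrp_instance VI_1 {
    ...
    track_script {
        chk_down    # the script only takes effect once it is tracked inside the instance
    }
}

Creating /etc/keepalived/down on the master then forces a switchover; removing it lets the node win the next election again (in preemptive mode).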

keepalived + nginx proxying the back-end http service:

Topology:

[Topology diagram: 14.png]

Test environment:

Host node1: 10.1.43.1, runs keepalived + nginx

Host node3: 10.1.43.2, runs keepalived + nginx

Host node4: 10.1.43.101, runs httpd

Host node5: 10.1.43.102, runs httpd

Host 1:

[root@node1 keepalived]# yum -y install keepalived
[root@node1 keepalived]# cat keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id inode1
   vrrp_mcast_group4 224.0.43.200
}
vrrp_script chk_nginx {
    script "pidof nginx"
    interval 1
    weight -5
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 18
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 9a735491
    }
    virtual_ipaddress {
        10.1.43.100/16 dev eth0
    }
    track_script {
    chk_nginx
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface eth0
    virtual_router_id 19
    priority 98
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 3a732491
    }
    track_script {
    chk_nginx
    }
    virtual_ipaddress {
        10.1.43.200/16 dev eth0
    }
}
[root@node1 keepalived]# rpm -ivh  nginx-1.10.0-1.el6.ngx.x86_64.rpm   # this package has to be downloaded from the official nginx site
nginx configuration:
Configure in the http block of /etc/nginx/nginx.conf:
upstream gm {
    server 10.1.43.101;
    server 10.1.43.102;
}
Configure in the server block of /etc/nginx/conf.d/default.conf:
location / {
    root   /usr/share/nginx/html;
    proxy_pass http://gm;
    index  index.html index.htm;
}
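After editing, the configuration can be validated and nginx started, e.g.:

[root@node1 keepalived]# nginx -t
[root@node1 keepalived]# service nginx start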

Host 2:

[root@node2 keepalived]# yum -y install keepalived
[root@node2 keepalived]# cat keepalived.conf.nginx 
! Configuration File for keepalived
global_defs {
   notification_email {
root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id inode2
   vrrp_mcast_group4 224.0.43.200
}
    
vrrp_script chk_nginx {
    script "pidof nginx"
    interval 1
    weight -5
}
    
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 18
    priority 98
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 9a735491
    }
    virtual_ipaddress {
        10.1.43.100/16 dev eth0
    }
    track_script {
        chk_nginx
    }
}
    
vrrp_instance VI_2 {
    state MASTER
    interface eth0
    virtual_router_id 19
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 3a732491
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        10.1.43.200/16 dev eth0
    }
}
[root@node2 keepalived]# rpm -ivh  nginx-1.10.0-1.el6.ngx.x86_64.rpm  
nginx configuration:
Configure in the http block of /etc/nginx/nginx.conf:
upstream gm {
    server 10.1.43.101;
    server 10.1.43.102;
}
Configure in the server block of /etc/nginx/conf.d/default.conf:
location / {
    root   /usr/share/nginx/html;
    proxy_pass http://gm;
    index  index.html index.htm;
}

Hosts 4 and 5 are configured the same way as in the keepalived + LVS httpd example above

Access results:

[root@node3 ~]# curl 10.1.43.100
<h1>RS1 CentOS7</h1>
[root@node3 ~]# curl 10.1.43.100
<h1>RS2 www.gm.com</h1>
[root@node3 ~]# curl 10.1.43.200
<h1>RS2 www.gm.com</h1>
[root@node3 ~]# curl 10.1.43.200
<h1>RS1 CentOS7</h1>
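Because chk_nginx subtracts 5 from the priority while nginx is not running, stopping nginx on node1 drops its VI_1 priority from 100 to 95, below node2's 98, so both VIPs should move to node2. A rough check:

[root@node1 ~]# service nginx stop
[root@node2 ~]# ip addr show dev eth0        # 10.1.43.100 and 10.1.43.200 should now both be configured here
[root@node3 ~]# curl 10.1.43.100             # still answered, now proxied by nginx on node2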

Original article by megedugao. If reprinting, please credit the source: http://www.178linux.com/56576
