Linux Network Management: Configuring NIC Aliases and NIC Bonding

In day-to-day operations work you sometimes need to configure multiple IP addresses on a single physical NIC, which is the idea behind NIC sub-interfaces (aliases), and sometimes need to bind multiple NICs together, which in plain terms means several NICs sharing one IP address. Below I walk through how to set up both.

Creating a NIC Sub-interface

On CentOS, the network is managed by the NetworkManager service, which provides a graphical front end. This service does not support sub-interfaces on physical NICs, however, so before configuring one we need to shut it down.

Stop it for the current session: service NetworkManager stop

Disable it permanently: chkconfig NetworkManager off

To create a sub-interface temporarily, run:

[root@server ~]# ip addr add 10.1.252.100/16 dev eth0 label eth0:0

Note: the address disappears as soon as the network service is restarted.
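The temporary alias can be added, verified, and removed in one short guarded sketch. The device name and address are taken from the example above; the guard lets the script run harmlessly on a host that has no eth0, and the add/del steps need root to actually succeed:

```shell
# Guarded sketch: add a temporary alias, show it, then remove it.
# Assumes the iproute2 "ip" tool; add/del require root and an existing eth0.
DEV=eth0
if command -v ip >/dev/null 2>&1 && ip link show "$DEV" >/dev/null 2>&1; then
  ip addr add 10.1.252.100/16 dev "$DEV" label "$DEV:0"
  ip addr show dev "$DEV" | grep "$DEV:0"          # the alias appears here
  ip addr del 10.1.252.100/16 dev "$DEV"           # clean up again
fi
echo "alias check finished"
```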

To make a sub-interface permanent, it must be written into a NIC configuration file. These files live under /etc/sysconfig/network-scripts/ and are named ifcfg- followed by the device name; suppose the configuration file for my sub-interface is called ifcfg-eth0:0.

vim /etc/sysconfig/network-scripts/ifcfg-eth0:0 (if typing this long path every time feels tedious, you can define an alias for the command, or simply cd into that directory first)

DEVICE=eth0:0           // name of the sub-interface
BOOTPROTO=none          // address protocol; none means static
IPADDR=192.168.1.100    // IP address of the sub-interface
NETMASK=255.255.255.0   // netmask of the sub-interface
GATEWAY=192.168.1.254   // gateway for the sub-interface
DNS1=8.8.8.8            // DNS server for the sub-interface
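The same file can also be generated non-interactively. This sketch writes it into a temporary directory purely for illustration; on a real system the target would be /etc/sysconfig/network-scripts/ifcfg-eth0:0:

```shell
# Write the alias config from the values above into a scratch directory.
dir=$(mktemp -d)
cat > "$dir/ifcfg-eth0:0" <<'EOF'
DEVICE=eth0:0
BOOTPROTO=none
IPADDR=192.168.1.100
NETMASK=255.255.255.0
GATEWAY=192.168.1.254
DNS1=8.8.8.8
EOF
grep '^DEVICE=' "$dir/ifcfg-eth0:0"   # prints: DEVICE=eth0:0
```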

After editing the NIC configuration file, restart the network service:

[root@server network-scripts]# service network restart
[root@server network-scripts]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:D1:18:FD
          inet addr:10.1.252.100  Bcast:10.1.255.255  Mask:255.255.0.0
          inet6 addr: fe80::20c:29ff:fed1:18fd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:47570 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1618 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3140045 (2.9 MiB)  TX bytes:135945 (132.7 KiB)

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:D1:18:FD
          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

With that, the sub-interface configuration is complete.

 

 

NIC Bonding

Before explaining how to configure bonding, let me first cover how it works and its operating modes; after that we will do the actual configuration.

bonding

Bonding binds multiple NICs to the same IP address to provide service, which gives you high availability or load balancing. Of course, simply assigning the same IP address to two NICs directly is not possible. Instead, bonding presents a single virtual NIC to the outside world, and the physical NICs behind it are set to the same MAC address.

Normally a NIC accepts only Ethernet frames whose destination MAC address is its own, and filters out everything else to reduce its load. NICs also support promiscuous (promisc) mode, in which they accept every frame on the wire; tcpdump runs in this mode, and so does bonding. The bonding driver changes both NICs' MAC addresses to the same value, so they accept frames destined for that MAC and pass them to the bond driver for processing. The two NICs then behave as a single virtual interface (bond0), which needs its own driver, named bonding.

Bonding operating modes

mode 0 balance-rr

Round-robin policy: packets are transmitted sequentially over each slave interface, from the first to the last. This mode provides both load balancing and fault tolerance; both NICs carry traffic.

mode 1 active-backup

Active-backup policy: only one slave in the bond is active, and another slave is activated if and only if the active slave fails. The bond's MAC address is externally visible on only one port, so the switch is not confused.

mode 3 broadcast

Broadcast policy: every packet is transmitted on all slave interfaces. This mode provides fault tolerance.
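For quick reference, the mode numbers above map to the names the bonding driver reports in its status output. A tiny helper (mode_name is a hypothetical function name, not part of any tool) makes the mapping explicit:

```shell
# Map a bonding mode number to the name the driver reports.
# Only the three modes discussed above are covered; others print "unknown".
mode_name() {
  case "$1" in
    0) echo "balance-rr" ;;
    1) echo "active-backup" ;;
    3) echo "broadcast" ;;
    *) echo "unknown" ;;
  esac
}
mode_name 1   # prints: active-backup
```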

 

Here I will configure mode 1. I am using a VMware virtual machine for this experiment; before starting, add a second NIC so that the Linux system has two NICs.

Step 1: create a configuration file for the bonding device

[root@server network-scripts]# vim ifcfg-bond0
DEVICE=bond0
BOOTPROTO=none
IPADDR=10.1.252.100
NETMASK=255.255.0.0
GATEWAY=10.1.0.1
DNS1=8.8.8.8
BONDING_OPTS="miimon=100 mode=1"

Step 2: edit the configuration files of the two physical NICs

[root@server network-scripts]# vim ifcfg-eth0
DEVICE=eth0
MASTER=bond0
SLAVE=yes

[root@server network-scripts]# vim ifcfg-eth1
DEVICE=eth1
MASTER=bond0
SLAVE=yes

Note: miimon controls link monitoring. With miimon=100, the system checks the link state every 100 milliseconds; if one link goes down, traffic switches to the other.

    mode=1 selects the active-backup operating mode.

    MASTER=bond0 makes bond0 the master device.
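Steps 1 and 2 can also be scripted. This sketch writes ifcfg-bond0 and both slave files into a temporary directory for illustration only; on the real system the files go in /etc/sysconfig/network-scripts:

```shell
# Generate the bond0 config plus one slave config per physical NIC.
dir=$(mktemp -d)
cat > "$dir/ifcfg-bond0" <<'EOF'
DEVICE=bond0
BOOTPROTO=none
IPADDR=10.1.252.100
NETMASK=255.255.0.0
GATEWAY=10.1.0.1
DNS1=8.8.8.8
BONDING_OPTS="miimon=100 mode=1"
EOF
for dev in eth0 eth1; do
  cat > "$dir/ifcfg-$dev" <<EOF
DEVICE=$dev
MASTER=bond0
SLAVE=yes
EOF
done
grep -l '^MASTER=bond0$' "$dir"/ifcfg-eth*   # lists both slave files
```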

 

Once the configuration is done, just restart the network service. To test, ping the bond0 IP address from another host. Then check failover: down one of the NICs and see whether the other takes over; if it does, the setup works.
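The failover check described above can be sketched as a guarded script. It assumes a bond0 with slave eth0 and needs root; on a host without bond0 it just reports that and exits cleanly:

```shell
# Down eth0 and confirm which slave the bond now reports as active.
if [ -r /proc/net/bonding/bond0 ]; then
  ip link set eth0 down
  grep 'Currently Active Slave' /proc/net/bonding/bond0
  ip link set eth0 up    # restore the link afterwards
else
  echo "no bond0 on this host"
fi
```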

To watch the bond's state dynamically: watch -n 1 cat /proc/net/bonding/bond0

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:fd
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:07
Slave queue ID: 0
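Scripts can pull the active slave straight out of this status text. The sketch below embeds a sample of the output above so it runs anywhere; on a real system you would read /proc/net/bonding/bond0 instead:

```shell
# Extract the currently active slave from bonding status text.
status='Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up'
active=$(printf '%s\n' "$status" | awk -F': ' '/Currently Active Slave/ {print $2}')
echo "$active"   # prints: eth0
```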

 

After I down eth0, the currently active NIC becomes eth1:

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth1
MII Status: down
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:fd
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:0c:29:d1:18:07
Slave queue ID: 0

Original article by fszxxxks. If you reproduce it, please credit the source: http://www.178linux.com/42839
