
Docker study notes 3 - network configuration

References:
http://www.infoq.com/cn/articles/docker-network-and-pipework-open-source-explanation-practice 
http://www.oschina.net/translate/docker-network-configuration

host mode

Configuration

--net=host

Principle

Docker uses Linux namespaces for resource isolation, e.g. the PID namespace, the network namespace, and so on. A network namespace provides an independent network environment: its network interfaces, routing tables, iptables rules, etc. are isolated from every other network namespace.
If a container runs in host mode, it is not given its own network namespace and has no IP of its own; it shares the host's IP, ports, network interfaces, routing tables, and so on.
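
A quick way to see this sharing is to compare which network namespace the host shell and a host-mode container are in. A minimal sketch, assuming a busybox image is available locally:

    # on the host: print the inode of the current network namespace
    readlink /proc/self/ns/net
    # in a --net=host container the same inode is printed, i.e. the same namespace
    docker run --rm --net=host busybox readlink /proc/self/ns/net
    # a default (bridge-mode) container prints a different inode
    docker run --rm busybox readlink /proc/self/ns/net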

Hands-on

  1. Host machine, IP 172.31.12.125
    Run ifconfig:

    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
            inet6 fe80::42:63ff:fe23:ee0d  prefixlen 64  scopeid 0x20<link>
            ether 02:42:63:23:ee:0d  txqueuelen 0  (Ethernet)
            RX packets 1403670  bytes 864221548 (824.1 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1408014  bytes 1359202385 (1.2 GiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
            inet 172.31.12.125  netmask 255.255.240.0  broadcast 172.31.15.255
            inet6 fe80::c0:6ff:fe61:3127  prefixlen 64  scopeid 0x20<link>
            ether 02:c0:06:61:31:27  txqueuelen 1000  (Ethernet)
            RX packets 4151187  bytes 1602103237 (1.4 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 4382161  bytes 1414120207 (1.3 GiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 0  (Local Loopback)
            RX packets 11231261  bytes 7850587697 (7.3 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 11231261  bytes 7850587697 (7.3 GiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

  2. Start the docker container

    docker run --net="host" -ti test/redis:2.6 /bin/bash

      

  3. Inside the container, check the IP with ifconfig

    eth0      Link encap:Ethernet  HWaddr 02:C0:06:61:31:27
              inet addr:172.31.12.125  Bcast:172.31.15.255  Mask:255.255.240.0
              inet6 addr: fe80::c0:6ff:fe61:3127/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
              RX packets:4151539 errors:0 dropped:0 overruns:0 frame:0
              TX packets:4382518 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1602135600 (1.4 GiB)  TX bytes:1414191537 (1.3 GiB)

    The container reports the same IP as the host.

  4. Start redis inside the container and set a value

    [root@ip-172-31-12-125 ~]# redis-cli
    redis 127.0.0.1:6379> keys *
    (empty list or set)
    redis 127.0.0.1:6379> set a haha
    OK
    redis 127.0.0.1:6379> get a
    "haha"
    redis 127.0.0.1:6379>

  5. Check on the host

    [root@ip-172-31-12-125 ~]# netstat -nltp | grep 6379
    tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      23619/redis-server
    [root@ip-172-31-12-125 ~]# /usr/local/bin/redis-cli
    redis 127.0.0.1:6379> get a
    "haha"

On the host, the port of the redis instance started inside the docker container is visible, and the value set above can be read back.
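
Because the port space is shared, the same redis instance can also be reached through the host's non-loopback address, not only 127.0.0.1. A small sketch, assuming nothing filters port 6379 (172.31.12.125 is the host IP used in the steps above):

    # expected to return "haha", just like via 127.0.0.1
    redis-cli -h 172.31.12.125 -p 6379 get a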

Pros and cons:

Host mode skips the veth/NAT path entirely, so network performance is essentially the same as running the process directly on the host. The trade-off is that there is no network isolation: the container shares the host's IP and port space, so it can conflict with services on the host and with other host-mode containers.

bridge mode

Configuration

--net=bridge is the default network configuration; if --net is not explicitly specified, the container runs in bridge mode.

Principle

Bridge mode gives each container its own network namespace, assigns it an IP, and connects the Docker containers on a host to a virtual bridge.

When the docker daemon starts, it creates a virtual bridge named docker0 on the host by default. Docker usually picks 172.17.0.0/16 as the container address range and assigns 172.17.42.1/16 to the docker0 bridge.
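
To check the bridge on a given host, or to change the subnet it uses, something like the sketch below works (10.10.0.1/16 is just an example value; how the daemon flag is passed depends on the docker version and init system):

    # inspect the bridge created by the daemon
    ip addr show docker0
    brctl show docker0
    # start the daemon with a custom bridge address/subnet (example value)
    dockerd --bip=10.10.0.1/16    # older versions: docker -d --bip=10.10.0.1/16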

Flow:

- Docker creates a pair of veth virtual interfaces on the host (named veth*); veth devices always come in pairs.
- One end of the veth pair is moved into the newly created container and renamed eth0; the other end keeps its veth* name on the host and is attached to the docker0 bridge (visible with brctl show).
- The container is given an IP from the docker0 subnet, and docker0's address is set as its default gateway.
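
A rough manual equivalent of this flow is sketched below (illustrative only; the interface names, $CONTAINER_PID and the 172.17.0.3 address are made up for the example):

    # create a veth pair on the host (names are arbitrary)
    ip link add veth_host type veth peer name veth_guest
    # attach the host end to the docker0 bridge and bring it up
    brctl addif docker0 veth_host
    ip link set veth_host up
    # move the other end into the container's network namespace
    ip link set veth_guest netns $CONTAINER_PID
    # inside that namespace: rename it to eth0, give it an address from the docker0 subnet,
    # and use docker0 as the default gateway
    nsenter -t $CONTAINER_PID -n ip link set veth_guest name eth0
    nsenter -t $CONTAINER_PID -n ip addr add 172.17.0.3/16 dev eth0
    nsenter -t $CONTAINER_PID -n ip link set eth0 up
    nsenter -t $CONTAINER_PID -n ip route add default via 172.17.0.1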

实战

  1. Host machine
    Run ifconfig on the host:

    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
            inet6 fe80::42:63ff:fe23:ee0d  prefixlen 64  scopeid 0x20<link>
            ether 02:42:63:23:ee:0d  txqueuelen 0  (Ethernet)
            RX packets 1403670  bytes 864221548 (824.1 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1408014  bytes 1359202385 (1.2 GiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
            inet 172.31.12.125  netmask 255.255.240.0  broadcast 172.31.15.255
            inet6 fe80::c0:6ff:fe61:3127  prefixlen 64  scopeid 0x20<link>
            ether 02:c0:06:61:31:27  txqueuelen 1000  (Ethernet)
            RX packets 4151187  bytes 1602103237 (1.4 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 4382161  bytes 1414120207 (1.3 GiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 0  (Local Loopback)
            RX packets 11231261  bytes 7850587697 (7.3 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 11231261  bytes 7850587697 (7.3 GiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    As shown above, the host already has a docker0 virtual bridge with IP 172.17.0.1 (this host is an AWS EC2 instance; on a typical physical machine the IP would be 172.17.42.1).

  2. Start a container in bridge mode

    docker run --net="bridge" -ti test/redis:2.6 /bin/bash

      

  3. Check inside the container

    Run ifconfig inside the container:

    eth0      Link encap:Ethernet  HWaddr 02:42:AC:11:00:03
              inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
              inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:8 errors:0 dropped:0 overruns:0 frame:0
              TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:648 (648.0 b)  TX bytes:648 (648.0 b)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:65536  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

    The container has been assigned the IP 172.17.0.3.
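
    To confirm that docker0 acts as the default gateway, the routing table inside the container can be checked as well (a sketch; it assumes the ip or route tools are present in the image):

        # inside the container: the default route should point at docker0
        ip route      # expect: default via 172.17.0.1 dev eth0
        route -n      # net-tools equivalent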

  4. Check ifconfig on the host
    Run ifconfig on the host:

    docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 172.17.0.1  netmask 255.255.0.0  broadcast 0.0.0.0
            inet6 fe80::42:63ff:fe23:ee0d  prefixlen 64  scopeid 0x20<link>
            ether 02:42:63:23:ee:0d  txqueuelen 0  (Ethernet)
            RX packets 1403678  bytes 864222084 (824.1 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 1408014  bytes 1359202385 (1.2 GiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 9001
            inet 172.31.12.125  netmask 255.255.240.0  broadcast 172.31.15.255
            inet6 fe80::c0:6ff:fe61:3127  prefixlen 64  scopeid 0x20<link>
            ether 02:c0:06:61:31:27  txqueuelen 1000  (Ethernet)
            RX packets 4636333  bytes 1663112300 (1.5 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 4953477  bytes 1534107877 (1.4 GiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            inet6 ::1  prefixlen 128  scopeid 0x10<host>
            loop  txqueuelen 0  (Local Loopback)
            RX packets 11256543  bytes 7851902509 (7.3 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 11256543  bytes 7851902509 (7.3 GiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    veth32796b4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet6 fe80::c85b:82ff:fef3:9163  prefixlen 64  scopeid 0x20<link>
            ether ca:5b:82:f3:91:63  txqueuelen 0  (Ethernet)
            RX packets 8  bytes 648 (648.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 8  bytes 648 (648.0 B)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

    A new virtual interface, veth32796b4, has appeared.

    Run brctl show:

    [root@ip-172-31-12-125 ~]# brctl show
    bridge name     bridge id               STP enabled     interfaces
    docker0         8000.02426323ee0d       no              veth32796b4

    The veth32796b4 interface is attached to the docker0 bridge.

  5. Start redis inside the container

    [root@06680c786ada ~]# ./control -m 512mb start &
    [1] 18
    [root@06680c786ada ~]# ps ux
    USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
    root         1  0.0  0.0  11492  1776 ?        Ss   03:57   0:00 /bin/bash
    root        18  0.0  0.0  11352  1440 ?        S    04:09   0:00 /bin/sh ./control -m 512mb start
    root        24  0.0  0.0  36416  7528 ?        Sl   04:09   0:00 /usr/local/bin/redis-server /etc/redis/redis.c
    root        27  0.0  0.0  13368  1044 ?        R+   04:10   0:00 ps ux
    [root@06680c786ada ~]# netstat -nltp | grep 6379
    tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      24/redis-server

    As shown, redis-server is started inside the container and bound to port 6379.

  6. Check the port on the host

    [root@ip-172-31-12-125 ~]# ps aux | grep redis-server | grep -v grep
    [root@ip-172-31-12-125 ~]# netstat -nltp | grep 6379

    As shown, because the docker container has its own network namespace, the port bound by redis inside the container cannot be seen on the host, and the grep above does not show the redis process either.
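
    If those ports do need to be inspected from the host, the container's network namespace can be entered explicitly. A sketch, assuming nsenter is installed and using the container ID from step 5:

        # PID of the container's init process
        PID=$(docker inspect -f '{{.State.Pid}}' 06680c786ada)
        # run netstat inside that container's network namespace
        nsenter -t $PID -n netstat -nltp | grep 6379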

  7. Start the container and redis with a -p port mapping

    # start the docker container
    [root@ip-172-31-12-125 ~]# docker run --net="bridge" -ti -p 16379:6379 nd/redis:2.6 /bin/bash
    # start redis
    [root@aaf1c7f79a1b /]# /root/control -m 512mb start &
    [1] 13
    # check the redis process
    [root@aaf1c7f79a1b /]# ps aux | grep redis
    root        19  0.0  0.0  36416  7524 ?        Sl   04:18   0:00 /usr/local/bin/redis-server /etc/redis/redis.conf
    root        23  0.0  0.0   6492   664 ?        S+   04:19   0:00 grep redis
    # check the bound ports
    [root@aaf1c7f79a1b /]# netstat -nltp | grep 6379
    tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      19/redis-server
    [root@aaf1c7f79a1b /]# netstat -nltp | grep 16379

    As shown, with -p mapping the container's port 6379 to the host's port 16379, redis starts normally inside the container and binds to port 6379.

  8. Check the host

    # the redis process does not show up
    [root@ip-172-31-12-125 ~]# ps aux | grep redis-server | grep -v grep
    # check port 6379: not bound (the only match below is the 16379 line)
    [root@ip-172-31-12-125 ~]# netstat -nltp | grep 6379
    tcp6       0      0 :::16379                :::*                    LISTEN      4238/docker-proxy
    # check port 16379: bound
    [root@ip-172-31-12-125 ~]# netstat -nltp | grep 16379
    tcp6       0      0 :::16379                :::*                    LISTEN      4238/docker-proxy
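
    The mapping itself can be verified from the host. A sketch, assuming redis-cli is installed on the host and the container from step 7 is still running:

        # docker-proxy / iptables forward host:16379 to the container's 6379
        redis-cli -h 127.0.0.1 -p 16379 ping    # expect: PONG
        docker port aaf1c7f79a1b                # prints the mapping, e.g. 6379/tcp -> 0.0.0.0:16379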

none mode

--net=none: the container still gets its own network namespace, but only the loopback interface is configured; any other networking has to be set up by hand.
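
A minimal check (a sketch, assuming a busybox image is available):

    # only the loopback interface is present
    docker run --rm --net=none busybox ifconfig -a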

container mode

--net=container:<name|id>: the new container shares the network namespace of an existing container, so both see the same IP, interfaces, and port space.
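
A minimal sketch, assuming a busybox image and that the redis container from the bridge-mode steps (aaf1c7f79a1b) is still running:

    # this container shares aaf1c7f79a1b's network stack: same eth0/IP,
    # and redis is reachable on 127.0.0.1:6379 from inside it
    docker run --rm --net=container:aaf1c7f79a1b busybox ifconfig eth0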
