Cross-host communication between Docker containers
Once you run many Docker hosts and want a cluster, the containers have to be able to talk to each other so they can form one architecture. Today, when you start a container you pick the host yourself, without taking that host's resources into account. A multi-host orchestration tool can work out which machine is the best fit for each service.
Types of Docker cross-host networks
1. Static routes
2. flannel (widely used with Kubernetes)
3. overlay
4. macvlan
5. calico
1. Static routing mode
Static routing is simple to set up, but some cloud providers (Alibaba Cloud, for one) do not let you add custom routes.
1. Add a route to the network 192.168.1.0/24 via gateway 192.168.0.1 with metric 10:
route add -net 192.168.1.0 netmask 255.255.255.0 gw 192.168.0.1 metric 10
2. Add a route to the single host 192.168.1.100 via gateway 192.168.0.1:
route add -host 192.168.1.100 gw 192.168.0.1
3. Add a route to the network 192.168.0.0/16 via gateway 192.168.1.1 with metric 20, through interface eth0:
route add -net 192.168.0.0 netmask 255.255.0.0 gw 192.168.1.1 metric 20 dev eth0
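Applied to Docker, the idea is: give each host's docker0 bridge a non-overlapping subnet (by default every host uses 172.17.0.0/16, so you must change `--bip` first), then add a route on each host pointing the peer's bridge subnet at the peer's physical IP. A minimal sketch with hypothetical addresses (docker01 = 10.0.0.101 owning 172.17.1.0/24, docker02 = 10.0.0.102 owning 172.17.2.0/24 - these subnets are invented for illustration, not from the lab above):

```shell
#!/bin/sh
# Sketch: print the static route each host needs so its containers can
# reach containers on the peer host. Run the printed command as root on
# the corresponding host; net.ipv4.ip_forward=1 is also required there.
route_cmd() {
  # $1 = peer host's docker0 subnet, $2 = peer host's physical IP
  echo "ip route add $1 via $2"
}

# On docker01: reach docker02's containers through docker02's host IP
route_cmd 172.17.2.0/24 10.0.0.102
# On docker02: the mirror-image route
route_cmd 172.17.1.0/24 10.0.0.101
```

This is exactly what flannel automates below: it assigns each host a unique subnet and keeps the routing information synchronized for you.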
2. flannel (widely used with Kubernetes)
Deploying the flannel network mode
Environment preparation
Hostname    IP                          Role                                      Software
docker01    10.0.0.101 / 172.16.1.101   containers that communicate with others   flannel
docker02    10.0.0.102 / 172.16.1.102   containers that communicate with others   flannel
docker03    10.0.0.103 / 172.16.1.103   containers that communicate with others   flannel
elk01       10.0.0.76 / 172.16.1.76     intermediary that stores the data         etcd
1. Deploy etcd
1. Install etcd
[root@elk1 ~]# yum -y install etcd
2. Edit the etcd configuration file
[root@elk1 ~]# vim /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://172.16.1.76:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.1.76:2379"
3. Start etcd and enable it at boot
[root@elk1 ~]# systemctl start etcd
[root@elk1 ~]# systemctl enable etcd
Check the listening ports: 2379 serves clients; 2380 is the peer port etcd members use to talk to each other in a cluster.
Basic etcd operations
-C tells etcdctl which client endpoint (API address) to talk to
1. Check the cluster's health
[root@elk1 ~]# etcdctl -C http://127.0.0.1:2379 cluster-health
2. Write data
[root@elk1 ~]# etcdctl -C http://127.0.0.1:2379 set /testdir/testkey "hello world"
3. Read data
[root@elk1 ~]# etcdctl -C http://127.0.0.1:2379 get /testdir/testkey
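Under the hood, etcdctl's v2 `set`/`get` are thin wrappers over HTTP calls to the `/v2/keys` tree; with the etcd above running, the same read would be `curl http://127.0.0.1:2379/v2/keys/testdir/testkey`. A sketch of parsing the JSON such a call returns (the response body below is a hand-written sample in the v2 format, not captured from a live server):

```shell
#!/bin/sh
# Sample etcd v2 GET response: the stored value sits at .node.value
RESPONSE='{"action":"get","node":{"key":"/testdir/testkey","value":"hello world","modifiedIndex":5,"createdIndex":5}}'

# Extract the value the way a script consuming the API would
echo "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["node"]["value"])'
```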
2. Install and configure flannel on the three Docker hosts
1. Install flannel on all three hosts
[root@docker01 ~]# yum -y install flannel
[root@docker02 ~]# yum -y install flannel
[root@docker03 ~]# yum -y install flannel
2. Edit the flannel configuration file on all three hosts
[root@docker02 ~]# vim /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://172.16.1.76:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
3. Create the flannel network config in etcd (either command works: mk fails if the key already exists, set overwrites it)
[root@elk1 ~]# etcdctl mk /atomic.io/network/config '{"Network":"192.168.0.0/16"}'
[root@elk1 ~]# etcdctl -C http://127.0.0.1:2379 set /atomic.io/network/config '{"Network":"192.168.0.0/16"}'
4. Start flannel on all three hosts
[root@docker01 ~]# systemctl start flanneld && systemctl enable flanneld
5. Check the flannel0 interface on all three hosts; each host is handed its own /24 out of 192.168.0.0/16
[root@docker01 ~]# ifconfig
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 192.168.54.0  netmask 255.255.0.0  destination 192.168.54.0
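A malformed value under /atomic.io/network/config breaks flanneld on every host, so it is worth checking the JSON before writing it into etcd. A quick sketch of such a check (using python3's json module as a validator; any JSON tool works):

```shell
#!/bin/sh
# Sketch: validate the flannel network config before handing it to etcdctl
CONFIG='{"Network":"192.168.0.0/16"}'

# python3 -m json.tool exits non-zero on invalid JSON
if echo "$CONFIG" | python3 -m json.tool > /dev/null 2>&1; then
  echo "valid"
else
  echo "invalid"
fi
```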
3. Link flannel to Docker
1. On all three hosts, inspect this file; its variables will be fed into the Docker unit file in a moment
[root@docker02 ~]# cat /run/flannel/docker
DOCKER_OPT_BIP="--bip=192.168.22.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=192.168.22.1/24 --ip-masq=true --mtu=1472"
2. On all three hosts, edit the Docker unit file
[root@docker02 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
EnvironmentFile=/run/flannel/docker
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS
[root@docker03 ~]# systemctl daemon-reload && systemctl restart flanneld
3. Enable kernel IP forwarding on all three hosts
[root@docker03 ~]# echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
[root@docker03 ~]# sysctl -p
4. On all three hosts, cycle firewalld once (this resets the iptables rules), then restart networking, flannel, and Docker in order so the rules are re-created correctly:
[root@docker01 ~]# systemctl start firewalld && systemctl stop firewalld
[root@docker01 ~]# systemctl restart network
[root@docker01 ~]# systemctl restart flanneld
[root@docker01 ~]# systemctl restart docker
------------Note---------------------
1. If docker info prints the warnings
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
fix them with:
cat > /etc/sysctl.d/docker.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
EOF
[root@docker01 ~]# sysctl --system
[root@docker01 ~]# systemctl restart network
[root@docker01 ~]# systemctl restart flanneld
[root@docker01 ~]# systemctl restart docker
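The EnvironmentFile mechanism in step 2 is just variable substitution: systemd reads the key=value pairs from /run/flannel/docker and expands $DOCKER_NETWORK_OPTIONS in the ExecStart line. A sketch reproducing that locally (with the sample values from step 1, written to a temp file so nothing on the real host is touched):

```shell
#!/bin/sh
# Recreate the flannel-generated env file with the sample values above
cat > /tmp/flannel-docker-env << 'EOF'
DOCKER_OPT_BIP="--bip=192.168.22.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=true"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=192.168.22.1/24 --ip-masq=true --mtu=1472"
EOF

# Source it the way systemd's EnvironmentFile= effectively does,
# then show the option string dockerd ends up receiving
. /tmp/flannel-docker-env
echo "dockerd would start with:$DOCKER_NETWORK_OPTIONS"
```

The key point is that --bip moves docker0 onto the /24 flannel assigned to this host, and --mtu=1472 leaves room for flannel's encapsulation overhead.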
4. Test the communication
1. Start a container on each of the three hosts, note its IP, then ping the other containers' IPs from inside; if the pings succeed, the setup works
[root@docker01 ~]# docker run -it busybox /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:36:02
          inet addr:192.168.54.2  Bcast:192.168.54.255  Mask:255.255.255.0
[root@docker02 ~]# docker run -it busybox /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:16:05
          inet addr:192.168.22.5  Bcast:192.168.22.255  Mask:255.255.255.0
[root@docker03 ~]# docker run -it busybox /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:39:05
          inet addr:192.168.57.5  Bcast:192.168.57.255  Mask:255.255.255.0
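Note what the three IPs above have in common: all sit inside the configured {"Network":"192.168.0.0/16"}, but each host got a different /24 (.54, .22, .57), which is how flannel knows where to route. A small sketch of that sanity check:

```shell
#!/bin/sh
# Sketch: confirm a container IP belongs to the flannel network
# configured above (192.168.0.0/16)
check() {  # $1 = container IP
  case "$1" in
    192.168.*) echo "in flannel network" ;;
    *)         echo "NOT in flannel network" ;;
  esac
}

check 192.168.54.2   # docker01's container
check 192.168.22.5   # docker02's container
check 192.168.57.5   # docker03's container
```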
3. Cross-host communication with overlay
Reference: http://www.cnblogs.com/CloudMan6/p/7270551.html
1) On docker03, run consul, which stores the IP address allocations (-h sets the container's hostname; consul is a key:value store):
docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap
On docker01 and docker02:
vim /etc/docker/daemon.json
{
  "cluster-store": "consul://10.0.0.13:8500",
  "cluster-advertise": "10.0.0.11:2376"
}
vim /usr/lib/systemd/system/docker.service
systemctl daemon-reload
systemctl restart docker
2) Create the overlay network
docker network create -d overlay --subnet 172.16.2.0/24 --gateway 172.16.2.254 ol1
3) Start a container to test it
docker run -it --network ol1 --name oldboy01 busybox /bin/sh
Each container gets two NICs: eth0 carries container-to-container traffic over the overlay, and eth1 gives the container access to the outside network.
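One easy-to-miss detail in the daemon.json step: "cluster-store" is the same consul endpoint on every host, but "cluster-advertise" must be each host's own address, so the file differs per host. A sketch generating the per-host file (the IPs are the ones from the walkthrough above; output goes to /tmp so nothing real is overwritten):

```shell
#!/bin/sh
# Sketch: generate /etc/docker/daemon.json for each host joining the overlay
CONSUL=10.0.0.13:8500   # shared consul endpoint (same everywhere)

make_daemon_json() {  # $1 = THIS host's IP, $2 = output path
  cat > "$2" << EOF
{
  "cluster-store": "consul://$CONSUL",
  "cluster-advertise": "$1:2376"
}
EOF
}

make_daemon_json 10.0.0.11 /tmp/daemon-docker01.json
make_daemon_json 10.0.0.12 /tmp/daemon-docker02.json
cat /tmp/daemon-docker01.json
```

If two hosts accidentally advertise the same address, the overlay membership in consul becomes inconsistent, which is the most common reason the network fails to form.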
4. macvlan
By default a physical NIC has a single MAC address; macvlan virtualizes multiple MAC addresses on top of that one NIC.
1. Create a macvlan network bound to the physical NIC eth0
docker network create --driver macvlan --subnet 10.0.0.0/24 --gateway 10.0.0.254 -o parent=eth0 macvlan_1
2. Put eth0 into promiscuous mode so it accepts frames addressed to the virtual MACs
ip link set eth0 promisc on
3. Start a container with a fixed IP on that network
docker run -it --network macvlan_1 --ip=10.0.0.200 busybox