Kubernetes v1.19 Installation and Deployment (kubeadm)
1. Kubernetes installation methods
- Binary installation — recommended for production
- kubeadm — recommended for production
- Ansible (automated binary installation)
- Rancher — Kubernetes ships with its own dashboard, but Rancher is still widely used in enterprises
- Alibaba Cloud ACK
- Amazon EKS

We will install Kubernetes v1.19. Even minor version changes in Kubernetes have a big impact, because the APIs differ between versions.
Environment preparation
| Hostname | IP | Role | Components | Spec |
| --- | --- | --- | --- | --- |
| master01 | 10.0.0.200 / 172.16.1.200 | master | kubectl, apiserver, scheduler, controller-manager, etcd, kubelet, kube-proxy | 1 CPU / 2 GB |
| node01 | 10.0.0.201 / 172.16.1.201 | node | kubectl, kubelet, kube-proxy | 1 CPU / 2 GB |
| node02 | 10.0.0.202 / 172.16.1.202 | node | kubectl, kubelet, kube-proxy | 1 CPU / 2 GB |
| node03 | 10.0.0.203 / 172.16.1.203 | node | kubectl, kubelet, kube-proxy | 2 CPU / 4 GB |
IP planning
Why plan the IPs? Because all Pods should come up on one subnet, and the Cluster IPs should likewise sit on their own subnet.
| IP type | Subnet |
| --- | --- |
| Pod IP | 10.2.0.0 |
| Cluster IP | 10.1.0.0 |
| Node IP | 10.0.0.0 |
Note: with a binary installation the default Pod IP range is the 10.0.0.0 subnet; the host subnet and the Pod IP subnet must not be the same.
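These three ranges map directly onto the kubeadm init flags used later in this walkthrough. As a rough sketch (the grep commands are just one optional way to double-check the ranges once the cluster is up; they are not part of the original steps):

```bash
# Pod IP range      -> --pod-network-cidr=10.2.0.0/16
# Cluster IP range  -> --service-cidr=10.1.0.0/16
# Node IP range     -> the hosts' own 10.0.0.x addresses

# After the cluster is up, confirm the ranges that were actually applied:
kubectl cluster-info dump | grep -m1 -- --cluster-cidr
kubectl cluster-info dump | grep -m1 -- --service-cluster-ip-range
```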
① Pre-deployment environment tuning
1. Configure the kubelet config file: use the system's cgroup driver (systemd) and don't fail when swap is on

```bash
cat >/etc/sysconfig/kubelet <<EOF
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
```

2. Kernel parameter tuning: enable the iptables bridge hooks (the low-level port forwarding and port mapping is done by the firewall)

```bash
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
fs.file-max=52706963
fs.nr_open=52706963
EOF
```

3. Check that the settings took effect

```bash
sysctl --system
```

The last lines of the output should show the settings from step 2:

```
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
fs.file-max = 52706963
fs.nr_open = 52706963
```

4. Switch the Docker yum repo

```bash
[root@master01 ~]# cat > /etc/yum.repos.d/docker-ce.repo <<EOF
[docker-ce-stable]
name=Docker CE Stable - \$basearch
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
EOF
[root@master01 ~]# sed -i 's+download.docker.com+mirrors.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
```

5. Install the time synchronization service (no cron job needed)

```bash
yum install -y chrony
systemctl start chronyd
systemctl enable chronyd
```

6. Disable swap

```bash
swapoff -a
sed -i '/swap/d' /etc/fstab
free -m
```

```
[root@master01 ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           1846         100        1271           9         474        1564
Swap:             0           0           0        # Swap showing 0 means it is off
```

7. Load the IPVS modules. Kubernetes does low-level forwarding and needs the IPVS kernel modules

```bash
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
source /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e 'ip_vs' -e 'nf_conntrack_ipv'
```

About IPVS: LVS does layer-4 load balancing without installing any service software. Unlike nginx or HAProxy, which have to be installed to act as load balancers, LVS has no daemon of its own; you only install the ipvsadm command and use it to change the host's routing and forwarding rules, turning the machine itself into a layer-4 forwarder.
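All of the tuning above must be applied on the master and on every node. A minimal verification sketch, assuming the node IPs from the table above and that root SSH between the machines is already set up (the host list and the ssh loop are assumptions, not part of the original steps):

```bash
#!/bin/bash
# Check swap, the k8s sysctl settings, and the IPVS modules on every node.
for host in 10.0.0.200 10.0.0.201 10.0.0.202 10.0.0.203; do
    echo "=== $host ==="
    ssh root@"$host" '
        # Swap total should be 0 after "swapoff -a"
        free -m | awk "/Swap/ {print \"swap total:\", \$2}"
        # Both values should print 1
        sysctl -n net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
        # The ip_vs modules should be loaded (count > 0)
        lsmod | grep -c ip_vs
    '
done
```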
② Install a specific Docker version
1. Check the available Docker versions

```bash
yum list docker-ce
yum list docker-ce --showduplicates
```

2. Install Docker 19.03.15, which is a stable release

```bash
yum install -y docker-ce-19.03.15 docker-ce-cli-19.03.15 containerd.io
systemctl start docker && systemctl enable docker
```

containerd.io is the container runtime. With newer Kubernetes versions you can switch seamlessly from Docker to containerd; the latest containerd.io is installed by default.

3. Check that there are no warnings and that the reported version matches

```
[root@node01 ~]# docker info
Client:
...
Server Version: 19.03.15
```

4. Configure registry mirrors and the cgroup driver

```bash
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": [
    "https://docker.1panel.live",
    "https://dockercf.jsdelivr.fyi",
    "https://docker-cf.registry.cyou",
    "https://docker.chenby.cn",
    "https://docker.jsdelivr.fyi",
    "https://docker.m.daocloud.io",
    "https://docker.mirrors.sjtug.sjtu.edu.cn",
    "https://docker.mirrors.ustc.edu.cn",
    "https://docker.nju.edu.cn",
    "https://dockerproxy.com",
    "https://docker.rainbond.cc",
    "https://docker.registry.cyou",
    "https://dockertest.jsdelivr.fyi",
    "https://hub-mirror.c.163.com",
    "https://hub.rat.dev/",
    "https://mirror.aliyuncs.com",
    "https://mirror.baidubce.com",
    "https://mirror.iscas.ac.cn",
    "https://registry.docker-cn.com"
  ]
}
EOF
systemctl daemon-reload
systemctl restart docker
```

5. Test a pull

```bash
[root@node01 ~]# docker pull busybox
[root@node01 ~]# docker images
```
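The cgroup driver set in daemon.json must match the one kubelet was told to use in step 1 of the environment tuning (systemd). A quick check, offered only as a sketch:

```bash
# Should print "systemd"; if it prints "cgroupfs", daemon.json was not applied
docker info --format '{{.CgroupDriver}}'
```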
③ Install kubeadm
1. Switch to the Huawei mirror for the Kubernetes repo

```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.huaweicloud.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.huaweicloud.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.huaweicloud.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

2. Install kubelet, kubeadm, kubectl, and ipvsadm

```bash
yum install kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3 ipvsadm -y
systemctl start kubelet.service && systemctl enable kubelet.service
```

- kubelet-1.19.3: the node agent that controls the container runtime
- kubeadm-1.19.3: the tool used to bootstrap the Kubernetes cluster
- kubectl-1.19.3: the client needed to run Kubernetes commands
- ipvsadm: the command-line tool for the IPVS modules configured above
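The control-plane images can be pulled ahead of time so that the kubeadm init in the next section does not stall on downloads. A sketch, using the same Aliyun image repository as the init command below:

```bash
# List and pre-pull the control-plane images for v1.19.3
kubeadm config images list --kubernetes-version v1.19.3 \
    --image-repository registry.aliyuncs.com/google_containers
kubeadm config images pull --kubernetes-version v1.19.3 \
    --image-repository registry.aliyuncs.com/google_containers
```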
④ Initialize the cluster on master01
1. Initialize the cluster (this takes a few minutes)

```bash
[root@master01 ~]# kubeadm init \
  --apiserver-advertise-address=10.0.0.200 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.19.3 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.2.0.0/16 \
  --service-dns-domain=cluster.local \
  --ignore-preflight-errors=Swap \
  --ignore-preflight-errors=NumCPU
```

`--apiserver-advertise-address` is the master's own IP; the service and pod CIDRs match the IP plan above.

2. Save the last few lines of the output

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.200:6443 --token cgex1m.9ck6cjchon5xn4j5 \
    --discovery-token-ca-cert-hash sha256:273cdce81c642065666a2d0eb4fd0c3097df69e984fa50836b8daf2ed10de18a
```

3. `docker images` now shows several new images, which means the init command pulled everything it needed

```bash
[root@master01 ~]# docker images
```

4. Run the suggested commands: create the hidden directory, copy the config file, and fix ownership

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

5. List the cluster nodes; only the master shows up

```
[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE     VERSION
master01   NotReady   master   4m32s   v1.19.3
```

6. There is only one node so far, so join the other nodes to the cluster by running the join command on each node (if the token has expired, see the token sketch after this walkthrough)

```bash
kubeadm join 10.0.0.200:6443 --token cgex1m.9ck6cjchon5xn4j5 \
    --discovery-token-ca-cert-hash sha256:273cdce81c642065666a2d0eb4fd0c3097df69e984fa50836b8daf2ed10de18a
```

7. After the join command finishes, the last line of its output reads:

```
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
```
8. Back on master01, run kubectl get nodes

```
[root@master01 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE     VERSION
master01   NotReady   master   9m33s   v1.19.3
node01     NotReady   <none>   3m48s   v1.19.3
node02     NotReady   <none>   118s    v1.19.3
node03     NotReady   <none>   106s    v1.19.3
```

The nodes are still NotReady because the pod network has not been configured yet.

9. Set kube-proxy to IPVS mode. Kubernetes uses iptables by default; IPVS performs better and is the same mode LVS uses (see the verification sketch after this walkthrough).

```bash
[root@master01 ~]# kubectl edit cm kube-proxy -n kube-system
# around line 44, change
#   mode: ""
# to
#   mode: "ipvs"
```

10. List the namespaces

```
[root@master01 ~]# kubectl get ns        # or: kubectl get namespace
NAME              STATUS   AGE
default           Active   45m
kube-node-lease   Active   45m
kube-public       Active   45m
kube-system       Active   45m
```

11. List the pods in a given namespace

```
[root@master01 ~]# kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-85k2r           0/1     Pending   0          46m
coredns-6d56c8448f-cn2t9           0/1     Pending   0          46m
etcd-master01                      1/1     Running   0          46m
kube-apiserver-master01            1/1     Running   0          46m
kube-controller-manager-master01   1/1     Running   0          46m
kube-proxy-2wfhq                   1/1     Running   0          38m
kube-proxy-8tmqx                   1/1     Running   0          40m
kube-proxy-fr9dl                   1/1     Running   0          39m
kube-proxy-pz4ms                   1/1     Running   0          46m
kube-scheduler-master01            1/1     Running   0          46m
```

12. Show detailed pod information for a namespace, including which node each pod runs on. kube-proxy uses a DaemonSet controller, so exactly one pod runs on each node.

```bash
[root@master01 ~]# kubectl get pod -n kube-system -o wide
```

13. Restart the kube-proxy pods: delete them and they are recreated automatically. The same trick works whenever a system component misbehaves.

```bash
[root@master01 ~]# kubectl get pod -n kube-system | grep 'kube-proxy' | awk '{print "kubectl delete pod -n kube-system "$1}' | bash
```
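The bootstrap token printed by kubeadm init expires after 24 hours. If a node is joined later, a fresh join command can be generated on the master; a sketch (the output is of course cluster-specific):

```bash
# Print a ready-to-run "kubeadm join ..." command with a new token
[root@master01 ~]# kubeadm token create --print-join-command
```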
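After step 13 recreates the kube-proxy pods, it is worth confirming they really came up in IPVS mode. A minimal check, assuming it is run directly on a cluster node (10249 is kube-proxy's default metrics port):

```bash
# kube-proxy reports its proxy mode on the metrics port; this should print "ipvs"
curl -s 127.0.0.1:10249/proxyMode; echo
# The IPVS rule set should no longer be empty
ipvsadm -Ln | head
```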
**⑤ Configure flannel on the master**
Reference: https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml

1. Download the manifest

```bash
[root@master01 ~]# wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```

2. Edit the flannel manifest

```bash
[root@master01 ~]# vim kube-flannel.yml
```

```yaml
      "Network": "10.2.0.0/16",
......
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0
......
  spec:
    selector:
      matchLabels:
        app: flannel
```

`app: flannel` is a label selector; the flannel DaemonSet uses it to identify the pods it manages.

3. Whenever the flannel manifest is modified, apply it again

```bash
[root@master01 ~]# kubectl apply -f kube-flannel.yml
```

4. Check the following:

① Check that the flannel pods started; all four should show STATUS Running

```
[root@master01 ~]# kubectl get pod -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-4ncf5   1/1     Running   0          4m20s
kube-flannel-ds-7dnxx   1/1     Running   0          4m20s
kube-flannel-ds-dpzqp   1/1     Running   0          4m20s
kube-flannel-ds-knmch   1/1     Running   0          4m20s
```

② Check the cluster node status; STATUS Ready means pods on different nodes can now reach each other

```
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   81m   v1.19.3
node01     Ready    <none>   75m   v1.19.3
node02     Ready    <none>   73m   v1.19.3
node03     Ready    <none>   73m   v1.19.3
```

③ Check that coredns is running; once the network is up, DNS should be fine and show STATUS Running

```
[root@master01 ~]# kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-85k2r           1/1     Running   0          81m
coredns-6d56c8448f-cn2t9           1/1     Running   0          81m
etcd-master01                      1/1     Running   0          81m
kube-apiserver-master01            1/1     Running   0          81m
kube-controller-manager-master01   1/1     Running   0          81m
kube-proxy-2skng                   1/1     Running   0          30m
kube-proxy-fxwbg                   1/1     Running   0          30m
kube-proxy-w6v6r                   1/1     Running   0          30m
kube-proxy-zng7l                   1/1     Running   0          30m
kube-scheduler-master01            1/1     Running   0          81m
```
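Beyond checking pod status, a quick end-to-end smoke test of the pod network and coredns can be done with a throwaway busybox pod; a sketch (the pod name and image tag are arbitrary choices, not from the original notes):

```bash
# Resolving the kubernetes service name exercises both the pod network and coredns
[root@master01 ~]# kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never \
    -- nslookup kubernetes.default
```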
1. Label the worker nodes so their role shows up in kubectl get nodes

```bash
kubectl label node node01 node-role.kubernetes.io/node=
kubectl label node node02 node-role.kubernetes.io/node=
kubectl label node node03 node-role.kubernetes.io/node=
```
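To confirm the labels took effect, a quick check (just a sketch, not part of the original steps):

```bash
# The ROLES column for node01-03 should now read "node" instead of <none>
kubectl get nodes
# The label itself can also be inspected directly
kubectl get nodes --show-labels | grep node-role
```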
2. Install kubectl command completion (a handy trick)
```bash
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash > /etc/bash_completion.d/kubectl
```

After installing, log out and reconnect:

```bash
logout
```

```
[root@master01 ~]# kubectl ap                  # pressing Tab now completes
api-resources  api-versions  apply
[root@master01 ~]# kubectl get namespaces      # pressing Tab now completes
default  kube-flannel  kube-node-lease  kube-public  kube-system
```
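As an optional extra (not in the original notes), the same completion can be attached to a short alias, following the pattern from the upstream kubectl completion help:

```bash
# Make "k" behave like kubectl, with the same Tab completion
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc
source ~/.bashrc
```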
🍉 2. Kubernetes v1.19 Installation and Deployment (from source)