Deploying a highly available Kubernetes (v1.13.4) cluster on Ubuntu with kubeadm

All steps are performed as root on every node. On CentOS the packages would be installed with yum; on Ubuntu we use apt-get.

  • Environment preparation
  1. At least 2 CPU cores

The master nodes (VMs or servers) need at least 2 CPU cores, which can be checked with:

cat /proc/cpuinfo    # check the number of cpu cores
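The check above can be scripted so provisioning fails fast on an under-sized VM; a minimal sketch (assumes a Linux /proc filesystem):

```shell
#!/bin/sh
# Count logical CPUs from /proc/cpuinfo and warn if a master node
# would fall below kubeadm's 2-CPU minimum.
cores=$(grep -c '^processor' /proc/cpuinfo)
if [ "$cores" -ge 2 ]; then
    echo "CPU check passed: $cores cores"
else
    echo "warning: only $cores core(s); kubeadm requires at least 2 on masters"
fi
```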

  2. Time synchronization

apt-get install -y ntpdate

ntpdate -u ntp.api.bz

 

  3. System configuration changes

Disable swap:

swapoff -a

Also remove (or comment out) the swap line in /etc/fstab so swap stays disabled after a reboot.
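Rather than deleting the line by hand, the fstab entry can be commented out with sed; a sketch (the helper name is my own):

```shell
#!/bin/sh
# Comment out every swap entry in an fstab-style file so swap stays
# disabled across reboots; a .bak backup copy is kept.
disable_swap_entries() {  # $1 = path to the fstab file
    sed -i.bak -e '/\sswap\s/ s/^[^#]/#&/' "$1"
}

# Usage on a real node (run swapoff -a first):
#   disable_swap_entries /etc/fstab
```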

 

Disable the firewall. (Note: these firewalld commands come from CentOS; stock Ubuntu uses ufw instead, which is disabled with "ufw disable".)

systemctl stop firewalld

systemctl disable firewalld

Disable SELinux (stock Ubuntu uses AppArmor rather than SELinux, so this only applies if SELinux was installed and enabled):

apt install selinux-utils

setenforce 0

 

  4. Node plan

IP              Hostname            Role
10.30.28.181    ubuntu1804-k8-m1    etcd, master, keepalived
10.30.28.182    ubuntu1804-k8-m2    etcd, master, keepalived
10.30.28.183    ubuntu1804-k8-m3    etcd, master, keepalived
10.30.28.184    ubuntu1804-k8-s1    node
10.30.28.185    ubuntu1804-k8-s2    node
10.30.28.250    cluster.kube.com    VIP

 

Add every node's hostname and IP to /etc/hosts for name resolution.

vim /etc/hosts and append:

10.30.28.181   ubuntu1804-k8-m1
10.30.28.182   ubuntu1804-k8-m2
10.30.28.183   ubuntu1804-k8-m3
10.30.28.184   ubuntu1804-k8-s1
10.30.28.185   ubuntu1804-k8-s2
10.30.28.250   cluster.kube.com
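Appending these entries by hand on five machines is error-prone; here is a small idempotent helper (the function name is mine) that skips entries already present, so the script can be re-run safely:

```shell
#!/bin/sh
# Append "IP  hostname" to a hosts file only when the hostname is
# not already listed, so repeated runs do not duplicate entries.
add_host() {  # $1 = IP, $2 = hostname, $3 = hosts file (default /etc/hosts)
    f="${3:-/etc/hosts}"
    grep -qw "$2" "$f" || printf '%s\t%s\n' "$1" "$2" >> "$f"
}

# Usage on each node:
#   add_host 10.30.28.181 ubuntu1804-k8-m1
#   add_host 10.30.28.250 cluster.kube.com
```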

  5. Install Docker on all machines

apt install docker.io

systemctl enable docker.service

  6. Configure domestic (China) mirror sources

Replace the apt sources by editing /etc/apt/sources.list (vim /etc/apt/sources.list):

deb http://mirrors.aliyun.com/ubuntu bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu bionic-backports main restricted universe multiverse

deb-src http://mirrors.aliyun.com/ubuntu bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu bionic-backports main restricted universe multiverse

 

 

Add the Kubernetes apt source:

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

 

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

deb http://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main

EOF

 

apt-get update

 

 

  7. Install kubeadm, kubectl, and kubelet

apt-get install -y kubelet=1.13.4-00 kubeadm=1.13.4-00 kubectl=1.13.4-00

(Optionally pin these versions so apt does not upgrade them later: apt-mark hold kubelet kubeadm kubectl.)

  8. Image list:

On the K8s masters:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.4

docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.4

docker pull mirrorgooglecontainers/kube-scheduler:v1.13.4

docker pull mirrorgooglecontainers/kube-proxy:v1.13.4

docker pull mirrorgooglecontainers/pause:3.1

docker pull mirrorgooglecontainers/etcd:3.2.24

docker pull coredns/coredns:1.2.6

docker tag mirrorgooglecontainers/kube-proxy:v1.13.4  k8s.gcr.io/kube-proxy:v1.13.4

docker tag mirrorgooglecontainers/kube-scheduler:v1.13.4 k8s.gcr.io/kube-scheduler:v1.13.4

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.4 k8s.gcr.io/kube-apiserver:v1.13.4

docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.4 k8s.gcr.io/kube-controller-manager:v1.13.4

docker tag mirrorgooglecontainers/etcd:3.2.24  k8s.gcr.io/etcd:3.2.24

docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

docker tag mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.4

docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.4

docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.4

docker rmi mirrorgooglecontainers/kube-proxy:v1.13.4

docker rmi mirrorgooglecontainers/pause:3.1

docker rmi mirrorgooglecontainers/etcd:3.2.24

docker rmi coredns/coredns:1.2.6

 

On the K8s worker nodes:

Images the nodes need to download:

docker pull mirrorgooglecontainers/kube-proxy:v1.13.4

docker pull mirrorgooglecontainers/pause:3.1

docker pull coredns/coredns:1.2.6

docker tag mirrorgooglecontainers/kube-proxy:v1.13.4  k8s.gcr.io/kube-proxy:v1.13.4

docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

docker tag mirrorgooglecontainers/pause:3.1  k8s.gcr.io/pause:3.1

docker rmi mirrorgooglecontainers/kube-proxy:v1.13.4

docker rmi mirrorgooglecontainers/pause:3.1

docker rmi coredns/coredns:1.2.6
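The pull/tag/rmi sequences above are mechanical, so they can be generated by a small script; this sketch prints the commands so they can be reviewed and then piped to sh (the helper name is mine; coredns would need separate handling because it lives under its own Docker Hub namespace):

```shell
#!/bin/sh
# Print the pull/retag/cleanup commands that map one mirror image
# onto its k8s.gcr.io name; pipe the output to `sh` to execute.
mirror_cmds() {  # $1 = image:tag under mirrorgooglecontainers
    echo "docker pull mirrorgooglecontainers/$1"
    echo "docker tag mirrorgooglecontainers/$1 k8s.gcr.io/$1"
    echo "docker rmi mirrorgooglecontainers/$1"
}

# Master image set for v1.13.4 (workers only need kube-proxy and pause)
for img in kube-apiserver:v1.13.4 kube-controller-manager:v1.13.4 \
           kube-scheduler:v1.13.4 kube-proxy:v1.13.4 \
           pause:3.1 etcd:3.2.24; do
    mirror_cmds "$img"
done
```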

  • Installing the Kubernetes cluster

 

  1. Install keepalived

Install keepalived on the three masters. The three configs differ slightly; adjust them per the comments below.

apt-get install -y keepalived

 

vim /etc/keepalived/keepalived.conf

 

vrrp_instance VI_1 {

    state MASTER         # change to BACKUP on the standby masters

    interface ens160     # change to your own interface name

    virtual_router_id 51

    priority 100         # use lower values on the standbys, e.g. 90 and 80

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        10.30.28.250     # the virtual IP; pick your own

    }

}
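For reference, applying those comments yields a config like the following on master2 (a sketch; master3 is identical except for priority 80):

```
vrrp_instance VI_1 {
    state BACKUP
    interface ens160
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.30.28.250
    }
}
```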

 

# Start it:

 

systemctl enable keepalived && systemctl start keepalived

 

# Check that the virtual IP is on master1; then stop keepalived on master1 and verify the VIP fails over to another machine:

 

ip a

 

  2. Install HAProxy

The configuration is identical on all three masters.

apt-get install -y haproxy

vim /etc/haproxy/haproxy.cfg

global

    log         127.0.0.1 local2

    chroot      /var/lib/haproxy

    pidfile     /var/run/haproxy.pid

    maxconn     4000

    user        haproxy

    group       haproxy

    daemon

defaults

    mode                    tcp

    log                     global

    retries                 3

    timeout connect         10s

    timeout client          1m

    timeout server          1m

frontend kubernetes

    bind *:8443

    mode tcp

    default_backend kubernetes-master

backend kubernetes-master

    balance roundrobin

    server master1  10.30.28.181:6443 check maxconn 2000

    server master2  10.30.28.182:6443 check maxconn 2000

    server master3  10.30.28.183:6443 check maxconn 2000

# Check the config for syntax errors with haproxy -c -f /etc/haproxy/haproxy.cfg, then start:

systemctl enable haproxy && systemctl start haproxy

 

  3. Initialize master1

Create the cluster configuration file with vim kubeadm-config.yaml:

 

apiVersion: kubeadm.k8s.io/v1beta1

kind: ClusterConfiguration

kubernetesVersion: v1.13.4

apiServer:

  certSANs:

  - "10.30.28.181"

  - "10.30.28.182"

  - "10.30.28.183"

  - "10.30.28.250"

controlPlaneEndpoint: "10.30.28.250:8443"

 

networking:

  podSubnet: "10.244.0.0/16"

Note: controlPlaneEndpoint must not be indented (delete any leading spaces), otherwise kubeadm fails with a JSON parsing error.

 

 

Then run: sudo kubeadm init --config=kubeadm-config.yaml

 

Once that succeeds, run the following:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

If kubectl commands then report errors, try:

export KUBECONFIG=/etc/kubernetes/admin.conf

  4. Install the flannel network:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yaml

This step needs patience: the images have to be downloaded, which is slow. Run kubectl get nodes until the node reports Ready, and confirm all pods reach Running with the command below.

kubectl get pod -n kube-system -w

 

  5. Copy the certificate files from master1 to master2 and master3

On master1, run:

USER=user1 # customizable

CONTROL_PLANE_IPS="10.30.28.182 10.30.28.183"

for host in ${CONTROL_PLANE_IPS}; do

    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:

    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:

    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:

    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:

    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:

    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:

    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt

    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key

    scp /etc/kubernetes/admin.conf "${USER}"@$host:

done

 

On master2 and master3, run:

USER=user1 # customizable

mkdir -p /etc/kubernetes/pki/etcd

mv /home/${USER}/ca.crt /etc/kubernetes/pki/

mv /home/${USER}/ca.key /etc/kubernetes/pki/

mv /home/${USER}/sa.pub /etc/kubernetes/pki/

mv /home/${USER}/sa.key /etc/kubernetes/pki/

mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/

mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/

mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt

mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key

mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf

 

  6. Join master2 and master3 to the highly available cluster created above

kubeadm join 10.30.28.250:8443 --token maur7r.vw6zi46pkketqto7 --discovery-token-ca-cert-hash sha256:154ff71167ad3aa55740ec1f42696f8dbd3ab7247e910e7f8b95e05ece04293f --experimental-control-plane

Note: since these nodes join as masters, the --experimental-control-plane flag must be added. Also, if an http_proxy is configured for internet access (e.g. via vim /etc/profile), remove it first. (Bootstrap tokens expire after 24 hours; a fresh join command can be printed on a master with kubeadm token create --print-join-command.)

  7. Join the remaining worker nodes to the cluster

kubeadm join 10.30.28.250:8443 --token maur7r.vw6zi46pkketqto7 --discovery-token-ca-cert-hash sha256:154ff71167ad3aa55740ec1f42696f8dbd3ab7247e910e7f8b95e05ece04293f

 

  8. If some VM nodes cannot reach the internet, copy the images over from a VM that can

Save each docker image as a tar file: docker save -o <save image to path> <image name>

Then copy the tarball to the offline system with a normal file-transfer tool (e.g. cp, or scp flannel [email protected]:/home/supcon). Afterwards, load the image into docker:

docker load -i <path to image tar file>
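The save/copy/load round trip can also be scripted; this sketch prints the commands for one image so they can be reviewed before running (the helper name is mine, and user@node and the directory are placeholders to adapt):

```shell
#!/bin/sh
# Print the docker save / scp / docker load commands that move one
# image to an offline node; pipe the output to `sh` to execute.
offline_copy_cmds() {  # $1 = image:tag, $2 = user@host, $3 = remote dir
    # derive a filesystem-safe tarball name from the image reference
    tarball="$(echo "$1" | tr '/:' '__').tar"
    echo "docker save -o $tarball $1"
    echo "scp $tarball $2:$3/"
    echo "ssh $2 docker load -i $3/$tarball"
}

offline_copy_cmds k8s.gcr.io/pause:3.1 user@node /home/supcon
```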

  9. Final state

kubectl get nodes shows every node Ready, and every pod is Running (kubectl get pod --all-namespaces).

 

  10. Install the dashboard

1) Pull the following image on the worker nodes:

docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

2) Deploy it with the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

3) Wait a moment; once the pod is created, check the service status (e.g. with kubectl -n kube-system get svc).

4) Grant the dashboard account cluster-admin permissions.

Create admin-user.yaml with vim, with the following contents:

apiVersion: v1

kind: ServiceAccount

metadata:

  name: admin-user

  namespace: kube-system

---

# Create ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1

kind: ClusterRoleBinding

metadata:

  name: admin-user

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: cluster-admin

subjects:

- kind: ServiceAccount

  name: admin-user

  namespace: kube-system

 

Then run: kubectl apply -f admin-user.yaml

5) Find the admin-user token and write it down; it is needed to log in below. By default this token never expires.

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

6) The Service is currently of type ClusterIP; to make it reachable from outside the cluster, change it to type NodePort.

# This can be done with a single command: kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kube-system

7) Browse to https://10.30.28.181:32717 (32717 was the NodePort assigned here; check yours with kubectl -n kube-system get svc kubernetes-dashboard) to reach the login page, choose token login, and enter the token from step 5.

  • Verify the cluster

Run kubectl run nginx --replicas=2 --labels="run=load-balancer-example" --image=nginx --port=80

and check that the two nginx pods are created successfully.

 

  • Problems encountered
  1. docker pull sometimes failed with errors like:

Error response from daemon: Get https://registry-1.docker.io/v2/: remote error: tls: handshake failure

This is probably a TLS issue behind the network proxy. It can be bypassed by configuring an http (rather than https) registry mirror in /etc/docker/daemon.json:

{

        "registry-mirrors": [

                "http://hub-mirror.c.163.com"

        ],

        "dns": ["8.8.8.8","8.8.4.4"]

}
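A malformed daemon.json prevents the Docker daemon from starting at all, so it is worth validating the file before restarting; a sketch using python3's json module (the helper name is mine):

```shell
#!/bin/sh
# Check that a JSON file parses; returns non-zero on syntax errors.
valid_json() { python3 -m json.tool "$1" >/dev/null 2>&1; }

# Usage on a real node: only restart Docker if the file is valid.
#   valid_json /etc/docker/daemon.json && systemctl restart docker
```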

Alternatively, set a proxy for Docker alone:

Create a systemd drop-in directory for the docker service:

 

$ sudo mkdir -p /etc/systemd/system/docker.service.d

 

 

Create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY environment variable:

 

[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"

 

Or, if you are behind an HTTPS proxy server, create a file called /etc/systemd/system/docker.service.d/https-proxy.conf that adds the HTTPS_PROXY environment variable:

 

[Service]
Environment="HTTPS_PROXY=https://proxy.example.com:443/"

After creating these files, run systemctl daemon-reload and then systemctl restart docker for the settings to take effect.

 

  2. KUBECONFIG had to be re-exported on every new xshell session into master1

The cause was a bad config under /root/.kube. Delete that config file and regenerate it with the following commands:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

  3. keepalived stopped working: after the IP failed over, the VIP could not be pinged, with this error:

 Keepalived_vrrp[1216]: Unable to load ipset library - libipset.so.3: cannot open shared object file:

Fixed by installing the missing library: apt-get install ipset

  4. The VIP cannot be pinged
    After configuring the VIP in keepalived.conf, ip addr shows it bound correctly, yet it cannot be pinged even with the firewall down. The cause is that vrrp_strict is enabled by default in /etc/keepalived/keepalived.conf; comment it out and restart keepalived, after which the VIP responds to ping.
  5. Installing the flannel network add-on once failed with "Unable to connect to the server: x509: certificate signed by unknown authority":

Workaround: download kube-flannel.yaml by hand with curl -O https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yaml, then run kubectl apply -f kube-flannel.yaml. (Occasionally the download itself fails, returning "404: Not Found".)