Creating a Highly Available Kubernetes (k8s) Cluster with kubeadm, v1.12 / v1.13

High-availability architecture diagram from the official documentation (figure omitted).

The two most important components for high availability:

  • etcd: a distributed key-value store, the data center of the k8s cluster.

  • kube-apiserver: the single entry point to the cluster and the communication hub for all components. The apiserver itself is stateless, so running multiple instances is easy.

Other core components:

  • controller-manager and scheduler can also run as multiple instances, but only one is active at a time to keep the data consistent, because they change cluster state.

The cluster components are loosely coupled, so there are many ways to achieve high availability. With multiple kube-apiservers, apiserver clients need a single address to connect to, so a traditional haproxy + keepalived setup is placed in front of the apiservers to float a VIP; apiserver clients such as kubelet and kube-proxy connect to this VIP.

Preparation before installation

1. Passwordless SSH login between all k8s nodes.
2. Time synchronization.
3. Swap must be disabled on every node (swapoff -a), otherwise kubelet will fail to start; see the sketch below.
4. Add every node's hostname and IP to /etc/hosts.
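A minimal sketch of steps 3 and 4, assuming the node names and IPs used in the rest of this guide:

# disable swap now and keep it disabled across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# add name resolution for all nodes on every machine
cat >> /etc/hosts << EOF
10.3.1.20 k8s-master01
10.3.1.21 k8s-master02
10.3.1.25 k8s-master03
10.3.1.63 k8s-node01
EOF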

Load kernel modules

$ sudo modprobe br_netfilter
$ sudo modprobe ip_vs

Set system parameters

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

$ sudo cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

$ sudo sysctl -p /etc/sysctl.d/kubernetes.conf

tcp_tw_recycle conflicts with the NAT used by Kubernetes and must be disabled, otherwise services will be unreachable.

Disable the unused IPv6 stack to avoid triggering a Docker bug.

Set the system time zone

$ # Adjust the system time zone
$ sudo timedatectl set-timezone Asia/Shanghai

$ # Write the current UTC time to the hardware clock
$ sudo timedatectl set-local-rtc 0

$ # Restart services that depend on the system time
$ sudo systemctl restart rsyslog
$ sudo systemctl restart crond

Update the system time:
ntpdate cn.pool.ntp.org

Load the ipvs modules

cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
# Load every ipvs module shipped with the running kernel
ipvs_modules_dir="/usr/lib/modules/\`uname -r\`/kernel/net/netfilter/ipvs"

for i in \`ls \$ipvs_modules_dir | sed -r 's#(.*)\.ko\.xz#\1#'\`; do
    /sbin/modinfo -F filename \$i &> /dev/null
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \$i
    fi
done
EOF

chmod +x /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
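A quick check (not part of the original text) that the modules actually loaded:

lsmod | grep ip_vs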

Deploy the etcd cluster

kubeadm offers two ways to build a highly available cluster:

  • The etcd cluster is configured by kubeadm and runs as pods on the master nodes.
  • The etcd cluster is deployed separately.

This guide deploys etcd separately.

A working etcd cluster is a prerequisite for the k8s cluster, so deploy etcd first.

Install the CA certificates

Install the CFSSL certificate management tool by downloading the binary packages directly:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
sudo chmod +x cfssl_linux-amd64
sudo mv cfssl_linux-amd64 /usr/local/bin/cfssl

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
sudo chmod +x cfssljson_linux-amd64
sudo mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
sudo chmod +x cfssl-certinfo_linux-amd64
sudo mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

All k8s executables are placed under the /opt/bin/ directory.

Create the CA configuration file

root@k8s-master01:~# mkdir ssl
root@k8s-master01:~# cd ssl/

Create the following ca-config.json file, based on the format of config.json.
The expiry time is set to 87600h (10 years).

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF

Create the CA certificate signing request

cat > ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GD",
      "L": "SZ",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the CA certificate and private key

root@k8s-master01:~/ssl# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
root@k8s-master01:~/ssl# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

Copy the CA certificates to the corresponding directory on every node

root@k8s-master01:~/ssl# mkdir -p /etc/kubernetes/ssl
root@k8s-master01:~/ssl# cp ca* /etc/kubernetes/ssl
root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.21:/etc/
root@k8s-master01:~/ssl# scp -r /etc/kubernetes 10.3.1.25:/etc/

Download the etcd binaries:

With the CA certificates in place, etcd can be configured.

 wget https://github.com/coreos/etcd/releases/download/v3.2.22/etcd-v3.2.22-linux-amd64.tar.gz
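Unpack the release tarball before copying the binaries (a step the original leaves implicit; the version matches the wget above):

tar -xzf etcd-v3.2.22-linux-amd64.tar.gz
cd etcd-v3.2.22-linux-amd64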

root@k8s-master01:~/etcd-v3.2.22-linux-amd64$ cp etcd etcdctl /opt/bin/

For k8s v1.13, the etcd version must not be lower than 3.2.1.

Create the etcd certificates

Create the etcd certificate signing request file

root@k8s-master01:~/ssl# cat > etcd-csr.json << EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.3.1.20",
    "10.3.1.21",
    "10.3.1.25"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GD",
      "L": "SZ",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Note: the hosts field above must list the IPs of all etcd nodes, otherwise etcd will fail to start.

Generate the etcd certificate and private key

root@k8s-master01:~/ssl# cfssl gencert --ca=/etc/kubernetes/ssl/ca.pem \
     --ca-key=/etc/kubernetes/ssl/ca-key.pem \
     --config=/etc/kubernetes/ssl/ca-config.json \
     --profile=kubernetes etcd-csr.json | cfssljson -bare etcd

    2018/10/01 10:01:14 [INFO] generate received request
    2018/10/01 10:01:14 [INFO] received CSR
    2018/10/01 10:01:14 [INFO] generating key: rsa-2048
    2018/10/01 10:01:15 [INFO] encoded CSR
    2018/10/01 10:01:15 [INFO] signed certificate with serial number 379903753757286569276081473959703411651822370300
    2018/02/06 10:01:15 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").



root@k8s-master01:~/ssl# ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

The -profile=kubernetes value corresponds to the matching entry in the profiles field of the file passed via -config=/etc/kubernetes/ssl/ca-config.json.

Copy the certificates to the corresponding directory on all nodes:

Create this directory on all three nodes:
mkdir -p /etc/etcd/ssl

Copy:
root@k8s-master01:~/ssl# cp etcd*.pem /etc/etcd/ssl
root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.21:/etc/
etcd-key.pem                                                      100% 1675    1.5KB/s  00:00                                   
etcd.pem                                                              100% 1407    1.4KB/s  00:00                         
root@k8s-master01:~/ssl# scp -r /etc/etcd 10.3.1.25:/etc/
etcd-key.pem                                                      100% 1675    1.6KB/s  00:00   
etcd.pem                                                              100% 1407    1.4KB/s  00:00

Create the etcd systemd unit file
With the certificates ready, the startup unit file can be written:

root@k8s-master01:~# mkdir -p /var/lib/etcd  # the etcd working directory must be created first

root@k8s-master01:~# vim /etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/opt/bin/etcd \
--name=etcd-host0 \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls=https://10.3.1.20:2380 \
--listen-peer-urls=https://10.3.1.20:2380 \
--listen-client-urls=https://10.3.1.20:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://10.3.1.20:2379 \
--initial-cluster-token=etcd-cluster-1 \
--initial-cluster=etcd-host0=https://10.3.1.20:2380,etcd-host1=https://10.3.1.21:2380,etcd-host2=https://10.3.1.25:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target 

Start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

Copy the etcd unit file to the other two nodes, adjust the node-specific settings (see the sketch below), and start them the same way.
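For example, on the second node (10.3.1.21) the name and URLs would change roughly as follows, derived from the --initial-cluster list in the unit file above:

--name=etcd-host1 \
--initial-advertise-peer-urls=https://10.3.1.21:2380 \
--listen-peer-urls=https://10.3.1.21:2380 \
--listen-client-urls=https://10.3.1.21:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://10.3.1.21:2379 \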
Check the cluster status:
Because etcd uses TLS certificates, etcdctl commands must be given the certificate flags:

# List the etcd members

root@k8s-master01:~# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem member list
702819a30dfa37b8: name=etcd-host2 peerURLs=https://10.3.1.20:2380 clientURLs=https://10.3.1.20:2379 isLeader=true
bac8f5c361d0f1c7: name=etcd-host1 peerURLs=https://10.3.1.21:2380 clientURLs=https://10.3.1.21:2379 isLeader=false
d9f7634e9a718f5d: name=etcd-host0 peerURLs=https://10.3.1.25:2380 clientURLs=https://10.3.1.25:2379 isLeader=false

# Or check whether the cluster is healthy

root@k8s-master01:~/ssl# etcdctl --key-file /etc/etcd/ssl/etcd-key.pem --cert-file /etc/etcd/ssl/etcd.pem --ca-file /etc/kubernetes/ssl/ca.pem cluster-health

member 1af3976d9329e8ca is healthy: got healthy result from https://10.3.1.20:2379
member 34b6c7df0ad76116 is healthy: got healthy result from https://10.3.1.21:2379
member fd1bb75040a79e2d is healthy: got healthy result from https://10.3.1.25:2379
cluster is healthy

Install haproxy and keepalived

Pull the haproxy image

docker pull haproxy:1.7.8-alpine
mkdir /etc/haproxy
cat >/etc/haproxy/haproxy.cfg<<EOF
global
 log 127.0.0.1 local0 err
 maxconn 50000
 uid 99
 gid 99
 #daemon
 nbproc 1
 pidfile haproxy.pid

defaults
 mode http
 log 127.0.0.1 local0 err
 maxconn 50000
 retries 3
 timeout connect 5s
 timeout client 30s
 timeout server 30s
 timeout check 2s

listen admin_stats
 mode http
 bind 0.0.0.0:1080
 log 127.0.0.1 local0 err
 stats refresh 30s
 stats uri     /haproxy-status
 stats realm   Haproxy\ Statistics
 stats auth    will:will
 stats hide-version
 stats admin if TRUE

frontend k8s-https
 bind 0.0.0.0:8443
 mode tcp
 #maxconn 50000
 default_backend k8s-https

backend k8s-https
 mode tcp
 balance roundrobin
 server lab1 11.11.11.111:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
 server lab2 11.11.11.112:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
 server lab3 11.11.11.113:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3
EOF

Start haproxy:
docker run -d --name my-haproxy \
-v /etc/haproxy:/usr/local/etc/haproxy:ro \
-p 8443:8443 \
-p 1080:1080 \
--restart always \
haproxy:1.7.8-alpine

View the logs:
docker logs my-haproxy

Check the status page in a browser:
http://11.11.11.111:1080/haproxy-status
Username / password: will / will

Pull the keepalived image:
docker pull osixia/keepalived:1.4.4

Start it

Load the related kernel module:
lsmod | grep ip_vs
modprobe ip_vs

# Start keepalived
# eth1 is the NIC on the 11.11.11.0/24 network used in this example
docker run --net=host --cap-add=NET_ADMIN \
-e KEEPALIVED_INTERFACE=eth1 \
-e KEEPALIVED_VIRTUAL_IPS="#PYTHON2BASH:['11.11.11.110']" \
-e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['11.11.11.111','11.11.11.112','11.11.11.113']" \
-e KEEPALIVED_PASSWORD=hello \
--name k8s-keepalived \
--restart always \
-d osixia/keepalived:1.4.4

View the logs:
Two instances will become BACKUP and one will become MASTER.
docker logs k8s-keepalived

At this point 11.11.11.110 is assigned to one of the machines.
Ping test:
ping -c4 11.11.11.110

If it fails, clean up and try again:
docker rm -f k8s-keepalived
ip a del 11.11.11.110/32 dev eth1

Install Docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum-config-manager --enable docker-ce-stable

yum install docker-ce -y

Configure the Docker registry mirrors

cat > docker-daemon.json <<EOF
{    "registry-mirrors": ["https://hub-mirror.c.163.com", "https://docker.mirrors.ustc.edu.cn"],    "max-concurrent-downloads": 20
}
EOF
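The snippet above only writes the file locally; assuming the standard Docker configuration path, it still needs to be copied into place (the daemon restart happens a few steps below):

sudo mkdir -p /etc/docker
sudo cp docker-daemon.json /etc/docker/daemon.json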

After installing Docker, set the FORWARD chain policy to ACCEPT

# Docker sets it to DROP by default

iptables -P FORWARD ACCEPT
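This policy is reset whenever Docker restarts; one common workaround (an assumption, not part of the original) is a systemd drop-in that re-applies it after Docker starts:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/forward-accept.conf << EOF
[Service]
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
EOF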

And enable Docker to start on boot:

systemctl enable docker
systemctl daemon-reload
systemctl restart docker

Install the kubeadm tool

kubeadm must be installed on all nodes.

Configure the Aliyun yum repository:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubeadm

# This automatically installs kubeadm, kubectl, kubelet, kubernetes-cni and socat

After installation, enable the kubelet service to start on boot:

systemctl enable kubelet

kubelet must be enabled at boot so that the k8s cluster components come back up automatically after a system restart.

Modify the kubelet configuration file on all nodes

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# Modify or add

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/k8sth/pause-amd64:3.0"

Command completion

sudo yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Cluster initialization

Now run the cluster initialization on the three masters.

The difference between a single-master cluster and a highly available one is that for HA you give kubeadm a configuration file, and kubeadm runs init on multiple nodes based on that file.

Write the kubeadm configuration file

vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: stable

etcd:
  external:
    endpoints:
    - https://172.17.52.236:2379
    - https://172.17.180.111:2379
    - https://172.17.78.205:2379
    caFile: /etc/etcd/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
  

networking:
  podSubnet: 10.244.0.0/16


apiServerCertSANs:
  - "k1"
  - "k2"
  - "k3"
  - "172.17.52.236"
  - "172.17.180.111"
  - "172.17.78.205"
  - "127.0.0.1"

controlPlaneEndpoint: 172.17.52.236:8443

featureGates:
  CoreDNS: true

imageRepository: "registry.cn-hangzhou.aliyuncs.com/google_containers"

controllerManagerExtraArgs:
  node-monitor-grace-period: 10s
  pod-eviction-timeout: 10s

Configuration notes:

In v1.12 the API version has been bumped to kubeadm.k8s.io/v1alpha3, and the kind is now ClusterConfiguration.
podSubnet: the custom pod network CIDR.
apiServerCertSANs: list the hostnames, IPs and the VIP of all kube-apiserver nodes.
etcd: external means an external etcd cluster is used; list the etcd endpoints and certificate paths below it.
If the etcd cluster were managed by kubeadm, local would be used instead, together with any custom startup arguments.
token: may be omitted; one can be generated with the command kubeadm token generate.

Make sure swap is disabled.

Run init on the first master

root@k8s-master01:~/kubeadm-config# kubeadm init --config kubeadm-config.yaml
The output looks like this:

…………
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
# Record the following line; it is needed when other nodes join.
  kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2

Follow the prompts and run:

root@k8s-master01:~# mkdir -p $HOME/.kube
root@k8s-master01:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@k8s-master01:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config

There is now one node, and its status is "NotReady":

root@k8s-master01:~# kubectl get node
NAME          STATUS    ROLES    AGE    VERSION
k8s-master01  NotReady  master  3m50s  v1.12.0

The core components of the first master run as pods:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                  READY  STATUS    RESTARTS  AGE    IP          NODE          NOMINATED NODE
coredns-576cbf47c7-2dqsj              0/1    Pending  0          4m29s  <none>      <none>        <none>
coredns-576cbf47c7-7sqqz              0/1    Pending  0          4m29s  <none>      <none>        <none>
kube-apiserver-k8s-master01            1/1    Running  0          3m46s  10.3.1.20  k8s-master01  <none>
kube-controller-manager-k8s-master01  1/1    Running  0          3m40s  10.3.1.20  k8s-master01  <none>
kube-proxy-dpvkk                      1/1    Running  0          4m30s  10.3.1.20  k8s-master01  <none>
kube-scheduler-k8s-master01            1/1    Running  0          3m37s  10.3.1.20  k8s-master01  <none>

Copy the generated pki directory to the other master nodes

root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.21:/etc/kubernetes/
root@k8s-master01:~# scp -r /etc/kubernetes/pki root@10.3.1.25:/etc/kubernetes/

Copy the kubeadm configuration file over as well

root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.21:~/
root@k8s-master01:~/# scp kubeadm-config.yaml root@10.3.1.25:~/

The first master is now deployed. The second and third masters, and however many more are added later, are initialized with the same kubeadm-config.yaml.

Run kubeadm init on the second master

root@k8s-master02:~# kubeadm init --config kubeadm-config.yaml

Run kubeadm init on the third master

root@k8s-master03:~# kubeadm init --config kubeadm-config.yaml

Finally, check the nodes:

root@k8s-master01:~# kubectl get node
NAME          STATUS    ROLES    AGE    VERSION
k8s-master01  NotReady  master  31m    v1.12.0
k8s-master02  NotReady  master  15m    v1.12.0
k8s-master03  NotReady  master  6m52s  v1.12.0

Check the status of all components:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide
… (output omitted)

Remove the taint from all masters so that they can also be scheduled:

root@k8s-master01:~# kubectl taint nodes --all  node-role.kubernetes.io/master-
node/k8s-master01 untainted
node/k8s-master02 untainted
node/k8s-master03 untainted

All nodes are in the "NotReady" state because a CNI plugin still needs to be installed.

Install the Calico network plugin:

curl https://docs.projectcalico.org/v3.4/getting-started/kubernetes/installation/hosted/calico.yaml -O
POD_CIDR="10.244.0.0/16" \
sed -i -e "s?192.168.0.0/16?$POD_CIDR?g" calico.yaml

sed -i 's@.*etcd_endpoints:.*@\ \ etcd_endpoints:\ \"https://172.17.180.113:2379,https://172.17.180.111:2379,https://172.17.78.205:2379\"@gi' calico.yaml

export ETCD_CERT=`cat /etc/etcd/ssl/etcd.pem | base64 | tr -d '\n'`
export ETCD_KEY=`cat /etc/etcd/ssl/etcd-key.pem | base64 | tr -d '\n'`
export ETCD_CA=`cat /etc/etcd/ssl/ca.pem | base64 | tr -d '\n'`
sed -i "[email protected]*etcd-cert:.*@\ \ etcd-cert:\ ${ETCD_CERT}@gi" calico.yaml
sed -i "[email protected]*etcd-key:.*@\ \ etcd-key:\ ${ETCD_KEY}@gi" calico.yaml
sed -i "[email protected]*etcd-ca:.*@\ \ etcd-ca:\ ${ETCD_CA}@gi" calico.yaml

sed -i 's@.*etcd_ca:.*@\ \ etcd_ca:\ "/calico-secrets/etcd-ca"@gi' calico.yaml
sed -i 's@.*etcd_cert:.*@\ \ etcd_cert:\ "/calico-secrets/etcd-cert"@gi' calico.yaml
sed -i 's@.*etcd_key:.*@\ \ etcd_key:\ "/calico-secrets/etcd-key"@gi' calico.yaml
kubectl apply -f calico.yaml

Check the node status again:

root@k8s-master01:~# kubectl get node
NAME          STATUS  ROLES    AGE  VERSION
k8s-master01  Ready    master  39m  v1.12.0
k8s-master02  Ready    master  24m  v1.12.0
k8s-master03  Ready    master  15m  v1.12.0

All components on every master are now healthy:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide
NAME                                      READY  STATUS    RESTARTS  AGE    IP              NODE          NOMINATED NODE
calico-etcd-dcbtp                          1/1    Running  0          102s  10.3.1.25        k8s-master03  <none>
calico-etcd-hmd2h                          1/1    Running  0          101s  10.3.1.20        k8s-master01  <none>
calico-etcd-pnksz                          1/1    Running  0          99s    10.3.1.21        k8s-master02  <none>
calico-kube-controllers-75fb4f8996-dxvml  1/1    Running  0          117s  10.3.1.25        k8s-master03  <none>
calico-node-6kvg5                          2/2    Running  1          117s  10.3.1.21        k8s-master02  <none>
calico-node-82wjt                          2/2    Running  1          117s  10.3.1.25        k8s-master03  <none>
calico-node-zrtj4                          2/2    Running  1          117s  10.3.1.20        k8s-master01  <none>
coredns-576cbf47c7-2dqsj                  1/1    Running  0          38m    192.168.85.194  k8s-master02  <none>
coredns-576cbf47c7-7sqqz                  1/1    Running  0          38m    192.168.85.193  k8s-master02  <none>
kube-apiserver-k8s-master01                1/1    Running  0          37m    10.3.1.20        k8s-master01  <none>
kube-apiserver-k8s-master02                1/1    Running  0          22m    10.3.1.21        k8s-master02  <none>
kube-apiserver-k8s-master03                1/1    Running  0          12m    10.3.1.25        k8s-master03  <none>
kube-controller-manager-k8s-master01      1/1    Running  0          37m    10.3.1.20        k8s-master01  <none>
kube-controller-manager-k8s-master02      1/1    Running  0          21m    10.3.1.21        k8s-master02  <none>
kube-controller-manager-k8s-master03      1/1    Running  0          12m    10.3.1.25        k8s-master03  <none>
kube-proxy-6tfdg                          1/1    Running  0          23m    10.3.1.21        k8s-master02  <none>
kube-proxy-dpvkk                          1/1    Running  0          38m    10.3.1.20        k8s-master01  <none>
kube-proxy-msqgn                          1/1    Running  0          14m    10.3.1.25        k8s-master03  <none>
kube-scheduler-k8s-master01                1/1    Running  0          37m    10.3.1.20        k8s-master01  <none>
kube-scheduler-k8s-master02                1/1    Running  0          22m    10.3.1.21        k8s-master02  <none>
kube-scheduler-k8s-master03                1/1    Running  0          12m    10.3.1.25        k8s-master03  <none>

Deploy worker nodes

On a new worker node, install kubeadm and then join the cluster with the kubeadm join command.

Join k8s-node01 to the cluster:

root@k8s-node01:~# kubeadm join 10.3.1.20:6443 --token w79yp6.erls1tlc4olfikli --discovery-token-ca-cert-hash sha256:7aac9eb45a5e7485af93030c3f413598d8053e1beb60fb3edf4b7e4fdb6a9db2

Check the components running on the node:

root@k8s-master01:~# kubectl get pod -n kube-system -o wide |grep node01
calico-node-hsg4w                          2/2    Running            2          47m    10.3.1.63        k8s-node01    <none>
kube-proxy-xn795                          1/1    Running            0          47m    10.3.1.63        k8s-node01    <none>

Check the current node status.

# There are now four nodes, all Ready

root@k8s-master01:~# kubectl get node
NAME          STATUS  ROLES    AGE    VERSION
k8s-master01  Ready    master  132m  v1.12.0
k8s-master02  Ready    master  117m  v1.12.0
k8s-master03  Ready    master  108m  v1.12.0
k8s-node01    Ready    <none>  52m    v1.12.0

Point the nodes at the VIP. If the VIP was not specified in the init configuration file, this change is required.

When kubeadm init ran, the two node-side components, kubelet and kube-proxy, were pointed at the local kube-apiserver. This step changes the configuration of both components so that their kube-apiserver address is the VIP, and then verifies the cluster.

1. Modify kube-proxy

kubectl get configmap -n kube-system kube-proxy -o yaml > kube-proxy.yaml

Edit kube-proxy.yaml so that the apiserver address it connects to is the VIP, re-apply it, and restart the kube-proxy pods (see below):

kubectl apply -f kube-proxy.yaml
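Re-applying the ConfigMap alone does not restart the running kube-proxy pods; deleting them lets the DaemonSet recreate them with the new configuration (the label below is the one kubeadm applies to its kube-proxy pods):

kubectl -n kube-system delete pod -l k8s-app=kube-proxy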

2. Modify kubelet

vim /etc/kubernetes/kubelet.conf
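What to change: the server: line in kubelet.conf must point at the VIP instead of the local apiserver. A sketch, where <VIP> is a placeholder for the haproxy/keepalived VIP configured earlier and 8443 is the haproxy frontend port:

# before: server: https://10.3.1.20:6443
# after:  server: https://<VIP>:8443
sed -i 's#server: https://10.3.1.20:6443#server: https://<VIP>:8443#' /etc/kubernetes/kubelet.conf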

systemctl restart kubelet

Done.

What if you forget the join command that was printed when the first master was initialized?

  • The simple way
kubeadm token create --print-join-command
  • A second way
token=$(kubeadm token generate)
kubeadm token create $token --print-join-command --ttl=0

Or:

List the existing tokens:
 kubeadm token list

Get the sha256 hash of the CA certificate:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

5443ce1592bb287ba362cedd3128c261c108c2c44fd48b5a60b90ee4e8460a3f

The hash beginning with 544 is the value to pass as --discovery-token-ca-cert-hash when joining the master.
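Putting the pieces together, a join command assembled from a fresh token and this hash would look like the following (the apiserver address and hash are taken from the examples above; the token is illustrative):

kubeadm join 10.3.1.20:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:5443ce1592bb287ba362cedd3128c261c108c2c44fd48b5a60b90ee4e8460a3f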