Installing Kubernetes on CentOS 7.2 with yum

On September 1, 2015, CentOS added Kubernetes to its official repositories, so installing Kubernetes is now much more convenient.

The master runs four components: kube-apiserver, kube-scheduler, kube-controller-manager and etcd.
Each node runs three components: kube-proxy, kubelet and flannel.

  1. kube-apiserver: runs on the master and accepts user requests.
  2. kube-scheduler: runs on the master and handles resource scheduling, i.e. deciding which node a pod is placed on.
  3. kube-controller-manager: runs on the master and contains the ReplicationManager, EndpointsController, NamespaceController, NodeController and so on.
  4. etcd: a distributed key-value store that holds the resource objects shared by the whole cluster.
  5. kubelet: runs on each node and maintains the pods running on that particular host.
  6. kube-proxy: runs on each node and acts as a service proxy.
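
Once everything below is installed, a quick way to confirm which components ended up on which machine is to ask systemd for the unit status (the unit names are those installed by the CentOS packages used in this guide):

[root@master ~]# systemctl status etcd kube-apiserver kube-scheduler kube-controller-manager
[root@node ~]# systemctl status kubelet kube-proxy flanneld docker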

1. Preparation

Run the following steps on all of the servers listed below.
master:192.168.52.130
node:192.168.52.132

1. Disable the firewall

Disable iptables on every machine to avoid conflicts with Docker's iptables rules:

#systemctl stop firewalld
#systemctl disable firewalld
#iptables -P FORWARD ACCEPT

2. Install NTP

To keep the clocks of all the servers in sync, install NTP on each of them:

#yum -y install ntp
#systemctl start ntpd
#systemctl enable ntpd
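
Optionally, once ntpd has been running for a little while, you can check that it is actually synchronizing with its upstream servers:

#ntpq -p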

3. Disable SELinux

#vi /etc/selinux/config

#SELINUX=enforcing
SELINUX=disabled
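
The change in /etc/selinux/config only takes effect after a reboot; to switch the running system to permissive mode right away you can also run:

#setenforce 0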

2. Deploy the master

1. Install etcd and kubernetes-master

[root@master etc]# yum -y install etcd kubernetes-master

2. Edit etcd.conf

[root@master etc]# vi /etc/etcd/etcd.conf
ETCD_NAME=node1
# where etcd stores its data
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
# address to listen on for other etcd peers
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
# addresses to listen on for clients
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""

#[cluster]
# peer address advertised to the other etcd members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.52.130:2380"
#if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
# addresses of the members of the initial cluster
ETCD_INITIAL_CLUSTER="node1=http://192.168.52.130:2380,node2=http://192.168.52.132:2380"
# initial cluster state; "new" means a brand new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="mritd-etcd-cluster"
# client addresses advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.52.130:2379,http://192.168.52.130:4001"

3. Edit the kube-master configuration files

[root@master kubernetes]# vi /etc/kubernetes/apiserver
###
#kubernetes system config
#
#The following values are used to configure the kube-apiserver
#
#The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
##The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
#Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
#Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16 --service-node-port-range=1-65535"
#default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
# ServiceAccount has been removed from the admission control list above, to avoid the "No resources found." error from kubectl get pods
#Add your own!
KUBE_API_ARGS=""

[root@master /]# vi /etc/kubernetes/controller-manager
###
#the following values are used to configure the kubernetes controller-manager

#defaults from config and apiserver should be adequate
#Add your own!
#KUBE_CONTROLLER_MANAGER_ARGS=""
KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"

[root@master /]# vi /etc/kubernetes/config
###
#kubernetes system config
#
#the following values are used to configure various aspects of all
#kubernetes services, including
#
#kube-apiserver.service
#kube-controller-manager.service
#kube-scheduler.service
#kubelet.service

#kube-proxy.service
#logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

#journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

#Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

#How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.52.130:8080"

The 8080 here is the API server's insecure port. If it is already in use, or you simply do not want to use it, it can be changed, as sketched below.
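
For example, to move the insecure API port to 8081 (an arbitrary illustrative value), you would uncomment and change KUBE_API_PORT in /etc/kubernetes/apiserver and update every KUBE_MASTER reference to match, on the master as well as on each node:

# /etc/kubernetes/apiserver
KUBE_API_PORT="--port=8081"

# /etc/kubernetes/config (master and every node)
KUBE_MASTER="--master=http://192.168.52.130:8081"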

4. Start the services

Enable etcd, kube-apiserver, kube-scheduler and kube-controller-manager so that they start on boot:

[root@master /]# systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager

Then start them:

[root@master /]# systemctl start etcd kube-apiserver kube-scheduler kube-controller-manager
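
As an optional sanity check, you can ask the API server whether the control-plane components and etcd look healthy:

[root@master /]# kubectl get componentstatuses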

5. Configure the flannel network in etcd

Define the network configuration in etcd; the flannel service on each node will pull this configuration:

[root@master /]# etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
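
You can read the key back to confirm that it was stored (etcdctl cluster-health is another handy check on the etcd side):

[root@master /]# etcdctl get /coreos.com/network/config
{"Network":"172.17.0.0/16"}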

3. Deploy the minions (nodes)

1. Install kubernetes-node and flannel (this pulls in docker automatically)

[root@node ~]# yum -y install kubernetes-node flannel

2. Edit the node's Kubernetes config

[root@node ~]# vi /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"

#journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

#Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

#How the controller-manager, scheduler, and proxy find the apiserver
#KUBE_MASTER="--master=http://127.0.0.1:8080"
KUBE_MASTER="--master=http://192.168.52.130:8080"

In the kubelet config, change the hostname override to the node's own IP address or hostname:

[root@node ~]# vi /etc/kubernetes/kubelet

###
#kubernetes kubelet (minion) config

#The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"

#The port for the info server to serve on
#KUBELET_PORT="--port=10250"

#You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.52.132"

#location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.52.130:8080"

#pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

#Add your own!
#KUBELET_ARGS=""
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause"
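
Note that both KUBELET_POD_INFRA_CONTAINER and KUBELET_ARGS above set --pod-infra-container-image; the kubernetes/pause value from KUBELET_ARGS appears to be the one that takes effect (the container listing later in this guide shows the pause container running from kubernetes/pause). If pulls from Docker Hub are slow on the node, you can optionally pre-pull the image so the first pod does not stall:

[root@node ~]# docker pull kubernetes/pause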

3. Edit the flannel config

Point flannel at the etcd service by editing /etc/sysconfig/flanneld:

[root@node ~]# vi /etc/sysconfig/flanneld
#etcd url location.  Point this to the server where etcd runs
#FLANNEL_ETCD="http://127.0.0.1:2379"
FLANNEL_ETCD="http://192.168.52.130:2379"


#etcd config key.  This is the configuration key that flannel queries
#For address range assignment
#FLANNEL_ETCD_KEY="/atomic.io/network"
FLANNEL_ETCD_KEY="/coreos.com/network"


#Any additional options that you want to pass
FLANNEL_OPTIONS=" -iface=ens33"



In FLANNEL_OPTIONS=" -iface=ens33", ens33 is the name of the network interface (you can look it up with ifconfig; on CentOS 7, if you have not renamed the interface it may be something like enoXXXXX). See the sketch below.
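
If you are unsure of the name, a quick way to list all interfaces together with their IPv4 addresses is:

[root@node ~]# ip -o -4 addr show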

4. Start the services

[root@node ~]# systemctl restart flanneld docker
[root@node ~]# systemctl start kubelet kube-proxy
[root@node ~]# systemctl enable flanneld kubelet kube-proxy

Run ifconfig and you will see that each minion (node) now has two additional interfaces, docker0 and flannel0; their addresses differ from one minion to the next.
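
To confirm that docker0 really picked up a subnet leased by flannel, you can compare the subnet flannel records in /run/flannel/subnet.env (the default subnet file written by flanneld) with the address on docker0. This is purely an optional check:

[root@node ~]# cat /run/flannel/subnet.env
[root@node ~]# ip -4 addr show docker0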

4. Verification

1. Testing: create the first pod

On the master, create an nginx deployment:

[root@master ~]# kubectl create deployment nginx --image=nginx
[root@master ~]# kubectl describe deployment nginx

Create a NodePort service for it:

[root@master ~]# kubectl create service nodeport nginx --tcp=80:80

[root@master ~]# kubectl describe service nginx
Name:			nginx
Namespace:		default
Labels:			app=nginx
Selector:		app=nginx
Type:			NodePort
IP:			10.254.4.244
Port:			80-80	80/TCP
NodePort:		80-80	30862/TCP
Endpoints:		172.17.48.2:80
Session Affinity:	None
No events.
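
The Endpoints field above should list the pod's flannel IP (172.17.48.2 here). You can also query the endpoints object directly on the master:

[root@master ~]# kubectl get endpoints nginx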

2. On the node, check that the docker nginx container's IP matches the Endpoints shown above

[root@node ~]# docker inspect 423a3b8b26b2
[
   {
       "Id": "423a3b8b26b2f511ceed97cdc5c5c14e0c4ce69dae5f5818406f0013566da67b"
       "Created": "2019-02-26T01:02:22.4188594Z",
       "Path": "/pause",
       "Args": [],
       "State": {
           "Status": "running",
           "Running": true,
           "Paused": false,
           "Restarting": false,
           "OOMKilled": false,
           "Dead": false,
           "Pid": 25352,
           "ExitCode": 0,
           "Error": "",
           "StartedAt": "2019-02-26T01:02:24.196708758Z",
           "FinishedAt": "0001-01-01T00:00:00Z"
       },
       "Image": "sha256:f9d5de0795395db6c50cb1ac82ebed1bd8eb3eefcebb1aa724e0123
       "ResolvConfPath": "/var/lib/docker/containers/423a3b8b26b2f511ceed97cdc5013566da67b/resolv.conf",
       "HostnamePath": "/var/lib/docker/containers/423a3b8b26b2f511ceed97cdc5c53566da67b/hostname",
       "HostsPath": "/var/lib/docker/containers/423a3b8b26b2f511ceed97cdc5c5c146da67b/hosts",
       "LogPath": "",
       "Name": "/k8s_POD.c73fd98d_nginx-3121059884-4k8vd_default_baed9fe9-38b9-406c",
       "RestartCount": 0,
       "Driver": "overlay2",
       "MountLabel": "",
       "ProcessLabel": "",
       "AppArmorProfile": "",
       "ExecIDs": null,
       "HostConfig": {
           "Binds": null,
           "ContainerIDFile": "",
           "LogConfig": {
               "Type": "journald",
               "Config": {}
           },
           "NetworkMode": "default",
           "PortBindings": {},
           "RestartPolicy": {
               "Name": "",
               "MaximumRetryCount": 0
           },
           "AutoRemove": false,
           "VolumeDriver": "",
           "VolumesFrom": null,
           "CapAdd": null,
           "CapDrop": null,
           "Dns": [
               "192.168.52.2"
           ],
           "DnsOptions": null,
           "DnsSearch": [
               "localdomain"
           ],
           "ExtraHosts": null,
           "GroupAdd": null,
           "IpcMode": "",
           "Cgroup": "",
           "Links": null,
           "OomScoreAdj": -998,
           "PidMode": "",
           "Privileged": false,
           "PublishAllPorts": false,
           "ReadonlyRootfs": false,
           "SecurityOpt": [
               "seccomp=unconfined"
           ],
           "UTSMode": "",
           "UsernsMode": "",
           "ShmSize": 67108864,
           "Runtime": "docker-runc",
           "ConsoleSize": [
               0,
               0
           ],
           "Isolation": "",
           "CpuShares": 2,
           "Memory": 0,
           "NanoCpus": 0,
           "CgroupParent": "",
           "BlkioWeight": 0,
           "BlkioWeightDevice": null,
           "BlkioDeviceReadBps": null,
           "BlkioDeviceWriteBps": null,
           "BlkioDeviceReadIOps": null,
           "BlkioDeviceWriteIOps": null,
           "CpuPeriod": 0,
           "CpuQuota": 0,
           "CpuRealtimePeriod": 0,
           "CpuRealtimeRuntime": 0,
           "CpusetCpus": "",
           "CpusetMems": "",
           "Devices": [],
           "DiskQuota": 0,
           "KernelMemory": 0,
           "MemoryReservation": 0,
           "MemorySwap": -1,
           "MemorySwappiness": -1,
           "OomKillDisable": false,
           "PidsLimit": 0,
           "Ulimits": null,
           "CpuCount": 0,
           "CpuPercent": 0,
           "IOMaximumIOps": 0,
           "IOMaximumBandwidth": 0
       },
       "GraphDriver": {
           "Name": "overlay2",
           "Data": {
               "LowerDir": "/var/lib/docker/overlay2/1bf4efe3b1c04a93dc5efcdea2729e3e6f43b-init/diff:/var/lib/docker/overlay2/8b2860fbde3dec06a9b19e127c49cc9ac62c5/diff:/var/lib/docker/overlay2/5558c6c8eb694182c22e68a223223ff03cd64c70c6612ar/lib/docker/overlay2/602d9c3d734dba42cceddf7e88775efed7e477b95894775a7772149cc
               "MergedDir": "/var/lib/docker/overlay2/1bf4efe3b1c04a93dc5efcdead729e3e6f43b/merged",
               "UpperDir": "/var/lib/docker/overlay2/1bf4efe3b1c04a93dc5efcdea2729e3e6f43b/diff",
               "WorkDir": "/var/lib/docker/overlay2/1bf4efe3b1c04a93dc5efcdea2b29e3e6f43b/work"
           }
       },
       "Mounts": [],
       "Config": {
           "Hostname": "nginx-3121059884-4k8vd",
           "Domainname": "",
           "User": "",
           "AttachStdin": false,
           "AttachStdout": false,
           "AttachStderr": false,
           "Tty": false,
           "OpenStdin": false,
           "StdinOnce": false,
           "Env": [
               "KUBERNETES_SERVICE_PORT=443",
               "NGINX_SERVICE_HOST=10.254.4.244",
               "NGINX_PORT=tcp://10.254.4.244:80",
               "NGINX_PORT_80_TCP_PORT=80",
               "NGINX_PORT_80_TCP_ADDR=10.254.4.244",
               "KUBERNETES_PORT_443_TCP_ADDR=10.254.0.1",
               "NGINX_PORT_80_TCP=tcp://10.254.4.244:80",
               "KUBERNETES_SERVICE_HOST=10.254.0.1",
               "KUBERNETES_SERVICE_PORT_HTTPS=443",
               "NGINX_SERVICE_PORT_80_80=80",
               "NGINX_PORT_80_TCP_PROTO=tcp",
               "KUBERNETES_PORT=tcp://10.254.0.1:443",
               "KUBERNETES_PORT_443_TCP=tcp://10.254.0.1:443",
               "KUBERNETES_PORT_443_TCP_PROTO=tcp",
               "KUBERNETES_PORT_443_TCP_PORT=443",
               "NGINX_SERVICE_PORT=80",
               "HOME=/",
               "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/b
           ],
           "Cmd": null,
           "Image": "kubernetes/pause",
           "Volumes": null,
           "WorkingDir": "",
           "Entrypoint": [
               "/pause"
           ],
           "OnBuild": null,
           "Labels": {
               "io.kubernetes.container.hash": "c73fd98d",
               "io.kubernetes.container.name": "POD",
               "io.kubernetes.container.restartCount": "0",
               "io.kubernetes.container.terminationMessagePath": "",
               "io.kubernetes.pod.name": "nginx-3121059884-4k8vd",
               "io.kubernetes.pod.namespace": "default",
               "io.kubernetes.pod.terminationGracePeriod": "30",
               "io.kubernetes.pod.uid": "baed9fe9-38b9-11e9-bd0c-000c291410a9"
           }
       },
       "NetworkSettings": {
           "Bridge": "",
           "SandboxID": "0fe4522e3b99b460d78ed3fea3beddef34044ce8e339f134998ca4
           "HairpinMode": false,
           "LinkLocalIPv6Address": "",
           "LinkLocalIPv6PrefixLen": 0,
           "Ports": {},
           "SandboxKey": "/var/run/docker/netns/0fe4522e3b99",
           "SecondaryIPAddresses": null,
           "SecondaryIPv6Addresses": null,
           "EndpointID": "e31d625288e553a87e5e7bae1da03badbfeddf83305dc3d43196e
           "Gateway": "172.17.48.1",
           "GlobalIPv6Address": "",
           "GlobalIPv6PrefixLen": 0,
           "IPAddress": "172.17.48.2",
           "IPPrefixLen": 24,
           "IPv6Gateway": "",
           "MacAddress": "02:42:ac:11:30:02",
           "Networks": {
               "bridge": {
                   "IPAMConfig": null,
                   "Links": null,
                   "Aliases": null,
                   "NetworkID": "901845a1f83d292f893a37bfe735ab0ca022ed0be45817
                   "EndpointID": "e31d625288e553a87e5e7bae1da03badbfeddf83305dc
                   "Gateway": "172.17.48.1",
                   "IPAddress": "172.17.48.2",
                   "IPPrefixLen": 24,
                   "IPv6Gateway": "",
                   "GlobalIPv6Address": "",
                   "GlobalIPv6PrefixLen": 0,
                   "MacAddress": "02:42:ac:11:30:02"
               }
           }
       }
   }
]
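
Digging through the full JSON just to find the IP is tedious; docker inspect can also extract a single field with a Go template, for example:

[root@node ~]# docker inspect -f '{{.NetworkSettings.IPAddress}}' 423a3b8b26b2
172.17.48.2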

3. Run checks on the master

[root@master ~]# kubectl get nodes
NAME              STATUS    AGE
192.168.52.132   Ready     20m
[root@master ~]# kubectl get pods
NAME                                           READY     STATUS        RESTARTS   AGE
nginx-3121059884-4k8vd   1/1         Running       12         21h
[root@master /]# kubectl describe pods nginx-3121059884-4k8vd
Name:		nginx-3121059884-4k8vd
Namespace:	default
Node:		192.168.52.132/192.168.52.132
Start Time:	Mon, 25 Feb 2019 12:56:48 +0800
Labels:		app=nginx
 	pod-template-hash=3121059884
Status:		Running
IP:		172.17.48.2
Controllers:	ReplicaSet/nginx-3121059884
Containers:
nginx:
 Container ID:		docker://b1f59f8025255f03c5f7f1a9c5c7847fc9e178d5d4bf5c51b6855db328894a70
 Image:			nginx
 Image ID:			docker-pullable://docker.io/[email protected]:dd2d0ac3fff2f007d99e033b64854be0941e19a2ad51f174d9240dda20d9f534
 Port:			
 State:			Running
   Started:			Tue, 26 Feb 2019 09:02:30 +0800
 Last State:			Terminated
   Reason:			Completed
   Exit Code:		0
   Started:			Mon, 25 Feb 2019 17:03:29 +0800
   Finished:			Tue, 26 Feb 2019 09:02:08 +0800
 Ready:			True
 Restart Count:		12
 Volume Mounts:		<none>
 Environment Variables:	<none>
Conditions:
Type		Status
Initialized 	True 
Ready 	True 
PodScheduled 	True 
No volumes.
QoS Class:	BestEffort
Tolerations:	<none>
No events.

[root@master ~]# kubectl get svc
NAME         CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.254.0.1     <none>        443/TCP        4d
nginx        10.254.4.244   <nodes>       80:30862/TCP   4d
[root@master ~]# kubectl describe svc nginx
Name:			nginx
Namespace:		default
Labels:			app=nginx
Selector:		app=nginx
Type:			NodePort
IP:			10.254.4.244
Port:			80-80	80/TCP
NodePort:		80-80	30862/TCP
Endpoints:		172.17.48.2:80
Session Affinity:	None
No events.

4. Test on the node

[root@node ~]# docker container ls
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
b1f59f802525        nginx               "nginx -g 'daemon ..."   About an hour ago   Up About an hour                        k8s_nginx.9179dbd3_nginx-3121059884-4k8vd_default_baed9fe9-38b9-11e9-bd0c-000c291410a9_e251e27f
423a3b8b26b2        kubernetes/pause    "/pause"                 About an hour ago   Up About an hour                        k8s_POD.c73fd98d_nginx-3121059884-4k8vd_default_baed9fe9-38b9-11e9-bd0c-000c291410a9_bcb3406c

Check which ports kube-proxy is listening on:

[root@node ~]# netstat -lnpt|grep kube-proxy

tcp        0      0 127.0.0.1:10249         0.0.0.0:*               LISTEN      25180/kube-proxy    
tcp6       0      0 :::30862                :::*                    LISTEN      25180/kube-proxy    

[root@node ~]# curl http://192.168.52.132:30862/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

5. Access it from a browser (IE)

http://192.168.52.132:30862/

With that, etcd + flannel + Kubernetes is up and running on CentOS 7.