Rainbond Installation and Maintenance

Rainbond installation directory:
(figure: Rainbond installation directory layout)

Rainbond component description:
(figure: Rainbond components)

Rainbond node role classification:

Master node (management node)
Worker node (compute node)
Storage node (storage node)

Rainbond service components and their version information:
(figure: Rainbond service components and versions)
Note:

For detailed descriptions of the other components, see:

http://www.rainbond.com/docs/stable/operation-manual/cluster-management/manager-services/rbd-api.html

Domain name resolution provided by the rbd-dns service for internal component-to-component access in Rainbond:
(figure: rbd-dns internal domain name resolution)

Port description for selected Rainbond services:
(figure: Rainbond service port table)

Tips:

For etcd, 4001 is the insecure port and 2379 is the secure port.
For kube-apiserver, 8181 is the insecure port and 6443 is the secure port.
The Rainbond API port does not need to be exposed externally when there is only one data center; with multiple data centers on different networks it must be exposed. 8888 is the insecure port, 8443 the secure port.
The 80 and 443 ports provided by rbd-lb serve HTTP applications; 20001~60000 serve TCP applications (see the firewall sketch below).
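
If the nodes run firewalld, the relevant ports can be opened along these lines (a sketch only; which ports to expose, especially the 20001~60000 TCP application range, depends on your own network layout):

#HTTP applications served through rbd-lb
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp

#TCP applications served through rbd-lb
firewall-cmd --permanent --add-port=20001-60000/tcp

#Rainbond API, only needed for multi-data-center setups
firewall-cmd --permanent --add-port=8443/tcp

firewall-cmd --reload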

Rainbond automated deployment and installation:

In a public-cloud environment (Alibaba Cloud, Tencent Cloud, etc.) the public IP can be specified with --eip <public IP> (optional). The Rainbond version can be chosen with --rainbond-version <version> (optional); v3.7.1 and v3.7.2 are currently supported, and v3.7 defaults to the latest release, v3.7.2.

wget https://pkg.rainbond.com/releases/common/v3.7.2/grctl 
chmod +x ./grctl

Optional parameters: eip, rainbond-version

  ./grctl init --eip <public IP> --rainbond-version <version>
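
For example, on a cloud host whose (hypothetical) public IP is 1.2.3.4, installing v3.7.2 explicitly would look like:

  ./grctl init --eip 1.2.3.4 --rainbond-version v3.7.2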

After installation, checking cluster status and logging in to the management platforms:

#Overall cluster status

grctl cluster

#Cluster node status

 grctl node list

#Console access address

 <management node>:7070    e.g. 192.168.1.162:7070  (username: yuechao, password: zbbt1314)

#Artifactory management platform, mainly for the Maven repositories that Java projects require

  <management node>:8081    e.g. 192.168.1.162:8081  (username: admin, password: password)

Command-line maintenance:

1. The grctl command

grctl is the cluster management tool bundled with Rainbond. Its main features are as follows:

(figure: grctl feature overview)

2. Batch-managing services with grclis

Stop all services on the current node in one batch

grclis stop

Start all services on the current node in one batch

grclis start

Update image versions in one batch

grclis update all*

3. Docker-related commands

docker (1): basic docker commands and publishing images: https://blog.csdn.net/yueaini10000/article/details/83784397

docker (2): building an image with a Dockerfile and deploying a web project: https://blog.csdn.net/yueaini10000/article/details/83784535

docker (3): mounting a host directory into a docker container (see the sketch below): https://blog.csdn.net/yueaini10000/article/details/83784656
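
As a quick illustration of item (3) above, a host directory is mounted into a container with docker's -v flag; the paths and image here are only placeholders:

#Mount the host directory /data/www read-only into an nginx container
docker run -d --name web -p 8080:80 -v /data/www:/usr/share/nginx/html:ro nginx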

4. Installing GlusterFS

① hosts resolution

[root@server1 ~]# cat /etc/hosts
127.0.0.1 localhost
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.81.29.87 server1
10.81.9.115 server2

② Install GlusterFS

#Run on all storage nodes

    yum install centos-release-gluster -y
    yum install glusterfs-server -y

③ Start the GlusterFS service

#Run on all storage nodes
systemctl  start   glusterd.service
systemctl  enable  glusterd.service
systemctl  status  glusterd.service

④ Configure the trusted pool (adding from one side is enough)

[root@server1 ~]# gluster peer probe server2
peer probe: success.
[root@server1 ~]# gluster peer status
Number of Peers: 1

Hostname: server2
Uuid: be69468e-94b6-45a6-8a3d-bea86c2702dc
State: Peer in Cluster (Connected)

⑤ Create a volume

#Run on all storage nodes

mkdir  -p /data/glusterfs

#Create a volume

gluster volume create data replica 2 server1:/data/glusterfs server2:/data/glusterfs

Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume create: data: success: please start the volume to access data
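
As the warning above notes, a replica 2 volume is prone to split-brain. If a third storage node is available, an arbiter volume avoids this; the sketch below assumes a hypothetical server3 prepared the same way as server1 and server2:

#Replica 3 with one arbiter brick (metadata only) on server3
gluster volume create data replica 3 arbiter 1 server1:/data/glusterfs server2:/data/glusterfs server3:/data/glusterfs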

⑥ View volume information

[root@server1 ~]# gluster volume info

Volume Name: data
Type: Replicate
Volume ID: 8c16603c-2fab-4117-8020-2310b0041b35
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: server1:/data/glusterfs
Brick2: server2:/data/glusterfs
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

⑦ Start the volume

[root@server1 ~]# gluster volume start data
volume start: data: success
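
Once the volume is started, the state of its bricks and of the self-heal daemon can be checked with:

gluster volume status data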

⑧ Mount and test

#Mount on server1
[root@server1 ~]# mount -t glusterfs server1:/data /mnt

#Mount on server2
[root@server2 ~]# mount -t glusterfs server2:/data /mnt

#Create files on server2
[root@server2 ~]# touch /mnt/{1..10}test.txt

#Check on server1 that the files have replicated; if the files created in the previous step appear, replication is working
[root@server1 ~]# ls /mnt/
10test.txt  2test.txt  4test.txt  6test.txt  8test.txt
1test.txt   3test.txt  5test.txt  7test.txt  9test.txt
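
To make these mounts survive a reboot, an /etc/fstab entry along the following lines can be added on each client (shown for server1; _netdev delays mounting until the network is up):

#/etc/fstab
server1:/data  /mnt  glusterfs  defaults,_netdev  0 0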

5. Adding compute nodes:

grctl node add --host compute01 --iip <internal ip> -p <root pass> -r worker
grctl node add --host compute02 --iip <internal ip> -p <root pass> -r worker
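
For example, with a hypothetical internal IP of 192.168.1.163 (substitute the real root password for the placeholder), the first command would look like:

grctl node add --host compute01 --iip 192.168.1.163 -p <root pass> -r worker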

6. Uninstalling Rainbond

grclis stop

systemctl disable docker
systemctl disable etcd
systemctl disable node
systemctl disable calico
systemctl disable salt-master
systemctl disable salt-minion
systemctl disable kube-apiserver
systemctl disable kube-controller-manager
systemctl disable kube-scheduler
systemctl disable kubelet

cd /etc/systemd/system/
systemctl disable rbd-*

clear

systemctl stop docker
systemctl stop salt-master
systemctl stop salt-minion

yum remove -y gr-docker*
yum remove -y salt-*

rm -rf /etc/systemd/system/kube-*
rm -rf /etc/systemd/system/rbd-*
rm -rf /etc/systemd/system/kubelet*
rm -rf /etc/systemd/system/node.service
rm -rf /etc/systemd/system/etcd.service
rm -rf /etc/systemd/system/calico.service
rm -rf /usr/lib/systemd/system/docker.service

rm -rf /opt/rainbond
rm -rf /cache
rm -rf /grdata/
rm -rf /etc/goodrain/
rm -rf /srv/
rm -rf /etc/salt/*

cat > /etc/hosts <<EOF
127.0.0.1 localhost
EOF

#Under /usr/local/bin/ the following binaries can be removed as needed:
calicoctl  ctop  dc-compose  docker-compose  domain-cli  etcdctl  grcert  grctl  kubectl  kubelet  node  scope  yq
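
If all of them are to be removed, a small loop saves typing each name (a sketch only; double-check the list before deleting):

cd /usr/local/bin
for f in calicoctl ctop dc-compose docker-compose domain-cli etcdctl grcert grctl kubectl kubelet node scope yq; do
    rm -f "$f"
done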

Management node maintenance:

http://www.rainbond.com/docs/stable/operation-manual/platform-maintenance/management-node.html

Compute node maintenance:

http://www.rainbond.com/docs/stable/operation-manual/platform-maintenance/compute-node.html

More maintenance commands: http://www.rainbond.com/docs/stable/operation-manual/cli.html