MongoDB 3.0 Cluster Installation and Security Authentication Guide

This guide describes how to build a highly available MongoDB cluster that provides uninterrupted, high-performance database service to applications, and how to add an authentication mechanism to the cluster to improve database security.
Environment Information

Installed version:
Version: mongodb-linux-x86_64-rhel62-3.0.10.tgz
OS version: CentOS release 6.6 (Final)
Storage engine: mmapv1

Cluster Architecture

The cluster consists of 3 shard replica sets, 3 config servers, and 3 mongos routers, for a total of 15 instance processes.
Server Information

| Machine Type | Components Installed | Description | IP Address | Hostname |
| --- | --- | --- | --- | --- |
| App Server 1 | Application, Mongos | Serves the dual role of app server and mongos router | 192.168.56.102 | mongodb01 |
| App Server 2 | Application, Mongos | Serves the dual role of app server and mongos router | 192.168.56.103 | mongodb02 |
| Mongo Config 1 | Mongo Config Server | Used as mongodb config server | 192.168.56.102 | mongodb01 |
| Mongo Config 2 | Mongo Config Server | Used as mongodb config server | 192.168.56.103 | mongodb02 |
| Mongo Config 3 | Mongo Config Server | Used as mongodb config server | 192.168.56.104 | mongodb03 |
| Shard 1 Primary | Mongo DB | Used as primary DB server in shard 1 | 192.168.56.102 | mongodb01 |
| Shard 1 Secondary | Mongo DB | Used as secondary DB server in shard 1 | 192.168.56.103 | mongodb02 |
| Shard 1 Secondary | Mongo DB | Used as secondary DB server in shard 1 | 192.168.56.104 | mongodb03 |
| Shard 2 Primary | Mongo DB | Used as primary DB server in shard 2 | 192.168.56.102 | mongodb01 |
| Shard 2 Secondary | Mongo DB | Used as secondary DB server in shard 2 | 192.168.56.103 | mongodb02 |
| Shard 2 Secondary | Mongo DB | Used as secondary DB server in shard 2 | 192.168.56.104 | mongodb03 |
| Shard 3 Primary | Mongo DB | Used as primary DB server in shard 3 | 192.168.56.102 | mongodb01 |
| Shard 3 Secondary | Mongo DB | Used as secondary DB server in shard 3 | 192.168.56.103 | mongodb02 |
| Shard 3 Secondary | Mongo DB | Used as secondary DB server in shard 3 | 192.168.56.104 | mongodb03 |
Deployment Steps (without authentication)

Install MongoDB

Install MongoDB (mongodb-linux-x86_64-rhel62-3.0.10) on all three servers. Installation steps:
1. Upload the MongoDB package to the pre-created /app directory.
2. Extract the package: tar -zxvf mongodb-linux-x86_64-rhel62-3.0.10.tgz

Plan the instance layout

Run one mongod instance on each of the three machines (mongod shard11, shard12, shard13); together they form replica set 1, which serves as shard1 of the cluster.
Run a second mongod instance on each machine (mongod shard21, shard22, shard23); together they form replica set 2, which serves as shard2 of the cluster.
Run a third mongod instance on each machine (mongod shard31, shard32, shard33); together they form replica set 3, which serves as shard3 of the cluster.
Run one additional mongod instance on each machine to act as the 3 config servers.
Run one mongos process on each machine for client connections.

Create the configuration, log, shard, and keyfile directories and the authentication file
Create the following directories on each server.

On the three shard servers, create the shard data and log directories:
Server1:
mkdir -p /app/mongodb/mmapv1/shard11
mkdir -p /app/mongodb/mmapv1/shard21
mkdir -p /app/mongodb/mmapv1/shard31
Server2:
mkdir -p /app/mongodb/mmapv1/shard12
mkdir -p /app/mongodb/mmapv1/shard22
mkdir -p /app/mongodb/mmapv1/shard32
Server3:
mkdir -p /app/mongodb/mmapv1/shard13
mkdir -p /app/mongodb/mmapv1/shard23
mkdir -p /app/mongodb/mmapv1/shard33

On the three config servers, create the config data directory:
mkdir -p /app/mongodb/mmapv1/config

On the three route servers, create the log directory:
mkdir -p /app/mongodb/mmapv1/logs

On all three servers, create the key directory:
mkdir -p /app/mongodb/mmapv1/key

Create the authenticated and non-authenticated configuration directories, following a consistent naming convention:
Create shard11.conf, shard12.conf, shard13.conf, shard21.conf, shard22.conf, shard23.conf, shard31.conf, shard32.conf, shard33.conf, configsvr.conf, and mongos.conf in both the /app/mongodb/mmapv1/security and /app/mongodb/mmapv1/nosecurity directories.
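The per-server mkdir commands above can be scripted in one loop. The sketch below is illustrative and not part of the original procedure: the MONGO_BASE variable is an addition that defaults to a temporary path so the script can be tried safely; on the real servers it would be /app/mongodb/mmapv1, and the shard directory suffix depends on the server.

```shell
#!/bin/sh
# Directory layout for Server1; on Server2/Server3 the shard directories
# end in 2 and 3 (shard12/22/32, shard13/23/33).
# MONGO_BASE is a demo variable (not in the original guide).
MONGO_BASE="${MONGO_BASE:-/tmp/mongodb-mmapv1-demo}"
for d in shard11 shard21 shard31 config logs key security nosecurity; do
    mkdir -p "$MONGO_BASE/$d"
done
ls "$MONGO_BASE"
```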
Configure replica set shard 1

1. Configure the replica set used by shard1:
Server1:
cd /app/mongodb/mmapv1/nosecurity
Create the configuration file, named shard11.conf:
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard11.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard11
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard11/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

#security:
#  authorization: enabled
#  clusterAuthMode: keyFile
#  keyFile: /app/mongodb/mmapv1/shard11/mongodb.key

#operationProfiling:

replication:
  oplogSizeMB: 100
  replSetName: shard1

sharding:
  clusterRole: shardsvr
Create the startup script shard1.sh with the following content:
# more shard1.sh
/app/mongodb/bin.3.0.10/mongod -f /app/mongodb/bin.3.0.10/nosecurity/shard11.conf
Make shard1.sh executable:
# chmod -R 777 shard1.sh
Start the instance with:
sh shard1.sh
Server2:
cd /app/mongodb/mmapv1
Create the configuration file, named shard12.conf:
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard12.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard12
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard12/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

#security:
#  authorization: enabled
#  clusterAuthMode: keyFile
#  keyFile: /app/mongodb/mmapv1/shard11/mongodb.key

#operationProfiling:

replication:
  oplogSizeMB: 100
  replSetName: shard1

sharding:
  clusterRole: shardsvr
Create the startup script shard1.sh with the following content:
# cd /app/mongodb/bin.3.0.10/nosecurity
# cat shard1.sh
/app/mongodb/bin.3.0.10/mongod -f /app/mongodb/bin.3.0.10/nosecurity/shard12.conf
Make shard1.sh executable:
# chmod -R 777 shard1.sh
Start the instance with:
sh shard1.sh
Server3:
cd /app/mongodb/mmapv1
Create the configuration file, named shard13.conf:
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard13.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard13
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard13/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

#security:
#  authorization: enabled
#  clusterAuthMode: keyFile
#  keyFile: /app/mongodb/mmapv1/shard13/mongodb.key

#operationProfiling:

replication:
  oplogSizeMB: 100
  replSetName: shard1

sharding:
  clusterRole: shardsvr
Create the startup script shard1.sh with the following content:
# cd /app/mongodb/bin.3.0.10/nosecurity
# cat shard1.sh
/app/mongodb/bin.3.0.10/mongod -f /app/mongodb/bin.3.0.10/nosecurity/shard13.conf
Make shard1.sh executable:
# chmod -R 777 shard1.sh
Start the instance with:
sh shard1.sh
Connect to one of the mongod instances with the mongo shell:
# cd /app/mongodb/bin.3.0.10/
#./mongo -port 27017
Then run:
> config = {_id: 'shard1', members: [
{_id: 0, host: '192.168.56.102:27017'},
{_id: 1, host: '192.168.56.103:27017'},
{_id: 2, host: '192.168.56.104:27017'}]
}
> rs.initiate(config);
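The same initiation document is needed for each shard, differing only in the replica set name and port. The helper below is hypothetical (gen_rs_config is not part of the original guide); it simply prints the rs.initiate() script for one replica set, so one snippet covers shard1/27017, shard2/27018, and shard3/27019:

```shell
#!/bin/sh
# gen_rs_config: print the rs.initiate() script for one replica set.
# Illustrative helper, not part of the original procedure.
gen_rs_config() {
    rs="$1"
    port="$2"
    cat <<EOF
config = {_id: '$rs', members: [
    {_id: 0, host: '192.168.56.102:$port'},
    {_id: 1, host: '192.168.56.103:$port'},
    {_id: 2, host: '192.168.56.104:$port'}]
}
rs.initiate(config);
EOF
}
gen_rs_config shard1 27017 > /tmp/init_shard1.js
cat /tmp/init_shard1.js
# To apply it: ./mongo -port 27017 /tmp/init_shard1.js
```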
Configure replica set shard 2

Configure the replica set used by shard2 in the same way:
Server1:
cd /app/mongodb/mmapv1
Create the configuration file, named shard21.conf:
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard21.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard21
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard21/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27018
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

#security:

#operationProfiling:

replication:
  oplogSizeMB: 100
  replSetName: shard2

sharding:
  clusterRole: shardsvr
Create the startup script shard2.sh with the following content:
# cd /app/mongodb/bin.3.0.10/nosecurity
# cat shard2.sh
/app/mongodb/bin.3.0.10/mongod -f /app/mongodb/bin.3.0.10/nosecurity/shard21.conf
Make shard2.sh executable:
# chmod -R 777 shard2.sh
Start the instance with:
sh shard2.sh
Server2:
cd /app/mongodb/mmapv1
Create the configuration file, named shard22.conf:
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard22.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard22
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard22/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27018
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

#security:

#operationProfiling:

replication:
  oplogSizeMB: 100
  replSetName: shard2

sharding:
  clusterRole: shardsvr
Create the startup script shard2.sh with the following content:
# cd /app/mongodb/bin.3.0.10/nosecurity
# cat shard2.sh
/app/mongodb/bin.3.0.10/mongod -f /app/mongodb/bin.3.0.10/nosecurity/shard22.conf
Make shard2.sh executable:
# chmod -R 777 shard2.sh
Start the instance with:
sh shard2.sh
Server3:
cd /app/mongodb/mmapv1
Create the configuration file, named shard23.conf:
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard23.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard23
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard23/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27018
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

#security:

#operationProfiling:

replication:
  oplogSizeMB: 100
  replSetName: shard2

sharding:
  clusterRole: shardsvr
Create the startup script shard2.sh with the following content:
# cd /app/mongodb/bin.3.0.10/nosecurity
# cat shard2.sh
/app/mongodb/bin.3.0.10/mongod -f /app/mongodb/bin.3.0.10/nosecurity/shard23.conf
Make shard2.sh executable:
# chmod -R 777 shard2.sh
Start the instance with:
sh shard2.sh

Initialize the replica set

Connect to one of the mongod instances with the mongo shell:
# cd /app/mongodb/bin.3.0.10/
#./mongo -port 27018
Then run:
> config = {_id: 'shard2', members: [
{_id: 0, host: '192.168.56.102:27018'},
{_id: 1, host: '192.168.56.103:27018'},
{_id: 2, host: '192.168.56.104:27018'}]
}
> rs.initiate(config);

Configure replica set shard 3

Configure the replica set used by shard3 (replSetName shard3, port 27019) in the same way.
At this point all three replica sets are configured; the three shards are ready.

Configure the three config servers

Configure the configsvr.conf file on each of the three config servers.
# cd /app/mongodb/mmapv1/nosecurity
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/config.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/config
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/config/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 20000
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

#security:
#  authorization: enabled
#  clusterAuthMode: keyFile
#  keyFile: /app/mongodb/mmapv1/key/security

#operationProfiling:

#replication:
#  oplogSizeMB: 100
#  replSetName: shard1

sharding:
  clusterRole: configsvr
Create the startup script configsvr.sh:
# cd /app/mongodb/bin.3.0.10/nosecurity
# cat configsvr.sh
/app/mongodb/bin.3.0.10/mongod -f /app/mongodb/bin.3.0.10/nosecurity/configsvr.conf
Make configsvr.sh executable:
# chmod -R 777 configsvr.sh
Start the instance with:
sh configsvr.sh

Start the route service

Run the following on each of the three route servers:
# cd /app/mongodb/bin.3.0.10/nosecurity
# cat mongos.conf
# mongos.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/
configdb = 192.168.56.102:20000,192.168.56.103:20000,192.168.56.104:20000
port = 30000
chunkSize = 5
logpath = /app/mongodb/mmapv1/logs/mongos.log
logappend = true
fork = true
Create the startup script mongos.sh:
# cat mongos.sh
/app/mongodb/bin.3.0.10/mongos -f /app/mongodb/bin.3.0.10/nosecurity/mongos.conf
Make mongos.sh executable:
# chmod -R 777 mongos.sh
Start the instance with:
sh mongos.sh
Configure mongos

Connect to one of the mongos processes and switch to the admin database to perform the following configuration.
1. Connect to mongos and switch to admin:
#./mongo 192.168.56.102:30000
>use admin
2. Add the shards. Command format:
db.runCommand( { addshard : "replicaSetName/<serverhostname>[:port][,serverhostname2[:port],...]" } )
For example, to add the three shards spanning 192.168.56.102, 192.168.56.103, and 192.168.56.104:
>db.runCommand( { addshard : "shard1/192.168.56.102:27017,192.168.56.103:27017,192.168.56.104:27017",name:"s1"} );
>db.runCommand( { addshard : "shard2/192.168.56.102:27018,192.168.56.103:27018,192.168.56.104:27018",name:"s2"} );
>db.runCommand( { addshard : "shard3/192.168.56.102:27019,192.168.56.103:27019,192.168.56.104:27019",name:"s3"} );
3. List the shards:
>db.runCommand( { listshards : 1 } )
If all three shards you added are listed, the shard configuration succeeded.
Verification:
# ./mongo 192.168.56.102:30000/admin -u admin -p admin
MongoDB shell version: 3.0.10
connecting to: 192.168.56.102:30000/admin
mongos> show dbs
INTEGRATOR_FS 0.078GB
admin 0.016GB
config 0.016GB
mongos>
4. Enable sharding on the database. Command format: db.runCommand( { enablesharding : "<dbname>" } );
Commands:
>use admin;
> db.runCommand( { enablesharding : "INTEGRATOR_FS" } );
where INTEGRATOR_FS is the database name.
5. Shard the collections. Command format:
>db.runCommand( { shardcollection : "<namespace>", key : <shardkeypatternobject> });
Commands:
>db.runCommand( { shardcollection : "INTEGRATOR_FS.EVENT",key : {TransactionId: 1} } )
>db.runCommand( { shardcollection : "INTEGRATOR_FS.EVENT_DOC",key : {TransactionId: 1} } )
>db.runCommand( { shardcollection : "INTEGRATOR_FS.EVENT_DETAIL",key : {EventId: 1} } )
>db.runCommand( { shardcollection : "INTEGRATOR_FS.INBOUND_MSG",key : {TransactionId: 1} } )
>db.runCommand( { shardcollection : "INTEGRATOR_FS.OUTBOUND_MSG",key : {TransactionId: 1} } )
>db.runCommand( { shardcollection : "INTEGRATOR_FS.EVENT_WPA",key : {EventId: 1} } )
>db.runCommand( { shardcollection : "INTEGRATOR_FS.DOC_CONTENT",key : {DocContentId: 1} } )
EVENT, EVENT_DOC, INBOUND_MSG, OUTBOUND_MSG, EVENT_DETAIL, EVENT_WPA, and DOC_CONTENT are the seven collections in INTEGRATOR_FS. EVENT, EVENT_DOC, INBOUND_MSG, and OUTBOUND_MSG are sharded on TransactionId; EVENT_DETAIL and EVENT_WPA on EventId; and DOC_CONTENT on DocContentId.
6. Add indexes
In the cluster environment, each collection's shard key is indexed automatically when the collection is sharded, so only the following two additional indexes are needed:
>use INTEGRATOR_FS
>db.EVENT_DOC.ensureIndex({"DocContentId":1})
>db.EVENT.ensureIndex({"EventId":1})
7. Verify the sharding result
Run the following commands:
>use INTEGRATOR_FS;
>db.EVENT.stats();
If the stats output shows the per-shard distribution for EVENT, the collection was sharded successfully. The other collections can be verified the same way.

Configure the cluster parameter
In the application configuration file system.properties, set the cluster parameter listing the IPs and ports of the three route servers:
common.mongodb.cluster=192.168.56.102:30000,192.168.56.103:30000,192.168.56.104:30000
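Before deploying, the property value can be sanity-checked by splitting it into host:port entries. This is an illustrative snippet, not part of system.properties or the original guide:

```shell
#!/bin/sh
# Split the common.mongodb.cluster value and verify there is one
# host:port entry per route server.
CLUSTER="192.168.56.102:30000,192.168.56.103:30000,192.168.56.104:30000"
count=0
for hp in $(echo "$CLUSTER" | tr ',' ' '); do
    host="${hp%:*}"     # text before the last colon
    port="${hp##*:}"    # text after the last colon
    echo "route server $host, port $port"
    count=$((count + 1))
done
echo "entries: $count"
```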
Set indexes
Commands:
./mongo -port 27017
use INTEGRATOR_FS
db.EVENT.ensureIndex({"TransactionId":1})
db.EVENT.ensureIndex({"EventId":1})
db.EVENT_DOC.ensureIndex({"TransactionId":1})
db.EVENT_DOC.ensureIndex({"DocContentId":1})
db.EVENT_DETAIL.ensureIndex({"EventId":1})
db.DOC_CONTENT.ensureIndex({"DocContentId":1})
db.OUTBOUND_MSG.ensureIndex({"TransactionId":1})
db.INBOUND_MSG.ensureIndex({"TransactionId":1})
Security Authentication (authenticated mode)

MongoDB stores all user information in the system.users collection of the admin database, which holds each user's name, password, and database information. By default MongoDB does not enable authorization: anyone who can connect to the server can access mongod. To enable security authentication, the auth parameter must be changed in the configuration files.

Authentication uses a keyfile: a randomly generated text file that every replica set member must be started with in order to join the set. This secures membership, and starting with a keyfile also enables the auth mechanism automatically.

Switching from non-authenticated mode to authenticated mode involves three tasks:
1. Add the authentication settings to the configuration files.
2. Create the admin administrator and assign its roles.
3. Create the integrator user and assign its roles.

Create users

Log in to a mongos instance and create the admin and integrator users (these are cluster-wide users):
mongos> use admin
mongos> db.createUser(
{
user: "admin",
pwd: "admin",
roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
}
)
Grant additional roles:
mongos> db.grantRolesToUser("admin",[{role:"clusterManager",db:"admin"}])
mongos> db.grantRolesToUser("admin",[{role:"clusterAdmin",db:"admin"}])
Revoke roles (example):
db.revokeRolesFromUser("admin",[{"role":"read",db:"INTEGRATOR_FS"}])
Log in as the admin user to verify:
./mongo 192.168.56.102:30000/admin -u admin -p admin
Create the integrator user in the INTEGRATOR_FS database, granting it the clusterAdmin and readAnyDatabase roles on the admin database and the readWrite role on INTEGRATOR_FS:
mongos> use INTEGRATOR_FS
mongos> db.createUser(
{
"user":"integrator",
"pwd":"in2#St",
"customData":{"system":"sfdrp"},
"roles":
[
{"role":"clusterAdmin",db:"admin"},
{"role":"readAnyDatabase",db:"admin"},
"readWrite"
]
}
)
Verification:
./mongo 192.168.56.102:30000 -u integrator -p in2#St --authenticationDatabase INTEGRATOR_FS

View user information:
mongos> use admin
switched to db admin
mongos> db.system.users.find().pretty()
{
"_id" : "admin.dba",
"user" : "dba",
"db" : "admin",
"credentials" : {
"SCRAM-SHA-1" : {
"iterationCount" : 10000,
"salt" : "osz4IUGL5ouYj/BWPkz2lw==",
"storedKey" : "9ir3w5Ztpl6Id7wzYtOArC1Fa70=",
"serverKey" : "WHR7ue+b29D2GzX/Ea0OoqDdRE8="
}
},
"roles" : [
{
"role" : "userAdminAnyDatabase",
"db" : "admin"
}
]
}
Log in to each of the three shard replica sets and create an admin user on each (these are shard-local users):
mongos> use admin
mongos> db.createUser(
{
user: "admin",
pwd: "admin",
roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
}
)
Grant additional roles:
mongos> db.grantRolesToUser("admin",[{role:"clusterManager",db:"admin"}])
mongos> db.grantRolesToUser("admin",[{role:"clusterAdmin",db:"admin"}])
Create the keyfile

On each node, create the authentication file "security" in the /app/mongodb/mmapv1/key directory and restrict its permissions:
# cd /app/mongodb/mmapv1/key
# openssl rand -base64 741 > security
# chmod 600 security
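The keyfile steps above can be scripted as follows. This is an illustrative sketch: the KEYDIR variable is an addition that defaults to a temporary path so it can be tried without root, whereas the cluster uses /app/mongodb/mmapv1/key; the exact same file must then be copied to every node.

```shell
#!/bin/sh
# Generate the shared keyfile used for internal authentication.
# KEYDIR is a demo variable; the guide uses /app/mongodb/mmapv1/key.
KEYDIR="${KEYDIR:-/tmp/mongo-key-demo}"
mkdir -p "$KEYDIR"
openssl rand -base64 741 > "$KEYDIR/security"
chmod 600 "$KEYDIR/security"   # mongod rejects keyfiles with open permissions
ls -l "$KEYDIR/security"
```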
Configure the config files

Add the authentication settings to all of the configuration files.
# cd /app/mongodb/bin.3.0.10/security
# cat shard11.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard11.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard11
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard11/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /app/mongodb/mmapv1/key/security

#operationProfiling:

replication:
  oplogSizeMB: 100
  replSetName: shard1

sharding:
  clusterRole: shardsvr
# cat shard21.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard21.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard21
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard21/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27018
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /app/mongodb/mmapv1/key/security

replication:
  oplogSizeMB: 100
  replSetName: shard2

sharding:
  clusterRole: shardsvr
# cat configsvr.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/config.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/config
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/config/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 20000
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /app/mongodb/mmapv1/key/security

#operationProfiling:

#replication:
#  oplogSizeMB: 100
#  replSetName: shard1

sharding:
  clusterRole: configsvr
# cat mongos.conf
# mongos.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/
configdb = 192.168.56.102:20000,192.168.56.103:20000,192.168.56.104:20000
port = 30000
chunkSize = 5
logpath = /app/mongodb/mmapv1/logs/mongos.log
logappend = true
fork = true
keyFile = /app/mongodb/mmapv1/key/security
# cat shard12.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard12.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard12
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard12/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /app/mongodb/mmapv1/key/security

#operationProfiling:

replication:
  oplogSizeMB: 100
  replSetName: shard1

sharding:
  clusterRole: shardsvr
# cat shard22.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard22.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard22
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard22/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27018
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /app/mongodb/mmapv1/key/security

replication:
  oplogSizeMB: 100
  replSetName: shard2

sharding:
  clusterRole: shardsvr
# cat shard13.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard13.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard13
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard13/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27017
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /app/mongodb/mmapv1/key/security

#operationProfiling:

replication:
  oplogSizeMB: 100
  replSetName: shard1

sharding:
  clusterRole: shardsvr
# cat shard23.conf
# mongod.conf
# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /app/mongodb/mmapv1/logs/shard23.log

# Where and how to store data.
storage:
  dbPath: /app/mongodb/mmapv1/shard23
  journal:
    enabled: true

# how the process runs
processManagement:
  fork: true  # fork and run in background
  pidFilePath: /app/mongodb/mmapv1/shard23/mongod.pid  # location of pidfile

# network interfaces
net:
  port: 27018
#  bindIp: 127.0.0.1,192.168.56.102,192.168.56.103,192.168.56.104  # Listen to local interface only, comment to listen on all interfaces.

security:
  authorization: enabled
  clusterAuthMode: keyFile
  keyFile: /app/mongodb/mmapv1/key/security

replication:
  oplogSizeMB: 100
  replSetName: shard2

sharding:
  clusterRole: shardsvr
Shut down the cluster (non-authenticated mode)

Shut down the three mongos instances:
# cd /app/mongodb/bin.3.0.10
# ./mongo 192.168.56.102:30000
mongos> use admin
switched to db admin
mongos> db.shutdownServer()
# ./mongo 192.168.56.103:30000
# ./mongo 192.168.56.104:30000
Shut down the three config server instances (same use admin / db.shutdownServer() sequence):
# ./mongo 192.168.56.102:20000
# ./mongo 192.168.56.103:20000
# ./mongo 192.168.56.104:20000
Shut down the three shard replica sets:
# ./mongo 192.168.56.102:27017
# ./mongo 192.168.56.102:27018
......
Verify that no processes remain:
# ps -ef|grep mongo
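The ps check can be made slightly more precise. The bracketed character class below is a common shell trick (an addition for illustration, not in the original guide) that keeps the grep process's own command line out of the results:

```shell
#!/bin/sh
# Count leftover mongod/mongos processes after shutdown; the [m] character
# class matches "mongod"/"mongos" in process listings but not the literal
# pattern string in grep's own command line. Expect 0 on a stopped node.
remaining=$(ps -ef | grep '[m]ongod\|[m]ongos' | wc -l)
echo "remaining mongo processes: $remaining"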
Start the cluster (authenticated mode)

Start the shard replica sets:
Server1:
# cd /app/mongodb/bin.3.0.10/security
# sh shard1.sh
# sh shard2.sh
# sh shard3.sh
Server2:
# cd /app/mongodb/bin.3.0.10/security
# sh shard1.sh
# sh shard2.sh
# sh shard3.sh
Server3:
# cd /app/mongodb/bin.3.0.10/security
# sh shard1.sh
# sh shard2.sh
# sh shard3.sh
Verify the shards:
# cd /app/mongodb/bin.3.0.10
# ./mongo 192.168.56.102:27017/admin -u admin -p admin
# ./mongo 192.168.56.102:27018/admin -u admin -p admin
# ./mongo 192.168.56.102:27019/admin -u admin -p admin
Start the three config server instances:
# cd /app/mongodb/bin.3.0.10/security
# sh configsvr.sh
Start the three mongos instances:
# cd /app/mongodb/bin.3.0.10/security
# sh mongos.sh
Verify the cluster:
# cd /app/mongodb/bin.3.0.10
# ./mongo 192.168.56.102:30000/admin -u admin -p admin
# ./mongo 192.168.56.102:30000/INTEGRATOR_FS -u integrator -p in2#St
mongos> show dbs
INTEGRATOR_FS  0.078GB
admin          0.016GB
config         0.016GB
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("57b3dd77ba564d20da1b56ac")
}
  shards:
        {  "_id" : "s1",  "host" : "shard1/192.168.56.102:27017,192.168.56.103:27017,192.168.56.104:27017" }
        {  "_id" : "s2",  "host" : "shard2/192.168.56.102:27018,192.168.56.103:27018,192.168.56.104:27018" }
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "integrator",  "partitioned" : false,  "primary" : "s1" }
        {  "_id" : "INTEGRATOR_FS",  "partitioned" : true,  "primary" : "s1" }
                INTEGRATOR_FS.DOC_CONTENT
                        shard key: { "DocContentId" : 1 }
                        chunks:
                                s1      1
                        { "DocContentId" : { "$minKey" : 1 } } -->> { "DocContentId" : { "$maxKey" : 1 } } on : s1 Timestamp(1, 0)
                INTEGRATOR_FS.EVENT
                        shard key: { "TransactionId" : 1 }
                        chunks:
                                s1      1
                        { "TransactionId" : { "$minKey" : 1 } } -->> { "TransactionId" : { "$maxKey" : 1 } } on : s1 Timestamp(1, 0)
                INTEGRATOR_FS.EVENT_DETAIL
                        shard key: { "EventId" : 1 }
                        chunks:
                                s1      1
                        { "EventId" : { "$minKey" : 1 } } -->> { "EventId" : { "$maxKey" : 1 } } on : s1 Timestamp(1, 0)
                INTEGRATOR_FS.EVENT_DOC
                        shard key: { "TransactionId" : 1 }
                        chunks:
                                s1      1
                        { "TransactionId" : { "$minKey" : 1 } } -->> { "TransactionId" : { "$maxKey" : 1 } } on : s1 Timestamp(1, 0)
                INTEGRATOR_FS.EVENT_WPA
                        shard key: { "EventId" : 1 }
                        chunks:
                                s1      1
                        { "EventId" : { "$minKey" : 1 } } -->> { "EventId" : { "$maxKey" : 1 } } on : s1 Timestamp(1, 0)
                INTEGRATOR_FS.INBOUND_MSG
                        shard key: { "TransactionId" : 1 }
                        chunks:
                                s1      1
                        { "TransactionId" : { "$minKey" : 1 } } -->> { "TransactionId" : { "$maxKey" : 1 } } on : s1 Timestamp(1, 0)
                INTEGRATOR_FS.OUTBOUND_MSG
                        shard key: { "TransactionId" : 1 }
                        chunks:
                                s1      1
                        { "TransactionId" : { "$minKey" : 1 } } -->> { "TransactionId" : { "$maxKey" : 1 } } on : s1 Timestamp(1, 0)
        {  "_id" : "test",  "partitioned" : false,  "primary" : "s2" }
mongos>
Routine Maintenance

Shut down the cluster (authenticated mode)

Shut down the three mongos instances:
# cd /app/mongodb/bin.3.0.10
# ./mongo 192.168.56.102:30000/admin -u admin -p admin
mongos> use admin
switched to db admin
mongos> db.shutdownServer()
# ./mongo 192.168.56.103:30000/admin -u admin -p admin
# ./mongo 192.168.56.104:30000/admin -u admin -p admin
Shut down the three config server instances:
# ./mongo 192.168.56.102:20000/admin -u admin -p admin
# ./mongo 192.168.56.103:20000/admin -u admin -p admin
# ./mongo 192.168.56.104:20000/admin -u admin -p admin
Shut down the three shard replica sets:
# ./mongo 192.168.56.102:27017/admin -u admin -p admin
# ./mongo 192.168.56.102:27018/admin -u admin -p admin
......
Verify that no processes remain:
# ps -ef|grep mongo
Reference Information

createUser definition

Use the db.createUser() method to create a new database user; if the user already exists, the method returns a duplicate-user error.

Syntax:
db.createUser(user, writeConcern)
user: a document containing the new user's authentication and access information;
writeConcern: a document describing the level of write concern MongoDB uses when reporting the success of the create operation.

The user document has the following form:
{ user: "<name>",
pwd: "<cleartext password>",
customData: { <any information> },
roles: [
{ role: "<role>", db: "<database>" } | "<role>",
...
]
}
Fields of the user document:
user: the name of the new user;
pwd: the user's password;
customData: arbitrary content, for example a full-name description of the user;
roles: the roles granted to the user; an empty array creates the user with no roles.
The roles field may contain both built-in roles and user-defined roles.

Built-In Roles
1. Database user roles: read, readWrite;
2. Database administration roles: dbAdmin, dbOwner, userAdmin;
3. Cluster administration roles: clusterAdmin, clusterManager, clusterMonitor, hostManager;
4. Backup and restore roles: backup, restore;
5. All-database roles: readAnyDatabase, readWriteAnyDatabase, userAdminAnyDatabase, dbAdminAnyDatabase;
6. Superuser role: root;
7. Internal role: __system.
Official documentation for built-in roles: https://docs.mongodb.com/manual/core/security-built-in-roles/