I. Environment Preparation
OS: CentOS 7.7
MongoDB version: 3.6.17
Cluster plan
| Server 1 | Server 2 | Server 3 |
| 172.17.1.30 | 172.17.1.31 | 172.17.1.32 |
| mongos | mongos | mongos |
| config server | config server | config server |
| shard server1 primary | shard server1 secondary | shard server1 arbiter |
| shard server2 secondary | shard server2 arbiter | shard server2 primary |
| shard server3 arbiter | shard server3 primary | shard server3 secondary |
Port allocation
mongos: 20000, config server: 21000, shard1: 27001, shard2: 27002, shard3: 27003
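Once the cluster below is up, clients connect through any of the mongos routers on port 20000, for example:
mongo 172.17.1.30:20000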
II. Installation
1. Download
wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-3.6.17.tgz
tar zxvf mongodb-linux-x86_64-rhel70-3.6.17.tgz
mv mongodb-linux-x86_64-rhel70-3.6.17 /opt/mongodb
2. Create the directories
mkdir -p /data/mongo-cluster/{config,shard1,shard2,shard3}/{data,log}
mkdir -p /data/mongo-cluster/mongos/log
mkdir -p /data/mongo-cluster/{conf,keyfile}
3. Configure environment variables
tee -a /etc/profile <<-'EOF'
# MongoDB
export MONGODB_HOME=/opt/mongodb
export PATH=$MONGODB_HOME/bin:$PATH
EOF
Apply it immediately:
source /etc/profile;
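A quick check that the binaries are now on the PATH:
mongod --version
mongos --version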
4. Create the key file
openssl rand -base64 756 > mongo.key
mv mongo.key /data/mongo-cluster/keyfile
chmod 400 /data/mongo-cluster/keyfile/mongo.key
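Optionally verify the permissions; the key file must be identical on every node and readable only by the user running mongod:
stat -c '%a %U %n' /data/mongo-cluster/keyfile/mongo.key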
5. Config server configuration
tee -a /data/mongo-cluster/conf/config.conf <<-'EOF'
systemLog:
  destination: file
  logAppend: true
  path: /data/mongo-cluster/config/log/config.log
  logRotate: rename
  timeStampFormat: ctime
storage:
  dbPath: /data/mongo-cluster/config/data
  journal:
    enabled: true
    commitIntervalMs: 100
  directoryPerDB: false
  syncPeriodSecs: 60
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 6
      journalCompressor: snappy
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /data/mongo-cluster/config/log/configsrv.pid
net:
  port: 21000
  bindIp: 0.0.0.0
  maxIncomingConnections: 29000
  wireObjectCheck: true
  ipv6: false
#security:
#  authorization: enabled
#  keyFile: /data/mongo-cluster/keyfile/mongo.key
operationProfiling:
  slowOpThresholdMs: 100
  mode: slowOp
replication:
  oplogSizeMB: 2048
  replSetName: configs
sharding:
  clusterRole: configsvr
  archiveMovedChunks: true
EOF
6. Configure the shards
Shard 1
tee -a /data/mongo-cluster/conf/shard1.conf <<-'EOF'
systemLog:
  destination: file
  logAppend: true
  path: /data/mongo-cluster/shard1/log/shard1.log
  logRotate: rename
  timeStampFormat: ctime
storage:
  dbPath: /data/mongo-cluster/shard1/data
  journal:
    enabled: true
    commitIntervalMs: 100
  directoryPerDB: false
  syncPeriodSecs: 60
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 3
      journalCompressor: snappy
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /data/mongo-cluster/shard1/log/shard1.pid
net:
  port: 27001
  bindIp: 0.0.0.0
  maxIncomingConnections: 29000
  wireObjectCheck: true
  ipv6: false
#security:
#  authorization: enabled
#  keyFile: /data/mongo-cluster/keyfile/mongo.key
operationProfiling:
  slowOpThresholdMs: 100
  mode: slowOp
replication:
  oplogSizeMB: 2048
  replSetName: shard1
sharding:
  clusterRole: shardsvr
  archiveMovedChunks: true
EOF
Shard 2
tee -a /data/mongo-cluster/conf/shard2.conf <<-'EOF'
systemLog:
  destination: file
  logAppend: true
  path: /data/mongo-cluster/shard2/log/shard2.log
  logRotate: rename
  timeStampFormat: ctime
storage:
  dbPath: /data/mongo-cluster/shard2/data
  journal:
    enabled: true
    commitIntervalMs: 100
  directoryPerDB: false
  syncPeriodSecs: 60
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 3
      journalCompressor: snappy
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /data/mongo-cluster/shard2/log/shard2.pid
net:
  port: 27002
  bindIp: 0.0.0.0
  maxIncomingConnections: 29000
  wireObjectCheck: true
  ipv6: false
#security:
#  authorization: enabled
#  keyFile: /data/mongo-cluster/keyfile/mongo.key
operationProfiling:
  slowOpThresholdMs: 100
  mode: slowOp
replication:
  oplogSizeMB: 2048
  replSetName: shard2
sharding:
  clusterRole: shardsvr
  archiveMovedChunks: true
EOF
Shard 3
tee -a /data/mongo-cluster/conf/shard3.conf <<-'EOF'
systemLog:
  destination: file
  logAppend: true
  path: /data/mongo-cluster/shard3/log/shard3.log
  logRotate: rename
  timeStampFormat: ctime
storage:
  dbPath: /data/mongo-cluster/shard3/data
  journal:
    enabled: true
    commitIntervalMs: 100
  directoryPerDB: false
  syncPeriodSecs: 60
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 3
      journalCompressor: snappy
    indexConfig:
      prefixCompression: true
processManagement:
  fork: true
  pidFilePath: /data/mongo-cluster/shard3/log/shard3.pid
net:
  port: 27003
  bindIp: 0.0.0.0
  maxIncomingConnections: 29000
  wireObjectCheck: true
  ipv6: false
#security:
#  authorization: enabled
#  keyFile: /data/mongo-cluster/keyfile/mongo.key
operationProfiling:
  slowOpThresholdMs: 100
  mode: slowOp
replication:
  oplogSizeMB: 2048
  replSetName: shard3
sharding:
  clusterRole: shardsvr
  archiveMovedChunks: true
EOF
7. Configure the mongos routers
tee -a /data/mongo-cluster/conf/mongos.conf <<-'EOF'
systemLog:
  destination: file
  logAppend: true
  path: /data/mongo-cluster/mongos/log/mongos.log
processManagement:
  fork: true
  pidFilePath: /data/mongo-cluster/mongos/log/mongos.pid
net:
  port: 20000
  bindIp: 0.0.0.0
  maxIncomingConnections: 50000
#security:
#  keyFile: /data/mongo-cluster/keyfile/mongo.key
sharding:
  configDB: configs/172.17.1.30:21000,172.17.1.31:21000,172.17.1.32:21000
EOF
Copy the directory to the other two servers
scp -r /data/mongo-cluster/ root@ip:/data
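For the layout above, running from 172.17.1.30 this would be, for example:
scp -r /data/mongo-cluster/ root@172.17.1.31:/data
scp -r /data/mongo-cluster/ root@172.17.1.32:/data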
8. Start the config servers and initialize the replica set
Start the config server on all three servers:
mongod -f /data/mongo-cluster/conf/config.conf
Configure the replica set; you can log in from any of the three servers:
mongo 172.17.1.31:21000
config = { _id : "configs", members : [
    { _id : 0, host : "172.17.1.30:21000" },
    { _id : 1, host : "172.17.1.31:21000" },
    { _id : 2, host : "172.17.1.32:21000" }
] }
Initialize the replica set:
rs.initiate(config)
Check the replica set status:
rs.status();
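The same check can be run non-interactively from the OS shell (an optional convenience):
mongo --port 21000 --eval 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })'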
9. Start the shard services and configure their replica sets
Shard 1
mongod -f /data/mongo-cluster/conf/shard1.conf;
Configure the replica set; log in to the primary or secondary node (running this on the arbiter node will fail):
mongo --port 27001
use admin
config = { _id : "shard1", members : [
    { _id : 0, host : "172.17.1.30:27001" },
    { _id : 1, host : "172.17.1.31:27001" },
    { _id : 2, host : "172.17.1.32:27001", arbiterOnly: true }
] }
Initialize the replica set:
rs.initiate(config)
Check the replica set status:
rs.status();
Shard 2
mongod -f /data/mongo-cluster/conf/shard2.conf;
Configure the replica set; log in to the primary or secondary node (running this on the arbiter node will fail):
mongo --port 27002
use admin
config = { _id : "shard2", members : [
    { _id : 0, host : "172.17.1.30:27002" },
    { _id : 1, host : "172.17.1.31:27002", arbiterOnly: true },
    { _id : 2, host : "172.17.1.32:27002" }
] }
Initialize the replica set:
rs.initiate(config)
Check the replica set status:
rs.status();
Shard 3
mongod -f /data/mongo-cluster/conf/shard3.conf;
Configure the replica set; log in to the primary or secondary node (running this on the arbiter node will fail):
mongo 172.17.1.31:27003
use admin
config = { _id : "shard3", members : [
    { _id : 0, host : "172.17.1.30:27003", arbiterOnly: true },
    { _id : 1, host : "172.17.1.31:27003" },
    { _id : 2, host : "172.17.1.32:27003" }
] }
Initialize the replica set:
rs.initiate(config)
Check the replica set status:
rs.status();
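Once all three shard replica sets are initialized, a quick sanity check of every member's state can be run from any of the servers (an optional convenience, not part of the original steps):
for p in 27001 27002 27003; do
  mongo --port $p --eval 'var s = rs.status(); print(s.set); s.members.forEach(function(m){ print("  " + m.name + "  " + m.stateStr) })'
done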
10. Start mongos
Start mongos on all three servers:
mongos -f /data/mongo-cluster/conf/mongos.conf;
11. Enable sharding
mongo --port 20000
use admin
// link the mongos router to the shard replica sets
sh.addShard("shard1/172.17.1.30:27001,172.17.1.31:27001,172.17.1.32:27001")
sh.addShard("shard2/172.17.1.30:27002,172.17.1.31:27002,172.17.1.32:27002")
sh.addShard("shard3/172.17.1.30:27003,172.17.1.31:27003,172.17.1.32:27003")
// check the cluster status
sh.status()
List all shards:
db.runCommand({ listshards : 1})
Enable sharding for a specific database and collection:
db.runCommand( { enablesharding : "history" } )
db.runCommand( { shardcollection : "<database>.<collection>", key : { <shard key> : 1 } } )
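A concrete example using the history database referenced later in this guide; the collection name records and the hashed _id shard key are illustrative assumptions, not from the original:
// hypothetical collection and shard key, for illustration only
sh.enableSharding("history")
sh.shardCollection("history.records", { _id: "hashed" })
sh.status()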
By default MongoDB reads and writes on the primary node and secondaries do not serve reads; to allow reads from a secondary, run the following on it:
db.getMongo().setSlaveOk();
use config
// query the current chunk size
db.settings.find({ "_id" : "chunksize" })
// change the chunk size
db.settings.save( { _id : "chunksize", value : 64 } )
III. Create Users
Add an administrator user:
use admin
db.createUser({
    user: 'admin',
    pwd: 'password',
    roles: [
        { role: 'clusterAdmin', db: 'admin' },
        { role: 'userAdminAnyDatabase', db: 'admin' },
        { role: 'dbAdminAnyDatabase', db: 'admin' },
        { role: 'readWriteAnyDatabase', db: 'admin' }
    ]
})
Add a regular user:
use history
db.createUser({ user: 'history', pwd: 'history', roles: [ { role: 'readWrite', db: 'history' } ] })
To enable authentication, uncomment the security sections in the configuration files:
In config.conf, shard1.conf, shard2.conf and shard3.conf:
security:
  authorization: enabled
  keyFile: /data/mongo-cluster/keyfile/mongo.key

In mongos.conf:
security:
  keyFile: /data/mongo-cluster/keyfile/mongo.key
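After uncommenting these sections, restart every mongod and mongos process, then verify that authentication works with the admin user created above, for example:
mongo --port 20000 -u admin -p password --authenticationDatabase admin --eval 'printjson(db.runCommand({ connectionStatus: 1 }))'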
IV. Set the Balancing Window
1. Confirm the balancer is enabled
use config
sh.getBalancerState()
If it is not enabled, run:
sh.setBalancerState( true )
2. Set the balancing window
db.settings.update( { _id: "balancer" }, { $set: { activeWindow : { start : "<start-time>", stop : "<stop-time>" } } }, { upsert: true } )
For example:
db.settings.update({ _id : "balancer" }, { $set : { activeWindow : { start : "00:00", stop : "4:00" } } }, true )
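To confirm the window was stored, query the settings collection in the config database:
use config
db.settings.find({ _id : "balancer" })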
3. Remove the balancing window
use config
db.settings.update({ _id : "balancer" }, { $unset : { activeWindow : true } })
4. Stop the balancer
sh.stopBalancer();
sh.getBalancerState();
To re-enable it after it has been stopped:
sh.setBalancerState(true)
use config
db.settings.update( { _id: "balancer" }, { $set : { stopped: false } }, { upsert: true } )
V. Service Start/Stop Scripts
tee -a /data/mongo-cluster/start.sh <<-'EOF'
#!/bin/bash
mongod -f /data/mongo-cluster/conf/config.conf
mongod -f /data/mongo-cluster/conf/shard1.conf
mongod -f /data/mongo-cluster/conf/shard2.conf
mongod -f /data/mongo-cluster/conf/shard3.conf
mongos -f /data/mongo-cluster/conf/mongos.conf
ps -efl | grep mongo
netstat -tnlp
EOF
tee -a /data/mongo-cluster/stop.sh <<-'EOF'
#!/bin/bash
killall mongod
killall mongos
EOF
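Make both scripts executable before using them:
chmod +x /data/mongo-cluster/start.sh /data/mongo-cluster/stop.sh
/data/mongo-cluster/start.sh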
If the killall command is not available, install it:
yum install psmisc -y
Optimization: disable transparent huge pages to silence the startup warning:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
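These settings are reset on reboot; one common way to make them persistent on CentOS 7 (an assumption, not part of the original steps) is to append the same commands to rc.local:
tee -a /etc/rc.d/rc.local <<-'EOF'
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
EOF
chmod +x /etc/rc.d/rc.local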
VI. Backup and Restore
Lock the database (run this on the shard servers):
db.fsyncLock()
Check the lock status:
db.currentOp().fsyncLock
Unlock:
db.fsyncUnlock()
Export and import:
nohup mongodump --port 20000 -d history -c history_xx -o /data/bak/ >> output.log 2>&1 &
nohup mongorestore --port 20000 -d history -c history_20xx /data/bak/history/history_xx.bson >> input.log 2>&1 &